_id | text | marker | marker_offsets | label |
---|---|---|---|---|
5975c4e6-887d-4db4-84e8-66f6d71e7cb3 | In recent years, machine learning models have become increasingly popular as methods to approximate potential energy surfaces of molecules.
These models range from classical learning methods such as kernel methods to methods based on deep neural networks.[1]}, [2]}, [3]}, [4]}, [5]}, [6]}, [7]}, [8]}, [9]}, [10]}, [11]}, [12]}, [13]}, [14]}, [15]}, [16]}, [17]}, [18]}, [19]}
A common denominator for these data-driven models is that they require an adequate training set in order to yield predictions of sufficient accuracy.
Informed and rational selection of training data is thus paramount to optimizing the data efficiency of machine learning models.
| [3] | [
[
267,
270
]
] | https://openalex.org/W2025444507 |
e3a400fb-e900-4edb-8a5d-ae44ded3905e | For machine learning models describing potential energy surfaces, two types of data seem particularly convenient as training data: single-point energies and atomic force vectors.
However, it has not yet been fully demonstrated when and to what degree the inclusion of force labels in the training set truly improves the accuracy of the trained model relative to energy labels; the literature even contains somewhat conflicting information on this point.
For example, the GDML and sGDML methods achieve state-of-the-art accuracy in certain cases for the MD17 benchmark dataset, despite training only on force labels, ignoring the energy labels.[1]}, [2]}, [3]}
In contrast, the HIP-NN neural network—when only trained on energy labels—ostensibly achieves similar predictive accuracy to the DTNN, SchNet and PhysNet neural networks, when these are trained on both forces and energy labels for 50K molecules from the MD17 dataset, despite the fact that HIP-NN is trained using far fewer training labels.[4]}, [5]}, [6]}, [7]}
Similarly, the family of ANI datasets has resulted in successful potential energy models for general organic chemistry trained solely on single-point energies of geometries distorted along normal modes, although the corresponding gradient information would not have been much more costly to obtain at the density-functional theory (DFT) level employed.[8]}, [9]}, [10]}, [11]}
| [7] | [
[
1031,
1034
]
] | https://openalex.org/W2923693308 |
a57607e3-3130-4b2e-af66-426f9f940fa7 | While the relevant equations for each regressor can be derived both in the context of kernel ridge regression (KRR), as well as Gaussian process regression (GPR), we present them here in the notation most commonly used for the former.[1]}, [2]}, [3]}, [4]}, [5]}, [6]}, [7]}, [8]}
For more detailed derivations, we refer to the work by Bartók and Csányi,[9]} as well as that of Mathias.[10]}
We further note that the equations for the force-only regressor have recently seen a different derivation in the work on “gradient domain” machine learning (GDML) by Chmiela et al.[5]}
| [5] | [
[
258,
261
],
[
572,
575
]
] | https://openalex.org/W2585152223 |
873be5e5-5a34-4877-91e2-942cd901b46e | While the relevant equations for each regressor can be derived both in the context of kernel ridge regression (KRR), as well as Gaussian process regression (GPR), we present them here in the notation most commonly used for the former.[1]}, [2]}, [3]}, [4]}, [5]}, [6]}, [7]}, [8]}
For more detailed derivations, we refer to the work by Bartók and Csányi,[9]} as well as that of Mathias.[10]}
We further note that the equations for the force-only regressor have recently seen a different derivation in the work on “gradient domain” machine learning (GDML) by Chmiela et al.[5]}
| [6] | [
[
264,
267
]
] | https://openalex.org/W3106310231 |
65a05d90-48f7-4a19-b57d-b7d6dd07b9d5 | To represent the environments of an atom in a molecule, we rely on the computationally more efficient variant of the Faber-Christensen-Huang-Lilienfeld (FCHL) representation[1]}, namely the FCHL19 representation [2]}.
Briefly, this representation is a vector which contains histograms of the radial distributions of atoms and a number of Fourier terms describing angular distributions of atoms in the environment of a certain atom.[3]}
In principle, any continuous representation that generalizes across chemical space could have been used for this purpose.
In addition, we use the localized kernel ansatz in which the kernel elements between two molecules correspond to the pairwise sum over the kernel functions between the respective representations of atomic environments in the two molecules.[4]}, [5]}
This makes it possible to train models that span molecules of varying size and chemical composition.
The following Gaussian kernel function is used throughout:
\(\mathbf {K}_{ij} = \sum _{I \in i} \sum _{J \in j} \delta _{Z_I Z_{J}} \exp \left(-\frac{\Vert \mathbf {q}_I - \mathbf {q}_{J} \Vert ^2_2}{2\sigma ^2}\right) \quad \text{for molecules } i \text{ and } j\)
| [2] | [
[
212,
215
]
] | https://openalex.org/W3003486042 |
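As an illustration of the localized kernel ansatz described above, the following sketch (the function name and toy data are hypothetical; atomic environments are represented as plain NumPy vectors) evaluates a kernel element between two molecules as the pairwise sum of Gaussian kernel functions over atomic environments, with the Kronecker delta restricting pairs to atoms of the same element:

```python
import numpy as np

def local_gaussian_kernel(q_i, z_i, q_j, z_j, sigma=2.0):
    """Localized kernel between molecules i and j: sum over pairs of
    atomic-environment representations, restricted to atoms with the
    same nuclear charge (the Kronecker delta in the formula)."""
    k = 0.0
    for qI, zI in zip(q_i, z_i):
        for qJ, zJ in zip(q_j, z_j):
            if zI == zJ:  # delta_{Z_I Z_J}
                d2 = np.sum((qI - qJ) ** 2)
                k += np.exp(-d2 / (2.0 * sigma ** 2))
    return k

# Toy example: two "molecules" with 2 atoms each and 3-dim representations.
rng = np.random.default_rng(0)
q_a, z_a = rng.normal(size=(2, 3)), [1, 6]   # e.g. H and C environments
q_b, z_b = rng.normal(size=(2, 3)), [1, 8]   # e.g. H and O environments
print(local_gaussian_kernel(q_a, z_a, q_b, z_b))
```

By construction the kernel is symmetric in the two molecules, and the self-kernel of a molecule with all-distinct elements equals its atom count, since only the zero-distance same-element pairs contribute.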
841707ef-03a2-43b3-9629-422a2d51b29c | Also for learning curves using loss functions we find agreement with the power-law behavior that is expected from models trained on function values,[1]}, [2]} here demonstrated for models trained on function gradients.
In Table REF , we present the resulting slopes and offsets of loss-function learning curves for the three types of machines trained using three different sets of training data and corresponding loss functions (rows).
These learning curves are shown graphically in Fig. REF in the Supplementary Information.
| [1] | [
[
148,
151
]
] | https://openalex.org/W2134429390 |
9b85723d-b99b-4458-b67c-ed0c25fff279 | All learning curves were generated using nested cross-validation as implemented in scikit-learn[1]} via the following recipe:
First, the datasets were randomized.
Secondly, the datasets were divided into 100 folds using random subsampling for Himmelblau's function, while datasets consisting of molecules were randomly divided into 5 folds using the KFold class implemented in scikit-learn.
Next, a grid-search with 4-fold cross-validation within the training set extracted from the fold was used to select the optimal choice of the hyperparameters \(\sigma \) (the kernel width) and \(\lambda \) (regularization strength), in order to avoid overfitting.
In order to select hyperparameters that simultaneously work well for both force and energy prediction, the following score function was used:[2]}
\(\mathcal {L} = 0.01 \sum _i \left( U_i - \hat{U}_i \right)^2 + \sum _i \frac{1}{n_i} \Vert \mathbf {F}_i - \mathbf {\hat{F}}_i \Vert ^2\)
| [2] | [
[
815,
818
]
] | https://openalex.org/W2778051509 |
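The recipe above can be sketched in code. The following is a minimal nested-cross-validation skeleton, assuming a toy 1-D energy-only problem with illustrative grid values and fold counts (not the datasets or hyperparameter grids of the paper); the combined energy/force score function from the text is included as a standalone helper:

```python
import numpy as np
from sklearn.kernel_ridge import KernelRidge
from sklearn.model_selection import KFold

def combined_score(U, U_hat, F, F_hat, n_atoms):
    """Score from the text: 0.01 * (energy squared error) plus the
    force squared error normalized per molecule by the atom count n_i."""
    energy_term = 0.01 * np.sum((U - U_hat) ** 2)
    force_term = sum(np.sum((Fi - Fhi) ** 2) / ni
                     for Fi, Fhi, ni in zip(F, F_hat, n_atoms))
    return energy_term + force_term

# Nested CV on a toy 1-D regression problem (energies only, no forces).
X = np.linspace(-3.0, 3.0, 120)[:, None]
y = np.sin(X).ravel()
outer = KFold(n_splits=5, shuffle=True, random_state=0)
inner = KFold(n_splits=4, shuffle=True, random_state=1)
grid = [(g, a) for g in (0.1, 1.0) for a in (1e-8, 1e-4)]  # (width, reg.)

outer_mse = []
for tr, te in outer.split(X):
    Xtr, ytr = X[tr], y[tr]

    def inner_mse(gamma, alpha):
        # Inner 4-fold grid search within the outer training set.
        errs = []
        for itr, ival in inner.split(Xtr):
            m = KernelRidge(kernel="rbf", gamma=gamma, alpha=alpha)
            m.fit(Xtr[itr], ytr[itr])
            errs.append(np.mean((m.predict(Xtr[ival]) - ytr[ival]) ** 2))
        return np.mean(errs)

    gamma, alpha = min(grid, key=lambda p: inner_mse(*p))
    m = KernelRidge(kernel="rbf", gamma=gamma, alpha=alpha).fit(Xtr, ytr)
    outer_mse.append(np.mean((m.predict(X[te]) - y[te]) ** 2))

print(np.mean(outer_mse))
```

Selecting hyperparameters only within each outer training fold keeps the outer test error an unbiased estimate, which is the point of nesting the two loops.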
834add39-64ff-4970-8a83-552a09adcbe1 | For energy predictions throughout large datasets used to fit models for general chemistry, i.e. throughout chemical space, such as, for example, the family of ANI datasets[1]}, [2]}, [3]}, [4]}, it seems more valuable to build the most compact model using a compositionally diverse training set with only single-point energies, rather than “wasting” coefficients by training also on forces for more conformations of the same molecule.
If the goal, however, consists of also modeling forces, such as is typically the case for relaxing geometries throughout chemical compound space, our results indicate that the addition of forces (if acquisition cost is lower than for energies) in the training set is always beneficial.
This conclusion, however, also depends on the requirements for execution speed and the availability of training data: models trained on force labels can be computationally substantially more expensive to train and execute than models trained on the same number of energy labels, since force training often involves the derivative of, for example, a kernel matrix or a neural network.
Consequently, in an application scenario where sufficient energy labels are available, it might be best to train on energy labels only, as this results in numerically less complex models.
Considering the prediction times for kernel-based models, it is also much more computationally expensive to evaluate kernel functions placed on derivatives compared to those placed on scalars.
If the ultimate goal is to have very fast prediction times for kernel-based models, it seems worthwhile to consider the use of kernel-based force models which do not require the evaluation of second-order kernel derivatives.
| [2] | [
[
177,
180
]
] | https://openalex.org/W2541404351 |
08042dbf-245a-4573-9d0c-6109a40fec0e | In the extended SM with the neutrino mass term, the GIM mechanism makes the branching ratio of the charged lepton flavor violating (CLFV) very small due to smallness of the mass of the neutrino comparing to the mass of the heaviest particle in the loop. Experimentally, the CLFV decays have never been observed yet but there are many models beyond the SM predict sizable rates up to the current experimental bounds. Here, we investigate the \(\ell _i \rightarrow \ell _j\ell _j\ell _j\) decay and \(Z\rightarrow \ell _f^+\ell _i^- \) decays in the extension of the SM with dimension 6 operators.The full list of operators of dimension 5 and 6 which can be
constructed out of SM fields is given in [1]}.
| [1] | [
[
699,
702
]
] | https://openalex.org/W1484222701 |
d93a7e3c-425f-42a5-9878-fd281dbb636f | This year a possible discovery of anti-stars in the Galaxy was reported [1]}.
Quoting the authors:
“We identify in the catalog 14 antistar candidates not associated with any objects belonging to established
gamma-ray source classes and with a spectrum compatible with baryon-antibaryon annihilation”
with characteristic energies of several hundred GeV.
This sensational statement nicely fits the prediction of refs. [2]}, [3]}.
| [2] | [
[
416,
419
]
] | https://openalex.org/W2080096802 |
d597afc1-8320-436c-b94e-7942ed2d41ca | This year a possible discovery of anti-stars in the Galaxy was reported [1]}.
Quoting the authors:
“We identify in the catalog 14 antistar candidates not associated with any objects belonging to established
gamma-ray source classes and with a spectrum compatible with baryon-antibaryon annihilation”
with characteristic energies of several hundred GeV.
This sensational statement nicely fits the prediction of refs. [2]}, [3]}.
| [3] | [
[
422,
425
]
] | https://openalex.org/W1978545637 |
c3a2cefe-7392-486b-a869-7828f8038688 | It is clear that if \(\beta \rightarrow +\infty \) , then \(\mathcal {S}(\alpha ,\beta )\rightarrow \mathcal {S}^*(\alpha )\) (the class of starlike functions of order \(\alpha \) , where \(0\le \alpha <1\) ). Thus
we have the following result (see [1]}).
| [1] | [
[
250,
253
]
] | https://openalex.org/W2581925993 |
3992adf2-a61f-46db-9da0-4b416d321079 | Let \(f\in \mathcal {M}(\delta )\) . Then by Theorem REF , we have
\(\log \left\lbrace \frac{f(z)}{z}\right\rbrace &\prec \mathcal {\widetilde{B}}_\delta (z)\quad (z\in \Delta ).\)
By using (REF ) and (REF ), the relation (REF ) implies that
\(\sum _{n=1}^{\infty }2\gamma _n z^n\prec \sum _{n=1}^{\infty }\frac{A_n}{n}z^n\quad (z\in \Delta ).\)
Now by Rogosinski's theorem (see [1]} or [2]}), we get
\(4\sum _{n=1}^{\infty }|\gamma _n|^2 &\le \sum _{n=1}^{\infty }\frac{1}{n^2}|A_n|^2\\&=\frac{1}{\sin ^2 \delta }\sum _{n=1}^{\infty } \frac{\sin ^2 n\delta }{n^4}\\&=\frac{1}{\sin ^2 \delta }\cdot \frac{1}{180}\left[\pi ^4-45Li_4\left(e^{-2i\delta }\right)-45Li_4\left(e^{2i\delta }\right)\right],\)
where \(Li_4\) is defined by (REF ). Therefore the desired inequality (REF ) follows. For the sharpness of (REF ), consider
\(F_\delta (z)=z\exp \mathcal {\widetilde{B}}_\delta (z),\)
where \(\mathcal {\widetilde{B}}_\delta (z)\) is defined by (REF ). It is easy to see that \(F_\delta (z)\in \mathcal {M}(\delta )\) and \(\gamma _n(F_\delta )=A_n/2n\) , where \(A_n\) is given by (REF ). Therefore, we have equality in (REF ). This completes the proof.
| [1] | [
[
383,
386
]
] | https://openalex.org/W1973856482 |
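As a quick numerical sanity check (an illustration, not part of the proof), the closed form used in the chain above,
sum over n of sin^2(n*delta)/n^4 = (1/180)[pi^4 - 45 Li_4(e^{-2i delta}) - 45 Li_4(e^{2i delta})],
can be verified by evaluating Li_4 through its defining series, which converges on the unit circle:

```python
import numpy as np

def li4(z, terms=20000):
    """Polylogarithm Li_4 by its defining series sum z^n / n^4."""
    n = np.arange(1, terms + 1)
    return np.sum(z ** n / n ** 4.0)

delta = 0.5  # arbitrary test value
n = np.arange(1, 20001)
lhs = np.sum(np.sin(n * delta) ** 2 / n ** 4.0)
rhs = (np.pi ** 4
       - 45 * li4(np.exp(-2j * delta))
       - 45 * li4(np.exp(2j * delta))).real / 180.0
print(abs(lhs - rhs) < 1e-10)
```

The identity follows from sin^2(n*delta) = (1 - cos(2n*delta))/2 together with sum 1/n^4 = pi^4/90 and Re Li_4(e^{2i delta}) = sum cos(2n*delta)/n^4.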
912bbbf5-daa4-4628-8791-de880cad5879 | The early theory for such a kind of particle is Klein-Gordon theory [1]}-[2]}, that suffered serious problems.
A first problem is that the wave equation of Klein-Gordon theory is second order in time, while according to the general laws of quantum theory it should be first order.
| [1] | [
[
68,
71
]
] | https://openalex.org/W4235885538 |
3b73eb4e-7eb1-4c9f-be4a-321290c27b57 |
A Fuchsian group \(\Gamma \) is said to be “non-elementary” if it is generated by more than one element. Conjecture REF fails for elementary groups \(\Gamma = \langle \gamma \rangle \) . For elementary groups the quotient \( \langle \gamma \rangle \backslash \mathbb {H}^2\) is a hyperbolic cylinder, in which case the resonances are given by an explicit formula [1]}. It is clear from this formula that there is no spectral gap.
For every finite-index, normal subgroup \(\Gamma ^{\prime }\) of \(\Gamma \) we have \(\Lambda _{\Gamma ^{\prime }}=\Lambda _\Gamma \) . In particular, for any given family \(\Gamma _n\) satisfying the assumptions of Conjecture REF , we have \(\delta _{\Gamma _n} = \delta _\Gamma \) and the point \(s=\delta _\Gamma \) is a common simple resonance of the surfaces \(X_n\) and therefore also a common simple zero of the Selberg zeta function \(Z_{\Gamma _n}(s).\) Note also that every zero of \(Z_{\Gamma }(s)\) is a zero of \(Z_{\Gamma _n}(s)\) . This follows directly from the Venkov–Zograf factorization formula in §REF .
For \(\delta _\Gamma > \frac{1}{2}\) the statement in Conjecture REF follows from [2]}, since each resonance \(s\) in the half-plane \(\mathrm {Re}(s)>\frac{1}{2}\) gives an \(L^2\) -eigenvalue \(\lambda =s(1-s)\) of the Laplacian. On the other hand, if \(\delta _\Gamma \le \frac{1}{2}\) then \(X=\Gamma \backslash \mathbb {H}^2\) must be convex cocompact by the results of Beardon [3]}, [4]}. Hence the case \(\delta _\Gamma \le \frac{1}{2}\) comes under the purview of Conjecture REF .
We finally point out that the methods used by Brooks [5]}, Burger [6]}, [7]} and Bourgain–Gamburd–Sarnak [2]} rely solely on \(L^2\) -methods. These methods are no longer available in the range \(\delta _\Gamma \le \frac{1}{2}.\)
| [1] | [
[
367,
370
]
] | https://openalex.org/W593487728 |
54eeb737-a6dd-4d23-afe2-b86f2b15bc33 | Identities such as () are well-known in thermodynamic formalism, a subject going back to Ruelle [1]}, at least in the case where \(\rho \) is the trivial one-dimensional representation. The relation between the Selberg zeta function and transfer operators has been studied by a number of different authors. For the convex cocompact setting (no cusps) we refer to [2]}, [3]}, [4]}. The extension to non-trivial twists \(\rho \) can be found in the more recent papers [5]}, [6]}, [7]}, [8]}. A proof of Proposition REF can be found in [9]}.
| [8] | [
[
486,
489
]
] | https://openalex.org/W3038057161 |
49861765-ae40-4e21-958a-c8b91ac23915 | as claimed. If, on the other hand, \(G\) has order less than \(K\) , we use the following argument: there are only finitely many coverings of \(X=\Gamma \backslash \mathbb {H}^2\) of degree less than \(K\) , up to isometry. Invoking Naud's result [1]} gives for each of these coverings a positive “spectral gap”. Letting \(\eta _0>0\) denote the smallest of these spectral gaps (of which there are only finitely many), we can replace the value \(\eta \) obtained above by \(\min \lbrace \eta _0,\eta \rbrace \) to obtain the same conclusion for every size of \(G\) . This concludes the proof of Corollary REF in the case \(r=1.\)
| [1] | [
[
249,
252
]
] | https://openalex.org/W2013872780 |
90a6781d-6429-4e55-9b5c-e030f3ca06ae | where \(\mu \) ranges over the set of \(T\) -invariant probability measures and \(h_\mu (T)\) is the measure theoretic entropy. We refer the reader to [1]} for general facts on topological pressure and thermodynamical formalism. More important for us is Bowen's celebrated result [2]} which says that the map
\(\mathbb {R}\rightarrow \mathbb {R}, \quad \sigma \mapsto P(\sigma ) \stackrel{\text{def}}{=}P(-\sigma \log \vert T^{\prime } \vert )\)
| [1] | [
[
153,
156
]
] | https://openalex.org/W1597540527 |
a89d58bd-28c8-4410-b526-d0db2398a60d | The convergence of the alignment methods and knowledge distillation:
Even though alignment approaches are widely studied in the representation learning literature, knowledge distillation (KD) has emerged as one of the most promising among them. The KD approach demonstrated superior performance via knowledge transfer using the outcome distribution (i.e., soft targets) from the teacher to student models [1]}. The alignment of outcome distributions using the Kullback–Leibler (KL) divergence helps the student model imitate and learn from a teacher model or ensemble of models. KD was first introduced in [2]}, [3]} and then developed by [1]} to compress a smaller learning model from a large model. In KD, the soft targets \(z_t\) and \(z_s\) are the softmax outputs of the teacher and student model, respectively. The loss function of the student model between the prediction \(z_s\) and ground-truth label \(y_s\) is the cross-entropy loss denoted as \(\ell _{CE}\) . In addition, the distillation loss using the KL divergence is defined as
\(\ell _{KD}=\tau ^2 KL(z_t, z_s),\)
| [1] | [
[
419,
422
],
[
648,
651
]
] | https://openalex.org/W1821462560 |
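The distillation loss above can be sketched in a few lines. This is a minimal NumPy illustration (the function names and logits are hypothetical; in practice the loss would be computed over mini-batches in a deep learning framework), with the temperature-softened softmax applied to both teacher and student outputs:

```python
import numpy as np

def softmax(logits, tau=1.0):
    """Temperature-softened softmax."""
    z = logits / tau
    z = z - z.max()          # numerical stability
    e = np.exp(z)
    return e / e.sum()

def kd_loss(teacher_logits, student_logits, tau=2.0):
    """Distillation loss tau^2 * KL(p_t || p_s) between the
    temperature-softened teacher and student output distributions."""
    p_t = softmax(teacher_logits, tau)
    p_s = softmax(student_logits, tau)
    return tau ** 2 * np.sum(p_t * (np.log(p_t) - np.log(p_s)))

t = np.array([3.0, 1.0, 0.2])
print(kd_loss(t, t))                 # identical outputs -> 0.0
print(kd_loss(t, np.zeros(3)) > 0)   # mismatch -> positive loss
```

The factor tau^2 compensates for the 1/tau^2 scaling of the soft-target gradients, keeping the distillation term comparable in magnitude to the cross-entropy term.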
137cc0de-0d04-4d0d-94a8-e347bd05b6eb | In this section, we validate the efficacy of the \(\textsf {CDKT-FL}\) algorithm with the MNIST [1]}, Fashion-MNIST [2]}, CIFAR-10, and CIFAR-100 [3]} datasets for handwritten digits, fashion images, and objects recognition, respectively. We conduct the experiments with 10 selected clients in each round, where each client has median numbers of data samples at 63, \(70.5\) , \(966.5\) , and \(1163.5\) with MNIST, Fashion-MNIST, CIFAR-10, and CIFAR-100 respectively. To simulate the properties of non-i.i.d data distribution, we set each client's data from random 20 classes in 100 classes for CIFAR-100 dataset and only two classes in total 10 classes with the other datasets, and use \(20\%\) of private data samples for evaluating the testing performance. Thereby, the private datasets are unbalanced and have a small number of training samples. Further, the small proxy datasets have all classes data with, respectively, 355, 330, 4200, and 4200 samples for MNIST, Fashion-MNIST, and CIFAR-10 datasets. The global CNN model in the server consists of two convolution layers in MNIST and Fashion-MNIST dataset, whereas three convolution layers are used in the CIFAR-10 and CIFAR-100 dataset, followed by the pooling and fully connected layers. The client CNN models have a similar design or one CNN layer fewer than the global model one in the heterogeneous model setting. The learning parameters such as learning rates, trade-off parameters are tuned to achieve good results for different settings of algorithms; the number of local epochs \(K = 2\) , global epochs \(R = 2\) , and the batch size is 20.
| [1] | [
[
97,
100
]
] | https://openalex.org/W2112796928 |
8416842b-e5a7-4ae8-954e-34ae8c86d8b7 | In [1]}, Ferrario and Portaluri studied central configurations with this dihedral symmetry.
In Section 2 of [2]}, Fusco et al. obtained a new periodic solution in the case of \(n=4\) by applying a topological constraint; our result here is a generalization of theirs. Obviously, the trajectories \(u(t)\) are uniquely determined by the trajectory \(u_0(t)\) , and in [2]}, \(u_0(t)\) is called the generating particle of the motion.
Also we need some other symmetric constraints on the loop of \(u_0\) :
\(\left\lbrace \begin{matrix}\begin{aligned}u_0(t)&=\hat{R}_0u_0(-t),\\u_0(t)&=\hat{R}_su_0(\frac{T}{h}-t).\end{aligned}\end{matrix}\right.\)
| [2] | [
[
108,
111
],
[
367,
370
]
] | https://openalex.org/W2020455417 |
2399b5e5-455d-47fe-91c9-1c7efd0c159e | Remark 1.4
By the condition (REF ), we see that \(\mathbb {I}=[0,\frac{T}{2h}]\) is a fundamental domain of dihedral type for the trajectories (see [1]} for more details), which implies that the motion of the particles over the whole period \([0,T]\) is determined by their motion on \(\mathbb {I}=[0,\frac{T}{2h}]\) through the symmetry conditions. And from (REF ) we see that \(u_0\) is the generating particle, so in Section we need only consider the motion of the generating particle \(u_0\) on the interval \(\mathbb {I}=[0,\frac{T}{2h}]\) .
| [1] | [
[
150,
153
]
] | https://openalex.org/W3098925088 |
c9f0fc29-343c-4d45-b93a-ef55a1f5f52d | This paper is organized as follows. In Section , we quickly review superamplitudes and their celestial counterparts. We then discuss dual conformal symmetry of momentum space amplitudes in Section REF and present its celestial counterpart in Section REF . In Section REF , we derive momentum space generalizations of the differential equations found in [1]} by connecting them to the behaviour of amplitudes under BCFW shifts and in Section REF , we provide physical interpretations for the hypergeometric equations satisfied by the celestial MHV tree-level amplitudes. Finally, in Section REF , we discuss the relation between these differential equations.
| [1] | [
[
354,
357
]
] | https://openalex.org/W3206858808 |
34176b57-ff08-460e-a21f-3954fe05548f |
\(\tilde{\mathcal {M}}_n\) has already been computed in [1]}. We will revisit this in Section REF in greater detail.
| [1] | [
[
58,
61
]
] | https://openalex.org/W2770349012 |
e86a519c-dbd1-477b-9992-83f16144cd5b | Dual superconformal symmetry is ordinary superconformal symmetry in variables \((x_i, \theta _i)\) . The amplitudes are covariant under this symmetry and the generators do not annihilate them. However, we can modify these generators such that they annihilate the amplitudes and express them in terms of \(\lambda _i, \, \tilde{\lambda }_i\) , see [1]}, [2]}
for more details. All the generators except \(K^{\alpha \dot{\alpha }}\) and \(S_{\alpha }^A\) either act trivially on the amplitude or are equal to one of their conformal counterparts. Hence, we only present expressions for \(K^{\alpha \dot{\alpha }}\) and \(S_{\alpha }^A\) here.
| [1] | [
[
347,
350
]
] | https://openalex.org/W4300491943 |
81752ad9-25ea-4567-aaa6-b2bc8ff4415d | Symmetries have played a pivotal role in determining scattering amplitudes in \(\mathcal {N}=4\) Yang-Mills. It is conceivable that understanding these symmetries on the celestial sphere will lead to some insight about the putative celestial conformal field theory governing these amplitudes.
Several results have already been obtained along these lines for Poincaré [1]}, conformal and superconformal symmetries [2]}, [3]}, [4]}, [5]}. Here we take the first steps towards an understanding of the implications of celestial dual superconformal symmetry by obtaining the form of the generators on the celestial sphere.
| [4] | [
[
426,
429
]
] | https://openalex.org/W3165753469 |
7aa2c91e-18c3-4ab7-98d1-159bbb6a4fdb | The Aomoto-Gelfand hypergeometric functions [1]} are associated to Grassmannians. In this section, we will make this connection explicit and derive the differential equations associated to them. For a quick introduction, see [2]} from which most of this section is adopted (with cosmetic changes).
| [1] | [
[
44,
47
]
] | https://openalex.org/W31208289 |
79a814a7-f226-48cf-afd3-bf2688993018 | Many mathematical aspects of spacetime field theory reappear in the worldline formalism in a one-dimensional setting,
such as worldline supersymmetry [1]}, [2]}, [3]}, [4]},
worldline instantons [5]}, [6]}, regularisation dependence of UV counterterms (summarized in [7]}),
and Faddeev–Popov determinants induced by gauge fixing [8]}.
| [7] | [
[
267,
270
]
] | https://openalex.org/W1562682436 |
b399916a-1d1d-46bb-ab8f-857be0dbd096 | However, it was only in the nineties, after the development of string theory, which demonstrated the
mathematical beauty and computational usefulness of first-quantized path integrals,
that Feynman's worldline path-integral formalism was finally taken seriously as a
competitor for Feynman diagrams. In this “string-inspired” approach to the worldline formalism
[1]}, [2]}, [3]}, [4]}, a central role is played by Gaussian path integration
using “worldline Green's functions”. In QED, the only worldline correlators required are
\(G(\tau _1,\tau _2)\) , \(G_F(\tau _1,\tau _2)\) ,
obeying
\(\langle x^{\mu }(\tau _1)x^{\nu }(\tau _2) \rangle =-G(\tau _1,\tau _2) \delta ^{\mu \nu }, \qquad G(\tau _1,\tau _2) = \vert \tau _1 -\tau _2\vert - \frac{1}{T}\bigl (\tau _1 -\tau _2\bigr )^2,\nonumber \\\langle \psi ^{\mu }(\tau _1)\psi ^{\nu }(\tau _2)\rangle ={1\over 2}G_F(\tau _1,\tau _2) \delta ^{\mu \nu } , \qquad G_F(\tau _1,\tau _2) = {\rm sgn}(\tau _1 - \tau _2)\)
| [4] | [
[
380,
383
]
] | https://openalex.org/W2045978557 |
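A quick numerical check (with an arbitrary illustrative loop length T) of the bosonic worldline Green's function quoted above, G(t1, t2) = |t1 - t2| - (t1 - t2)^2 / T: it is symmetric in its arguments and vanishes both at coincident points and when the two proper-time arguments sit at opposite ends of the loop, reflecting its periodicity.

```python
# Worldline Green's function on a loop of proper-time length T.
T = 3.0
G = lambda t1, t2: abs(t1 - t2) - (t1 - t2) ** 2 / T

print(G(1.0, 1.0))   # coincident points -> 0.0
print(G(0.0, T))     # loop endpoints identified -> 0.0
print(G(0.7, 2.1) == G(2.1, 0.7))   # symmetric in its arguments
```

The same symmetry and periodicity properties underlie the integration-by-parts manipulations used in the string-inspired formalism.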
69846076-310c-4096-aafd-bc9d8e4418db |
It provides a highly compact generating function for the \(N\) -photon amplitudes in scalar QED, valid off-shell and on-shell.
It represents the sum of the corresponding Feynman diagrams including all non-equivalent orderings of the photons
along the loop.
Bern and Kosower in their seminal work [1]}, [2]} found a set of rules that allows one to obtain from this master formula, by a pure pattern-matching procedure, the corresponding amplitudes with a spinor loop, as well as the \(N\) -gluon amplitudes with a scalar, spinor or gluon loop.
In this formalism, the worldline Lagrangian contains only a linear coupling to the background field, corresponding to
a cubic vertex in field theory. The quartic seagull vertex arises only at the path-integration stage, and is represented
by the \(\delta (\tau _i - \tau _j)\) contained in \(\ddot{G}_{ij}\) , equation (REF ).
All the \(\ddot{G}_{ij}\) can be removed by a systematic integration-by-parts procedure, which homogenizes the integrand
and at the same time leads to the appearance of photon field strength tensors
\(f_i^{\mu \nu } \equiv k_i^{\mu }\varepsilon _i^{\nu } - \varepsilon _i^{\mu }k_i^{\nu }\) ,
as was noted by Strassler [3]}.
| [2] | [
[
305,
308
]
] | https://openalex.org/W1984146602 |
e3b4cd44-c7a9-4252-b9ea-85e36ef278aa | The model can be solved in closed form for both, number-conserving elastic collisions with time-dependent chemical potential \(\mu (t)\le 0\) , and number-changing inelastic collisions with \(\mu =0\) that correspond to merging or splitting of gluons. Splitting moves particles towards lower energy, whereas merging moves them towards higher energies. The most important inelastic collisions are those where an extra gluon is produced or absorbed. For fixed chemical potential, the NBDE-solutions have a time-varying particle content until their stationary Bose-Einstein limit is reached at \(t\rightarrow \infty \) .
Hence, the solutions with the proper density of states are suited to account for inelastic collisions.
At \(p=\mu =0\) , equilibrium is reached instantaneously through inelastic collisions as a consequence of the boundary conditions. In overpopulated systems [1]}, the average number density decreases through gluon merging, whereas for underpopulation it increases via splitting until equilibrium is reached. Thermalization in non-Abelian systems at \(p=0\) – and to a certain extent also in the low-momentum region \(p<Q_\mathrm {s}\) – is therefore mostly due to inelastic collisions, and the model calculations that are proposed in this work concentrate on this case for both over- and underpopulated situations.
| [1] | [
[
878,
881
]
] | https://openalex.org/W1524598366 |
f611259f-e340-408c-829d-215844d33d71 | For a more realistic description, one has to consider the interactions that mediate equilibration, and the inherent nonlinearity of the system.
A corresponding nonlinear partial differential equation
for the single-particle occupation probability distributions \(n\equiv n\,(p,t)\) had been derived
from the quantum Boltzmann collision term in ref. [1]} as
\(\frac{\partial n}{\partial t}=-\frac{\partial }{\partial {p}}\left[v\, n\,(1\pm n)+n\frac{\partial D}{\partial p}\right]+\frac{\partial ^2}{\partial {p}^2}\bigl [D\,n\,\bigr ]\)
| [1] | [
[
350,
353
]
] | https://openalex.org/W2772883043 |
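The transport equation above can be illustrated with a minimal explicit finite-difference sketch. Here the drift v and diffusion coefficient D are taken constant (so that the dD/dp term vanishes), and the grid, time step, and theta-function initial profile are illustrative assumptions, not the transport coefficients of the cited work:

```python
import numpy as np

def step(n, p, v, D, dt):
    """One explicit Euler step of
    dn/dt = -d/dp [ v n (1+n) ] + D d^2n/dp^2  (constant v, D)."""
    dp = p[1] - p[0]
    flux = v * n * (1.0 + n)                       # bosonic drift term
    dflux = np.gradient(flux, dp)                  # d/dp of the drift flux
    d2Dn = np.gradient(np.gradient(D * n, dp), dp) # d^2/dp^2 [ D n ]
    return np.clip(n + dt * (-dflux + d2Dn), 0.0, None)

p = np.linspace(0.0, 2.0, 201)
n = np.where(p < 1.0, 0.5, 0.0)   # theta-function initial occupation
for _ in range(200):
    n = step(n, p, v=-0.1, D=0.01, dt=1e-4)
print(n.max())
```

The negative drift transports occupation towards low momenta while the diffusion term smooths the initial step, qualitatively reproducing the evolution towards the low-momentum region discussed in the text.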
efd3c104-6247-471a-a447-05e5c18e5875 | with an average initial occupation \(N_\mathrm {i}\) . If the chemical potential vanishes as in case of inelastic collisions, the equilibrium temperature \(T\) and the initial occupation were found to be related
in ref. [1]}
as \(T=[15N_\mathrm {i}/(4\pi ^4)]^{1/4}Q_\mathrm {s}\) , yielding \(T=600\) MeV
for \(Q_\mathrm {s}=1\) GeV\(/c\) and \(N_\mathrm {i}\simeq 3.37\) assuming conserved energy density. This corresponds to an overpopulated system where the total particle number decreases during the time evolution, essentially through gluon merging via inelastic collisions [2]}, [3]}. For a theta-function initial distribution, the boundary between under- and overpopulated systems is at \(N_\mathrm {i}^c\simeq 0.154\) [1]}, and the NBDE-solutions indeed fulfil particle-number conservation for this critical value of \(N_\mathrm {i}\) .
| [1] | [
[
221,
224
],
[
733,
736
]
] | https://openalex.org/W1964891266 |
ce2e8d4d-9f72-4c39-ab28-532edef5fdd0 | Collective light scattering [1]}, a cooperative emission process inducing directional scattered atoms, has been observed in various atomic systems ranging from thermal atoms [2]}, [3]}, degenerate Bose gases [4]}, [5]}, [6]}, [7]}, [8]}, [9]}, [10]}, [11]}, [12]}, free fermions [13]} to atoms coupled to the cavity mode [14]}, [15]}, [16]}.
Among them, a Bose-Einstein condensate (BEC) has served as a promising platform for exploring a matter-wave superradiant process owing to its unique coherence property with [4]}, [5]}, [6]}, [7]}, [8]}, [9]}, [10]}, [11]}, [12]} and without external light fields [26]}, [27]}, [28]}, [29]}.
When the external light shines on atoms in the condensate, collective scattering of light creates a quasiparticle in the form of recoiling atoms that interfere with condensate atoms at rest, leading to the generation of a matter-wave grating that is further enhanced by subsequent light scattering. So far, however, it has been assumed in previous studies that the condensate is phase-coherent and that the superradiant gain of the process depends only on external parameters such as the sample size and geometry. Furthermore, previous studies have remained in a weakly interacting regime, and neither the effect of strong s-wave interactions nor anisotropic dipolar interactions [30]}, [31]}, [32]} has been addressed [33]}.
| [4] | [
[
208,
211
],
[
515,
518
]
] | https://openalex.org/W4243381436 |
1c332980-4f28-46f7-b1b5-8233d2d11a07 | Collective light scattering [1]}, a cooperative emission process inducing directional scattered atoms, has been observed in various atomic systems ranging from thermal atoms [2]}, [3]}, degenerate Bose gases [4]}, [5]}, [6]}, [7]}, [8]}, [9]}, [10]}, [11]}, [12]}, free fermions [13]} to atoms coupled to the cavity mode [14]}, [15]}, [16]}.
Among them, a Bose-Einstein condensate (BEC) has served as a promising platform for exploring a matter-wave superradiant process owing to its unique coherence property with [4]}, [5]}, [6]}, [7]}, [8]}, [9]}, [10]}, [11]}, [12]} and without external light fields [26]}, [27]}, [28]}, [29]}.
When the external light shines on atoms in the condensate, collective scattering of light creates a quasiparticle in the form of recoiling atoms that interfere with condensate atoms at rest, leading to the generation of a matter-wave grating that is further enhanced by subsequent light scattering. So far, however, it has been assumed in previous studies that the condensate is phase-coherent and that the superradiant gain of the process depends only on external parameters such as the sample size and geometry. Furthermore, previous studies have remained in a weakly interacting regime, and neither the effect of strong s-wave interactions nor anisotropic dipolar interactions [30]}, [31]}, [32]} has been addressed [33]}.
| [15] | [[328, 332]] | https://openalex.org/W4254936722 |
956c4d8f-ed1a-40be-9db8-6a59ed2da0c6 | In the case where the matrix-sequence is a Toeplitz matrix-sequence generated by a function, the singular value distribution and the spectral distribution have been well studied in the past few decades. At the beginning Szegő in [1]} showed that the eigenvalues of the Toeplitz matrix \(T_n(f)\) generated by real-valued \(f\in L^{\infty }([-\pi ,\pi ])\) are asymptotically distributed as \(\) . Moreover, under the same assumption on \(f\) , Avram and Parter [2]}, [3]} proved that the singular values of \(T_n(f)\) are distributed as \(||\) . This result has been undergone many generalizations and extensions among the years (see [4]}, [5]}, [6]}, [7]} and the references therein).
| [6] | [[649, 652]] | https://openalex.org/W4242832160 |
df704d92-d65c-4db5-ab1f-f7ce18586afe | In the case where the matrix-sequence is a Toeplitz matrix-sequence generated by a function, the singular value distribution and the spectral distribution have been well studied in the past few decades. At the beginning Szegő in [1]} showed that the eigenvalues of the Toeplitz matrix \(T_n(f)\) generated by real-valued \(f\in L^{\infty }([-\pi ,\pi ])\) are asymptotically distributed as \(\) . Moreover, under the same assumption on \(f\) , Avram and Parter [2]}, [3]} proved that the singular values of \(T_n(f)\) are distributed as \(||\) . This result has been undergone many generalizations and extensions among the years (see [4]}, [5]}, [6]}, [7]} and the references therein).
| [7] | [[655, 658]] | https://openalex.org/W2976817015 |
9addcb01-9b80-4aff-a68a-96cdf8e5c69f | In this paper, we prove a higher weight analog of the
general Gross-Zagier formula of Yuan, S. Zhang and W. Zhang [1]} on Kuga-Sato varieties over the modular curve \(X(N)\) .
| [1] | [[114, 117]] | https://openalex.org/W656450974 |
092b17e9-a5fb-4d79-8500-7851fbc29b81 | We explain the reason to only work with Kuga-Sato varieties over modular curves, and outline some possible future works over quaternionic Shimura curves.
In our comparison between \(H(f)\) and the value of the Whittaker function, we first impose some weak regularity condition on the test function \(f\) to avoid self-intersection. Then we reinterpret S. Zhang's comparison [1]} to
cover certain test functions without the regularity condition. Finally, we use the action by the adelic group of \({\mathbb {A}}_K^{\infty ,\times }\) to generate all test functions.
Our comparison under the regularity condition is local and can be generalized to Shimura curves with some more effort. However, there is no analog of S. Zhang's work for general Shimura curves yet.
For Shimura curves over \({\mathbb {Q}}\) ,
Wen [2]} is working on the analog of S. Zhang's work.
After Wen's work, we should be able to generalize our results to Shimura curves over \({\mathbb {Q}}\) .
We hope to return to Shimura curves over totally real fields in the future.
| [1] | [[376, 379]] | https://openalex.org/W1985684082 |
d667cffc-0ac4-48fb-a623-7d8750b01598 | Remark 4.2.10 (1) Formally, the above holomorphic projection procedure is the same as [1]} [2]} which are in the weight 2 case, modulo the convergence issue in loc. cit..
| [2] | [[92, 95]] | https://openalex.org/W2963612230 |
072ca155-5251-4180-a675-47db0f24fce1 | Actually Lemma REF is a consequence of the following formula whose proof can be found in [1]}: if \(\Omega \) is the disk of radius \(R\) (centered at 0), then for \(n \ne 0\)
\(\mathcal {S}_{\partial \Omega } \left[\frac{1}{R} e^{in\theta } \right](r, \theta ) = {\left\lbrace \begin{array}{ll}\displaystyle -\frac{1}{2|n|} \left(\frac{r}{R}\right)^{|n|} e^{in\theta } \quad &\text{if } r \le R, \\\displaystyle -\frac{1}{2|n|} \left(\frac{R}{r}\right)^{|n|} e^{in\theta } \quad &\text{if } r>R.\end{array}\right.}\)
| [1] | [[90, 93]] | https://openalex.org/W2963102779 |
16724c91-68ee-4220-8ec9-f694b9f1dc80 | Planning path for mobile robots while avoiding obstacles effectively is an important problem of robotics. A lot of researches have been carried out to solve the problem, e.g. [1]}, [2]}. Although these approaches are all giving solutions to the problem of finding trajectory for the robots, they use quite different assumptions on obstacles, very different robot's motion models, and different sensors.
Although several strategies have been proposed, they are not effective for rapidly changing environments.
| [1] | [[175, 178]] | https://openalex.org/W2127617835 |
0247e504-e62b-415d-87eb-8b054c6d159a | In order to solve these problems, a biologically inspired reactive algorithm for dynamic environments with moving obstacles was proposed in 2013 [1]}. It is an approach based on Equiangular Navigation Guidance (ENG) laws [2]} [3]}, where they described the mobile robot with non-holonomic model and gave a strict constraint to the movement of obstacles. The proposed algorithm can successfully guide the robot to the target point with a number of assumptions. Although such algorithm is realizable in some situations, it is not always feasible in practice. It is because that on one hand the shapes of obstacles are limited strictly and on other hand obstacles can only move along a straight line with a constant speed.
| [1] | [[145, 148]] | https://openalex.org/W2144604575 |
462dc227-0cca-41a7-84c4-d2162e0b5533 | In order to solve these problems, a biologically inspired reactive algorithm for dynamic environments with moving obstacles was proposed in 2013 [1]}. It is an approach based on Equiangular Navigation Guidance (ENG) laws [2]} [3]}, where they described the mobile robot with non-holonomic model and gave a strict constraint to the movement of obstacles. The proposed algorithm can successfully guide the robot to the target point with a number of assumptions. Although such algorithm is realizable in some situations, it is not always feasible in practice. It is because that on one hand the shapes of obstacles are limited strictly and on other hand obstacles can only move along a straight line with a constant speed.
| [2] | [[221, 224]] | https://openalex.org/W2110482435 |
329a445e-1665-4cf1-9f1d-723d2632b8db | It has been proved that, for the situation that the distance between waypoints is sufficiently large for the turning radius, the spiral algorithm [1]} performs better when the waypoints are dense, and also for the low altitudes surveillance tasks. However, after observing the result of the complete trajectory, the total path length is less when employing the clustered spiral-alternating algorithm [2]}. The methods of the waypoints being clustered into different clusters are manually or using a cluster analysis algorithm.
| [1] | [[146, 149]] | https://openalex.org/W2165139660 |
1d068405-6efb-4cf2-ae49-74413d56f039 |
where \(\Vert \mathbf {z} \Vert _{2,\mathbf {l}} := \Vert (z_1/l_1, \dots , z_d/l_d)\Vert _2\) denotes the aniso-tropic norm of the \(d\) -component vector \(\mathbf {z}\) and where \(\delta _{\mathbf {x},\mathbf {x}^{\prime }}\) is 1 if \(\mathbf {x}=\mathbf {x}^{\prime }\) and 0 otherwise.
As usual, to optimize the hyperparameters \( \mathbf {l} \) and \( n \) , the marginal likelihood was maximized according to [1]}, using the Python package scikit-learn [2]}. Due to the anisotropy of the Matérn kernel, for each input dimension \(i\) , a separate hyperparameter \(l_i \) is calculated. As for the SIASCOR model, the input variables were transformed with the root function \(x_i^{\prime } = \sqrt{x_i}\) for all \(i=1, \dots , 5\) and then scaled to the unit hypercube. Table REF reports the pertinent performance indices and Figure REF shows two plots of the final GPR model.
| [2] | [[469, 472]] | https://openalex.org/W2101234009 |
43914e38-daac-417e-8027-4558f5ce03e6 | Recently, I2I applications are becoming widespread, including a plethora of diverse tasks such as attribute manipulation [1]}, [2]}, [3]}, [4]}, sketch-to-image [5]}, [6]}, style transfer [7]}, [8]}, semantic synthesis [9]}, [10]}, and others [11]}, [12]}, [13]}, [14]}. Among these, generative adversarial networks (GANs) [15]} are particularly suitable for this task due to few restrictions on the generator network. Indeed, GANs are deep generative models composed of a network whose goal is to generate new images given an input noise vector and a discriminator network that aims at distinguishing generated images from real one. These are trained in an adversarial fashion requiring very few constraints on models architecture, especially on the generator. The latter network covers the role of the mapping function \(G_{S \rightarrow T}\) in Eq. REF for GAN-based I2I applications.
| [3] | [[133, 136]] | https://openalex.org/W3034600949 |
377d579c-8ec0-4067-91d7-72a4760fb692 | Latest works on generative models, including I2I ones, have achieved impressive results by scaling-up models in terms of trainable parameters, computational complexity and memory requirements [1]}, [2]}, [3]}. Therefore, most of them are difficult to train with a lower budget, undermining their replicability. Furthermore, low attention has been paid to how multidimensional inputs such as color images are processed by these models. The human eye perceives an image with lots of color shades that are the result of interactions among the three RGB channels. Therefore, channels interplays are crucial for a proper image processing. Actually, common real-valued models do not leverage this detail treating each channel as a separate entity, causing an information loss.
| [2] | [[198, 201]] | https://openalex.org/W2893749619 |
b91b8f4a-7b17-4430-8513-2462fa0cf6bf | whereby \(\mathbf {A}_i\) describe the hypercomplex algebra rules by learning them directly from data (i.e., the Hamilton product for the quaternion domain) and \(\mathbf {F}_i\) are batch of weights that can be scalars for fully connected (FC) layers, or groups of filters for convolutional ones. In the first case, we deal with parameterized hypercomplex multiplication (PHM) layers [1]}, [2]}, while in the second one we employ parameterized hypercomplex convolutional (PHC) layers [3]}.
| [1] | [[387, 390]] | https://openalex.org/W3120074043 |
6d25427d-0998-44a4-bf9a-b511f105f5bd | Lately, these techniques have been applied to generative models. Indeed, state-of-the-art generative models usually comprise tens of million parameters and are often employed with multidimensional inputs such as color images or multichannel audio signals [1]}, [2]}. The quaternion-valued variational autoencoder and the family of quaternion generative adversarial networks have demonstrated to obtain comparable performance while reducing the storage memory amount due to the parameters reduction [3]}, [4]}, [5]}, [6]}. Encouraged by these results, we propose to expolit novel PHNNs methods to define a more advanced generative model for image-to-image translation.
| [2] | [[261, 264]] | https://openalex.org/W4214926101 |
66a06224-58a1-4ee5-a53f-7191dea4d551 | In the following we will discuss in more details a specific sub-amplitude which we found to be particularly interesting in terms of
sensitivity to NP, presenting ways to embed it in physical processes. The full discussion of the several processes considered (see Tables REF , REF and REF )
can be found in Ref. [1]}. Some of them have been recently measured at the LHC [2]}, [3]}, [4]}, [5]}, [6]} and studied with more in-depth analyses [7]}, [8]}, [9]}, [10]}, [11]}.
<TABLE><TABLE><TABLE> | [1] | [[312, 315]] | https://openalex.org/W3105105687 |
568de672-95fb-4a6e-adb5-0d49d382f00b | After having discussed the fitting methodology, we finally present the results of the global interpretation of top, Higgs and
EW diboson data. In particular we report the best fit values for the 50 Wilson coefficients of the analysis, as well as the
confidence intervals. We first discuss the quality of the fit and then move on to present the bounds, studying in particular the dependence
of the results from the choice of input datasets and the theory settings. For a more complete analysis of the interpretation we refer
to Ref. [1]}, while here we will focus on specific aspects relevant for this thesis.
| [1] | [[532, 535]] | https://openalex.org/W3190755860 |
16de8be1-311b-4a4a-ae27-4d11c733daf3 | Section is divided in two very different parts. In section REF we show how the "ACD-transformation" could provide a smaller deterministic parity automaton than the constructions of [1]} and [2]}. In section REF we analyse the information given by the "alternating cycle decomposition" and we provide two original proofs concerning the possibility of labelling automata with different acceptance conditions.
| [1] | [[183, 186]] | https://openalex.org/W2165726630 |
0be2052d-3ffd-4c33-8f12-7a0c565b4ec0 | Generalized weak conditions
Let \((V,E,\mathit {Source},\mathit {Target},q_0,\mathit {Acc})\) be a "transition system". An ""ordered partition"" of \(V\) is a partition \(V_1,\dots ,V_s\) of \(V\) such that for every pair of vertices \(p\in V_i\) , \(q\in V_j\) , if there is a transition from \(p\) to \(q\) , then \(i\le j\) . We call each subgraph \(V_i\) a ""component of the ordered partition"". Every such component must be a union of "strongly connected components" of the transition system, so we can imagine that the partition is the decomposition into strongly connected components suitably ordered. We remark that given an "ordered partition", a "run" will eventually stay in some component \(V_i\) . Given different representations of acceptance conditions \(\mathit {Acc}_1, \dots , \mathit {Acc}_m\) from some of the previous classes, a ""generalised weak condition"" is a condition in which we allow the use of the different conditions in different components of an ordered partition of a transition system. We will mainly use the following type of "generalised weak" condition:
Given a transition system and an "ordered partition" \((V_i)_{i=1}^{s}\) , a ""\(\mathit {Weak}_k\)""-condition is a "parity condition" such that in any component \(V_i\) there are at most \(k\) different priorities associated to transitions between vertices in \(V_i\) . It is the "generalised weak condition" for the parity conditions over \([1,k]\) and \([0,k-1]\) . The adjective Weak has typically been used to refer to the condition \("\mathit {Weak}_1"\) . It corresponds to a partition of the transition system into “accepting” and “rejecting” components. A "run" will be accepted if the component it finally stays in is accepting.
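The \(\mathit {Weak}_k\) property just defined can be checked mechanically. The following sketch is an illustration of our own (the encoding is not from the text): a transition system is given as a list of edges (source, target, priority), and the ordered partition as a map sending each vertex to the index of its component.

```python
from collections import defaultdict

def is_ordered_partition(edges, comp):
    # comp[v] is the index i of the component V_i containing v; in an
    # ordered partition, indices never decrease along a transition.
    return all(comp[p] <= comp[q] for p, q, _ in edges)

def is_weak_k(edges, comp, k):
    # A parity condition is Weak_k for this partition when every component
    # carries at most k distinct priorities on its internal transitions.
    if not is_ordered_partition(edges, comp):
        return False
    used = defaultdict(set)
    for p, q, prio in edges:
        if comp[p] == comp[q]:          # transition inside one component
            used[comp[p]].add(prio)
    return all(len(s) <= k for s in used.values())

# A Weak_1 example: component 0 only produces the even priority 0
# ("accepting"), component 1 only the odd priority 1 ("rejecting").
edges = [("a", "a", 0), ("a", "b", 1), ("b", "b", 1)]
comp = {"a": 0, "b": 1}
assert is_weak_k(edges, comp, 1)
```

Transitions between two different components are not counted: since the partition is ordered, a run takes them only finitely often.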
"Transition systems" (resp. "automata", "games") using an acceptance condition of type \(\mathcal {R}\) will be called ""\(\mathcal {R}\) -transition systems"" (resp. \(\mathcal {R}\) -automata, \(\mathcal {R}\) -games). We will also say that they are labelled with an \(\mathcal {R}\) -condition.
As we have already observed, we can always suppose that \(\Gamma =E\) . However, this supposition might affect the size of the representation of the acceptance conditions, and therefore the complexity of related algorithms as shown in [1]}.
In figure REF we show three automata recognizing the language
\( \mathcal {L}= \lbrace u \in \lbrace 0,1\rbrace ^\omega \; : \; \mathit {Inf}(u)=\lbrace 1\rbrace \text{ or } (\mathit {Inf}(u)=\lbrace 0\rbrace \text{ and} \text{ there is an even number of 1's in } u) \rbrace \)
and using different acceptance conditions. We represent Büchi conditions by marking the accepting transitions with a • symbol. For Muller or parity conditions we write in each transition \(\alpha : a\) , with \(\alpha \in \lbrace 0,1\rbrace \) the input letter and \(a\in \Gamma \) the output letter. The initial vertices are represented with an incoming arrow.
<FIGURE>In the following we will use a small abuse of notation and speak indifferently of an acceptance condition and its representation. For example, we will sometimes replace the "acceptance condition" of a "transition system" by a family of sets \(\mathcal {F}\) (representing a "Muller condition") or by a function assigning priorities to edges.
Equivalent conditions
Two different representations of acceptance conditions over a set \(\Gamma \) are ""equivalent"" if they define the same set \(\mathit {Acc}\subseteq \Gamma ^\infty \) .
Given a "transition system graph" \(G\) , two representations \(\mathcal {R}_1,\mathcal {R}_2\) of acceptance conditions are ""equivalent over"" \(G\) if they define the same accepting subset of runs of \("\mathpzc {Run}"_{T}\) . We write \((G,\mathcal {R}_1) \simeq (G,\mathcal {R}_2)\) .
If \(\mathcal {A}\) is the "transition system graph" of an automaton (as in example REF ), and \(\mathcal {R}_1,\mathcal {R}_2\) are two representations of acceptance conditions such that \((\mathcal {A},\mathcal {R}_1) \simeq (\mathcal {A},\mathcal {R}_2)\) , then they recognise the same language: \(\mathcal {L}(\mathcal {A},\mathcal {R}_1)=\mathcal {L}(\mathcal {A},\mathcal {R}_2)\) . However, the converse only holds for "deterministic" automata.
Let \(\mathcal {A}\) be the "transition system graph" of a "deterministic automaton" over the alphabet \(\Sigma \) and let \(\mathcal {R}_1,\mathcal {R}_2\) be two representations of acceptance conditions such that \(\mathcal {L}(\mathcal {A},\mathcal {R}_1)=\mathcal {L}(\mathcal {A},\mathcal {R}_2)\) . Then, both conditions are "equivalent over" \(\mathcal {A}\) , \((\mathcal {A},\mathcal {R}_1) \simeq (\mathcal {A},\mathcal {R}_2)\) .
Let \(\varrho \in {\mathpzc {Run}}_{T}\) be an infinite "run" in \(\mathcal {A}\) , and let \(u\in \Sigma ^\omega \) be the word in the input alphabet such that \(\varrho \) is the "run over" \(u\) in \(\mathcal {A}\) . Since \(\mathcal {A}\) is deterministic, \(\varrho \) is the only "run over" \(u\) , then \(\varrho \) belongs to the "acceptance condition" of \((\mathcal {A},\mathcal {R}_i)\) if and only if the word \(u\) belongs to \(\mathcal {L}(\mathcal {A},\mathcal {R}_1)=\mathcal {L}(\mathcal {A},\mathcal {R}_2)\) , for \(i=1,2\) .
The deterministic parity hierarchy
As we have mentioned in the introduction, deterministic Büchi automata have strictly less expressive power than deterministic Muller automata. However, every language recognized by a Muller automaton can be recognized by a deterministic parity automaton, but we might require at least some number of priorities to do so. We can assign to each regular language \(L\subseteq \Sigma ^\omega \) the optimal number of priorities needed to recognise it using a "deterministic automaton". We obtain in this way the ""deterministic parity hierarchy"", first introduced by Mostowski in [2]}, represented in figure REF . In that figure, we denote by \([\mu ,\eta ]\) the set of languages over an alphabet \(\Sigma \) that can be recognized using a deterministic "\([\mu ,\eta ]\) -parity" automaton. The intersection of the levels \([0,k]\) and \([1,k+1]\) is exactly the set of languages recognized using a \("\mathit {Weak}_k"\) deterministic automaton.
This hierarchy is strict, that is, for each level of the hierarchy there are languages that do not appear in lower levels [3]}.
<FIGURE>We observe that the set of languages that can be recognised by a deterministic "Rabin" automaton using \(r\) Rabin pairs is the level \([1,2r+1]\) . Similarly, the languages recognisable by a deterministic "Streett" automaton using \(s\) pairs is \([0,2s]\) .
For non-deterministic automata the hierarchy collapses at the level \([0,1]\) (Büchi automata).
Trees
A ""tree"" is a set of sequences of non-negative integers \(T\subseteq \omega ^*\) that is prefix-closed: if \(\tau \cdot i \in T\) , for \(\tau \in \omega ^*, i\in \omega \) , then \(\tau \in T\) .
In this report we will only consider finite trees.
The elements of \(T\) are called ""nodes"". A ""subtree"" of \(T\) is a tree \(T^{\prime }\subseteq T\) . The empty sequence \(\varepsilon \) belongs to every non-empty "tree" and it is called the ""root"" of the tree. A "node" of the form \(\tau \cdot i\) , \(i\in \omega \) , is called a ""child"" of \(\tau \) , and \(\tau \) is called its ""parent"". We let \(\mathit {Children}(\tau )\) denote the set of children of a node \(\tau \) . Two different children \(\sigma _1,\sigma _2\) of \(\tau \) are called ""siblings"", and we say that \(\sigma _1\) is ""older"" than \(\sigma _2\) if \("\mathit {Last}"(\sigma _1)<"\mathit {Last}"(\sigma _2)\) . We will draw the children of a node from left to right following this order.
If two "nodes" \(\tau ,\sigma \) verify \(\tau {\sqsubseteq }\sigma \) , then \(\tau \) is called an ""ancestor"" of \(\sigma \) , and \(\sigma \) a ""descendant"" of \(\tau \) (we add the adjective “strict” if in addition they are not equal).
A "node" is called a ""leaf"" of \(T\) if it is a maximal sequence of \(T\) (for the "prefix relation" \({\sqsubseteq }\) ). A ""branch"" of \(T\) is the set of prefixes of a "leaf". The set of branches of \(T\) is denoted \(""\mathit {Branch}""(T)\) . We consider the lexicographic order over leaves, that is, for two leaves \(\sigma _1, \, \sigma _2\) , \(\sigma _1<_{\mathit {lex}}\sigma _2\) if \(\sigma _1(k)<\sigma _2(k)\) , where \(k\) is the smallest position such that \(\sigma _1(k)\ne \sigma _2(k)\) . We extend this order to \("\mathit {Branch}"(T)\) : let \(\beta _1\) , \(\beta _2\) be two branches defined by the leaves \(\sigma _1\) and \(\sigma _2\) respectively. We define \(\beta _1<\beta _2\) if \(\sigma _1 <_{\mathit {lex}}\sigma _2\) . That is, the set of branches is ordered from left to right.
For a node \(\tau \in T\) we define \(""\mathit {Subtree}_T""(\tau )\) as the subtree consisting on the set of nodes that appear below \(\tau \) , or above it in the same branch (they are "ancestors" or "descendants" of \(\tau \) ):
\( \mathit {Subtree}_T(\tau )= \lbrace \sigma \in T \; : \; \sigma {\sqsubseteq }\tau \text{ or } \tau {\sqsubseteq }\sigma \rbrace . \)
We omit the subscript \(T\) when the tree is clear from the context.
Given a node \(\tau \) of a tree \(T\) , the ""depth"" of \(\tau \) in \(T\) is defined as the length of \(\tau \) , \({\mathit {Depth}}(\tau )=|\tau |\) (the root \(\varepsilon \) has depth 0). The ""height of a tree"" \(T\) , written \({\mathit {Height}}(T)\) , is defined as the maximal depth of a "leaf" of \(T\) plus 1. The ""height of the node"" \(\tau \in T\) is \({\mathit {Height}}(T)-{\mathit {Depth}}(\tau )\) (maximal leaves have height 1).
A ""labelled tree"" is a pair \((T,\nu )\) , where \(T\) is a "tree" and \(\nu : T \rightarrow \Lambda \) is a labelling function into a set of labels \(\Lambda \) .
In figure REF we show a tree \(T\) of "height" 4 and we show \({\mathit {Subtree}}(\tau )\) for \(\tau =\langle 2 \rangle \) . The node \(\tau \) has "depth" 1 and height 3. The branches \(\alpha \) , \(\beta \) , \(\gamma \) are ordered as \(\alpha <\beta <\gamma \) .
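These notions translate directly into code. The following sketch (an encoding of our own; only the definitions come from the text) represents a finite tree as a prefix-closed set of tuples of non-negative integers and computes children, branches, heights and subtrees as defined above.

```python
# Finite trees as prefix-closed sets of tuples of non-negative integers.

def children(T, node):
    """Children of `node`, ordered from the oldest (smallest last integer)."""
    return sorted(t for t in T if len(t) == len(node) + 1 and t[:-1] == node)

def leaves(T):
    return [t for t in T if not children(T, t)]

def branches(T):
    # A branch is the set of prefixes of a leaf; we identify it with its
    # leaf, and the lexicographic order on leaves orders branches
    # from left to right.
    return sorted(leaves(T))

def height(T):
    # Maximal depth of a leaf plus 1; the depth of a node is its length.
    return max(len(t) for t in T) + 1

def subtree(T, tau):
    # Nodes that are ancestors or descendants of tau.
    return {s for s in T if s[:len(tau)] == tau or tau[:len(s)] == s}

# A small tree with root (), two children and two grandchildren:
T = {(), (0,), (1,), (1, 0), (1, 1)}
assert branches(T) == [(0,), (1, 0), (1, 1)]
assert height(T) == 3
assert subtree(T, (1,)) == {(), (1,), (1, 0), (1, 1)}
```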
<FIGURE>
An optimal transformation of Muller into parity conditions
In the previous section we have presented different classes of "acceptance conditions" for "transition systems" over infinite words, with "Muller conditions" being the most general kind of "\(\omega \) -regular conditions".
In order to translate a "Muller condition" \(\mathcal {F}\) over \(\Gamma \) into a simpler one, the usual procedure is to build
a "deterministic automaton" over \(\Gamma \) using a simpler condition that accepts \(\mathcal {F}\) , i.e., this automaton will accept the words \(u\in \Gamma ^\omega \) such that \("\mathit {Inf}"(u)\in \mathcal {F}\) . As we have asserted, the simplest condition that we could use in general in such a deterministic automaton is a "parity" one, and the number of priorities that we can use is determined by the position of the "Muller condition" in the "parity hierarchy".
In this section we build a deterministic parity automaton that recognises a given "Muller condition", and we prove that this automaton has minimal size and uses the optimal number of priorities. This construction is based on the notion of the "Zielonka tree", introduced in [4]} (applied there to the study of the optimal memory needed to solve a Muller game). In most cases, this automaton strictly improves on other constructions such as the LAR [5]} or its modifications [6]}.
All constructions and proofs in this section can be regarded as a special case of those of section . However, we include the proofs for this case here since we think that this will help the reader to understand many ideas that will reappear in section in a more complicated context.
The Zielonka tree automaton
In this first section we present the Zielonka tree and the parity automaton that it induces.
[Zielonka tree of a Muller condition]
Let \(\Gamma \) be a finite set of colours and \(\mathcal {F}\subseteq \mathcal {P}(\Gamma )\) a "Muller condition" over \(\Gamma \) . The ""Zielonka tree"" of \(\mathcal {F}\) , written \(T_\mathcal {F}\) , is a tree labelled with subsets of \(\Gamma \) via the labelling \(\nu :T_\mathcal {F}\rightarrow \mathcal {P}(\Gamma )\) , defined inductively as:
\(\nu (\varepsilon )=\Gamma \)
If \(\tau \) is a node already constructed labelled with \(S=\nu (\tau )\) , we let \(S_1,\dots ,S_k\) be the maximal subsets of \(S\) verifying the property
\( S_i \in \mathcal {F}\; \Leftrightarrow \; S\notin \mathcal {F}\quad \text{ for each } i=1,\dots ,k . \)
For each \(i=1,\dots ,k\) we add a child to \(\tau \) labelled with \(S_i\) .
We have not specified the order in which children of a node appear in the Zielonka tree. Therefore, strictly speaking there will be several Zielonka trees of a "Muller condition". The order of the nodes will not have any relevance in this work and we will speak of “the” Zielonka tree of \(\mathcal {F}\) .
We say that the condition \(\mathcal {F}\) and the tree \("T_\mathcal {F}"\) are (Zielonka)even if \(\Gamma \in \mathcal {F}\) , and that they are (Zielonka)odd on the contrary. We associate a priority \(""p_Z(\tau )""\) to each node (to each level in fact) of the "Zielonka tree" as follows:
If \("T_\mathcal {F}"\) is (Zielonka)even, then \(p_Z(\tau )={\mathit {Depth}}(\tau )\) .
If \("T_\mathcal {F}"\) is (Zielonka)odd, then \(p_Z(\tau )={\mathit {Depth}}(\tau )+1\) .
In this way, \(p_Z(\tau )\) is even if and only if \(\nu (\tau )\in \mathcal {F}\) . We represent nodes \(\tau \in "T_\mathcal {F}"\) such that \("p_Z(\tau )"\) is even as a ""circle"" (round nodes), and those for which \(p_Z(\tau )\) is odd as a square.
Let \(\Gamma _1=\lbrace a,b,c\rbrace \) and \(\mathcal {F}_1=\lbrace \lbrace a\rbrace ,\lbrace b\rbrace \rbrace \) (the "Muller condition" of the automaton of "example REF "). The "Zielonka tree" \(T_{\mathcal {F}_1}\) is shown in figure REF . It is (Zielonka)odd.
Let \(\Gamma _2=\lbrace a,b,c,d\rbrace \) and
\(\mathcal {F}_2=\lbrace \lbrace a,b,c,d\rbrace ,\lbrace a,b,d \rbrace ,\lbrace a,c,d\rbrace ,\lbrace b,c,d \rbrace ,\lbrace a,b\rbrace ,\lbrace a,d\rbrace ,\lbrace b,c\rbrace ,\lbrace b,d \rbrace ,\lbrace a\rbrace ,\lbrace b \rbrace ,\lbrace d \rbrace \rbrace .\)
The "Zielonka tree" \(T_{\mathcal {F}_2}\) is (Zielonka)even and it is shown on figure REF .
On the right of each tree there are the priorities assigned to the nodes of the corresponding level. We have named the branches of the Zielonka trees with Greek letters, and we indicate the names of the nodes in violet.
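The inductive definition of the "Zielonka tree" can be run directly. In the sketch below (an encoding of our own, not from the text), \(\mathcal {F}\) is a set of frozensets over the alphabet and a tree node is a pair (label, children); following the usual convention we only consider nonempty subsets, since \(\mathit {Inf}(u)\) is never empty.

```python
from itertools import combinations

def zielonka_tree(S, F):
    # Children of a node labelled S: the maximal nonempty X included in S
    # such that (X in F) if and only if (S not in F).
    cand = [frozenset(c) for r in range(1, len(S) + 1)
            for c in combinations(sorted(S), r)
            if (frozenset(c) in F) == (S not in F)]
    maximal = [X for X in cand if not any(X < Y for Y in cand)]
    return (S, [zielonka_tree(X, F) for X in maximal])

def tree_height(t):
    S, kids = t
    return 1 + (max(map(tree_height, kids)) if kids else 0)

def n_branches(t):
    S, kids = t
    return sum(map(n_branches, kids)) if kids else 1

# F_1 over {a, b, c}: an odd tree of height 2 with two branches.
F1 = {frozenset("a"), frozenset("b")}
T1 = zielonka_tree(frozenset("abc"), F1)
assert tree_height(T1) == 2 and n_branches(T1) == 2

# F_2 over {a, b, c, d}: an even tree of height 4 with three branches
# (alpha, beta, gamma in the figure).
F2 = {frozenset(w) for w in
      ["abcd", "abd", "acd", "bcd", "ab", "ad", "bc", "bd", "a", "b", "d"]}
T2 = zielonka_tree(frozenset("abcd"), F2)
assert tree_height(T2) == 4 and n_branches(T2) == 3
# In the even case a node at depth d gets priority p_Z = d,
# so T2 uses the priorities [0, 3].
```

The order in which siblings are produced is irrelevant, in accordance with the remark above.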
<FIGURE>We show next how to use the "Zielonka tree" of \(\mathcal {F}\) to build a "deterministic automaton" recognizing the "Muller condition" \(\mathcal {F}\) .
For a branch \(\beta \in {\mathit {Branch}}("T_\mathcal {F}")\) and a colour \(a\in \Gamma \) we define \((Zielonka){\mathit {Supp}}(\beta ,a)=\tau \) as the "deepest" node (maximal for \({\sqsubseteq }\) ) in \(\beta \) such that \(a\in \nu (\tau )\) .
Given a tree \(T\) , a branch \(\beta \in {\mathit {Branch}}(T)\) and a node \(\tau \in \beta \) , if \(\tau \) is not a "leaf" then it has a unique "child" \(\sigma _\beta \) such that \(\sigma _\beta \in \beta \) . In this case, we let \(""\mathit {Nextchild}""(\beta ,\tau )\) be the next "sibling" of \(\sigma _\beta \) on its right, that is:
\( \mathit {Nextchild}(\beta ,\tau )={\left\lbrace \begin{array}{ll}\text{ Smallest child of } \tau \text{ if } \sigma _\beta \text{ is the greatest child of } \tau .\\[2mm]\text{ Smallest younger sibling of } \sigma _\beta \text{ if not.}\end{array}\right.} \)
We define \(""\mathit {Nextbranch}""(\beta ,\tau )\) as the leftmost branch in \(T\) (smallest in the order defined in section REF ) below \("\mathit {Nextchild}"(\beta ,\tau )\) , if \(\tau \) is not a "leaf", and we let \(\mathit {Nextbranch}(\beta ,\tau )= \beta \) if \(\tau \) is a leaf of \(T\) .
In the previous example, on the tree \("T_{\mathcal {F}_2}"\) of figure REF , we have that \((Zielonka){\mathit {Supp}}(\alpha ,c)=\langle 0 \rangle \) , \({\mathit {Nextchild}}(\beta ,\langle \varepsilon \rangle )=\langle 1 \rangle \) , \("\mathit {Nextbranch}"(\beta ,\langle \varepsilon \rangle )=\gamma \) , \({\mathit {Nextchild}}(\beta ,\langle 0 \rangle )=\langle 0 {,} 0 \rangle \) and \("\mathit {Nextbranch}"(\beta ,\langle 0 \rangle )=\alpha \) .
[Zielonka tree automaton]
Given a "Muller condition" \(\mathcal {F}\) over \(\Gamma \) with "Zielonka tree" \(T_\mathcal {F}\) , we define the ""Zielonka tree automaton"" \(\mathcal {\mathcal {Z}_{\mathcal {F}}}=(Q,\Gamma ,q_0, [\mu ,\eta ],\delta , p:Q\times \Gamma \rightarrow [\mu ,\eta ])\) as a "deterministic automaton" using a "parity" acceptance condition given by \(p\) , where
\(Q={\mathit {Branch}}(T_\mathcal {F})\) , the set of states is the set of branches of \(T_\mathcal {F}\) .
The initial state \(q_0\) is irrelevant, we pick the leftmost branch of \(T_\mathcal {F}\) .
\(\delta (\beta ,a)= {\mathit {Nextbranch}}(\beta ,(Zielonka){\mathit {Supp}}(\beta ,a))\) .
\(\mu =0, \; \eta ={\mathit {Height}}(T_\mathcal {F})-1\) if \(\mathcal {F}\) is (Zielonka)even.
\(\mu =1, \; \eta ={\mathit {Height}}(T_\mathcal {F})\) if \(\mathcal {F}\) is (Zielonka)odd.
\(p(\beta ,a)="p_Z"((Zielonka){\mathit {Supp}}(\beta ,a))\) .
The transitions of the automaton are determined as follows: if we are in a branch \(\beta \) and we read a colour \(a\) , then we move up in the branch \(\beta \) until we reach a node \(\tau \) that contains the colour \(a\) in its label. Then we pick the child of \(\tau \) just on the right of the branch \(\beta \) (in a cyclic way) and we move to the leftmost branch below it. We produce the priority corresponding to the depth of \(\tau \) .
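The following self-contained sketch simulates the "Zielonka tree automaton" on ultimately periodic words. The encoding is our own (names and data layout are not from the text): a state is a branch, represented as the tuple of labels from the root down to a leaf, and acceptance is decided by the minimal priority produced on the loop of the run.

```python
from itertools import combinations

def zielonka_tree(S, F):
    # Children: the maximal nonempty X with (X in F) <=> (S not in F).
    cand = [frozenset(c) for r in range(1, len(S) + 1)
            for c in combinations(sorted(S), r)
            if (frozenset(c) in F) == (S not in F)]
    maximal = [X for X in cand if not any(X < Y for Y in cand)]
    return (S, [zielonka_tree(X, F) for X in maximal])

def branches(t, path=()):
    S, kids = t
    path = path + (S,)
    return [path] if not kids else [b for k in kids for b in branches(k, path)]

def supp(beta, a):
    # Index in the branch of the deepest node whose label contains a.
    return max(i for i, S in enumerate(beta) if a in S)

def step(all_b, beta, a):
    """One transition: returns (new branch, depth of Supp(beta, a))."""
    i = supp(beta, a)
    if i == len(beta) - 1:              # Supp is a leaf: stay on this branch
        return beta, i
    sibs = [b for b in all_b if b[:i + 1] == beta[:i + 1]]
    order = sibs[sibs.index(beta) + 1:] + sibs   # cyclic order after beta
    # Leftmost branch below the next child of Supp (cyclically); if Supp has
    # a single child, this is the leftmost branch below that child.
    return next((b for b in order if b[i + 1] != beta[i + 1]), sibs[0]), i

def accepts(F, gamma, prefix, cycle):
    """Does the Zielonka tree automaton accept u = prefix . cycle^omega ?"""
    T = zielonka_tree(frozenset(gamma), F)
    all_b = branches(T)
    base = 0 if T[0] in F else 1        # even trees use [0, h-1], odd [1, h]
    beta = all_b[0]
    for a in prefix:
        beta, _ = step(all_b, beta, a)
    seen, prios, k = {}, [], 0
    while (beta, k % len(cycle)) not in seen:
        seen[(beta, k % len(cycle))] = k
        beta, i = step(all_b, beta, cycle[k % len(cycle)])
        prios.append(i + base)          # p(beta, a) = p_Z(Supp(beta, a))
        k += 1
    loop_start = seen[(beta, k % len(cycle))]
    return min(prios[loop_start:]) % 2 == 0

# The condition F_1 of the running example: accept iff Inf(u) is {a} or {b}.
F1 = {frozenset("a"), frozenset("b")}
assert accepts(F1, "abc", "", "a")
assert not accepts(F1, "abc", "c", "ab")
```

For a cycle enumerating exactly the letters of a set \(S\) , we have \(\mathit {Inf}(u)=S\) , so the verdict can be compared with membership of \(S\) in \(\mathcal {F}\) , as the correctness theorem below states.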
Let us consider the conditions of example REF . The "Zielonka tree automaton" for the "Muller condition" \(\mathcal {F}_1\) is shown in figure REF , and that for \(\mathcal {F}_2\) in figure REF . States are the branches of the respective "Zielonka trees".
<FIGURE>[Correctness]
Let \(\mathcal {F}\subseteq \mathcal {P}(\Gamma )\) be a "Muller condition" over \(\Gamma \) . Then, a word \(u\in \Gamma ^\omega \) verifies \("\mathit {Inf}(u)"\in \mathcal {F}\) (\(u\) belongs to the Muller condition) if and only if \(u\) is "accepted by" \("\mathcal {\mathcal {Z}_{\mathcal {F}}}"\) .
Let us first remark that we can associate to each input word \(u\in \Gamma ^\omega \) an infinite sequence of nodes in the Zielonka tree \(\lbrace \tau _{u,i}\rbrace _{i=0}^{\infty }\) as follows: let \(\beta _i\) be the state of the Zielonka tree automaton (the branch of the tree \("T_\mathcal {F}"\) ) reached after reading \(u_0u_1\dots u_{i-1}\) (\(\beta _0\) being the leftmost branch), then
\( \tau _{u,i}=(Zielonka){\mathit {Supp}}(\beta _i,u_i) \)
The sequence of priorities produced by the automaton \(\mathcal {\mathcal {Z}_{\mathcal {F}}}\) when reading \(u\) is given by the priorities associated to \(\tau _{u,i}\) , that is, \("\mathit {Output}"_\mathcal {\mathcal {Z}_{\mathcal {F}}}(u)=\lbrace "p_Z"(\tau _{u,i})\rbrace _{i=0}^{\infty }\) .
Let \(p_{\min }\) be the minimal priority produced infinitely often in \("\mathit {Output}"_\mathcal {\mathcal {Z}_{\mathcal {F}}}(u)\) . We first show that there is a unique node appearing infinitely often in \(\lbrace \tau _{u,i}\rbrace _{i=0}^{\infty }\) such that \("p_Z"(\tau _{u,i})=p_{\min }\) . Indeed, transitions of \(\mathcal {\mathcal {Z}_{\mathcal {F}}}\) verify that \(\delta (\beta ,a)\) is a branch in the subtree under \((Zielonka){\mathit {Supp}}(\beta ,a)\) . However, subtrees below two different "siblings" have disjoint sets of branches, so if \(\tau _{u,i}\) and \(\tau _{u,k}\) are siblings, for some \(k>i\) , then there must exist some transition at position \(j\) , \(i<j<k\) such that \((Zielonka){\mathit {Supp}}(\beta _j,u_j)\) is a strict "ancestor" of \(\tau _{u,i}\) and \(\tau _{u,k}\) . Therefore, \("p_Z"(\tau _{u,j})<"p_Z"(\tau _{u,i})\) , which cannot happen infinitely often since \(p_{\min }="p_Z"(\tau _{u,i})\) .
We let \(\tau _p\) be the highest node visited infinitely often. The reasoning above also proves that all nodes appearing infinitely often in \(\lbrace \tau _{u,i}\rbrace _{i=0}^{\infty }\) are descendants of \(\tau _p\) , and therefore the states appearing infinitely often in the "run over \(u\) " in \(\mathcal {\mathcal {Z}_{\mathcal {F}}}\) are branches in \({\mathit {Subtree}}(\tau _p)\) . We will prove that
\(\mathit {Inf}(u)\subseteq \nu (\tau _p)\) .
For every child \(\sigma \) of \(\tau _p\) , \(\mathit {Inf}(u)\nsubseteq \nu (\sigma )\) .
Therefore, by the definition of the "Zielonka tree", \(\mathit {Inf}(u)\) is accepted if and only if \(\nu (\tau _p)\in \mathcal {F}\) and thus
\( \mathit {Inf}(u)\in \mathcal {F}\quad \Leftrightarrow \quad \nu (\tau _p)\in \mathcal {F}\quad \Leftrightarrow \quad "p_Z"(\tau _p)=p_{\min } \, \text{ is even.} \)
In order to see that \(\mathit {Inf}(u)\subseteq \nu (\tau _p)\) , it suffices to remark that for every branch \(\beta \) of \({\mathit {Subtree}}(\tau _p)\) and for every \(a\notin \nu (\tau _p)\) , we have that \({\mathit {Supp}}(\beta , a)\) is a strict "ancestor" of \(\tau _p\) . Since the nodes \(\tau _{u,i}\) appearing infinitely often are all descendants of \(\tau _p\) , the letter \(a\) cannot belong to \(\mathit {Inf}(u)\) if \(a\notin \nu (\tau _p)\) .
Finally, let us see that \(\mathit {Inf}(u)\nsubseteq \nu (\sigma )\) for every child of \(\tau _p\) . Suppose that \(\mathit {Inf}(u)\subseteq \nu (\sigma )\) for some child \(\sigma \) . Since we visit \(\tau _p\) infinitely often, transitions of the form \(\delta (\beta ,a)\) such that \(\tau _p = {\mathit {Supp}}(\beta ,a)\) take place infinitely often. By definition of \({\mathit {Nextbranch}}(\beta ,a)\) , after each of these transitions we move to a branch passing through the next child of \(\tau _p\) , so we visit all children of \(\tau _p\) infinitely often. Eventually we will have \(\sigma \in \delta (\beta ,a)\) (the state reached in \(\mathcal {\mathcal {Z}_{\mathcal {F}}}\) will be some branch \(\beta ^{\prime }\) below \(\sigma \) ).
However, since \(\mathit {Inf}(u)\subseteq \nu (\sigma )\) , for every \(a\in \mathit {Inf}(u)\) and every \(\beta ^{\prime }\in {\mathit {Subtree}}(\sigma )\) , we would have that \({\mathit {Supp}}(\beta ^{\prime },a)\) is a descendant of \(\sigma \) , and therefore we would not visit \(\tau _p\) again and the priority \(p_{\min }\) would not be produced infinitely often, a contradiction.
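As a sanity check of this correctness argument, here is a small executable sketch (our own Python encoding, not code from the paper) of the Zielonka tree automaton for \(\mathcal {F}=\lbrace \lbrace a\rbrace ,\lbrace b\rbrace \rbrace \) over \(\Gamma =\lbrace a,b\rbrace \) . The transition rule follows \(\mathit {Supp}\) and \(\mathit {Nextbranch}\) as recalled above; the names `Node`, `step` and `min_inf_priority` are ours, and a few warm-up periods suffice for these small ultimately periodic words.

```python
# Sketch of the Zielonka tree automaton Z_F for F = {{a},{b}} over {a,b}:
# states are leaves (branches), transitions follow Supp / Nextbranch, and the
# priority produced is p_Z(Supp(beta, letter)).

class Node:
    def __init__(self, label, children=(), priority=0):
        self.label, self.children, self.priority = set(label), list(children), priority
        self.parent = None
        for c in self.children:
            c.parent = self

def leftmost_leaf(node):
    while node.children:
        node = node.children[0]
    return node

def supp(leaf, letter):
    # deepest node on the current branch whose label contains `letter`
    node = leaf
    while letter not in node.label:
        node = node.parent
    return node

def step(leaf, letter):
    s = supp(leaf, letter)
    if not s.children:                 # Supp(beta, letter) is the leaf itself
        return leaf, s.priority
    child = leaf                       # child of s lying on the current branch
    while child.parent is not s:
        child = child.parent
    nxt = s.children[(s.children.index(child) + 1) % len(s.children)]
    return leftmost_leaf(nxt), s.priority   # leftmost branch below the next child

# Zielonka tree of F: root {a,b} (not in F, priority 1), leaves {a}, {b} (priority 2).
root = Node('ab', [Node('a', priority=2), Node('b', priority=2)], priority=1)

def min_inf_priority(cycle):
    # minimal priority produced infinitely often on the word cycle^omega
    leaf = leftmost_leaf(root)
    for letter in cycle * 4:           # warm up until the run is periodic
        leaf, _ = step(leaf, letter)
    prios = []
    for letter in cycle * 4:           # record over several periods
        leaf, p = step(leaf, letter)
        prios.append(p)
    return min(prios)

print(min_inf_priority('a'))    # 2, even: Inf = {a} is in F
print(min_inf_priority('ab'))   # 1, odd:  Inf = {a,b} is not in F
```

Running the automaton on \(a^\omega \) and \((ab)^\omega \) reproduces the theorem's statement: the minimal priority produced infinitely often is even exactly when \(\mathit {Inf}(u)\in \mathcal {F}\) .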
Optimality of the Zielonka tree automaton
We prove in this section the strong optimality of the "Zielonka tree automaton", both for the number of priorities (proposition REF ) and for the size (theorem REF ).
Proposition REF can be proved easily by applying the results of [7]}. We present here a self-contained proof.
Let \(\mathcal {P}\) be a parity "transition system" with set of edges \(E\) and priorities given by \(p:E \rightarrow [\mu ,\eta ]\) such that the minimal priority it uses is \(\mu \) and the maximal one is \(\eta \) . If the number of different priorities used in \(\mathcal {P}\) (\(|p(E)|\) ) is smaller than or equal to \(\eta -\mu \) , then we can relabel \(\mathcal {P}\) with a parity condition that is "equivalent over" \(\mathcal {P}\) , uses priorities in \([\mu ^{\prime }, \eta ^{\prime }]\) , and satisfies \(\eta ^{\prime }-\mu ^{\prime } < \eta -\mu \) .
If \(\mathcal {P}\) uses fewer priorities than the length of the interval \([\mu , \eta ]\) , then there is some priority \(d\) , \(\mu <d< \eta \) , that does not appear in \(\mathcal {P}\) . Then, we can relabel \(\mathcal {P}\) with the parity condition given by:
\( p^{\prime }(e)=\left\lbrace \begin{array}{ll}p(e) & \text{ if } p(e)<d,\\p(e)-2 & \text{ if } d <p(e), \end{array} \right. \)
that is clearly an "equivalent condition over" \(\mathcal {P}\) that uses priorities in \([\mu , \eta -2]\) .
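The relabelling in this proof is a one-liner; a minimal executable sketch (our own helper names, with acceptance checked on the cycle of an ultimately periodic priority sequence):

```python
# Sketch of the relabelling in the lemma: if some priority d (mu < d < eta) is
# unused, shifting every priority above d down by 2 preserves the parity of the
# minimal priority seen in any cycle, hence acceptance.

def relabel(p, d):
    """New priority of an edge with priority p, given an unused priority d."""
    return p if p < d else p - 2

def accepted(cycle):
    # parity acceptance of the ultimately periodic run producing cycle^omega
    return min(cycle) % 2 == 0

# Priorities in [0,4] where 3 never occurs: acceptance is unchanged.
for cycle in [[0, 4], [2, 4], [1, 2], [4], [1]]:
    assert accepted(cycle) == accepted([relabel(p, 3) for p in cycle])
print("acceptance preserved on all sample cycles")
```

The minimal priority of a cycle either stays the same (if below \(d\) ) or drops by 2 (if above \(d\) ), so its parity never changes.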
[Optimal number of priorities]
The "Zielonka tree" gives the optimal number of priorities recognizing a "Muller condition" \(\mathcal {F}\) . More precisely, if \([\mu ,\eta ]\) are the priorities used by \(\mathcal {\mathcal {Z}_{\mathcal {F}}}\) and \(\mathcal {P}\) is another "parity automaton" recognizing \(\mathcal {F}\) , it uses at least \(\eta -\mu +1\) priorities. Moreover, if it uses priorities in \([\mu ^{\prime },\eta ^{\prime }]\) and \(\eta -\mu = \eta ^{\prime }-\mu ^{\prime }\) , then \(\mu \) and \(\mu ^{\prime }\) have the same parity.
Let \(\mathcal {P}\) be a "deterministic" "parity automaton" recognizing \(\mathcal {F}\) using priorities in \([\mu ^{\prime },\eta ^{\prime }]\) . After lemma REF , we can suppose that \(\mathcal {P}\) uses all priorities in this interval. Let \(\beta \) be a "branch" of \(\mathcal {T}_{\mathcal {F}}\) of maximal length \(h=\eta -\mu +1\) , and let \(S_0\subseteq S_{1} \subseteq \dots \subseteq S_{h-1}=\Gamma \) be the labellings of the nodes of this branch from bottom to top. Let us suppose \(S_0\in \mathcal {F}\) , the case \(S_0\notin \mathcal {F}\) being symmetric.
Let \(a_i\) be the finite word formed by concatenating the colours of \(S_i\) . In particular, \(a_i^\omega \) is accepted if and only if \(i\) is even. Let \(\eta ^{\prime }\) be the greatest priority appearing in the automaton \(\mathcal {P}\) . We prove by induction on \(j\) that, for every \(v\in \Gamma ^*\) , the "run over"
\((a_0a_1\dots a_jv)^\omega \) in \(\mathcal {P}\)
produces a priority smaller than or equal to \(\eta ^{\prime }-j\) if \(\eta ^{\prime }\) is even, and smaller than or equal to \(\eta ^{\prime }-j-1\) if \(\eta ^{\prime }\) is odd. We do here the case \(\eta ^{\prime }\) even, the case \(\eta ^{\prime }\) odd being symmetric.
For \(j=0\) this is clear, since \(\eta ^{\prime }\) is the greatest priority. For \(j>0\) , if it were not true, the smallest priority produced infinitely often reading \((a_0a_1\dots a_jv)^\omega \) would be strictly greater than \(\eta ^{\prime }-j\) for some \(v\in \Gamma ^*\) . Since \(\eta ^{\prime }-j\) has the same parity as \(j\) and \(S_j\in \mathcal {F}\) if and only if \(j\) is even, the smallest priority produced infinitely often reading \((a_0a_1\dots a_jv)^\omega \) must have the same parity as \(j\) and cannot be \(\eta ^{\prime }-j+1\) , so it is at least \(\eta ^{\prime }-j+2\) . However, by the induction hypothesis, the run over \((a_0a_1\dots a_{j-1}w)^\omega \) produces a priority smaller than or equal to \(\eta ^{\prime }-(j-1)\) for every \(w\) , in particular for \(w=a_jv\) , a contradiction.
In particular, taking \(v=\varepsilon \) , we have proved that the "run over"
\((a_0a_1\dots a_{h-1})^\omega \) in \(\mathcal {P}\) produces a priority smaller than or equal to \(\eta ^{\prime }-(h-1)\) that has to be even if and only if \(\mu \) is even. Therefore, \(\mathcal {P}\) must use all priorities in \([\eta ^{\prime }-(h-1),\eta ^{\prime }]\) , that is, at least \(h\) priorities.
In order to prove theorem REF we introduce the definition of an \(X\) -strongly connected component and we present two key lemmas.
Let \(\mathcal {A}=(Q,\Sigma , q_0, \Gamma , \delta ,\mathit {Acc})\) be a "deterministic automaton" and \(X\subseteq \Sigma \) a subset of letters of the input alphabet. An ""X-strongly connected component"" (abbreviated \(X\) -SCC) is a non-empty subset of states \(S\subseteq Q\) such that:
For every state \(q\in S\) and every letter \(x\in X\) , \(\delta (q,x)\in S\) .
For every pair of states \(q,q^{\prime }\in S\) there is a finite word \(w\in X^*\) such that \(\delta (q,w)=q^{\prime }\) .
That is, an "\(X\) -SCC" of \(\mathcal {A}\) is the set of states of an "\(X\) -complete" part of \(\mathcal {A}\) that forms a "strongly connected subgraph".
For every "deterministic automaton" \(\mathcal {A}=(Q,\Sigma , q_0, \Gamma , \delta ,\mathit {Acc})\) and every subset of letters \(X\subseteq \Sigma \) there is an "accessible" "\(X\) -SCC" in \(\mathcal {A}\) .
Restricting ourselves to the set of "accessible" states of \(\mathcal {A}\) we can suppose that every state of the automaton is accessible.
We prove the lemma by induction on \(|\mathcal {A}|\) . For \(|\mathcal {A}|=1\) , the state of the automaton forms an \(X\) -SCC. For \(|\mathcal {A}|>1\) , if \(Q\) is not an "\(X\) -SCC", there are \(q,q^{\prime }\in Q\) such that there does not exist a word \(w\in X^*\) such that \(\delta (q,w)=q^{\prime }\) . Let
\( Q_q=\lbrace p\in Q \; : \; \exists u\in X^* \text{ such that } p= \delta (q,u) \rbrace .\)
Since \(q^{\prime }\notin Q_q\) , the set \(Q_q\) is strictly smaller than \(Q\) . The set \(Q_q\) is non-empty and closed under transitions labelled by letters of \(X\) , so the restriction of \(\mathcal {A}\) to this set of states and the alphabet \(X\) forms an automaton \(\mathcal {A}_{Q_q,X}=(Q_q,X, q, \Gamma , \delta ^{\prime })\) (where \(\delta ^{\prime }\) is the restriction of \(\delta \) to these states and letters). By induction hypothesis, \(\mathcal {A}_{Q_q,X}\) contains an \(X\) -SCC that is also an \(X\) -SCC for \(\mathcal {A}\) .
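The induction above is effectively a shrinking fixpoint; a minimal Python sketch (our own encoding: a transition map `delta` from `(state, letter)` pairs to states, assumed defined for all letters of `X`):

```python
# Sketch of the inductive argument: shrink a set of states closed under X until
# every state reaches every other, yielding an accessible X-SCC.

def x_scc(delta, q, X):
    """Return an accessible X-SCC of a deterministic automaton, searching from q."""
    def reach(s):
        seen, todo = {s}, [s]
        while todo:
            t = todo.pop()
            for x in X:
                u = delta[(t, x)]
                if u not in seen:
                    seen.add(u)
                    todo.append(u)
        return seen
    S = reach(q)                      # closed under X, contains q
    while True:
        for s in S:
            R = reach(s)
            if R != S:                # some state of S is unreachable from s:
                S = R                 # restrict, as in the induction step
                break
        else:
            return S

delta = {(0, 'x'): 1, (1, 'x'): 2, (2, 'x'): 2}
print(x_scc(delta, 0, ['x']))         # {2}
```

Each restriction strictly shrinks `S`, mirroring the passage from \(Q\) to \(Q_q\) in the proof, so the procedure terminates with an \(X\) -SCC accessible from `q`.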
Let \(\mathcal {F}\) be a "Muller condition" over \(\Gamma \) , \(T_\mathcal {F}\) its "Zielonka tree" and \(\mathcal {P}=(P,\Gamma ,p_0,[\mu ^{\prime },\eta ^{\prime }],\delta _P,p^{\prime }:P\rightarrow [\mu ^{\prime },\eta ^{\prime }])\) a "deterministic" parity automaton recognizing \(\mathcal {F}\) . Let \(\tau \) be a node of \(T_\mathcal {F}\) and \(C=\nu (\tau )\subseteq \Gamma \) its label. Finally, let \(A,B\subseteq C\) be two different subsets maximal such that \(C\in \mathcal {F}\, \Leftrightarrow \, A \notin \mathcal {F}\) , \(C\in \mathcal {F}\, \Leftrightarrow \, B \notin \mathcal {F}\) (they are the labels of two different children of \(\tau \) ). Then, if \(P_A\) and \(P_B\) are two "accessible" "\(A\) -SCC" and "\(B\) -SCC" of \(\mathcal {P}\) respectively, they satisfy \(P_A \cap P_B=\emptyset \) .
We can suppose that \(C\in \mathcal {F}\) and \(A,B \notin \mathcal {F}\) . Suppose that there is a state \(q\in P_A \cap P_B\) . Let \(A=\lbrace a_1,\dots ,a_l\rbrace \) , \(B=\lbrace b_1,\dots ,b_r\rbrace \) and \(q_1=\delta (q,a_1\cdots a_l)\in P_A\) , \(q_2=\delta (q,b_1\cdots b_r)\in P_B\) . By definition of an \(X\) -SCC, there are words \(u_1 \in A^*\) , \(u_2\in B^*\) such that \(\delta (q_1,u_1)=q\) and \(\delta (q_2,u_2)=q\) . Since \(A,B \notin \mathcal {F}\) , the minimal priorities \(p_1\) and \(p_2\) produced by the "runs over" \((a_1\cdots a_lu_1)^\omega \) and \((b_1\cdots b_r u_2)^\omega \) starting from \(q\) are odd. However, the run over \((a_1\cdots a_lu_1b_1\cdots b_r u_2)^\omega \) starting from \(q\) must produce an even minimal priority (since \(A\cup B \in \mathcal {F}\) ), but the minimal priority visited in this run is \(\min \lbrace p_1,p_2 \rbrace \) , which is odd, a contradiction.
[Optimal size of the Zielonka tree automaton]
Every "deterministic" "parity automaton" \(\mathcal {P}=(P,\Gamma ,p_0,[\mu ^{\prime },\eta ^{\prime }],\delta _P,p^{\prime }:P\times \Gamma \rightarrow [\mu ^{\prime },\eta ^{\prime }])\) accepting a "Muller condition" \(\mathcal {F}\) over \(\Gamma \) verifies
\( |"\mathcal {\mathcal {Z}_{\mathcal {F}}}"|\le |\mathcal {P}| .\)
Let \(\mathcal {P}\)
be a deterministic parity automaton accepting \(\mathcal {F}\) . To show \(|\mathcal {\mathcal {Z}_{\mathcal {F}}}|\le |\mathcal {P}|\) we proceed by induction on the number of colours \(|\Gamma |\) . For \(|\Gamma |=1\) the two possible "Zielonka tree automata" have one state, so the result holds. Suppose \(|\Gamma |>1\) and consider the first level of \(\mathcal {T}_{\mathcal {F}}\) .
Let \(n\) be the number of children of the root of \(\mathcal {T}_{\mathcal {F}}\) . For \(i=1,...,n\) , let \(A_i=\nu (\tau _i)\subseteq \Gamma \) be the label of the \(i\) -th child of the root of \(\mathcal {T}_{\mathcal {F}}\) , \(\tau _i\) , and let \(n_i\) be the number of branches of the subtree under \(\tau _i\) , \({\mathit {Subtree}}(\tau _i)\) . We remark that \(|\mathcal {\mathcal {Z}_{\mathcal {F}}}|=\sum _{i=1}^{n}n_i\) . Let \(\mathcal {F}\upharpoonright A_i:=\lbrace F\in \mathcal {F}\; : \; F\subseteq A_i \rbrace \) . Since each \(A_i\) verifies \(|A_i|<|\Gamma |\) and the "Zielonka tree" for \(\mathcal {F}\upharpoonright A_i\) is the subtree of \(\mathcal {T}_{\mathcal {F}}\) under the node \(\tau _i\) , every "deterministic parity automaton" accepting \(\mathcal {F}\upharpoonright A_i\) has at least \(n_i\) states, by induction hypothesis.
Thanks to lemma REF , for each \(i=1,\dots ,n\) there is an accessible "\(A_i\) -SCC" in \(\mathcal {P}\) , called \(P_i\) . Therefore, the restriction to \(P_i\) (with an arbitrary initial state) is an automaton recognising \(\mathcal {F}\upharpoonright A_i\) . By induction hypothesis, for each \(i=1,...,n \) , \(|P_i|\ge n_i\) . Thanks to lemma REF , we know that for every \(i,j\in \lbrace 1,\dots ,n\rbrace ,\; i \ne j\) , \(P_i \cap P_j = \emptyset \) . We deduce that
\(|\mathcal {P}|\ge \sum \limits _{i=1}^{n}|P_i|\ge \sum \limits _{i=1}^{n}n_i=|\mathcal {\mathcal {Z}_{\mathcal {F}}}| .\)
The Zielonka tree of some classes of acceptance conditions
In this section we present some results proven by Zielonka in [4]} that show how we can use the "Zielonka tree" to deduce whether a "Muller condition" is representable by a "Rabin", "Streett" or "parity" condition. These results are generalized to "transition systems" in section REF .
We first introduce some definitions. The terminology will be justified by the upcoming propositions.
Given a tree \(T\) and a function assigning priorities to nodes, \(p:T\rightarrow \mathbb {N} \) , we say that \((T,p)\) has
""Rabin shape"" if every node with an even priority assigned ("round" node) has at most one child.
""Streett shape"" if every node with an odd priority assigned ("square" node) has at most one child.
""Parity shape"" if every node has at most one child.
Let \(\mathcal {F}\subseteq \mathcal {P}( \Gamma )\) be a "Muller condition". The following conditions are equivalent:
\(\mathcal {F}\) is "equivalent" to a "Rabin condition".
The family \(\mathcal {F}\) is closed under intersection.
\("T_\mathcal {F}"\) has "Rabin shape".
Let \(\mathcal {F}\subseteq \mathcal {P}( \Gamma )\) be a "Muller condition". The following conditions are equivalent:
\(\mathcal {F}\) is "equivalent" to a "Streett condition".
The family \(\mathcal {F}\) is closed under union.
\("T_\mathcal {F}"\) has "Streett shape".
Let \(\mathcal {F}\subseteq \mathcal {P}( \Gamma )\) be a "Muller condition". The following conditions are equivalent:
\(\mathcal {F}\) is "equivalent" to a "parity condition".
The family \(\mathcal {F}\) is closed under union and intersection.
\("T_\mathcal {F}"\) has "parity shape".
Moreover, if any of these conditions holds, \(\mathcal {F}\) is "equivalent" to a "\([1,\eta ]\) -parity condition" (resp. "\([0,\eta -1]\) -parity condition") if and only if \({\mathit {Height}}("T_\mathcal {F}")\le \eta \) , and in case of equality \("T_\mathcal {F}"\) is "odd" (resp. "even").
A "Muller condition" \(\mathcal {F}\subseteq \mathcal {P}(\Gamma )\) is equivalent to a parity condition if and only if it is equivalent to both Rabin and Streett conditions.
In figures REF and REF we represent "Zielonka trees" for some examples of "parity" and "Rabin" conditions.
We remark that for a fixed number of "Rabin" (or Streett) pairs we can obtain "Zielonka trees" of very different shapes, ranging from a single branch (for "Rabin chain conditions") to a tree with a branch for each "Rabin pair" and height 3.
<FIGURE>
An optimal transformation of Muller into parity transition systems
In this section we present our main contribution: an optimal transformation of "Muller" "transition systems" into "parity" transition systems. Firstly, we formalise what we mean by “a transformation” using the notion of "locally bijective morphisms" in section REF .
Then, we describe a transformation from a "Muller transition system" to a parity one. Most transformations found in the literature use the "composition" of the "transition system" by a parity automaton recognising the Muller condition (such as \("\mathcal {Z}_\mathcal {F}"\) ). In order to achieve optimality this does not suffice; we need to take into account the structure of the transition system. Following ideas already present in [3]}, we analyse the alternating chains of accepting and rejecting cycles of the transition system. We arrange this information in a collection of "Zielonka trees" obtaining a data structure, the "alternating cycle decomposition", that subsumes all the structural information of the transition system necessary to determine whether a "run" is accepted or not. We present the "alternating cycle decomposition" in section REF and we show how to use this structure to obtain a parity transition system that mimics the former Muller one in section REF .
In section REF we prove the optimality of this construction. More precisely, we prove that if \(\mathcal {P}\) is a parity transition system that admits a "locally bijective morphism" to a Muller transition system \(\mathcal {T}\) , then the transformation of \(\mathcal {T}\) using the alternating cycle decomposition provides a smaller transition system than \(\mathcal {P}\) , using fewer priorities.
Locally bijective morphisms as witnesses of transformations
We start by defining locally bijective morphisms.
Let \(\mathcal {T}=(V,E,\mathit {Source},\mathit {Target},I_0,\mathit {Acc})\) , \(\mathcal {T}^{\prime }=(V^{\prime },E^{\prime },\mathit {Source}^{\prime },\mathit {Target}^{\prime },I_0^{\prime },\mathit {Acc}^{\prime })\) be two "transition systems". A ""morphism of transition systems"", written \(\varphi : \mathcal {T}\rightarrow \mathcal {T}^{\prime }\) , is a pair of maps \((\varphi _V: V \rightarrow V^{\prime }, \varphi _E: E \rightarrow E^{\prime })\) such that:
\(\varphi _V(v_0)\in I_0^{\prime }\) for every \(v_0\in I_0\) (initial states are preserved).
\(\mathit {Source}^{\prime }(\varphi _E(e))=\varphi _V(\mathit {Source}(e))\) for every \(e\in E\) (origins of edges are preserved).
\(\mathit {Target}^{\prime }(\varphi _E(e))=\varphi _V(\mathit {Target}(e))\) for every \(e\in E\) (targets of edges are preserved).
For every "run" \(\varrho \in {\mathpzc {Run}}_{,\varrho \in \mathit {Acc}\; \Leftrightarrow \; \varphi _E(\varrho ) \in \mathit {Acc}^{\prime } (acceptance condition is preserved).}\) If \((l_V,l_E)\) , \((,l_V^{\prime },l_E^{\prime })\) are "labelled transition systems", we say that \(\varphi \) is a ""morphism of labelled transition systems"" if in addition it verifies
\(l_V^{\prime }(\varphi _V(v))=l_V(v)\) for every \(v\in V\) (labels of states are preserved).
\(l_E^{\prime }(\varphi _E(e))=l_E(e)\) for every \(e\in E\) (labels of edges are preserved).
We remark that it follows from the first three conditions that if \(\varrho \in {\mathpzc {Run}}_{\mathcal {T}}\) is a "run" in \(\mathcal {T}\) , then \(\varphi _E(\varrho )\in {\mathpzc {Run}}_{\mathcal {T}^{\prime }}\) (it is a "run" in \(\mathcal {T}^{\prime }\) starting from some initial vertex). Given a "morphism of transition systems" \(\varphi : \mathcal {T}\rightarrow \mathcal {T}^{\prime }\) , we will denote both maps \(\varphi _V\) and \(\varphi _E\) by \(\varphi \) whenever no confusion arises. We extend \(\varphi _E\) to \(E^*\) and \(E^\omega \) componentwise.
A "morphism of transition systems" \(\varphi =(\varphi _V, \varphi _E)\) is unequivocally characterized by the map \(\varphi _E\) . Nevertheless, it is convenient to keep the notation with both maps.
Given two "transition systems" \((V,E,\mathit {Source},\mathit {Target},I_0,\mathit {Acc})\) , \(=(V^{\prime },E^{\prime },\mathit {Source}^{\prime },\mathit {Target}^{\prime },I_0^{\prime },\mathit {Acc}^{\prime })\) , a "morphism of transition systems" \(\varphi : \) is called
""Locally surjective"" if
For every \(v_0^{\prime }\in I_0^{\prime }\) there exists \(v_0\in I_0\) such that \(\varphi (v_0)=v_0^{\prime }\) .
For every \(v\in V\) and every \( e^{\prime }\in E^{\prime }\) such that \( \mathit {Source}^{\prime }(e^{\prime })=\varphi (v)\)
there exists \(e\in E \) such that \( \varphi (e)=e^{\prime } \) and \( \mathit {Source}(e)=v\) .
"Locally injective" if
For every \(v_0^{\prime }\in I_0^{\prime }\) , there is at most one \(v_0\in I_0\) such that \(\varphi (v_0)=v_0^{\prime }\) .
For every \( v\in V\) and every \( e^{\prime }\in E^{\prime } \) such that \( \mathit {Source}^{\prime }(e^{\prime })=\varphi (v) \)
if there are \( e_1,e_2\in E \) such that \( \varphi (e_i)=e^{\prime }\) and \( \mathit {Source}(e_i)=v\) , for \( i=1,2 \) , then \( e_1=e_2 \) .
"Locally bijective" if it is both "locally surjective" and "locally injective".
Equivalently, a "morphism of transition systems" \(\varphi \) is "locally surjective" (resp. injective) if the restriction of \(\varphi _E\) to \("\mathit {Out}"(v)\) is a surjection (resp. an injection) into \("\mathit {Out}"(\varphi (v))\) for every \(v\in V\) and the restriction of \(\varphi _V\) to \(I_0\) is a surjection (resp. an injection) into \(I_0^{\prime }\) .
If we only consider the "underlying graph" of a "transition system", without the "accepting condition", the notion of "locally bijective morphism" is equivalent to the usual notion of bisimulation. However, when considering the accepting condition, we only impose that the acceptance of each "run" must be preserved (and not that the colouring of each transition is preserved). This allows us to compare transition systems using different classes of accepting conditions.
We state two simple, but key facts.
If \(\varphi : \mathcal {T}\rightarrow \mathcal {T}^{\prime }\) is a "locally bijective morphism", then \(\varphi \) induces a bijection between the runs in \({\mathpzc {Run}}_{\mathcal {T}}\) and \({\mathpzc {Run}}_{\mathcal {T}^{\prime }}\) that preserves their acceptance.
If \(\varphi \) is a "locally surjective morphism", then it is onto the "accessible part" of \(\) . That is, for every "accessible" state \(v^{\prime }\in \) , there exists some state \(v\in such that \) V(v)=v'\(. In particular if every state of \)\( is "accessible", \)\( is surjective.\)
Intuitively, if we transform a "transition system" \(\mathcal {T}_1\) into \(\mathcal {T}_2\) “without adding non-determinism”, we will have a locally bijective morphism \(\varphi : \mathcal {T}_2 \rightarrow \mathcal {T}_1\) . In particular, if we consider the "composition" \(\mathcal {T}_2=\mathcal {B}\lhd \mathcal {T}_1\) of \(\mathcal {T}_1\) by some "deterministic automaton" \(\mathcal {B}\) , as defined in section REF , the projection over \(\mathcal {T}_1\) gives a "locally bijective morphism" from \(\mathcal {T}_2\) to \(\mathcal {T}_1\) .
""""
Let \(\mathcal {A}\) be the "Muller automaton" presented in example REF , and \(\mathcal {Z}_{\mathcal {F}_1}\) the "Zielonka tree automaton" for its Muller condition \(\mathcal {F}_1=\lbrace \lbrace a\rbrace ,\lbrace b\rbrace \rbrace \) as in figure REF . We show them in figure REF and their "composition" \(\mathcal {Z}_{\mathcal {F}_1}\lhd \mathcal {A}\) in figure REF . If we name the states of \(\mathcal {A}\) with the letters \(A\) and \(B\) , and those of \(\mathcal {Z}_{\mathcal {F}_1}\) with \(\alpha ,\beta \) , there is a locally bijective morphism \(\varphi : \mathcal {Z}_{\mathcal {F}_1}\lhd \mathcal {A}\rightarrow \mathcal {A}\) given by the projection on the first component
\( \varphi _V((X,y))=X \; \text{ for } X\in \lbrace A,B\rbrace ,\, y\in \lbrace \alpha ,\beta \rbrace \)
and \(\varphi _E\) associates to each edge \(e\in \mathit {Out}(X,y)\) labelled by \(a\in \lbrace 0,1\rbrace \) the only edge in \(\mathit {Out}(X)\) labelled with \(a\) .
<FIGURE><FIGURE>We know that \(\mathcal {Z}_{\mathcal {F}_1}\) is a minimal automaton recognizing the "Muller condition" \(\mathcal {F}_1\) (theorem REF ). However, the "composition" \(\mathcal {Z}_{\mathcal {F}_1} \lhd \mathcal {A}\) has 4 states, and in example REF (figure REF ) we have shown a parity automaton recognizing \(\mathcal {L}(\mathcal {A})\) with only 3 states. Moreover, there is a "locally bijective" morphism from this smaller parity automaton to \(\mathcal {A}\) (we only have to send the two states on the left to \(A\) and the state on the right to \(B\) ). In the next section we will show a transformation that produces the parity automaton with only 3 states starting from \(\mathcal {A}\) .
Morphisms of automata and games
Before presenting the optimal transformation of Muller transition systems, we will state some facts about "morphisms" in the particular case of "automata" and "games". When we speak about a "morphism" between two automata, we always refer implicitly to the morphism between the corresponding "labelled transition systems", as explained in "example REF ".
A "morphism" \(\varphi =(\varphi _V,\varphi _E)\) between two "deterministic automata" is always "locally bijective" and it is completely characterized by the map \(\varphi _V\) .
For each letter of the input alphabet and each state, there must be one and only one outgoing transition labelled with this letter.
Let \(\mathcal {A}=(Q,\Sigma , I_0, \Gamma , \delta , \mathit {Acc})\) , \(\mathcal {A}^{\prime }=(Q^{\prime },\Sigma , I_0^{\prime }, \Gamma , \delta ^{\prime }, \mathit {Acc}^{\prime })\) be two (possibly non-deterministic) "automata". If there is a "locally surjective morphism" \(\varphi : \mathcal {A}\rightarrow \mathcal {A}^{\prime }\) , then \("\mathcal {L}(\mathcal {A})"="\mathcal {L}(\mathcal {A}^{\prime })"\) .
Let \(u\in \Sigma ^\omega \) . If \(u\in \mathcal {L}(\mathcal {A})\) there is an accepting run, \(\varrho \) , over \(u\) in \(\mathcal {A}\) . By the definition of a "morphism of labelled transition systems", \(\varphi (\varrho )\) is also an accepting "run over \(u\) " in \(\mathcal {A}^{\prime }\) .
Conversely, if \(u\in \mathcal {L}(\mathcal {A}^{\prime })\) there is an accepting "run over \(u\) " \(\varrho ^{\prime }\) in \(\mathcal {A}^{\prime }\) . Since \(\varphi \) is locally surjective there is a run \(\varrho \) in \(\mathcal {A}\) , such that \(\varphi (\varrho )=\varrho ^{\prime }\) , and therefore \(\varrho \) is an accepting run over \(u\) .
The converse of the previous proposition does not hold: \("\mathcal {L}(\mathcal {A})"=\mathcal {L}(\mathcal {A}^{\prime })\) does not imply the existence of morphisms \(\varphi : \mathcal {A}\rightarrow \mathcal {A}^{\prime }\) or \(\varphi : \mathcal {A}^{\prime } \rightarrow \mathcal {A}\) , even if \(\mathcal {A}\) has minimal size among the Muller automata recognizing \(\mathcal {L}(\mathcal {A})\) .
If \(\mathcal {A}\) and \(\mathcal {A}^{\prime }\) are "non-deterministic automata" and \(\varphi :\mathcal {A}\rightarrow \mathcal {A}^{\prime }\) is a "locally bijective morphism", then \(\mathcal {A}\) and \(\mathcal {A}^{\prime }\) have to share some other important semantic properties. Two classes of automata that have been extensively studied are unambiguous and good for games automata. An automaton is ""unambiguous"" if for every input word \(w\in \Sigma ^\omega \) there is at most one accepting "run over" \(w\) . ""Good for games"" automata (GFG), first introduced by Henzinger and Piterman in [10]}, are automata that can resolve the non-determinism depending only on the prefix of the word read so far. These types of automata have many good properties and have been used in different contexts (for example in the model checking of LTL formulas [11]} or in the theory of cost functions [12]}). Unambiguous automata can recognize \(\omega \) -regular languages using a "Büchi" condition (see [13]}), and GFG automata, while recognizing the same languages as deterministic ones, can be exponentially smaller (see [14]}, [15]}).
We omit the proof of the next proposition, as it is a consequence of fact REF and of the argument from the proof of proposition REF .
Let \(\mathcal {A}\) and \(\mathcal {A}^{\prime }\) be two "non-deterministic automata". If \(\varphi :\mathcal {A}\rightarrow \mathcal {A}^{\prime }\) is a "locally bijective morphism", then
\(\mathcal {A}\) is unambiguous if and only if \(\mathcal {A}^{\prime }\) is unambiguous.
\(\mathcal {A}\) is GFG if and only if \(\mathcal {A}^{\prime }\) is GFG.
Having a "locally bijective morphism" between two games implies that the "winning regions" of the players are preserved.
Let \(\mathcal {G}=(V, E, \mathit {Source}, \mathit {Target}, v_0, \mathit {Acc}, l_V)\) and \(\mathcal {G}^{\prime }=(V^{\prime },E^{\prime }, \mathit {Source}^{\prime }, \mathit {Target}^{\prime }, v_0^{\prime }, \mathit {Acc}^{\prime }, l_V^{\prime })\) be two "games" such that there is a "locally bijective morphism" \(\varphi :\mathcal {G}\rightarrow \mathcal {G}^{\prime }\) . Let \(P\in \lbrace Eve, Adam\rbrace \) be a player in those games. Then, \(P\) wins \(\mathcal {G}\) if and only if she/he wins \(\mathcal {G}^{\prime }\) . Moreover, if \(\varphi \) is surjective, the "winning region" of \(P\) in \(\mathcal {G}^{\prime }\) is the image by \(\varphi \) of her/his winning region in \(\mathcal {G}\) , \("\mathcal {W}_P"(\mathcal {G}^{\prime })=\varphi ("\mathcal {W}_P"(\mathcal {G}))\) .
Let \(S_P: {\mathpzc {Run}}_{\mathcal {G}}\cap E^* \rightarrow E\) be a winning "strategy" for player \(P\) in \(\mathcal {G}\) . Then, it is easy to verify that the strategy \(S_P^{\prime }: {\mathpzc {Run}}_{\mathcal {G}^{\prime }}\cap E^{\prime *} \rightarrow E^{\prime }\) defined as
\( S_P^{\prime }(\varrho ^{\prime }) = \varphi _E ( S_P(\varphi ^{-1}(\varrho ^{\prime }))) \)
is a winning "strategy" for \(P\) in \(\mathcal {G}^{\prime }\) . (Remark that thanks to fact REF , the morphism \(\varphi \) induces a bijection over "runs", allowing us to use \(\varphi ^{-1}\) in this case).
Conversely, if \(S_P^{\prime }: {\mathpzc {Run}}_{\mathcal {G}^{\prime }}\cap E^{\prime *} \rightarrow E^{\prime }\) is a winning "strategy" for \(P\) in \(\mathcal {G}^{\prime }\) , then \( S_P(\varrho ) = \varphi _E^{-1} ( S_P^{\prime }(\varphi (\varrho ))) \)
is a winning "strategy" for \(P\) in \(\mathcal {G}\) . Here \( \varphi _E^{-1} (e^{\prime })\) is the only edge \(e\in E\) in \("\mathit {Out}"(\mathit {Target}("\mathit {Last}"(\varrho )))\) such that \(\varphi _E(e)=e^{\prime }\) .
The equality \("\mathcal {W}_P"(\mathcal {G}^{\prime })=\varphi ("\mathcal {W}_P"(\mathcal {G}))\) stems from the fact that if we choose a different initial vertex \(v_1\) in \(\mathcal {G}\) , then \(\varphi \) is a "locally bijective morphism" to the game \(\mathcal {G}^{\prime }\) with initial vertex \(\varphi (v_1)\) . Conversely, if we take a different initial vertex \(v_1^{\prime }\) in \(\mathcal {G}^{\prime }\) , since \(\varphi \) is surjective we can take a vertex \(v_1\in \varphi ^{-1}(v_1^{\prime })\) , and \(\varphi \) remains a locally bijective morphism between the resulting games.
The alternating cycle decomposition
Most transformations of "Muller" into "parity" "transition systems" are based on the "composition" by some automaton converting the Muller condition into a parity one. These transformations act on the totality of the system uniformly, regardless of the local structure of the system and the "acceptance condition".
The transformation we introduce in this section takes into account the interplay between the particular "acceptance condition" and the "transition system", inspired by the alternating chains introduced in [3]}.
In the following we will consider "Muller transition systems" with the Muller acceptance condition using edges as colours. We can always suppose this, since given a transition system \(\mathcal {T}\) with edges coloured by \(\gamma : E\rightarrow C\) and a Muller condition \(\mathcal {F}\subseteq \mathcal {P}(C)\) , the condition \(\mathcal {F}^{\prime }\subseteq \mathcal {P}(E)\) defined as \(A\in \mathcal {F}^{\prime } \,\Leftrightarrow \, \gamma (A)\in \mathcal {F}\) is an "equivalent condition over" \(\mathcal {T}\) . However, the size of the representation of the condition \(\mathcal {F}\) might change. Making this assumption corresponds to considering what are called explicit Muller conditions. In particular, solving Muller games with explicit Muller conditions is in \(\mathrm {PTIME}\) [1]}, while solving general Muller games is \(\mathrm {PSPACE}\) -complete [18]}.
Given a "transition system" \((V,E,\mathit {Source},\mathit {Target},I_0, \mathit {Acc})\) , a loop is a subset of edges \(l\subseteq E\) such that it exists \(v\in V\) and a finite "run" \(\varrho \in {\mathpzc {Run}}_{T,v}\) such that \("\mathit {First}"(\varrho )="\mathit {Last}"(\varrho )=v\) and \({\mathit {App}}(\varrho )=l\) . The set of "loops" of \( is denoted \) Loop(\(.For a "loop" \) lLoop(\( we write\)\( ""\mathit {States}""(l):= \lbrace v\in V \; : \; \exists e\in l, \; \mathit {Source}(e)=v \rbrace .\)\(\)
Observe that there is a natural partial order in the set \({\mathpzc {Loop}}(T)\) given by set inclusion.
If \(l\) is a "loop" in \({\mathpzc {Loop}}(\) , for every \(q\in {\mathit {States}}(l)\) there is a run \(\varrho \in {\mathpzc {Run}}_{q}\) such that \(\mathit {App}(\varrho )=l\) .
The maximal loops of \({\mathpzc {Loop}}(T)\) (for set inclusion) are disjoint and in one-to-one correspondence with the "strongly connected components" of \(T\) .
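As an illustration of this correspondence, here is a minimal sketch (not from the paper; the triple representation of edges and the function name are our own conventions) that recovers the maximal loops of a transition system as the edge sets of its strongly connected components:

```python
# Illustrative sketch: maximal loops = edge sets of the SCCs.
# Edges are triples (name, source, target).

def scc_edge_sets(vertices, edges):
    """Return the maximal loops: one frozenset of edge names per SCC
    containing at least one edge (trivial SCCs yield no loop).
    Recursive DFS: only intended for small examples."""
    # Kosaraju: order vertices by finish time, then sweep the reverse graph.
    adj = {v: [] for v in vertices}
    radj = {v: [] for v in vertices}
    for name, s, t in edges:
        adj[s].append(t)
        radj[t].append(s)

    order, seen = [], set()
    def dfs1(v):
        seen.add(v)
        for w in adj[v]:
            if w not in seen:
                dfs1(w)
        order.append(v)
    for v in vertices:
        if v not in seen:
            dfs1(v)

    comp = {}
    def dfs2(v, c):
        comp[v] = c
        for w in radj[v]:
            if w not in comp:
                dfs2(w, c)
    for v in reversed(order):
        if v not in comp:
            dfs2(v, v)

    loops = {}
    for name, s, t in edges:
        if comp[s] == comp[t]:  # the edge lies inside one SCC
            loops.setdefault(comp[s], set()).add(name)
    return [frozenset(l) for l in loops.values()]
```

For instance, on a system where \(q_0\) only leads into the component \(\lbrace q_1,q_2\rbrace \) , the single maximal loop consists of the edges internal to that component.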
[Alternating cycle decomposition]
Let \(T=(V,E,\mathit {Source},\mathit {Target},I_0, \mathcal {F})\) be a "Muller transition system" with "acceptance condition" given by \(\mathcal {F}\subseteq \mathcal {P}(E)\) . The alternating cycle decomposition (abbreviated ACD) of \(T\) , noted \({\mathcal {ACD}}(T)\) , is a family of "labelled trees" \((t_1,\nu _1),\dots ,(t_r,\nu _r)\) with nodes labelled by "loops" in \({\mathpzc {Loop}}(T)\) , \(\nu _i: t_i\rightarrow {\mathpzc {Loop}}(T)\) . We define it inductively as follows:
\begin{itemize}
\item Let \(\lbrace l_1,\dots , l_r\rbrace \) be the set of maximal loops of \({\mathpzc {Loop}}(T)\) . For each \(i\in \lbrace 1,\dots , r\rbrace \) we consider a "tree" \(t_i\) and define \(\nu _i(\varepsilon )=l_i\) .
\item Given an already defined node \(\tau \) of a tree \(t_i\) , we consider the maximal loops of the set \(\lbrace l\subseteq \nu _i(\tau ) \; : \; l\in {\mathpzc {Loop}}(T) \text{ and } l \in \mathcal {F}\; \Leftrightarrow \; \nu _i(\tau ) \notin \mathcal {F}\rbrace \) and for each of these loops \(l\) we add a child to \(\tau \) in \(t_i\) labelled by \(l\) .
\end{itemize}
For notational convenience we add a special "tree" \((t_0,\nu _0)\) with a single node \(\varepsilon \) labelled with the edges not appearing in any other tree of the forest, i.e., \(\nu _0(\varepsilon )=E \setminus \bigcup _{i=1}^{r}l_i\) (remark that this is not a "loop").
We define \(\mathit {States}(\nu _0(\varepsilon )):= V\setminus \bigcup _{i=1}^{r}\mathit {States}(l_i)\) (remark that this does not follow the general definition of \("\mathit {States}"()\) for loops).
We call the trees \(t_1,\dots , t_r\) the ""proper trees"" of the "alternating cycle decomposition" of \(T\) . Given a node \(\tau \) of \(t_i\) , we note \({\mathit {States}}_i(\tau ):={\mathit {States}}(\nu _i(\tau ))\) .
As for the "Zielonka tree", the "alternating cycle decomposition" of \( is not unique, since it depends on the order in which we introduce the children of each node. This will not affect the upcoming results, and we will refer to it as ``the^{\prime \prime } alternating cycle decomposition of \) .
For the rest of the section we fix a "Muller transition system" \(T=(V,E,\mathit {Source},\mathit {Target}, I_0, \mathcal {F})\) with the "alternating cycle decomposition" given by \((t_0,\nu _0), (t_1,\nu _1),\dots , (t_r,\nu _r)\) .
The "Zielonka tree" for a "Muller condition" \(\mathcal {F}\) over the set of colours \(C\) can be seen as a special case of this construction, for the automaton with a single state, input alphabet \(C\) , a transition for each letter in \(C\) and "acceptance condition" \(\mathcal {F}\) .
Each state and edge of \(T\) appears in exactly one of the "trees" of \({\mathcal {ACD}}(T)\) .
The ""index"" of a state \(q\in V\) (resp. of an edge \(e\in E\) ) in \({\mathcal {ACD}}(\) is the only number \(j\in \lbrace 0,1,\dots ,r\rbrace \) such that \(q\in "\mathit {States}"_j(\varepsilon )\) (resp. \(e \in \nu _j(\varepsilon )\) ).
For each state \(q\in V\) of "index" \(j\) we define the ""subtree associated to the state \(q\) "" as the "subtree" \(t_q\) of \(t_j\) consisting of the set of nodes \(\lbrace \tau \in t_j \; : \; q\in {\mathit {States}}_j(\tau ) \rbrace \) .
We refer to figures REF and REF for an example of \("t_q"\) .
For each "proper tree" \(t_i\) of \("{\mathcal {ACD}}" (\) we say that \(t_i\) is (ACD)even if \(\nu _i(\varepsilon )\in \mathcal {F}\) and that it is (ACD)odd if \(\nu _i(\varepsilon )\notin \mathcal {F}\) .
We say that the "alternating cycle decomposition" of \( is \emph {even} if all the trees of maximal "height" of \) ACD(\( are even; that it is \emph {odd} if all of them are odd, and that it is \emph {(ACD){ambiguous}} if there are even and odd trees of maximal "height".\)
For each \(\tau \in t_i\) , \(i=1,\dots ,r\) , we define the priority of \(\tau \) in \(t_i\) , written \(p_i(\tau )\) , as follows:
If \({\mathcal {ACD}}(T)\) is even or ambiguous:
If \(t_i\) is even (\(\nu _i(\varepsilon )\in \mathcal {F}\) ), then \(p_i(\tau ):={\mathit {Depth}}(\tau )=|\tau |\) .
If \(t_i\) is odd (\(\nu _i(\varepsilon )\notin \mathcal {F}\) ), then \(p_i(\tau ):={\mathit {Depth}}(\tau )+1=|\tau |+1\) .
If \({\mathcal {ACD}}(T)\) is odd:
If \(t_i\) is even (\(\nu _i(\varepsilon )\in \mathcal {F}\) ), then \(p_i(\tau ):={\mathit {Depth}}(\tau )+2=|\tau |+2\) .
If \(t_i\) is odd (\(\nu _i(\varepsilon )\notin \mathcal {F}\) ), then \(p_i(\tau ):={\mathit {Depth}}(\tau )+1=|\tau |+1\) .
For \(i=0\) , we define \(p_0(\varepsilon )=0\) if \({\mathcal {ACD}}(T)\) is even or ambiguous, and \(p_0(\varepsilon )=1\) if \({\mathcal {ACD}}(T)\) is odd.
The assignment of priorities to nodes produces a labelling of the levels of each tree. It will be used to determine the priorities needed by a parity "transition system" to simulate \(T\) . The distinction between the cases \({\mathcal {ACD}}(T)\) even or odd is added only to obtain the minimal number of priorities in every case.
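The case distinction amounts to a small depth-plus-offset computation; the following sketch (function name and boolean encoding are our own) makes the four cases explicit, with the root at depth 0:

```python
def node_priority(depth, tree_even, acd_odd):
    """Priority p_i(tau) of a node at the given depth (root = depth 0).

    tree_even: whether nu_i(epsilon) is in F (the tree t_i is even).
    acd_odd:   whether the whole ACD is odd (all maximal-height trees
               are odd); otherwise the ACD is even or ambiguous.
    """
    if tree_even:
        # Even trees start at priority 0, except when the ACD is odd:
        # then they start at 2, so only necessary priorities are used.
        return depth + (2 if acd_odd else 0)
    # Odd trees always start at priority 1.
    return depth + 1
```

Note that in every case the priority of the topmost relevant level has the parity of the tree, and priorities increase by one per level.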
In figure REF we represent a "transition system" \((V,E,\mathit {Source},\mathit {Target},q_0,\mathcal {F})\) with \(V=\lbrace q_0,q_1,q_2,q_3,q_4,q_5\rbrace \) , \(E=\lbrace a,b,\dots ,j,k\rbrace \) and using the "Muller condition"
\(\mathcal {F}=\lbrace \lbrace c,d,e \rbrace ,\lbrace e \rbrace ,\lbrace g,h,i \rbrace ,\lbrace l \rbrace ,\lbrace h,i,j,k \rbrace ,\lbrace j,k \rbrace \rbrace .\)
It has 2 strongly connected components (with vertices \(S_1=\lbrace q_1,q_2\rbrace , S_2=\lbrace q_3,q_4,q_5\rbrace \) ), and a vertex \(q_0\) that does not belong to any strongly connected component.
The "alternating cycle decomposition" of this transition system is shown in figure REF . It consists of two proper "trees", \(t_1\) and \(t_2\) , corresponding to the strongly connected components of \( and the tree \) t0\( that corresponds to the edges not appearing in the strongly connected components.\) We observe that \({\mathcal {ACD}}(\) is (ACD)odd (\(t_2\) is the highest tree, and it starts with a non-accepting "loop"). It is for this reason that we start labelling the levels of \(t_1\) from 2 (if we had assigned priorities \(0,1\) to the nodes of \(t_1\) we would have used 4 priorities, when only 3 are strictly necessary).
In figure REF we show the "subtree associated to" \(q_4\) .
<FIGURE><FIGURE><FIGURE>
The alternating cycle decomposition transformation
We proceed to show how to use the "alternating cycle decomposition" of a "Muller transition system" to obtain a "parity transition system". Let \((V,E,\mathit {Source},\mathit {Target},I_0, \mathcal {F})\) be a "Muller transition system" and \((t_0,\nu _0), (t_1, \nu _1),\dots , (t_r,\nu _r)\) , its "alternating cycle decomposition".
First, we adapt the definitions of \(\mathit {Supp}\) and \(\mathit {Nextbranch}\) to the setting with multiple trees.
For an edge \(e\in E\) such that \(\mathit {Target}(e)\) has "index" \(j\) , for \(i\in \lbrace 0,1,\dots ,r\rbrace \) and a branch \(\beta \) in some subtree of \(t_i\) , we define the ""support"" of \(e\) from \(\beta \) as:
\( {\mathit {Supp}}(\beta ,i,e)={\left\lbrace \begin{array}{ll}\text{The maximal node (for } {\sqsubseteq }\text{) } \tau \in \beta \text{ such that } e\in \nu _i(\tau ), & \text{if } i= j .\\[2mm]\text{The root } \varepsilon \text{ of } t_j, & \text{if } i\ne j.\end{array}\right.} \)
Intuitively, \({\mathit {Supp}}(\beta ,i,e)\) is the highest node we visit if we want to go from the bottom of the branch \(\beta \) to a node of the tree that contains \(e\) “in an optimal trajectory” (going up as little as possible). If we have to jump to another tree, we define \({\mathit {Supp}}(\beta ,i,e)\) as the root of the destination tree.
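Restricted to the case \(i=j\) , this lookup is just a scan of the branch from the leaf upwards; a sketch (tree nodes as dicts with a "label" edge set, as in our earlier illustrations, are an assumption):

```python
def supp_same_tree(branch, e):
    """Deepest node of a branch (listed root -> leaf) whose label
    contains the edge e. Returning None signals that e lives in
    another tree; the caller then jumps to the root of that tree."""
    for node in reversed(branch):  # climb from the leaf towards the root
        if e in node["label"]:
            return node
    return None
```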
Let \(i\in \lbrace 0,1,\dots ,r\rbrace \) , \(q\) be a state of "index" \(i\) , \(\beta \) be a branch of some "subtree" of \(t_i\) and \(\tau \in \beta \) be a node of \(t_i\) such that \(q\in {\mathit {States}}_i(\tau )\) . If \(\tau \) is not the deepest node of \(\beta \) , let \(\sigma _\beta \) be the unique child of \(\tau \) in \(t_i\) such that \(\sigma _\beta \in \beta \) . We define:
\( {\mathit {Nextchild}_{t_q}}(\beta ,\tau )={\left\lbrace \begin{array}{ll}\tau , \text{ if } \tau \text{ is a leaf in } {t_q}.\\[3mm]\parbox {8cm}{Smallest older sibling of \sigma _\beta in {t_q}, if \sigma _\beta is defined and there is any such older sibling.}\\[3mm]\text{Smallest child of } \tau \text{ in } {t_q} \text{ in any other case}.\end{array}\right.} \)
Let \(i\in \lbrace 0,1,\dots ,r\rbrace \) and \(\beta \) be a branch of some "subtree" of \(t_i\) . For a state \(q\) of "index" \(j\) and a node \(\tau \) such that \(q\in {\mathit {States}}_j(\tau )\) and such that \(\tau \in \beta \) if \(i=j\) , we define:
\( {\mathit {Nextbranch}_{t_q}}(\beta ,i,\tau )= {\left\lbrace \begin{array}{ll}\text{Leftmost branch in } {t_q} \text{ below } {\mathit {Nextchild}_{t_q}}(\beta ,\tau ), & \text{if } i= j .\\[3mm]\text{The leftmost branch in } {\mathit {Subtree}}_{t_q}(\tau ), & \text{if } i\ne j.\end{array}\right.} \)
[ACD-transformation]
Let \(T=(V,E,\mathit {Source},\mathit {Target},I_0, \mathcal {F})\) be a "Muller transition system" with "alternating cycle decomposition" \({\mathcal {ACD}}(T)= \lbrace (t_0,\nu _0),(t_1,\nu _1),\dots ,(t_r,\nu _r)\rbrace \) . We define its ""ACD-parity transition system"" (or ACD-transformation) \({\mathcal {P}_{\mathcal {ACD}(T)}}=(V_P,E_P,\mathit {Source}_P,\mathit {Target}_P,I_0^{\prime }, p:E_P\rightarrow \mathbb {N} )\) as follows:
\(V_P=\lbrace (q,i,\beta ) \; : \; q\in V \text{ of "index" } i \text{ and } \beta \in {\mathit {Branch}}({t_q}) \rbrace \) .
For each node \((q,i,\beta )\in V_P\) and each edge \(e\in {\mathit {Out}}(q)\) we define an edge \(e_{i,\beta }\) from \((q,i,\beta )\) . We set
\(\mathit {Source}_P(e_{i,\beta })=(q,i,\beta )\) , where \(q=\mathit {Source}(e)\) .
\(\mathit {Target}_P(e_{i,\beta })=(q^{\prime },k,{\mathit {Nextbranch}_{t_{q^{\prime }}}}(\beta ,i,\tau ))\) , where \(q^{\prime }=\mathit {Target}(e)\) , \(k\) is its "index" and \(\tau ={\mathit {Supp}}(\beta ,i,e)\) .
\(p(e_{i,\beta })=p_j({\mathit {Supp}}(\beta ,i,e))\) , where \(j\) is the "index" of \({\mathit {Supp}}(\beta ,i,e)\) .
\(I_0^{\prime }=\lbrace (q_0,i,\beta _0) \; : \; q_0\in I_0, \, i \text{ the index of } q_0\) and \(\beta _0\) the leftmost branch in \({t_{q_0}}\rbrace \) .
If \(T\) is labelled by \(l_V:V\rightarrow L_V\) , \(l_E:E\rightarrow L_E\) , we label \({\mathcal {P}_{\mathcal {ACD}(T)}}\) by \(l_V^{\prime }((q,i,\beta ))=l_V(q)\) and \(l_E^{\prime }(e_{i,\beta })=l_E(e)\) .
The set of states of \(\mathcal {P}_{\mathcal {ACD}(T)}\) is built as follows: for each state \(q\in V\) we consider the subtree of \({\mathcal {ACD}}(T)\) consisting of the nodes with \(q\) in their label, and we add a state for each branch of this subtree. Intuitively, to define transitions in the transition system \({\mathcal {P}_{\mathcal {ACD}(T)}}\) we move simultaneously in \(T\) and in \({\mathcal {ACD}}(T)\) . We start from \(q_0\in I_0\) and from the leftmost branch of \(t_{q_0}\) . When we take a transition \(e\) in \(T\) while being in a branch \(\beta \) , we climb the branch \(\beta \) searching for a node \(\tau \) with \(q^{\prime }=\mathit {Target}(e)\) and \(e\) in its label, and we produce the priority corresponding to the level reached. If no such node exists, we jump to the root of the tree corresponding to \(q^{\prime }\) . Then, we move to the next child of \(\tau \) to the right of \(\beta \) in the tree \(t_{q^{\prime }}\) , and we pick the leftmost branch under it in \(t_{q^{\prime }}\) . If we had jumped to the root of \(t_{q^{\prime }}\) from a different tree, we pick the leftmost branch of \(t_{q^{\prime }}\) .
The size of \({\mathcal {P}_{\mathcal {ACD}(T)}}\) is
\( |\mathcal {P}_{\mathcal {ACD}(T)}|=\sum \limits _{q\in V} |{\mathit {Branch}}({t_q})|. \)
The number of priorities used by \({\mathcal {P}_{\mathcal {ACD}(T)}}\) is the "height" of a maximal tree of \({\mathcal {ACD}}(T)\) if \({\mathcal {ACD}}(T)\) is even or odd, and the "height" of a maximal tree plus one if \({\mathcal {ACD}}(T)\) is ambiguous.
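With the dict-based trees of our earlier sketches, the size formula can be evaluated by counting, for each state \(q\) , the leaves of the subtree \(t_q\) . The helper below is hypothetical (not from the paper); `sources` maps each edge name to its source state:

```python
def branches_for_state(node, q, sources):
    """Number of branches of t_q: restrict the tree to the nodes whose
    label contains an edge leaving q, and count the leaves of that
    restricted subtree. Assumes q occurs in the root's label."""
    kids = [c for c in node["children"]
            if q in {sources[e] for e in c["label"]}]
    if not kids:  # leaf of t_q: exactly one branch ends here
        return 1
    return sum(branches_for_state(c, q, sources) for c in kids)
```

Summing this quantity over all states (and all trees of their indices) gives the number of states of the ACD-transformation.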
In figure REF we show the "ACD-parity transition system" \({\mathcal {P}_{\mathcal {ACD}(T)}}\) of the transition system of example REF (figure REF ). States are labelled with the corresponding state \(q_j\) in \(T\) , the tree of its "index" and a node \(\tau \in t_i\) that is a leaf in \(t_{q_j}\) (defining a branch of it). We have tagged the edges of \({\mathcal {P}_{\mathcal {ACD}(T)}}\) with the names of the edges of \(T\) (even if it is not an automaton). These indicate the image of the edges by the "morphism" \(\varphi : \mathcal {P}_{\mathcal {ACD}(T)}\rightarrow T\) , and make clear the bijection between "runs" in \(T\) and in \({\mathcal {P}_{\mathcal {ACD}(T)}}\) . In this example, we create one “copy” of states \(q_0,q_1\) and \(q_2\) , three “copies” of the state \(q_3\) and two “copies” of states \(q_4\) and \(q_5\) . The resulting "parity transition system" \({\mathcal {P}_{\mathcal {ACD}(T)}}\) has therefore 10 states.
<FIGURE>Let \(\mathcal {A}\) be the "Muller automaton" of example "REF ". Its "alternating cycle decomposition" has a single tree that coincides with the "Zielonka tree" of its "Muller acceptance condition" \(\mathcal {F}_1\) (shown in figure REF ). However, its "ACD-parity transition system" has only 3 states, fewer than the "composition" \(\mathcal {Z}_{\mathcal {F}_1} \lhd \mathcal {A}\) (figure REF ), as shown in figure REF .
<FIGURE>[Correctness]
Let \(T=(V, E, \mathit {Source}, \mathit {Target}, I_0, \mathcal {F})\) be a "Muller transition system" and \({\mathcal {P}_{\mathcal {ACD}(T)}}=(V_P, E_P, \mathit {Source}_P, \mathit {Target}_P, I_0^{\prime }, p:E_P\rightarrow \mathbb {N} )\) its "ACD-transition system". Then, there exists a "locally bijective morphism" \( \varphi : {\mathcal {P}_{\mathcal {ACD}(T)}} \rightarrow T\) . Moreover, if \(T\) is a "labelled transition system", then \(\varphi \) is a "morphism of labelled transition systems".
We define \(\varphi _V : V_P \rightarrow V\) by \(\varphi _V((q,i,\beta ))=q\) and \(\varphi _E : E_P \rightarrow E\) by \(\varphi _E(e_{i,\beta })=e\) .
It is clear that this map preserves edges, initial states and labels. It is also clear that it is "locally bijective", since we have defined one initial state in \({\mathcal {P}_{\mathcal {ACD}(T)}}\) for each initial state in \(T\) , and by definition the edges in \({\mathit {Out}}((q,i,\beta ))\) are in bijection with \({\mathit {Out}}(q)\) . It induces therefore a bijection between the runs of the transition systems (fact REF ). Let us see that a "run" \(\varrho \) in \(T\) is accepted if and only if \(\varphi ^{-1}(\varrho )\) is accepted in \({\mathcal {P}_{\mathcal {ACD}(T)}}\) . First, we remark that any infinite run \(\varrho \) of \(T\) will eventually stay in a "loop" \(l\in {\mathpzc {Loop}}(T)\) such that \(\mathit {Inf}(\varrho )=l\) , and therefore we will eventually only visit states corresponding to the tree \(t_i\) such that \(l\subseteq \nu _i(\varepsilon )\) in the "alternating cycle decomposition". Let \(p_{\min }\) be the smallest priority produced infinitely often in the run \(\varphi ^{-1}(\varrho )\) in \(\mathcal {P}_{\mathcal {ACD}(T)}\) . As in the proof of proposition REF , there is a unique node \(\tau _p\) in \(t_i\) visited infinitely often such that \(p_i(\tau _p)=p_{\min }\) . Moreover, the states visited infinitely often in \(\mathcal {P}_{\mathcal {ACD}(T)}\) correspond to branches below \(\tau _p\) , that is, they are of the form
\( (q,i,\beta )\) , with \(\beta \in {\mathit {Subtree}}_{t_q}(\tau _p), \text{ for } q\in {\mathit {States}}_i(\tau _p)\) .
We claim that \(\tau _p\) verifies:
\(l\subseteq \nu _i(\tau _p)\) .
\(l\nsubseteq \nu _i(\sigma )\) for every "child" \(\sigma \) of \(\tau _p\) .
By definition of \({\mathcal {ACD}}(T)\) this implies
\( l\in \mathcal {F}\; \Longleftrightarrow \; \nu _i(\tau _p)\in \mathcal {F}\; \Leftrightarrow \; p_{\min } \text{ is even.}\)
We show that \(l\subseteq \nu _i(\tau _p)\) . For every edge \(e\notin \nu _i(\tau _p)\) of "index" \(i\) and for every branch \(\beta \in {\mathit {Subtree}}_{t_q}(\tau _p), \text{ for } q\in {\mathit {States}}_i(\tau _p)\) , we have that \(\tau ^{\prime }={\mathit {Supp}}(\beta ,i,e)\) is a strict "ancestor" of \(\tau _p\) in \(t_i\) . Therefore, if \(l\) was not contained in \(\nu _i(\tau _p)\) we would produce infinitely often priorities strictly smaller than \(p_{\min }\) .
Finally, we show that \(l\nsubseteq \nu _i(\sigma )\) for every "child" \(\sigma \) of \(\tau _p\) . Since we reach \(\tau _p\) infinitely often, we take transitions \(e_{i,\beta }\) such that \(\tau _p={\mathit {Supp}}(\beta ,i,e)\) infinitely often. Let us reason by contradiction and let us suppose that there is some child \(\sigma \) of \(\tau _p\) such that \(l\subseteq \nu _i(\sigma )\) . Then for each edge \(e\in l\) , \(\mathit {Target}(e)\in {\mathit {States}}_i(\sigma )\) , and therefore \(\sigma \in t_q\) for all \(q\in {\mathit {States}}(l)\) and for each transition \(e_{i,\beta }\) such that \(\tau _p={\mathit {Supp}}(\beta ,i,e)\) , some branches passing through \(\sigma \) are considered as destinations. Eventually, we will go to some state \((q,i,\beta ^{\prime })\) , for some branch \(\beta ^{\prime }\in {\mathit {Subtree}}_{t_q}(\sigma )\) . But since \(l\subseteq \nu _i(\sigma )\) , then for every edge \(e\in l\) and branch \(\beta ^{\prime }\in {\mathit {Subtree}}_{t_q}(\sigma )\) it is verified that \({\mathit {Supp}}(\beta ^{\prime },i,e)\) is a "descendant" of \(\sigma \) , so we would not visit again \(\tau _p\) and all priorities produced infinitely often would be strictly greater than \(p_{\min }\) .
From the remarks at the end of section REF , we obtain:
If \(\mathcal {A}\) is a "Muller automaton" over \(\Sigma \) , the automaton \({\mathcal {P}_{\mathcal {ACD}(\mathcal {A})}}\) is a "parity automaton" recognizing \(\mathcal {L}(\mathcal {A})\) . Moreover,
\(\mathcal {A}\) is "deterministic" if and only if \(\mathcal {P}_{\mathcal {ACD}(\mathcal {A})}\) is deterministic.
\(\mathcal {A}\) is "unambiguous" if and only if \(\mathcal {P}_{\mathcal {ACD}(\mathcal {A})}\) is unambiguous.
\(\mathcal {A}\) is "GFG" if and only if \(\mathcal {P}_{\mathcal {ACD}(\mathcal {A})}\) is GFG.
If \(\mathcal {G}\) is a "Muller game", then \(\mathcal {P}_{\mathcal {ACD}(\mathcal {G})}\) is a "parity game" that has the same winner than \(\mathcal {G}\) .
The "winning region" of \(\mathcal {G}\) for a player \(P\in \lbrace Eve, Adam\rbrace \) is \({\mathcal {W}_P}(\mathcal {G})=\varphi ("\mathcal {W}_P"(\mathcal {P}_{\mathcal {ACD}(\mathcal {G})}))\) , being \(\varphi \) the morphism of the proof of proposition REF .
Optimality of the alternating cycle decomposition transformation
In this section we prove the strong optimality of the "alternating cycle decomposition transformation", both for number of priorities (proposition REF ) and for size (theorem REF ). We use the same ideas as for proving the optimality of the "Zielonka tree automaton" in section REF .
[Optimality of the number of priorities]
Let \( be a "Muller transition system" such that all its states are "accessible" and let \) PACD(\( be its "ACD-transition system". If \) P\( is another "parity transition system" such that there is a "locally bijective morphism" \) :P, then \(\mathcal {P}\) uses at least the same number of priorities than \({\mathcal {P}_{\mathcal {ACD}(}}\) .
We distinguish 3 cases depending on whether \({\mathcal {ACD}}(T)\) is even, odd or ambiguous.
We treat simultaneously the cases \({\mathcal {ACD}}(T)\) even and odd. In these cases, the number \(h\) of priorities used by \({\mathcal {P}_{\mathcal {ACD}(T)}}\) coincides with the maximal "height" of a tree in \({\mathcal {ACD}}(T)\) . Let \(t_i\) be a tree of maximal "height" \(h\) in \({\mathcal {ACD}}(T)\) , \(\beta =\lbrace \tau _1,\dots ,\tau _{h}\rbrace \in {\mathit {Branch}}(t_i)\) a branch of \(t_i\) of maximal length (ordered as \(\tau _1 {\sqsupseteq } \tau _2 {\sqsupseteq } \dots {\sqsupseteq } \tau _{h}=\varepsilon \) ) and \(l_j=\nu _i(\tau _j)\) , \(j=1,\dots , h\) . We fix \(q\in {\mathit {States}}_i(\tau _1)\) , where \(\tau _1\) is the leaf of \(\beta \) , and we write
\(\mathit {Loop}_q(T)=\lbrace w\in {\mathpzc {Run}}_{T,q}\cap E^* \; : \; {\mathit {First}}(w)={\mathit {Last}}(w)=q \rbrace ,\)
and for each \(j=1,\dots , h\) we choose \(w_j \in \mathit {Loop}_q(T)\) such that \({\mathit {App}}(w_j)=l_j\) . Let \(\eta ^{\prime }\) be the maximal priority appearing in \(\mathcal {P}\) . We show as in the proof of proposition REF that for every \(v\in \mathit {Loop}_q(T)\) , the "run" \(\varphi ^{-1}((w_1\dots w_k v)^\omega )\) must produce a priority smaller than or equal to \(\eta ^{\prime }-k+1\) . Taking \(k=h\) , the "run" \(\varphi ^{-1}((w_1\dots w_h)^\omega )\) produces a priority smaller than or equal to \(\eta ^{\prime }-h+1\) , and this priority is even if and only if \({\mathcal {ACD}}(T)\) is even. By lemma REF we can suppose that \(\mathcal {P}\) uses all priorities in \([\eta ^{\prime }-h+1, \eta ^{\prime }]\) . We conclude that \(\mathcal {P}\) uses at least \(h\) priorities, so at least as many as \({\mathcal {P}_{\mathcal {ACD}(T)}}\) .
In the case \({\mathcal {ACD}}(T)\) ambiguous, if \(h\) is the maximal "height" of a tree in \({\mathcal {ACD}}(T)\) , then \({\mathcal {P}_{\mathcal {ACD}(T)}}\) uses \(h+1\) priorities. We can repeat the previous argument with two different maximal branches of respective maximal even and odd trees. We conclude that \(\mathcal {P}\) uses at least the priorities in a range \([\mu ,\mu +h]\cup [\eta ,\eta +h]\) , with \(\mu \) even and \(\eta \) odd, so it uses at least \(h+1\) priorities.
A similar proof, or an application of the results from [7]} gives the following result:
If \(\mathcal {A}\) is a deterministic automaton, the accessible part of \({\mathcal {P}_{\mathcal {ACD}(\mathcal {A})}}\) uses the optimal number of priorities to recognize \(\mathcal {L}(\mathcal {A})\) .
Finally, we state and prove the optimality of \({\mathcal {P}_{\mathcal {ACD}(\mathcal {A})}}\) for size.
[Optimality of the number of states]
Let \(T\) be a (possibly "labelled") "Muller transition system" such that all its states are "accessible" and let \({\mathcal {P}_{\mathcal {ACD}(T)}}\) be its "ACD-transition system". If \(\mathcal {P}\) is another "parity transition system" such that there is a "locally bijective morphism" \(\varphi : \mathcal {P}\rightarrow T\) , then
\( |\mathcal {P}_{\mathcal {ACD}(T)}|\le |\mathcal {P}| \) .
Proof of theorem REF
We follow the same steps as for proving theorem REF . We will suppose that all states of the "transition systems" considered are "accessible".
Let \(T_1, T_2\) be "transition systems" such that there is a "morphism of transition systems" \(\varphi : T_1 \rightarrow T_2\) . Let \(l\in {\mathpzc {Loop}}(T_2)\) be a "loop" in \(T_2\) . An ""\(l\) -SCC"" of \(T_1\) (with respect to \(\varphi \) ) is a non-empty "strongly connected subgraph" \((V_l,E_l)\) of the subgraph \((\varphi _V^{-1}({\mathit {States}}(l)),\varphi _E^{-1}(l) )\) such that
\(\nonumber & \text{for every } q_1\in V_l \text{ and every } e_2\in "\mathit {Out}"(\varphi (q_1))\cap l\\ &\text{there is an edge } e_1\in \varphi ^{-1}(e_2)\cap "\mathit {Out}"(q_1) \text{ such that } e_1\in E_l.\)
That is, an \(l\) -SCC is a "strongly connected subgraph" of \(T_1\) in which all states and transitions correspond via \(\varphi \) to states and transitions appearing in the "loop" \(l\) . Moreover, given a "run" staying in \(l\) in \(T_2\) we can simulate it in the \(l\) -SCC of \(T_1\) (property (REF )).
Let \(T_1\) and \(T_2\) be two "transition systems" such that there is a "locally surjective" "morphism" \(\varphi : T_1 \rightarrow T_2\) . Let \(l\in "\mathpzc {Loop}"(T_2)\) and \(C_l=(V_l,E_l)\) be a non-empty "\(l\) -SCC" in \(T_1\) . Then, for every "loop" \(l^{\prime }\in "\mathpzc {Loop}"(T_2)\) such that \(l^{\prime }\subseteq l\) there is a non-empty "\(l^{\prime }\) -SCC" in \(C_l\) .
Let \((V^{\prime },E^{\prime })=(V_l,E_l)\cap (\varphi _V^{-1}({\mathit {States}}(l^{\prime })),\varphi _E^{-1}(l^{\prime }))\) . We first prove that \((V^{\prime },E^{\prime })\) is non-empty. Let \(q_1\in V_l \subseteq \varphi _V^{-1}({\mathit {States}}(l))\) . Let \(\varrho \in {\mathpzc {Run}}_{T_2,\varphi (q_1)}\) be a finite run in \(T_2\) from \(\varphi (q_1)\) , visiting only edges in \(l\) and ending in \(q_2\in {\mathit {States}}(l^{\prime })\) . From the local surjectivity, we can obtain a run in \(\varphi ^{-1}(\varrho )\) that will stay in \(C_l\) and that will end in a state in \(\varphi _V^{-1}({\mathit {States}}(l^{\prime }))\) . The subgraph \((V^{\prime },E^{\prime })\) clearly has property (REF ) (for \(l^{\prime }\) ).
We prove by induction on the size that any non-empty subgraph \((V^{\prime },E^{\prime })\) verifying the property (REF ) (for \(l^{\prime }\) ) admits an \(l^{\prime }\) -SCC. If \(|V^{\prime }|=1\) , then \((V^{\prime },E^{\prime })\) forms by itself a "strongly connected graph". If \(|V^{\prime }|>1\) and \((V^{\prime },E^{\prime })\) is not strongly connected, then there are vertices \(q,q^{\prime }\in V^{\prime }\) such that there is no path from \(q\) to \(q^{\prime }\) following edges in \(E^{\prime }\) . We let
\( V^{\prime }_q=\lbrace p\in V^{\prime } \; : \; \text{there is a path from } q \text{ to } p \text{ in } (V^{\prime },E^{\prime })\rbrace \; ; \; E^{\prime }_q=E^{\prime }\cap "\mathit {Out}"(V^{\prime }_q)\cap "\mathit {In}"(V^{\prime }_q) .\)
Since \(q^{\prime }\notin V^{\prime }_q\) , the size \(|V^{\prime }_q|\) is strictly smaller than \(|V^{\prime }|\) .
Also, the subgraph \((V^{\prime }_q,E^{\prime }_q)\) is non-empty since \(q\in V^{\prime }_q\) .
The property (REF ) holds from the definition of \((V^{\prime }_q,E^{\prime }_q)\) . We conclude by induction hypothesis.
Let \( be a "Muller transition system" with acceptance condition \) F\( and let \) P\( be a "parity transition system" such that there is a "locally bijective morphism" \) : P. Let \(t_i\) be a "proper tree" of \({\mathcal {ACD}}(\) and \(\tau ,\sigma _1,\sigma _2\in t_i\) nodes in \(t_i\) such that \(\sigma _1,\sigma _2\) are different "children" of \(\tau \) , and let \(l_1=\nu _i(\sigma _1)\) and \(l_2=\nu _i(\sigma _2)\) . If \(C_1\) and \(C_2\) are two "\(l_1 \) -SCC" and "\(l_2\) -SCC" in \(\mathcal {P}\) , respectively, then \(C_1\cap C_2= \emptyset \) .
Suppose there is a state \(q\in C_1\cap C_2\) . Since \(\varphi _V(q)\in {\mathit {States}}(l_1)\cap {\mathit {States}}(l_2)\) , and \(l_1, l_2\) are "loops", there are finite "runs" \(\varrho _1,\varrho _2 \in {\mathpzc {Run}}_{\varphi _V(q)}\) such that \({\mathit {App}}(\varrho _1)=l_1\) and \(\mathit {App}(\varrho _2)=l_2\) . We can “simulate” these runs in \(C_1\) and \(C_2\) thanks to property (REF ), producing runs \(\varphi ^{-1}(\varrho _1)\) and \(\varphi ^{-1}(\varrho _2)\) in \({\mathpzc {Run}}_{\mathcal {P},q}\) and arriving at \(q_1="\mathit {Last}"(\varphi ^{-1}(\varrho _1))\) and \(q_2=\mathit {Last}(\varphi ^{-1}(\varrho _2))\) . Since \(C_1, C_2\) are "\(l_1,l_2\) -SCC", there are finite runs \(w_1\in {\mathpzc {Run}}_{\mathcal {P},q_1}\) , \(w_2\in {\mathpzc {Run}}_{q_2}\) such that \(\mathit {Last}(w_1)=\mathit {Last}(w_2)=q\) , so the runs \(\varphi ^{-1}(\varrho _1)w_1\) and \(\varphi ^{-1}(\varrho _2)w_2\) start and end in \(q\) . We remark that in \(T\) the runs \(\varphi (\varphi ^{-1}(\varrho _1)w_1)=\varrho _1\varphi _E(w_1)\) and \(\varphi (\varphi ^{-1}(\varrho _2)w_2)=\varrho _2\varphi _E(w_2)\) start and end in \(\varphi _V(q)\) and visit, respectively, all the edges in \(l_1\) and \(l_2\) . From the definition of \({\mathcal {ACD}}(T)\) we have that \(l_1\in \mathcal {F}\; \Leftrightarrow \; l_2\in \mathcal {F}\; \Leftrightarrow \; l_1\cup l_2\notin \mathcal {F}\) . Since \(\varphi \) preserves the "acceptance condition", the minimal priority produced by \(\varphi ^{-1}(\varrho _1)w_1\) has the same parity as that of \(\varphi ^{-1}(\varrho _2)w_2\) , but concatenating both runs we must produce a minimal priority of the opposite parity, arriving at a contradiction.
Let \( be a "Muller transition system" and \) "PACD("\( its "ACD-parity transition system". For each tree \) ti\( of \) ACD(\(, each node \) ti\( and each state \) qStatesi()\( we write:\)\( \psi _{\tau ,i,q}=|{\mathit {Branch}}({\mathit {Subtree}}_{t_q}(\tau ))|=|\lbrace (q,i,\beta )\in "\mathcal {P}_{\mathcal {ACD}(}" \; : \; \beta \, \text{ passes through } \tau \rbrace |.\)\(\vspace{-8.53581pt}\)\(\Psi _{\tau ,i}=\sum \limits _{q\in {\mathit {States}}_i(\tau )}\psi _{\tau ,i,q}=|\lbrace (q,i,\beta )\in "\mathcal {P}_{\mathcal {ACD}(}" \; : \;q\in V \text{ of "index" } i \text{ and } \beta \, \text{ passes through } \tau \rbrace | .\)\(\)
If we consider the root of the trees in \({\mathcal {ACD}}(T)\) , then each \(\Psi _{\varepsilon ,i}\) is the number of states in \(\mathcal {P}_{\mathcal {ACD}(T)}\) associated to this tree, i.e., \(\Psi _{\varepsilon ,i}=|\lbrace (q,i,\beta )\in \mathcal {P}_{\mathcal {ACD}(T)} \; : \; q\in V,\; \beta \in {\mathit {Branch}}(t_q)\rbrace |\) . Therefore
\( |\mathcal {P}_{\mathcal {ACD}(T)}|=\sum \limits _{i=0}^{r}\Psi _{\varepsilon ,i} .\)
[Proof of theorem REF]
Let \(T=(V,E,\mathit {Source},\mathit {Target},I_0,\mathcal {F})\) be a "Muller transition system", \(\mathcal {P}_{\mathcal {ACD}(T)}\) the "ACD-parity transition system" of \(T\) and \(\mathcal {P}=(V^{\prime },E^{\prime },\mathit {Source}^{\prime },\mathit {Target}^{\prime },I_0^{\prime },p^{\prime }:E^{\prime }\rightarrow \mathbb {N} )\) a parity transition system such that there is a "locally bijective morphism" \(\varphi : \mathcal {P}\rightarrow T\) .
First of all, we construct two modified transition systems \(\widetilde{T}=(V,\widetilde{E},\widetilde{\mathit {Source}},\widetilde{\mathit {Target}},I_0,\widetilde{\mathcal {F}})\) and \(\widetilde{\mathcal {P}}=(V^{\prime },\widetilde{E^{\prime }},\widetilde{\mathit {Source}}^{\prime },\widetilde{\mathit {Target}}^{\prime }, I_0^{\prime }, \widetilde{p^{\prime }}:\widetilde{E^{\prime }}\rightarrow \mathbb {N} )\) such that:
\begin{enumerate}
\item Each vertex of \(V\) belongs to a "strongly connected component".
\item All leaves \(\tau \in t_i\) verify \(|{\mathit {States}}_i(\tau )|=1\) , for every \(t_i\in {\mathcal {ACD}}(\widetilde{T})\) .
\item Nodes \(\tau \in t_i\) verify \({\mathit {States}}_i(\tau )=\bigcup _{\sigma \in \mathit {Children}(\tau )}\mathit {States}_i(\sigma )\) , for every \(t_i\in {\mathcal {ACD}}(\widetilde{T})\) .
\item There is a "locally bijective morphism" \(\widetilde{\varphi }: \widetilde{\mathcal {P}} \rightarrow \widetilde{T}\) .
\item \(|\mathcal {P}_{\mathcal {ACD}(\widetilde{T})}|\le |\widetilde{\mathcal {P}}| \; \Rightarrow \; |\mathcal {P}_{\mathcal {ACD}(T)}|\le |\mathcal {P}|\) .
\end{enumerate}
We define the transition system \(\widetilde{T}\) by adding for each \(q\in V\) two new edges, \(e_{q,1}, e_{q,2}\) , with \(\widetilde{\mathit {Source}}(e_{q,j})=\widetilde{\mathit {Target}}(e_{q,j})=q\) , for \(j=1,2\) . The modified "acceptance condition" \(\widetilde{\mathcal {F}}\) is given as follows: let \(C\subseteq \widetilde{E}\) .
\begin{itemize}
\item If \(C\cap E\ne \emptyset \) , then \(C\in \widetilde{\mathcal {F}} \; \Leftrightarrow \; C\cap E \in \mathcal {F}\) (the occurrence of edges \(e_{q,j}\) does not change the "acceptance condition").
\item If \(C\cap E = \emptyset \) and there are edges of the form \(e_{q,1}\) in \(C\) , for some \(q\in V\) , then \(C\in \widetilde{\mathcal {F}}\) . If all edges of \(C\) are of the form \(e_{q,2}\) , then \(C\notin \widetilde{\mathcal {F}}\) .
\end{itemize}
It is easy to verify that the "transition system" \(\widetilde{T}\) and \({\mathcal {ACD}}(\widetilde{T})\) verify conditions 1, 2 and 3. We perform equivalent operations in \(\mathcal {P}\) , obtaining \(\widetilde{\mathcal {P}}\) : we add a pair of edges \(e_{q,1}, e_{q,2}\) for each vertex in \(\mathcal {P}\) , and we assign them priorities \(\widetilde{p}(e_{q,1})=\eta +\epsilon \) and \(\widetilde{p}(e_{q,2})=\eta +\epsilon +1\) , where \(\eta \) is the maximum of the priorities in \(\mathcal {P}\) and \(\epsilon =0\) if \(\eta \) is even, and \(\epsilon =1\) if \(\eta \) is odd.
We extend the "morphism" \(\varphi \) to \(\widetilde{\varphi }: \widetilde{\mathcal {P}} \rightarrow \widetilde{T}\) conserving the "local bijectivity" by setting \(\widetilde{\varphi }_E(e_{q,j})=e_{\varphi (q),j}\) for \(j=1,2\) . Finally, it is not difficult to verify that the underlying graphs of \(\mathcal {P}_{\mathcal {ACD}(\widetilde{T})}\) and \(\widetilde{\mathcal {P}_{\mathcal {ACD}(T)}}\) are equal (the only differences are the priorities associated to the edges \(e_{q,j}\) ), so in particular \(|\mathcal {P}_{\mathcal {ACD}(\widetilde{T})}|=|\widetilde{\mathcal {P}_{\mathcal {ACD}(T)}}|=|\mathcal {P}_{\mathcal {ACD}(T)}|\) . Consequently, \(|\mathcal {P}_{\mathcal {ACD}(\widetilde{T})}|\le |\widetilde{\mathcal {P}}|\) implies \(|\mathcal {P}_{\mathcal {ACD}(T)}|\le |\widetilde{\mathcal {P}}|=|\mathcal {P}|\) .
Therefore, it suffices to prove the theorem for the modified systems \(\widetilde{T}\) and \(\widetilde{\mathcal {P}}\) . From now on, we take \(T\) verifying the conditions 1, 2 and 3 above. In particular, all trees are "proper trees" in \({\mathcal {ACD}}(T)\) . It also holds that for each \(q\in V\) and \(\tau \in t_i\) that is not a leaf, \(\psi _{\tau ,i,q}=\sum \limits _{\sigma \in \mathit {Children}(\tau )}\psi _{\sigma ,i,q}\) .
Therefore, for each \(\tau \in t_i\) that is not a leaf, \(\Psi _{\tau ,i}=\sum \limits _{\sigma \in \mathit {Children}(\tau )}\Psi _{\sigma ,i}\) , and for each leaf \(\sigma \in t_i\) we have \(\Psi _{\sigma ,i}=1\) . Vertices of \(V^{\prime }\) are partitioned into the equivalence classes of the preimages by \(\varphi \) of the roots of the trees \(\lbrace t_1,\dots ,t_r\rbrace \) of \({\mathcal {ACD}}(T)\) :
\( V^{\prime }= \bigcup \limits _{i=1}^r\varphi _V^{-1}( {\mathit {States}}_i(\varepsilon )) \quad \text{ and } \quad \varphi _V^{-1}( {\mathit {States}}_i(\varepsilon )) \cap \varphi _V^{-1}( {\mathit {States}}_j(\varepsilon ))=\emptyset \text{ for } i\ne j .\)
\begin{claim*}For each \(i=1,\dots ,r\) and each \(\tau \in t_i\) , if \(C_\tau \) is a non-empty "\(\nu _i(\tau )\) -SCC", then \(|C_\tau |\ge \Psi _{\tau ,i}\) .\end{claim*}
Let us suppose this claim holds. In particular \((\varphi _V^{-1}( {\mathit {States}}_i(\varepsilon )),\varphi _E^{-1}( \nu _i(\varepsilon )))\) verifies the property (REF ) from definition REF , so from the proof of lemma REF we deduce that it contains a \(\nu _i(\varepsilon )\) -SCC and therefore \(|\varphi _V^{-1}( {\mathit {States}}_i(\varepsilon ))| \ge \Psi _{\varepsilon ,i}\) , so
\(|\mathcal {P}|=\sum \limits _{i=1}^r |\varphi _V^{-1}( {\mathit {States}}_i(\varepsilon ))|\ge \sum \limits _{i=1}^r \Psi _{\varepsilon ,i}=|"\mathcal {P}_{\mathcal {ACD}(}"|,\)
concluding the proof.
[Proof of the claim]
Let \(C_\tau \) be a "\(\nu _i(\tau )\) -SCC". Let us prove \(|C_\tau |\ge \Psi _{\tau ,i}\) by induction on the "height of the node" \(\tau \) . If \(\tau \) is a leaf (in particular if its height is 1), \(\Psi _{\tau ,i}=1\) and the claim is clear.
If \(\tau \) is not a leaf and has height \(h>1\) , then it has children \(\sigma _1,\dots , \sigma _k\) , all of them of height \(h-1\) . Thanks to lemmas REF and REF , for \(j=1,\dots ,k\) there exist pairwise disjoint "\(\nu _i(\sigma _j)\) -SCCs" included in \(C_\tau \) , named \(C_1,\dots ,C_k\) , so by the induction hypothesis
\( |C_\tau | \ge \sum \limits _{j=1}^k |C_j| \ge \sum \limits _{j=1}^k \Psi _{\sigma _j,i}= \Psi _{\tau ,i}. \)
From the hypothesis of theorem REF we cannot deduce that there is a "morphism" from \(\mathcal {P}\) to \("\mathcal {P}_{\mathcal {ACD}(\mathcal {T})}"\) or vice-versa. To produce a counter-example it is enough to remember the “non-determinism” in the construction of \(\mathcal {P}_{\mathcal {ACD}(\mathcal {T})}\) . Two different orderings of the nodes of the trees of \({\mathcal {ACD}}(\mathcal {T})\) will produce two incomparable, but minimal in size, parity transition systems that admit a "locally bijective morphism" to \(\mathcal {T}\) .
However, we can prove the following result:
If \(\varphi _1: "\mathcal {P}_{\mathcal {ACD}(\mathcal {T})}" \rightarrow \mathcal {T}\) is the "locally bijective morphism" described in the proof of proposition \ref {Prop_Correctness-ACD}, then for every state \(q\) in \(\mathcal {T}\) of "index" \(i\) :
\( |\varphi _1^{-1}(q)|=\psi _{\varepsilon ,i,q}\le |\varphi ^{-1}(q)| \;, \; \text{ for every "locally bijective morphism" } \varphi : \mathcal {P}\rightarrow \mathcal {T}. \)
It is enough to remark that if \(q \in {\mathit {States}}_i(\tau )\) , then any "\(\nu _i(\tau )\) -SCC" \(C_\tau \) of \(\mathcal {P}\) will contain some state in \(\varphi ^{-1}(q)\) . We prove by induction as in the proof of the claim that \(\psi _{\tau ,i,q} \le |C_\tau \cap \varphi ^{-1}(q)|\) .
Applications
Determinisation of Büchi automata
In many applications, such as the synthesis of reactive systems from \(\mathit {LTL}\) formulas, we need "deterministic" automata. For this reason, the determinisation of automata is usually a crucial step. Since McNaughton showed in [20]} that Büchi automata can be transformed into deterministic Muller automata recognizing the same language, much effort has been put into finding an efficient way of performing this transformation. The first efficient solution was proposed by Safra in [21]}, producing a deterministic automaton using a "Rabin condition". Due to the many advantages of "parity conditions" (simplicity, easy complementation of automata, existence of memoryless strategies for games, closure under union and intersection...), determinisation constructions towards parity automata have been proposed too. In [22]}, Piterman provides a construction producing a parity automaton that in addition improves the state-complexity of Safra's construction. In [23]}, Schewe breaks down Piterman's construction into two steps: the first one goes from a non-deterministic Büchi automaton \(\mathcal {B}\) to a "Rabin automaton" (\("\mathcal {R}_\mathcal {B}"\) ), and the second one gives Piterman's parity automaton (\("\mathcal {P}_\mathcal {B}"\) ).
In this section we prove that there is a "locally bijective morphism" from \("\mathcal {P}_\mathcal {B}"\) to \("\mathcal {R}_\mathcal {B}"\) , and therefore we would obtain a smaller parity automaton applying the "ACD-transformation" in the second step. We provide an example (example REF ) in which the "ACD-transformation" provides a strictly better parity automaton.
From non-deterministic Büchi to deterministic Rabin automata
In [23]}, Schewe presents a construction of a deterministic "Rabin automaton" \(""\mathcal {R}_\mathcal {B}""\) from a non-deterministic "Büchi automaton" \(\mathcal {B}\) . The set of states of the automaton \(\mathcal {R}_\mathcal {B}\) is formed of what he calls ""history trees"". The number of history trees for a Büchi automaton of size \(n\) is given by the function \(\mathit {hist}(n)\) , that is shown to be in \(o((1.65n)^n)\) in [23]}. This construction is presented starting from a state-labelled Büchi automaton. A construction starting from a transition-labelled Büchi automaton can be found in [26]}. In [27]}, Colcombet and Zdanowski proved the worst-case optimality of the construction.
[[23]}]
Given a non-deterministic "Büchi automaton" \(\mathcal {B}\) with \(n\) states, there is an effective construction of a deterministic "Rabin automaton" \(\mathcal {R}_\mathcal {B}\) with \(\mathit {hist}(n)\) states and using \(2^{n-1}\) Rabin pairs that recognizes the language \("\mathcal {L}(\mathcal {B})"\) .
[[27]}]
For every \(n\in \mathbb {N} \) there exists a non-deterministic Büchi automaton \(B_n\) of size \(n\) such that every deterministic Rabin automaton recognizing \(\mathcal {L}(\mathcal {B}_n)\) has at least \(\mathit {hist}(n)\) states.
From non-deterministic Büchi to deterministic parity automata
In order to build a deterministic "parity automaton" \(""\mathcal {P}_\mathcal {B}""\) that recognizes the language of a given "Büchi automaton" \(\mathcal {B}\) , Schewe transforms the automaton \("\mathcal {R}_\mathcal {B}"\) into a parity one using what he calls a later introduction record (LIR). The LIR construction can be seen as adding an ordering (satisfying some restrictions) to the nodes of the "history trees". States of \(\mathcal {P}_\mathcal {B}\) are therefore pairs consisting of a history tree and a LIR. In this way we obtain a parity automaton similar to the one produced by Piterman's determinisation procedure [22]}.
The worst-case optimality of this construction was proved in [31]}, [26]}, generalising the methods of [27]}.
[[23]}]
Given a non-deterministic "Büchi automaton" \(\mathcal {B}\) with \(n\) states, there is an effective construction of a deterministic "parity automaton" \("\mathcal {P}_\mathcal {B}"\) with \(O(n!(n-1)!)\) states and using \(2n\) priorities that recognizes the language \("\mathcal {L}(\mathcal {B})"\) .
[[31]}, [26]}]
For every \(n\in \mathbb {N} \) there exists a non-deterministic Büchi automaton \(\mathcal {B}_n\) of size \(n\) such that \("\mathcal {P}_{\mathcal {B}_n}"\) has fewer than \(1.5\) times as many states as a minimal deterministic parity automaton recognizing \(\mathcal {L}(\mathcal {B}_n)\) .
A locally bijective morphism from \(\mathcal {P}_\mathcal {B}\) to \(\mathcal {R}_\mathcal {B}\)
Given a "Büchi automaton" \(\mathcal {B}\) and its determinisations to Rabin and parity automata \("\mathcal {R}_\mathcal {B}"\) and \("\mathcal {P}_\mathcal {B}"\) , there is a "locally bijective morphism" \(\varphi : \mathcal {P}_\mathcal {B}\rightarrow \mathcal {R}_\mathcal {B}\) .
Observing the construction of \(\mathcal {R}_\mathcal {B}\) and \(\mathcal {P}_\mathcal {B}\) in [23]}, we see that the states of \(\mathcal {P}_\mathcal {B}\) are of the form \((T,\chi )\) with \(T\) a state of \(\mathcal {R}_\mathcal {B}\) (a "history tree"), and \(\chi : T \rightarrow \lbrace 1,\dots ,|\mathcal {B}|\rbrace \) a LIR (which can be seen as an ordering of the nodes of \(T\) ).
It is easy to verify that the mapping \(\varphi _V((T,\chi ))=T\) defines a morphism \(\varphi : \mathcal {P}_\mathcal {B}\rightarrow \mathcal {R}_\mathcal {B}\) (from fact REF there is only one possible definition of \(\varphi _E\) ). Since the automata are deterministic, \(\varphi \) is a "locally bijective morphism".
Let \(\mathcal {B}\) be a "Büchi automaton" and \("\mathcal {R}_\mathcal {B}"\) , \("\mathcal {P}_\mathcal {B}"\) the deterministic Rabin and parity automata obtained by applying the Piterman-Schewe construction to \(\mathcal {B}\) . Then, the parity automaton \(\mathcal {P}_{\mathcal {ACD}(\mathcal {R}_B)}\) verifies
\( |\mathcal {P}_{\mathcal {ACD}(\mathcal {R}_B)}| \le |\mathcal {P}_\mathcal {B}| \)
and \("\mathcal {P}_{\mathcal {ACD}(\mathcal {R}_B)}"\) uses a smaller number of priorities than \("\mathcal {P}_\mathcal {B}"\) .
It is a direct consequence of propositions REF , REF and theorem REF .
Furthermore, after proposition REF , \("\mathcal {P}_{\mathcal {ACD}(\mathcal {R}_B)}"\) uses the optimal number of priorities to recognize \("\mathcal {L}(\mathcal {B})"\) , and we directly obtain this information from the "alternating cycle decomposition" of \(\mathcal {R}_\mathcal {B}\) , \({\mathcal {ACD}}(\mathcal {R}_\mathcal {B})\) .
In the "example REF " we show a case in which \(|\mathcal {P}_{\mathcal {ACD}(\mathcal {R}_B)}| < |\mathcal {P}_\mathcal {B}|\) and for which the gain in the number of priorities is clear.
In [27]} and [26]}, the lower bounds for the determinisation of "Büchi automata" to Rabin and parity automata were shown using the family of ""full Büchi automata"" \(\lbrace \mathcal {B}_n\rbrace _{n\in \mathbb {N} }\) , \(|\mathcal {B}_n|=n\) . The automaton \(\mathcal {B}_n\) can simulate any other Büchi automaton of the same size. For these automata, the constructions \(\mathcal {P}_{\mathcal {B}_n}\) and \(\mathcal {P}_{\mathcal {ACD}(\mathcal {R}_{\mathcal {B}_n})}\) coincide.
We present a non-deterministic Büchi automaton \(\mathcal {B}\) such that the "ACD-parity automaton" of \("\mathcal {R}_\mathcal {B}"\) has strictly less states and uses strictly less priorities than \("\mathcal {P}_\mathcal {B}"\) .
In figure REF we show the automaton \(\mathcal {B}\) over the alphabet \(\Sigma =\lbrace a,b,c\rbrace \) . Accepting transitions for the "Büchi condition" are represented with a black dot on them. An accessible "strongly connected component" \(\mathcal {R}_\mathcal {B}^{\prime }\) of the determinisation to a "Rabin automaton" \("\mathcal {R}_\mathcal {B}"\) is shown in figure REF . It has 2 states that are "history trees" (as defined in [23]}). There is a "Rabin pair" \((E_\tau ,F_\tau )\) for each node appearing in some "history tree" (four in total), and these are represented by an array with four positions. We assign to each transition and each position \(\tau \) in the array one of the symbols \(\checkmark \) , \(\mathbf {X}\) or \(\bullet \) , depending on whether the transition belongs to \(E_\tau \) , to \(F_\tau \) , or to neither of them, respectively (we can always suppose \(E_\tau \cap F_\tau = \emptyset \) ).
In figure REF there is the "alternating cycle decomposition" corresponding to \(\mathcal {R}_\mathcal {B}^{\prime }\) . We observe that the tree of \({\mathcal {ACD}}(\mathcal {R}_\mathcal {B}^{\prime })\) has a single branch of height 3.
That is, the Rabin condition over \(\mathcal {R}_\mathcal {B}^{\prime }\) is already a "\([1,3]\) -parity condition" and \(\mathcal {P}_{\mathcal {ACD}(\mathcal {R}_\mathcal {B}^{\prime })}=\mathcal {R}_\mathcal {B}^{\prime }\) . In particular it has 2 states, and uses priorities in \([1,3]\) .
On the other hand, in figure REF we show the automaton \(\mathcal {P}_\mathcal {B}^{\prime }\) , that has 3 states and uses priorities in \([3,7]\) . The whole automata \(\mathcal {R}_\mathcal {B}\) and \(\mathcal {P}_\mathcal {B}\) are too big to be pictured in these pages, but the three states shown in figure REF are indeed accessible from the initial state of \(\mathcal {P}_\mathcal {B}\) .
<FIGURE><FIGURE><FIGURE><FIGURE>
On relabelling of transition systems by acceptance conditions
In this section we use the information given by the "alternating cycle decomposition" to provide characterisations of "transition systems" that can be labelled with "parity", "Rabin", "Streett" or \("\mathit {Weak}_k"\) conditions, generalising the results of [4]}.
As a consequence, these yield simple proofs of two results about the possibility to define different classes of acceptance conditions in a deterministic automaton. Theorem REF , first proven in [42]}, asserts that if we can define a Rabin and a Streett condition on top of an underlying automaton \(\mathcal {A}\) such that it recognizes the same language \(L\) with both conditions, then we can define a parity condition in \(\mathcal {A}\) recognizing \(L\) too. Theorem REF states that if we can define Büchi and co-Büchi conditions on top of an automaton \(\mathcal {A}\) recognizing the language \(L\) , then we can define a \("\mathit {Weak}"\) condition over \(\mathcal {A}\) such that it recognizes \(L\) .
First, we extend the definition REF of section REF to the "alternating cycle decomposition".
Given a Muller transition system \(\mathcal {T}=(V,E,\mathit {Source},\mathit {Target},I_0,\mathcal {F})\) , we say that its "alternating cycle decomposition" \({\mathcal {ACD}}(\mathcal {T})\) is a
""Rabin ACD"" if for every state \(q\in V\) , the tree \("t_q"\) has "Rabin shape".
""Streett ACD"" if for every state \(q\in V\) , the tree \("t_q"\) has "Streett shape".
""parity ACD"" if for every state \(q\in V\) , the tree \("t_q"\) has "parity shape".
""\([1,\eta ]\) -parity ACD"" (resp. \([0,\eta -1]\) -parity ACD) if it is a parity ACD, every tree has "height" at most \(\eta \) and trees of height \(\eta \) are (ACD)odd (resp. (ACD)even).
""Büchi ACD"" if it is a \([0,1]\) -parity ACD.
""co-Büchi ACD"" if it is a \([1,2]\) -parity ACD.
""\(\mathit {Weak}_k\) ACD"" if it is a parity ACD and every tree \((t_i,\nu _i) \in {\mathcal {ACD}}(\mathcal {T})\) has "height" at most \(k\) .
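The shape conditions above can be checked mechanically on each local tree \("t_q"\) . As suggested by the proofs below, we read "Rabin shape" as: every "round" (even-priority) node has at most one child; "Streett shape" as the dual condition on square (odd) nodes; and "parity shape" as both at once, i.e. the tree is a single branch. The following Python sketch is our own illustration of these checks (the `Node` encoding, with a boolean flag for round nodes, is an assumption, not the paper's notation):

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Node:
    """Node of a local subtree t_q; `round_` is True for even-priority ("round") nodes."""
    round_: bool
    children: List["Node"] = field(default_factory=list)

def rabin_shape(t: Node) -> bool:
    # Rabin shape: round nodes may have at most one child.
    ok = (not t.round_) or len(t.children) <= 1
    return ok and all(rabin_shape(c) for c in t.children)

def streett_shape(t: Node) -> bool:
    # Streett shape: square (odd-priority) nodes may have at most one child.
    ok = t.round_ or len(t.children) <= 1
    return ok and all(streett_shape(c) for c in t.children)

def parity_shape(t: Node) -> bool:
    # Parity shape = Rabin shape and Streett shape: the tree is a single branch.
    return rabin_shape(t) and streett_shape(t)
```

For instance, a round root with two square children has Streett shape but not Rabin shape, matching the first item of the proposition below: parity shape is exactly the conjunction of the two.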
The next proposition follows directly from the definitions.
Let \(\mathcal {T}\) be a Muller transition system. Then:
\({\mathcal {ACD}}(\mathcal {T})\) is a "parity ACD" if and only if it is a "Rabin ACD" and a "Streett ACD".
\({\mathcal {ACD}}(\mathcal {T})\) is a "\(\mathit {Weak}_k\) ACD" if and only if it is a "\([0,k]\) -parity ACD" and a "\([1,k+1]\) -parity ACD".
Let \(\mathcal {T}=(V,E,\mathit {Source},\mathit {Target},I_0,\mathcal {F})\) be a Muller transition system. The following conditions are equivalent:
We can define a "Rabin condition" over \(\mathcal {T}\) that is "equivalent to" \(\mathcal {F}\) over \(\mathcal {T}\) .
For every pair of loops \(l_1,l_2\in {\mathpzc {Loop}}(\mathcal {T})\) , if \(l_1\notin \mathcal {F}\) and \(l_2\notin \mathcal {F}\) , then \(l_1\cup l_2 \notin \mathcal {F}\) .
\({\mathcal {ACD}}(\mathcal {T})\) is a "Rabin ACD".
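Characterisation (2) can be tested directly on an explicit Muller condition. Below is a hedged Python sketch (the encoding is our own, not the paper's): we represent the loop family as a set of frozensets of edges, so that the union of two loops is itself a loop exactly when it belongs to that set, and we check that no union of two rejecting loops yields an accepting loop.

```python
from itertools import combinations

def admits_rabin(loops, accepting):
    """Condition (2): the union of two rejecting loops is rejecting.
    `loops`: the loop family as a set of frozensets of edges;
    `accepting`: the subset of `loops` belonging to the Muller condition."""
    rejecting = loops - accepting
    for l1, l2 in combinations(rejecting, 2):
        union = frozenset(l1 | l2)
        # Only unions that are again loops are constrained.
        if union in loops and union in accepting:
            return False
    return True
```

For example, with loops \(\lbrace a\rbrace , \lbrace b\rbrace , \lbrace a,b\rbrace \) , accepting only \(\lbrace a,b\rbrace \) (a Büchi-like condition) fails the test, while accepting only \(\lbrace a\rbrace \) passes it.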
(\(1 \Rightarrow 2\) )
Suppose that \(\mathcal {T}\) uses a Rabin condition with Rabin pairs \((E_1,F_1),\dots ,(E_r,F_r)\) . Let \(l_1\) and \(l_2\) be two rejecting loops. If \(l_1\cup l_2\) was accepting, then there would be some Rabin pair \((E_j,F_j)\) and some edge \(e\in l_1\cup l_2\) such that \(e\in E_j\) and \(e\notin F_j\) . However, the edge \(e\) belongs to \(l_1\) or to \(l_2\) , and the loop it belongs to would then be accepting too.
(\(2 \Rightarrow 3\) )
Let \(q\in V\) be of "index" \(i\) , and let \("t_q"\) be the subtree of \({\mathcal {ACD}}(\mathcal {T})\) associated to \(q\) . Suppose that there is a node \(\tau \in t_q\) such that \(p_i(\tau )\) is even ("round" node) and that it has two different children \(\sigma _1\) and \(\sigma _2\) . The "loops" \(\nu _i(\sigma _1)\) and \(\nu _i(\sigma _2)\) are maximal rejecting loops contained in \(\nu _i(\tau )\) , and since they share the state \(q\) , their union is also a loop that must verify \( \nu _i(\sigma _1) \cup \nu _i(\sigma _2)\in \mathcal {F}\) , contradicting the hypothesis.
(\(3 \Rightarrow 1\) )
We define a "Rabin condition" over \(\mathcal {T}\) . For each tree \(t_i\) in \({\mathcal {ACD}}(\mathcal {T})\) and each "round" node \(\tau \in t_i\) (\(p_i(\tau )\) even) we define the Rabin pair \((E_{i,\tau },F_{i,\tau })\) given by:
\( E_{i,\tau }=\nu _i(\tau )\setminus \bigcup _{\sigma \in "\mathit {Children}"(\tau )}\nu _i(\sigma ) \quad , \qquad F_{i,\tau }=E \setminus \nu _i(\tau ). \)
Let us show that this condition is "equivalent to" \(\mathcal {F}\) over the transition system \(\mathcal {T}\) . We begin by proving the following consequence of being a "Rabin ACD":
Claim: If \(\tau \) is a "round" node in the tree \(t_i\) of \({\mathcal {ACD}}(\mathcal {T})\) , and \(l\in {\mathpzc {Loop}}(\mathcal {T})\) is a "loop" such that \(l\subseteq \nu _i(\tau )\) and \(l\nsubseteq \nu _i(\sigma )\) for any child \(\sigma \) of \(\tau \) , then there is some edge \(e\in l\) such that \(e\notin \nu _i(\sigma )\) for any child \(\sigma \) of \(\tau \) .
Proof of the claim: Since for each state \(q\in V\) the tree \("t_q"\) has "Rabin shape", it is verified that \({\mathit {States}}_i(\sigma )\cap {\mathit {States}}_i(\sigma ^{\prime })=\emptyset \) for every pair of different children \(\sigma , \sigma ^{\prime }\) of \(\tau \) . Therefore, the union of \(\nu _i(\sigma )\) and \(\nu _i(\sigma ^{\prime })\) is not a loop, and any loop \(l\) contained in this union must be contained either in \(\nu _i(\sigma )\) or in \(\nu _i(\sigma ^{\prime })\) .
Let \(\varrho \in {\mathpzc {Run}}_{\mathcal {T}}\) be a "run" in \(\mathcal {T}\) , let \(l\in {\mathpzc {Loop}}(\mathcal {T})\) be the loop of \(\mathcal {T}\) such that \(\mathit {Inf}(\varrho )=l\) and let \(i\) be the "index" of the edges in this loop. If \(l\in \mathcal {F}\) , let \(\tau \) be a maximal node in \(t_i\) (for \({\sqsubseteq }\) ) such that \(l\subseteq \nu _i(\tau )\) . This node \(\tau \) is a round node, and from the previous claim it follows that there is some edge \(e\in l\) such that \(e\) does not belong to \(\nu _i(\sigma )\) for any child \(\sigma \) of \(\tau \) , so \(e\in E_{i,\tau }\) and \(e\notin F_{i,\tau }\) ; the "run" \(\varrho \) is thus accepted by the Rabin condition too. If \(l\notin \mathcal {F}\) , then for every round node \(\tau \) , if \(l\subseteq \nu _i(\tau )\) then \(l\subseteq \nu _i(\sigma )\) for some child \(\sigma \) of \(\tau \) , so \(l\cap E_{i,\tau }=\emptyset \) ; and if \(l\nsubseteq \nu _i(\tau )\) , then \(l\cap F_{i,\tau }\ne \emptyset \) . Therefore no Rabin pair \((E_{i,\tau },F_{i,\tau })\) accepts \(\varrho \) .
The Rabin condition presented in this proof does not necessarily use the optimal number of Rabin pairs required to define a Rabin condition "equivalent to" \(\mathcal {F}\) over \(\mathcal {T}\) .
Let \(\mathcal {T}=(V,E,\mathit {Source},\mathit {Target},I_0,\mathcal {F})\) be a Muller transition system. The following conditions are equivalent:
We can define a "Streett condition" over \(\mathcal {T}\) that is "equivalent to" \(\mathcal {F}\) over \(\mathcal {T}\) .
For every pair of loops \(l_1,l_2\in {\mathpzc {Loop}}(\mathcal {T})\) , if \(l_1\in \mathcal {F}\) and \(l_2\in \mathcal {F}\) , then \(l_1\cup l_2 \in \mathcal {F}\) .
\({\mathcal {ACD}}(\mathcal {T})\) is a "Streett ACD".
We omit the proof of proposition REF , as it is the dual of proposition REF .
Let \(\mathcal {T}=(V,E,\mathit {Source},\mathit {Target},I_0,\mathcal {F})\) be a Muller transition system. The following conditions are equivalent:
We can define a "parity condition" over \(\mathcal {T}\) that is "equivalent to" \(\mathcal {F}\) over \(\mathcal {T}\) .
For every pair of loops \(l_1,l_2\in {\mathpzc {Loop}}(\mathcal {T})\) , if \(l_1\in \mathcal {F}\, \Leftrightarrow \, l_2\in \mathcal {F}\) , then \(l_1\cup l_2 \in \mathcal {F}\, \Leftrightarrow \,l_1\in \mathcal {F}\) . That is, union of loops having the same “accepting status” preserves their “accepting status”.
\({\mathcal {ACD}}(\mathcal {T})\) is a "parity ACD".
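Condition (2) of this proposition can likewise be tested on an explicit Muller condition. A small Python sketch under an encoding of our own choosing (the loop family as a set of frozensets of edges, with the accepting loops singled out):

```python
from itertools import combinations

def admits_parity(loops, accepting):
    """Condition (2): unions of loops with the same accepting status
    keep that status. `loops`: set of frozensets of edges;
    `accepting`: the subset of `loops` in the Muller condition."""
    for l1, l2 in combinations(loops, 2):
        if (l1 in accepting) != (l2 in accepting):
            continue  # different status: nothing is required
        union = frozenset(l1 | l2)
        if union in loops and (union in accepting) != (l1 in accepting):
            return False
    return True
```

With loops \(\lbrace a\rbrace , \lbrace b\rbrace , \lbrace a,b\rbrace \) , accepting \(\lbrace b\rbrace \) and \(\lbrace a,b\rbrace \) passes the test, whereas accepting exactly \(\lbrace a\rbrace \) and \(\lbrace b\rbrace \) fails it: their union flips status.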
Moreover, the parity condition we can define over \(\mathcal {T}\) is a "\([1,\eta ]\) -parity" (resp. \([0,\eta -1]\) -parity / \("\mathit {Weak}_k"\) ) condition if and only if \({\mathcal {ACD}}(\mathcal {T})\) is a "\([1,\eta ]\) -parity ACD" (resp. \([0,\eta -1]\) -parity ACD / "\(\mathit {Weak}_k\) ACD").
(\(1 \Rightarrow 2\) )
Suppose that \(\mathcal {T}\) uses a parity acceptance condition with the priorities given by \(p:E \rightarrow \mathbb {N} \) . Then, since \(l_1\) and \(l_2\) are both accepting or both rejecting, \(p_1=\min p(l_1)\) and \(p_2=\min p(l_2)\) have the same parity, which is also the parity of \(\min p(l_1 \cup l_2)=\min \lbrace p_1,p_2\rbrace \) .
(\(2 \Rightarrow 3\) )
Let \(q\in V\) be of "index" \(i\) , and let \("t_q"\) be the subtree of \({\mathcal {ACD}}(\mathcal {T})\) associated to \(q\) . Suppose that there is a node \(\tau \in t_q\) with two different children \(\sigma _1\) and \(\sigma _2\) . The loops \(\nu _i(\sigma _1)\) and \(\nu _i(\sigma _2)\) are different maximal loops with the property \(\nu _i(\sigma )\subseteq \nu _i(\tau )\) and \(\nu _i(\sigma )\in \mathcal {F}\, \Leftrightarrow \, \nu _i(\tau ) \notin \mathcal {F}\) . Since they share the state \(q\) , their union is also a loop contained in \(\nu _i(\tau )\) and then
\( \nu _i(\sigma _1) \cup \nu _i(\sigma _2)\in \mathcal {F}\; \Leftrightarrow \; \nu _i(\tau ) \in \mathcal {F}\; \Leftrightarrow \; \nu _i(\sigma _1)\notin \mathcal {F}\)
contradicting the hypothesis.
(\(3 \Rightarrow 1\) )
From the construction of the "ACD-transformation", it follows that \("\mathcal {P}_{\mathcal {ACD}(\mathcal {T})}"\) is just a relabelling of \(\mathcal {T}\) with an equivalent parity condition.
For the implication from right to left of the last statement, we remark that if the trees of \({\mathcal {ACD}}(\mathcal {T})\) have priorities assigned in \([\mu ,\eta ]\) , then the parity transition system \("\mathcal {P}_{\mathcal {ACD}(\mathcal {T})}"\) will use priorities in \([\mu ,\eta ]\) . If \({\mathcal {ACD}}(\mathcal {T})\) is a "\(\mathit {Weak}_k\) ACD", then in each "strongly connected component" of \("\mathcal {P}_{\mathcal {ACD}(\mathcal {T})}"\) the number of priorities used will be the same as the "height" of the corresponding tree of \({\mathcal {ACD}}(\mathcal {T})\) (at most \(k\) ).
For the other implication it suffices to remark that the priorities assigned by \({\mathcal {ACD}}(\mathcal {T})\) are optimal (proposition REF ).
Given a "transition system graph" \(G=(V,E,\mathit {Source},\mathit {Target},I_0)\) and a "Muller condition" \(\mathcal {F}\subseteq \mathcal {P}(E)\) , we can define a "parity condition" \(p:E\rightarrow \mathbb {N} \) "equivalent to" \(\mathcal {F}\) over \(G\) if and only if we can define a "Rabin condition" \(R\) and a "Streett condition" \(S\) over \(G\) such that
\( (G,\mathcal {F}) \,"\simeq "\, (G,R)\, "\simeq "\, (G,S) \) .
Moreover, if the Rabin condition \(R\) uses \(r\) Rabin pairs and the Streett condition \(S\) uses \(s\) Streett pairs, we can take the parity condition \(p\) using priorities in
\([1,2r+1]\) if \(r\le s\) .
\([0,2s]\) if \(s\le r\) .
The first statement is a consequence of the characterisations (2) or (3) from propositions REF , REF and REF .
For the second statement we remark that the trees of the "alternating cycle decomposition" of \((G,\mathcal {F})\) have "height" at most \(\min \lbrace 2r+1, 2s+1\rbrace \) . If \(r\ge s\) , then the height \(2r+1\) can only be reached by (ACD)odd trees, and if \(s\ge r\) , the height \(2s+1\) can only be reached by (ACD)even trees.
From the last statement of proposition REF and thanks to the second item of proposition REF , we obtain:
Given a "transition system graph" \(G\) and a Muller condition \(\mathcal {F}\) over \(G\) , there is an equivalent \("\mathit {Weak}_k"\) condition over \(G\) if and only if there are both \([0,k]\) and "\([1,k+1]\) -parity" conditions "equivalent to" \(\mathcal {F}\) over \(G\) .
In particular, there is an equivalent "Weak condition" if and only if there are "Büchi" and "co-Büchi" conditions equivalent to \(\mathcal {F}\) over \(G\) .
It is important to notice that the previous results are stated for non-labelled transition systems. We must be careful when translating these results to automata and formal languages. For instance, in [42]} there is an example of a non-deterministic automaton \(\mathcal {A}\) , such that we can put on top of it Rabin and Streett conditions \(R\) and \(S\) such that \(\mathcal {L}(\mathcal {A},R)=\mathcal {L}(\mathcal {A},S)\) , but we cannot put a parity condition on top of it recognising the same language. However, proposition REF allows us to obtain analogous results for "deterministic automata".
[[42]}]
Let \(\mathcal {A}\) be the "transition system graph" of a "deterministic automaton" with set of states \(Q\) . Let \(R\) be a Rabin condition over \(\mathcal {A}\) with \(r\) pairs and \(S\) a Streett condition over \(\mathcal {A}\) with \(s\) pairs such that \(\mathcal {L}(\mathcal {A},R)=\mathcal {L}(\mathcal {A},S)\) . Then, there exists a parity condition \(p: Q \times \Sigma \rightarrow \mathbb {N} \) over \(\mathcal {A}\) such that \(\mathcal {L}(\mathcal {A},p)=\mathcal {L}(\mathcal {A},R)=\mathcal {L}(\mathcal {A},S)\) .
Moreover,
if \(r\le s\) , we can take \(p\) to be a "\([1,2r+1]\) -parity condition".
if \(s\le r\) , we can take \(p\) to be a "\([0,2s]\) -parity condition".
Proposition REF implies that \((\mathcal {A},R)"\simeq "(\mathcal {A},S)\) , and after corollary REF , there is a parity condition \(p\) using the proclaimed priorities such that \((\mathcal {A},p)"\simeq "(\mathcal {A},R)\) . Therefore \(\mathcal {L}(\mathcal {A},p)=\mathcal {L}(\mathcal {A},R)\) (for any automaton, deterministic or not, \((\mathcal {A},p)"\simeq "(\mathcal {A},R)\) implies \(\mathcal {L}(\mathcal {A},p)=\mathcal {L}(\mathcal {A},R)\) ).
Let \(\mathcal {A}\) be the "transition system graph" of a deterministic automaton and \(p\) and \(p^{\prime }\) be \([0,k]\) and "\([1,k+1]\) -parity conditions" respectively over \(\mathcal {A}\) such that \(\mathcal {L}(\mathcal {A},p)= \mathcal {L}(\mathcal {A},p^{\prime })\) . Then, there exists a \("\mathit {Weak}_k"\) condition \(W\) over \(\mathcal {A}\) such that \(\mathcal {L}(\mathcal {A},W)=\mathcal {L}(\mathcal {A},p)\) .
In particular, there is a \("\mathit {Weak}"\) condition \(W\) over \(\mathcal {A}\) such that \(\mathcal {L}(\mathcal {A},W)=L\) if and only if there are both "Büchi" and "co-Büchi" conditions \(B,B^{\prime }\) over \(\mathcal {A}\) such that \(\mathcal {L}(\mathcal {A},B)= \mathcal {L}(\mathcal {A},B^{\prime })=L\) .
It follows from proposition REF and corollary REF .
Conclusions
We have presented a transformation that, given a Muller "transition system", provides an equivalent "parity" transition system that has minimal size and uses an optimal number of priorities among those which accept a "locally bijective morphism" to the original Muller transition system. In order to describe this transformation we have introduced the "alternating cycle decomposition", a data structure that arranges all the information about the acceptance condition of the transition system and the interplay between this condition and the structure of the system.
We have also shown how the alternating cycle decomposition can be useful to reason about acceptance conditions, and we hope that this representation of the information will be helpful in future works.
We have not discussed the complexity of effectively computing the "alternating cycle decomposition" of a Muller transition system. It is known that solving Muller games is \(\mathrm {PSPACE}\) -complete when the acceptance condition is given as a list of accepting sets of colours
[18]}. However, given a Muller game \(\mathcal {G}\) and the "Zielonka tree" of its Muller condition, we have a transformation into a parity game of polynomial size on the size of \(\mathcal {G}\) , so solving Muller games with this extra information is in \(\mathrm {NP}\cap \mathrm {co}\) -\(\mathrm {NP}\) . Also, in order to build \({\mathcal {ACD}}(\) we suppose that the Muller condition is expressed using as colours the set of edges of the game (that is, as an explicit Muller condition), and solving explicit Muller games is in \(\mathrm {PTIME}\) [1]}. Consequently, unless \(\mathrm {PSPACE}\) is contained in \(\mathrm {NP}\cap \mathrm {co}\) -\(\mathrm {NP}\) , we cannot compute the "Zielonka tree" of a Muller condition, nor the "alternating cycle decomposition" of a Muller transition system in polynomial time.
Generalized weak conditions
Let \(\mathcal {T}=(V,E,\mathit {Source},\mathit {Target},q_0,\mathit {Acc})\) be a "transition system". An ""ordered partition"" of \(\mathcal {T}\) is a partition of \(V\) , \(V_1,\dots ,V_s\subseteq V\) , such that for every pair of vertices \(p\in V_i\) , \(q\in V_j\) , if there is a transition from \(p\) to \(q\) , then \(i\le j\) . We call each subgraph induced by \(V_i\) a ""component of the ordered partition"". Every such component must be a union of "strongly connected components" of \(\mathcal {T}\) , so we can imagine that the partition is the decomposition into strongly connected components suitably ordered. We remark that given an "ordered partition" of \(\mathcal {T}\) , a "run" will eventually stay in some component \(V_i\) . Given different representations of acceptance conditions \(\mathit {Acc}_1, \dots , \mathit {Acc}_m\) from some of the previous classes, a ""generalised weak condition"" is a condition for which we allow the use of the different conditions in different components of an ordered partition of a transition system. We will mainly use the following type of "generalised weak" condition:
Given a transition system \(\mathcal {T}\) and an "ordered partition" \((V_i)_{i=1}^s\) , a \("\mathit {Weak}_k"\) -condition is a "parity condition" such that in any component \(V_i\) there are at most \(k\) different priorities associated to transitions between vertices in \(V_i\) . It is the "generalised weak condition" for \([1,k]\) - and \([0,k-1]\) -parity conditions. The adjective Weak has typically been used to refer to the condition \("\mathit {Weak}_1"\) . It corresponds to a partition of \(\mathcal {T}\) into “accepting” and “rejecting” components. A "run" is accepted if the component in which it finally stays is accepting.
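Whether a given priority labelling is a \(\mathit {Weak}_k\) condition can be decided from the decomposition into strongly connected components suggested above: it suffices that every SCC uses at most \(k\) distinct priorities on its internal transitions, since the SCCs in topological order form an ordered partition. A minimal Python sketch (the encoding of transitions as triples source–target–priority is our assumption):

```python
from collections import defaultdict

def sccs(vertices, edges):
    """Tarjan's algorithm; `edges` is a list of (source, target, priority)."""
    adj = defaultdict(list)
    for u, v, _ in edges:
        adj[u].append(v)
    index, low, on_stack, stack, out = {}, {}, set(), [], []
    counter = [0]
    def strong(v):
        index[v] = low[v] = counter[0]; counter[0] += 1
        stack.append(v); on_stack.add(v)
        for w in adj[v]:
            if w not in index:
                strong(w); low[v] = min(low[v], low[w])
            elif w in on_stack:
                low[v] = min(low[v], index[w])
        if low[v] == index[v]:  # v is the root of an SCC
            comp = set()
            while True:
                w = stack.pop(); on_stack.discard(w); comp.add(w)
                if w == v: break
            out.append(comp)
    for v in vertices:
        if v not in index:
            strong(v)
    return out

def is_weak_k(vertices, edges, k):
    """True iff every SCC uses at most k priorities on its internal edges."""
    for comp in sccs(vertices, edges):
        prios = {p for u, v, p in edges if u in comp and v in comp}
        if len(prios) > k:
            return False
    return True
```

For example, a system with an SCC \(\lbrace 1,2\rbrace \) using priorities 0 and 1, and a sink SCC \(\lbrace 3\rbrace \) using priority 2, is \(\mathit {Weak}_2\) but not \(\mathit {Weak}_1\) .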
"Transition systems" (resp. "automata", "games") using an acceptance condition of type \(\mathcal {R}\) will be called ""\(\mathcal {R}\) -transition systems"" (resp. \(\mathcal {R}\) -automata, \(\mathcal {R}\) -games). We will also say that they are labelled with an \(\mathcal {R}\) -condition.
As we have already observed, we can always suppose that \(\Gamma =E\) . However, this supposition might affect the size of the representation of the acceptance conditions, and therefore the complexity of related algorithms as shown in [1]}.
In figure REF we show three automata recognizing the language
\( \mathcal {L}= \lbrace u \in \lbrace 0,1\rbrace ^\omega \; : \; \mathit {Inf}(u)=\lbrace 1\rbrace \text{ or } (\mathit {Inf}(u)=\lbrace 0\rbrace \text{ and} \text{ there is an even number of 1's in } u) \rbrace \)
and using different acceptance conditions. We represent Büchi conditions by marking the accepting transitions with a • symbol. For Muller or parity conditions we write on each transition \(\alpha :a\) , with \(\alpha \in \lbrace 0,1\rbrace \) the input letter and \(a\in \Gamma \) the output letter. The initial vertices are represented with an incoming arrow.
<FIGURE>In the following we will use a small abuse of notation and speak indifferently of an acceptance condition and its representation. For example, we will sometimes replace the "acceptance condition" of a "transition system" by a family of sets \(\mathcal {F}\) (representing a "Muller condition") or by a function assigning priorities to edges.
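For ultimately periodic words \(u v^\omega \) , membership in the language \(\mathcal {L}\) of this example can be decided directly from the definition, since \(\mathit {Inf}(u v^\omega )\) is just the set of letters occurring in \(v\) . A hypothetical Python sketch of this check, independent of the three automata of the figure:

```python
def in_language(prefix, period):
    """Decide u·v^ω ∈ L, where L = { w ∈ {0,1}^ω : Inf(w) = {1}, or
    Inf(w) = {0} and w contains an even number of 1's }.
    `prefix` (u) and `period` (v, nonempty) are strings over {'0','1'}."""
    inf = set(period)  # letters occurring infinitely often in u·v^ω
    if inf == {'1'}:
        return True
    if inf == {'0'}:
        # The period has no 1's, so every 1 occurs in the prefix.
        return prefix.count('1') % 2 == 0
    return False  # Inf = {0,1}: neither clause of the definition applies
```

For instance, \(1^\omega \) and \(11\cdot 0^\omega \) are in \(\mathcal {L}\) , while \(1\cdot 0^\omega \) and \((01)^\omega \) are not.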
Equivalent conditions
Two different representations of acceptance conditions over a set \(\Gamma \) are ""equivalent"" if they define the same set \(\mathit {Acc}\subseteq \Gamma ^\infty \) .
Given a "transition system graph" \(G\) , two representations \(\mathcal {R}_1,\mathcal {R}_2\) of acceptance conditions are ""equivalent over"" \(G\) if they define the same accepting subset of runs of \("\mathpzc {Run}"_{T}\) . We write \((G,\mathcal {R}_1) \simeq (G,\mathcal {R}_2)\) .
If \(\mathcal {A}\) is the "transition system graph" of an automaton (as in example REF ), and \(\mathcal {R}_1,\mathcal {R}_2\) are two representations of acceptance conditions such that \((\mathcal {A},\mathcal {R}_1) \simeq (\mathcal {A},\mathcal {R}_2)\) , then they recognise the same language: \(\mathcal {L}(\mathcal {A},\mathcal {R}_1)=\mathcal {L}(\mathcal {A},\mathcal {R}_2)\) . However, the converse only holds for "deterministic" automata.
Let \(\mathcal {A}\) be the "transition system graph" of a "deterministic automaton" over the alphabet \(\Sigma \) and let \(\mathcal {R}_1,\mathcal {R}_2\) be two representations of acceptance conditions such that \(\mathcal {L}(\mathcal {A},\mathcal {R}_1)=\mathcal {L}(\mathcal {A},\mathcal {R}_2)\) . Then, both conditions are "equivalent over" \(\mathcal {A}\) , \((\mathcal {A},\mathcal {R}_1) \simeq (\mathcal {A},\mathcal {R}_2)\) .
Let \(\varrho \in {\mathpzc {Run}}_{T}\) be an infinite "run" in \(\mathcal {A}\) , and let \(u\in \Sigma ^\omega \) be the word in the input alphabet such that \(\varrho \) is the "run over" \(u\) in \(\mathcal {A}\) . Since \(\mathcal {A}\) is deterministic, \(\varrho \) is the only "run over" \(u\) , then \(\varrho \) belongs to the "acceptance condition" of \((\mathcal {A},\mathcal {R}_i)\) if and only if the word \(u\) belongs to \(\mathcal {L}(\mathcal {A},\mathcal {R}_1)=\mathcal {L}(\mathcal {A},\mathcal {R}_2)\) , for \(i=1,2\) .
The deterministic parity hierarchy
As we have mentioned in the introduction, deterministic Büchi automata have strictly less expressive power than deterministic Muller automata. However, every language recognized by a deterministic Muller automaton can be recognized by a deterministic parity automaton, although a minimal number of priorities may be required to do so. We can assign to each regular language \(L\subseteq \Sigma ^\omega \) the optimal number of priorities needed to recognise it using a "deterministic automaton". We obtain in this way the ""deterministic parity hierarchy"", first introduced by Mostowski in [2]}, represented in figure REF . In that figure, we denote by \([\mu ,\eta ]\) the set of languages over an alphabet \(\Sigma \) that can be recognized using a deterministic "\([\mu ,\eta ]\) -parity" automaton. The intersection of the levels \([0,k]\) and \([1,k+1]\) is exactly the set of languages recognized using a \("\mathit {Weak}_k"\) deterministic automaton.
This hierarchy is strict, that is, for each level of the hierarchy there are languages that do not appear in lower levels [3]}.
<FIGURE>We observe that the set of languages that can be recognised by a deterministic "Rabin" automaton using \(r\) Rabin pairs is the level \([1,2r+1]\) . Similarly, the languages recognisable by a deterministic "Streett" automaton using \(s\) pairs is \([0,2s]\) .
For non-deterministic automata the hierarchy collapses at the level \([0,1]\) (Büchi automata).
Trees
A ""tree"" is a set of sequences of non-negative integers \(T\subseteq \omega ^*\) that is prefix-closed: if \(\tau \cdot i \in T\) , for \(\tau \in \omega ^*, i\in \omega \) , then \(\tau \in T\) .
In this report we will only consider finite trees.
The elements of \(T\) are called ""nodes"". A ""subtree"" of \(T\) is a tree \(T^{\prime }\subseteq T\) . The empty sequence \(\varepsilon \) belongs to every non-empty "tree" and it is called the ""root"" of the tree. A "node" of the form \(\tau \cdot i\) , \(i\in \omega \) , is called a ""child"" of \(\tau \) , and \(\tau \) is called its ""parent"". We let \(\mathit {Children}(\tau )\) denote the set of children of a node \(\tau \) . Two different children \(\sigma _1,\sigma _2\) of \(\tau \) are called ""siblings"", and we say that \(\sigma _1\) is ""older"" than \(\sigma _2\) if \("\mathit {Last}"(\sigma _1)<"\mathit {Last}"(\sigma _2)\) . We will draw the children of a node from left to right following this order.
If two "nodes" \(\tau ,\sigma \) verify \(\tau {\sqsubseteq }\sigma \) , then \(\tau \) is called an ""ancestor"" of \(\sigma \) , and \(\sigma \) a ""descendant"" of \(\tau \) (we add the adjective “strict” if in addition they are not equal).
A "node" is called a ""leaf"" of \(T\) if it is a maximal sequence of \(T\) (for the "prefix relation" \({\sqsubseteq }\) ). A ""branch"" of \(T\) is the set of prefixes of a "leaf". The set of branches of \(T\) is denoted \(""\mathit {Branch}""(T)\) . We consider the lexicographic order over leaves, that is, for two leaves \(\sigma _1, \, \sigma _2\) , \(\sigma _1<_{\mathit {lex}}\sigma _2\) if \(\sigma _1(k)<\sigma _2(k)\) , where \(k\) is the smallest position such that \(\sigma _1(k)\ne \sigma _2(k)\) . We extend this order to \("\mathit {Branch}"(T)\) : let \(\beta _1\) , \(\beta _2\) be two branches defined by the leaves \(\sigma _1\) and \(\sigma _2\) respectively. We define \(\beta _1<\beta _2\) if \(\sigma _1 <_{\mathit {lex}}\sigma _2\) . That is, the set of branches is ordered from left to right.
For a node \(\tau \in T\) we define \(""\mathit {Subtree}_T""(\tau )\) as the subtree consisting of the set of nodes that appear below \(\tau \) , or above it in the same branch (they are "ancestors" or "descendants" of \(\tau \) ):
\( \mathit {Subtree}_T(\tau )= \lbrace \sigma \in T \; : \; \sigma {\sqsubseteq }\tau \text{ or } \tau {\sqsubseteq }\sigma \rbrace . \)
We omit the subscript \(T\) when the tree is clear from the context.
Given a node \(\tau \) of a tree \(T\) , the ""depth"" of \(\tau \) in \(T\) is defined as the length of \(\tau \) , \({\mathit {Depth}}(\tau )=|\tau |\) (the root \(\varepsilon \) has depth 0). The ""height of a tree"" \(T\) , written \({\mathit {Height}}(T)\) , is defined as the maximal depth of a "leaf" of \(T\) plus 1. The ""height of the node"" \(\tau \in T\) is \({\mathit {Height}}(T)-{\mathit {Depth}}(\tau )\) (maximal leaves have height 1).
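These finite-tree notions translate directly into executable form. The following sketch (in Python; the function names are ours, not the report's) represents a tree as a prefix-closed set of tuples of non-negative integers and implements \({\mathit {Children}}\) , \({\mathit {Height}}\) and \({\mathit {Subtree}}\) as defined above:

```python
def is_tree(T):
    """Prefix-closure: every proper prefix of a node is again a node."""
    return all(tau[:-1] in T for tau in T if tau)

def children(T, tau):
    """Children of tau, ordered from the oldest (left) to the youngest."""
    return sorted(s for s in T if len(s) == len(tau) + 1 and s[:-1] == tau)

def height(T):
    """Height of the tree: maximal depth of a leaf plus 1."""
    return max(len(tau) for tau in T) + 1

def subtree(T, tau):
    """Subtree_T(tau): the ancestors and descendants of tau."""
    return {s for s in T if s[:len(tau)] == tau or tau[:len(s)] == s}
```

On a tree of height 4, a node at depth 1 then has height 3, matching the conventions of the example above.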
A ""labelled tree"" is a pair \((T,\nu )\) , where \(T\) is a "tree" and \(\nu : T \rightarrow \Lambda \) is a labelling function into a set of labels \(\Lambda \) .
In figure REF we show a tree \(T\) of "height" 4 and we show \({\mathit {Subtree}}(\tau )\) for \(\tau =\langle 2 \rangle \) . The node \(\tau \) has "depth" 1 and height 3. The branches \(\alpha \) , \(\beta \) , \(\gamma \) are ordered as \(\alpha <\beta <\gamma \) .
<FIGURE>
An optimal transformation of Muller into parity conditions
In the previous section we have presented different classes of "acceptance conditions" for "transition systems" over infinite words, with "Muller conditions" being the most general kind of "\(\omega \) -regular conditions".
In order to translate a "Muller condition" \(\mathcal {F}\) over \(\Gamma \) into a simpler one, the usual procedure is to build
a "deterministic automaton" over \(\Gamma \) using a simpler condition that accepts \(\mathcal {F}\) , i.e., this automaton will accept the words \(u\in \Gamma ^\omega \) such that \("\mathit {Inf}"(u)\in \mathcal {F}\) . As we have asserted, the simplest condition that we could use in general in such a deterministic automaton is a "parity" one, and the number of priorities that we can use is determined by the position of the "Muller condition" in the "parity hierarchy".
In this section we build a deterministic parity automaton that recognises a given "Muller condition", and we prove that this automaton has minimal size and uses the optimal number of priorities. This construction is based on the notion of the "Zielonka tree", introduced in [4]} (applied there to the study of the optimal memory needed to solve a Muller game). In most cases, this automaton strictly improves on other constructions such as the LAR [5]} or its modifications [6]}.
All constructions and proofs in this section can be regarded as a special case of those of section . However, we include the proofs for this case here, since we think they will help the reader to understand many ideas that will reappear in section in a more complicated context.
The Zielonka tree automaton
In this first section we present the Zielonka tree and the parity automaton that it induces.
[Zielonka tree of a Muller condition]
Let \(\Gamma \) be a finite set of colours and \(\mathcal {F}\subseteq \mathcal {P}(\Gamma )\) a "Muller condition" over \(\Gamma \) . The ""Zielonka tree"" of \(\mathcal {F}\) , written \(T_\mathcal {F}\) , is a tree labelled with subsets of \(\Gamma \) via the labelling \(\nu :T_\mathcal {F}\rightarrow \mathcal {P}(\Gamma )\) , defined inductively as:
\(\nu (\varepsilon )=\Gamma \)
If \(\tau \) is a node already constructed labelled with \(S=\nu (\tau )\) , we let \(S_1,\dots ,S_k\) be the maximal subsets of \(S\) verifying the property
\( S_i \in \mathcal {F}\; \Leftrightarrow \; S\notin \mathcal {F}\quad \text{ for each } i=1,\dots ,k . \)
For each \(i=1,\dots ,k\) we add a child to \(\tau \) labelled with \(S_i\) .
We have not specified the order in which children of a node appear in the Zielonka tree. Therefore, strictly speaking there will be several Zielonka trees of a "Muller condition". The order of the nodes will not have any relevance in this work and we will speak of “the” Zielonka tree of \(\mathcal {F}\) .
We say that the condition \(\mathcal {F}\) and the tree \("T_\mathcal {F}"\) are (Zielonka)even if \(\Gamma \in \mathcal {F}\) , and that they are (Zielonka)odd otherwise. We associate a priority \(""p_Z(\tau )""\) to each node (to each level, in fact) of the "Zielonka tree" as follows:
If \("T_\mathcal {F}"\) is (Zielonka)even, then \(p_Z(\tau )={\mathit {Depth}}(\tau )\) .
If \("T_\mathcal {F}"\) is (Zielonka)odd, then \(p_Z(\tau )={\mathit {Depth}}(\tau )+1\) .
In this way, \(p_Z(\tau )\) is even if and only if \(\nu (\tau )\in \mathcal {F}\) . We represent nodes \(\tau \in "T_\mathcal {F}"\) such that \("p_Z(\tau )"\) is even as a ""circle"" (round nodes), and those for which \(p_Z(\tau )\) is odd as a square.
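The inductive definition above can be prototyped directly. The sketch below (our own helper names; we restrict to non-empty subsets, matching the trees shown in the figures, and the condition \(\mathcal {F}\) is passed as a membership predicate) computes the labelled "Zielonka tree" and the priority assignment \(p_Z\) :

```python
from itertools import combinations

def max_alternating_subsets(S, in_F):
    """Maximal non-empty subsets S_i of S with: S_i in F  <=>  S not in F."""
    flip = not in_F(frozenset(S))
    found = []
    for k in range(len(S) - 1, 0, -1):            # larger subsets first
        for sub in map(frozenset, combinations(sorted(S), k)):
            if in_F(sub) == flip and not any(sub < m for m in found):
                found.append(sub)                 # keep only maximal ones
    return found

def zielonka_tree(colours, in_F):
    """The tree as a prefix-closed set of tuples, plus its labelling nu."""
    tree, labels = {()}, {(): frozenset(colours)}
    stack = [()]
    while stack:
        tau = stack.pop()
        for i, S in enumerate(max_alternating_subsets(labels[tau], in_F)):
            child = tau + (i,)
            tree.add(child)
            labels[child] = S
            stack.append(child)
    return tree, labels

def p_Z(tau, zielonka_even):
    """Priority of a node: its depth, shifted by one for an odd condition."""
    return len(tau) + (0 if zielonka_even else 1)
```

For \(\mathcal {F}_1=\lbrace \lbrace a\rbrace ,\lbrace b\rbrace \rbrace \) over \(\lbrace a,b,c\rbrace \) this produces a root labelled \(\lbrace a,b,c\rbrace \) with two leaves labelled \(\lbrace a\rbrace \) and \(\lbrace b\rbrace \) .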
""""
Let \(\Gamma _1=\lbrace a,b,c\rbrace \) and \(\mathcal {F}_1=\lbrace \lbrace a\rbrace ,\lbrace b\rbrace \rbrace \) (the "Muller condition" of the automaton of "example REF "). The "Zielonka tree" \(T_{\mathcal {F}_1}\) is shown in figure REF . It is (Zielonka)odd.
Let \(\Gamma _2=\lbrace a,b,c,d\rbrace \) and
\(\mathcal {F}_2=\lbrace \lbrace a,b,c,d\rbrace ,\lbrace a,b,d \rbrace ,\lbrace a,c,d\rbrace ,\lbrace b,c,d \rbrace ,\lbrace a,b\rbrace ,\lbrace a,d\rbrace ,\lbrace b,c\rbrace ,\lbrace b,d \rbrace ,\lbrace a\rbrace ,\lbrace b \rbrace ,\lbrace d \rbrace \rbrace .\)
The "Zielonka tree" \(T_{\mathcal {F}_2}\) is (Zielonka)even and it is shown on figure REF .
To the right of each tree we show the priorities assigned to the nodes of the corresponding level. We have named the branches of the Zielonka trees with Greek letters and we indicate the names of the nodes in violet.
<FIGURE>We show next how to use the "Zielonka tree" of \(\mathcal {F}\) to build a "deterministic automaton" recognizing the "Muller condition" \(\mathcal {F}\) .
For a branch \(\beta \in {\mathit {Branch}}("T_\mathcal {F}")\) and a colour \(a\in \Gamma \) we define \((Zielonka){\mathit {Supp}}(\beta ,a)=\tau \) as the "deepest" node (maximal for \({\sqsubseteq }\) ) in \(\beta \) such that \(a\in \nu (\tau )\) .
Given a tree \(T\) , a branch \(\beta \in {\mathit {Branch}}(T)\) and a node \(\tau \in \beta \) , if \(\tau \) is not a "leaf" then it has a unique "child" \(\sigma _\beta \) such that \(\sigma _\beta \in \beta \) . In this case, we let \(""\mathit {Nextchild}""(\beta ,\tau )\) be the next "sibling" of \(\sigma _\beta \) on its right, that is:
\( \mathit {Nextchild}(\beta ,\tau )={\left\lbrace \begin{array}{ll}\text{the leftmost child of } \tau , & \text{ if } \sigma _\beta \text{ is the rightmost child of } \tau ,\\[2mm]\text{the sibling of } \sigma _\beta \text{ immediately to its right}, & \text{ otherwise.}\end{array}\right.} \)
We define \(""\mathit {Nextbranch}""(\beta ,\tau )\) as the leftmost branch in \(T\) (smallest in the order defined in section REF ) below \("\mathit {Nextchild}"(\beta ,\tau )\) , if \(\tau \) is not a "leaf", and we let \(\mathit {Nextbranch}(\beta ,\tau )= \beta \) if \(\tau \) is a leaf of \(T\) .
In the previous example, on the tree \("T_{\mathcal {F}_2}"\) of figure REF , we have that \((Zielonka){\mathit {Supp}}(\alpha ,c)=\langle 0 \rangle \) , \({\mathit {Nextchild}}(\beta ,\langle \varepsilon \rangle )=\langle 1 \rangle \) , \("\mathit {Nextbranch}"(\beta ,\langle \varepsilon \rangle )=\gamma \) , \({\mathit {Nextchild}}(\beta ,\langle 0 \rangle )=\langle 0 {,} 0 \rangle \) and \("\mathit {Nextbranch}"(\beta ,\langle 0 \rangle )=\alpha \) .
[Zielonka tree automaton]
Given a "Muller condition" \(\mathcal {F}\) over \(\Gamma \) with "Zielonka tree" \(T_\mathcal {F}\) , we define the ""Zielonka tree automaton"" \(\mathcal {\mathcal {Z}_{\mathcal {F}}}=(Q,\Gamma ,q_0, [\mu ,\eta ],\delta , p:Q\times \Gamma \rightarrow [\mu ,\eta ])\) as a "deterministic automaton" using a "parity" acceptance condition given by \(p\) , where
\(Q={\mathit {Branch}}(T_\mathcal {F})\) , the set of states is the set of branches of \(T_\mathcal {F}\) .
The initial state \(q_0\) is irrelevant; we pick the leftmost branch of \(T_\mathcal {F}\) .
\(\delta (\beta ,a)= {\mathit {Nextbranch}}(\beta ,(Zielonka){\mathit {Supp}}(\beta ,a))\) .
\(\mu =0, \; \eta ={\mathit {Height}}(T_\mathcal {F})-1\) if \(\mathcal {F}\) is (Zielonka)even.
\(\mu =1, \; \eta ={\mathit {Height}}(T_\mathcal {F})\) if \(\mathcal {F}\) is (Zielonka)odd.
\(p(\beta ,a)="p_Z"((Zielonka){\mathit {Supp}}(\beta ,a))\) .
The transitions of the automaton are determined as follows: if we are in a branch \(\beta \) and we read a colour \(a\) , then we move up in the branch \(\beta \) until we reach a node \(\tau \) that contains the colour \(a\) in its label. Then we pick the child of \(\tau \) just on the right of the branch \(\beta \) (in a cyclic way) and we move to the leftmost branch below it. We produce the priority corresponding to the depth of \(\tau \) .
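The informal description above can be made concrete. Here is a minimal sketch of \(\delta \) and the emitted priorities, assuming the tree and labelling are given as in the previous sketch (a branch is represented by its leaf; the helper names are ours, and the returned number is the depth of \({\mathit {Supp}}(\beta ,a)\) , to be shifted by one for an odd condition):

```python
def children(T, tau):
    return sorted(s for s in T if len(s) == len(tau) + 1 and s[:-1] == tau)

def leftmost_leaf(T, tau):
    while children(T, tau):
        tau = children(T, tau)[0]
    return tau

def supp(labels, leaf, a):
    """Deepest node on the branch whose label contains the colour a."""
    return max((leaf[:i] for i in range(len(leaf) + 1)
                if a in labels[leaf[:i]]), key=len)

def delta(T, labels, leaf, a):
    """One step of the Zielonka tree automaton: (next branch, depth of Supp)."""
    tau = supp(labels, leaf, a)
    kids = children(T, tau)
    if not kids:                       # tau is a leaf: stay on the same branch
        return leaf, len(tau)
    i = kids.index(leaf[:len(tau) + 1])
    return leftmost_leaf(T, kids[(i + 1) % len(kids)]), len(tau)

# Demo on the Zielonka tree of F_1 = {{a},{b}} over {a,b,c}:
T = {(), (0,), (1,)}
labels = {(): frozenset('abc'), (0,): frozenset('a'), (1,): frozenset('b')}
q, out = leftmost_leaf(T, ()), []
for a in 'abab':
    q, d = delta(T, labels, q, a)
    out.append(d + 1)                  # F_1 is odd: p_Z = depth + 1
# reading (ab)^omega yields priorities 2,1,1,1,...: min inf = 1, rejected
```

The run alternates between the two branches, producing the odd priority 1 infinitely often, which is consistent with \(\lbrace a,b\rbrace \notin \mathcal {F}_1\) .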
Let us consider the conditions of example REF . The "Zielonka tree automaton" for the "Muller condition" \(\mathcal {F}_1\) is shown in figure REF , and that for \(\mathcal {F}_2\) in figure REF . States are the branches of the respective "Zielonka trees".
<FIGURE>[Correctness]
Let \(\mathcal {F}\subseteq \mathcal {P}(\Gamma )\) be a "Muller condition" over \(\Gamma \) . Then, a word \(u\in \Gamma ^\omega \) verifies \("\mathit {Inf}(u)"\in \mathcal {F}\) (\(u\) belongs to the Muller condition) if and only if \(u\) is "accepted by" \("\mathcal {\mathcal {Z}_{\mathcal {F}}}"\) .
Let us first remark that we can associate to each input word \(u\in \Gamma ^\omega \) an infinite sequence of nodes in the Zielonka tree \(\lbrace \tau _{u,i}\rbrace _{i=0}^{\infty }\) as follows: let \(\beta _i\) be the state of the Zielonka tree automaton (the branch of the tree \("T_\mathcal {F}"\) ) reached after reading \(u_0u_1\dots u_{i-1}\) (\(\beta _0\) being the leftmost branch), then
\( \tau _{u,i}=(Zielonka){\mathit {Supp}}(\beta _i,u_i) \)
The sequence of priorities produced by the automaton \(\mathcal {\mathcal {Z}_{\mathcal {F}}}\) when reading \(u\) is given by the priorities associated to \(\tau _{u,i}\) , that is, \("\mathit {Output}"_\mathcal {\mathcal {Z}_{\mathcal {F}}}(u)=\lbrace "p_Z"(\tau _{u,i})\rbrace _{i=0}^{\infty }\) .
Let \(p_{\min }\) be the minimal priority produced infinitely often in \("\mathit {Output}"_\mathcal {\mathcal {Z}_{\mathcal {F}}}(u)\) . We first show that there is a unique node appearing infinitely often in \(\lbrace \tau _{u,i}\rbrace _{i=0}^{\infty }\) such that \("p_Z"(\tau _{u,i})=p_{\min }\) . Indeed, transitions of \(\mathcal {\mathcal {Z}_{\mathcal {F}}}\) verify that \(\delta (\beta ,a)\) is a branch in the subtree under \((Zielonka){\mathit {Supp}}(\beta ,a)\) . However, subtrees below two different "siblings" have disjoint sets of branches, so if \(\tau _{u,i}\) and \(\tau _{u,k}\) are siblings, for some \(k>i\) , then there must exist some transition at position \(j\) , \(i<j<k\) , such that \((Zielonka){\mathit {Supp}}(\beta _j,u_j)\) is a strict "ancestor" of \(\tau _{u,i}\) and \(\tau _{u,k}\) . Therefore, \("p_Z"(\tau _{u,j})<"p_Z"(\tau _{u,i})\) , which cannot happen infinitely often since \(p_{\min }="p_Z"(\tau _{u,i})\) .
We let \(\tau _p\) be the highest node visited infinitely often. The reasoning above also proves that all nodes appearing infinitely often in \(\lbrace \tau _{u,i}\rbrace _{i=0}^{\infty }\) are descendants of \(\tau _p\) , and therefore the states appearing infinitely often in the "run over \(u\) " in \(\mathcal {\mathcal {Z}_{\mathcal {F}}}\) are branches in \({\mathit {Subtree}}(\tau _p)\) . We will prove that
\(\mathit {Inf}(u)\subseteq \nu (\tau _p)\) .
For every child \(\sigma \) of \(\tau _p\) , \(\mathit {Inf}(u)\nsubseteq \nu (\sigma )\) .
Therefore, by the definition of the "Zielonka tree", \(\mathit {Inf}(u)\) is accepted if and only if \(\nu (\tau _p)\in \mathcal {F}\) and thus
\( \mathit {Inf}(u)\in \mathcal {F}\quad \Leftrightarrow \quad \nu (\tau _p)\in \mathcal {F}\quad \Leftrightarrow \quad "p_Z"(\tau _p)=p_{\min } \, \text{ is even.} \)
In order to see that \(\mathit {Inf}(u)\subseteq \nu (\tau _p)\) , it suffices to remark that for every branch \(\beta \) of \({\mathit {Subtree}}(\tau _p)\) and for every \(a\notin \nu (\tau _p)\) , we have that \((Zielonka){\mathit {Supp}}(\beta , a)\) is a strict "ancestor" of \(\tau _p\) . Since the nodes \(\tau _{u,i}\) appearing infinitely often are all descendants of \(\tau _p\) , the letter \(a\) cannot belong to \(\mathit {Inf}(u)\) if \(a\notin \nu (\tau _p)\) .
Finally, let us see that \(\mathit {Inf}(u)\nsubseteq \nu (\sigma )\) for every child of \(\tau _p\) . Suppose that \(\mathit {Inf}(u)\subseteq \nu (\sigma )\) for some child \(\sigma \) . Since we visit \(\tau _p\) infinitely often, transitions of the form \(\delta (\beta ,a)\) such that \(\tau _p = {\mathit {Supp}}(\beta ,a)\) take place infinitely often. By definition of \({\mathit {Nextbranch}}\) , after each of these transitions we move to a branch passing through the next child of \(\tau _p\) , so we visit all children of \(\tau _p\) infinitely often. Eventually we will have \(\sigma \in \delta (\beta ,a)\) (the state reached in \(\mathcal {\mathcal {Z}_{\mathcal {F}}}\) will be some branch \(\beta ^{\prime }\) below \(\sigma \) ).
However, since \(\mathit {Inf}(u)\subseteq \nu (\sigma )\) , for every \(a\in \mathit {Inf}(u)\) and every \(\beta ^{\prime }\in {\mathit {Subtree}}(\sigma )\) , we would have that \((Zielonka){\mathit {Supp}}(\beta ^{\prime },a)\) is a descendant of \(\sigma \) , and therefore we would not visit again \(\tau _p\) and the priority \(p_{\min }\) would not be produced infinitely often, a contradiction.
Optimality of the Zielonka tree automaton
We prove in this section the strong optimality of the "Zielonka tree automaton", both for the number of priorities (proposition REF ) and for the size (theorem REF ).
Proposition REF can be proved easily by applying the results of [7]}. We present here a self-contained proof.
Let \(\mathcal {P}\) be a parity "transition system" with set of edges \(E\) and priorities given by \(p:E \rightarrow [\mu ,\eta ]\) , such that the minimal priority it uses is \(\mu \) and the maximal one is \(\eta \) . If the number of different priorities used in \(\mathcal {P}\) (that is, \(|p(E)|\) ) is smaller than or equal to \(\eta -\mu \) , then we can relabel \(\mathcal {P}\) with a parity condition that is "equivalent over" \(\mathcal {P}\) and uses priorities in \([\mu ^{\prime }, \eta ^{\prime }]\) with \(\eta ^{\prime }-\mu ^{\prime } < \eta -\mu \) .
If \(\mathcal {P}\) uses less priorities than the length of the interval \([\mu , \eta ]\) , that means that there is some priority \(d\) , \(\mu <d< \eta \) that does not appear in \(\mathcal {P}\) . Then, we can relabel \(\mathcal {P}\) with the parity condition given by:
\( p^{\prime }(e)=\left\lbrace \begin{array}{c}p(e) \text{ if } p(e)<d\\p(e)-2 \text{ if } d <p(e) \end{array} \right. \)
that is clearly an "equivalent condition over" \(\mathcal {P}\) that uses priorities in \([\mu , \eta -2]\) .
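The proof is constructive: the relabelling can be computed in one pass. A small sketch (the edge-to-priority dictionary is a hypothetical representation of \(p\) ):

```python
def compress(p):
    """If some priority strictly between min and max is unused, shift every
    larger priority down by 2; parities, and hence acceptance, are preserved."""
    used = set(p.values())
    mu, eta = min(used), max(used)
    d = next((x for x in range(mu + 1, eta) if x not in used), None)
    if d is None:
        return p                       # every priority in [mu, eta] is used
    return {e: (q if q < d else q - 2) for e, q in p.items()}
```

Since \(q\) and \(q-2\) have the same parity and the relative order of priorities is preserved on each side of the gap, the resulting condition is equivalent over the transition system, now using priorities in \([\mu , \eta -2]\) .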
[Optimal number of priorities]
The "Zielonka tree" gives the optimal number of priorities recognizing a "Muller condition" \(\mathcal {F}\) . More precisely, if \([\mu ,\eta ]\) are the priorities used by \(\mathcal {\mathcal {Z}_{\mathcal {F}}}\) and \(\mathcal {P}\) is another "parity automaton" recognizing \(\mathcal {F}\) , it uses at least \(\eta -\mu +1\) priorities. Moreover, if it uses priorities in \([\mu ^{\prime },\eta ^{\prime }]\) and \(\eta -\mu = \eta ^{\prime }-\mu ^{\prime }\) , then \(\mu \) and \(\mu ^{\prime }\) have the same parity.
Let \(\mathcal {P}\) be a "deterministic" "parity automaton" recognizing \(\mathcal {F}\) using priorities in \([\mu ^{\prime },\eta ^{\prime }]\) . After lemma REF , we can suppose that \(\mathcal {P}\) uses all priorities in this interval. Let \(\beta \) be a "branch" of \(\mathcal {T}_{\mathcal {F}}\) of maximal length \(h=\eta -\mu +1\) , and let \(S_0\subseteq S_{1} \subseteq \dots \subseteq S_{h-1}=\Gamma \) be the labellings of the nodes of this branch from bottom to top. Let us suppose \(S_0\in \mathcal {F}\) , the case \(S_0\notin \mathcal {F}\) being symmetric.
Let \(a_i\) be a finite word formed by concatenating the colours of \(S_i\) . In particular, \((a_i)^\omega \) is accepted if and only if \(i\) is even. Let \(\eta ^{\prime }\) be the greatest priority appearing in the automaton \(\mathcal {P}\) . We prove by induction on \(j\) that, for every \(v\in \Gamma ^*\) , the "run over"
\((a_0a_1\dots a_jv)^\omega \) in \(\mathcal {P}\)
produces a priority smaller than or equal to \(\eta ^{\prime }-j\) , if \(\eta ^{\prime }\) even, and smaller than or equal to \(\eta ^{\prime }-j-1\) if \(\eta ^{\prime }\) is odd. We do here the case \(\eta ^{\prime }\) even, the case \(\eta ^{\prime }\) odd being symmetric.
For \(j=0\) this is clear, since \(\eta ^{\prime }\) is the greatest priority. For \(j>0\) , if it was not true, the smallest priority produced infinitely often reading \((a_0a_1\dots a_jv)^\omega \) would be strictly greater than \(\eta ^{\prime }-j\) for some \(v\in \Gamma ^*\) . Since \(\eta ^{\prime }-j\) has the same parity as \(j\) and \(S_j\in \mathcal {F}\) if and only if \(j\) is even, the smallest priority produced infinitely often reading \((a_0a_1\dots a_jv)^\omega \) must have the same parity as \(j\) and cannot be \(\eta ^{\prime }-j+1\) , so it is at least \(\eta ^{\prime }-j+2\) . However, by induction hypothesis, the run over \((a_0a_1\dots a_{j-1}w)^\omega \) produces a priority smaller than or equal to \(\eta ^{\prime }-(j-1)\) for every \(w\) , in particular for \(w=a_jv\) , a contradiction.
In particular, taking \(v=\varepsilon \) , we have proved that the "run over"
\((a_0a_1\dots a_{h-1})^\omega \) in \(\mathcal {P}\) produces a priority smaller than or equal to \(\eta ^{\prime }-(h-1)\) that has to be even if and only if \(\mu \) is even. Therefore, \(\mathcal {P}\) must use all priorities in \([\eta ^{\prime }-(h-1),\eta ^{\prime }]\) , that is, at least \(h\) priorities.
In order to prove theorem REF we introduce the definition of an \(X\) -strongly connected component and we present two key lemmas.
Let \(\mathcal {A}=(Q,\Sigma , q_0, \Gamma , \delta ,\mathit {Acc})\) be a "deterministic automaton" and \(X\subseteq \Sigma \) a subset of letters of the input alphabet. An ""X-strongly connected component"" (abbreviated \(X\) -SCC) is a non-empty subset of states \(S\subseteq Q\) such that:
For every state \(q\in S\) and every letter \(x\in X\) , \(\delta (q,x)\in S\) .
For every pair of states \(q,q^{\prime }\in S\) there is a finite word \(w\in X^*\) such that \(\delta (q,w)=q^{\prime }\) .
That is, an "\(X\) -SCC" of \(\mathcal {A}\) is the set of states of an "\(X\) -complete" part of \(\mathcal {A}\) that forms a "strongly connected subgraph".
For every "deterministic automaton" \(\mathcal {A}=(Q,\Sigma , q_0, \Gamma , \delta ,\mathit {Acc})\) and every subset of letters \(X\subseteq \Sigma \) there is an "accessible" "\(X\) -SCC" in \(\mathcal {A}\) .
Restricting ourselves to the set of "accessible" states of \(\mathcal {A}\) we can suppose that every state of the automaton is accessible.
We prove the lemma by induction on \(|\mathcal {A}|\) . For \(|\mathcal {A}|=1\) , the state of the automaton forms an \(X\) -SCC. For \(|\mathcal {A}|>1\) , if \(Q\) is not an "\(X\) -SCC", there are \(q,q^{\prime }\in Q\) such that there does not exist a word \(w\in X^*\) such that \(\delta (q,w)=q^{\prime }\) . Let
\( Q_q=\lbrace p\in Q \; : \; \exists u\in X^* \text{ such that } p= \delta (q,u) \rbrace .\)
Since \(q^{\prime }\notin Q_q\) , the set \(Q_q\) is strictly smaller than \(Q\) . The set \(Q_q\) is non-empty and closed under transitions labelled by letters of \(X\) , so the restriction of \(\mathcal {A}\) to this set of states and the alphabet \(X\) forms an automaton \(\mathcal {A}_{Q_q,X}=(Q_q,X, q, \Gamma , \delta ^{\prime })\) (where \(\delta ^{\prime }\) is the restriction of \(\delta \) to these states and letters). By induction hypothesis, \(\mathcal {A}_{Q_q,X}\) contains an \(X\) -SCC that is also an \(X\) -SCC for \(\mathcal {A}\) .
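The inductive proof is effectively an algorithm: compute the set of states reachable from \(q\) by words over \(X\) ; if some state of that set cannot reach back to all the others, restart from it, which strictly shrinks the set. A sketch (function names ours; \(\delta \) is assumed total on the accessible states):

```python
def x_reach(delta, q, X):
    """States reachable from q by (possibly empty) words over X."""
    seen, stack = {q}, [q]
    while stack:
        s = stack.pop()
        for x in X:
            t = delta(s, x)
            if t not in seen:
                seen.add(t)
                stack.append(t)
    return seen

def x_scc(delta, q0, X):
    """An accessible X-SCC, following the induction of the proof above."""
    q = q0
    while True:
        S = x_reach(delta, q, X)
        bad = next((s for s in S if x_reach(delta, s, X) != S), None)
        if bad is None:
            return S                   # S is X-closed and strongly connected
        q = bad                        # restart from a strictly smaller set
```

Termination follows because each restart replaces \(S\) by a non-empty strict subset, mirroring the induction on \(|\mathcal {A}|\) .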
Let \(\mathcal {F}\) be a "Muller condition" over \(\Gamma \) , \(T_\mathcal {F}\) its "Zielonka tree" and \(\mathcal {P}=(P,\Gamma ,p_0,[\mu ^{\prime },\eta ^{\prime }],\delta _P,p^{\prime }:P\rightarrow [\mu ^{\prime },\eta ^{\prime }])\) a "deterministic" parity automaton recognizing \(\mathcal {F}\) . Let \(\tau \) be a node of \(T_\mathcal {F}\) and \(C=\nu (\tau )\subseteq \Gamma \) its label. Finally, let \(A,B\subseteq C\) be two different subsets maximal such that \(C\in \mathcal {F}\, \Leftrightarrow \, A \notin \mathcal {F}\) , \(C\in \mathcal {F}\, \Leftrightarrow \, B \notin \mathcal {F}\) (they are the labels of two different children of \(\tau \) ). Then, if \(P_A\) and \(P_B\) are two "accessible" "\(A\) -SCC" and "\(B\) -SCC" of \(\mathcal {P}\) respectively, they satisfy \(P_A \cap P_B=\emptyset \) .
We can suppose that \(C\in \mathcal {F}\) and \(A,B \notin \mathcal {F}\) . Suppose that there is a state \(q\in P_A \cap P_B\) . Let \(A=\lbrace a_1,\dots ,a_l\rbrace \) , \(B=\lbrace b_1,\dots ,b_r\rbrace \) and \(q_1=\delta (q,a_1\cdots a_l)\in P_A\) , \(q_2=\delta (q,b_1\cdots b_r)\in P_B\) . By definition of an \(X\) -SCC, there are words \(u_1 \in A^*\) , \(u_2\in B^*\) such that \(\delta (q_1,u_1)=q\) and \(\delta (q_2,u_2)=q\) . Since \(A,B \notin \mathcal {F}\) , the minimum priorities \(p_1\) and \(p_2\) produced by the "runs over" \((a_1\cdots a_lu_1)^\omega \) and \((b_1\cdots b_r u_2)^\omega \) starting from \(q\) are odd. However, the run over \((a_1\cdots a_lu_1b_1\cdots b_r u_2)^\omega \) starting from \(q\) must produce an even minimum priority (since \(A\cup B \in \mathcal {F}\) ), but the minimum priority visited in this run is \(\min \lbrace p_1,p_2 \rbrace \) , which is odd, a contradiction.
[Optimal size of the Zielonka tree automaton]
Every "deterministic" "parity automaton" \(\mathcal {P}=(P,\Gamma ,p_0,[\mu ^{\prime },\eta ^{\prime }],\delta _P,p^{\prime }:P\times \Gamma \rightarrow [\mu ^{\prime },\eta ^{\prime }])\) accepting a "Muller condition" \(\mathcal {F}\) over \(\Gamma \) verifies
\( |"\mathcal {\mathcal {Z}_{\mathcal {F}}}"|\le |\mathcal {P}| .\)
Let \(\mathcal {P}\)
be a deterministic parity automaton accepting \(\mathcal {F}\) . To show \(|\mathcal {\mathcal {Z}_{\mathcal {F}}}|\le |\mathcal {P}|\) we proceed by induction on the number of colours \(|\Gamma |\) . For \(|\Gamma |=1\) the two possible "Zielonka tree automata" have one state, so the result holds. Suppose \(|\Gamma |>1\) and consider the first level of \(\mathcal {T}_{\mathcal {F}}\) .
Let \(n\) be the number of children of the root of \(\mathcal {T}_{\mathcal {F}}\) . For \(i=1,...,n\) , let \(A_i=\nu (\tau _i)\subseteq \Gamma \) be the label of the \(i\) -th child of the root of \(\mathcal {T}_{\mathcal {F}}\) , \(\tau _i\) , and let \(n_i\) be the number of branches of the subtree under \(\tau _i\) , \({\mathit {Subtree}}(\tau _i)\) . We remark that \(|\mathcal {\mathcal {Z}_{\mathcal {F}}}|=\sum _{i=1}^{n}n_i\) . Let \(\mathcal {F}\upharpoonright A_i:=\lbrace F\in \mathcal {F}\; : \; F\subseteq A_i \rbrace \) . Since each \(A_i\) verifies \(|A_i|<|\Gamma |\) and the "Zielonka tree" for \(\mathcal {F}\upharpoonright A_i\) is the subtree of \(\mathcal {T}_{\mathcal {F}}\) under the node \(\tau _i\) , every "deterministic parity automaton" accepting \(\mathcal {F}\upharpoonright A_i\) has at least \(n_i\) states, by induction hypothesis.
Thanks to lemma REF , for each \(i=1,\dots ,n\) there is an accessible "\(A_i\) -SCC" in \(\mathcal {P}\) , called \(P_i\) . Therefore, the restriction of \(\mathcal {P}\) to \(P_i\) (with an arbitrary initial state) is an automaton recognising \(\mathcal {F}\upharpoonright A_i\) . By induction hypothesis, for each \(i=1,...,n \) , \(|P_i|\ge n_i\) . Thanks to lemma REF , we know that for every \(i,j\in \lbrace 1,\dots ,n\rbrace ,\; i \ne j\) , \(P_i \cap P_j = \emptyset \) . We deduce that
\(|\mathcal {P}|\ge \sum \limits _{i=1}^{n}|P_i|\ge \sum \limits _{i=1}^{n}n_i=|\mathcal {\mathcal {Z}_{\mathcal {F}}}| .\)
The Zielonka tree of some classes of acceptance conditions
In this section we present some results proven by Zielonka in [4]} that show how we can use the "Zielonka tree" to decide whether a "Muller condition" is representable by a "Rabin", "Streett" or "parity" condition. These results are generalized to "transition systems" in section REF .
We first introduce some definitions. The terminology will be justified by the upcoming propositions.
Given a tree \(T\) and a function assigning priorities to nodes, \(p:T\rightarrow \mathbb {N} \) , we say that \((T,p)\) has
""Rabin shape"" if every node with an even priority assigned ("round" node) has at most one child.
""Streett shape"" if every node with an odd priority assigned ("square" node) has at most one child.
""Parity shape"" if every node has at most one child.
Let \(\mathcal {F}\subseteq \mathcal {P}( \Gamma )\) be a "Muller condition". The following conditions are equivalent:
\(\mathcal {F}\) is "equivalent" to a "Rabin condition".
The family \(\mathcal {F}\) is closed under intersection.
\("T_\mathcal {F}"\) has "Rabin shape".
Let \(\mathcal {F}\subseteq \mathcal {P}( \Gamma )\) be a "Muller condition". The following conditions are equivalent:
\(\mathcal {F}\) is "equivalent" to a "Streett condition".
The family \(\mathcal {F}\) is closed under union.
\("T_\mathcal {F}"\) has "Streett shape".
Let \(\mathcal {F}\subseteq \mathcal {P}( \Gamma )\) be a "Muller condition". The following conditions are equivalent:
\(\mathcal {F}\) is "equivalent" to a "parity condition".
The family \(\mathcal {F}\) is closed under union and intersection.
\("T_\mathcal {F}"\) has "parity shape".
Moreover, if any of these conditions holds, \(\mathcal {F}\) is "equivalent" to a "\([1,\eta ]\) -parity condition" (resp. "\([0,\eta -1]\) -parity condition") if and only if \({\mathit {Height}}("T_\mathcal {F}")\le \eta \) and, in case of equality, \("T_\mathcal {F}"\) is (Zielonka)odd (resp. (Zielonka)even).
A "Muller condition" \(\mathcal {F}\subseteq \mathcal {P}(\Gamma )\) is equivalent to a parity condition if and only if it is equivalent to both Rabin and Streett conditions.
In figures REF and REF we represent "Zielonka trees" for some examples of "parity" and "Rabin" conditions.
We remark that for a fixed number of "Rabin" (or "Streett") pairs we can obtain "Zielonka trees" of very different shapes, ranging from a single branch (for "Rabin chain conditions") to a tree with a branch for each "Rabin pair" and height 3.
<FIGURE>
An optimal transformation of Muller into parity transition systems
In this section we present our main contribution: an optimal transformation of "Muller" "transition systems" into "parity" transition systems. Firstly, we formalise what we mean by “a transformation” using the notion of "locally bijective morphisms" in section REF .
Then, we describe a transformation from a "Muller transition system" to a parity one. Most transformations found in the literature use the "composition" of the "transition system" by a parity automaton recognising the Muller condition (such as \("\mathcal {Z}_\mathcal {F}"\) ). In order to achieve optimality this does not suffice, we need to take into account the structure of the transition system. Following ideas already present in [3]}, we analyse the alternating chains of accepting and rejecting cycles of the transition system. We arrange this information in a collection of "Zielonka trees" obtaining a data structure, the "alternating cycle decomposition", that subsumes all the structural information of the transition system necessary to determine whether a "run" is accepted or not. We present the "alternating cycle decomposition" in section REF and we show how to use this structure to obtain a parity transition system that mimics the former Muller one in section REF .
In section REF we prove the optimality of this construction. More precisely, we prove that if \(\mathcal {P}\) is a parity transition system that admits a "locally bijective morphism" to a Muller transition system \(\mathcal {T}\) , then the transformation of \(\mathcal {T}\) using the alternating cycle decomposition provides a smaller transition system than \(\mathcal {P}\) , using fewer priorities.
Locally bijective morphisms as witnesses of transformations
We start by defining locally bijective morphisms.
Let \(\mathcal {T}=(V,E,\mathit {Source},\mathit {Target},I_0,\mathit {Acc})\) and \(\mathcal {T}^{\prime }=(V^{\prime },E^{\prime },\mathit {Source}^{\prime },\mathit {Target}^{\prime },I_0^{\prime },\mathit {Acc}^{\prime })\) be two "transition systems". A ""morphism of transition systems"", written \(\varphi : \mathcal {T}\rightarrow \mathcal {T}^{\prime }\) , is a pair of maps \((\varphi _V: V \rightarrow V^{\prime }, \varphi _E: E \rightarrow E^{\prime })\) such that:
\(\varphi _V(v_0)\in I_0^{\prime }\) for every \(v_0\in I_0\) (initial states are preserved).
\(\mathit {Source}^{\prime }(\varphi _E(e))=\varphi _V(\mathit {Source}(e))\) for every \(e\in E\) (origins of edges are preserved).
\(\mathit {Target}^{\prime }(\varphi _E(e))=\varphi _V(\mathit {Target}(e))\) for every \(e\in E\) (targets of edges are preserved).
For every "run" \(\varrho \in {\mathpzc {Run}}_{,\varrho \in \mathit {Acc}\; \Leftrightarrow \; \varphi _E(\varrho ) \in \mathit {Acc}^{\prime } (acceptance condition is preserved).}\) If \((l_V,l_E)\) , \((,l_V^{\prime },l_E^{\prime })\) are "labelled transition systems", we say that \(\varphi \) is a ""morphism of labelled transition systems"" if in addition it verifies
\(l_V^{\prime }(\varphi _V(v))=l_V(v)\) for every \(v\in V\) (labels of states are preserved).
\(l_E^{\prime }(\varphi _E(e))=l_E(e)\) for every \(e\in E\) (labels of edges are preserved).
We remark that it follows from the first three conditions that if \(\varrho \in {\mathpzc {Run}}_{\mathcal {T}}\) is a "run" in \(\mathcal {T}\) , then \(\varphi _E(\varrho )\in {\mathpzc {Run}}_{\mathcal {T}^{\prime }}\) (it is a "run" in \(\mathcal {T}^{\prime }\) starting from some initial vertex). Given a "morphism of transition systems" \(\varphi =(\varphi _V,\varphi _E)\) , we will denote both maps by \(\varphi \) whenever no confusion arises. We extend \(\varphi _E\) to \(E^*\) and \(E^\omega \) componentwise.
A "morphism of transition systems" \(\varphi =(\varphi _V, \varphi _E)\) is unequivocally characterized by the map \(\varphi _E\) . Nevertheless, it is convenient to keep the notation with both maps.
Given two "transition systems" \((V,E,\mathit {Source},\mathit {Target},I_0,\mathit {Acc})\) , \(=(V^{\prime },E^{\prime },\mathit {Source}^{\prime },\mathit {Target}^{\prime },I_0^{\prime },\mathit {Acc}^{\prime })\) , a "morphism of transition systems" \(\varphi : \) is called
""Locally surjective"" if
For every \(v_0^{\prime }\in I_0^{\prime }\) there exists \(v_0\in I_0\) such that \(\varphi (v_0)=v_0^{\prime }\) .
For every \(v\in V\) and every \( e^{\prime }\in E^{\prime }\) such that \( \mathit {Source}^{\prime }(e^{\prime })=\varphi (v)\)
there exists \(e\in E \) such that \( \varphi (e)=e^{\prime } \) and \( \mathit {Source}(e)=v\) .
"Locally injective" if
For every \(v_0^{\prime }\in I_0^{\prime }\) , there is at most one \(v_0\in I_0\) such that \(\varphi (v_0)=v_0^{\prime }\) .
For every \( v\in V\) and every \( e^{\prime }\in E^{\prime } \) such that \( \mathit {Source}^{\prime }(e^{\prime })=\varphi (v) \)
if there are \( e_1,e_2\in E \) such that \( \varphi (e_i)=e^{\prime }\) and \( \mathit {Source}(e_i)=v\) , for \( i=1,2 \) , then \( e_1=e_2 \) .
"Locally bijective" if it is both "locally surjective" and "locally injective".
Equivalently, a "morphism of transition systems" \(\varphi \) is "locally surjective" (resp. injective) if the restriction of \(\varphi _E\) to \("\mathit {Out}"(v)\) is a surjection (resp. an injection) into \("\mathit {Out}"(\varphi (v))\) for every \(v\in V\) and the restriction of \(\varphi _V\) to \(I_0\) is a surjection (resp. an injection) into \(I_0^{\prime }\) .
If we only consider the "underlying graph" of a "transition system", without the "accepting condition", the notion of "locally bijective morphism" is equivalent to the usual notion of bisimulation. However, when considering the accepting condition, we only impose that the acceptance of each "run" must be preserved (and not that the colouring of each transition is preserved). This allows us to compare transition systems using different classes of accepting conditions.
We state two simple, but key facts.
If \(\varphi : \mathcal {T}\rightarrow \mathcal {T}^{\prime }\) is a "locally bijective morphism", then \(\varphi \) induces a bijection between the runs in \({\mathpzc {Run}}_{\mathcal {T}}\) and \({\mathpzc {Run}}_{\mathcal {T}^{\prime }}\) that preserves their acceptance.
If \(\varphi \) is a "locally surjective morphism", then it is onto the "accessible part" of \(\) . That is, for every "accessible" state \(v^{\prime }\in \) , there exists some state \(v\in such that \) V(v)=v'\(. In particular if every state of \)\( is "accessible", \)\( is surjective.\)
Intuitively, if we transform a "transition system" \(\mathcal {T}_1\) into \(\mathcal {T}_2\) “without adding non-determinism”, we will have a locally bijective morphism \(\varphi : \mathcal {T}_2 \rightarrow \mathcal {T}_1\) . In particular, if we consider the "composition" \(\mathcal {T}_2=\mathcal {B}\lhd \mathcal {T}_1\) of \(\mathcal {T}_1\) by some "deterministic automaton" \(\mathcal {B}\) , as defined in section , the projection over \(\mathcal {T}_1\) gives a "locally bijective morphism" from \(\mathcal {T}_2\) to \(\mathcal {T}_1\) .
""""
Let \(\mathcal {A}\) be the "Muller automaton" presented in the example REF , and \(\mathcal {Z}_{\mathcal {F}_1}\) the "Zielonka tree automaton" for its Muller condition \(\mathcal {F}_1=\lbrace \lbrace a\rbrace ,\lbrace b\rbrace \rbrace \) as in the figure REF . We show them in figure REF and their "composition" \(\mathcal {\mathcal {Z}_{\mathcal {F}}}\lhd \mathcal {A}\) in figure REF . If we name the states of \(\mathcal {A}\) with the letters \(A\) and \(B\) , and those of \(\mathcal {Z}_{\mathcal {F}_1}\) with \(\alpha ,\beta \) , there is a locally bijective morphism \(\varphi : \mathcal {\mathcal {Z}_{\mathcal {F}}}\lhd \mathcal {A}\rightarrow \mathcal {A}\) given by the projection on the first component
\( \varphi _V((X,y))=X \; \text{ for } X\in \lbrace A,B\rbrace ,\, y\in \lbrace \alpha ,\beta \rbrace \)
and \(\varphi _E\) associates to each edge \(e\in \mathit {Out}(X,y)\) labelled by \(a\in \lbrace 0,1\rbrace \) the only edge in \(\mathit {Out}(X)\) labelled with \(a\) .
<FIGURE><FIGURE>We know that \(\mathcal {Z}_{\mathcal {F}_1}\) is a minimal automaton recognizing the "Muller condition" \(\mathcal {F}_1\) (theorem REF ). However, the "composition" \(\mathcal {Z}_{\mathcal {F}_1} \lhd \mathcal {A}\) has 4 states, and in example REF (figure REF ) we have shown a parity automaton recognizing \(\mathcal {L}(\mathcal {A})\) with only 3 states. Moreover, there is a "locally bijective" morphism from this smaller parity automaton to \(\mathcal {A}\) (we only have to send the two states on the left to \(A\) and the state on the right to \(B\) ). In the next section we will show a transformation that produces this parity automaton with only 3 states starting from \(\mathcal {A}\) .
Morphisms of automata and games
Before presenting the optimal transformation of Muller transition systems, we will state some facts about "morphisms" in the particular case of "automata" and "games". When we speak about a "morphism" between two automata, we always refer implicitly to the morphism between the corresponding "labelled transition systems", as explained in "example REF ".
A "morphism" \(\varphi =(\varphi _V,\varphi _E)\) between two "deterministic automata" is always "locally bijective" and it is completely characterized by the map \(\varphi _V\) .
For each letter of the input alphabet and each state, there must be one and only one outgoing transition labelled with this letter.
Let \(\mathcal {A}=(Q,\Sigma , I_0, \Gamma , \delta , \mathit {Acc})\) , \(\mathcal {A}^{\prime }=(Q^{\prime },\Sigma , I_0^{\prime }, \Gamma , \delta ^{\prime }, \mathit {Acc}^{\prime })\) be two (possibly non-deterministic) "automata". If there is a "locally surjective morphism" \(\varphi : \mathcal {A}\rightarrow \mathcal {A}^{\prime }\) , then \("\mathcal {L}(\mathcal {A})"="\mathcal {L}(\mathcal {A}^{\prime })"\) .
Let \(u\in \Sigma ^\omega \) . If \(u\in \mathcal {L}(\mathcal {A})\) there is an accepting run, \(\varrho \) , over \(u\) in \(\mathcal {A}\) . By the definition of a "morphism of labelled transition systems", \(\varphi (\varrho )\) is also an accepting "run over \(u\) " in \(\mathcal {A}^{\prime }\) .
Conversely, if \(u\in \mathcal {L}(\mathcal {A}^{\prime })\) there is an accepting "run over \(u\) " \(\varrho ^{\prime }\) in \(\mathcal {A}^{\prime }\) . Since \(\varphi \) is locally surjective there is a run \(\varrho \) in \(\mathcal {A}\) , such that \(\varphi (\varrho )=\varrho ^{\prime }\) , and therefore \(\varrho \) is an accepting run over \(u\) .
The converse of the previous proposition does not hold: \("\mathcal {L}(\mathcal {A})"=\mathcal {L}(\mathcal {A}^{\prime })\) does not imply the existence of morphisms \(\varphi : \mathcal {A}\rightarrow \mathcal {A}^{\prime }\) or \(\varphi : \mathcal {A}^{\prime } \rightarrow \mathcal {A}\) , even if \(\mathcal {A}\) has minimal size among the Muller automata recognizing \(\mathcal {L}(\mathcal {A})\) .
If \(\mathcal {A}\) and \(\mathcal {A}^{\prime }\) are "non-deterministic automata" and \(\varphi :\mathcal {A}\rightarrow \mathcal {A}^{\prime }\) is a "locally bijective morphism", then \(\mathcal {A}\) and \(\mathcal {A}^{\prime }\) have to share some other important semantic properties. Two classes of automata that have been extensively studied are unambiguous and good for games automata. An automaton is ""unambiguous"" if for every input word \(w\in \Sigma ^\omega \) there is at most one accepting "run over" \(w\) . ""Good for games"" automata (GFG), first introduced by Henzinger and Piterman in [10]}, are automata that can resolve their non-determinism depending only on the prefix of the word read so far. These types of automata have many good properties and have been used in different contexts (for example, in the model checking of LTL formulas [11]} or in the theory of cost functions [12]}). Unambiguous automata can recognize \(\omega \) -regular languages using a "Büchi" condition (see [13]}), and GFG automata have strictly more expressive power than deterministic ones, being in some cases exponentially smaller (see [14]}, [15]}).
We omit the proof of the next proposition; it is a consequence of fact REF and of the argument from the proof of proposition REF .
Let \(\mathcal {A}\) and \(\mathcal {A}^{\prime }\) be two "non-deterministic automata". If \(\varphi :\mathcal {A}\rightarrow \mathcal {A}^{\prime }\) is a "locally bijective morphism", then
\(\mathcal {A}\) is unambiguous if and only if \(\mathcal {A}^{\prime }\) is unambiguous.
\(\mathcal {A}\) is GFG if and only if \(\mathcal {A}^{\prime }\) is GFG.
Having a "locally bijective morphism" between two games implies that the "winning regions" of the players are preserved.
Let \(\mathcal {G}=(V, E, \mathit {Source}, \mathit {Target}, v_0, \mathit {Acc}, l_V)\) and \(\mathcal {G}^{\prime }=(V^{\prime },E^{\prime }, \mathit {Source}^{\prime }, \mathit {Target}^{\prime }, v_0^{\prime }, \mathit {Acc}^{\prime }, l_V^{\prime })\) be two "games" such that there is a "locally bijective morphism" \(\varphi :\mathcal {G}\rightarrow \mathcal {G}^{\prime }\) . Let \(P\in \lbrace Eve, Adam\rbrace \) be a player in those games. Then, \(P\) wins \(\mathcal {G}\) if and only if she/he wins \(\mathcal {G}^{\prime }\) . Moreover, if \(\varphi \) is surjective, the "winning region" of \(P\) in \(\mathcal {G}^{\prime }\) is the image by \(\varphi \) of her/his winning region in \(\mathcal {G}\) , \("\mathcal {W}_P"(\mathcal {G}^{\prime })=\varphi ("\mathcal {W}_P"(\mathcal {G}))\) .
Let \(S_P: {\mathpzc {Run}}_{\mathcal {G}}\cap E^* \rightarrow E\) be a winning "strategy" for player \(P\) in \(\mathcal {G}\) . Then, it is easy to verify that the strategy \(S_P^{\prime }: {\mathpzc {Run}}_{\mathcal {G}^{\prime }}\cap E^{\prime *} \rightarrow E^{\prime }\) defined as
\( S_P^{\prime }(\varrho ^{\prime }) = \varphi _E ( S_P(\varphi ^{-1}(\varrho ^{\prime }))) \)
is a winning "strategy" for \(P\) in \(\mathcal {G}^{\prime }\) . (Remark that thanks to fact REF , the morphism \(\varphi \) induces a bijection over "runs", allowing us to use \(\varphi ^{-1}\) in this case).
Conversely, if \(S_P^{\prime }: {\mathpzc {Run}}_{\mathcal {G}^{\prime }}\cap E^{\prime *} \rightarrow E^{\prime }\) is a winning "strategy" for \(P\) in \(\mathcal {G}^{\prime }\) , then \( S_P(\varrho ) = \varphi _E^{-1} ( S_P^{\prime }(\varphi (\varrho ))) \)
is a winning "strategy" for \(P\) in \(\mathcal {G}\) . Here \( \varphi _E^{-1} (e^{\prime })\) is the only edge \(e\in E\) in \("\mathit {Out}"(\mathit {Target}("\mathit {Last}"(\varrho )))\) such that \(\varphi _E(e)=e^{\prime }\) .
The equality \("\mathcal {W}_P"(\mathcal {G}^{\prime })=\varphi ("\mathcal {W}_P"(\mathcal {G}))\) stems from the fact that if we choose a different initial vertex \(v_1\) in \(\mathcal {G}\) , then \(\varphi \) is a "locally bijective morphism" to the game \(\mathcal {G}^{\prime }\) with initial vertex \(\varphi (v_1)\) . Conversely, if we take a different initial vertex \(v_1^{\prime }\) in \(\mathcal {G}^{\prime }\) , since \(\varphi \) is surjective we can take a vertex \(v_1\in \varphi ^{-1}(v_1^{\prime })\) , and \(\varphi \) remains a locally bijective morphism between the resulting games.
The alternating cycle decomposition
Most transformations of "Muller" into "parity" "transition systems" are based on the "composition" by some automaton converting the Muller condition into a parity one. These transformations act on the totality of the system uniformly, regardless of the local structure of the system and the "acceptance condition".
The transformation we introduce in this section takes into account the interplay between the particular "acceptance condition" and the "transition system", inspired by the alternating chains introduced in [3]}.
In the following we will consider "Muller transition systems" with the Muller acceptance condition using edges as colours. We can always suppose this: given a transition system \(\mathcal {T}\) with edges coloured by \(\gamma : E \rightarrow C\) and a Muller condition \(\mathcal {F}\subseteq \mathcal {P}(C)\) , the condition \(\mathcal {F}^{\prime }\subseteq \mathcal {P}(E)\) defined as \(A\in \mathcal {F}^{\prime } \; \Leftrightarrow \; \gamma (A)\in \mathcal {F}\) is an "equivalent condition over" \(\mathcal {T}\) . However, the size of the representation of the condition \(\mathcal {F}\) might change. Making this assumption corresponds to considering what are called explicit Muller conditions. In particular, solving Muller games with explicit Muller conditions is in \(\mathrm {PTIME}\) [1]}, while solving general Muller games is \(\mathrm {PSPACE}\) -complete [18]}.
Given a "transition system" \((V,E,\mathit {Source},\mathit {Target},I_0, \mathit {Acc})\) , a loop is a subset of edges \(l\subseteq E\) such that it exists \(v\in V\) and a finite "run" \(\varrho \in {\mathpzc {Run}}_{T,v}\) such that \("\mathit {First}"(\varrho )="\mathit {Last}"(\varrho )=v\) and \({\mathit {App}}(\varrho )=l\) . The set of "loops" of \( is denoted \) Loop(\(.For a "loop" \) lLoop(\( we write\)\( ""\mathit {States}""(l):= \lbrace v\in V \; : \; \exists e\in l, \; \mathit {Source}(e)=v \rbrace .\)\(\)
Observe that there is a natural partial order on the set \({\mathpzc {Loop}}(\mathcal {T})\) given by set inclusion.
If \(l\) is a "loop" in \({\mathpzc {Loop}}(\mathcal {T})\) , for every \(q\in {\mathit {States}}(l)\) there is a run \(\varrho \in {\mathpzc {Run}}_{\mathcal {T},q}\) such that \(\mathit {App}(\varrho )=l\) .
The maximal loops of \({\mathpzc {Loop}}(\mathcal {T})\) (for set inclusion) are disjoint and in one-to-one correspondence with the "strongly connected components" of \(\mathcal {T}\) .
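This correspondence gives an effective way to compute the maximal loops: they are exactly the edge sets of the strongly connected components, keeping only components containing at least one edge. Below is a sketch using Kosaraju's algorithm, with the same hypothetical edge-map encoding as before (ours, not the paper's notation).

```python
def maximal_loops(edges):
    """edges: dict edge-name -> (source, target).
    Returns the maximal loops, i.e. the edge sets of the SCCs."""
    verts = {s for s, t in edges.values()} | {t for s, t in edges.values()}
    adj = {v: [] for v in verts}
    radj = {v: [] for v in verts}
    for e, (s, t) in edges.items():
        adj[s].append(t)
        radj[t].append(s)
    seen, order = set(), []

    def dfs(v, g, out):
        # iterative DFS appending vertices in post-order to `out`
        stack = [(v, iter(g[v]))]
        seen.add(v)
        while stack:
            u, it = stack[-1]
            for w in it:
                if w not in seen:
                    seen.add(w)
                    stack.append((w, iter(g[w])))
                    break
            else:
                stack.pop()
                out.append(u)

    for v in verts:                 # first pass: post-order on the graph
        if v not in seen:
            dfs(v, adj, order)
    seen = set()                    # second pass: on the reversed graph
    comp = {}
    for v in reversed(order):
        if v not in seen:
            members = []
            dfs(v, radj, members)
            for u in members:
                comp[u] = v
    scc_edges = {}
    for e, (s, t) in edges.items():
        if comp[s] == comp[t]:      # edge lies inside one SCC
            scc_edges.setdefault(comp[s], set()).add(e)
    return list(scc_edges.values())
```

A self-loop on a trivial SCC is kept (it is a loop), while a vertex without a self-loop contributes nothing, matching the fact above.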
[Alternating cycle decomposition]
Let \(\mathcal {T}=(V,E,\mathit {Source},\mathit {Target},I_0, \mathcal {F})\) be a "Muller transition system" with "acceptance condition" given by \(\mathcal {F}\subseteq \mathcal {P}(E)\) . The alternating cycle decomposition (abbreviated ACD) of \(\mathcal {T}\) , noted \({\mathcal {ACD}}(\mathcal {T})\) , is a family of "labelled trees" \((t_1,\nu _1),\dots , (t_r,\nu _r)\) with nodes labelled by "loops" in \({\mathpzc {Loop}}(\mathcal {T})\) , \(\nu _i: t_i\rightarrow {\mathpzc {Loop}}(\mathcal {T})\) . We define it inductively as follows:
Let \(\lbrace l_1,\dots , l_r\rbrace \) be the set of maximal loops of \({\mathpzc {Loop}}(\mathcal {T})\) . For each \(i\in \lbrace 1,\dots , r\rbrace \) we consider a "tree" \(t_i\) and define \(\nu _i(\varepsilon )=l_i\) .
Given an already defined node \(\tau \) of a tree \(t_i\) , we consider the maximal loops of the set
\(\lbrace l\subseteq \nu _i(\tau ) \; : \; l\in {\mathpzc {Loop}}(\mathcal {T}) \text{ and } (l \in \mathcal {F}\; \Leftrightarrow \; \nu _i(\tau ) \notin \mathcal {F})\rbrace \)
and for each of these loops \(l\) we add a child to \(\tau \) in \(t_i\) labelled by \(l\) .
For notational convenience we add a special "tree" \((t_0,\nu _0)\) with a single node \(\varepsilon \) labelled with the edges not appearing in any other tree of the forest, i.e., \(\nu _0(\varepsilon )=E \setminus \bigcup _{i=1}^{r}l_i\) (remark that this is not a "loop").
We define \(\mathit {States}(\nu _0(\varepsilon )):= V\setminus \bigcup _{i=1}^{r}\mathit {States}(l_i)\) (remark that this does not follow the general definition of \("\mathit {States}"()\) for loops).
We call the trees \(t_1,\dots , t_r\) the ""proper trees"" of the "alternating cycle decomposition" of \(\mathcal {T}\) . Given a node \(\tau \) of \(t_i\) , we note \(\mathit {States}_i(\tau ):=\mathit {States}(\nu _i(\tau ))\) .
As for the "Zielonka tree", the "alternating cycle decomposition" of \( is not unique, since it depends on the order in which we introduce the children of each node. This will not affect the upcoming results, and we will refer to it as ``the^{\prime \prime } alternating cycle decomposition of \) .
For the rest of the section we fix a "Muller transition system" \(\mathcal {T}=(V,E,\mathit {Source},\mathit {Target}, I_0, \mathcal {F})\) with the "alternating cycle decomposition" given by \((t_0,\nu _0), (t_1,\nu _1),\dots , (t_r,\nu _r)\) .
The "Zielonka tree" for a "Muller condition" \(\mathcal {F}\) over the set of colours \(C\) can be seen as a special case of this construction, for the automaton with a single state, input alphabet \(C\) , a transition for each letter in \(C\) and "acceptance condition" \(\mathcal {F}\) .
Each state and edge of \(\mathcal {T}\) appears in exactly one of the "trees" of \({\mathcal {ACD}}(\mathcal {T})\) .
The ""index"" of a state \(q\in V\) (resp. of an edge \(e\in E\) ) in \({\mathcal {ACD}}(\) is the only number \(j\in \lbrace 0,1,\dots ,r\rbrace \) such that \(q\in "\mathit {States}"_j(\varepsilon )\) (resp. \(e \in \nu _j(\varepsilon )\) ).
For each state \(q\in V\) of "index" \(j\) we define the ""subtree associated to the state \(q\) "" as the "subtree" \(t_q\) of \(t_j\) consisting of the set of nodes \(\lbrace \tau \in t_j \; : \; q\in {\mathit {States}}_j(\tau ) \rbrace \) .
We refer to figures REF and REF for an example of \("t_q"\) .
For each "proper tree" \(t_i\) of \("{\mathcal {ACD}}" (\) we say that \(t_i\) is (ACD)even if \(\nu _i(\varepsilon )\in \mathcal {F}\) and that it is (ACD)odd if \(\nu _i(\varepsilon )\notin \mathcal {F}\) .
We say that the "alternating cycle decomposition" of \( is \emph {even} if all the trees of maximal "height" of \) ACD(\( are even; that it is \emph {odd} if all of them are odd, and that it is \emph {(ACD){ambiguous}} if there are even and odd trees of maximal "height".\)
For each \(\tau \in t_i\) , \(i=1,\dots ,r\) , we define the priority of \(\tau \) in \(t_i\) , written \(p_i(\tau )\) , as follows:
If \({\mathcal {ACD}}(\mathcal {T})\) is even or ambiguous:
If \(t_i\) is even (\(\nu _i(\varepsilon )\in \mathcal {F}\) ), then \(p_i(\tau ):={\mathit {Depth}}(\tau )=|\tau |\) .
If \(t_i\) is odd (\(\nu _i(\varepsilon )\notin \mathcal {F}\) ), then \(p_i(\tau ):={\mathit {Depth}}(\tau )+1=|\tau |+1\) .
If \({\mathcal {ACD}}(\mathcal {T})\) is odd:
If \(t_i\) is even (\(\nu _i(\varepsilon )\in \mathcal {F}\) ), then \(p_i(\tau ):={\mathit {Depth}}(\tau )+2=|\tau |+2\) .
If \(t_i\) is odd (\(\nu _i(\varepsilon )\notin \mathcal {F}\) ), then \(p_i(\tau ):={\mathit {Depth}}(\tau )+1=|\tau |+1\) .
For \(i=0\) , we define \(p_0(\varepsilon )=0\) if \({\mathcal {ACD}}(\mathcal {T})\) is even or ambiguous, and \(p_0(\varepsilon )=1\) if \({\mathcal {ACD}}(\mathcal {T})\) is odd.
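The case distinction above can be written down directly. In the sketch below (an illustration with our own encoding), each proper tree is a nested list of children and comes with a flag telling whether its root loop is accepting; the result assigns to every depth of every tree the priority of its nodes.

```python
def tree_height(tree):
    # a tree is the list of its children, themselves trees; a leaf is []
    return 1 + (max(map(tree_height, tree)) if tree else 0)

def acd_priorities(trees):
    """trees: list of (accepting, tree) pairs, `accepting` telling
    whether the root loop of that proper tree belongs to F.
    Returns the parity of the whole ACD and, for each tree, a dict
    depth -> priority (root = depth 0)."""
    hmax = max(tree_height(t) for _, t in trees)
    tallest = [acc for acc, t in trees if tree_height(t) == hmax]
    if all(tallest):
        kind = "even"
    elif not any(tallest):
        kind = "odd"
    else:
        kind = "ambiguous"
    per_tree = []
    for acc, t in trees:
        # the case distinction from the definition above
        if kind == "odd":
            shift = 2 if acc else 1
        else:                      # even or ambiguous
            shift = 0 if acc else 1
        per_tree.append({d: d + shift for d in range(tree_height(t))})
    return kind, per_tree
```

With an even tree of height 2 next to a taller odd tree of height 3, the even tree is labelled from 2, so only three priorities are used, as in the example discussed below.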
The assignment of priorities to nodes produces a labelling of the levels of each tree. It will be used to determine the priorities needed by a parity "transition system" to simulate \(\mathcal {T}\) . The distinction between the cases \({\mathcal {ACD}}(\mathcal {T})\) even or odd is added only to obtain the minimal number of priorities in every case.
In figure REF we represent a "transition system" \((V,E,\mathit {Source},\mathit {Target},q_0,\mathcal {F})\) with \(V=\lbrace q_0,q_1,q_2,q_3,q_4,q_5\rbrace \) , \(E=\lbrace a,b,\dots ,j,k\rbrace \) and using the "Muller condition"
\(\mathcal {F}=\lbrace \lbrace c,d,e \rbrace ,\lbrace e \rbrace ,\lbrace g,h,i \rbrace ,\lbrace l \rbrace ,\lbrace h,i,j,k \rbrace ,\lbrace j,k \rbrace \rbrace .\)
It has 2 strongly connected components (with vertices \(S_1=\lbrace q_1,q_2\rbrace , S_2=\lbrace q_3,q_4,q_5\rbrace \) ), and a vertex \(q_0\) that does not belong to any strongly connected component.
The "alternating cycle decomposition" of this transition system is shown in figure REF . It consists of two proper "trees", \(t_1\) and \(t_2\) , corresponding to the strongly connected components of \( and the tree \) t0\( that corresponds to the edges not appearing in the strongly connected components.\) We observe that \({\mathcal {ACD}}(\) is (ACD)odd (\(t_2\) is the highest tree, and it starts with a non-accepting "loop"). It is for this reason that we start labelling the levels of \(t_1\) from 2 (if we had assigned priorities \(0,1\) to the nodes of \(t_1\) we would have used 4 priorities, when only 3 are strictly necessary).
In figure REF we show the "subtree associated to" \(q_4\) .
<FIGURE><FIGURE><FIGURE>
The alternating cycle decomposition transformation
We proceed to show how to use the "alternating cycle decomposition" of a "Muller transition system" to obtain a "parity transition system". Let \(\mathcal {T}=(V,E,\mathit {Source},\mathit {Target},I_0, \mathcal {F})\) be a "Muller transition system" and \((t_0,\nu _0), (t_1, \nu _1),\dots , (t_r,\nu _r)\) its "alternating cycle decomposition".
First, we adapt the definitions of \(\mathit {Supp}\) and \(\mathit {Nextbranch}\) to the setting with multiple trees.
For an edge \(e\in E\) such that \(\mathit {Target}(e)\) has "index" \(j\) , for \(i\in \lbrace 0,1,\dots ,r\rbrace \) and a branch \(\beta \) in some subtree of \(t_i\) , we define the ""support"" of \(e\) from \(\beta \) as:
\( {\mathit {Supp}}(\beta ,i,e)={\left\lbrace \begin{array}{ll}\text{the maximal node (for } {\sqsubseteq }\text{) } \tau \in \beta \text{ such that } e\in \nu _i(\tau ), & \text{if } i= j ,\\[2mm]\text{the root } \varepsilon \text{ of } t_j, & \text{if } i\ne j.\end{array}\right.} \)
Intuitively, \({\mathit {Supp}}(\beta ,i,e)\) is the highest node we visit if we want to go from the bottom of the branch \(\beta \) to a node of the tree that contains \(e\) “in an optimal trajectory” (going up as little as possible). If we have to jump to another tree, we define \({\mathit {Supp}}(\beta ,i,e)\) as the root of the destination tree.
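Under a hypothetical encoding of ours — a branch given as the list of its nodes from the root to the leaf, and \(\nu _i\) as a map from nodes to loops — the "support" can be computed by a single backwards scan of the branch:

```python
def supp(branch, i, j, nu_i, e):
    """Support of edge e from branch `branch` of tree t_i, where j is
    the index of Target(e).  `branch` lists the nodes of t_i from the
    root to a leaf, and nu_i maps each node to its loop (a set of edge
    names).  The string "root" stands for the root of the tree t_j."""
    if i != j:
        return "root"              # jump to the destination tree
    # deepest node on the branch whose loop contains e
    for tau in reversed(branch):
        if e in nu_i[tau]:
            return tau
    # defensive: the root always qualifies, since it is labelled by a
    # maximal loop containing every edge of index i
    return branch[0]
```

The backwards scan mirrors the intuition above: we go up the branch as little as possible until a node containing \(e\) is found.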
Let \(i\in \lbrace 0,1,\dots ,r\rbrace \) , \(q\) be a state of "index" \(i\) , \(\beta \) be a branch of some "subtree" of \(t_i\) and \(\tau \in \beta \) be a node of \(t_i\) such that \(q\in {\mathit {States}}_i(\tau )\) . If \(\tau \) is not the deepest node of \(\beta \) , let \(\sigma _\beta \) be the unique child of \(\tau \) in \(t_i\) such that \(\sigma _\beta \in \beta \) . We define:
\( {\mathit {Nextchild}_{t_q}}(\beta ,\tau )={\left\lbrace \begin{array}{ll}\tau , & \text{if } \tau \text{ is a leaf in } {t_q},\\[3mm]\text{the smallest older sibling of } \sigma _\beta \text{ in } {t_q}, & \text{if } \sigma _\beta \text{ is defined and has an older sibling in } {t_q},\\[3mm]\text{the smallest child of } \tau \text{ in } {t_q}, & \text{in any other case}.\end{array}\right.} \)
Let \(i\in \lbrace 0,1,\dots ,r\rbrace \) and \(\beta \) be a branch of some "subtree" of \(t_i\) . For a state \(q\) of "index" \(j\) and a node \(\tau \) such that \(q\in {\mathit {States}}_j(\tau )\) and such that \(\tau \in \beta \) if \(i=j\) , we define:
\( {\mathit {Nextbranch}_{t_q}}(\beta ,i,\tau )= {\left\lbrace \begin{array}{ll}\text{the leftmost branch in } {t_q} \text{ below } {\mathit {Nextchild}_{t_q}}(\beta ,\tau ), & \text{if } i= j ,\\[3mm]\text{the leftmost branch in } {\mathit {Subtree}}_{t_q}(\tau ), & \text{if } i\ne j.\end{array}\right.} \)
[ACD-transformation]
Let \(\mathcal {T}=(V,E,\mathit {Source},\mathit {Target},I_0, \mathcal {F})\) be a "Muller transition system" with "alternating cycle decomposition" \({\mathcal {ACD}}(\mathcal {T})= \lbrace (t_0,\nu _0),(t_1,\nu _1),\dots ,(t_r,\nu _r)\rbrace \) . We define its ""ACD-parity transition system"" (or ACD-transformation) \(\mathcal {P}_{{\mathcal {ACD}}(\mathcal {T})}=(V_P,E_P,\mathit {Source}_P,\mathit {Target}_P,I_0^{\prime }, p:E_P\rightarrow \mathbb {N} )\) as follows:
\(V_P=\lbrace (q,i,\beta ) \; : \; q\in V \text{ of "index" } i \text{ and } \beta \in {\mathit {Branch}}({t_q}) \rbrace \) .
For each node \((q,i,\beta )\in V_P\) and each edge \(e\in {\mathit {Out}}(q)\) we define an edge \(e_{i,\beta }\) from \((q,i,\beta )\) . We set
\(\mathit {Source}_P(e_{i,\beta })=(q,i,\beta )\) , where \(q=\mathit {Source}(e)\) .
\(\mathit {Target}_P(e_{i,\beta })=(q^{\prime },k,{\mathit {Nextbranch}_{t_{q^{\prime }}}}(\beta ,i,\tau ))\) , where \(q^{\prime }=\mathit {Target}(e)\) , \(k\) is its "index" and \(\tau ={\mathit {Supp}}(\beta ,i,e)\) .
\(p(e_{i,\beta })={p_j}({\mathit {Supp}}(\beta ,i,e))\) , where \(j\) is the "index" of \({\mathit {Supp}}(\beta ,i,e)\) .
\(I_0^{\prime }=\lbrace (q_0,i,\beta _0) \; : \; q_0\in I_0, \, i \text{ the index of } q_0\) and \(\beta _0\) the leftmost branch in \({t_{q_0}}\rbrace \) .
If \(\mathcal {T}\) is labelled by \(l_V:V\rightarrow L_V\) , \(l_E:E\rightarrow L_E\) , we label \(\mathcal {P}_{{\mathcal {ACD}}(\mathcal {T})}\) by \(l_V^{\prime }((q,i,\beta ))=l_V(q)\) and \(l_E^{\prime }(e_{i,\beta })=l_E(e)\) .
The set of states of \(\mathcal {P}_{{\mathcal {ACD}}(\mathcal {T})}\) is built as follows: for each state \(q\in V\) we consider the subtree of \({\mathcal {ACD}}(\mathcal {T})\) consisting of the nodes with \(q\) in their label, and we add a state for each branch of this subtree. Intuitively, to define transitions in the transition system \(\mathcal {P}_{{\mathcal {ACD}}(\mathcal {T})}\) we move simultaneously in \(\mathcal {T}\) and in \({\mathcal {ACD}}(\mathcal {T})\) . We start from \(q_0\in I_0\) and from the leftmost branch of \(t_{q_0}\) . When we take a transition \(e\) in \(\mathcal {T}\) while being in a branch \(\beta \) , we climb the branch \(\beta \) searching for a node \(\tau \) with \(q^{\prime }=\mathit {Target}(e)\) and \(e\) in its label, and we produce the priority corresponding to the level reached. If no such node exists, we jump to the root of the tree corresponding to \(q^{\prime }\) . Then, we move to the next child of \(\tau \) on the right of \(\beta \) in the tree \({t_{q^{\prime }}}\) , and we pick the leftmost branch under it in \({t_{q^{\prime }}}\) . If we had jumped to the root of \({t_{q^{\prime }}}\) from a different tree, we pick the leftmost branch of \({t_{q^{\prime }}}\) .
The size of \(\mathcal {P}_{{\mathcal {ACD}}(\mathcal {T})}\) is
\( |\mathcal {P}_{{\mathcal {ACD}}(\mathcal {T})}|=\sum \limits _{q\in V} |{\mathit {Branch}}({t_q})|. \)
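This count is easy to evaluate on a tree representation: \(|{\mathit {Branch}}({t_q})|\) is the number of leaves of the subtree of nodes whose loop contains an edge leaving \(q\) . Below is a sketch with our own encoding (nested (loop, children) pairs for the proper trees); states of index 0, which contribute one copy each, are not handled here.

```python
def branches_tq(tree, q, edges):
    """Number of branches of t_q: leaves of the subtree of `tree`
    (a nested (loop, children) pair) whose loop has an edge leaving q."""
    loop, children = tree
    sub = [branches_tq(c, q, edges) for c in children
           if any(edges[e][0] == q for e in c[0])]
    # a node of t_q with no child in t_q is a leaf of t_q
    return sum(sub) if sub else 1

def acd_size(trees, edges):
    """Sum over the states of the proper trees of |Branch(t_q)|."""
    total = 0
    for tree in trees:
        for q in {edges[e][0] for e in tree[0]}:
            total += branches_tq(tree, q, edges)
    return total
```

For the single-state system with self-loops \(a,b\) and \(\mathcal {F}=\lbrace \lbrace a\rbrace ,\lbrace b\rbrace \rbrace \) this yields 2 states, matching the two states of the Zielonka tree automaton of that condition.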
The number of priorities used by \(\mathcal {P}_{{\mathcal {ACD}}(\mathcal {T})}\) is the "height" of a maximal tree of \({\mathcal {ACD}}(\mathcal {T})\) if \({\mathcal {ACD}}(\mathcal {T})\) is even or odd, and the "height" of a maximal tree plus one if \({\mathcal {ACD}}(\mathcal {T})\) is ambiguous.
In figure REF we show the "ACD-parity transition system" \(\mathcal {P}_{{\mathcal {ACD}}(\mathcal {T})}\) of the transition system of example REF (figure REF ). States are labelled with the corresponding state \(q_j\) in \(\mathcal {T}\) , the tree of its "index" and a node of \(t_i\) that is a leaf in \(t_{q_j}\) (defining a branch of it). We have tagged the edges of \(\mathcal {P}_{{\mathcal {ACD}}(\mathcal {T})}\) with the names of the edges of \(\mathcal {T}\) (even if it is not an automaton). These indicate the image of the edges by the "morphism" \(\varphi : \mathcal {P}_{{\mathcal {ACD}}(\mathcal {T})}\rightarrow \mathcal {T}\) , and make clear the bijection between "runs" in \(\mathcal {T}\) and in \(\mathcal {P}_{{\mathcal {ACD}}(\mathcal {T})}\) . In this example, we create one “copy” of the states \(q_0,q_1\) and \(q_2\) , three “copies” of the state \(q_3\) and two “copies” of the states \(q_4\) and \(q_5\) . The resulting "parity transition system" \(\mathcal {P}_{{\mathcal {ACD}}(\mathcal {T})}\) has therefore 10 states.
<FIGURE>Let \(\mathcal {A}\) be the "Muller automaton" of example REF . Its "alternating cycle decomposition" has a single tree that coincides with the "Zielonka tree" of its "Muller acceptance condition" \(\mathcal {F}_1\) (shown in figure REF ). However, its "ACD-parity transition system" has only 3 states, fewer than the "composition" \(\mathcal {Z}_{\mathcal {F}_1} \lhd \mathcal {A}\) (figure REF ), as shown in figure REF .
<FIGURE>[Correctness]
Let \(\mathcal {T}=(V, E, \mathit {Source}, \mathit {Target}, I_0, \mathcal {F})\) be a "Muller transition system" and \(\mathcal {P}_{{\mathcal {ACD}}(\mathcal {T})}=(V_P, E_P, \mathit {Source}_P, \mathit {Target}_P, I_0^{\prime }, p:E_P\rightarrow \mathbb {N} )\) its "ACD-transition system". Then, there exists a "locally bijective morphism" \( \varphi : \mathcal {P}_{{\mathcal {ACD}}(\mathcal {T})} \rightarrow \mathcal {T}\) . Moreover, if \(\mathcal {T}\) is a "labelled transition system", then \(\varphi \) is a "morphism of labelled transition systems".
We define \(\varphi _V : V_P \rightarrow V\) by \(\varphi _V((q,i,\beta ))=q\) and \(\varphi _E : E_P \rightarrow E\) by \(\varphi _E(e_{i,\beta })=e\) .
It is clear that this map preserves edges, initial states and labels. It is also clear that it is "locally bijective", since we have defined one initial state in \(\mathcal {P}_{{\mathcal {ACD}}(\mathcal {T})}\) for each initial state in \(\mathcal {T}\) , and by definition the edges in \(\mathit {Out}((q,i,\beta ))\) are in bijection with \(\mathit {Out}(q)\) . It induces therefore a bijection between the runs of the two transition systems (fact REF ). Let us see that a "run" \(\varrho \) in \(\mathcal {T}\) is accepted if and only if \(\varphi ^{-1}(\varrho )\) is accepted in \(\mathcal {P}_{{\mathcal {ACD}}(\mathcal {T})}\) . First, we remark that any infinite run \(\varrho \) of \(\mathcal {T}\) will eventually stay in a "loop" \(l\in {\mathpzc {Loop}}(\mathcal {T})\) such that \(\mathit {Inf}(\varrho )=l\) , and therefore we will eventually only visit states corresponding to the tree \(t_i\) such that \(l\subseteq \nu _i(\varepsilon )\) in the "alternating cycle decomposition". Let \(p_{\min }\) be the smallest priority produced infinitely often in the run \(\varphi ^{-1}(\varrho )\) in \(\mathcal {P}_{{\mathcal {ACD}}(\mathcal {T})}\) . As in the proof of proposition REF , there is a unique node \(\tau _p\) in \(t_i\) visited infinitely often such that \(p_i(\tau _p)=p_{\min }\) . Moreover, the states visited infinitely often in \(\mathcal {P}_{{\mathcal {ACD}}(\mathcal {T})}\) correspond to branches below \(\tau _p\) , that is, they are of the form
\( (q,i,\beta )\) , with \(\beta \in {\mathit {Subtree}}_{t_q}(\tau _p), \text{ for } q\in {\mathit {States}}_i(\tau _p)\) .
We claim that \(\tau _p\) verifies:
\(l\subseteq \nu _i(\tau _p)\) .
\(l\nsubseteq \nu _i(\sigma )\) for every "child" \(\sigma \) of \(\tau _p\) .
By definition of \({\mathcal {ACD}}(\mathcal {T})\) this implies
\( l\in \mathcal {F}\; \Longleftrightarrow \; \nu _i(\tau _p)\in \mathcal {F}\; \Longleftrightarrow \; p_{\min } \text{ is even.}\)
We show that \(l\subseteq \nu _i(\tau _p)\) . For every edge \(e\notin \nu _i(\tau _p)\) of "index" \(i\) and for every branch \(\beta \in {\mathit {Subtree}}_{t_q}(\tau _p), \text{ for } q\in {\mathit {States}}_i(\tau _p)\) , we have that \(\tau ^{\prime }={\mathit {Supp}}(\beta ,i,e)\) is a strict "ancestor" of \(\tau _p\) in \(t_i\) . Therefore, if \(l\) was not contained in \(\nu _i(\tau _p)\) we would produce infinitely often priorities strictly smaller than \(p_{\min }\) .
Finally, we show that \(l\nsubseteq \nu _i(\sigma )\) for every "child" \(\sigma \) of \(\tau _p\) . Since we reach \(\tau _p\) infinitely often, we take transitions \(e_{i,\beta }\) such that \(\tau _p={\mathit {Supp}}(\beta ,i,e)\) infinitely often. Let us reason by contradiction and suppose that there is some child \(\sigma \) of \(\tau _p\) such that \(l\subseteq \nu _i(\sigma )\) . Then for each edge \(e\in l\) , \(\mathit {Target}(e)\in {\mathit {States}}_i(\sigma )\) , and therefore \(\sigma \in t_q\) for all \(q\in {\mathit {States}}(l)\) and for each transition \(e_{i,\beta }\) such that \(\tau _p={\mathit {Supp}}(\beta ,i,e)\) , some branches passing through \(\sigma \) are considered as destinations. Eventually, we will go to some state \((q,i,\beta ^{\prime })\) , for some branch \(\beta ^{\prime }\in {\mathit {Subtree}}_{t_q}(\sigma )\) . But since \(l\subseteq \nu _i(\sigma )\) , then for every edge \(e\in l\) and branch \(\beta ^{\prime }\in {\mathit {Subtree}}_{t_q}(\sigma )\) it is verified that \({\mathit {Supp}}(\beta ^{\prime },i,e)\) is a "descendant" of \(\sigma \) , so we would never visit \(\tau _p\) again and all priorities produced infinitely often would be strictly greater than \(p_{\min }\) .
From the remarks at the end of section REF , we obtain:
If \(\mathcal {A}\) is a "Muller automaton" over \(\Sigma \) , the automaton \({\mathcal {P}_{\mathcal {ACD}(\mathcal {A})}}\) is a "parity automaton" recognizing \(\mathcal {L}(\mathcal {A})\) . Moreover,
\(\mathcal {A}\) is "deterministic" if and only if \(\mathcal {P}_{\mathcal {ACD}(\mathcal {A})}\) is deterministic.
\(\mathcal {A}\) is "unambiguous" if and only if \(\mathcal {P}_{\mathcal {ACD}(\mathcal {A})}\) is unambiguous.
\(\mathcal {A}\) is "GFG" if and only if \(\mathcal {P}_{\mathcal {ACD}(\mathcal {A})}\) is GFG.
If \(\mathcal {G}\) is a "Muller game", then \(\mathcal {P}_{\mathcal {ACD}(\mathcal {G})}\) is a "parity game" that has the same winner than \(\mathcal {G}\) .
The "winning region" of \(\mathcal {G}\) for a player \(P\in \lbrace Eve, Adam\rbrace \) is \({\mathcal {W}_P}(\mathcal {G})=\varphi ("\mathcal {W}_P"(\mathcal {P}_{\mathcal {ACD}(\mathcal {G})}))\) , being \(\varphi \) the morphism of the proof of proposition REF .
Optimality of the alternating cycle decomposition transformation
In this section we prove the strong optimality of the "alternating cycle decomposition transformation", both for number of priorities (proposition REF ) and for size (theorem REF ). We use the same ideas as for proving the optimality of the "Zielonka tree automaton" in section REF .
[Optimality of the number of priorities]
Let \( be a "Muller transition system" such that all its states are "accessible" and let \) PACD(\( be its "ACD-transition system". If \) P\( is another "parity transition system" such that there is a "locally bijective morphism" \) :P, then \(\mathcal {P}\) uses at least the same number of priorities than \({\mathcal {P}_{\mathcal {ACD}(}}\) .
We distinguish 3 cases depending on whether \({\mathcal {ACD}}(\mathcal {T})\) is (ACD)even, (ACD)odd or (ACD)ambiguous.
We treat simultaneously the cases \({\mathcal {ACD}}(\mathcal {T})\) (ACD)even and (ACD)odd. In these cases, the number \(h\) of priorities used by \({\mathcal {P}_{\mathcal {ACD}(\mathcal {T})}}\) coincides with the maximal "height" of a tree in \({\mathcal {ACD}}(\mathcal {T})\) . Let \(t_i\) be a tree of maximal "height" \(h\) in \({\mathcal {ACD}}(\mathcal {T})\) , \(\beta =\lbrace \tau _1,\dots ,\tau _{h}\rbrace \in {\mathit {Branch}}(t_i)\) a branch of \(t_i\) of maximal length (ordered as \(\tau _1 {\sqsupseteq } \tau _2 {\sqsupseteq } \dots {\sqsupseteq } \tau _{h}=\varepsilon \) ) and \(l_j=\nu _i(\tau _j)\) , \(j=1,\dots , h\) . We fix \(q\in {\mathit {States}}_i(\tau _1)\) , where \(\tau _1\) is the leaf of \(\beta \) , and we write
\(\mathit {Loop}_q(\mathcal {T})=\lbrace w\in {\mathpzc {Run}}_{\mathcal {T},q}\cap E^* \; : \; {\mathit {First}}(w)={\mathit {Last}}(w)=q \rbrace ,\)
and for each \(j=1,\dots , h\) we choose \(w_j \in \mathit {Loop}_q(\mathcal {T})\) such that \({\mathit {App}}(w_j)=l_j\) . Let \(\eta ^{\prime }\) be the maximal priority appearing in \(\mathcal {P}\) . We show as in the proof of proposition REF that for every \(v\in \mathit {Loop}_q(\mathcal {T})\) , the "run" \(\varphi ^{-1}((w_1\dots w_k v)^\omega )\) must produce a priority smaller than or equal to \(\eta ^{\prime }-k+1\) . Taking \(k=h\) , the "run" \(\varphi ^{-1}((w_1\dots w_h)^\omega )\) produces a priority smaller than or equal to \(\eta ^{\prime }-h+1\) , and even if and only if \({\mathcal {ACD}}(\mathcal {T})\) is (ACD)even. By lemma REF we can suppose that \(\mathcal {P}\) uses all priorities in \([\eta ^{\prime }-h+1, \eta ^{\prime }]\) . We conclude that \(\mathcal {P}\) uses at least \(h\) priorities, so at least as many as \({\mathcal {P}_{\mathcal {ACD}(\mathcal {T})}}\) .
In the case where \({\mathcal {ACD}}(\mathcal {T})\) is (ACD)ambiguous, if \(h\) is the maximal "height" of a tree in \({\mathcal {ACD}}(\mathcal {T})\) , then \({\mathcal {P}_{\mathcal {ACD}(\mathcal {T})}}\) uses \(h+1\) priorities. We can repeat the previous argument with two different maximal branches of respective maximal (ACD)even and (ACD)odd trees. We conclude that \(\mathcal {P}\) uses priorities in a range \([\mu ,\mu +h]\cup [\eta ,\eta +h]\) , with \(\mu \) even and \(\eta \) odd, so it uses at least \(h+1\) priorities.
A similar proof, or an application of the results from [7]} gives the following result:
If \(\mathcal {A}\) is a deterministic automaton, the accessible part of \({\mathcal {P}_{\mathcal {ACD}(\mathcal {A})}}\) uses the optimal number of priorities to recognize \(\mathcal {L}(\mathcal {A})\) .
Finally, we state and prove the optimality of \({\mathcal {P}_{\mathcal {ACD}(\mathcal {A})}}\) for size.
[Optimality of the number of states]
Let \( be a (possibly "labelled") "Muller transition system" such that all its states are "accessible" and let \) PACD(\( be its "ACD-transition system". If \) P\( is another "parity transition system" such that there is a "locally bijective morphism" \) :P, then
\( |"\mathcal {P}_{\mathcal {ACD}(}"|\le |\mathcal {P}| \) .
Proof of theorem REF
We follow the same steps as for proving theorem REF . We will suppose that all states of the "transition systems" considered are "accessible".
Let 1, 2 be "transition systems" such that there is a "morphism of transition systems" \(\varphi : 1 \rightarrow 2\) . Let \(l\in {\mathpzc {Loop}}(2)\) be a "loop" in 2. An ""\(l\) -SCC"" of 1 (with respect to \(\varphi \) ) is a non-empty "strongly connected subgraph" \((V_l,E_l)\) of the subgraph \((\varphi _V^{-1}({\mathit {States}}(l)),\varphi _E^{-1}(l) )\) such that
\(\nonumber & \text{for every } q_1\in V_l \text{ and every } e_2\in "\mathit {Out}"(\varphi (q_1))\cap l\\ &\text{there is an edge } e_1\in \varphi ^{-1}(e_2)\cap "\mathit {Out}"(q_1) \text{ such that } e_1\in E_l.\)
That is, an \(l\) -SCC is a "strongly connected subgraph" of \(\mathcal {T}_1\) in which all states and transitions correspond via \(\varphi \) to states and transitions appearing in the "loop" \(l\) . Moreover, given a "run" staying in \(l\) in \(\mathcal {T}_2\) we can simulate it in the \(l\) -SCC of \(\mathcal {T}_1\) (property (REF )).
Let \(\mathcal {T}_1\) and \(\mathcal {T}_2\) be two "transition systems" such that there is a "locally surjective" "morphism" \(\varphi : \mathcal {T}_1 \rightarrow \mathcal {T}_2\) . Let \(l\in "\mathpzc {Loop}"(\mathcal {T}_2)\) and let \(C_l=(V_l,E_l)\) be a non-empty "\(l\) -SCC" in \(\mathcal {T}_1\) . Then, for every "loop" \(l^{\prime }\in "\mathpzc {Loop}"(\mathcal {T}_2)\) such that \(l^{\prime }\subseteq l\) there is a non-empty "\(l^{\prime }\) -SCC" in \(C_l\) .
Let \((V^{\prime },E^{\prime })=(V_l,E_l)\cap (\varphi _V^{-1}({\mathit {States}}(l^{\prime })),\varphi _E^{-1}(l^{\prime }))\) . We first prove that \((V^{\prime },E^{\prime })\) is non-empty. Let \(q_1\in V_l \subseteq \varphi _V^{-1}({\mathit {States}}(l))\) . Let \(\varrho \in {\mathpzc {Run}}_{\mathcal {T}_2,\varphi (q_1)}\) be a finite run in \(\mathcal {T}_2\) from \(\varphi (q_1)\) , visiting only edges in \(l\) and ending in \(q_2\in {\mathit {States}}(l^{\prime })\) . From the local surjectivity, we can obtain a run in \(\varphi ^{-1}(\varrho )\) that will stay in \(C_l\) and that will end in a state in \(\varphi _V^{-1}({\mathit {States}}(l^{\prime }))\) . The subgraph \((V^{\prime },E^{\prime })\) clearly has property (REF ) (for \(l^{\prime }\) ).
We prove by induction on the size that any non-empty subgraph \((V^{\prime },E^{\prime })\) verifying the property (REF ) (for \(l^{\prime }\) ) admits an \(l^{\prime }\) -SCC. If \(|V^{\prime }|=1\) , then \((V^{\prime },E^{\prime })\) forms by itself a "strongly connected graph". If \(|V^{\prime }|>1\) and \((V^{\prime },E^{\prime })\) is not strongly connected, then there are vertices \(q,q^{\prime }\in V^{\prime }\) such that there is no path from \(q\) to \(q^{\prime }\) following edges in \(E^{\prime }\) . We let
\( V^{\prime }_q=\lbrace p\in V^{\prime } \; : \; \text{there is a path from } q \text{ to } p \text{ in } (V^{\prime },E^{\prime })\rbrace \; ; \; E^{\prime }_q=E^{\prime }\cap "\mathit {Out}"(V^{\prime }_q)\cap "\mathit {In}"(V^{\prime }_q) .\)
Since \(q^{\prime }\notin V^{\prime }_q\) , the size \(|V^{\prime }_q|\) is strictly smaller than \(|V^{\prime }|\) .
Also, the subgraph \((V^{\prime }_q,E^{\prime }_q)\) is non-empty since \(q\in V^{\prime }_q\) .
The property (REF ) holds from the definition of \((V^{\prime }_q,E^{\prime }_q)\) . We conclude by induction hypothesis.
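The existence argument above is effective. The following sketch uses our own encoding, not the paper's: vertices are arbitrary hashable values, edges are identified with their (source, target) pairs, and the local surjectivity of \(\varphi \) is assumed rather than checked. Under these assumptions, a terminal strongly connected component of the preimage subgraph is an \(l\) -SCC.

```python
# Sketch: extracting an l-SCC of T1 with respect to a morphism phi into T2.
# succ1: dict vertex -> set of successors in T1; phi: vertex map T1 -> T2;
# l_pairs: the loop l of T2, given as a set of (source, target) pairs.
def reachable(succ, v):
    seen, stack = {v}, [v]
    while stack:
        u = stack.pop()
        for w in succ.get(u, ()):
            if w not in seen:
                seen.add(w)
                stack.append(w)
    return seen

def find_l_scc(succ1, phi, l_pairs):
    states_l = {s for pair in l_pairs for s in pair}
    verts = {v for v in succ1 if phi[v] in states_l}
    # keep only the edges lying over the loop l
    restricted = {v: {w for w in succ1[v]
                      if w in verts and (phi[v], phi[w]) in l_pairs}
                  for v in verts}
    # A terminal SCC (one with no outgoing edge) of the restricted graph
    # is strongly connected and, under local surjectivity of phi, closed
    # under simulating runs of l -- hence an l-SCC.
    reach = {v: reachable(restricted, v) for v in verts}
    for v in verts:
        scc = {w for w in reach[v] if v in reach[w]}
        if all(restricted[w] <= scc for w in scc):
            return scc
    return None
```

For hand-sized instances this brute-force mutual-reachability test suffices; a linear-time SCC algorithm would replace it in any serious implementation.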
Let \( be a "Muller transition system" with acceptance condition \) F\( and let \) P\( be a "parity transition system" such that there is a "locally bijective morphism" \) : P. Let \(t_i\) be a "proper tree" of \({\mathcal {ACD}}(\) and \(\tau ,\sigma _1,\sigma _2\in t_i\) nodes in \(t_i\) such that \(\sigma _1,\sigma _2\) are different "children" of \(\tau \) , and let \(l_1=\nu _i(\sigma _1)\) and \(l_2=\nu _i(\sigma _2)\) . If \(C_1\) and \(C_2\) are two "\(l_1 \) -SCC" and "\(l_2\) -SCC" in \(\mathcal {P}\) , respectively, then \(C_1\cap C_2= \emptyset \) .
Suppose there is a state \(q\in C_1\cap C_2\) . Since \(\varphi _V(q)\in {\mathit {States}}(l_1)\cap {\mathit {States}}(l_2)\) , and \(l_1, l_2\) are "loops", there are finite "runs" \(\varrho _1,\varrho _2 \in {\mathpzc {Run}}_{\mathcal {T},\varphi _V(q)}\) such that \({\mathit {App}}(\varrho _1)=l_1\) and \(\mathit {App}(\varrho _2)=l_2\) . We can “simulate” these runs in \(C_1\) and \(C_2\) thanks to property (REF ), producing runs \(\varphi ^{-1}(\varrho _1)\) and \(\varphi ^{-1}(\varrho _2)\) in \({\mathpzc {Run}}_{\mathcal {P},q}\) and arriving at \(q_1="\mathit {Last}"(\varphi ^{-1}(\varrho _1))\) and \(q_2=\mathit {Last}(\varphi ^{-1}(\varrho _2))\) . Since \(C_1, C_2\) are "\(l_1,l_2\) -SCC", there are finite runs \(w_1\in {\mathpzc {Run}}_{\mathcal {P},q_1}\) , \(w_2\in {\mathpzc {Run}}_{\mathcal {P},q_2}\) such that \(\mathit {Last}(w_1)=\mathit {Last}(w_2)=q\) , so the runs \(\varphi ^{-1}(\varrho _1)w_1\) and \(\varphi ^{-1}(\varrho _2)w_2\) start and end in \(q\) . We remark that in \(\mathcal {T}\) the runs \(\varphi (\varphi ^{-1}(\varrho _1)w_1)=\varrho _1\varphi _E(w_1)\) and \(\varphi (\varphi ^{-1}(\varrho _2)w_2)=\varrho _2\varphi _E(w_2)\) start and end in \(\varphi _V(q)\) and visit, respectively, all the edges in \(l_1\) and \(l_2\) . From the definition of \({\mathcal {ACD}}(\mathcal {T})\) we have that \(l_1\in \mathcal {F}\; \Leftrightarrow \; l_2\in \mathcal {F}\; \Leftrightarrow \; l_1\cup l_2\notin \mathcal {F}\) . Since \(\varphi \) preserves the "acceptance condition", the minimal priority produced by \(\varphi ^{-1}(\varrho _1)w_1\) has the same parity as that of \(\varphi ^{-1}(\varrho _2)w_2\) , but concatenating both runs we must produce a minimal priority of the opposite parity, arriving at a contradiction.
Let \( be a "Muller transition system" and \) "PACD("\( its "ACD-parity transition system". For each tree \) ti\( of \) ACD(\(, each node \) ti\( and each state \) qStatesi()\( we write:\)\( \psi _{\tau ,i,q}=|{\mathit {Branch}}({\mathit {Subtree}}_{t_q}(\tau ))|=|\lbrace (q,i,\beta )\in "\mathcal {P}_{\mathcal {ACD}(}" \; : \; \beta \, \text{ passes through } \tau \rbrace |.\)\(\vspace{-8.53581pt}\)\(\Psi _{\tau ,i}=\sum \limits _{q\in {\mathit {States}}_i(\tau )}\psi _{\tau ,i,q}=|\lbrace (q,i,\beta )\in "\mathcal {P}_{\mathcal {ACD}(}" \; : \;q\in V \text{ of "index" } i \text{ and } \beta \, \text{ passes through } \tau \rbrace | .\)\(\)
If we consider the roots of the trees in \({\mathcal {ACD}}(\mathcal {T})\) , then each \(\Psi _{\varepsilon ,i}\) is the number of states in \("\mathcal {P}_{\mathcal {ACD}(\mathcal {T})}"\) associated to the tree \(t_i\) , i.e., \(\Psi _{\varepsilon ,i}=|\lbrace (q,i,\beta )\in "\mathcal {P}_{\mathcal {ACD}(\mathcal {T})}" \; : \; q\in V,\; \beta \in {\mathit {Branch}}("t_q")\rbrace |\) . Therefore
\( |"\mathcal {P}_{\mathcal {ACD}(}"|=\sum \limits _{i=0}^{r}\Psi _{\varepsilon ,i} .\)
[Proof of theorem REF]
Let \(\mathcal {T}=(V,E,\mathit {Source},\mathit {Target},I_0,\mathcal {F})\) be a "Muller transition system", \("\mathcal {P}_{\mathcal {ACD}(\mathcal {T})}"\) the "ACD-parity transition system" of \(\mathcal {T}\) and \(\mathcal {P}=(V^{\prime },E^{\prime },\mathit {Source}^{\prime },\mathit {Target}^{\prime },I_0^{\prime },p^{\prime }:E^{\prime }\rightarrow \mathbb {N} )\) a parity transition system such that there is a "locally bijective morphism" \(\varphi : \mathcal {P}\rightarrow \mathcal {T}\) .
First of all, we construct two modified transition systems \(\widetilde{\mathcal {T}}=(V,\widetilde{E},\widetilde{\mathit {Source}},\widetilde{\mathit {Target}},I_0,\widetilde{\mathcal {F}})\) and \(\widetilde{\mathcal {P}}=(V^{\prime },\widetilde{E^{\prime }},\widetilde{\mathit {Source}}^{\prime },\widetilde{\mathit {Target}}^{\prime }, I_0^{\prime }, \widetilde{p^{\prime }}:\widetilde{E^{\prime }}\rightarrow \mathbb {N} )\) , such that:
Each vertex of \(V\) belongs to a "strongly connected component".
All leaves \(\tau \in t_i\) verify \(|{\mathit {States}}_i(\tau )|=1\) , for every \(t_i\in {\mathcal {ACD}}(\widetilde{\mathcal {T}})\) .
Nodes \(\tau \in t_i\) verify \({\mathit {States}}_i(\tau )=\bigcup _{\sigma \in \mathit {Children}(\tau )}\mathit {States}_i(\sigma )\) , for every \(t_i\in {\mathcal {ACD}}(\widetilde{\mathcal {T}})\) .
There is a "locally bijective morphism" \(\widetilde{\varphi }: \widetilde{\mathcal {P}} \rightarrow \widetilde{\mathcal {T}}\) .
\(|\mathcal {P}_{\mathcal {ACD}(\widetilde{\mathcal {T}})}|\le |\widetilde{\mathcal {P}}| \; \Rightarrow \; |\mathcal {P}_{\mathcal {ACD}(\mathcal {T})}|\le |\mathcal {P}|\) .
We define the transition system \(\widetilde{\mathcal {T}}\) by adding for each \(q\in V\) two new edges \(e_{q,1}, e_{q,2}\) with \(\widetilde{\mathit {Source}}(e_{q,j})=\widetilde{\mathit {Target}}(e_{q,j})=q\) , for \(j=1,2\) . The modified "acceptance condition" \(\widetilde{\mathcal {F}}\) is given as follows: let \(C\subseteq \widetilde{E}\) .
If \(C\cap E\ne \emptyset \) , then \(C\in \widetilde{\mathcal {F}} \; \Leftrightarrow \; C\cap E \in \mathcal {F}\) (the occurrence of edges \(e_{q,j}\) does not change the "acceptance condition").
If \(C\cap E = \emptyset \) and there are edges of the form \(e_{q,1}\) in \(C\) , for some \(q\in V\) , then \(C\in \widetilde{\mathcal {F}}\) . If all edges of \(C\) are of the form \(e_{q,2}\) , then \(C\notin \widetilde{\mathcal {F}}\) .
It is easy to verify that the "transition system" \(\widetilde{\mathcal {T}}\) and \({\mathcal {ACD}}(\widetilde{\mathcal {T}})\) verify conditions 1, 2 and 3. We perform equivalent operations in \(\mathcal {P}\) , obtaining \(\widetilde{\mathcal {P}}\) : we add a pair of edges \(e_{q,1}, e_{q,2}\) for each vertex in \(\mathcal {P}\) , and we assign them priorities \(\widetilde{p^{\prime }}(e_{q,1})=\eta +\epsilon \) and \(\widetilde{p^{\prime }}(e_{q,2})=\eta +\epsilon +1\) , where \(\eta \) is the maximum of the priorities in \(\mathcal {P}\) and \(\epsilon =0\) if \(\eta \) is even, and \(\epsilon =1\) if \(\eta \) is odd.
We extend the "morphism" \varphi to \widetilde{\varphi }: \widetilde{\mathcal {P}} \rightarrow \widetilde{ conserving the "local bijectivity" by setting \widetilde{\varphi }_E(e_{q,j})=e_{\varphi (q),j} for j=1,2. Finally, it is not difficult to verify that the underlying graphs of \mathcal {P}_{\mathcal {ACD}(\widetilde{ )} and \widetilde{\mathcal {P}}_{\mathcal {ACD}(} are equal (the only differences are the priorities associated to the edges e_{q,j}), so in particular |\mathcal {P}_{\mathcal {ACD}(\widetilde{ )}|=|\widetilde{\mathcal {P}}_{\mathcal {ACD}(}|=|\mathcal {P}_{\mathcal {ACD}(}|. Consequently, |\mathcal {P}_{\mathcal {ACD}(\widetilde{ )}|\le |\widetilde{\mathcal {P}}| implies |\mathcal {P}_{\mathcal {ACD}(}|\le |\widetilde{\mathcal {P}}|=|\mathcal {P}|.}}}Therefore, it suffices to prove the theorem for the modified systems \widetilde{ and \widetilde{\mathcal {P}}. From now on, we take verifying the conditions 1, 2 and 3 above. In particular, all trees are "proper trees" in {\mathcal {ACD}}( . It also holds that for each q\in V and \tau \in t_i that is not a leaf, \psi _{\tau ,i,q}=\sum \limits _{\sigma \in \mathit {Children}(\tau )}\psi _{\sigma ,i,q}. 
Therefore, for each \(\tau \in t_i\) that is not a leaf, \(\Psi _{\tau ,i}=\sum \limits _{\sigma \in \mathit {Children}(\tau )}\Psi _{\sigma ,i}\) , and for each leaf \(\sigma \in t_i\) we have \(\Psi _{\sigma ,i}=1\) . Vertices of \(V^{\prime }\) are partitioned into the preimages by \(\varphi \) of the states of the roots of the trees \(\lbrace t_1,\dots ,t_r\rbrace \) of \({\mathcal {ACD}}(\mathcal {T})\) :
\(V^{\prime }= \bigcup \limits _{i=1}^r\varphi _V^{-1}( {\mathit {States}}_i(\varepsilon )) \quad \text{ and } \quad \varphi _V^{-1}( {\mathit {States}}_i(\varepsilon )) \cap \varphi _V^{-1}( {\mathit {States}}_j(\varepsilon ))=\emptyset \text{ for } i\ne j .\)
Claim: For each \(i=1,\dots ,r\) and each \(\tau \in t_i\) , if \(C_\tau \) is a non-empty "\(\nu _i(\tau )\) -SCC", then \(|C_\tau |\ge \Psi _{\tau ,i}\) .
Let us suppose this claim holds. In particular, \((\varphi _V^{-1}( {\mathit {States}}_i(\varepsilon )),\varphi _E^{-1}( \nu _i(\varepsilon )))\) verifies the property (REF ) from definition REF , so from the proof of lemma REF we deduce that it contains a \(\nu _i(\varepsilon )\) -SCC and therefore \(|\varphi _V^{-1}( {\mathit {States}}_i(\varepsilon ))| \ge \Psi _{\varepsilon ,i}\) , so
\(|\mathcal {P}|=\sum \limits _{i=1}^r |\varphi _V^{-1}( {\mathit {States}}_i(\varepsilon ))|\ge \sum \limits _{i=1}^r \Psi _{\varepsilon ,i}=|"\mathcal {P}_{\mathcal {ACD}(\mathcal {T})}"|,\)
concluding the proof.
[Proof of the claim]
Let \(C_\tau \) be a "\(\nu _i(\tau )\) -SCC". Let us prove \(|C_\tau |\ge \Psi _{\tau ,i}\) by induction on the "height of the node" \(\tau \) . If \(\tau \) is a leaf (in particular if its height is 1), \(\Psi _{\tau ,i}=1\) and the claim is clear.
If \(\tau \) , of height \(h>1\) , is not a leaf, then it has children \(\sigma _1,\dots , \sigma _k\) , all of them of height \(h-1\) . Thanks to lemmas REF and REF , for \(j=1,\dots ,k\) there exist disjoint "\(\nu _i(\sigma _j)\) -SCCs" \(C_1,\dots ,C_k\) included in \(C_\tau \) , so by induction hypothesis
\( |C_\tau | \ge \sum \limits _{j=1}^k |C_j| \ge \sum \limits _{j=1}^k \Psi _{\sigma _j,i}= \Psi _{\tau ,i}. \)
From the hypothesis of theorem REF we cannot deduce that there is a "morphism" from \(\mathcal {P}\) to \("\mathcal {P}_{\mathcal {ACD}(\mathcal {T})}"\) or vice versa. To produce a counter-example it is enough to recall the “non-determinism” in the construction of \(\mathcal {P}_{\mathcal {ACD}(\mathcal {T})}\) . Two different orderings of the nodes of the trees of \({\mathcal {ACD}}(\mathcal {T})\) will produce two incomparable parity transition systems, both minimal in size, that admit a "locally bijective morphism" to \(\mathcal {T}\) .
However, we can prove the following result:
If \(\varphi _1: "\mathcal {P}_{\mathcal {ACD}(\mathcal {T})}" \rightarrow \mathcal {T}\) is the "locally bijective morphism" described in the proof of proposition REF , then for every state \(q\) in \(\mathcal {T}\) of "index" \(i\) :
\( |\varphi _1^{-1}(q)|=\psi _{\varepsilon ,i,q}\le |\varphi ^{-1}(q)| \;, \; \text{ for every "locally bijective morphism" } \varphi : \mathcal {P}\rightarrow \mathcal {T}. \)
It is enough to remark that if \(q \in {\mathit {States}}_i(\tau )\) , then any "\(\nu _i(\tau )\) -SCC" \(C_\tau \) of \(\mathcal {P}\) will contain some state in \(\varphi ^{-1}(q)\) . We prove by induction as in the proof of the claim that \(\psi _{\tau ,i,q} \le |C_\tau \cap \varphi ^{-1}(q)|\) .
Applications
Determinisation of Büchi automata
In many applications, such as the synthesis of reactive systems for \(LTL\) -formulas, we need to have "deterministic" automata. For this reason, the determinisation of automata is usually a crucial step. Since McNaughton showed in [20]} that Büchi automata can be transformed into deterministic Muller automata recognizing the same language, much effort has been put into finding an efficient way of performing this transformation. The first efficient solution was proposed by Safra in [21]}, producing a deterministic automaton using a "Rabin condition". Due to the many advantages of "parity conditions" (simplicity, easy complementation of automata, the fact that they admit memoryless strategies for games, closure under union and intersection...), determinisation constructions towards parity automata have been proposed too. In [22]}, Piterman provides a construction producing a parity automaton that in addition improves the state-complexity of Safra's construction. In [23]}, Schewe breaks down Piterman's construction into two steps: the first one goes from a non-deterministic Büchi automaton \(\mathcal {B}\) to a "Rabin automaton" (\("\mathcal {R}_\mathcal {B}"\) ) and the second one gives Piterman's parity automaton (\("\mathcal {P}_\mathcal {B}"\) ).
In this section we prove that there is a "locally bijective morphism" from \("\mathcal {P}_\mathcal {B}"\) to \("\mathcal {R}_\mathcal {B}"\) , and therefore we would obtain a smaller parity automaton applying the "ACD-transformation" in the second step. We provide an example (example REF ) in which the "ACD-transformation" provides a strictly better parity automaton.
From non-deterministic Büchi to deterministic Rabin automata
In [23]}, Schewe presents a construction of a deterministic "Rabin automaton" \(""\mathcal {R}_\mathcal {B}""\) from a non-deterministic "Büchi automaton" \(\mathcal {B}\) . The set of states of the automaton \(\mathcal {R}_\mathcal {B}\) is formed of what he calls ""history trees"". The number of history trees for a Büchi automaton of size \(n\) is given by the function \(\mathit {hist}(n)\) , that is shown to be in \(o((1.65n)^n)\) in [23]}. This construction is presented starting from a state-labelled Büchi automaton. A construction starting from a transition-labelled Büchi automaton can be found in [26]}. In [27]}, Colcombet and Zdanowski proved the worst-case optimality of the construction.
[[23]}]
Given a non-deterministic "Büchi automaton" \(\mathcal {B}\) with \(n\) states, there is an effective construction of a deterministic "Rabin automaton" \(\mathcal {R}_\mathcal {B}\) with \(\mathit {hist}(n)\) states and using \(2^{n-1}\) Rabin pairs that recognizes the language \("\mathcal {L}(\mathcal {B})"\) .
[[27]}]
For every \(n\in \mathbb {N} \) there exists a non-deterministic Büchi automaton \(\mathcal {B}_n\) of size \(n\) such that every deterministic Rabin automaton recognizing \(\mathcal {L}(\mathcal {B}_n)\) has at least \(\mathit {hist}(n)\) states.
From non-deterministic Büchi to deterministic parity automata
In order to build a deterministic "parity automaton" \(""\mathcal {P}_\mathcal {B}""\) that recognizes the language of a given "Büchi automaton" \(\mathcal {B}\) , Schewe transforms the automaton \("\mathcal {R}_\mathcal {B}"\) into a parity one using what he calls a later introduction record (LIR). The LIR construction can be seen as adding an ordering (satisfying some restrictions) to the nodes of the "history trees". States of \(\mathcal {P}_\mathcal {B}\) are therefore pairs consisting of a history tree and a LIR. In this way we obtain a parity automaton similar to that given by Piterman's determinisation procedure [22]}.
The worst-case optimality of this construction was proved in [31]}, [26]}, generalising the methods of [27]}.
[[23]}]
Given a non-deterministic "Büchi automaton" \(\mathcal {B}\) with \(n\) states, there is an effective construction of a deterministic "parity automaton" \("\mathcal {P}_\mathcal {B}"\) with \(O(n!(n-1)!)\) states and using \(2n\) priorities that recognizes the language \("\mathcal {L}(\mathcal {B}))"\) .
[[31]}, [26]}]
For every \(n\in \mathbb {N} \) there exists a non-deterministic Büchi automaton \(\mathcal {B}_n\) of size \(n\) such that \("\mathcal {P}_{\mathcal {B}_n}"\) has fewer than \(1.5\) times as many states as a minimal deterministic parity automaton recognizing \(\mathcal {L}(\mathcal {B}_n)\) .
A locally bijective morphism from \(\mathcal {P}_\mathcal {B}\) to \(\mathcal {R}_\mathcal {B}\)
Given a "Büchi automaton" \(\mathcal {B}\) and its determinisations to Rabin and parity automata \("\mathcal {R}_\mathcal {B}"\) and \("\mathcal {P}_\mathcal {B}"\) , there is a "locally bijective morphism" \(\varphi : \mathcal {P}_\mathcal {B}\rightarrow \mathcal {R}_\mathcal {B}\) .
Observing the construction of \(\mathcal {R}_\mathcal {B}\) and \(\mathcal {P}_\mathcal {B}\) in [23]}, we see that the states of \(\mathcal {P}_\mathcal {B}\) are of the form \((T,\chi )\) with \(T\) a state of \(\mathcal {R}_\mathcal {B}\) (a "history tree"), and \(\chi : T \rightarrow \lbrace 1,\dots ,|\mathcal {B}|\rbrace \) a LIR (that can be seen as an ordering of the nodes of \(T\) ).
It is easy to verify that the mapping \(\varphi _V((T,\chi ))=T\) defines a morphism \(\varphi : \mathcal {P}_\mathcal {B}\rightarrow \mathcal {R}_\mathcal {B}\) (from fact REF there is only one possible definition of \(\varphi _E\) ). Since the automata are deterministic, \(\varphi \) is a "locally bijective morphism".
Let \(\mathcal {B}\) be a "Büchi automaton" and \("\mathcal {R}_\mathcal {B}"\) , \("\mathcal {P}_\mathcal {B}"\) the deterministic Rabin and parity automata obtained by applying the Piterman-Schewe construction to \(\mathcal {B}\) . Then, the parity automaton \(\mathcal {P}_{\mathcal {ACD}(\mathcal {R}_B)}\) verifies
\( |\mathcal {P}_{\mathcal {ACD}(\mathcal {R}_B)}| \le |\mathcal {P}_\mathcal {B}| \)
and \("\mathcal {P}_{\mathcal {ACD}(\mathcal {R}_B)}"\) uses a smaller number of priorities than \("\mathcal {P}_\mathcal {B}"\) .
It is a direct consequence of propositions REF , REF and theorem REF .
Furthermore, after proposition REF , \("\mathcal {P}_{\mathcal {ACD}(\mathcal {R}_B)}"\) uses the optimal number of priorities to recognize \("\mathcal {L}(\mathcal {B})"\) , and we directly obtain this information from the "alternating cycle decomposition" of \(\mathcal {R}_\mathcal {B}\) , \({\mathcal {ACD}}(\mathcal {R}_\mathcal {B})\) .
In the "example REF " we show a case in which \(|\mathcal {P}_{\mathcal {ACD}(\mathcal {R}_B)}| < |\mathcal {P}_\mathcal {B}|\) and for which the gain in the number of priorities is clear.
In [27]} and [26]}, the lower bounds for the determinisation of "Büchi automata" into Rabin and parity automata were shown using the family of ""full Büchi automata"" \(\lbrace \mathcal {B}_n\rbrace _{n\in \mathbb {N} }\) , \(|\mathcal {B}_n|=n\) . The automaton \(\mathcal {B}_n\) can simulate any other Büchi automaton of the same size. For these automata, the constructions \(\mathcal {P}_{\mathcal {B}_n}\) and \(\mathcal {P}_{\mathcal {ACD}(\mathcal {R}_{B_n})}\) coincide.
""""
We present a non-deterministic Büchi automaton \(\mathcal {B}\) such that the "ACD-parity automaton" of \("\mathcal {R}_\mathcal {B}"\) has strictly fewer states and uses strictly fewer priorities than \("\mathcal {P}_\mathcal {B}"\) .
In figure REF we show the automaton \(\mathcal {B}\) over the alphabet \(\Sigma =\lbrace a,b,c\rbrace \) . Accepting transitions for the "Büchi condition" are represented with a black dot on them. An accessible "strongly connected component" \(\mathcal {R}_\mathcal {B}^{\prime }\) of the determinisation to a "Rabin automaton" \("\mathcal {R}_\mathcal {B}"\) is shown in figure REF . It has 2 states, which are "history trees" (as defined in [23]}). There is a "Rabin pair" \((E_\tau ,F_\tau )\) for each node appearing in some "history tree" (four in total), and these are represented by an array with four positions. We assign to each transition and each position \(\tau \) in the array the symbol \(\checkmark \) , \(\mathbf {X}\) , or \(\bullet \) , depending on whether this transition belongs to \(E_\tau \) , to \(F_\tau \) , or to neither of them, respectively (we can always suppose \(E_\tau \cap F_\tau = \emptyset \) ).
In figure REF there is the "alternating cycle decomposition" corresponding to \(\mathcal {R}_\mathcal {B}^{\prime }\) . We observe that the tree of \({\mathcal {ACD}}(\mathcal {R}_\mathcal {B}^{\prime })\) has a single branch of height 3.
That is, the Rabin condition over \(\mathcal {R}_\mathcal {B}^{\prime }\) is already a "\([1,3]\) -parity condition" and \(\mathcal {P}_{\mathcal {ACD}(\mathcal {R}_B^{\prime })}=\mathcal {R}_\mathcal {B}^{\prime }\) . In particular it has 2 states and uses priorities in \([1,3]\) .
On the other hand, in figure REF we show the automaton \(\mathcal {P}_\mathcal {B}^{\prime }\) , that has 3 states and uses priorities in \([3,7]\) . The whole automata \(\mathcal {R}_\mathcal {B}\) and \(\mathcal {P}_\mathcal {B}\) are too big to be pictured in these pages, but the three states shown in figure REF are indeed accessible from the initial state of \(\mathcal {P}_\mathcal {B}\) .
<FIGURE><FIGURE><FIGURE><FIGURE>
On relabelling of transition systems by acceptance conditions
In this section we use the information given by the "alternating cycle decomposition" to provide characterisations of "transition systems" that can be labelled with "parity", "Rabin", "Streett" or \("\mathit {Weak}_k"\) conditions, generalising the results of [4]}.
As a consequence, these yield simple proofs of two results about the possibility to define different classes of acceptance conditions in a deterministic automaton. Theorem REF , first proven in [42]}, asserts that if we can define a Rabin and a Streett condition on top of an underlying automaton \(\mathcal {A}\) such that it recognizes the same language \(L\) with both conditions, then we can define a parity condition in \(\mathcal {A}\) recognizing \(L\) too. Theorem REF states that if we can define Büchi and co-Büchi conditions on top of an automaton \(\mathcal {A}\) recognizing the language \(L\) , then we can define a \("\mathit {Weak}"\) condition over \(\mathcal {A}\) such that it recognizes \(L\) .
First, we extend the definition REF of section REF to the "alternating cycle decomposition".
Given a Muller transition system \(\mathcal {T}=(V,E,\mathit {Source},\mathit {Target},I_0,\mathcal {F})\) , we say that its "alternating cycle decomposition" \({\mathcal {ACD}}(\mathcal {T})\) is a
""Rabin ACD"" if for every state \(q\in V\) , the tree \("t_q"\) has "Rabin shape".
""Streett ACD"" if for every state \(q\in V\) , the tree \("t_q"\) has "Streett shape".
""parity ACD"" if for every state \(q\in V\) , the tree \("t_q"\) has "parity shape".
""\([1,\eta ]\) -parity ACD"" (resp. \([0,\eta -1]\) -parity ACD) if it is a parity ACD, every tree has "height" at most \(\eta \) and trees of height \(\eta \) are (ACD)odd (resp. (ACD)even).
""Büchi ACD"" if it is a \([0,1]\) -parity ACD.
""co-Büchi ACD"" if it is a \([1,2]\) -parity ACD.
""\(\mathit {Weak}_k\) ACD"" if it is a parity ACD and every tree \((t_i,\nu _i) \in {\mathcal {ACD}}(\) has "height" at most \(k\) .
The next proposition follows directly from the definitions.
Let \(\mathcal {T}\) be a Muller transition system. Then:
\({\mathcal {ACD}}(\mathcal {T})\) is a "parity ACD" if and only if it is a "Rabin ACD" and a "Streett ACD".
\({\mathcal {ACD}}(\mathcal {T})\) is a "\(\mathit {Weak}_k\) ACD" if and only if it is a "\([0,k]\) -parity ACD" and a "\([1,k+1]\) -parity ACD".
Let \(\mathcal {T}=(V,E,\mathit {Source},\mathit {Target},I_0,\mathcal {F})\) be a Muller transition system. The following conditions are equivalent:
We can define a "Rabin condition" over \(\mathcal {T}\) that is "equivalent to" \(\mathcal {F}\) over \(\mathcal {T}\) .
For every pair of loops \(l_1,l_2\in {\mathpzc {Loop}}(\mathcal {T})\) , if \(l_1\notin \mathcal {F}\) and \(l_2\notin \mathcal {F}\) , then \(l_1\cup l_2 \notin \mathcal {F}\) .
\({\mathcal {ACD}}(\mathcal {T})\) is a "Rabin ACD".
(\(1 \Rightarrow 2\) )
Suppose that \(\mathcal {T}\) uses a Rabin condition with Rabin pairs \((E_1,F_1),\dots ,(E_r,F_r)\) . Let \(l_1\) and \(l_2\) be two rejecting loops. If \(l_1\cup l_2\) was accepting, then there would be some Rabin pair \((E_j,F_j)\) and some edge \(e\in l_1\cup l_2\) such that \(e\in E_j\) and \(e\notin F_j\) . However, the edge \(e\) belongs to \(l_1\) or to \(l_2\) , and the loop it belongs to would be accepting too.
(\(2 \Rightarrow 3\) )
Let \(q\in V\) be a state of "index" \(i\) , and \("t_q"\) the subtree of \({\mathcal {ACD}}(\mathcal {T})\) associated to \(q\) . Suppose that there is a node \(\tau \in t_q\) such that \((node){p_i}(\tau )\) is even ("round" node) and that it has two different children \(\sigma _1\) and \(\sigma _2\) . The "loops" \(\nu _i(\sigma _1)\) and \(\nu _i(\sigma _2)\) are maximal rejecting loops contained in \(\nu _i(\tau )\) , and since they share the state \(q\) , their union is also a loop, which must verify \( \nu _i(\sigma _1) \cup \nu _i(\sigma _2)\in \mathcal {F}\) , contradicting the hypothesis.
(\(3 \Rightarrow 1\) )
We define a "Rabin condition" over \(\mathcal {T}\) . For each tree \(t_i\) in \({\mathcal {ACD}}(\mathcal {T})\) and each "round" node \(\tau \in t_i\) (\((node){p_i}(\tau )\) even) we define the Rabin pair \((E_{i,\tau },F_{i,\tau })\) given by:
\( E_{i,\tau }=\nu _i(\tau )\setminus \bigcup _{\sigma \in "\mathit {Children}"(\tau )}\nu _i(\sigma ) \quad , \qquad F_{i,\tau }=E \setminus \nu _i(\tau ). \)
Let us show that this condition is "equivalent to" \(\mathcal {F}\) over the transition system \(\mathcal {T}\) . We begin by proving the following consequence of being a "Rabin ACD":
Claim. If \(\tau \) is a "round" node in the tree \(t_i\) of \({\mathcal {ACD}}(\mathcal {T})\) , and \(l\in {\mathpzc {Loop}}(\mathcal {T})\) is a "loop" such that \(l\subseteq \nu _i(\tau )\) and \(l\nsubseteq \nu _i(\sigma )\) for any child \(\sigma \) of \(\tau \) , then there is some edge \(e\in l\) such that \(e\notin \nu _i(\sigma )\) for any child \(\sigma \) of \(\tau \) .
Proof of the claim. Since for each state \(q\in V\) the tree \("t_q"\) has "Rabin shape", it is verified that \({\mathit {States}}_i(\sigma )\cap {\mathit {States}}_i(\sigma ^{\prime })=\emptyset \) for every pair of different children \(\sigma , \sigma ^{\prime }\) of \(\tau \) . Therefore, the union of \(\nu _i(\sigma )\) and \(\nu _i(\sigma ^{\prime })\) is not a loop, and any loop \(l\) contained in this union must be contained either in \(\nu _i(\sigma )\) or in \(\nu _i(\sigma ^{\prime })\) .
Let \(\varrho \in {\mathpzc {Run}}_{\mathcal {T}}\) be a "run" in \(\mathcal {T}\) , let \(l\in {\mathpzc {Loop}}(\mathcal {T})\) be the loop of \(\mathcal {T}\) such that \(\mathit {Inf}(\varrho )=l\) and let \(i\) be the "index" of the edges in this loop. If \(l\in \mathcal {F}\) , let \(\tau \) be a maximal node in \(t_i\) (for \({\sqsubseteq }\) ) such that \(l\subseteq \nu _i(\tau )\) . This node \(\tau \) is a round node, and from the previous claim it follows that there is some edge \(e\in l\) such that \(e\) does not belong to any child of \(\tau \) , so \(e\in E_{i,\tau }\) and \(e\notin F_{i,\tau }\) , and the "run" \(\varrho \) is accepted by the Rabin condition too. If \(l\notin \mathcal {F}\) , then for every round node \(\tau \) , if \(l\subseteq \nu _i(\tau )\) then \(l\subseteq \nu _i(\sigma )\) for some child \(\sigma \) of \(\tau \) . Therefore, for every Rabin pair \((E_{i,\tau },F_{i,\tau })\) and every \(e\in l\) , it is verified that \(e\in E_{i,\tau } \, \Rightarrow \, e\in F_{i,\tau }\) .
The Rabin condition presented in this proof does not necessarily use the optimal number of Rabin pairs required to define a Rabin condition "equivalent to" \(\mathcal {F}\) over \(\mathcal {T}\) .
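Condition (2) of the proposition is easy to test mechanically. The following Python sketch is not part of the paper: it assumes the loops of \(\mathcal {T}\) are given explicitly as frozensets of edges and the explicit Muller condition \(\mathcal {F}\) as a set of frozensets, and checks that the union of two rejecting loops, whenever it is itself a loop, is never accepting.

```python
from itertools import combinations

def rabin_definable(loops, accepting):
    """Check condition (2): whenever the union of two rejecting loops
    is itself a loop, it is rejecting as well."""
    loops = set(loops)
    for l1, l2 in combinations(loops, 2):
        union = l1 | l2
        if union in loops and l1 not in accepting and l2 not in accepting:
            if union in accepting:
                return False
    return True
```

For instance, with loops \(\lbrace a\rbrace ,\lbrace b\rbrace ,\lbrace a,b\rbrace \) the condition \(\mathcal {F}=\lbrace \lbrace a,b\rbrace \rbrace \) is not Rabin-definable, while \(\mathcal {F}=\lbrace \lbrace a\rbrace \rbrace \) is.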
Let \(\mathcal {T}=(V,E,\mathit {Source},\mathit {Target},I_0,\mathcal {F})\) be a Muller transition system. The following conditions are equivalent:
We can define a "Streett condition" over \(\mathcal {T}\) that is "equivalent to" \(\mathcal {F}\) over \(\mathcal {T}\) .
For every pair of loops \(l_1,l_2\in {\mathpzc {Loop}}(\mathcal {T})\) , if \(l_1\in \mathcal {F}\) and \(l_2\in \mathcal {F}\) , then \(l_1\cup l_2 \in \mathcal {F}\) .
\({\mathcal {ACD}}(\mathcal {T})\) is a "Streett ACD".
We omit the proof of proposition REF , as it is the dual of proposition REF .
Let \(\mathcal {T}=(V,E,\mathit {Source},\mathit {Target},I_0,\mathcal {F})\) be a Muller transition system. The following conditions are equivalent:
We can define a "parity condition" over \(\mathcal {T}\) that is "equivalent to" \(\mathcal {F}\) over \(\mathcal {T}\) .
For every pair of loops \(l_1,l_2\in {\mathpzc {Loop}}(\mathcal {T})\) , if \(l_1\in \mathcal {F}\, \Leftrightarrow \, l_2\in \mathcal {F}\) , then \(l_1\cup l_2 \in \mathcal {F}\, \Leftrightarrow \,l_1\in \mathcal {F}\) . That is, the union of loops having the same “accepting status” preserves their “accepting status”.
\({\mathcal {ACD}}(\mathcal {T})\) is a "parity ACD".
Moreover, the parity condition we can define over \(\mathcal {T}\) is a "\([1,\eta ]\) -parity" (resp. "\([0,\eta -1]\) -parity" / "\(\mathit {Weak}_k\) ") condition if and only if \({\mathcal {ACD}}(\mathcal {T})\) is a "\([1,\eta ]\) -parity ACD" (resp. "\([0,\eta -1]\) -parity ACD" / "\(\mathit {Weak}_k\) ACD").
(\(1 \Rightarrow 2\) )
Suppose that \(\mathcal {T}\) uses a parity acceptance condition with the priorities given by \(p:E\rightarrow \mathbb {N} \) . Then, since \(l_1\) and \(l_2\) are both accepting or both rejecting, \(p_1=\min p(l_1)\) and \(p_2=\min p(l_2)\) have the same parity, which is also the parity of \(\min p(l_1\cup l_2)=\min \lbrace p_1,p_2\rbrace \) .
(\(2 \Rightarrow 3\) )
Let \(q\in V\) be a state of "index" \(i\) , and \("t_q"\) the subtree of \({\mathcal {ACD}}(\mathcal {T})\) associated to \(q\) . Suppose that there is a node \(\tau \in t_q\) with two different children \(\sigma _1\) and \(\sigma _2\) . The loops \(\nu _i(\sigma _1)\) and \(\nu _i(\sigma _2)\) are different maximal loops with the property \(\nu _i(\sigma )\subseteq \nu _i(\tau )\) and \(\nu _i(\sigma )\in \mathcal {F}\, \Leftrightarrow \, \nu _i(\tau ) \notin \mathcal {F}\) . Since they share the state \(q\) , their union is also a loop contained in \(\nu _i(\tau )\) and then
\( \nu _i(\sigma _1) \cup \nu _i(\sigma _2)\in \mathcal {F}\; \Leftrightarrow \; \nu _i(\tau ) \in \mathcal {F}\; \Leftrightarrow \; \nu _i(\sigma _1)\notin \mathcal {F}\)
contradicting the hypothesis.
(\(3 \Rightarrow 1\) )
From the construction of the "ACD-transformation", it follows that \("\mathcal {P}_{{\mathcal {ACD}}(\mathcal {T})}"\) is just a relabelling of \(\mathcal {T}\) with an equivalent parity condition.
For the implication from right to left of the last statement, we remark that if the trees of \({\mathcal {ACD}}(\mathcal {T})\) have priorities assigned in \([\mu ,\eta ]\) , then the parity transition system \("\mathcal {P}_{{\mathcal {ACD}}(\mathcal {T})}"\) will use priorities in \([\mu ,\eta ]\) . If \({\mathcal {ACD}}(\mathcal {T})\) is a "\(\mathit {Weak}_k\) ACD", then in each "strongly connected component" of \("\mathcal {P}_{{\mathcal {ACD}}(\mathcal {T})}"\) the number of priorities used will be the same as the "height" of the corresponding tree of \({\mathcal {ACD}}(\mathcal {T})\) (at most \(k\) ).
For the other implication it suffices to remark that the priorities assigned by \({\mathcal {ACD}}(\) are optimal (proposition REF ).
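Condition (2) of the parity characterisation can be tested in the same explicit representation as before. The sketch below is illustrative only (loops as frozensets of edges, \(\mathcal {F}\) as a set of frozensets); it checks that the union of two loops with the same accepting status, when it is itself a loop, keeps that status.

```python
from itertools import combinations

def parity_definable(loops, accepting):
    """Check condition (2): the union of two loops with the same
    accepting status (when it is itself a loop) keeps that status."""
    loops = set(loops)
    for l1, l2 in combinations(loops, 2):
        union = l1 | l2
        if union in loops and (l1 in accepting) == (l2 in accepting):
            if (union in accepting) != (l1 in accepting):
                return False
    return True
```

On loops \(\lbrace a\rbrace ,\lbrace b\rbrace ,\lbrace a,b\rbrace \) , the condition \(\mathcal {F}=\lbrace \lbrace a,b\rbrace \rbrace \) fails the test (two rejecting loops with an accepting union), while \(\mathcal {F}=\lbrace \lbrace a\rbrace \rbrace \) passes it.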
Given a "transition system graph" \(G=(V,E,\mathit {Source},\mathit {Target},I_0)\) and a "Muller condition" \(\mathcal {F}\subseteq \mathcal {P}(E)\) , we can define a "parity condition" \(p:E\rightarrow \mathbb {N} \) "equivalent to" \(\mathcal {F}\) over \(G\) if and only if we can define a "Rabin condition" \(R\) and a "Streett condition" \(S\) over \(G\) such that
\( (G,\mathcal {F}) \,"\simeq "\, (G,R)\, "\simeq "\, (G,S) \) .
Moreover, if the Rabin condition \(R\) uses \(r\) Rabin pairs and the Streett condition \(S\) uses \(s\) Streett pairs, we can take the parity condition \(p\) using priorities in
\([1,2r+1]\) if \(r\le s\) .
\([0,2s]\) if \(s\le r\) .
The first statement is a consequence of the characterisations (2) or (3) from propositions REF , REF and REF .
For the second statement we remark that the trees of \({\mathcal {ACD}}(\) have "height" at most \(\min \lbrace 2r+1, 2s+1\rbrace \) . If \(r\ge s\) , then the height \(2r+1\) can only be reached by (ACD)odd trees, and if \(s\ge r\) , the height \(2s+1\) only by (ACD)even trees.
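The priority interval promised by the corollary can be read off directly from the numbers \(r\) and \(s\) . A one-line sketch (not from the paper; ties \(r=s\) are resolved in favour of the \([1,2r+1]\) bound, which is allowed since both bounds apply in that case):

```python
def parity_interval(r, s):
    """Interval of priorities sufficient for an equivalent parity
    condition, given r Rabin pairs and s Streett pairs."""
    return (1, 2 * r + 1) if r <= s else (0, 2 * s)
```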
From the last statement of proposition REF and thanks to the second item of proposition REF , we obtain:
Given a "transition system graph" \(G\) and a Muller condition \(\mathcal {F}\) over \(G\) , there is an equivalent \("\mathit {Weak}_k"\) condition over \(G\) if and only if there are both \([0,k]\) and "\([1,k+1]\) -parity" conditions "equivalent to" \(\mathcal {F}\) over \(G\) .
In particular, there is an equivalent "Weak condition" if and only if there are "Büchi" and "co-Büchi" conditions equivalent to \(\mathcal {F}\) over \(G\) .
It is important to notice that the previous results are stated for non-labelled transition systems. We must be careful when translating these results to automata and formal languages. For instance, in [42]} there is an example of a non-deterministic automaton \(\mathcal {A}\) , such that we can put on top of it Rabin and Streett conditions \(R\) and \(S\) such that \(\mathcal {L}(\mathcal {A},R)=\mathcal {L}(\mathcal {A},S)\) , but we cannot put a parity condition on top of it recognising the same language. However, proposition REF allows us to obtain analogous results for "deterministic automata".
[[42]}]
Let \(\mathcal {A}\) be the "transition system graph" of a "deterministic automaton" with set of states \(Q\) . Let \(R\) be a Rabin condition over \(\mathcal {A}\) with \(r\) pairs and \(S\) a Streett condition over \(\mathcal {A}\) with \(s\) pairs such that \(\mathcal {L}(\mathcal {A},R)=\mathcal {L}(\mathcal {A},S)\) . Then, there exists a parity condition \(p: Q \times \Sigma \rightarrow \mathbb {N} \) over \(\mathcal {A}\) such that \(\mathcal {L}(\mathcal {A},p)=\mathcal {L}(\mathcal {A},R)=\mathcal {L}(\mathcal {A},S)\) .
Moreover,
if \(r\le s\) , we can take \(p\) to be a "\([1,2r+1]\) -parity condition".
if \(s\le r\) , we can take \(p\) to be a "\([0,2s]\) -parity condition".
Proposition REF implies that \((\mathcal {A},R)"\simeq "(\mathcal {A},S)\) , and after corollary REF , there is a parity condition \(p\) using the claimed priorities such that \((\mathcal {A},p)"\simeq "(\mathcal {A},R)\) . Therefore \(\mathcal {L}(\mathcal {A},p)=\mathcal {L}(\mathcal {A},R)\) (since for both deterministic and non-deterministic automata, \((\mathcal {A},p)"\simeq "(\mathcal {A},R)\) implies \(\mathcal {L}(\mathcal {A},p)=\mathcal {L}(\mathcal {A},R)\) ).
Let \(\mathcal {A}\) be the "transition system graph" of a deterministic automaton and \(p\) and \(p^{\prime }\) be \([0,k]\) and "\([1,k+1]\) -parity conditions" respectively over \(\mathcal {A}\) such that \(\mathcal {L}(\mathcal {A},p)= \mathcal {L}(\mathcal {A},p^{\prime })\) . Then, there exists a \("\mathit {Weak}_k"\) condition \(W\) over \(\mathcal {A}\) such that \(\mathcal {L}(\mathcal {A},W)=\mathcal {L}(\mathcal {A},p)\) .
In particular, there is a \("\mathit {Weak}"\) condition \(W\) over \(\mathcal {A}\) such that \(\mathcal {L}(\mathcal {A},W)=L\) if and only if there are both "Büchi" and "co-Büchi" conditions \(B,B^{\prime }\) over \(\mathcal {A}\) such that \(\mathcal {L}(\mathcal {A},B)= \mathcal {L}(\mathcal {A},B^{\prime })=L\) .
It follows from proposition REF and corollary REF .
Conclusions
We have presented a transformation that, given a Muller "transition system", provides an equivalent "parity" transition system that has minimal size and uses an optimal number of priorities among those admitting a "locally bijective morphism" to the original Muller transition system. In order to describe this transformation we have introduced the "alternating cycle decomposition", a data structure that arranges all the information about the acceptance condition of the transition system and the interplay between this condition and the structure of the system.
We have shown in section how the alternating cycle decomposition can be useful to reason about acceptance conditions, and we hope that this representation of the information will be helpful in future works.
We have not discussed the complexity of effectively computing the "alternating cycle decomposition" of a Muller transition system. It is known that solving Muller games is \(\mathrm {PSPACE}\) -complete when the acceptance condition is given as a list of accepting sets of colours
[18]}. However, given a Muller game \(\mathcal {G}\) and the "Zielonka tree" of its Muller condition, we have a transformation into a parity game of size polynomial in the size of \(\mathcal {G}\) , so solving Muller games with this extra information is in \(\mathrm {NP}\cap \mathrm {co}\) -\(\mathrm {NP}\) . Also, in order to build \({\mathcal {ACD}}(\mathcal {G})\) we suppose that the Muller condition is expressed using as colours the set of edges of the game (that is, as an explicit Muller condition), and solving explicit Muller games is in \(\mathrm {PTIME}\) [1]}. Consequently, unless \(\mathrm {PSPACE}\) is contained in \(\mathrm {NP}\cap \mathrm {co}\) -\(\mathrm {NP}\) , we cannot compute the "Zielonka tree" of a Muller condition, nor the "alternating cycle decomposition" of a Muller transition system, in polynomial time.
\(\varphi _V(v_0)\in I_0^{\prime }\) for every \(v_0\in I_0\) (initial states are preserved).
\(\mathit {Source}^{\prime }(\varphi _E(e))=\varphi _V(\mathit {Source}(e))\) for every \(e\in E\) (origins of edges are preserved).
\(\mathit {Target}^{\prime }(\varphi _E(e))=\varphi _V(\mathit {Target}(e))\) for every \(e\in E\) (targets of edges are preserved).
For every "run" \(\varrho \in {\mathpzc {Run}}_{\mathcal {T}}\) , \(\varrho \in \mathit {Acc}\; \Leftrightarrow \; \varphi _E(\varrho ) \in \mathit {Acc}^{\prime }\) (acceptance condition is preserved). If \((\mathcal {T},l_V,l_E)\) , \((\mathcal {T}^{\prime },l_V^{\prime },l_E^{\prime })\) are "labelled transition systems", we say that \(\varphi \) is a ""morphism of labelled transition systems"" if in addition it verifies
\(l_V^{\prime }(\varphi _V(v))=l_V(v)\) for every \(v\in V\) (labels of states are preserved).
\(l_E^{\prime }(\varphi _E(e))=l_E(e)\) for every \(e\in E\) (labels of edges are preserved).
We remark that it follows from the first three conditions that if \(\varrho \in {\mathpzc {Run}}_{\mathcal {T}}\) is a "run" in \(\mathcal {T}\) , then \(\varphi _E(\varrho )\in {\mathpzc {Run}}_{\mathcal {T}^{\prime }}\) (it is a "run" in \(\mathcal {T}^{\prime }\) starting from some initial vertex). Given a "morphism of transition systems" \(\varphi =(\varphi _V,\varphi _E)\) , we will denote both maps by \(\varphi \) whenever no confusion arises. We extend \(\varphi _E\) to \(E^*\) and \(E^\omega \) componentwise.
A "morphism of transition systems" \(\varphi =(\varphi _V, \varphi _E)\) is unequivocally characterized by the map \(\varphi _E\) . Nevertheless, it is convenient to keep the notation with both maps.
Given two "transition systems" \(\mathcal {T}=(V,E,\mathit {Source},\mathit {Target},I_0,\mathit {Acc})\) , \(\mathcal {T}^{\prime }=(V^{\prime },E^{\prime },\mathit {Source}^{\prime },\mathit {Target}^{\prime },I_0^{\prime },\mathit {Acc}^{\prime })\) , a "morphism of transition systems" \(\varphi : \mathcal {T}\rightarrow \mathcal {T}^{\prime }\) is called
""Locally surjective"" if
For every \(v_0^{\prime }\in I_0^{\prime }\) there exists \(v_0\in I_0\) such that \(\varphi (v_0)=v_0^{\prime }\) .
For every \(v\in V\) and every \( e^{\prime }\in E^{\prime }\) such that \( \mathit {Source}^{\prime }(e^{\prime })=\varphi (v)\)
there exists \(e\in E \) such that \( \varphi (e)=e^{\prime } \) and \( \mathit {Source}(e)=v\) .
"Locally injective" if
For every \(v_0^{\prime }\in I_0^{\prime }\) , there is at most one \(v_0\in I_0\) such that \(\varphi (v_0)=v_0^{\prime }\) .
For every \( v\in V\) and every \( e^{\prime }\in E^{\prime } \) such that \( \mathit {Source}^{\prime }(e^{\prime })=\varphi (v) \)
if there are \( e_1,e_2\in E \) such that \( \varphi (e_i)=e^{\prime }\) and \( \mathit {Source}(e_i)=v\) , for \( i=1,2 \) , then \( e_1=e_2 \) .
"Locally bijective" if it is both "locally surjective" and "locally injective".
Equivalently, a "morphism of transition systems" \(\varphi \) is "locally surjective" (resp. injective) if the restriction of \(\varphi _E\) to \("\mathit {Out}"(v)\) is a surjection (resp. an injection) into \("\mathit {Out}"(\varphi (v))\) for every \(v\in V\) and the restriction of \(\varphi _V\) to \(I_0\) is a surjection (resp. an injection) into \(I_0^{\prime }\) .
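This equivalent formulation translates directly into a finite test. The sketch below is illustrative and not from the paper: edges are triples (source, label, target), the morphism is given by dictionaries `phi_v` and `phi_e`, and the condition on initial states is omitted. It verifies that \(\varphi _E\) restricted to \(\mathit {Out}(v)\) is a bijection onto \(\mathit {Out}(\varphi _V(v))\) for every vertex.

```python
from collections import defaultdict

def is_locally_bijective(vertices, edges, edges2, phi_v, phi_e):
    """Check local bijectivity of a morphism (initial states ignored).
    Edges are triples (source, label, target)."""
    out1, out2 = defaultdict(list), defaultdict(list)
    for e in edges:
        out1[e[0]].append(e)
    for e in edges2:
        out2[e[0]].append(e)
    for v in vertices:
        images = [phi_e[e] for e in out1[v]]
        if len(set(images)) != len(images):      # locally injective fails
            return False
        if set(images) != set(out2[phi_v[v]]):   # locally surjective fails
            return False
    return True
```

For example, a two-state cycle maps locally bijectively onto a one-state self-loop, but the test fails as soon as the target vertex acquires an extra outgoing edge with no preimage.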
If we only consider the "underlying graph" of a "transition system", without the "accepting condition", the notion of "locally bijective morphism" is equivalent to the usual notion of bisimulation. However, when considering the accepting condition, we only impose that the acceptance of each "run" must be preserved (and not that the colouring of each transition is preserved). This allows us to compare transition systems using different classes of accepting conditions.
We state two simple, but key facts.
If \(\varphi : \mathcal {T}\rightarrow \mathcal {T}^{\prime }\) is a "locally bijective morphism", then \(\varphi \) induces a bijection between the runs in \({\mathpzc {Run}}_{\mathcal {T}}\) and \({\mathpzc {Run}}_{\mathcal {T}^{\prime }}\) that preserves their acceptance.
If \(\varphi \) is a "locally surjective morphism", then it is onto the "accessible part" of \(\mathcal {T}^{\prime }\) . That is, for every "accessible" state \(v^{\prime }\in V^{\prime }\) , there exists some state \(v\in V\) such that \(\varphi _V(v)=v^{\prime }\) . In particular, if every state of \(\mathcal {T}^{\prime }\) is "accessible", \(\varphi \) is surjective.
Intuitively, if we transform a "transition system" \(\mathcal {T}_1\) into \(\mathcal {T}_2\) “without adding non-determinism”, we will have a locally bijective morphism \(\varphi : \mathcal {T}_2 \rightarrow \mathcal {T}_1\) . In particular, if we consider the "composition" \(\mathcal {T}_2=\mathcal {B}\lhd \mathcal {T}_1\) of \(\mathcal {T}_1\) by some "deterministic automaton" \(\mathcal {B}\) , as defined in section , the projection over \(\mathcal {T}_1\) gives a "locally bijective morphism" from \(\mathcal {T}_2\) to \(\mathcal {T}_1\) .
Let \(\mathcal {A}\) be the "Muller automaton" presented in example REF , and \(\mathcal {Z}_{\mathcal {F}_1}\) the "Zielonka tree automaton" for its Muller condition \(\mathcal {F}_1=\lbrace \lbrace a\rbrace ,\lbrace b\rbrace \rbrace \) as in figure REF . We show them in figure REF and their "composition" \(\mathcal {Z}_{\mathcal {F}_1}\lhd \mathcal {A}\) in figure REF . If we name the states of \(\mathcal {A}\) with the letters \(A\) and \(B\) , and those of \(\mathcal {Z}_{\mathcal {F}_1}\) with \(\alpha ,\beta \) , there is a locally bijective morphism \(\varphi : \mathcal {Z}_{\mathcal {F}_1}\lhd \mathcal {A}\rightarrow \mathcal {A}\) given by the projection on the first component
\( \varphi _V((X,y))=X \; \text{ for } X\in \lbrace A,B\rbrace ,\, y\in \lbrace \alpha ,\beta \rbrace \)
and \(\varphi _E\) associates to each edge \(e\in \mathit {Out}(X,y)\) labelled by \(a\in \lbrace 0,1\rbrace \) the only edge in \(\mathit {Out}(X)\) labelled with \(a\) .
<FIGURE><FIGURE>We know that \(\mathcal {Z}_{\mathcal {F}_1}\) is a minimal automaton recognizing the "Muller condition" \(\mathcal {F}_1\) (theorem REF ). However, the "composition" \(\mathcal {Z}_{\mathcal {F}_1} \lhd \mathcal {A}\) has 4 states, and in example REF (figure REF ) we have shown a parity automaton recognizing \(\mathcal {L}(\mathcal {A})\) with only 3 states. Moreover, there is a "locally bijective" morphism from this smaller parity automaton to \(\mathcal {A}\) (we only have to send the two states on the left to \(A\) and the state on the right to \(B\) ). In the next section we will show a transformation that produces the parity automaton with only 3 states starting from \(\mathcal {A}\) .
Morphisms of automata and games
Before presenting the optimal transformation of Muller transition systems, we will state some facts about "morphisms" in the particular case of "automata" and "games". When we speak about a "morphism" between two automata, we always refer implicitly to the morphism between the corresponding "labelled transition systems", as explained in "example REF ".
A "morphism" \(\varphi =(\varphi _V,\varphi _E)\) between two "deterministic automata" is always "locally bijective" and it is completely characterized by the map \(\varphi _V\) .
For each letter of the input alphabet and each state, there must be one and only one outgoing transition labelled with this letter.
Let \(\mathcal {A}=(Q,\Sigma , I_0, \Gamma , \delta , \mathit {Acc})\) , \(\mathcal {A}^{\prime }=(Q^{\prime },\Sigma , I_0^{\prime }, \Gamma , \delta ^{\prime }, \mathit {Acc}^{\prime })\) be two (possibly non-deterministic) "automata". If there is a "locally surjective morphism" \(\varphi : \mathcal {A}\rightarrow \mathcal {A}^{\prime }\) , then \("\mathcal {L}(\mathcal {A})"="\mathcal {L}(\mathcal {A}^{\prime })"\) .
Let \(u\in \Sigma ^\omega \) . If \(u\in \mathcal {L}(\mathcal {A})\) there is an accepting run, \(\varrho \) , over \(u\) in \(\mathcal {A}\) . By the definition of a "morphism of labelled transition systems", \(\varphi (\varrho )\) is also an accepting "run over \(u\) " in \(\mathcal {A}^{\prime }\) .
Conversely, if \(u\in \mathcal {L}(\mathcal {A}^{\prime })\) there is an accepting "run over \(u\) " \(\varrho ^{\prime }\) in \(\mathcal {A}^{\prime }\) . Since \(\varphi \) is locally surjective there is a run \(\varrho \) in \(\mathcal {A}\) , such that \(\varphi (\varrho )=\varrho ^{\prime }\) , and therefore \(\varrho \) is an accepting run over \(u\) .
The converse of the previous proposition does not hold: \("\mathcal {L}(\mathcal {A})"=\mathcal {L}(\mathcal {A}^{\prime })\) does not imply the existence of morphisms \(\varphi : \mathcal {A}\rightarrow \mathcal {A}^{\prime }\) or \(\varphi : \mathcal {A}^{\prime } \rightarrow \mathcal {A}\) , even if \(\mathcal {A}\) has minimal size among the Muller automata recognizing \(\mathcal {L}(\mathcal {A})\) .
If \(\mathcal {A}\) and \(\mathcal {A}^{\prime }\) are "non-deterministic automata" and \(\varphi :\mathcal {A}\rightarrow \mathcal {A}^{\prime }\) is a "locally bijective morphism", then \(\mathcal {A}\) and \(\mathcal {A}^{\prime }\) have to share some other important semantic properties. Two classes of automata that have been extensively studied are unambiguous and good for games automata. An automaton is ""unambiguous"" if for every input word \(w\in \Sigma ^\omega \) there is at most one accepting "run over" \(w\) . ""Good for games"" automata (GFG), first introduced by Henzinger and Piterman in [1]}, are automata that can resolve the non-determinism depending only in the prefix of the word read so far. These types of automata have many good properties and have been used in different contexts (as for example in the model checking of LTL formulas [2]} or in the theory of cost functions [3]}). Unambiguous automata can recognize \(\omega \) -regular languages using a "Büchi" condition (see [4]}) and GFG automata have strictly more expressive power than deterministic ones, being in some cases exponentially smaller (see [5]}, [6]}).
We omit the proof of the next proposition, being a consequence of fact REF and of the argument from the proof of proposition REF .
Let \(\mathcal {A}\) and \(\mathcal {A}^{\prime }\) be two "non-deterministic automata". If \(\varphi :\mathcal {A}\rightarrow \mathcal {A}^{\prime }\) is a "locally bijective morphism", then
\(\mathcal {A}\) is unambiguous if and only if \(\mathcal {A}^{\prime }\) is unambiguous.
\(\mathcal {A}\) is GFG if and only if \(\mathcal {A}^{\prime }\) is GFG.
Having a "locally bijective morphism" between two games implies that the "winning regions" of the players are preserved.
Let \(\mathcal {G}=(V, E, \mathit {Source}, \mathit {Target}, v_0, \mathit {Acc}, l_V)\) and \(\mathcal {G}^{\prime }=(V^{\prime },E^{\prime }, \mathit {Source}^{\prime }, \mathit {Target}^{\prime }, v_0^{\prime }, \mathit {Acc}^{\prime }, l_V^{\prime })\) be two "games" such that there is a "locally bijective morphism" \(\varphi :\mathcal {G}\rightarrow \mathcal {G}^{\prime }\) . Let \(P\in \lbrace Eve, Adam\rbrace \) be a player in those games. Then, \(P\) wins \(\mathcal {G}\) if and only if she/he wins \(\mathcal {G}^{\prime }\) . Moreover, if \(\varphi \) is surjective, the "winning region" of \(P\) in \(\mathcal {G}^{\prime }\) is the image by \(\varphi \) of her/his winning region in \(\mathcal {G}\) , \("\mathcal {W}_P"(\mathcal {G}^{\prime })=\varphi ("\mathcal {W}_P"(\mathcal {G}))\) .
Let \(S_P: {\mathpzc {Run}}_{\mathcal {G}}\cap E^* \rightarrow E\) be a winning "strategy" for player \(P\) in \(\mathcal {G}\) . Then, it is easy to verify that the strategy \(S_P^{\prime }: {\mathpzc {Run}}_{\mathcal {G}^{\prime }}\cap E^{\prime *} \rightarrow E^{\prime }\) defined as
\( S_P^{\prime }(\varrho ^{\prime }) = \varphi _E ( S_P(\varphi ^{-1}(\varrho ^{\prime }))) \)
is a winning "strategy" for \(P\) in \(\mathcal {G}^{\prime }\) . (Remark that thanks to fact REF , the morphism \(\varphi \) induces a bijection over "runs", allowing us to use \(\varphi ^{-1}\) in this case).
Conversely, if \(S_P^{\prime }: {\mathpzc {Run}}_{\mathcal {G}^{\prime }}\cap E^{\prime *} \rightarrow E^{\prime }\) is a winning "strategy" for \(P\) in \(\mathcal {G}^{\prime }\) , then \( S_P(\varrho ) = \varphi _E^{-1} ( S_P^{\prime }(\varphi (\varrho ))) \)
is a winning "strategy" for \(P\) in \(\mathcal {G}\) . Here \( \varphi _E^{-1} (e^{\prime })\) is the only edge \(e\in E\) in \("\mathit {Out}"(\mathit {Target}("\mathit {Last}"(\varrho )))\) such that \(\varphi _E(e)=e^{\prime }\) .
The equality \("\mathcal {W}_P"(\mathcal {G}^{\prime })=\varphi ("\mathcal {W}_P"(\mathcal {G}))\) stems from the fact that if we choose a different initial vertex \(v_1\) in \(\mathcal {G}\) , then \(\varphi \) is a "locally bijective morphism" to the game \(\mathcal {G}^{\prime }\) with initial vertex \(\varphi (v_1)\) . Conversely, if we take a different initial vertex \(v_1^{\prime }\) in \(\mathcal {G}^{\prime }\) , since \(\varphi \) is surjective we can take a vertex \(v_1\in \varphi ^{-1}(v_1^{\prime })\) , and \(\varphi \) remains a locally bijective morphism between the resulting games.
The alternating cycle decomposition
Most transformations of "Muller" into "parity" "transition systems" are based on the "composition" by some automaton converting the Muller condition into a parity one. These transformations act on the totality of the system uniformly, regardless of the local structure of the system and the "acceptance condition".
The transformation we introduce in this section takes into account the interplay between the particular "acceptance condition" and the "transition system", inspired by the alternating chains introduced in [7]}.
In the following we will consider "Muller transition systems" with the Muller acceptance condition using edges as colours. We can always suppose this, since given a transition system \(\mathcal {T}\) with edges coloured by \(\gamma : E\rightarrow C\) and a Muller condition \(\mathcal {F}\subseteq \mathcal {P}(C)\) , the condition \(\mathcal {F}^{\prime }\subseteq \mathcal {P}(E)\) defined as \(A\in \mathcal {F}^{\prime }\; \Leftrightarrow \; \gamma (A)\in \mathcal {F}\) is an "equivalent condition over" \(\mathcal {T}\) . However, the size of the representation of the condition \(\mathcal {F}\) might change. Making this assumption corresponds to considering what are called explicit Muller conditions. In particular, solving Muller games with explicit Muller conditions is in \(\mathrm {PTIME}\) [8]}, while solving general Muller games is \(\mathrm {PSPACE}\) -complete [9]}.
Given a "transition system" \(\mathcal {T}=(V,E,\mathit {Source},\mathit {Target},I_0, \mathit {Acc})\) , a loop is a subset of edges \(l\subseteq E\) such that there exist \(v\in V\) and a finite "run" \(\varrho \in {\mathpzc {Run}}_{\mathcal {T},v}\) such that \("\mathit {First}"(\varrho )="\mathit {Last}"(\varrho )=v\) and \({\mathit {App}}(\varrho )=l\) . The set of "loops" of \(\mathcal {T}\) is denoted \({\mathpzc {Loop}}(\mathcal {T})\) . For a "loop" \(l\in {\mathpzc {Loop}}(\mathcal {T})\) we write
\( ""\mathit {States}""(l):= \lbrace v\in V \; : \; \exists e\in l, \; \mathit {Source}(e)=v \rbrace .\)
Observe that there is a natural partial order in the set \({\mathpzc {Loop}}(\mathcal {T})\) given by set inclusion.
If \(l\) is a "loop" in \({\mathpzc {Loop}}(\mathcal {T})\) , for every \(q\in {\mathit {States}}(l)\) there is a run \(\varrho \in {\mathpzc {Run}}_{\mathcal {T},q}\) such that \(\mathit {App}(\varrho )=l\) .
The maximal loops of \({\mathpzc {Loop}}(\mathcal {T})\) (for set inclusion) are disjoint and in one-to-one correspondence with the "strongly connected components" of \(\mathcal {T}\) .
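By this fact, maximal loops can be computed in linear time with any algorithm for strongly connected components. The following sketch (illustrative only, not from the paper; it uses Kosaraju's algorithm with recursive DFS, so it is suited to small systems, and trivial components without internal edges yield no loop) returns the maximal loops as frozensets of edge triples (source, label, target):

```python
def maximal_loops(vertices, edges):
    """Maximal loops of a transition system: the edge sets internal to
    each strongly connected component (Kosaraju's algorithm)."""
    adj = {v: [] for v in vertices}
    radj = {v: [] for v in vertices}
    for s, _, t in edges:
        adj[s].append(t)
        radj[t].append(s)

    seen, order = set(), []
    def dfs1(v):                      # first pass: post-order on adj
        seen.add(v)
        for w in adj[v]:
            if w not in seen:
                dfs1(w)
        order.append(v)
    for v in vertices:
        if v not in seen:
            dfs1(v)

    comp = {}
    def dfs2(v, c):                   # second pass: reverse graph
        comp[v] = c
        for w in radj[v]:
            if w not in comp:
                dfs2(w, c)
    c = 0
    for v in reversed(order):
        if v not in comp:
            dfs2(v, c)
            c += 1

    loops = {}                        # edges whose endpoints share an SCC
    for e in edges:
        if comp[e[0]] == comp[e[2]]:
            loops.setdefault(comp[e[0]], set()).add(e)
    return [frozenset(l) for l in loops.values()]
```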
[Alternating cycle decomposition]
Let \(\mathcal {T}=(V,E,\mathit {Source},\mathit {Target},I_0, \mathcal {F})\) be a "Muller transition system" with "acceptance condition" given by \(\mathcal {F}\subseteq \mathcal {P}(E)\) . The alternating cycle decomposition (abbreviated ACD) of \(\mathcal {T}\) , noted \({\mathcal {ACD}}(\mathcal {T})\) , is a family of "labelled trees" \((t_1, \nu _1),\dots , (t_r,\nu _r)\) with nodes labelled by "loops" in \({\mathpzc {Loop}}(\mathcal {T})\) , \(\nu _i: t_i\rightarrow {\mathpzc {Loop}}(\mathcal {T})\) . We define it inductively as follows:
Let \(\lbrace l_1,\dots , l_r\rbrace \) be the set of maximal loops of \({\mathpzc {Loop}}(\mathcal {T})\) . For each \(i\in \lbrace 1,\dots , r\rbrace \) we consider a "tree" \(t_i\) and define \(\nu _i(\varepsilon )=l_i\) .
Given an already defined node \(\tau \) of a tree \(t_i\) , we consider the maximal loops of the set
\(\lbrace l\subseteq \nu _i(\tau ) \; : \; l\in {\mathpzc {Loop}}(\mathcal {T}) \text{ and } l \in \mathcal {F}\; \Leftrightarrow \; \nu _i(\tau ) \notin \mathcal {F}\rbrace \)
and for each of these loops \(l\) we add a child to \(\tau \) in \(t_i\) labelled by \(l\) .
For notational convenience we add a special "tree" \((t_0,\nu _0)\) with a single node \(\varepsilon \) labelled with the edges not appearing in any other tree of the forest, i.e., \(\nu _0(\varepsilon )=E \setminus \bigcup _{i=1}^{r}l_i\) (remark that this is not a "loop").
We define \(\mathit {States}(\nu _0(\varepsilon )):= V\setminus \bigcup _{i=1}^{r}\mathit {States}(l_i)\) (remark that this does not follow the general definition of \("\mathit {States}"()\) for loops).
We call the trees \(t_1,\dots , t_r\) the ""proper trees"" of the "alternating cycle decomposition" of \(\mathcal {T}\) . Given a node \(\tau \) of \(t_i\) , we note \({\mathit {States}}_i(\tau ):={\mathit {States}}(\nu _i(\tau ))\) .
As for the "Zielonka tree", the "alternating cycle decomposition" of \(\mathcal {T}\) is not unique, since it depends on the order in which we introduce the children of each node. This will not affect the upcoming results, and we will refer to it as “the” alternating cycle decomposition of \(\mathcal {T}\) .
For the rest of the section we fix a "Muller transition system" \((V,E,\mathit {Source},\mathit {Target}, I_0, \mathcal {F})\) with the "alternating cycle decomposition" given by \((t_0,\nu _0), (t_1,\nu _1),\dots , (t_r,\nu _r)\) .
The "Zielonka tree" for a "Muller condition" \(\mathcal {F}\) over the set of colours \(C\) can be seen as a special case of this construction, for the automaton with a single state, input alphabet \(C\) , a transition for each letter in \(C\) and "acceptance condition" \(\mathcal {F}\) .
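The inductive definition can be phrased as a short recursion once the set of loops and the explicit condition \(\mathcal {F}\) are available. The sketch below is not from the paper: it represents a tree as a pair (label, list of children) and brute-forces maximality over the given loop set, which is exponential in general but enough to illustrate the construction. Instantiated on a one-state automaton, where the loops are all non-empty subsets of the colours, it computes exactly the "Zielonka tree" of the previous remark.

```python
def acd_tree(root_loop, loops, accepting):
    """Build the subtree of the ACD rooted at a maximal loop.

    loops     -- all loops of T, as frozensets of edges
    accepting -- the explicit Muller condition F (set of frozensets)
    Returns nested (loop, [children]) pairs."""
    status = root_loop in accepting
    # proper subloops whose accepting status alternates with the parent's
    alt = [l for l in loops
           if l < root_loop and (l in accepting) != status]
    # keep only the maximal ones: they label the children
    maximal = [l for l in alt if not any(l < m for m in alt)]
    return (root_loop, [acd_tree(l, loops, accepting) for l in maximal])
```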
Each state and edge of \(\mathcal {T}\) appears in exactly one of the "trees" of \({\mathcal {ACD}}(\mathcal {T})\) .
The ""index"" of a state \(q\in V\) (resp. of an edge \(e\in E\) ) in \({\mathcal {ACD}}(\mathcal {T})\) is the only number \(j\in \lbrace 0,1,\dots ,r\rbrace \) such that \(q\in "\mathit {States}"_j(\varepsilon )\) (resp. \(e \in \nu _j(\varepsilon )\) ).
For each state \(q\in V\) of "index" \(j\) we define the ""subtree associated to the state \(q\) "" as the "subtree" \(t_q\) of \(t_j\) consisting in the set of nodes \(\lbrace \tau \in t_j \; : \; q\in {\mathit {States}}_j(\tau ) \rbrace \) .
We refer to figures REF and REF for an example of \("t_q"\) .
For each "proper tree" \(t_i\) of \({\mathcal {ACD}}(\mathcal {T})\) we say that \(t_i\) is (ACD)even if \(\nu _i(\varepsilon )\in \mathcal {F}\) and that it is (ACD)odd if \(\nu _i(\varepsilon )\notin \mathcal {F}\) .
We say that the "alternating cycle decomposition" of \(\mathcal {T}\) is \emph {even} if all the trees of maximal "height" of \({\mathcal {ACD}}(\mathcal {T})\) are even; that it is \emph {odd} if all of them are odd, and that it is \emph {(ACD)ambiguous} if there are even and odd trees of maximal "height".
For each \(\tau \in t_i\) , \(i=1,\dots ,r\) , we define the (node)priority of \(\tau \) in \(t_i\) , written \(p_i(\tau )\) as follows:
If \({\mathcal {ACD}}(T)\) is even or ambiguous:
If \(t_i\) is even (\(\nu _i(\varepsilon )\in \mathcal {F}\) ), then \(p_i(\tau ):={\mathit {Depth}}(\tau )=|\tau |\) .
If \(t_i\) is odd (\(\nu _i(\varepsilon )\notin \mathcal {F}\) ), then \(p_i(\tau ):={\mathit {Depth}}(\tau )+1=|\tau |+1\) .
If \({\mathcal {ACD}}(T)\) is odd:
If \(t_i\) is even (\(\nu _i(\varepsilon )\in \mathcal {F}\) ), then \(p_i(\tau ):={\mathit {Depth}}(\tau )+2=|\tau |+2\) .
If \(t_i\) is odd (\(\nu _i(\varepsilon )\notin \mathcal {F}\) ), then \(p_i(\tau ):={\mathit {Depth}}(\tau )+1=|\tau |+1\) .
For \(i=0\) , we define \(p_0(\varepsilon )=0\) if \("{\mathcal {ACD}}"(T)\) is even or ambiguous, and \(p_0(\varepsilon )=1\) if \("{\mathcal {ACD}}"(T)\) is odd.
The assignment of priorities to nodes produces a labelling of the levels of each tree. It will be used to determine the priorities needed by a parity "transition system" to simulate \(T\) . The distinction between the cases \({\mathcal {ACD}}(T)\) even or odd is added only to obtain the minimal number of priorities in every case.
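To make the case analysis concrete, here is a small Python sketch (the function name and the string encoding of the parities are ours, not from the text) computing the priority of a node from its depth, the parity of its tree \(t_i\) and the parity of the whole decomposition:

```python
# Hypothetical helper computing p_i(tau) as defined above.
# `depth` is |tau|; `tree_even` tells whether nu_i(epsilon) is in F;
# `acd` is "even", "odd" or "ambiguous" for the whole decomposition.
def node_priority(depth, tree_even, acd):
    if acd in ("even", "ambiguous"):
        return depth if tree_even else depth + 1
    # ACD odd: even trees are shifted by 2 so the range of priorities
    # starts at an odd number, saving one priority overall.
    return depth + 2 if tree_even else depth + 1
```

For instance, in an odd decomposition the root of an odd tree gets priority 1 while the root of an even tree gets priority 2, matching the example below where the levels of \(t_1\) are labelled starting from 2.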
In figure REF we represent a "transition system" \((V,E,\mathit {Source},\mathit {Target},q_0,\mathcal {F})\) with \(V=\lbrace q_0,q_1,q_2,q_3,q_4,q_5\rbrace \) , \(E=\lbrace a,b,\dots ,j,k\rbrace \) and using the "Muller condition"
\(\mathcal {F}=\lbrace \lbrace c,d,e \rbrace ,\lbrace e \rbrace ,\lbrace g,h,i \rbrace ,\lbrace l \rbrace ,\lbrace h,i,j,k \rbrace ,\lbrace j,k \rbrace \rbrace .\)
It has 2 strongly connected components (with vertices \(S_1=\lbrace q_1,q_2\rbrace , S_2=\lbrace q_3,q_4,q_5\rbrace \) ), and a vertex \(q_0\) that does not belong to any strongly connected component.
The "alternating cycle decomposition" of this transition system is shown in figure REF . It consists of two proper "trees", \(t_1\) and \(t_2\) , corresponding to the strongly connected components of \(T\) , and the tree \(t_0\) that corresponds to the edges not appearing in the strongly connected components. We observe that \({\mathcal {ACD}}(T)\) is odd (\(t_2\) is the highest tree, and it starts with a non-accepting "loop"). For this reason we start labelling the levels of \(t_1\) from 2 (if we had assigned priorities \(0,1\) to the nodes of \(t_1\) , we would have used 4 priorities, when only 3 are strictly necessary).
In figure REF we show the "subtree associated to" \(q_4\) .
<FIGURE><FIGURE><FIGURE>
The alternating cycle decomposition transformation
We proceed to show how to use the "alternating cycle decomposition" of a "Muller transition system" to obtain a "parity transition system". Let \((V,E,\mathit {Source},\mathit {Target},I_0, \mathcal {F})\) be a "Muller transition system" and \((t_0,\nu _0), (t_1, \nu _1),\dots , (t_r,\nu _r)\) , its "alternating cycle decomposition".
First, we adapt the definitions of \(\mathit {Supp}\) and \(\mathit {Nextbranch}\) to the setting with multiple trees.
For an edge \(e\in E\) such that \(\mathit {Target}(e)\) has "index" \(j\) , for \(i\in \lbrace 0,1,\dots ,r\rbrace \) and a branch \(\beta \) in some subtree of \(t_i\) , we define the ""support"" of \(e\) from \(\beta \) as:
\( {\mathit {Supp}}(\beta ,i,e)={\left\lbrace \begin{array}{ll}\text{the maximal node (for } {\sqsubseteq }\text{) } \tau \in \beta \text{ such that } e\in \nu _i(\tau ), & \text{ if } i= j, \\[2mm]\text{the root } \varepsilon \text{ of } t_j, & \text{ if } i\ne j.\end{array}\right.} \)
Intuitively, \({\mathit {Supp}}(\beta ,i,e)\) is the highest node we visit if we want to go from the bottom of the branch \(\beta \) to a node of the tree that contains \(e\) “in an optimal trajectory” (going up as little as possible). If we have to jump to another tree, we define \({\mathit {Supp}}(\beta ,i,e)\) as the root of the destination tree.
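As an illustration, the following Python sketch (the data encoding is ours: a branch is given as the list of pairs \((\tau , \nu _i(\tau ))\) ordered from the leaf up to the root) computes the support in the case \(i=j\) , returning the deepest node of the branch whose label contains \(e\) :

```python
# Toy sketch of Supp(beta, i, e) for the case where e has the same
# index as the branch; otherwise the support is the root of t_j.
def supp(branch, e):
    """branch: list of (node, edge_set), ordered leaf -> root."""
    for node, edges in branch:        # climb up as little as possible
        if e in edges:
            return node
    raise ValueError("an edge of index i must occur in the root label")

# A branch of a tree whose root is labelled {a, b, c} and whose leaf
# is labelled {a} (names are illustrative):
beta = [("leaf", {"a"}), ("root", {"a", "b", "c"})]
```

Here `supp(beta, "a")` stays at the leaf, while `supp(beta, "b")` climbs up to the root.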
Let \(i\in \lbrace 0,1,\dots ,r\rbrace \) , \(q\) be a state of "index" \(i\) , \(\beta \) be a branch of some "subtree" of \(t_i\) and \(\tau \in \beta \) be a node of \(t_i\) such that \(q\in {\mathit {States}}_i(\tau )\) . If \(\tau \) is not the deepest node of \(\beta \) , let \(\sigma _\beta \) be the unique child of \(\tau \) in \(t_i\) such that \(\sigma _\beta \in \beta \) . We define:
\( {\mathit {Nextchild}_{t_q}}(\beta ,\tau )={\left\lbrace \begin{array}{ll}\tau , & \text{if } \tau \text{ is a leaf in } t_q,\\[2mm]\text{smallest older sibling of } \sigma _\beta \text{ in } t_q, & \text{if } \sigma _\beta \text{ is defined and has an older sibling in } t_q,\\[2mm]\text{smallest child of } \tau \text{ in } t_q, & \text{in any other case}.\end{array}\right.} \)
Let \(i\in \lbrace 0,1,\dots ,r\rbrace \) and \(\beta \) be a branch of some "subtree" of \(t_i\) . For a state \(q\) of "index" \(j\) and a node \(\tau \) such that \(q\in {\mathit {States}}_j(\tau )\) and such that \(\tau \in \beta \) if \(i=j\) , we define:
\( {\mathit {Nextbranch}_{t_q}}(\beta ,i,\tau )= {\left\lbrace \begin{array}{ll}\text{the leftmost branch in } t_q \text{ below } {\mathit {Nextchild}_{t_q}}(\beta ,\tau ), & \text{ if } i= j, \\[2mm]\text{the leftmost branch in } {\mathit {Subtree}}_{t_q}(\tau ), & \text{ if } i\ne j.\end{array}\right.} \)
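The following Python fragment sketches \(\mathit {Nextchild}_{t_q}\) under an ad-hoc encoding (ours): the children of \(\tau \) in \(t_q\) are given as an ordered list, and `sigma_beta` is the child of \(\tau \) lying on the current branch, or `None` if \(\tau \) is the deepest node of \(\beta \) :

```python
# Hypothetical sketch of Nextchild_{t_q}(beta, tau).
# `children`: ordered list of the children of tau in t_q (empty if
# tau is a leaf); `sigma_beta`: child of tau on the branch, or None.
def nextchild(tau, children, sigma_beta):
    if not children:
        return tau                      # tau is a leaf in t_q
    if sigma_beta in children:
        i = children.index(sigma_beta)
        if i + 1 < len(children):
            return children[i + 1]      # smallest older sibling
    return children[0]                  # smallest child otherwise
```

Note the cyclic behaviour: once the last child has been visited, the search wraps around to the smallest child, which is what makes every branch below \(\tau \) visited again and again along a run.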
[ACD-transformation]
Let \((V,E,\mathit {Source},\mathit {Target},I_0, \mathcal {F})\) be a "Muller transition system" with "alternating cycle decomposition" \({\mathcal {ACD}}(T)= \lbrace (t_0,\nu _0),(t_1,\nu _1),\dots ,(t_r,\nu _r)\rbrace \) . We define its ""ACD-parity transition system"" (or ACD-transformation) \({\mathcal {P}_{\mathcal {ACD}(T)}}=(V_P,E_P,\mathit {Source}_P,\mathit {Target}_P,I_0^{\prime }, p:E_P\rightarrow \mathbb {N} )\) as follows:
\(V_P=\lbrace (q,i,\beta ) \; : \; q\in V \text{ of "index" } i \text{ and } \beta \in {\mathit {Branch}}({t_q}) \rbrace \) .
For each node \((q,i,\beta )\in V_P\) and each edge \(e\in {\mathit {Out}}(q)\) we define an edge \(e_{i,\beta }\) from \((q,i,\beta )\) . We set
\(\mathit {Source}_P(e_{i,\beta })=(q,i,\beta )\) , where \(q=\mathit {Source}(e)\) .
\(\mathit {Target}_P(e_{i,\beta })=(q^{\prime },k,{\mathit {Nextbranch}_{t_{q^{\prime }}}}(\beta ,i,\tau ))\) , where \(q^{\prime }=\mathit {Target}(e)\) , \(k\) is its "index" and \(\tau ={\mathit {Supp}}(\beta ,i,e)\) .
\(p(e_{i,\beta })={p_j}({\mathit {Supp}}(\beta ,i,e))\) , where \(j\) is the "index" of \({\mathit {Supp}}(\beta ,i,e)\) .
\(I_0^{\prime }=\lbrace (q_0,i,\beta _0) \; : \; q_0\in I_0, \, i \text{ the index of } q_0\) and \(\beta _0\) the leftmost branch in \({t_{q_0}}\rbrace \) .
If \(T\) is labelled by \(l_V:V\rightarrow L_V\) and \(l_E:E\rightarrow L_E\) , we label \({\mathcal {P}_{\mathcal {ACD}(T)}}\) by \(l_V^{\prime }((q,i,\beta ))=l_V(q)\) and \(l_E^{\prime }(e_{i,\beta })=l_E(e)\) .
The set of states of \(\mathcal {P}_{\mathcal {ACD}(T)}\) is built as follows: for each state \(q\in V\) we consider the subtree of \({\mathcal {ACD}}(T)\) consisting of the nodes with \(q\) in their label, and we add a state for each branch of this subtree. Intuitively, to define transitions in the transition system \({\mathcal {P}_{\mathcal {ACD}(T)}}\) we move simultaneously in \(T\) and in \({\mathcal {ACD}}(T)\) . We start from \(q_0\in I_0\) and from the leftmost branch of \(t_{q_0}\) . When we take a transition \(e\) in \(T\) while being in a branch \(\beta \) , we climb the branch \(\beta \) searching for a node \(\tau \) with \(q^{\prime }=\mathit {Target}(e)\) and \(e\) in its label, and we produce the priority corresponding to the level reached. If no such node exists, we jump to the root of the tree corresponding to \(q^{\prime }\) . Then, we move to the next child of \(\tau \) to the right of \(\beta \) in the tree \(t_{q^{\prime }}\) , and we pick the leftmost branch under it in \(t_{q^{\prime }}\) . If we had jumped to the root of \(t_{q^{\prime }}\) from a different tree, we pick the leftmost branch of \(t_{q^{\prime }}\) .
The size of \({\mathcal {P}_{\mathcal {ACD}(T)}}\) is
\( |\mathcal {P}_{\mathcal {ACD}(T)}|=\sum \limits _{q\in V} |{\mathit {Branch}}({t_q})|. \)
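This formula is easy to evaluate once the subtrees \(t_q\) are known. A minimal Python sketch (tree encoding ours: a node is a dict from child names to subtrees, a leaf is the empty dict); the shapes below mirror the example discussed in this section, with one branch for \(q_0,q_1,q_2\) , three for \(q_3\) and two for \(q_4\) and \(q_5\) :

```python
# Count |Branch(t_q)| for each state and sum, per the formula above.
def num_branches(tree):
    if not tree:
        return 1                          # a leaf closes one branch
    return sum(num_branches(child) for child in tree.values())

def acd_size(subtrees):                   # subtrees: state -> t_q
    return sum(num_branches(t) for t in subtrees.values())

# Illustrative subtree shapes (child names are arbitrary placeholders):
subtrees = {"q0": {}, "q1": {}, "q2": {},
            "q3": {"x": {}, "y": {}, "z": {}},
            "q4": {"x": {}, "y": {}},
            "q5": {"x": {}, "y": {}}}
```

With these shapes `acd_size(subtrees)` gives the 10 states of the example.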
The number of priorities used by \({\mathcal {P}_{\mathcal {ACD}(T)}}\) is the "height" of a maximal tree of \({\mathcal {ACD}}(T)\) if \({\mathcal {ACD}}(T)\) is even or odd, and the "height" of a maximal tree plus one if \({\mathcal {ACD}}(T)\) is ambiguous.
In figure REF we show the "ACD-parity transition system" \({\mathcal {P}_{\mathcal {ACD}(T)}}\) of the transition system of example REF (figure REF ). States are labelled with the corresponding state \(q_j\) in \(T\) , the tree of its "index" and a node of \(t_i\) that is a leaf in \(t_{q_j}\) (defining a branch of it). We have tagged the edges of \({\mathcal {P}_{\mathcal {ACD}(T)}}\) with the names of the edges of \(T\) (even if it is not an automaton). These indicate the image of the edges by the "morphism" \(\varphi : \mathcal {P}_{\mathcal {ACD}(T)} \rightarrow T\) , and make clear the bijection between "runs" in \(T\) and in \(\mathcal {P}_{\mathcal {ACD}(T)}\) . In this example, we create one “copy” of states \(q_0,q_1\) and \(q_2\) , three “copies” of the state \(q_3\) and two “copies” of states \(q_4\) and \(q_5\) . The resulting "parity transition system" \({\mathcal {P}_{\mathcal {ACD}(T)}}\) has therefore 10 states.
<FIGURE>Let \(\mathcal {A}\) be the "Muller automaton" of example "REF ". Its "alternating cycle decomposition" has a single tree that coincides with the "Zielonka tree" of its "Muller acceptance condition" \(\mathcal {F}_1\) (shown in figure REF ). However, its "ACD-parity transition system" has only 3 states, less than the "composition" \(\mathcal {Z}_{\mathcal {F}_1} \lhd \mathcal {A}\) (figure REF ), as shown in figure REF .
<FIGURE>[Correctness]
Let \((V, E, \mathit {Source}, \mathit {Target}, I_0, \mathcal {F})\) be a "Muller transition system" and \({\mathcal {P}_{\mathcal {ACD}(T)}}=(V_P, E_P, \mathit {Source}_P, \mathit {Target}_P, I_0^{\prime }, p:E_P\rightarrow \mathbb {N} )\) its "ACD-transition system". Then, there exists a "locally bijective morphism" \( \varphi : {\mathcal {P}_{\mathcal {ACD}(T)}} \rightarrow T\) . Moreover, if \(T\) is a "labelled transition system", then \(\varphi \) is a "morphism of labelled transition systems".
We define \(\varphi _V : V_P \rightarrow V\) by \(\varphi _V((q,i,\beta ))=q\) and \(\varphi _E : E_P \rightarrow E\) by \(\varphi _E(e_{i,\beta })=e\) .
It is clear that this map preserves edges, initial states and labels. It is also clear that it is "locally bijective", since we have defined one initial state in \({\mathcal {P}_{\mathcal {ACD}(T)}}\) for each initial state in \(T\) , and by definition the edges in \(\mathit {Out}((q,i,\beta ))\) are in bijection with \(\mathit {Out}(q)\) . It induces therefore a bijection between the runs of the transition systems (fact \ref {Fact_LocBijMorph_BijectionRuns}). Let us see that a "run" \(\varrho \) in \(T\) is accepted if and only if \(\varphi ^{-1}(\varrho )\) is accepted in \({\mathcal {P}_{\mathcal {ACD}(T)}}\) . First, we remark that any infinite run \(\varrho \) of \(T\) will eventually stay in a "loop" \(l\in {\mathpzc {Loop}}(T)\) such that \(\mathit {Inf}(\varrho )=l\) , and therefore we will eventually only visit states corresponding to the tree \(t_i\) such that \(l\subseteq \nu _i(\varepsilon )\) in the "alternating cycle decomposition". Let \(p_{\min }\) be the smallest priority produced infinitely often in the run \(\varphi ^{-1}(\varrho )\) in \(\mathcal {P}_{\mathcal {ACD}(T)}\) . As in the proof of proposition REF , there is a unique node \(\tau _p\) in \(t_i\) visited infinitely often such that \({p_i}(\tau _p)=p_{\min }\) . Moreover, the states visited infinitely often in \(\mathcal {P}_{\mathcal {ACD}(T)}\) correspond to branches below \(\tau _p\) , that is, they are of the form
\( (q,i,\beta )\) , with \(\beta \in {\mathit {Subtree}}_{t_q}(\tau _p), \text{ for } q\in {\mathit {States}}_i(\tau _p)\) .
We claim that \(\tau _p\) verifies:
\(l\subseteq \nu _i(\tau _p)\) .
\(l\nsubseteq \nu _i(\sigma )\) for every "child" \(\sigma \) of \(\tau _p\) .
By definition of \({\mathcal {ACD}}(T)\) this implies
\( l\in \mathcal {F}\; \Longleftrightarrow \; \nu _i(\tau _p)\in \mathcal {F}\; \Leftrightarrow \; p_{\min } \text{ is even.}\)
We show that \(l\subseteq \nu _i(\tau _p)\) . For every edge \(e\notin \nu _i(\tau _p)\) of "index" \(i\) and for every branch \(\beta \in {\mathit {Subtree}}_{t_q}(\tau _p), \text{ for } q\in {\mathit {States}}_i(\tau _p)\) , we have that \(\tau ^{\prime }={\mathit {Supp}}(\beta ,i,e)\) is a strict "ancestor" of \(\tau _p\) in \(t_i\) . Therefore, if \(l\) was not contained in \(\nu _i(\tau _p)\) we would produce infinitely often priorities strictly smaller than \(p_{\min }\) .
Finally, we show that \(l\nsubseteq \nu _i(\sigma )\) for every "child" \(\sigma \) of \(\tau _p\) . Since we reach \(\tau _p\) infinitely often, we take transitions \(e_{i,\beta }\) such that \(\tau _p={\mathit {Supp}}(\beta ,i,e)\) infinitely often. Let us reason by contradiction and let us suppose that there is some child \(\sigma \) of \(\tau _p\) such that \(l\subseteq \nu _i(\sigma )\) . Then for each edge \(e\in l\) , \(\mathit {Target}(e)\in {\mathit {States}}_i(\sigma )\) , and therefore \(\sigma \in t_q\) for all \(q\in {\mathit {States}}(l)\) and for each transition \(e_{i,\beta }\) such that \(\tau _p={\mathit {Supp}}(\beta ,i,e)\) , some branches passing through \(\sigma \) are considered as destinations. Eventually, we will go to some state \((q,i,\beta ^{\prime })\) , for some branch \(\beta ^{\prime }\in {\mathit {Subtree}}_{t_q}(\sigma )\) . But since \(l\subseteq \nu _i(\sigma )\) , then for every edge \(e\in l\) and branch \(\beta ^{\prime }\in {\mathit {Subtree}}_{t_q}(\sigma )\) it is verified that \({\mathit {Supp}}(\beta ^{\prime },i,e)\) is a "descendant" of \(\sigma \) , so we would not visit again \(\tau _p\) and all priorities produced infinitely often would be strictly greater than \(p_{\min }\) .
From the remarks at the end of section REF , we obtain:
If \(\mathcal {A}\) is a "Muller automaton" over \(\Sigma \) , the automaton \({\mathcal {P}_{\mathcal {ACD}(\mathcal {A})}}\) is a "parity automaton" recognizing \(\mathcal {L}(\mathcal {A})\) . Moreover,
\(\mathcal {A}\) is "deterministic" if and only if \(\mathcal {P}_{\mathcal {ACD}(\mathcal {A})}\) is deterministic.
\(\mathcal {A}\) is "unambiguous" if and only if \(\mathcal {P}_{\mathcal {ACD}(\mathcal {A})}\) is unambiguous.
\(\mathcal {A}\) is "GFG" if and only if \(\mathcal {P}_{\mathcal {ACD}(\mathcal {A})}\) is GFG.
If \(\mathcal {G}\) is a "Muller game", then \(\mathcal {P}_{\mathcal {ACD}(\mathcal {G})}\) is a "parity game" that has the same winner as \(\mathcal {G}\) .
The "winning region" of \(\mathcal {G}\) for a player \(P\in \lbrace Eve, Adam\rbrace \) is \({\mathcal {W}_P}(\mathcal {G})=\varphi ("\mathcal {W}_P"(\mathcal {P}_{\mathcal {ACD}(\mathcal {G})}))\) , where \(\varphi \) is the morphism from the proof of proposition REF .
Optimality of the alternating cycle decomposition transformation
In this section we prove the strong optimality of the "alternating cycle decomposition transformation", both for number of priorities (proposition REF ) and for size (theorem REF ). We use the same ideas as for proving the optimality of the "Zielonka tree automaton" in section REF .
[Optimality of the number of priorities]
Let \(T\) be a "Muller transition system" such that all its states are "accessible" and let \({\mathcal {P}_{\mathcal {ACD}(T)}}\) be its "ACD-transition system". If \(\mathcal {P}\) is another "parity transition system" such that there is a "locally bijective morphism" \(\varphi : \mathcal {P}\rightarrow T\) , then \(\mathcal {P}\) uses at least as many priorities as \({\mathcal {P}_{\mathcal {ACD}(T)}}\) .
We distinguish 3 cases depending on whether \({\mathcal {ACD}}(T)\) is even, odd or ambiguous.
We treat simultaneously the cases \({\mathcal {ACD}}(T)\) even and \({\mathcal {ACD}}(T)\) odd. In these cases, the number \(h\) of priorities used by \({\mathcal {P}_{\mathcal {ACD}(T)}}\) coincides with the maximal "height" of a tree in \({\mathcal {ACD}}(T)\) . Let \(t_i\) be a tree of maximal "height" \(h\) in \({\mathcal {ACD}}(T)\) , \(\beta =\lbrace \tau _1,\dots ,\tau _{h}\rbrace \in {\mathit {Branch}}(t_i)\) a branch of \(t_i\) of maximal length (ordered as \(\tau _1 {\sqsupseteq } \tau _2 {\sqsupseteq } \dots {\sqsupseteq } \tau _{h}=\varepsilon \) ) and \(l_j=\nu _i(\tau _j)\) , \(j=1,\dots , h\) . We fix \(q\in {\mathit {States}}_i(\tau _1)\) , where \(\tau _1\) is the leaf of \(\beta \) , and we write
\(\mathit {Loop}_q(T)=\lbrace w\in {\mathpzc {Run}}_{T,q}\cap E^* \; : \; {\mathit {First}}(w)={\mathit {Last}}(w)=q \rbrace ,\)
and for each \(j=1,\dots , h\) we choose \(w_j \in \mathit {Loop}_q(T)\) such that \({\mathit {App}}(w_j)=l_j\) . Let \(\eta ^{\prime }\) be the maximal priority appearing in \(\mathcal {P}\) . We show as in the proof of proposition REF that for every \(v\in \mathit {Loop}_q(T)\) , the "run" \(\varphi ^{-1}((w_1\dots w_k v)^\omega )\) must produce a priority smaller than or equal to \(\eta ^{\prime }-k+1\) . Taking \(k=h\) , the "run" \(\varphi ^{-1}((w_1\dots w_h)^\omega )\) produces a priority smaller than or equal to \(\eta ^{\prime }-h+1\) , which is even if and only if \({\mathcal {ACD}}(T)\) is even. By lemma REF we can suppose that \(\mathcal {P}\) uses all priorities in \([\eta ^{\prime }-h+1, \eta ^{\prime }]\) . We conclude that \(\mathcal {P}\) uses at least \(h\) priorities, so at least as many as \({\mathcal {P}_{\mathcal {ACD}(T)}}\) .
In the case \({\mathcal {ACD}}(T)\) ambiguous, if \(h\) is the maximal "height" of a tree in \({\mathcal {ACD}}(T)\) , then \({\mathcal {P}_{\mathcal {ACD}(T)}}\) uses \(h+1\) priorities. We can repeat the previous argument with two different maximal branches of respective maximal even and odd trees. We conclude that \(\mathcal {P}\) uses at least the priorities in a range \([\mu ,\mu +h]\cup [\eta ,\eta +h]\) , with \(\mu \) even and \(\eta \) odd, so it uses at least \(h+1\) priorities.
A similar proof, or an application of the results from [10]} gives the following result:
If \(\mathcal {A}\) is a deterministic automaton, the accessible part of \({\mathcal {P}_{\mathcal {ACD}(\mathcal {A})}}\) uses the optimal number of priorities to recognize \(\mathcal {L}(\mathcal {A})\) .
Finally, we state and prove the optimality of \({\mathcal {P}_{\mathcal {ACD}(\mathcal {A})}}\) for size.
[Optimality of the number of states]
Let \(T\) be a (possibly "labelled") "Muller transition system" such that all its states are "accessible" and let \({\mathcal {P}_{\mathcal {ACD}(T)}}\) be its "ACD-transition system". If \(\mathcal {P}\) is another "parity transition system" such that there is a "locally bijective morphism" \(\varphi : \mathcal {P}\rightarrow T\) , then
\( |"\mathcal {P}_{\mathcal {ACD}(T)}"|\le |\mathcal {P}| \) .
Proof of theorem REF
We follow the same steps as for proving theorem REF . We will suppose that all states of the "transition systems" considered are "accessible".
Let \(T_1, T_2\) be "transition systems" such that there is a "morphism of transition systems" \(\varphi : T_1 \rightarrow T_2\) . Let \(l\in {\mathpzc {Loop}}(T_2)\) be a "loop" in \(T_2\) . An ""\(l\) -SCC"" of \(T_1\) (with respect to \(\varphi \) ) is a non-empty "strongly connected subgraph" \((V_l,E_l)\) of the subgraph \((\varphi _V^{-1}({\mathit {States}}(l)),\varphi _E^{-1}(l) )\) such that
\( \text{for every } q_1\in V_l \text{ and every } e_2\in "\mathit {Out}"(\varphi (q_1))\cap l \text{ there is an edge } e_1\in \varphi ^{-1}(e_2)\cap "\mathit {Out}"(q_1) \text{ such that } e_1\in E_l.\)
That is, an \(l\) -SCC is a "strongly connected subgraph" of \(T_1\) in which all states and transitions correspond via \(\varphi \) to states and transitions appearing in the "loop" \(l\) . Moreover, given a "run" staying in \(l\) in \(T_2\) we can simulate it in the \(l\) -SCC of \(T_1\) (property (REF )).
Let \(T_1\) and \(T_2\) be two "transition systems" such that there is a "locally surjective" "morphism" \(\varphi : T_1 \rightarrow T_2\) . Let \(l\in "\mathpzc {Loop}"(T_2)\) and \(C_l=(V_l,E_l)\) be a non-empty "\(l\) -SCC" in \(T_1\) . Then, for every "loop" \(l^{\prime }\in "\mathpzc {Loop}"(T_2)\) such that \(l^{\prime }\subseteq l\) there is a non-empty "\(l^{\prime }\) -SCC" in \(C_l\) .
Let \((V^{\prime },E^{\prime })=(V_l,E_l)\cap (\varphi _V^{-1}({\mathit {States}}(l^{\prime })),\varphi _E^{-1}(l^{\prime }))\) . We first prove that \((V^{\prime },E^{\prime })\) is non-empty. Let \(q_1\in V_l \subseteq \varphi _V^{-1}({\mathit {States}}(l))\) . Let \(\varrho \in {\mathpzc {Run}}_{T_2,\varphi (q_1)}\) be a finite run in \(T_2\) from \(\varphi (q_1)\) , visiting only edges in \(l\) and ending in \(q_2\in {\mathit {States}}(l^{\prime })\) . From the local surjectivity, we can obtain a run in \(\varphi ^{-1}(\varrho )\) that will stay in \((V_l,E_l)\) and that will end in a state in \(\varphi _V^{-1}({\mathit {States}}(l^{\prime }))\) . The subgraph \((V^{\prime },E^{\prime })\) clearly has property (REF ) (for \(l^{\prime }\) ).
We prove by induction on the size that any non-empty subgraph \((V^{\prime },E^{\prime })\) verifying the property (REF ) (for \(l^{\prime }\) ) admits an \(l^{\prime }\) -SCC. If \(|V^{\prime }|=1\) , then \((V^{\prime },E^{\prime })\) forms by itself a "strongly connected graph". If \(|V^{\prime }|>1\) and \((V^{\prime },E^{\prime })\) is not strongly connected, then there are vertices \(q,q^{\prime }\in V^{\prime }\) such that there is no path from \(q\) to \(q^{\prime }\) following edges in \(E^{\prime }\) . We let
\( V^{\prime }_q=\lbrace p\in V^{\prime } \; : \; \text{there is a path from } q \text{ to } p \text{ in } (V^{\prime },E^{\prime })\rbrace \; ; \; E^{\prime }_q=E^{\prime }\cap "\mathit {Out}"(V^{\prime }_q)\cap "\mathit {In}"(V^{\prime }_q) .\)
Since \(q^{\prime }\notin V^{\prime }_q\) , the size \(|V^{\prime }_q|\) is strictly smaller than \(|V^{\prime }|\) .
Also, the subgraph \((V^{\prime }_q,E^{\prime }_q)\) is non-empty since \(q\in V^{\prime }_q\) .
The property (REF ) holds from the definition of \((V^{\prime }_q,E^{\prime }_q)\) . We conclude by induction hypothesis.
Let \(T\) be a "Muller transition system" with acceptance condition \(\mathcal {F}\) and let \(\mathcal {P}\) be a "parity transition system" such that there is a "locally bijective morphism" \(\varphi : \mathcal {P}\rightarrow T\) . Let \(t_i\) be a "proper tree" of \({\mathcal {ACD}}(T)\) and \(\tau ,\sigma _1,\sigma _2\in t_i\) nodes in \(t_i\) such that \(\sigma _1,\sigma _2\) are different "children" of \(\tau \) , and let \(l_1=\nu _i(\sigma _1)\) and \(l_2=\nu _i(\sigma _2)\) . If \(C_1\) and \(C_2\) are an "\(l_1 \) -SCC" and an "\(l_2\) -SCC" in \(\mathcal {P}\) , respectively, then \(C_1\cap C_2= \emptyset \) .
Suppose there is a state \(q\in C_1\cap C_2\) . Since \(\varphi _V(q)\in {\mathit {States}}(l_1)\cap {\mathit {States}}(l_2)\) , and \(l_1, l_2\) are "loops", there are finite "runs" \(\varrho _1,\varrho _2 \in {\mathpzc {Run}}_{\varphi _V(q)}\) such that \({\mathit {App}}(\varrho _1)=l_1\) and \(\mathit {App}(\varrho _2)=l_2\) . We can “simulate” these runs in \(C_1\) and \(C_2\) thanks to property (REF ), producing runs \(\varphi ^{-1}(\varrho _1)\) and \(\varphi ^{-1}(\varrho _2)\) in \({\mathpzc {Run}}_{\mathcal {P},q}\) and arriving at \(q_1="\mathit {Last}"(\varphi ^{-1}(\varrho _1))\) and \(q_2=\mathit {Last}(\varphi ^{-1}(\varrho _2))\) . Since \(C_1, C_2\) are "\(l_1,l_2\) -SCC", there are finite runs \(w_1\in {\mathpzc {Run}}_{\mathcal {P},q_1}\) and \(w_2\in {\mathpzc {Run}}_{\mathcal {P},q_2}\) such that \(\mathit {Last}(w_1)=\mathit {Last}(w_2)=q\) , so the runs \(\varphi ^{-1}(\varrho _1)w_1\) and \(\varphi ^{-1}(\varrho _2)w_2\) start and end in \(q\) . We remark that in \(T\) the runs \(\varphi (\varphi ^{-1}(\varrho _1)w_1)=\varrho _1\varphi _E(w_1)\) and \(\varphi (\varphi ^{-1}(\varrho _2)w_2)=\varrho _2\varphi _E(w_2)\) start and end in \(\varphi _V(q)\) and visit, respectively, all the edges in \(l_1\) and \(l_2\) . From the definition of \({\mathcal {ACD}}(T)\) we have that \(l_1\in \mathcal {F}\; \Leftrightarrow \; l_2\in \mathcal {F}\; \Leftrightarrow \; l_1\cup l_2 \notin \mathcal {F}\) . Since \(\varphi \) preserves the "acceptance condition", the minimal priority produced by \(\varphi ^{-1}(\varrho _1)w_1\) has the same parity as that of \(\varphi ^{-1}(\varrho _2)w_2\) , but concatenating both runs we must produce a minimal priority of the opposite parity, arriving at a contradiction.
Let \(T\) be a "Muller transition system" and \("\mathcal {P}_{\mathcal {ACD}(T)}"\) its "ACD-parity transition system". For each tree \(t_i\) of \({\mathcal {ACD}}(T)\) , each node \(\tau \in t_i\) and each state \(q\in {\mathit {States}}_i(\tau )\) we write:
\( \psi _{\tau ,i,q}=|{\mathit {Branch}}({\mathit {Subtree}}_{t_q}(\tau ))|=|\lbrace (q,i,\beta )\in "\mathcal {P}_{\mathcal {ACD}(T)}" \; : \; \beta \, \text{ passes through } \tau \rbrace |,\)
\(\Psi _{\tau ,i}=\sum \limits _{q\in {\mathit {States}}_i(\tau )}\psi _{\tau ,i,q}=|\lbrace (q,i,\beta )\in "\mathcal {P}_{\mathcal {ACD}(T)}" \; : \;q\in V \text{ of "index" } i \text{ and } \beta \, \text{ passes through } \tau \rbrace | .\)
If we consider the roots of the trees in \({\mathcal {ACD}}(T)\) , then each \(\Psi _{\varepsilon ,i}\) is the number of states in \("\mathcal {P}_{\mathcal {ACD}(T)}"\) associated to the tree \(t_i\) , i.e., \(\Psi _{\varepsilon ,i}=|\lbrace (q,i,\beta )\in "\mathcal {P}_{\mathcal {ACD}(T)}" \; : \; q\in V,\; \beta \in {\mathit {Branch}}("t_q")\rbrace |\) . Therefore
\( |"\mathcal {P}_{\mathcal {ACD}(T)}"|=\sum \limits _{i=0}^{r}\Psi _{\varepsilon ,i} .\)
[Proof of theorem REF]
Let \((V,E,\mathit {Source},\mathit {Target},I_0,\mathcal {F})\) be a "Muller transition system", \("\mathcal {P}_{\mathcal {ACD}(T)}"\) the "ACD-parity transition system" of \(T\) and \(\mathcal {P}=(V^{\prime },E^{\prime },\mathit {Source}^{\prime },\mathit {Target}^{\prime },I_0^{\prime },p^{\prime }:E^{\prime }\rightarrow \mathbb {N} )\) a parity transition system such that there is a "locally bijective morphism" \(\varphi : \mathcal {P}\rightarrow T\) .
First of all, we construct two modified transition systems \(\widetilde{T}=(V,\widetilde{E},\widetilde{\mathit {Source}},\widetilde{\mathit {Target}},I_0,\widetilde{\mathcal {F}})\) and \(\widetilde{\mathcal {P}}=(V^{\prime },\widetilde{E^{\prime }},\widetilde{\mathit {Source}}^{\prime },\widetilde{\mathit {Target}}^{\prime }, I_0^{\prime }, \widetilde{p^{\prime }}:\widetilde{E^{\prime }}\rightarrow \mathbb {N} )\) such that:
1. Each vertex of \(V\) belongs to a "strongly connected component".
2. All leaves \(\tau \in t_i\) verify \(|{\mathit {States}}_i(\tau )|=1\) , for every \(t_i\in {\mathcal {ACD}}(\widetilde{T})\) .
3. All non-leaf nodes \(\tau \in t_i\) verify \({\mathit {States}}_i(\tau )=\bigcup _{\sigma \in \mathit {Children}(\tau )}\mathit {States}_i(\sigma )\) , for every \(t_i\in {\mathcal {ACD}}(\widetilde{T})\) .
4. There is a "locally bijective morphism" \(\widetilde{\varphi }: \widetilde{\mathcal {P}} \rightarrow \widetilde{T}\) .
5. \(|\mathcal {P}_{\mathcal {ACD}(\widetilde{T})}|\le |\widetilde{\mathcal {P}}| \; \Rightarrow \; |\mathcal {P}_{\mathcal {ACD}(T)}|\le |\mathcal {P}|\) .
We define the transition system \(\widetilde{T}\) by adding for each \(q\in V\) two new edges \(e_{q,1}, e_{q,2}\) with \(\widetilde{\mathit {Source}}(e_{q,j})=\widetilde{\mathit {Target}}(e_{q,j})=q\) , for \(j=1,2\) . The modified "acceptance condition" \(\widetilde{\mathcal {F}}\) is given as follows: let \(C\subseteq \widetilde{E}\) .
If \(C\cap E\ne \emptyset \) , then \(C\in \widetilde{\mathcal {F}} \; \Leftrightarrow \; C\cap E \in \mathcal {F}\) (the occurrence of edges \(e_{q,j}\) does not change the "acceptance condition").
If \(C\cap E = \emptyset \) and there are edges of the form \(e_{q,1}\) in \(C\) , for some \(q\in V\) , then \(C\in \widetilde{\mathcal {F}}\) . If all edges of \(C\) are of the form \(e_{q,2}\) , then \(C\notin \widetilde{\mathcal {F}}\) .
It is easy to verify that the "transition system" \(\widetilde{T}\) and \({\mathcal {ACD}}(\widetilde{T})\) verify conditions 1, 2 and 3. We perform equivalent operations in \(\mathcal {P}\) , obtaining \(\widetilde{\mathcal {P}}\) : we add a pair of edges \(e_{q,1}, e_{q,2}\) for each vertex in \(\mathcal {P}\) , and we assign them priorities \(\widetilde{p}(e_{q,1})=\eta +\epsilon \) and \(\widetilde{p}(e_{q,2})=\eta +\epsilon +1\) , where \(\eta \) is the maximum of the priorities in \(\mathcal {P}\) and \(\epsilon =0\) if \(\eta \) is even, and \(\epsilon =1\) if \(\eta \) is odd.
We extend the "morphism" \(\varphi \) to \(\widetilde{\varphi }: \widetilde{\mathcal {P}} \rightarrow \widetilde{T}\) conserving the "local bijectivity" by setting \(\widetilde{\varphi }_E(e_{q,j})=e_{\varphi (q),j}\) for \(j=1,2\) . Finally, it is not difficult to verify that the underlying graphs of \(\mathcal {P}_{\mathcal {ACD}(\widetilde{T})}\) and \(\widetilde{\mathcal {P}_{\mathcal {ACD}(T)}}\) are equal (the only differences are the priorities associated to the edges \(e_{q,j}\) ), so in particular \(|\mathcal {P}_{\mathcal {ACD}(\widetilde{T})}|=|\widetilde{\mathcal {P}_{\mathcal {ACD}(T)}}|=|\mathcal {P}_{\mathcal {ACD}(T)}|\) . Consequently, \(|\mathcal {P}_{\mathcal {ACD}(\widetilde{T})}|\le |\widetilde{\mathcal {P}}|\) implies \(|\mathcal {P}_{\mathcal {ACD}(T)}|\le |\widetilde{\mathcal {P}}|=|\mathcal {P}|\) .
Therefore, it suffices to prove the theorem for the modified systems \(\widetilde{T}\) and \(\widetilde{\mathcal {P}}\) . From now on, we suppose that \(T\) verifies the conditions 1, 2 and 3 above. In particular, all trees are "proper trees" in \({\mathcal {ACD}}(T)\) . It also holds that for each \(q\in V\) and each \(\tau \in t_i\) that is not a leaf, \(\psi _{\tau ,i,q}=\sum \limits _{\sigma \in \mathit {Children}(\tau )}\psi _{\sigma ,i,q}\) . Therefore, for each \(\tau \in t_i\) that is not a leaf, \(\Psi _{\tau ,i}=\sum \limits _{\sigma \in \mathit {Children}(\tau )}\Psi _{\sigma ,i}\) , and for each leaf \(\sigma \in t_i\) we have \(\Psi _{\sigma ,i}=1\) .
Vertices of \(V^{\prime }\) are partitioned into the preimages by \(\varphi \) of the roots of the trees \(\lbrace t_1,\dots ,t_r\rbrace \) of \({\mathcal {ACD}}(T)\) :
\( V^{\prime }= \bigcup \limits _{i=1}^r\varphi _V^{-1}( {\mathit {States}}_i(\varepsilon )) \quad \text{ and } \quad \varphi _V^{-1}( {\mathit {States}}_i(\varepsilon )) \cap \varphi _V^{-1}( {\mathit {States}}_j(\varepsilon ))=\emptyset \text{ for } i\ne j .\)
Claim: for each \(i=1,\dots ,r\) and each \(\tau \in t_i\) , if \(C_\tau \) is a non-empty "\(\nu _i(\tau )\) -SCC", then \(|C_\tau |\ge \Psi _{\tau ,i}\) .
Let us suppose this claim holds. In particular \((\varphi _V^{-1}( {\mathit {States}}_i(\varepsilon )),\varphi _E^{-1}( \nu _i(\varepsilon )))\) verifies the property (REF ) from definition REF , so from the proof of lemma REF we deduce that it contains a \(\nu _i(\varepsilon )\) -SCC and therefore \(|\varphi _V^{-1}( {\mathit {States}}_i(\varepsilon ))| \ge \Psi _{\varepsilon ,i}\) , so
\(|\mathcal {P}|=\sum \limits _{i=1}^r |\varphi _V^{-1}( {\mathit {States}}_i(\varepsilon ))|\ge \sum \limits _{i=1}^r \Psi _{\varepsilon ,i}=|"\mathcal {P}_{\mathcal {ACD}(T)}"|,\)
concluding the proof.
[Proof of the claim]
Let \(C_\tau \) be a "\(\nu _i(\tau )\) -SCC". Let us prove \(|C_\tau |\ge \Psi _{\tau ,i}\) by induction on the "height of the node" \(\tau \) . If \(\tau \) is a leaf (in particular if its height is 1), \(\Psi _{\tau ,i}=1\) and the claim is clear.
If \(\tau \) of height \(h>1\) is not a leaf, then it has children \(\sigma _1,\dots , \sigma _k\) , all of them of height \(h-1\) . Thanks to lemmas REF and REF , for \(j=1,\dots ,k\) , there exist disjoint "\(\nu _i(\sigma _j)\) -SCC" included in \(C_\tau \) , named \(C_1,\dots ,C_k\) , so by induction hypothesis
\( |C_\tau | \ge \sum \limits _{j=1}^k |C_j| \ge \sum \limits _{j=1}^k \Psi _{\sigma _j,i}= \Psi _{\tau ,i}. \)
From the hypothesis of theorem REF we cannot deduce that there is a "morphism" from \(\mathcal {P}\) to \("\mathcal {P}_{\mathcal {ACD}(T)}"\) or vice versa. To produce a counter-example it is enough to recall the “non-determinism” in the construction of \(\mathcal {P}_{\mathcal {ACD}(T)}\) : two different orderings of the nodes of the trees of \({\mathcal {ACD}}(T)\) will produce two incomparable, but minimal in size, parity transition systems that admit a "locally bijective morphism" to \(T\) .
However, we can prove the following result:
If \(\varphi _1: "\mathcal {P}_{\mathcal {ACD}(T)}" \rightarrow T\) is the "locally bijective morphism" described in the proof of proposition \ref {Prop_Correctness-ACD}, then for every state \(q\) in \(T\) of "index" \(i\) :
\( |\varphi _1^{-1}(q)|=\psi _{\varepsilon ,i,q}\le |\varphi ^{-1}(q)| \;, \; \text{ for every "locally bijective morphism" } \varphi : \mathcal {P}\rightarrow T. \)
It is enough to remark that if \(q \in {\mathit {States}}_i(\tau )\) , then any "\(\nu _i(\tau )\) -SCC" \(C_\tau \) of \(\mathcal {P}\) will contain some state in \(\varphi ^{-1}(q)\) . We prove by induction as in the proof of the claim that \(\psi _{\tau ,i,q} \le |C_\tau \cap \varphi ^{-1}(q)|\) .
Applications
Determinisation of Büchi automata
In many applications, such as the synthesis of reactive systems for \(LTL\) -formulas, we need "deterministic" automata. For this reason, the determinisation of automata is usually a crucial step. Since McNaughton showed in [11]} that Büchi automata can be transformed into deterministic Muller automata recognizing the same language, much effort has been put into performing this transformation efficiently. The first efficient solution was proposed by Safra in [12]}, producing a deterministic automaton using a "Rabin condition". Due to the many advantages of "parity conditions" (simplicity, easy complementation of automata, existence of memoryless strategies for games, closure under union and intersection...), determinisation constructions towards parity automata have been proposed too. In [13]}, Piterman provides a construction producing a parity automaton that in addition improves the state-complexity of Safra's construction. In [14]}, Schewe breaks down Piterman's construction into two steps: the first one goes from a non-deterministic Büchi automaton \(\mathcal {B}\) to a "Rabin automaton" (\("\mathcal {R}_\mathcal {B}"\) ), and the second one gives Piterman's parity automaton (\("\mathcal {P}_\mathcal {B}"\) ).
In this section we prove that there is a "locally bijective morphism" from \("\mathcal {P}_\mathcal {B}"\) to \("\mathcal {R}_\mathcal {B}"\) , and therefore we would obtain a smaller parity automaton applying the "ACD-transformation" in the second step. We provide an example (example REF ) in which the "ACD-transformation" provides a strictly better parity automaton.
From non-deterministic Büchi to deterministic Rabin automata
In [14]}, Schewe presents a construction of a deterministic "Rabin automaton" \(""\mathcal {R}_\mathcal {B}""\) from a non-deterministic "Büchi automaton" \(\mathcal {B}\) . The set of states of the automaton \(\mathcal {R}_\mathcal {B}\) is formed of what he calls ""history trees"". The number of history trees for a Büchi automaton of size \(n\) is given by the function \(\mathit {hist}(n)\) , that is shown to be in \(o((1.65n)^n)\) in [14]}. This construction is presented starting from a state-labelled Büchi automaton. A construction starting from a transition-labelled Büchi automaton can be found in [17]}. In [18]}, Colcombet and Zdanowski proved the worst-case optimality of the construction.
[[14]}]
Given a non-deterministic "Büchi automaton" \(\mathcal {B}\) with \(n\) states, there is an effective construction of a deterministic "Rabin automaton" \(\mathcal {R}_\mathcal {B}\) with \(\mathit {hist}(n)\) states and using \(2^{n-1}\) Rabin pairs that recognizes the language \("\mathcal {L}(\mathcal {B})"\) .
[[18]}]
For every \(n\in \mathbb {N} \) there exists a non-deterministic Büchi automaton \(B_n\) of size \(n\) such that every deterministic Rabin automaton recognizing \(\mathcal {L}(\mathcal {B}_n)\) has at least \(\mathit {hist}(n)\) states.
From non-deterministic Büchi to deterministic parity automata
In order to build a deterministic "parity automaton" \(""\mathcal {P}_\mathcal {B}""\) that recognizes the language of a given "Büchi automaton" \(\mathcal {B}\) , Schewe transforms the automaton \("\mathcal {R}_\mathcal {B}"\) into a parity one using what he calls a later introduction record (LIR). The LIR construction can be seen as adding an ordering (satisfying some restrictions) to the nodes of the "history trees". States of \(\mathcal {P}_\mathcal {B}\) are therefore pairs consisting of a history tree and a LIR. In this way we obtain a parity automaton similar to the one produced by Piterman's determinisation procedure [13]}.
The worst-case optimality of this construction was proved in [22]}, [17]}, generalising the methods of [18]}.
[[14]}]
Given a non-deterministic "Büchi automaton" \(\mathcal {B}\) with \(n\) states, there is an effective construction of a deterministic "parity automaton" \("\mathcal {P}_\mathcal {B}"\) with \(O(n!(n-1)!)\) states and using \(2n\) priorities that recognizes the language \("\mathcal {L}(\mathcal {B})"\) .
[[22]}, [17]}]
For every \(n\in \mathbb {N} \) there exists a non-deterministic Büchi automaton \(\mathcal {B}_n\) of size \(n\) such that \("\mathcal {P}_\mathcal {B}"\) has fewer than \(1.5\) times as many states as a minimal deterministic parity automaton recognizing \(\mathcal {L}(\mathcal {B}_n)\) .
A locally bijective morphism from \(\mathcal {P}_\mathcal {B}\) to \(\mathcal {R}_\mathcal {B}\)
Given a "Büchi automaton" \(\mathcal {B}\) and its determinisations to Rabin and parity automata \("\mathcal {R}_\mathcal {B}"\) and \("\mathcal {P}_\mathcal {B}"\) , there is a "locally bijective morphism" \(\varphi : \mathcal {P}_\mathcal {B}\rightarrow \mathcal {R}_\mathcal {B}\) .
Observing the construction of \(\mathcal {R}_\mathcal {B}\) and \(\mathcal {P}_\mathcal {B}\) in [14]}, we see that the states of \(\mathcal {P}_\mathcal {B}\) are of the form \((T,\chi )\) with \(T\) a state of \(\mathcal {R}_\mathcal {B}\) (a "history tree"), and \(\chi : T \rightarrow \lbrace 1,\dots ,|\mathcal {B}|\rbrace \) a LIR (that can be seen as an ordering of the nodes of \(T\) ).
It is easy to verify that the mapping \(\varphi _V((T,\chi ))=T\) defines a morphism \(\varphi : \mathcal {P}_\mathcal {B}\rightarrow \mathcal {R}_\mathcal {B}\) (from fact REF there is only one possible definition of \(\varphi _E\) ). Since the automata are deterministic, \(\varphi \) is a "locally bijective morphism".
Let \(\mathcal {B}\) be a "Büchi automaton" and \("\mathcal {R}_\mathcal {B}"\) , \("\mathcal {P}_\mathcal {B}"\) the deterministic Rabin and parity automata obtained by applying the Piterman-Schewe construction to \(\mathcal {B}\) . Then, the parity automaton \(\mathcal {P}_{\mathcal {ACD}(\mathcal {R}_B)}\) verifies
\( |\mathcal {P}_{\mathcal {ACD}(\mathcal {R}_B)}| \le |\mathcal {P}_\mathcal {B}| \)
and \("\mathcal {P}_{\mathcal {ACD}(\mathcal {R}_B)}"\) uses a smaller number of priorities than \("\mathcal {P}_\mathcal {B}"\) .
It is a direct consequence of propositions REF , REF and theorem REF .
Furthermore, after proposition REF , \("\mathcal {P}_{\mathcal {ACD}(\mathcal {R}_B)}"\) uses the optimal number of priorities to recognize \("\mathcal {L}(\mathcal {B})"\) , and we directly obtain this information from the "alternating cycle decomposition" of \(\mathcal {R}_\mathcal {B}\) , \({\mathcal {ACD}}(\mathcal {R}_\mathcal {B})\) .
In the "example REF " we show a case in which \(|\mathcal {P}_{\mathcal {ACD}(\mathcal {R}_B)}| < |\mathcal {P}_\mathcal {B}|\) and for which the gain in the number of priorities is clear.
In [18]} and [17]}, the lower bounds for the determinisation of "Büchi automata" to Rabin and parity automata were shown using the family of ""full Büchi automata"", \(\lbrace \mathcal {B}_n\rbrace _{n\in \mathbb {N} }\) , \(|\mathcal {B}_n|=n\) . The automaton \(\mathcal {B}_n\) can simulate any other Büchi automaton of the same size. For these automata, the constructions \(\mathcal {P}_{\mathcal {B}_n}\) and \(\mathcal {P}_{\mathcal {ACD}(\mathcal {R}_{B_n})}\) coincide.
We present a non-deterministic Büchi automaton \(\mathcal {B}\) such that the "ACD-parity automaton" of \("\mathcal {R}_\mathcal {B}"\) has strictly less states and uses strictly less priorities than \("\mathcal {P}_\mathcal {B}"\) .
In figure REF we show the automaton \(\mathcal {B}\) over the alphabet \(\Sigma =\lbrace a,b,c\rbrace \) . Accepting transitions for the "Büchi condition" are represented with a black dot on them. An accessible "strongly connected component" \(\mathcal {R}_\mathcal {B}^{\prime }\) of the determinisation to a "Rabin automaton" \("\mathcal {R}_\mathcal {B}"\) is shown in figure REF . It has 2 states that are "history trees" (as defined in [14]}). There is a "Rabin pair" \((E_\tau ,F_\tau )\) for each node appearing in some "history tree" (four in total), and these are represented by an array with four positions. We assign to each transition and each position \(\tau \) in the array the symbol \(\checkmark \) , \(\mathbf {X}\) , or \(\bullet \) depending on whether the transition belongs to \(E_\tau \) , to \(F_\tau \) , or to neither of them, respectively (we can always suppose \(E_\tau \cap F_\tau = \emptyset \) ).
In figure REF there is the "alternating cycle decomposition" corresponding to \(\mathcal {R}_\mathcal {B}^{\prime }\) . We observe that the tree of \({\mathcal {ACD}}(\mathcal {R}_\mathcal {B}^{\prime })\) has a single branch of height 3.
That is, the Rabin condition over \(\mathcal {R}_\mathcal {B}^{\prime }\) is already a "\([1,3]\) -parity condition" and \(\mathcal {P}_{\mathcal {ACD}(\mathcal {R}_B^{\prime })}=\mathcal {R}_\mathcal {B}^{\prime }\) . In particular, it has 2 states and uses priorities in \([1,3]\) .
On the other hand, in figure REF we show the automaton \(\mathcal {P}_\mathcal {B}^{\prime }\) , which has 3 states and uses priorities in \([3,7]\) . The full automata \(\mathcal {R}_\mathcal {B}\) and \(\mathcal {P}_\mathcal {B}\) are too big to be depicted here, but the three states shown in figure REF are indeed accessible from the initial state of \(\mathcal {P}_\mathcal {B}\) .
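Evaluating the Rabin pairs of such an automaton on a run is mechanical: a run with \(\mathit {Inf}(\varrho )\) as its set of infinitely repeated edges is accepted if and only if some pair \((E_\tau ,F_\tau )\) intersects \(\mathit {Inf}(\varrho )\) in \(E_\tau \) but not in \(F_\tau \) . A sketch with made-up edge names and pairs (not those of the figures):

```python
def rabin_accepts(inf_edges, pairs):
    """inf_edges: set of edges visited infinitely often by the run.
    pairs: list of Rabin pairs (E, F), each a set of edges.
    Accept iff some pair sees E infinitely often but never F."""
    return any(inf_edges & E and not (inf_edges & F) for E, F in pairs)

pairs = [({"e1"}, {"e3"}), ({"e2"}, set())]
assert rabin_accepts({"e2", "e3"}, pairs)   # the second pair fires
assert not rabin_accepts({"e3"}, pairs)     # no pair fires
```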
<FIGURE><FIGURE><FIGURE><FIGURE>
On relabelling of transition systems by acceptance conditions
In this section we use the information given by the "alternating cycle decomposition" to provide characterisations of "transition systems" that can be labelled with "parity", "Rabin", "Streett" or \("\mathit {Weak}_k"\) conditions, generalising the results of [32]}.
As a consequence, these yield simple proofs of two results about the possibility to define different classes of acceptance conditions in a deterministic automaton. Theorem REF , first proven in [33]}, asserts that if we can define a Rabin and a Streett condition on top of an underlying automaton \(\mathcal {A}\) such that it recognizes the same language \(L\) with both conditions, then we can define a parity condition in \(\mathcal {A}\) recognizing \(L\) too. Theorem REF states that if we can define Büchi and co-Büchi conditions on top of an automaton \(\mathcal {A}\) recognizing the language \(L\) , then we can define a \("\mathit {Weak}"\) condition over \(\mathcal {A}\) such that it recognizes \(L\) .
First, we extend the definition REF of section REF to the "alternating cycle decomposition".
Given a Muller transition system \((V,E,\mathit {Source},\mathit {Target},I_0,\mathcal {F})\) , we say that its "alternating cycle decomposition" \({\mathcal {ACD}}(\) is a
""Rabin ACD"" if for every state \(q\in V\) , the tree \("t_q"\) has "Rabin shape".
""Streett ACD"" if for every state \(q\in V\) , the tree \("t_q"\) has "Streett shape".
""parity ACD"" if for every state \(q\in V\) , the tree \("t_q"\) has "parity shape".
""\([1,\eta ]\) -parity ACD"" (resp. \([0,\eta -1]\) -parity ACD) if it is a parity ACD, every tree has "height" at most \(\eta \) and trees of height \(\eta \) are (ACD)odd (resp. (ACD)even).
""Büchi ACD"" if it is a \([0,1]\) -parity ACD.
""co-Büchi ACD"" if it is a \([1,2]\) -parity ACD.
""\(\mathit {Weak}_k\) ACD"" if it is a parity ACD and every tree \((t_i,\nu _i) \in {\mathcal {ACD}}(\) has "height" at most \(k\) .
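These shape conditions can be checked by a simple traversal. A sketch, assuming the convention suggested by the proofs below: in a Rabin shape no round (even-priority) node branches, in a Streett shape no square node branches, and in a parity shape no node branches at all, so the tree is a single branch. The encoding of trees as nested lists is hypothetical:

```python
def shapes(tree, even=True):
    """tree: a node given as the list of its children; `even` says
    whether this node is round (even priority); levels alternate.
    Returns the three flags (rabin, streett, parity) for the shape."""
    rabin = streett = parity = True
    if len(tree) > 1:        # this node branches
        parity = False
        if even:
            rabin = False    # a round node branches
        else:
            streett = False  # a square node branches
    for child in tree:
        r, s, p = shapes(child, not even)
        rabin, streett, parity = rabin and r, streett and s, parity and p
    return rabin, streett, parity

# A single branch has all three shapes; a branching round node
# breaks Rabin (and parity) shape:
assert shapes([[]]) == (True, True, True)
assert shapes([[], []], even=True) == (False, True, False)
```

Note that, consistently with the proposition that follows, a tree has parity shape exactly when it has both Rabin and Streett shape.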
The next proposition follows directly from the definitions.
Let \(T\) be a Muller transition system. Then:
\({\mathcal {ACD}}(T)\) is a "parity ACD" if and only if it is a "Rabin ACD" and a "Streett ACD".
\({\mathcal {ACD}}(T)\) is a "\(\mathit {Weak}_k\) ACD" if and only if it is a "\([0,k]\) -parity ACD" and a "\([1,k+1]\) -parity ACD".
Let \((V,E,\mathit {Source},\mathit {Target},I_0,\mathcal {F})\) be a Muller transition system. The following conditions are equivalent:
We can define a "Rabin condition" over \(T\) that is "equivalent to" \(\mathcal {F}\) over \(T\) .
For every pair of loops \(l_1,l_2\in {\mathpzc {Loop}}(\) , if \(l_1\notin \mathcal {F}\) and \(l_2\notin \mathcal {F}\) , then \(l_1\cup l_2 \notin \mathcal {F}\) .
\({\mathcal {ACD}}(T)\) is a "Rabin ACD".
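When the Muller condition is given explicitly, condition (2) can be tested by brute force over pairs of loops. A sketch, assuming loops are encoded as frozensets of edges and that the set `loops` already contains every union of loops that is again a loop (all names hypothetical):

```python
from itertools import combinations

def union_of_rejecting_is_rejecting(loops, accepting):
    """loops: collection of frozensets of edges; accepting: the
    family F, as a set of frozensets.  Checks condition (2): the
    union of two rejecting loops, when it is a loop, is rejecting."""
    loops = set(loops)
    for l1, l2 in combinations(loops, 2):
        union = l1 | l2
        if (union in loops
                and l1 not in accepting and l2 not in accepting
                and union in accepting):
            return False
    return True

l1, l2 = frozenset({"a"}), frozenset({"b"})
# {a} and {b} rejecting but their union accepting: no Rabin condition.
assert not union_of_rejecting_is_rejecting({l1, l2, l1 | l2}, {l1 | l2})
assert union_of_rejecting_is_rejecting({l1, l2, l1 | l2}, {l1})
```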
(\(1 \Rightarrow 2\) )
Suppose that \(T\) uses a Rabin condition with Rabin pairs \((E_1,F_1),\dots ,(E_r,F_r)\) . Let \(l_1\) and \(l_2\) be two rejecting loops. If \(l_1\cup l_2\) was accepting, then there would be some Rabin pair \((E_j,F_j)\) and some edge \(e\in l_1\cup l_2\) such that \(e\in E_j\) and \(e\notin F_j\) . However, the edge \(e\) belongs to \(l_1\) or to \(l_2\) , and the loop it belongs to would be accepting too.
(\(2 \Rightarrow 3\) )
Let \(q\in V\) of "index" \(i\) , and \("t_q"\) be the subtree of \({\mathcal {ACD}}(\) associated to \(q\) . Suppose that there is a node \(\tau \in t_q\) such that \((node){p_i}(\tau )\) is even ("round" node) and that it has two different children \(\sigma _1\) and \(\sigma _2\) . The "loops" \(\nu _i(\sigma _1)\) and \(\nu _i(\sigma _2)\) are maximal rejecting loops contained in \(\nu _i(\tau )\) , and since they share the state \(q\) , their union is also a loop that must verify \( \nu _i(\sigma _1) \cup \nu _i(\sigma _2)\in \mathcal {F}\) , contradicting the hypothesis.
(\(3 \Rightarrow 1\) )
We define a "Rabin condition" over \(T\) . For each tree \(t_i\) in \({\mathcal {ACD}}(T)\) and each "round" node \(\tau \in t_i\) (\(p_i(\tau )\) even) we define the Rabin pair \((E_{i,\tau },F_{i,\tau })\) given by:
\( E_{i,\tau }=\nu _i(\tau )\setminus \bigcup _{\sigma \in "\mathit {Children}"(\tau )}\nu _i(\sigma ) \quad , \qquad F_{i,\tau }=E \setminus \nu _i(\tau ). \)
Let us show that this condition is "equivalent to" \(\mathcal {F}\) over the transition system \(T\) . We begin by proving the following consequence of being a "Rabin ACD":
If \(\tau \) is a "round" node in the tree \(t_i\) of \({\mathcal {ACD}}(T)\) , and \(l\in {\mathpzc {Loop}}(T)\) is a "loop" such that \(l\subseteq \nu _i(\tau )\) and \(l\nsubseteq \nu _i(\sigma )\) for any child \(\sigma \) of \(\tau \) , then there is some edge \(e\in l\) such that \(e\notin \nu _i(\sigma )\) for any child \(\sigma \) of \(\tau \) .
[Proof of the claim]
Since for each state \(q\in V\) the tree \("t_q"\) has "Rabin shape", it is verified that \({\mathit {States}}_i(\sigma )\cap {\mathit {States}}_i(\sigma ^{\prime })=\emptyset \) for every pair of different children \(\sigma , \sigma ^{\prime }\) of \(\tau \) . Therefore, the union of \(\nu _i(\sigma )\) and \(\nu _i(\sigma ^{\prime })\) is not a loop, and any loop \(l\) contained in this union must be contained either in \(\nu _i(\sigma )\) or in \(\nu _i(\sigma ^{\prime })\) .
Let \(\varrho \in {\mathpzc {Run}}_{T}\) be a "run" in \(T\) , let \(l\in {\mathpzc {Loop}}(T)\) be the loop of \(T\) such that \(\mathit {Inf}(\varrho )=l\) and let \(i\) be the "index" of the edges in this loop. If \(l\in \mathcal {F}\) , let \(\tau \) be a maximal node in \(t_i\) (for \({\sqsubseteq }\) ) such that \(l\subseteq \nu _i(\tau )\) . This node \(\tau \) is a round node, and from the previous claim it follows that there is some edge \(e\in l\) such that \(e\) does not belong to any child of \(\tau \) , so \(e\in E_{i,\tau }\) and \(e\notin F_{i,\tau }\) , and the "run" \(\varrho \) is accepted by the Rabin condition too. If \(l\notin \mathcal {F}\) , then for every round node \(\tau \) , if \(l\subseteq \nu _i(\tau )\) then \(l\subseteq \nu _i(\sigma )\) for some child \(\sigma \) of \(\tau \) . Therefore, for every Rabin pair \((E_{i,\tau },F_{i,\tau })\) and every \(e\in l\) , it is verified that \(e\in E_{i,\tau } \, \Rightarrow \, e\in F_{i,\tau }\) .
The Rabin condition presented in this proof does not necessarily use the optimal number of Rabin pairs required to define a Rabin condition "equivalent to" \(\mathcal {F}\) over \(T\) .
Let \((V,E,\mathit {Source},\mathit {Target},I_0,\mathcal {F})\) be a Muller transition system. The following conditions are equivalent:
We can define a "Streett condition" over \(T\) that is "equivalent to" \(\mathcal {F}\) over \(T\) .
For every pair of loops \(l_1,l_2\in {\mathpzc {Loop}}(\) , if \(l_1\in \mathcal {F}\) and \(l_2\in \mathcal {F}\) , then \(l_1\cup l_2 \in \mathcal {F}\) .
\({\mathcal {ACD}}(T)\) is a "Streett ACD".
We omit the proof of proposition REF , as it is the dual of proposition REF .
Let \((V,E,\mathit {Source},\mathit {Target},I_0,\mathcal {F})\) be a Muller transition system. The following conditions are equivalent:
We can define a "parity condition" over \(T\) that is "equivalent to" \(\mathcal {F}\) over \(T\) .
For every pair of loops \(l_1,l_2\in {\mathpzc {Loop}}(\) , if \(l_1\in \mathcal {F}\, \Leftrightarrow \, l_2\in \mathcal {F}\) , then \(l_1\cup l_2 \in \mathcal {F}\, \Leftrightarrow \,l_1\in \mathcal {F}\) . That is, union of loops having the same “accepting status” preserves their “accepting status”.
\({\mathcal {ACD}}(T)\) is a "parity ACD".
Moreover, the parity condition we can define over \(T\) is a "\([1,\eta ]\) -parity" (resp. \([0,\eta -1]\) -parity / \("\mathit {Weak}_k"\) ) condition if and only if \({\mathcal {ACD}}(T)\) is a "\([1,\eta ]\) -parity ACD" (resp. \([0,\eta -1]\) -parity ACD / "\(\mathit {Weak}_k\) ACD").
(\(1 \Rightarrow 2\) )
Suppose that \(T\) uses a parity acceptance condition with the priorities given by \(p:E\rightarrow \mathbb {N} \) . Then, since \(l_1\) and \(l_2\) are both accepting or both rejecting, \(p_1=\min p(l_1)\) and \(p_2=\min p(l_2)\) have the same parity, which is also the parity of \(\min p(l_1\cup l_2)=\min \lbrace p_1,p_2\rbrace \) .
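This step relies on min-parity semantics, where the smallest priority produced infinitely often decides acceptance; under that convention the priority of a union of loops is the minimum of their priorities, so a union of two loops with the same accepting status keeps that status. A tiny illustration with made-up priorities:

```python
def loop_priority(p, loop):
    """Priority produced by a loop under min-parity semantics:
    the smallest priority among its edges."""
    return min(p[e] for e in loop)

p = {"e1": 2, "e2": 4, "e3": 1}
l1, l2 = {"e1"}, {"e2"}          # both even, hence both accepting
assert loop_priority(p, l1 | l2) == min(loop_priority(p, l1),
                                        loop_priority(p, l2))
assert loop_priority(p, l1 | l2) % 2 == 0   # the union stays accepting
```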
(\(2 \Rightarrow 3\) )
Let \(q\in V\) of "index" \(i\) , and \("t_q"\) be the subtree of \({\mathcal {ACD}}(\) associated to \(q\) . Suppose that there is a node \(\tau \in t_q\) with two different children \(\sigma _1\) and \(\sigma _2\) . The loops \(\nu _i(\sigma _1)\) and \(\nu _i(\sigma _2)\) are different maximal loops with the property \(\nu _i(\sigma )\subseteq \nu _i(\tau )\) and \(\nu _i(\sigma )\in \mathcal {F}\, \Leftrightarrow \, \nu _i(\tau ) \notin \mathcal {F}\) . Since they share the state \(q\) , their union is also a loop contained in \(\nu _i(\tau )\) and then
\( \nu _i(\sigma _1) \cup \nu _i(\sigma _2)\in \mathcal {F}\; \Leftrightarrow \; \nu _i(\tau ) \in \mathcal {F}\; \Leftrightarrow \; \nu _i(\sigma _1)\notin \mathcal {F}\)
contradicting the hypothesis.
(\(3 \Rightarrow 1\) )
From the construction of the "ACD-transformation", it follows that \("\mathcal {P}_{\mathcal {ACD}(T)}"\) is just a relabelling of \(T\) with an equivalent parity condition.
For the implication from right to left of the last statement, we remark that if the trees of \({\mathcal {ACD}}(\) have priorities assigned in \([\mu ,\eta ]\) , then the parity transition system \("\mathcal {P}_{\mathcal {ACD}(}"\) will use priorities in \([\mu ,\eta ]\) . If \({\mathcal {ACD}}(\) is a "\(\mathit {Weak}_k\) ACD", then in each "strongly connected component" of \("\mathcal {P}_{\mathcal {ACD}(}"\) the number of priorities used will be the same as the "height" of the corresponding tree of \({\mathcal {ACD}}(\) (at most \(k\) ).
For the other implication it suffices to remark that the priorities assigned by \({\mathcal {ACD}}(\) are optimal (proposition REF ).
Given a "transition system graph" \(G=(V,E,\mathit {Source},\mathit {Target},I_0)\) and a "Muller condition" \(\mathcal {F}\subseteq \mathcal {P}(E)\) , we can define a "parity condition" \(p:E\rightarrow \mathbb {N} \) "equivalent to" \(\mathcal {F}\) over \(G\) if and only if we can define a "Rabin condition" \(R\) and a "Streett condition" \(S\) over \(G\) such that
\( (G,\mathcal {F}) \,"\simeq "\, (G,R)\, "\simeq "\, (G,S) \) .
Moreover, if the Rabin condition \(R\) uses \(r\) Rabin pairs and the Streett condition \(S\) uses \(s\) Streett pairs, we can take the parity condition \(p\) using priorities in
\([1,2r+1]\) if \(r\le s\) .
\([0,2s]\) if \(s\le r\) .
The first statement is a consequence of the characterisations (2) or (3) from propositions REF , REF and REF .
For the second statement we remark that the trees of \({\mathcal {ACD}}(\) have "height" at most \(\min \lbrace 2r+1, 2s+1\rbrace \) . If \(r\ge s\) , then the height \(2r+1\) can only be reached by (ACD)odd trees, and if \(s\ge r\) , the height \(2s+1\) only by (ACD)even trees.
From the last statement of proposition REF and thanks to the second item of proposition REF , we obtain:
Given a "transition system graph" \(G\) and a Muller condition \(\mathcal {F}\) over \(G\) , there is an equivalent \("\mathit {Weak}_k"\) condition over \(G\) if and only if there are both \([0,k]\) and "\([1,k+1]\) -parity" conditions "equivalent to" \(\mathcal {F}\) over \(G\) .
In particular, there is an equivalent "Weak condition" if and only if there are "Büchi" and "co-Büchi" conditions equivalent to \(\mathcal {F}\) over \(G\) .
It is important to notice that the previous results are stated for non-labelled transition systems. We must be careful when translating these results to automata and formal languages. For instance, in [33]} there is an example of a non-deterministic automaton \(\mathcal {A}\) , such that we can put on top of it Rabin and Streett conditions \(R\) and \(S\) such that \(\mathcal {L}(\mathcal {A},R)=\mathcal {L}(\mathcal {A},S)\) , but we cannot put a parity condition on top of it recognising the same language. However, proposition REF allows us to obtain analogous results for "deterministic automata".
[[33]}]
Let \(\mathcal {A}\) be the "transition system graph" of a "deterministic automaton" with set of states \(Q\) . Let \(R\) be a Rabin condition over \(\mathcal {A}\) with \(r\) pairs and \(S\) a Streett condition over \(\mathcal {A}\) with \(s\) pairs such that \(\mathcal {L}(\mathcal {A},R)=\mathcal {L}(\mathcal {A},S)\) . Then, there exists a parity condition \(p: Q \times \Sigma \rightarrow \mathbb {N} \) over \(\mathcal {A}\) such that \(\mathcal {L}(\mathcal {A},p)=\mathcal {L}(\mathcal {A},R)=\mathcal {L}(\mathcal {A},S)\) .
Moreover,
if \(r\le s\) , we can take \(p\) to be a "\([1,2r+1]\) -parity condition".
if \(s\le r\) , we can take \(p\) to be a "\([0,2s]\) -parity condition".
Proposition REF implies that \((\mathcal {A},R)"\simeq "(\mathcal {A},S)\) , and after corollary REF , there is a parity condition \(p\) using the claimed priorities such that \((\mathcal {A},p)"\simeq "(\mathcal {A},R)\) . Therefore \(\mathcal {L}(\mathcal {A},p)=\mathcal {L}(\mathcal {A},R)\) , since \((\mathcal {A},p)"\simeq "(\mathcal {A},R)\) implies \(\mathcal {L}(\mathcal {A},p)=\mathcal {L}(\mathcal {A},R)\) for both deterministic and non-deterministic automata.
Let \(\mathcal {A}\) be the "transition system graph" of a deterministic automaton and \(p\) and \(p^{\prime }\) be \([0,k]\) and "\([1,k+1]\) -parity conditions" respectively over \(\mathcal {A}\) such that \(\mathcal {L}(\mathcal {A},p)= \mathcal {L}(\mathcal {A},p^{\prime })\) . Then, there exists a \("\mathit {Weak}_k"\) condition \(W\) over \(\mathcal {A}\) such that \(\mathcal {L}(\mathcal {A},W)=\mathcal {L}(\mathcal {A},p)\) .
In particular, there is a \("\mathit {Weak}"\) condition \(W\) over \(\mathcal {A}\) such that \(\mathcal {L}(\mathcal {A},W)=L\) if and only if there are both "Büchi" and "co-Büchi" conditions \(B,B^{\prime }\) over \(\mathcal {A}\) such that \(\mathcal {L}(\mathcal {A},B)= \mathcal {L}(\mathcal {A},B^{\prime })=L\) .
It follows from proposition REF and corollary REF .
Conclusions
We have presented a transformation that, given a Muller "transition system", provides an equivalent "parity" transition system that has minimal size and uses an optimal number of priorities among those which accept a "locally bijective morphism" to the original Muller transition system. In order to describe this transformation we have introduced the "alternating cycle decomposition", a data structure that arranges all the information about the acceptance condition of the transition system and the interplay between this condition and the structure of the system.
We have shown in section how the alternating cycle decomposition can be useful to reason about acceptance conditions, and we hope that this representation of the information will be helpful in future works.
We have not discussed the complexity of effectively computing the "alternating cycle decomposition" of a Muller transition system. It is known that solving Muller games is \(\mathrm {PSPACE}\) -complete when the acceptance condition is given as a list of accepting sets of colours
[9]}. However, given a Muller game \(\mathcal {G}\) and the "Zielonka tree" of its Muller condition, we have a transformation into a parity game of polynomial size in the size of \(\mathcal {G}\) , so solving Muller games with this extra information is in \(\mathrm {NP}\cap \mathrm {co}\) -\(\mathrm {NP}\) . Also, in order to build \({\mathcal {ACD}}(T)\) we suppose that the Muller condition is expressed using as colours the set of edges of the game (that is, as an explicit Muller condition), and solving explicit Muller games is in \(\mathrm {PTIME}\) [8]}. Consequently, unless \(\mathrm {PSPACE}\) is contained in \(\mathrm {NP}\cap \mathrm {co}\) -\(\mathrm {NP}\) , we cannot compute the "Zielonka tree" of a Muller condition or the "alternating cycle decomposition" of a Muller transition system in polynomial time.
\(\varphi _V(v_0)\in I_0^{\prime }\) for every \(v_0\in I_0\) (initial states are preserved).
\(\mathit {Source}^{\prime }(\varphi _E(e))=\varphi _V(\mathit {Source}(e))\) for every \(e\in E\) (origins of edges are preserved).
\(\mathit {Target}^{\prime }(\varphi _E(e))=\varphi _V(\mathit {Target}(e))\) for every \(e\in E\) (targets of edges are preserved).
For every "run" \(\varrho \in {\mathpzc {Run}}_{T}\) , \(\varrho \in \mathit {Acc}\; \Leftrightarrow \; \varphi _E(\varrho ) \in \mathit {Acc}^{\prime }\) (acceptance condition is preserved).
If \((T,l_V,l_E)\) , \((T^{\prime },l_V^{\prime },l_E^{\prime })\) are "labelled transition systems", we say that \(\varphi \) is a ""morphism of labelled transition systems"" if in addition it verifies
\(l_V^{\prime }(\varphi _V(v))=l_V(v)\) for every \(v\in V\) (labels of states are preserved).
\(l_E^{\prime }(\varphi _E(e))=l_E(e)\) for every \(e\in E\) (labels of edges are preserved).
We remark that it follows from the first three conditions that if \(\varrho \in {\mathpzc {Run}}_{T}\) is a "run" in \(T\) , then \(\varphi _E(\varrho )\in {\mathpzc {Run}}_{T^{\prime }}\) (it is a "run" in \(T^{\prime }\) starting from some initial vertex). Given a "morphism of transition systems" \(\varphi =(\varphi _V,\varphi _E)\) , we will denote both maps by \(\varphi \) whenever no confusion arises. We extend \(\varphi _E\) to \(E^{*}\) and \(E^{\omega }\) componentwise.
A "morphism of transition systems" \(\varphi =(\varphi _V, \varphi _E)\) is unequivocally characterized by the map \(\varphi _E\) . Nevertheless, it is convenient to keep the notation with both maps.
Given two "transition systems" \(T=(V,E,\mathit {Source},\mathit {Target},I_0,\mathit {Acc})\) and \(T^{\prime }=(V^{\prime },E^{\prime },\mathit {Source}^{\prime },\mathit {Target}^{\prime },I_0^{\prime },\mathit {Acc}^{\prime })\) , a "morphism of transition systems" \(\varphi : T \rightarrow T^{\prime }\) is called
""Locally surjective"" if
For every \(v_0^{\prime }\in I_0^{\prime }\) there exists \(v_0\in I_0\) such that \(\varphi (v_0)=v_0^{\prime }\) .
For every \(v\in V\) and every \( e^{\prime }\in E^{\prime }\) such that \( \mathit {Source}^{\prime }(e^{\prime })=\varphi (v)\)
there exists \(e\in E \) such that \( \varphi (e)=e^{\prime } \) and \( \mathit {Source}(e)=v\) .
"Locally injective" if
For every \(v_0^{\prime }\in I_0^{\prime }\) , there is at most one \(v_0\in I_0\) such that \(\varphi (v_0)=v_0^{\prime }\) .
For every \( v\in V\) and every \( e^{\prime }\in E^{\prime } \) such that \( \mathit {Source}^{\prime }(e^{\prime })=\varphi (v) \)
if there are \( e_1,e_2\in E \) such that \( \varphi (e_i)=e^{\prime }\) and \( \mathit {Source}(e_i)=v\) , for \( i=1,2 \) , then \( e_1=e_2 \) .
"Locally bijective" if it is both "locally surjective" and "locally injective".
Equivalently, a "morphism of transition systems" \(\varphi \) is "locally surjective" (resp. injective) if the restriction of \(\varphi _E\) to \("\mathit {Out}"(v)\) is a surjection (resp. an injection) into \("\mathit {Out}"(\varphi (v))\) for every \(v\in V\) and the restriction of \(\varphi _V\) to \(I_0\) is a surjection (resp. an injection) into \(I_0^{\prime }\) .
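On finite systems this characterisation can be checked directly, one state at a time. A sketch, assuming out-edges are given per state and the morphism as two dictionaries (the encoding is hypothetical; the condition on initial states is omitted for brevity):

```python
def is_locally_bijective(out_edges, phi_v, phi_e, out_edges2):
    """out_edges: state -> list of edge ids of T; out_edges2: the
    same for T'.  phi_v, phi_e: the two components of the morphism.
    Locally bijective iff phi_e restricted to Out(v) is a bijection
    onto Out(phi_v(v)) for every state v."""
    for v, edges in out_edges.items():
        image = [phi_e[e] for e in edges]
        target = out_edges2[phi_v[v]]
        # Multiset equality gives surjectivity and injectivity at once.
        if sorted(image) != sorted(target):
            return False
    return True

out1 = {"A1": ["x", "y"], "A2": ["z"]}
out2 = {"A": ["a", "b"], "B": ["c"]}
phi_v = {"A1": "A", "A2": "B"}
phi_e = {"x": "a", "y": "b", "z": "c"}
assert is_locally_bijective(out1, phi_v, phi_e, out2)
```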
If we only consider the "underlying graph" of a "transition system", without the "accepting condition", the notion of "locally bijective morphism" is equivalent to the usual notion of bisimulation. However, when considering the accepting condition, we only impose that the acceptance of each "run" must be preserved (and not that the colouring of each transition is preserved). This allows us to compare transition systems using different classes of accepting conditions.
We state two simple, but key facts.
If \(\varphi : T \rightarrow T^{\prime }\) is a "locally bijective morphism", then \(\varphi \) induces a bijection between the runs in \({\mathpzc {Run}}_{T}\) and \({\mathpzc {Run}}_{T^{\prime }}\) that preserves their acceptance.
If \(\varphi \) is a "locally surjective morphism", then it is onto the "accessible part" of \(T^{\prime }\) . That is, for every "accessible" state \(v^{\prime }\in V^{\prime }\) , there exists some state \(v\in V\) such that \(\varphi _V(v)=v^{\prime }\) . In particular, if every state of \(T^{\prime }\) is "accessible", \(\varphi \) is surjective.
Intuitively, if we transform a "transition system" \(T_1\) into \(T_2\) “without adding non-determinism”, we will have a locally bijective morphism \(\varphi : T_2 \rightarrow T_1\) . In particular, if we consider the "composition" \(T_2=\mathcal {B}\lhd T_1\) of \(T_1\) by some "deterministic automaton" \(\mathcal {B}\) , as defined in section , the projection over \(T_1\) gives a "locally bijective morphism" from \(T_2\) to \(T_1\) .
Let \(\mathcal {A}\) be the "Muller automaton" presented in example REF , and \(\mathcal {Z}_{\mathcal {F}_1}\) the "Zielonka tree automaton" for its Muller condition \(\mathcal {F}_1=\lbrace \lbrace a\rbrace ,\lbrace b\rbrace \rbrace \) as in figure REF . We show them in figure REF and their "composition" \(\mathcal {Z}_{\mathcal {F}_1}\lhd \mathcal {A}\) in figure REF . If we name the states of \(\mathcal {A}\) with the letters \(A\) and \(B\) , and those of \(\mathcal {Z}_{\mathcal {F}_1}\) with \(\alpha ,\beta \) , there is a locally bijective morphism \(\varphi : \mathcal {Z}_{\mathcal {F}_1}\lhd \mathcal {A}\rightarrow \mathcal {A}\) given by the projection on the first component
\( \varphi _V((X,y))=X \; \text{ for } X\in \lbrace A,B\rbrace ,\, y\in \lbrace \alpha ,\beta \rbrace \)
and \(\varphi _E\) associates to each edge \(e\in \mathit {Out}(X,y)\) labelled by \(a\in \lbrace 0,1\rbrace \) the only edge in \(\mathit {Out}(X)\) labelled with \(a\) .
<FIGURE><FIGURE>We know that \(\mathcal {Z}_{\mathcal {F}_1}\) is a minimal automaton recognizing the "Muller condition" \(\mathcal {F}_1\) (theorem REF ). However, the "composition" \(\mathcal {Z}_{\mathcal {F}_1} \lhd \mathcal {A}\) has 4 states, and in example REF (figure REF ) we have shown a parity automaton recognizing \(\mathcal {L}(\mathcal {A})\) with only 3 states. Moreover, there is a "locally bijective" morphism from this smaller parity automaton to \(\mathcal {A}\) (we only have to send the two states on the left to \(A\) and the state on the right to \(B\) ). In the next section we will show a transformation that produces the parity automaton with only 3 states starting from \(\mathcal {A}\) .
Morphisms of automata and games
Before presenting the optimal transformation of Muller transition systems, we will state some facts about "morphisms" in the particular case of "automata" and "games". When we speak about a "morphism" between two automata, we always refer implicitly to the morphism between the corresponding "labelled transition systems", as explained in "example REF ".
A "morphism" \(\varphi =(\varphi _V,\varphi _E)\) between two "deterministic automata" is always "locally bijective" and it is completely characterized by the map \(\varphi _V\) .
For each letter of the input alphabet and each state, there must be one and only one outgoing transition labelled with this letter.
Let \(\mathcal {A}=(Q,\Sigma , I_0, \Gamma , \delta , \mathit {Acc})\) , \(\mathcal {A}^{\prime }=(Q^{\prime },\Sigma , I_0^{\prime }, \Gamma , \delta ^{\prime }, \mathit {Acc}^{\prime })\) be two (possibly non-deterministic) "automata". If there is a "locally surjective morphism" \(\varphi : \mathcal {A}\rightarrow \mathcal {A}^{\prime }\) , then \("\mathcal {L}(\mathcal {A})"="\mathcal {L}(\mathcal {A}^{\prime })"\) .
Let \(u\in \Sigma ^\omega \) . If \(u\in \mathcal {L}(\mathcal {A})\) there is an accepting run, \(\varrho \) , over \(u\) in \(\mathcal {A}\) . By the definition of a "morphism of labelled transition systems", \(\varphi (\varrho )\) is also an accepting "run over \(u\) " in \(\mathcal {A}^{\prime }\) .
Conversely, if \(u\in \mathcal {L}(\mathcal {A}^{\prime })\) there is an accepting "run over \(u\) " \(\varrho ^{\prime }\) in \(\mathcal {A}^{\prime }\) . Since \(\varphi \) is locally surjective there is a run \(\varrho \) in \(\mathcal {A}\) , such that \(\varphi (\varrho )=\varrho ^{\prime }\) , and therefore \(\varrho \) is an accepting run over \(u\) .
The converse of the previous proposition does not hold: \("\mathcal {L}(\mathcal {A})"=\mathcal {L}(\mathcal {A}^{\prime })\) does not imply the existence of morphisms \(\varphi : \mathcal {A}\rightarrow \mathcal {A}^{\prime }\) or \(\varphi : \mathcal {A}^{\prime } \rightarrow \mathcal {A}\) , even if \(\mathcal {A}\) has minimal size among the Muller automata recognizing \(\mathcal {L}(\mathcal {A})\) .
If \(\mathcal {A}\) and \(\mathcal {A}^{\prime }\) are "non-deterministic automata" and \(\varphi :\mathcal {A}\rightarrow \mathcal {A}^{\prime }\) is a "locally bijective morphism", then \(\mathcal {A}\) and \(\mathcal {A}^{\prime }\) have to share some other important semantic properties. Two classes of automata that have been extensively studied are unambiguous and good for games automata. An automaton is ""unambiguous"" if for every input word \(w\in \Sigma ^\omega \) there is at most one accepting "run over" \(w\) . ""Good for games"" automata (GFG), first introduced by Henzinger and Piterman in [1]}, are automata that can resolve the non-determinism depending only on the prefix of the word read so far. These types of automata have many good properties and have been used in different contexts (as for example in the model checking of LTL formulas [2]} or in the theory of cost functions [3]}). Unambiguous automata can recognize \(\omega \) -regular languages using a "Büchi" condition (see [4]}) and GFG automata have strictly more expressive power than deterministic ones, being in some cases exponentially smaller (see [5]}, [6]}).
We omit the proof of the next proposition, as it is a consequence of fact REF and of the argument from the proof of proposition REF .
Let \(\mathcal {A}\) and \(\mathcal {A}^{\prime }\) be two "non-deterministic automata". If \(\varphi :\mathcal {A}\rightarrow \mathcal {A}^{\prime }\) is a "locally bijective morphism", then
\(\mathcal {A}\) is unambiguous if and only if \(\mathcal {A}^{\prime }\) is unambiguous.
\(\mathcal {A}\) is GFG if and only if \(\mathcal {A}^{\prime }\) is GFG.
Having a "locally bijective morphism" between two games implies that the "winning regions" of the players are preserved.
Let \(\mathcal {G}=(V, E, \mathit {Source}, \mathit {Target}, v_0, \mathit {Acc}, l_V)\) and \(\mathcal {G}^{\prime }=(V^{\prime },E^{\prime }, \mathit {Source}^{\prime }, \mathit {Target}^{\prime }, v_0^{\prime }, \mathit {Acc}^{\prime }, l_V^{\prime })\) be two "games" such that there is a "locally bijective morphism" \(\varphi :\mathcal {G}\rightarrow \mathcal {G}^{\prime }\) . Let \(P\in \lbrace Eve, Adam\rbrace \) be a player in those games. Then, \(P\) wins \(\mathcal {G}\) if and only if she/he wins \(\mathcal {G}^{\prime }\) . Moreover, if \(\varphi \) is surjective, the "winning region" of \(P\) in \(\mathcal {G}^{\prime }\) is the image by \(\varphi \) of her/his winning region in \(\mathcal {G}\) , \("\mathcal {W}_P"(\mathcal {G}^{\prime })=\varphi ("\mathcal {W}_P"(\mathcal {G}))\) .
Let \(S_P: {\mathpzc {Run}}_{\mathcal {G}}\cap E^* \rightarrow E\) be a winning "strategy" for player \(P\) in \(\mathcal {G}\) . Then, it is easy to verify that the strategy \(S_P^{\prime }: {\mathpzc {Run}}_{\mathcal {G}^{\prime }}\cap E^{\prime *} \rightarrow E^{\prime }\) defined as
\( S_P^{\prime }(\varrho ^{\prime }) = \varphi _E ( S_P(\varphi ^{-1}(\varrho ^{\prime }))) \)
is a winning "strategy" for \(P\) in \(\mathcal {G}^{\prime }\) . (Remark that thanks to fact REF , the morphism \(\varphi \) induces a bijection over "runs", allowing us to use \(\varphi ^{-1}\) in this case).
Conversely, if \(S_P^{\prime }: {\mathpzc {Run}}_{\mathcal {G}^{\prime }}\cap E^{\prime *} \rightarrow E^{\prime }\) is a winning "strategy" for \(P\) in \(\mathcal {G}^{\prime }\) , then \( S_P(\varrho ) = \varphi _E^{-1} ( S_P^{\prime }(\varphi (\varrho ))) \)
is a winning "strategy" for \(P\) in \(\mathcal {G}\) . Here \( \varphi _E^{-1} (e^{\prime })\) is the only edge \(e\in E\) in \("\mathit {Out}"(\mathit {Target}("\mathit {Last}"(\varrho )))\) such that \(\varphi _E(e)=e^{\prime }\) .
The equality \("\mathcal {W}_P"(\mathcal {G}^{\prime })=\varphi ("\mathcal {W}_P"(\mathcal {G}))\) stems from the fact that if we choose a different initial vertex \(v_1\) in \(\mathcal {G}\) , then \(\varphi \) is a "locally bijective morphism" to the game \(\mathcal {G}^{\prime }\) with initial vertex \(\varphi (v_1)\) . Conversely, if we take a different initial vertex \(v_1^{\prime }\) in \(\mathcal {G}^{\prime }\) , since \(\varphi \) is surjective we can take a vertex \(v_1\in \varphi ^{-1}(v_1^{\prime })\) , and \(\varphi \) remains a locally bijective morphism between the resulting games.
The alternating cycle decomposition
Most transformations of "Muller" into "parity" "transition systems" are based on the "composition" by some automaton converting the Muller condition into a parity one. These transformations act on the totality of the system uniformly, regardless of the local structure of the system and the "acceptance condition".
The transformation we introduce in this section takes into account the interplay between the particular "acceptance condition" and the "transition system", inspired by the alternating chains introduced in [7]}.
In the following we will consider "Muller transition systems" with the Muller acceptance condition using edges as colours. We can always suppose this, since given a transition system \(T\) with edges coloured by \(\gamma : E\rightarrow C\) and a Muller condition \(\mathcal {F}\subseteq \mathcal {P}(C)\) , the condition \(\mathcal {F}^{\prime }\subseteq \mathcal {P}(E)\) defined as \(A\in \mathcal {F}^{\prime } \; \Leftrightarrow \; \gamma (A)\in \mathcal {F}\) is an "equivalent condition over" \(T\) . However, the size of the representation of the condition \(\mathcal {F}\) might change. Making this assumption corresponds to considering what are called explicit Muller conditions. In particular, solving Muller games with explicit Muller conditions is in \(\mathrm {PTIME}\) [8]}, while solving general Muller games is \(\mathrm {PSPACE}\) -complete [9]}.
Given a "transition system" \((V,E,\mathit {Source},\mathit {Target},I_0, \mathit {Acc})\) , a loop is a subset of edges \(l\subseteq E\) such that there exists \(v\in V\) and a finite "run" \(\varrho \in {\mathpzc {Run}}_{T,v}\) such that \("\mathit {First}"(\varrho )="\mathit {Last}"(\varrho )=v\) and \({\mathit {App}}(\varrho )=l\) . The set of "loops" of \(T\) is denoted \({\mathpzc {Loop}}(T)\) . For a "loop" \(l\in {\mathpzc {Loop}}(T)\) we write \( ""\mathit {States}""(l):= \lbrace v\in V \; : \; \exists e\in l, \; \mathit {Source}(e)=v \rbrace . \)
Observe that there is a natural partial order in the set \({\mathpzc {Loop}}(T)\) given by set inclusion.
If \(l\) is a "loop" in \({\mathpzc {Loop}}(T)\) , for every \(q\in {\mathit {States}}(l)\) there is a run \(\varrho \in {\mathpzc {Run}}_{q}\) such that \(\mathit {App}(\varrho )=l\) .
The maximal loops of \({\mathpzc {Loop}}(T)\) (for set inclusion) are disjoint and in one-to-one correspondence with the "strongly connected components" of \(T\) .
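This correspondence is directly computable. The sketch below (our own representation, not code from the text: edges as `(name, source, target)` triples) extracts the maximal loops as the edge sets of the non-trivial strongly connected components, here via Kosaraju's algorithm.

```python
from collections import defaultdict

def sccs(vertices, edges):
    """Kosaraju's algorithm. edges: list of (name, source, target).
    Returns the strongly connected components as a list of vertex sets."""
    out, rev = defaultdict(list), defaultdict(list)
    for _, u, v in edges:
        out[u].append(v)
        rev[v].append(u)
    order, seen = [], set()
    def dfs1(u):                      # first pass: postorder on the graph
        seen.add(u)
        for v in out[u]:
            if v not in seen:
                dfs1(v)
        order.append(u)
    for u in vertices:
        if u not in seen:
            dfs1(u)
    comp = {}
    def dfs2(u, c):                   # second pass: on the reversed graph
        comp[u] = c
        for v in rev[u]:
            if v not in comp:
                dfs2(v, c)
    for u in reversed(order):
        if u not in comp:
            dfs2(u, u)
    groups = defaultdict(set)
    for u, c in comp.items():
        groups[c].add(u)
    return list(groups.values())

def maximal_loops(vertices, edges):
    """Edge sets of the strongly connected components containing a cycle."""
    result = []
    for c in sccs(vertices, edges):
        l = {e for e, u, v in edges if u in c and v in c}
        if l:                         # discard trivial (cycle-free) components
            result.append(l)
    return result
```

For instance, on a graph where `q1` and `q2` form the only cycle, `maximal_loops` returns the single loop consisting of the edges between them.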
[Alternating cycle decomposition]
Let \((V,E,\mathit {Source},\mathit {Target},I_0, \mathcal {F})\) be a "Muller transition system" with "acceptance condition" given by \(\mathcal {F}\subseteq \mathcal {P}(E)\) . The alternating cycle decomposition (abbreviated ACD) of \(T\) , noted \({\mathcal {ACD}}(T)\) , is a family of "labelled trees" \((t_1,\nu _1),\dots ,(t_r,\nu _r)\) with nodes labelled by "loops" in \({\mathpzc {Loop}}(T)\) , \(\nu _i: t_i\rightarrow {\mathpzc {Loop}}(T)\) . We define it inductively as follows:

Let \(\lbrace l_1,\dots , l_r\rbrace \) be the set of maximal loops of \({\mathpzc {Loop}}(T)\) . For each \(i\in \lbrace 1,\dots , r\rbrace \) we consider a "tree" \(t_i\) and define \(\nu _i(\varepsilon )=l_i\) .

Given an already defined node \(\tau \) of a tree \(t_i\) we consider the maximal loops of the set \( \lbrace l\subseteq \nu _i(\tau ) \; : \; l\in {\mathpzc {Loop}}(T) \text{ and } (l \in \mathcal {F}\; \Leftrightarrow \; \nu _i(\tau ) \notin \mathcal {F}) \rbrace \) and for each of these loops \(l\) we add a child to \(\tau \) in \(t_i\) labelled by \(l\) .
For notational convenience we add a special "tree" \((t_0,\nu _0)\) with a single node \(\varepsilon \) labelled with the edges not appearing in any other tree of the forest, i.e., \(\nu _0(\varepsilon )=E \setminus \bigcup _{i=1}^{r}l_i\) (remark that this is not a "loop").
We define \(\mathit {States}(\nu _0(\varepsilon )):= V\setminus \bigcup _{i=1}^{r}\mathit {States}(l_i)\) (remark that this does not follow the general definition of \("\mathit {States}"()\) for loops).
We call the trees \(t_1,\dots , t_r\) the ""proper trees"" of the "alternating cycle decomposition" of \(T\) . Given a node \(\tau \) of \(t_i\) , we note \({\mathit {States}}_i(\tau ):={\mathit {States}}(\nu _i(\tau ))\) .
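For small explicit examples the inductive definition can be run directly. The following brute-force sketch (exponential in the number of edges, with our own encoding: loops as frozensets of edge names, \(\mathcal {F}\) as a set of frozensets) builds one tree of the decomposition; applied to a single-state system it reproduces the Zielonka tree of the condition.

```python
from itertools import combinations

def is_loop(sub, edges):
    """A non-empty edge set is a loop iff its subgraph is strongly connected."""
    sub = set(sub)
    if not sub:
        return False
    adj, radj, verts = {}, {}, set()
    for e, u, v in edges:
        if e in sub:
            adj.setdefault(u, []).append(v)
            radj.setdefault(v, []).append(u)
            verts |= {u, v}
    def reach(adjm, start):           # vertices reachable from `start`
        seen, stack = {start}, [start]
        while stack:
            for w in adjm.get(stack.pop(), []):
                if w not in seen:
                    seen.add(w)
                    stack.append(w)
        return seen
    s = next(iter(verts))
    return reach(adj, s) == verts and reach(radj, s) == verts

def acd_tree(loop, edges, F):
    """One tree of the ACD: children are the maximal sub-loops whose
    membership in F alternates with that of `loop` (brute force)."""
    loop = frozenset(loop)
    cand = [frozenset(c)
            for k in range(1, len(loop))
            for c in combinations(sorted(loop), k)
            if is_loop(c, edges) and ((frozenset(c) in F) != (loop in F))]
    maximal = [l for l in cand if not any(l < m for m in cand)]
    return (loop, [acd_tree(l, edges, F) for l in maximal])

# One state with two self-loops a, b and F = {{a}, {b}}: the root {a, b} is
# rejecting and has two accepting children {a} and {b}.
edges = [("a", "q", "q"), ("b", "q", "q")]
F = {frozenset({"a"}), frozenset({"b"})}
root, children = acd_tree({"a", "b"}, edges, F)
assert {c[0] for c in children} == {frozenset({"a"}), frozenset({"b"})}
```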
As for the "Zielonka tree", the "alternating cycle decomposition" of \(T\) is not unique, since it depends on the order in which we introduce the children of each node. This will not affect the upcoming results, and we will refer to it as “the” alternating cycle decomposition of \(T\) .
For the rest of the section we fix a "Muller transition system" \((V,E,\mathit {Source},\mathit {Target}, I_0, \mathcal {F})\) with the "alternating cycle decomposition" given by \((t_0,\nu _0), (t_1,\nu _1),\dots , (t_r,\nu _r)\) .
The "Zielonka tree" for a "Muller condition" \(\mathcal {F}\) over the set of colours \(C\) can be seen as a special case of this construction, for the automaton with a single state, input alphabet \(C\) , a transition for each letter in \(C\) and "acceptance condition" \(\mathcal {F}\) .
Each state and edge of \(T\) appears in exactly one of the "trees" of \({\mathcal {ACD}}(T)\) .
The ""index"" of a state \(q\in V\) (resp. of an edge \(e\in E\) ) in \({\mathcal {ACD}}(T)\) is the only number \(j\in \lbrace 0,1,\dots ,r\rbrace \) such that \(q\in "\mathit {States}"_j(\varepsilon )\) (resp. \(e \in \nu _j(\varepsilon )\) ).
For each state \(q\in V\) of "index" \(j\) we define the ""subtree associated to the state \(q\) "" as the "subtree" \(t_q\) of \(t_j\) consisting in the set of nodes \(\lbrace \tau \in t_j \; : \; q\in {\mathit {States}}_j(\tau ) \rbrace \) .
We refer to figures REF and REF for an example of \("t_q"\) .
For each "proper tree" \(t_i\) of \("{\mathcal {ACD}}"(T)\) we say that \(t_i\) is even if \(\nu _i(\varepsilon )\in \mathcal {F}\) and that it is odd if \(\nu _i(\varepsilon )\notin \mathcal {F}\) .
We say that the "alternating cycle decomposition" of \(T\) is even if all the trees of maximal "height" of \({\mathcal {ACD}}(T)\) are even; that it is odd if all of them are odd, and that it is ambiguous if there are even and odd trees of maximal "height".
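A sketch of this classification, assuming each proper tree is summarised by its "height" and whether its root label is accepting (this minimal encoding is ours, not the paper's):

```python
def classify_acd(trees):
    """trees: list of (height, root_accepting) pairs, one per proper tree.
    Returns 'even', 'odd' or 'ambiguous' according to the maximal-height trees."""
    h = max(height for height, _ in trees)
    top = {accepting for height, accepting in trees if height == h}
    if top == {True}:
        return "even"
    if top == {False}:
        return "odd"
    return "ambiguous"

# A decomposition whose single highest tree has a rejecting root is odd.
assert classify_acd([(2, True), (3, False)]) == "odd"
```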
For each \(\tau \in t_i\) , \(i=1,\dots ,r\) , we define the priority of \(\tau \) in \(t_i\) , written \(p_i(\tau )\) , as follows:
If \({\mathcal {ACD}}(T)\) is even or ambiguous:
If \(t_i\) is even (\(\nu _i(\varepsilon )\in \mathcal {F}\) ), then \(p_i(\tau ):={\mathit {Depth}}(\tau )=|\tau |\) .
If \(t_i\) is odd (\(\nu _i(\varepsilon )\notin \mathcal {F}\) ), then \(p_i(\tau ):={\mathit {Depth}}(\tau )+1=|\tau |+1\) .
If \({\mathcal {ACD}}(T)\) is odd:
If \(t_i\) is even (\(\nu _i(\varepsilon )\in \mathcal {F}\) ), then \(p_i(\tau ):={\mathit {Depth}}(\tau )+2=|\tau |+2\) .
If \(t_i\) is odd (\(\nu _i(\varepsilon )\notin \mathcal {F}\) ), then \(p_i(\tau ):={\mathit {Depth}}(\tau )+1=|\tau |+1\) .
For \(i=0\) , we define \(p_0(\varepsilon )=0\) if \({\mathcal {ACD}}(T)\) is even or ambiguous and \(p_0(\varepsilon )=1\) if \({\mathcal {ACD}}(T)\) is odd.
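The case distinction above amounts to shifting the depth by 0, 1 or 2 so that the minimal priority of each tree has the correct parity. A minimal sketch (the function name and boolean encoding are our own):

```python
def node_priority(depth, tree_even, acd_odd):
    """p_i(tau) for a node at `depth` (root = 0) of a proper tree t_i.
    tree_even: nu_i(epsilon) is accepting; acd_odd: all maximal-height
    trees of the decomposition are rejecting."""
    if not acd_odd:                      # ACD even or ambiguous
        return depth if tree_even else depth + 1
    return depth + 2 if tree_even else depth + 1

# In every case an accepting node gets an even priority: accepting nodes sit
# at even depths of even trees and odd depths of odd trees, and each shift
# preserves that parity.
assert node_priority(0, tree_even=True, acd_odd=False) == 0
assert node_priority(0, tree_even=False, acd_odd=True) == 1
assert node_priority(1, tree_even=True, acd_odd=True) == 3
```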
The assignment of priorities to nodes produces a labelling of the levels of each tree. It will be used to determine the priorities needed by a parity "transition system" to simulate \(T\) . The distinction between the cases \({\mathcal {ACD}}(T)\) even or odd is added only to obtain the minimal number of priorities in every case.
In figure REF we represent a "transition system" \((V,E,\mathit {Source},\mathit {Target},q_0,\mathcal {F})\) with \(V=\lbrace q_0,q_1,q_2,q_3,q_4,q_5\rbrace \) , \(E=\lbrace a,b,\dots ,j,k\rbrace \) and using the "Muller condition"
\(\mathcal {F}=\lbrace \lbrace c,d,e \rbrace ,\lbrace e \rbrace ,\lbrace g,h,i \rbrace ,\lbrace l \rbrace ,\lbrace h,i,j,k \rbrace ,\lbrace j,k \rbrace \rbrace .\)
It has 2 strongly connected components (with vertices \(S_1=\lbrace q_1,q_2\rbrace , S_2=\lbrace q_3,q_4,q_5\rbrace \) ), and a vertex \(q_0\) that does not belong to any strongly connected component.
The "alternating cycle decomposition" of this transition system is shown in figure REF . It consists of two proper "trees", \(t_1\) and \(t_2\) , corresponding to the strongly connected components of \(T\) , and the tree \(t_0\) that corresponds to the edges not appearing in the strongly connected components. We observe that \({\mathcal {ACD}}(T)\) is odd (\(t_2\) is the highest tree, and it starts with a non-accepting "loop"). It is for this reason that we start labelling the levels of \(t_1\) from 2 (if we had assigned priorities \(0,1\) to the nodes of \(t_1\) we would have used 4 priorities, when only 3 are strictly necessary).
In figure REF we show the "subtree associated to" \(q_4\) .
<FIGURE><FIGURE><FIGURE>
The alternating cycle decomposition transformation
We proceed to show how to use the "alternating cycle decomposition" of a "Muller transition system" to obtain a "parity transition system". Let \((V,E,\mathit {Source},\mathit {Target},I_0, \mathcal {F})\) be a "Muller transition system" and \((t_0,\nu _0), (t_1, \nu _1),\dots , (t_r,\nu _r)\) , its "alternating cycle decomposition".
First, we adapt the definitions of \(\mathit {Supp}\) and \(\mathit {Nextbranch}\) to the setting with multiple trees.
For an edge \(e\in E\) such that \(\mathit {Target}(e)\) has "index" \(j\) , for \(i\in \lbrace 0,1,\dots ,r\rbrace \) and a branch \(\beta \) in some subtree of \(t_i\) , we define the ""support"" of \(e\) from \(\beta \) as:
\( {\mathit {Supp}}(\beta ,i,e)={\left\lbrace \begin{array}{ll}\text{the maximal node (for } {\sqsubseteq }\text{) } \tau \in \beta \text{ such that } e\in \nu _i(\tau ), & \text{ if } i= j ,\\[2mm]\text{the root } \varepsilon \text{ of } t_j, & \text{ if } i\ne j.\end{array}\right.} \)
Intuitively, \({\mathit {Supp}}(\beta ,i,e)\) is the highest node we visit if we want to go from the bottom of the branch \(\beta \) to a node of the tree that contains \(e\) “in an optimal trajectory” (going up as little as possible). If we have to jump to another tree, we define \({\mathit {Supp}}(\beta ,i,e)\) as the root of the destination tree.
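This climbing procedure is easy to implement once branches are concrete. In the sketch below the encoding is our own, not the paper's: a node is the tuple of child indices from the root, the root is `()`, and a branch is identified by its leaf.

```python
def supp(leaf, i, edge, j, nu_i):
    """Supp(beta, i, e): climb from the leaf of branch beta in t_i until a
    node whose label contains `edge` (whose target has index j); if the edge
    lives in another tree, return the root of that tree."""
    if i != j:
        return ()                   # root of the destination tree t_j
    node = leaf
    while edge not in nu_i[node]:   # nu_i[()] contains every edge of index i,
        node = node[:-1]            # so the climb always stops at the root
    return node

# Tiny tree: root labelled {a, b} with children labelled {a} and {b}.
nu = {(): {"a", "b"}, (0,): {"a"}, (1,): {"b"}}
assert supp((0,), 1, "a", 1, nu) == (0,)   # found at the leaf itself
assert supp((0,), 1, "b", 1, nu) == ()     # climb up to the root
assert supp((0,), 1, "c", 2, nu) == ()     # jump to another tree
```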
Let \(i\in \lbrace 0,1,\dots ,r\rbrace \) , \(q\) be a state of "index" \(i\) , \(\beta \) be a branch of some "subtree" of \(t_i\) and \(\tau \in \beta \) be a node of \(t_i\) such that \(q\in {\mathit {States}}_i(\tau )\) . If \(\tau \) is not the deepest node of \(\beta \) , let \(\sigma _\beta \) be the unique child of \(\tau \) in \(t_i\) such that \(\sigma _\beta \in \beta \) . We define:
\( {\mathit {Nextchild}_{t_q}}(\beta ,\tau )={\left\lbrace \begin{array}{ll}\tau , & \text{ if } \tau \text{ is a leaf in } {t_q},\\[3mm]\text{the smallest older sibling of } \sigma _\beta \text{ in } {t_q}, & \text{ if } \sigma _\beta \text{ is defined and has an older sibling in } {t_q},\\[3mm]\text{the smallest child of } \tau \text{ in } {t_q}, & \text{ in any other case}.\end{array}\right.} \)
Let \(i\in \lbrace 0,1,\dots ,r\rbrace \) and \(\beta \) be a branch of some "subtree" of \(t_i\) . For a state \(q\) of "index" \(j\) and a node \(\tau \) such that \(q\in {\mathit {States}}_j(\tau )\) and such that \(\tau \in \beta \) if \(i=j\) , we define:
\( {\mathit {Nextbranch}_{t_q}}(\beta ,i,\tau )={\left\lbrace \begin{array}{ll}\text{the leftmost branch in } {t_q} \text{ below } {\mathit {Nextchild}_{t_q}}(\beta ,\tau ), & \text{ if } i= j ,\\[3mm]\text{the leftmost branch in } {\mathit {Subtree}}_{t_q}(\tau ), & \text{ if } i\ne j.\end{array}\right.} \)
[ACD-transformation]
Let \((V,E,\mathit {Source},\mathit {Target},I_0, \mathcal {F})\) be a "Muller transition system" with "alternating cycle decomposition" \({\mathcal {ACD}}(T)= \lbrace (t_0,\nu _0),(t_1,\nu _1),\dots ,(t_r,\nu _r)\rbrace \) . We define its ""ACD-parity transition system"" (or ACD-transformation) \({\mathcal {P}_{\mathcal {ACD}(T)}}=(V_P,E_P,\mathit {Source}_P,\mathit {Target}_P,I_0^{\prime }, p:E_P\rightarrow \mathbb {N} )\) as follows:
\(V_P=\lbrace (q,i,\beta ) \; : \; q\in V \text{ of "index" } i \text{ and } \beta \in {\mathit {Branch}}({t_q}) \rbrace \) .
For each node \((q,i,\beta )\in V_P\) and each edge \(e\in {\mathit {Out}}(q)\) we define an edge \(e_{i,\beta }\) from \((q,i,\beta )\) . We set
\(\mathit {Source}_P(e_{i,\beta })=(q,i,\beta )\) , where \(q=\mathit {Source}(e)\) .
\(\mathit {Target}_P(e_{i,\beta })=(q^{\prime },k,{\mathit {Nextbranch}_{t_{q^{\prime }}}}(\beta ,i,\tau ))\) , where \(q^{\prime }=\mathit {Target}(e)\) , \(k\) is its "index" and \(\tau ={\mathit {Supp}}(\beta ,i,e)\) .
\(p(e_{i,\beta })={p_j}({\mathit {Supp}}(\beta ,i,e))\) , where \(j\) is the "index" of \({\mathit {Supp}}(\beta ,i,e)\) .
\(I_0^{\prime }=\lbrace (q_0,i,\beta _0) \; : \; q_0\in I_0, \, i \text{ the index of } q_0\) and \(\beta _0\) the leftmost branch in \({t_{q_0}}\rbrace \) .
If \(T\) is labelled by \(l_V:V\rightarrow L_V\) , \(l_E:E\rightarrow L_E\) , we label \({\mathcal {P}_{\mathcal {ACD}(T)}}\) by \(l_V^{\prime }((q,i,\beta ))=l_V(q)\) and \(l_E^{\prime }(e_{i,\beta })=l_E(e)\) .
The set of states of \(\mathcal {P}_{\mathcal {ACD}(T)}\) is built as follows: for each state \(q\in V\) we consider the subtree of \({\mathcal {ACD}}(T)\) consisting of the nodes with \(q\) in its label, and we add a state for each branch of this subtree. Intuitively, to define transitions in the transition system \({\mathcal {P}_{\mathcal {ACD}(T)}}\) we move simultaneously in \(T\) and in \({\mathcal {ACD}}(T)\) . We start from \(q_0\in I_0\) and from the leftmost branch of \({t_{q_0}}\) . When we take a transition \(e\) in \(T\) while being in a branch \(\beta \) , we climb the branch \(\beta \) searching a node \(\tau \) with \(q^{\prime }=\mathit {Target}(e)\) and \(e\) in its label, and we produce the priority corresponding to the level reached. If no such node exists, we jump to the root of the tree corresponding to \(q^{\prime }\) . Then, we move to the next child of \(\tau \) on the right of \(\beta \) in the tree \({t_{q^{\prime }}}\) , and we pick the leftmost branch under it in \({t_{q^{\prime }}}\) . If we had jumped to the root of \({t_{q^{\prime }}}\) from a different tree, we pick the leftmost branch of \({t_{q^{\prime }}}\) .
The size of \({\mathcal {P}_{\mathcal {ACD}(T)}}\) is
\( |\mathcal {P}_{\mathcal {ACD}(T)}|=\sum \limits _{q\in V} |{\mathit {Branch}}({t_q})|. \)
The number of priorities used by \({\mathcal {P}_{\mathcal {ACD}(T)}}\) is the "height" of a maximal tree of \({\mathcal {ACD}}(T)\) if \({\mathcal {ACD}}(T)\) is even or odd, and the "height" of a maximal tree plus one if \({\mathcal {ACD}}(T)\) is ambiguous.
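The size formula can be evaluated directly on the forest. A sketch with a made-up encoding (each node is a pair `(states, children)` of the states in its label and its subtrees; the toy forest below is ours, not the one of the figures):

```python
def branches_for_state(tree, q):
    """|Branch(t_q)| contributed by this tree: the number of branches of the
    subtree of nodes whose label contains q (0 if q does not appear)."""
    states, children = tree
    if q not in states:
        return 0
    n = sum(branches_for_state(c, q) for c in children)
    return n if n > 0 else 1   # a node with no child containing q is a leaf of t_q

def acd_size(forest, all_states):
    """|P_ACD(T)| = sum over the states q of |Branch(t_q)|."""
    return sum(sum(branches_for_state(t, q) for t in forest) for q in all_states)

t0 = ({"q0"}, [])
t1 = ({"q1", "q2"}, [({"q1"}, []), ({"q2"}, []), ({"q1"}, [])])
# q0 contributes 1 branch, q1 contributes 2, q2 contributes 1.
assert acd_size([t0, t1], {"q0", "q1", "q2"}) == 4
```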
In figure REF we show the "ACD-parity transition system" \({\mathcal {P}_{\mathcal {ACD}(T)}}\) of the transition system of example REF (figure REF ). States are labelled with the corresponding state \(q_j\) in \(T\) , the tree of its "index" and a node of \(t_i\) that is a leaf in \({t_{q_j}}\) (defining a branch of it). We have tagged the edges of \({\mathcal {P}_{\mathcal {ACD}(T)}}\) with the names of edges of \(T\) (even if it is not an automaton). These indicate the image of the edges by the "morphism" \(\varphi : \mathcal {P}_{\mathcal {ACD}(T)}\rightarrow T\) , and make clear the bijection between "runs" in \(T\) and in \(\mathcal {P}_{\mathcal {ACD}(T)}\) . In this example, we create one “copy” of states \(q_0,q_1\) and \(q_2\) , three “copies” of the state \(q_3\) and two “copies” of states \(q_4\) and \(q_5\) . The resulting "parity transition system" \({\mathcal {P}_{\mathcal {ACD}(T)}}\) has therefore 10 states.
<FIGURE>Let \(\mathcal {A}\) be the "Muller automaton" of example "REF ". Its "alternating cycle decomposition" has a single tree that coincides with the "Zielonka tree" of its "Muller acceptance condition" \(\mathcal {F}_1\) (shown in figure REF ). However, its "ACD-parity transition system" has only 3 states, fewer than the "composition" \(\mathcal {Z}_{\mathcal {F}_1} \lhd \mathcal {A}\) (figure REF ), as shown in figure REF .
<FIGURE>[Correctness]
Let \((V, E, \mathit {Source}, \mathit {Target}, I_0, \mathcal {F})\) be a "Muller transition system" and \({\mathcal {P}_{\mathcal {ACD}(T)}}=(V_P, E_P, \mathit {Source}_P, \mathit {Target}_P, I_0^{\prime }, p:E_P\rightarrow \mathbb {N} )\) its "ACD-transition system". Then, there exists a "locally bijective morphism" \( \varphi : {\mathcal {P}_{\mathcal {ACD}(T)}} \rightarrow T\) . Moreover, if \(T\) is a "labelled transition system", then \(\varphi \) is a "morphism of labelled transition systems".
We define \(\varphi _V : V_P \rightarrow V\) by \(\varphi _V((q,i,\beta ))=q\) and \(\varphi _E : E_P \rightarrow E\) by \(\varphi _E(e_{i,\beta })=e\) .
It is clear that this map preserves edges, initial states and labels. It is also clear that it is "locally bijective", since we have defined one initial state in \({\mathcal {P}_{\mathcal {ACD}(T)}}\) for each initial state in \(T\) , and by definition the edges in \(\mathit {Out}((q,i,\beta ))\) are in bijection with \(\mathit {Out}(q)\) . It induces therefore a bijection between the runs of the transition systems (fact REF ). Let us see that a "run" \(\varrho \) in \(T\) is accepted if and only if \(\varphi ^{-1}(\varrho )\) is accepted in \({\mathcal {P}_{\mathcal {ACD}(T)}}\) . First, we remark that any infinite run \(\varrho \) of \(T\) will eventually stay in a "loop" \(l\in {\mathpzc {Loop}}(T)\) such that \(\mathit {Inf}(\varrho )=l\) , and therefore we will eventually only visit states corresponding to the tree \(t_i\) such that \(l\subseteq \nu _i(\varepsilon )\) in the "alternating cycle decomposition". Let \(p_{\min }\) be the smallest priority produced infinitely often in the run \(\varphi ^{-1}(\varrho )\) in \(\mathcal {P}_{\mathcal {ACD}(T)}\) . As in the proof of proposition REF , there is a unique node \(\tau _p\) in \(t_i\) visited infinitely often such that \(p_i(\tau _p)=p_{\min }\) . Moreover, the states visited infinitely often in \(\mathcal {P}_{\mathcal {ACD}(T)}\) correspond to branches below \(\tau _p\) , that is, they are of the form
\( (q,i,\beta )\) , with \(\beta \in {\mathit {Subtree}}_{t_q}(\tau _p), \text{ for } q\in {\mathit {States}}_i(\tau _p)\) .
We claim that \(\tau _p\) verifies:
\(l\subseteq \nu _i(\tau _p)\) .
\(l\nsubseteq \nu _i(\sigma )\) for every "child" \(\sigma \) of \(\tau _p\) .
By definition of \({\mathcal {ACD}}(T)\) this implies
\( l\in \mathcal {F}\; \Longleftrightarrow \; \nu _i(\tau _p)\in \mathcal {F}\; \Leftrightarrow \; p_{\min } \text{ is even.}\)
We show that \(l\subseteq \nu _i(\tau _p)\) . For every edge \(e\notin \nu _i(\tau _p)\) of "index" \(i\) and for every branch \(\beta \in {\mathit {Subtree}}_{t_q}(\tau _p), \text{ for } q\in {\mathit {States}}_i(\tau _p)\) , we have that \(\tau ^{\prime }={\mathit {Supp}}(\beta ,i,e)\) is a strict "ancestor" of \(\tau _p\) in \(t_i\) . Therefore, if \(l\) was not contained in \(\nu _i(\tau _p)\) we would produce infinitely often priorities strictly smaller than \(p_{\min }\) .
Finally, we show that \(l\nsubseteq \nu _i(\sigma )\) for every "child" \(\sigma \) of \(\tau _p\) . Since we reach \(\tau _p\) infinitely often, we take transitions \(e_{i,\beta }\) such that \(\tau _p={\mathit {Supp}}(\beta ,i,e)\) infinitely often. Let us reason by contradiction and let us suppose that there is some child \(\sigma \) of \(\tau _p\) such that \(l\subseteq \nu _i(\sigma )\) . Then for each edge \(e\in l\) , \(\mathit {Target}(e)\in {\mathit {States}}_i(\sigma )\) , and therefore \(\sigma \in t_q\) for all \(q\in {\mathit {States}}(l)\) and for each transition \(e_{i,\beta }\) such that \(\tau _p={\mathit {Supp}}(\beta ,i,e)\) , some branches passing through \(\sigma \) are considered as destinations. Eventually, we will go to some state \((q,i,\beta ^{\prime })\) , for some branch \(\beta ^{\prime }\in {\mathit {Subtree}}_{t_q}(\sigma )\) . But since \(l\subseteq \nu _i(\sigma )\) , then for every edge \(e\in l\) and branch \(\beta ^{\prime }\in {\mathit {Subtree}}_{t_q}(\sigma )\) it is verified that \({\mathit {Supp}}(\beta ^{\prime },i,e)\) is a "descendant" of \(\sigma \) , so we would not visit again \(\tau _p\) and all priorities produced infinitely often would be strictly greater than \(p_{\min }\) .
From the remarks at the end of section REF , we obtain:
If \(\mathcal {A}\) is a "Muller automaton" over \(\Sigma \) , the automaton \({\mathcal {P}_{\mathcal {ACD}(\mathcal {A})}}\) is a "parity automaton" recognizing \(\mathcal {L}(\mathcal {A})\) . Moreover,
\(\mathcal {A}\) is "deterministic" if and only if \(\mathcal {P}_{\mathcal {ACD}(\mathcal {A})}\) is deterministic.
\(\mathcal {A}\) is "unambiguous" if and only if \(\mathcal {P}_{\mathcal {ACD}(\mathcal {A})}\) is unambiguous.
\(\mathcal {A}\) is "GFG" if and only if \(\mathcal {P}_{\mathcal {ACD}(\mathcal {A})}\) is GFG.
If \(\mathcal {G}\) is a "Muller game", then \(\mathcal {P}_{\mathcal {ACD}(\mathcal {G})}\) is a "parity game" that has the same winner as \(\mathcal {G}\) .
The "winning region" of \(\mathcal {G}\) for a player \(P\in \lbrace Eve, Adam\rbrace \) is \({\mathcal {W}_P}(\mathcal {G})=\varphi ("\mathcal {W}_P"(\mathcal {P}_{\mathcal {ACD}(\mathcal {G})}))\) , where \(\varphi \) is the morphism of the proof of proposition REF .
Optimality of the alternating cycle decomposition transformation
In this section we prove the strong optimality of the "alternating cycle decomposition transformation", both for number of priorities (proposition REF ) and for size (theorem REF ). We use the same ideas as for proving the optimality of the "Zielonka tree automaton" in section REF .
[Optimality of the number of priorities]
Let \(T\) be a "Muller transition system" such that all its states are "accessible" and let \({\mathcal {P}_{\mathcal {ACD}(T)}}\) be its "ACD-transition system". If \(\mathcal {P}\) is another "parity transition system" such that there is a "locally bijective morphism" \(\varphi :\mathcal {P}\rightarrow T\) , then \(\mathcal {P}\) uses at least as many priorities as \({\mathcal {P}_{\mathcal {ACD}(T)}}\) .
We distinguish 3 cases depending on whether \({\mathcal {ACD}}(T)\) is even, odd or ambiguous.
We treat simultaneously the cases \({\mathcal {ACD}}(T)\) even and \({\mathcal {ACD}}(T)\) odd. In these cases, the number \(h\) of priorities used by \({\mathcal {P}_{\mathcal {ACD}(T)}}\) coincides with the maximal "height" of a tree in \({\mathcal {ACD}}(T)\) . Let \(t_i\) be a tree of maximal "height" \(h\) in \({\mathcal {ACD}}(T)\) , \(\beta =\lbrace \tau _1,\dots ,\tau _{h}\rbrace \in {\mathit {Branch}}(t_i)\) a branch of \(t_i\) of maximal length (ordered as \(\tau _1 {\sqsupseteq } \tau _2 {\sqsupseteq } \dots {\sqsupseteq } \tau _{h}=\varepsilon \) ) and \(l_j=\nu _i(\tau _j)\) , \(j=1,\dots , h\) . We fix \(q\in {\mathit {States}}_i(\tau _1)\) , where \(\tau _1\) is the leaf of \(\beta \) , and we write
\(\mathit {Loop}_q(T)=\lbrace w\in {\mathpzc {Run}}_{T,q}\cap E^* \; : \; {\mathit {First}}(w)={\mathit {Last}}(w)=q \rbrace ,\)
and for each \(j=1,\dots , h\) we choose \(w_j \in \mathit {Loop}_q(T)\) such that \({\mathit {App}}(w_j)=l_j\) . Let \(\eta ^{\prime }\) be the maximal priority appearing in \(\mathcal {P}\) . We show as in the proof of proposition REF that for every \(v\in \mathit {Loop}_q(T)\) , the "run" \(\varphi ^{-1}((w_1\dots w_k v)^\omega )\) must produce a priority smaller than or equal to \(\eta ^{\prime }-k+1\) . Taking \(k=h\) , the "run" \(\varphi ^{-1}((w_1\dots w_h)^\omega )\) produces a priority smaller than or equal to \(\eta ^{\prime }-h+1\) , which is even if and only if \({\mathcal {ACD}}(T)\) is even. By lemma REF we can suppose that \(\mathcal {P}\) uses all priorities in \([\eta ^{\prime }-h+1, \eta ^{\prime }]\) . We conclude that \(\mathcal {P}\) uses at least \(h\) priorities, so at least as many as \({\mathcal {P}_{\mathcal {ACD}(T)}}\) .
In the case \({\mathcal {ACD}}(T)\) ambiguous, if \(h\) is the maximal "height" of a tree in \({\mathcal {ACD}}(T)\) , then \({\mathcal {P}_{\mathcal {ACD}(T)}}\) uses \(h+1\) priorities. We can repeat the previous argument with two different maximal branches of respective maximal even and odd trees. We conclude that \(\mathcal {P}\) uses at least the priorities in a range \([\mu ,\mu +h]\cup [\eta ,\eta +h]\) , with \(\mu \) even and \(\eta \) odd, so it uses at least \(h+1\) priorities.
A similar proof, or an application of the results from [10]}, gives the following result:
If \(\mathcal {A}\) is a deterministic automaton, the accessible part of \({\mathcal {P}_{\mathcal {ACD}(\mathcal {A})}}\) uses the optimal number of priorities to recognize \(\mathcal {L}(\mathcal {A})\) .
Finally, we state and prove the optimality of \({\mathcal {P}_{\mathcal {ACD}(\mathcal {A})}}\) for size.
[Optimality of the number of states]
Let \(T\) be a (possibly "labelled") "Muller transition system" such that all its states are "accessible" and let \({\mathcal {P}_{\mathcal {ACD}(T)}}\) be its "ACD-transition system". If \(\mathcal {P}\) is another "parity transition system" such that there is a "locally bijective morphism" \(\varphi :\mathcal {P}\rightarrow T\) , then
\( |"\mathcal {P}_{\mathcal {ACD}(T)}"|\le |\mathcal {P}| \) .
Proof of theorem REF
We follow the same steps as for proving theorem REF . We will suppose that all states of the "transition systems" considered are "accessible".
Let \(T_1\) , \(T_2\) be "transition systems" such that there is a "morphism of transition systems" \(\varphi : T_1 \rightarrow T_2\) . Let \(l\in {\mathpzc {Loop}}(T_2)\) be a "loop" in \(T_2\) . An ""\(l\) -SCC"" of \(T_1\) (with respect to \(\varphi \) ) is a non-empty "strongly connected subgraph" \((V_l,E_l)\) of the subgraph \((\varphi _V^{-1}({\mathit {States}}(l)),\varphi _E^{-1}(l) )\) such that
\( \text{for every } q_1\in V_l \text{ and every } e_2\in "\mathit {Out}"(\varphi (q_1))\cap l, \text{ there is an edge } e_1\in \varphi ^{-1}(e_2)\cap "\mathit {Out}"(q_1) \text{ such that } e_1\in E_l.\)
That is, an \(l\) -SCC is a "strongly connected subgraph" of \(T_1\) in which all states and transitions correspond via \(\varphi \) to states and transitions appearing in the "loop" \(l\) . Moreover, given a "run" staying in \(l\) in \(T_2\) we can simulate it in the \(l\) -SCC of \(T_1\) (property (REF )).
Let \(T_1\) and \(T_2\) be two "transition systems" such that there is a "locally surjective" "morphism" \(\varphi : T_1 \rightarrow T_2\) . Let \(l\in "\mathpzc {Loop}"(T_2)\) and \(C_l=(V_l,E_l)\) be a non-empty "\(l\) -SCC" in \(T_1\) . Then, for every "loop" \(l^{\prime }\in "\mathpzc {Loop}"(T_2)\) such that \(l^{\prime }\subseteq l\) there is a non-empty "\(l^{\prime }\) -SCC" in \(C_l\) .
Let \((V^{\prime },E^{\prime })=(V_l,E_l)\cap (\varphi _V^{-1}({\mathit {States}}(l^{\prime })),\varphi _E^{-1}(l^{\prime }))\) . We first prove that \((V^{\prime },E^{\prime })\) is non-empty. Let \(q_1\in V_l \subseteq \varphi _V^{-1}({\mathit {States}}(l))\) . Let \(\varrho \in {\mathpzc {Run}}_{T_2,\varphi (q_1)}\) be a finite run in \(T_2\) from \(\varphi (q_1)\) , visiting only edges in \(l\) and ending in \(q_2\in {\mathit {States}}(l^{\prime })\) . From the local surjectivity, we can obtain a run in \(\varphi ^{-1}(\varrho )\) that will stay in \((V_l,E_l)\) and that will end in a state in \(\varphi _V^{-1}({\mathit {States}}(l^{\prime }))\) . The subgraph \((V^{\prime },E^{\prime })\) clearly has property (REF ) (for \(l^{\prime }\) ).
We prove by induction on the size that any non-empty subgraph \((V^{\prime },E^{\prime })\) verifying the property (REF ) (for \(l^{\prime }\) ) admits an \(l^{\prime }\) -SCC. If \(|V^{\prime }|=1\) , then \((V^{\prime },E^{\prime })\) forms by itself a "strongly connected graph". If \(|V^{\prime }|>1\) and \((V^{\prime },E^{\prime })\) is not strongly connected, then there are vertices \(q,q^{\prime }\in V^{\prime }\) such that there is no path from \(q\) to \(q^{\prime }\) following edges in \(E^{\prime }\) . We let
\( V^{\prime }_q=\lbrace p\in V^{\prime } \; : \; \text{there is a path from } q \text{ to } p \text{ in } (V^{\prime },E^{\prime })\rbrace \; ; \; E^{\prime }_q=E^{\prime }\cap "\mathit {Out}"(V^{\prime }_q)\cap "\mathit {In}"(V^{\prime }_q) .\)
Since \(q^{\prime }\notin V^{\prime }_q\) , the size \(|V^{\prime }_q|\) is strictly smaller than \(|V^{\prime }|\) .
Also, the subgraph \((V^{\prime }_q,E^{\prime }_q)\) is non-empty since \(q\in V^{\prime }_q\) .
The property (REF ) holds from the definition of \((V^{\prime }_q,E^{\prime }_q)\) . We conclude by induction hypothesis.
Let \(\mathcal {T}\) be a "Muller transition system" with acceptance condition \(\mathcal {F}\) and let \(\mathcal {P}\) be a "parity transition system" such that there is a "locally bijective morphism" \(\varphi : \mathcal {P}\rightarrow \mathcal {T}\) . Let \(t_i\) be a "proper tree" of \({\mathcal {ACD}}(\mathcal {T})\) and \(\tau ,\sigma _1,\sigma _2\in t_i\) nodes in \(t_i\) such that \(\sigma _1,\sigma _2\) are different "children" of \(\tau \) , and let \(l_1=\nu _i(\sigma _1)\) and \(l_2=\nu _i(\sigma _2)\) . If \(C_1\) and \(C_2\) are an "\(l_1\) -SCC" and an "\(l_2\) -SCC" in \(\mathcal {P}\) , respectively, then \(C_1\cap C_2= \emptyset \) .
Suppose there is a state \(q\in C_1\cap C_2\) . Since \(\varphi _V(q)\in {\mathit {States}}(l_1)\cap {\mathit {States}}(l_2)\) , and \(l_1, l_2\) are "loops", there are finite "runs" \(\varrho _1,\varrho _2 \in {\mathpzc {Run}}_{\varphi _V(q)}\) such that \({\mathit {App}}(\varrho _1)=l_1\) and \(\mathit {App}(\varrho _2)=l_2\) . We can “simulate” these runs in \(C_1\) and \(C_2\) thanks to property (REF ), producing runs \(\varphi ^{-1}(\varrho _1)\) and \(\varphi ^{-1}(\varrho _2)\) in \({\mathpzc {Run}}_{\mathcal {P},q}\) and arriving at \(q_1="\mathit {Last}"(\varphi ^{-1}(\varrho _1))\) and \(q_2=\mathit {Last}(\varphi ^{-1}(\varrho _2))\) . Since \(C_1, C_2\) are "\(l_1,l_2\) -SCC", there are finite runs \(w_1\in {\mathpzc {Run}}_{\mathcal {P},q_1}\) , \(w_2\in {\mathpzc {Run}}_{\mathcal {P},q_2}\) such that \(\mathit {Last}(w_1)=\mathit {Last}(w_2)=q\) , so the runs \(\varphi ^{-1}(\varrho _1)w_1\) and \(\varphi ^{-1}(\varrho _2)w_2\) start and end in \(q\) . We remark that in \(\mathcal {T}\) the runs \(\varphi (\varphi ^{-1}(\varrho _1)w_1)=\varrho _1\varphi _E(w_1)\) and \(\varphi (\varphi ^{-1}(\varrho _2)w_2)=\varrho _2\varphi _E(w_2)\) start and end in \(\varphi _V(q)\) and visit, respectively, all the edges in \(l_1\) and \(l_2\) . From the definition of \({\mathcal {ACD}}(\mathcal {T})\) we have that \(l_1\in \mathcal {F}\, \Leftrightarrow \, l_2 \in \mathcal {F}\, \Leftrightarrow \, l_1\cup l_2 \notin \mathcal {F}\) . Since \(\varphi \) preserves the "acceptance condition", the minimal priority produced by \(\varphi ^{-1}(\varrho _1)w_1\) has the same parity as that of \(\varphi ^{-1}(\varrho _2)w_2\) , but concatenating both runs we must produce a minimal priority of the opposite parity, arriving at a contradiction.
Let \(\mathcal {T}\) be a "Muller transition system" and \("\mathcal {P}_{\mathcal {ACD}(\mathcal {T})}"\) its "ACD-parity transition system". For each tree \(t_i\) of \({\mathcal {ACD}}(\mathcal {T})\) , each node \(\tau \in t_i\) and each state \(q\in {\mathit {States}}_i(\tau )\) we write:
\( \psi _{\tau ,i,q}=|{\mathit {Branch}}({\mathit {Subtree}}_{t_q}(\tau ))|=|\lbrace (q,i,\beta )\in "\mathcal {P}_{\mathcal {ACD}(\mathcal {T})}" \; : \; \beta \, \text{ passes through } \tau \rbrace |,\)
\(\Psi _{\tau ,i}=\sum \limits _{q\in {\mathit {States}}_i(\tau )}\psi _{\tau ,i,q}=|\lbrace (q,i,\beta )\in "\mathcal {P}_{\mathcal {ACD}(\mathcal {T})}" \; : \;q\in V \text{ of "index" } i \text{ and } \beta \, \text{ passes through } \tau \rbrace | .\)
If we consider the root of the trees in \({\mathcal {ACD}}(\mathcal {T})\) , then each \(\Psi _{\varepsilon ,i}\) is the number of states in \("\mathcal {P}_{\mathcal {ACD}(\mathcal {T})}"\) associated to this tree, i.e., \(\Psi _{\varepsilon ,i}=|\lbrace (q,i,\beta )\in "\mathcal {P}_{\mathcal {ACD}(\mathcal {T})}" \; : \; q\in V,\; \beta \in {\mathit {Branch}}("t_q")\rbrace |\) . Therefore
\( |"\mathcal {P}_{\mathcal {ACD}(\mathcal {T})}"|=\sum \limits _{i=1}^{r}\Psi _{\varepsilon ,i} .\)
[Proof of theorem REF]
Let \(\mathcal {T}=(V,E,\mathit {Source},\mathit {Target},I_0,\mathcal {F})\) be a "Muller transition system", \("\mathcal {P}_{\mathcal {ACD}(\mathcal {T})}"\) the "ACD-parity transition system" of \(\mathcal {T}\) and \(\mathcal {P}=(V^{\prime },E^{\prime },\mathit {Source}^{\prime },\mathit {Target}^{\prime },I_0^{\prime },p^{\prime }:E^{\prime }\rightarrow \mathbb {N} )\) a parity transition system such that there is a "locally bijective morphism" \(\varphi : \mathcal {P}\rightarrow \mathcal {T}\) .
First of all, we construct two modified transition systems \(\widetilde{\mathcal {T}}=(V,\widetilde{E},\widetilde{\mathit {Source}},\widetilde{\mathit {Target}},I_0,\widetilde{\mathcal {F}})\) and \(\widetilde{\mathcal {P}}=(V^{\prime },\widetilde{E^{\prime }},\widetilde{\mathit {Source}}^{\prime },\widetilde{\mathit {Target}}^{\prime }, I_0^{\prime }, \widetilde{p^{\prime }}:\widetilde{E^{\prime }}\rightarrow \mathbb {N} )\) such that:
Each vertex of \(V\) belongs to a "strongly connected component".
All leaves \(\tau \in t_i\) verify \(|{\mathit {States}}_i(\tau )|=1\) , for every \(t_i\in {\mathcal {ACD}}(\widetilde{\mathcal {T}})\) .
Nodes \(\tau \in t_i\) verify \({\mathit {States}}_i(\tau )=\bigcup _{\sigma \in \mathit {Children}(\tau )}\mathit {States}_i(\sigma )\) , for every \(t_i\in {\mathcal {ACD}}(\widetilde{\mathcal {T}})\) .
There is a "locally bijective morphism" \(\widetilde{\varphi }: \widetilde{\mathcal {P}} \rightarrow \widetilde{\mathcal {T}}\) .
\(|\mathcal {P}_{\mathcal {ACD}(\widetilde{\mathcal {T}})}|\le |\widetilde{\mathcal {P}}| \; \Rightarrow \; |\mathcal {P}_{\mathcal {ACD}(\mathcal {T})}|\le |\mathcal {P}|\) .
We define the transition system \(\widetilde{\mathcal {T}}\) by adding for each \(q\in V\) two new edges, \(e_{q,1}, e_{q,2}\) , with \(\widetilde{\mathit {Source}}(e_{q,j})=\widetilde{\mathit {Target}}(e_{q,j})=q\) , for \(j=1,2\) . The modified "acceptance condition" \(\widetilde{\mathcal {F}}\) is given as follows: let \(C\subseteq \widetilde{E}\) .
If \(C\cap E\ne \emptyset \) , then \(C\in \widetilde{\mathcal {F}} \; \Leftrightarrow \; C\cap E \in \mathcal {F}\) (the occurrence of the edges \(e_{q,j}\) does not change the "acceptance condition").
If \(C\cap E = \emptyset \) and there is some edge of the form \(e_{q,1}\) in \(C\) , for some \(q\in V\) , then \(C\in \widetilde{\mathcal {F}}\) . If all edges of \(C\) are of the form \(e_{q,2}\) , then \(C\notin \widetilde{\mathcal {F}}\) .
It is easy to verify that the "transition system" \(\widetilde{\mathcal {T}}\) and \({\mathcal {ACD}}(\widetilde{\mathcal {T}})\) verify conditions 1, 2 and 3. We perform the analogous operations on \(\mathcal {P}\) , obtaining \(\widetilde{\mathcal {P}}\) : we add a pair of edges \(e_{q,1}, e_{q,2}\) for each vertex of \(\mathcal {P}\) , and we assign them priorities \(\widetilde{p}(e_{q,1})=\eta +\epsilon \) and \(\widetilde{p}(e_{q,2})=\eta +\epsilon +1\) , where \(\eta \) is the maximum of the priorities in \(\mathcal {P}\) and \(\epsilon =0\) if \(\eta \) is even, and \(\epsilon =1\) if \(\eta \) is odd.
We extend the "morphism" \(\varphi \) to \(\widetilde{\varphi }: \widetilde{\mathcal {P}} \rightarrow \widetilde{\mathcal {T}}\) preserving the "local bijectivity" by setting \(\widetilde{\varphi }_E(e_{q,j})=e_{\varphi (q),j}\) for \(j=1,2\) . Finally, it is not difficult to verify that the underlying graphs of \(\mathcal {P}_{\mathcal {ACD}(\widetilde{\mathcal {T}})}\) and \(\widetilde{\mathcal {P}}_{\mathcal {ACD}(\mathcal {T})}\) are equal (the only differences are the priorities assigned to the edges \(e_{q,j}\) ), so in particular \(|\mathcal {P}_{\mathcal {ACD}(\widetilde{\mathcal {T}})}|=|\widetilde{\mathcal {P}}_{\mathcal {ACD}(\mathcal {T})}|=|\mathcal {P}_{\mathcal {ACD}(\mathcal {T})}|\) . Consequently, \(|\mathcal {P}_{\mathcal {ACD}(\widetilde{\mathcal {T}})}|\le |\widetilde{\mathcal {P}}|\) implies \(|\mathcal {P}_{\mathcal {ACD}(\mathcal {T})}|\le |\widetilde{\mathcal {P}}|=|\mathcal {P}|\) . Therefore, it suffices to prove the theorem for the modified systems \(\widetilde{\mathcal {T}}\) and \(\widetilde{\mathcal {P}}\) . From now on, we suppose that \(\mathcal {T}\) verifies the conditions 1, 2 and 3 above. In particular, all trees in \({\mathcal {ACD}}(\mathcal {T})\) are "proper trees". It also holds that for each \(q\in V\) and each \(\tau \in t_i\) that is not a leaf, \(\psi _{\tau ,i,q}=\sum \limits _{\sigma \in \mathit {Children}(\tau )}\psi _{\sigma ,i,q}\) .
Therefore, for each \(\tau \in t_i\) that is not a leaf, \(\Psi _{\tau ,i}=\sum \limits _{\sigma \in \mathit {Children}(\tau )}\Psi _{\sigma ,i}\) , and for each leaf \(\sigma \in t_i\) we have \(\Psi _{\sigma ,i}=1\) . The vertices of \(V^{\prime }\) are partitioned into the preimages by \(\varphi \) of the roots of the trees \(\lbrace t_1,\dots ,t_r\rbrace \) of \({\mathcal {ACD}}(\mathcal {T})\) :
\( V^{\prime }= \bigcup \limits _{i=1}^r\varphi _V^{-1}( {\mathit {States}}_i(\varepsilon )) \quad \text{ and } \quad \varphi _V^{-1}( {\mathit {States}}_i(\varepsilon )) \cap \varphi _V^{-1}( {\mathit {States}}_j(\varepsilon ))=\emptyset \text{ for } i\ne j .\)
Claim: For each \(i=1,\dots ,r\) and each \(\tau \in t_i\) , if \(C_\tau \) is a non-empty "\(\nu _i(\tau )\) -SCC", then \(|C_\tau |\ge \Psi _{\tau ,i}\) .
Let us suppose this claim holds. In particular \((\varphi _V^{-1}( {\mathit {States}}_i(\varepsilon )),\varphi _E^{-1}( \nu _i(\varepsilon )))\) verifies the property (REF ) from definition REF , so from the proof of lemma REF we deduce that it contains a \(\nu _i(\varepsilon )\) -SCC and therefore \(|\varphi _V^{-1}( {\mathit {States}}_i(\varepsilon ))| \ge \Psi _{\varepsilon ,i}\) , so
\(|\mathcal {P}|=\sum \limits _{i=1}^r |\varphi _V^{-1}( {\mathit {States}}_i(\varepsilon ))|\ge \sum \limits _{i=1}^r \Psi _{\varepsilon ,i}=|"\mathcal {P}_{\mathcal {ACD}(\mathcal {T})}"|,\)
concluding the proof.
[Proof of the claim]
Let \(C_\tau \) be a "\(\nu _i(\tau )\) -SCC". Let us prove \(|C_\tau |\ge \Psi _{\tau ,i}\) by induction on the "height of the node" \(\tau \) . If \(\tau \) is a leaf (in particular if its height is 1), \(\Psi _{\tau ,i}=1\) and the claim is clear.
If \(\tau \) has height \(h>1\) , it is not a leaf, and it has children \(\sigma _1,\dots , \sigma _k\) , all of them of height at most \(h-1\) . Thanks to lemmas REF and REF , for \(j=1,\dots ,k\) there exist pairwise disjoint "\(\nu _i(\sigma _j)\) -SCCs" included in \(C_\tau \) , named \(C_1,\dots ,C_k\) , so by the induction hypothesis
\( |C_\tau | \ge \sum \limits _{j=1}^k |C_j| \ge \sum \limits _{j=1}^k \Psi _{\sigma _j,i}= \Psi _{\tau ,i}. \)
From the hypothesis of theorem REF we cannot deduce that there is a "morphism" from \(\mathcal {P}\) to \("\mathcal {P}_{\mathcal {ACD}(\mathcal {T})}"\) or vice-versa. To produce a counter-example it is enough to recall the “non-determinism” in the construction of \(\mathcal {P}_{\mathcal {ACD}(\mathcal {T})}\) . Two different orderings of the nodes of the trees of \({\mathcal {ACD}}(\mathcal {T})\) will produce two incomparable parity transition systems, both minimal in size, that admit a "locally bijective morphism" to \(\mathcal {T}\) .
However, we can prove the following result:
If \(\varphi _1: "\mathcal {P}_{\mathcal {ACD}(\mathcal {T})}" \rightarrow \mathcal {T}\) is the "locally bijective morphism" described in the proof of proposition \ref {Prop_Correctness-ACD}, then for every state \(q\) in \(\mathcal {T}\) of "index" \(i\) :
\( |\varphi _1^{-1}(q)|=\psi _{\varepsilon ,i,q}\le |\varphi ^{-1}(q)| \;, \; \text{ for every "locally bijective morphism" } \varphi : \mathcal {P}\rightarrow \mathcal {T}.\)
It is enough to remark that if \(q \in {\mathit {States}}_i(\tau )\) , then any "\(\nu _i(\tau )\) -SCC" \(C_\tau \) of \(\mathcal {P}\) will contain some state in \(\varphi ^{-1}(q)\) . We prove by induction as in the proof of the claim that \(\psi _{\tau ,i,q} \le |C_\tau \cap \varphi ^{-1}(q)|\) .
Applications
Determinisation of Büchi automata
In many applications, such as the synthesis of reactive systems for \(LTL\) -formulas, we need to have "deterministic" automata. For this reason, the determinisation of automata is usually a crucial step. Since McNaughton showed in [11]} that Büchi automata can be transformed into deterministic Muller automata recognizing the same language, much effort has been put into finding an efficient way of performing this transformation. The first efficient solution was proposed by Safra in [12]}, producing a deterministic automaton using a "Rabin condition". Due to the many advantages of "parity conditions" (simplicity, easy complementation of automata, existence of memoryless strategies for games, closure under union and intersection...), determinisation constructions towards parity automata have been proposed too. In [13]}, Piterman provides a construction producing a parity automaton that in addition improves the state-complexity of Safra's construction. In [14]}, Schewe breaks down Piterman's construction into two steps: the first one goes from a non-deterministic Büchi automaton \(\mathcal {B}\) to a "Rabin automaton" (\("\mathcal {R}_\mathcal {B}"\) ), and the second one yields Piterman's parity automaton (\("\mathcal {P}_\mathcal {B}"\) ).
In this section we prove that there is a "locally bijective morphism" from \("\mathcal {P}_\mathcal {B}"\) to \("\mathcal {R}_\mathcal {B}"\) , and therefore we obtain a smaller parity automaton by applying the "ACD-transformation" in the second step. We provide an example (example REF ) in which the "ACD-transformation" produces a strictly better parity automaton.
From non-deterministic Büchi to deterministic Rabin automata
In [14]}, Schewe presents a construction of a deterministic "Rabin automaton" \(""\mathcal {R}_\mathcal {B}""\) from a non-deterministic "Büchi automaton" \(\mathcal {B}\) . The set of states of the automaton \(\mathcal {R}_\mathcal {B}\) consists of what he calls ""history trees"". The number of history trees for a Büchi automaton of size \(n\) is given by the function \(\mathit {hist}(n)\) , which is shown to be in \(o((1.65n)^n)\) in [14]}. This construction is presented starting from a state-labelled Büchi automaton; a construction starting from a transition-labelled Büchi automaton can be found in [17]}. In [18]}, Colcombet and Zdanowski proved the worst-case optimality of the construction.
[[14]}]
Given a non-deterministic "Büchi automaton" \(\mathcal {B}\) with \(n\) states, there is an effective construction of a deterministic "Rabin automaton" \(\mathcal {R}_\mathcal {B}\) with \(\mathit {hist}(n)\) states and using \(2^{n-1}\) Rabin pairs that recognizes the language \("\mathcal {L}(\mathcal {B})"\) .
[[18]}]
For every \(n\in \mathbb {N} \) there exists a non-deterministic Büchi automaton \(B_n\) of size \(n\) such that every deterministic Rabin automaton recognizing \(\mathcal {L}(\mathcal {B}_n)\) has at least \(\mathit {hist}(n)\) states.
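The "Rabin condition" used in these constructions has a simple operational reading: a run is accepting if and only if, for some Rabin pair \((E,F)\) , the set of transitions visited infinitely often meets \(E\) and avoids \(F\) . A minimal Python sketch (representation assumed: transitions as hashable identifiers, Rabin pairs as pairs of sets):

```python
def rabin_accepts(inf, pairs):
    """inf: set of transitions seen infinitely often along a run;
    pairs: list of Rabin pairs (E, F), each given as a set of transitions.
    Accepting iff some pair (E, F) satisfies inf & E != {} and inf & F == {}."""
    return any(inf & E and not (inf & F) for (E, F) in pairs)
```

For instance, a run visiting `{"t1", "t2"}` infinitely often is accepted by the single pair `({"t1"}, {"t3"})` but rejected by `({"t1"}, {"t2"})`.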
From non-deterministic Büchi to deterministic parity automata
In order to build a deterministic "parity automaton" \(""\mathcal {P}_\mathcal {B}""\) that recognizes the language of a given "Büchi automaton" \(\mathcal {B}\) , Schewe transforms the automaton \("\mathcal {R}_\mathcal {B}"\) into a parity one using what he calls a later introduction record (LIR). The LIR construction can be seen as adding an ordering (satisfying some restrictions) to the nodes of the "history trees". States of \(\mathcal {P}_\mathcal {B}\) are therefore pairs consisting of a history tree and a LIR. In this way we obtain a parity automaton similar to the one produced by Piterman's determinisation procedure [13]}.
The worst-case optimality of this construction was proved in [22]}, [17]}, generalising the methods of [18]}.
[[14]}]
Given a non-deterministic "Büchi automaton" \(\mathcal {B}\) with \(n\) states, there is an effective construction of a deterministic "parity automaton" \("\mathcal {P}_\mathcal {B}"\) with \(O(n!(n-1)!)\) states and using \(2n\) priorities that recognizes the language \("\mathcal {L}(\mathcal {B})"\) .
[[22]}, [17]}]
For every \(n\in \mathbb {N} \) there exists a non-deterministic Büchi automaton \(B_n\) of size \(n\) such that \("\mathcal {P}_\mathcal {B}"\) has less than \(1.5\) times as many states as a minimal deterministic parity automaton recognizing \(\mathcal {L}(\mathcal {B}_n)\) .
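The parity acceptance used throughout also has a direct operational reading. Assuming the min-parity convention matching the priority ranges \([0,\eta ]\) and \([1,\eta ]\) used in this paper (the run is accepting when the least priority seen infinitely often is even), a hypothetical sketch:

```python
def parity_accepts(inf_priorities):
    """inf_priorities: non-empty set of priorities occurring infinitely often
    along a run; accepting iff the minimal such priority is even (min-even
    convention, assumed here)."""
    return min(inf_priorities) % 2 == 0
```

So a run producing priorities \(\lbrace 2,3,5\rbrace \) infinitely often is accepting, while one producing \(\lbrace 1,2\rbrace \) is rejecting.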
A locally bijective morphism from \(\mathcal {P}_\mathcal {B}\) to \(\mathcal {R}_\mathcal {B}\)
Given a "Büchi automaton" \(\mathcal {B}\) and its determinisations to Rabin and parity automata \("\mathcal {R}_\mathcal {B}"\) and \("\mathcal {P}_\mathcal {B}"\) , there is a "locally bijective morphism" \(\varphi : \mathcal {P}_\mathcal {B}\rightarrow \mathcal {R}_\mathcal {B}\) .
Observing the construction of \(\mathcal {R}_\mathcal {B}\) and \(\mathcal {P}_\mathcal {B}\) in [14]}, we see that the states of \(\mathcal {P}_\mathcal {B}\) are of the form \((T,\chi )\) with \(T\) a state of \(\mathcal {R}_\mathcal {B}\) (a "history tree"), and \(\chi : T \rightarrow \lbrace 1,\dots ,|\mathcal {B}|\rbrace \) a LIR (which can be seen as an ordering of the nodes of \(T\) ).
It is easy to verify that the mapping \(\varphi _V((T,\chi ))=T\) defines a morphism \(\varphi : \mathcal {P}_\mathcal {B}\rightarrow \mathcal {R}_\mathcal {B}\) (from fact REF there is only one possible definition of \(\varphi _E\) ). Since the automata are deterministic, \(\varphi \) is a "locally bijective morphism".
Let \(\mathcal {B}\) be a "Büchi automaton" and \("\mathcal {R}_\mathcal {B}"\) , \("\mathcal {P}_\mathcal {B}"\) the deterministic Rabin and parity automata obtained by applying the Piterman-Schewe construction to \(\mathcal {B}\) . Then, the parity automaton \(\mathcal {P}_{\mathcal {ACD}(\mathcal {R}_B)}\) verifies
\( |\mathcal {P}_{\mathcal {ACD}(\mathcal {R}_B)}| \le |\mathcal {P}_\mathcal {B}| \)
and \("\mathcal {P}_{\mathcal {ACD}(\mathcal {R}_B)}"\) uses a smaller number of priorities than \("\mathcal {P}_\mathcal {B}"\) .
It is a direct consequence of propositions REF , REF and theorem REF .
Furthermore, after proposition REF , \("\mathcal {P}_{\mathcal {ACD}(\mathcal {R}_B)}"\) uses the optimal number of priorities to recognize \("\mathcal {L}(\mathcal {B})"\) , and we directly obtain this information from the "alternating cycle decomposition" of \(\mathcal {R}_\mathcal {B}\) , \({\mathcal {ACD}}(\mathcal {R}_\mathcal {B})\) .
In example REF we show a case in which \(|\mathcal {P}_{\mathcal {ACD}(\mathcal {R}_B)}| < |\mathcal {P}_\mathcal {B}|\) and for which the gain in the number of priorities is clear.
In [18]} and [17]}, the lower bounds for the determinisation of "Büchi automata" to Rabin and parity automata were shown using the family of ""full Büchi automata"" \(\lbrace \mathcal {B}_n\rbrace _{n\in \mathbb {N} }\) , \(|\mathcal {B}_n|=n\) . The automaton \(\mathcal {B}_n\) can simulate any other Büchi automaton of the same size. For these automata, the constructions \(\mathcal {P}_{\mathcal {B}_n}\) and \(\mathcal {P}_{\mathcal {ACD}(\mathcal {R}_{B_n})}\) coincide.
We present a non-deterministic Büchi automaton \(\mathcal {B}\) such that the "ACD-parity automaton" of \("\mathcal {R}_\mathcal {B}"\) has strictly less states and uses strictly less priorities than \("\mathcal {P}_\mathcal {B}"\) .
In figure REF we show the automaton \(\mathcal {B}\) over the alphabet \(\Sigma =\lbrace a,b,c\rbrace \) . Accepting transitions for the "Büchi condition" are represented with a black dot on them. An accessible "strongly connected component" \(\mathcal {R}_\mathcal {B}^{\prime }\) of the determinisation to a "Rabin automaton" \("\mathcal {R}_\mathcal {B}"\) is shown in figure REF . It has 2 states that are "history trees" (as defined in [14]}). There is a "Rabin pair" \((E_\tau ,F_\tau )\) for each node appearing in some "history tree" (four in total), and these are represented by an array with four positions. We assign to each transition and each position \(\tau \) in the array the symbol \(\checkmark \) , \(\mathbf {X}\) or \(\bullet \) depending on whether this transition belongs to \(E_\tau \) , to \(F_\tau \) or to neither of them, respectively (we can always suppose \(E_\tau \cap F_\tau = \emptyset \) ).
Figure REF shows the "alternating cycle decomposition" corresponding to \(\mathcal {R}_\mathcal {B}^{\prime }\) . We observe that the tree of \({\mathcal {ACD}}(\mathcal {R}_\mathcal {B}^{\prime })\) has a single branch of height 3.
That is, the Rabin condition over \(\mathcal {R}_\mathcal {B}^{\prime }\) is already a "\([1,3]\) -parity condition" and \(\mathcal {P}_{\mathcal {ACD}(\mathcal {R}_B^{\prime })}=\mathcal {R}_\mathcal {B}^{\prime }\) . In particular it has 2 states, and uses priorities in \([1,3]\) .
On the other hand, figure REF shows the automaton \(\mathcal {P}_\mathcal {B}^{\prime }\) , which has 3 states and uses priorities in \([3,7]\) . The full automata \(\mathcal {R}_\mathcal {B}\) and \(\mathcal {P}_\mathcal {B}\) are too big to be depicted here, but the three states shown in figure REF are indeed accessible from the initial state of \(\mathcal {P}_\mathcal {B}\) .
<FIGURE><FIGURE><FIGURE><FIGURE>
On relabelling of transition systems by acceptance conditions
In this section we use the information given by the "alternating cycle decomposition" to provide characterisations of "transition systems" that can be labelled with "parity", "Rabin", "Streett" or \("\mathit {Weak}_k"\) conditions, generalising the results of [32]}.
As a consequence, these yield simple proofs of two results about the possibility to define different classes of acceptance conditions in a deterministic automaton. Theorem REF , first proven in [33]}, asserts that if we can define a Rabin and a Streett condition on top of an underlying automaton \(\mathcal {A}\) such that it recognizes the same language \(L\) with both conditions, then we can define a parity condition in \(\mathcal {A}\) recognizing \(L\) too. Theorem REF states that if we can define Büchi and co-Büchi conditions on top of an automaton \(\mathcal {A}\) recognizing the language \(L\) , then we can define a \("\mathit {Weak}"\) condition over \(\mathcal {A}\) such that it recognizes \(L\) .
First, we extend the definition REF of section REF to the "alternating cycle decomposition".
Given a Muller transition system \(\mathcal {T}=(V,E,\mathit {Source},\mathit {Target},I_0,\mathcal {F})\) , we say that its "alternating cycle decomposition" \({\mathcal {ACD}}(\mathcal {T})\) is a
""Rabin ACD"" if for every state \(q\in V\) , the tree \("t_q"\) has "Rabin shape".
""Streett ACD"" if for every state \(q\in V\) , the tree \("t_q"\) has "Streett shape".
""parity ACD"" if for every state \(q\in V\) , the tree \("t_q"\) has "parity shape".
""\([1,\eta ]\) -parity ACD"" (resp. \([0,\eta -1]\) -parity ACD) if it is a parity ACD, every tree has "height" at most \(\eta \) and trees of height \(\eta \) are "odd" (resp. "even").
""Büchi ACD"" if it is a \([0,1]\) -parity ACD.
""co-Büchi ACD"" if it is a \([1,2]\) -parity ACD.
""\(\mathit {Weak}_k\) ACD"" if it is a parity ACD and every tree \((t_i,\nu _i) \in {\mathcal {ACD}}(\mathcal {T})\) has "height" at most \(k\) .
The next proposition follows directly from the definitions.
Let \(\mathcal {T}\) be a Muller transition system. Then:
\({\mathcal {ACD}}(\mathcal {T})\) is a "parity ACD" if and only if it is a "Rabin ACD" and a "Streett ACD".
\({\mathcal {ACD}}(\mathcal {T})\) is a "\(\mathit {Weak}_k\) ACD" if and only if it is a "\([0,k]\) -parity ACD" and a "\([1,k+1]\) -parity ACD".
Let \(\mathcal {T}=(V,E,\mathit {Source},\mathit {Target},I_0,\mathcal {F})\) be a Muller transition system. The following conditions are equivalent:
We can define a "Rabin condition" over \(\mathcal {T}\) that is "equivalent to" \(\mathcal {F}\) over \(\mathcal {T}\) .
For every pair of loops \(l_1,l_2\in {\mathpzc {Loop}}(\mathcal {T})\) , if \(l_1\notin \mathcal {F}\) and \(l_2\notin \mathcal {F}\) , then \(l_1\cup l_2 \notin \mathcal {F}\) .
\({\mathcal {ACD}}(\mathcal {T})\) is a "Rabin ACD".
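For a small Muller transition system whose loops can be enumerated, condition (2) can be checked directly. A hedged sketch, under an assumed representation: `loops` maps each loop (a frozenset of edges) to a boolean recording its membership in \(\mathcal {F}\) , and the union of two loops sharing a state appears again as a key of `loops`:

```python
def rejecting_loops_union_closed(loops):
    """loops: dict mapping each loop (frozenset of edges) to True iff it
    belongs to the Muller condition F. Returns True iff the union of any two
    rejecting loops that is again a loop is itself rejecting (condition (2))."""
    rejecting = [l for l, accepting in loops.items() if not accepting]
    for l1 in rejecting:
        for l2 in rejecting:
            union = l1 | l2
            # Only unions that are again loops of the system are relevant.
            if union in loops and loops[union]:
                return False  # two rejecting loops with an accepting union
    return True
```

By proposition REF this check succeeds exactly when a Rabin condition equivalent to \(\mathcal {F}\) can be defined; the dual check (accepting loops closed under union) corresponds to the Streett case.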
(\(1 \Rightarrow 2\) )
Suppose that \(\mathcal {T}\) uses a Rabin condition with Rabin pairs \((E_1,F_1),\dots ,(E_r,F_r)\) . Let \(l_1\) and \(l_2\) be two rejecting loops. If \(l_1\cup l_2\) was accepting, then there would be some Rabin pair \((E_j,F_j)\) such that \((l_1\cup l_2)\cap E_j\ne \emptyset \) and \((l_1\cup l_2)\cap F_j= \emptyset \) . However, any edge \(e\in (l_1\cup l_2)\cap E_j\) belongs to \(l_1\) or to \(l_2\) , and the loop it belongs to would then be accepting too.
(\(2 \Rightarrow 3\) )
Let \(q\in V\) of "index" \(i\) , and \("t_q"\) be the subtree of \({\mathcal {ACD}}(\mathcal {T})\) associated to \(q\) . Suppose that there is a node \(\tau \in t_q\) such that \(p_i(\tau )\) is even ("round" node) and that it has two different children \(\sigma _1\) and \(\sigma _2\) . The "loops" \(\nu _i(\sigma _1)\) and \(\nu _i(\sigma _2)\) are maximal rejecting loops contained in \(\nu _i(\tau )\) , and since they share the state \(q\) , their union is also a loop that must verify \( \nu _i(\sigma _1) \cup \nu _i(\sigma _2)\in \mathcal {F}\) , contradicting the hypothesis.
(\(3 \Rightarrow 1\) )
We define a "Rabin condition" over \(\mathcal {T}\) . For each tree \(t_i\) in \({\mathcal {ACD}}(\mathcal {T})\) and each "round" node \(\tau \in t_i\) (\(p_i(\tau )\) even) we define the Rabin pair \((E_{i,\tau },F_{i,\tau })\) given by:
\( E_{i,\tau }=\nu _i(\tau )\setminus \bigcup _{\sigma \in "\mathit {Children}"(\tau )}\nu _i(\sigma ) \quad , \qquad F_{i,\tau }=E \setminus \nu _i(\tau ). \)
Let us show that this condition is "equivalent to" \(\mathcal {F}\) over the transition system \(\mathcal {T}\) . We begin by proving the following consequence of being a "Rabin ACD":
Claim: If \(\tau \) is a "round" node in the tree \(t_i\) of \({\mathcal {ACD}}(\mathcal {T})\) , and \(l\in {\mathpzc {Loop}}(\mathcal {T})\) is a "loop" such that \(l\subseteq \nu _i(\tau )\) and \(l\nsubseteq \nu _i(\sigma )\) for any child \(\sigma \) of \(\tau \) , then there is some edge \(e\in l\) such that \(e\notin \nu _i(\sigma )\) for any child \(\sigma \) of \(\tau \) .
Proof of the claim: Since for each state \(q\in V\) the tree \("t_q"\) has "Rabin shape", it holds that \({\mathit {States}}_i(\sigma )\cap {\mathit {States}}_i(\sigma ^{\prime })=\emptyset \) for every pair of different children \(\sigma , \sigma ^{\prime }\) of \(\tau \) . Therefore, the union of \(\nu _i(\sigma )\) and \(\nu _i(\sigma ^{\prime })\) is not a loop, and any loop \(l\) contained in this union must be contained either in \(\nu _i(\sigma )\) or in \(\nu _i(\sigma ^{\prime })\) .
Let \(\varrho \in {\mathpzc {Run}}_{\mathcal {T}}\) be a "run" in \(\mathcal {T}\) , let \(l\in {\mathpzc {Loop}}(\mathcal {T})\) be the loop of \(\mathcal {T}\) such that \(\mathit {Inf}(\varrho )=l\) and let \(i\) be the "index" of the edges in this loop. If \(l\in \mathcal {F}\) , let \(\tau \) be a maximal node in \(t_i\) (for \({\sqsubseteq }\) ) such that \(l\subseteq \nu _i(\tau )\) . This node \(\tau \) is a round node, and from the previous claim it follows that there is some edge \(e\in l\) such that \(e\) does not belong to any child of \(\tau \) , so \(e\in E_{i,\tau }\) ; moreover, since \(l\subseteq \nu _i(\tau )\) , we have \(l\cap F_{i,\tau }=\emptyset \) , so the "run" \(\varrho \) is accepted by the Rabin condition too. If \(l\notin \mathcal {F}\) , then for every round node \(\tau \) , if \(l\subseteq \nu _i(\tau )\) then \(l\subseteq \nu _i(\sigma )\) for some child \(\sigma \) of \(\tau \) . Therefore, for every Rabin pair \((E_{i,\tau },F_{i,\tau })\) , either \(l\cap E_{i,\tau }=\emptyset \) (if \(l\subseteq \nu _i(\tau )\) ) or \(l\cap F_{i,\tau }\ne \emptyset \) (if \(l\nsubseteq \nu _i(\tau )\) ), so \(\varrho \) is rejected by the Rabin condition.
The Rabin condition presented in this proof does not necessarily use the optimal number of Rabin pairs required to define a Rabin condition "equivalent to" \(\mathcal {F}\) over \(\mathcal {T}\) .
Let \(\mathcal {T}=(V,E,\mathit {Source},\mathit {Target},I_0,\mathcal {F})\) be a Muller transition system. The following conditions are equivalent:
We can define a "Streett condition" over \(\mathcal {T}\) that is "equivalent to" \(\mathcal {F}\) over \(\mathcal {T}\) .
For every pair of loops \(l_1,l_2\in {\mathpzc {Loop}}(\mathcal {T})\) , if \(l_1\in \mathcal {F}\) and \(l_2\in \mathcal {F}\) , then \(l_1\cup l_2 \in \mathcal {F}\) .
\({\mathcal {ACD}}(\mathcal {T})\) is a "Streett ACD".
We omit the proof of proposition REF , as it is the dual of proposition REF .
Let \(\mathcal {T}=(V,E,\mathit {Source},\mathit {Target},I_0,\mathcal {F})\) be a Muller transition system. The following conditions are equivalent:
We can define a "parity condition" over \(\mathcal {T}\) that is "equivalent to" \(\mathcal {F}\) over \(\mathcal {T}\) .
For every pair of loops \(l_1,l_2\in {\mathpzc {Loop}}(\mathcal {T})\) , if \(l_1\in \mathcal {F}\, \Leftrightarrow \, l_2\in \mathcal {F}\) , then \(l_1\cup l_2 \in \mathcal {F}\, \Leftrightarrow \,l_1\in \mathcal {F}\) . That is, unions of loops with the same “accepting status” preserve that status.
\({\mathcal {ACD}}(\mathcal {T})\) is a "parity ACD".
Moreover, the parity condition we can define over \(\mathcal {T}\) is a "\([1,\eta ]\) -parity" (resp. \([0,\eta -1]\) -parity / \("\mathit {Weak}_k"\) ) condition if and only if \({\mathcal {ACD}}(\mathcal {T})\) is a "\([1,\eta ]\) -parity ACD" (resp. \([0,\eta -1]\) -parity ACD / "\(\mathit {Weak}_k\) ACD").
(\(1 \Rightarrow 2\) )
Suppose that \(\mathcal {T}\) uses a parity acceptance condition with the priorities given by \(p:E\rightarrow \mathbb {N} \) . Then, since \(l_1\) and \(l_2\) are both accepting or both rejecting, \(p_1=\min p(l_1)\) and \(p_2=\min p(l_2)\) have the same parity, which is also the parity of \(\min p(l_1\cup l_2)=\min \lbrace p_1,p_2\rbrace \) .
(\(2 \Rightarrow 3\) )
Let \(q\in V\) of "index" \(i\) , and \("t_q"\) be the subtree of \({\mathcal {ACD}}(\) associated to \(q\) . Suppose that there is a node \(\tau \in t_q\) with two different children \(\sigma _1\) and \(\sigma _2\) . The loops \(\nu _i(\sigma _1)\) and \(\nu _i(\sigma _2)\) are different maximal loops with the property \(\nu _i(\sigma )\subseteq \nu _i(\tau )\) and \(\nu _i(\sigma )\in \mathcal {F}\, \Leftrightarrow \, \nu _i(\tau ) \notin \mathcal {F}\) . Since they share the state \(q\) , their union is also a loop contained in \(\nu _i(\tau )\) and then
\( \nu _i(\sigma _1) \cup \nu _i(\sigma _2)\in \mathcal {F}\; \Leftrightarrow \; \nu _i(\tau ) \in \mathcal {F}\; \Leftrightarrow \; \nu _i(\sigma _1)\notin \mathcal {F}\)
contradicting the hypothesis.
(\(3 \Rightarrow 1\) )
From the construction of the "ACD-transformation", it follows that \("\mathcal {P}_{\mathcal {ACD}(\mathcal {T})}"\) is just a relabelling of \(\mathcal {T}\) with an equivalent parity condition.
For the implication from right to left of the last statement, we remark that if the trees of \({\mathcal {ACD}}(\mathcal {T})\) have priorities assigned in \([\mu ,\eta ]\) , then the parity transition system \("\mathcal {P}_{\mathcal {ACD}(\mathcal {T})}"\) will use priorities in \([\mu ,\eta ]\) . If \({\mathcal {ACD}}(\mathcal {T})\) is a "\(\mathit {Weak}_k\) ACD", then in each "strongly connected component" of \("\mathcal {P}_{\mathcal {ACD}(\mathcal {T})}"\) the number of priorities used will be the same as the "height" of the corresponding tree of \({\mathcal {ACD}}(\mathcal {T})\) (at most \(k\) ).
For the other implication it suffices to remark that the priorities assigned by \({\mathcal {ACD}}(\mathcal {T})\) are optimal (proposition REF ).
Given a "transition system graph" \(G=(V,E,\mathit {Source},\mathit {Target},I_0)\) and a "Muller condition" \(\mathcal {F}\subseteq \mathcal {P}(E)\) , we can define a "parity condition" \(p:E\rightarrow \mathbb {N} \) "equivalent to" \(\mathcal {F}\) over \(G\) if and only if we can define a "Rabin condition" \(R\) and a "Streett condition" \(S\) over \(G\) such that
\( (G,\mathcal {F}) \,"\simeq "\, (G,R)\, "\simeq "\, (G,S) \) .
Moreover, if the Rabin condition \(R\) uses \(r\) Rabin pairs and the Streett condition \(S\) uses \(s\) Streett pairs, we can take the parity condition \(p\) using priorities in
\([1,2r+1]\) if \(r\le s\) .
\([0,2s]\) if \(s\le r\) .
The first statement is a consequence of the characterisations (2) or (3) from propositions REF , REF and REF .
For the second statement we remark that the trees of \({\mathcal {ACD}}(\mathcal {T})\) have "height" at most \(\min \lbrace 2r+1, 2s+1\rbrace \) . If \(r\ge s\) , then the height \(2r+1\) can only be reached by "odd" trees, and if \(s\ge r\) , the height \(2s+1\) only by "even" trees.
From the last statement of proposition REF and thanks to the second item of proposition REF , we obtain:
Given a "transition system graph" \(G\) and a Muller condition \(\mathcal {F}\) over \(G\) , there is an equivalent \("\mathit {Weak}_k"\) condition over \(G\) if and only if there are both \([0,k]\) and "\([1,k+1]\) -parity" conditions "equivalent to" \(\mathcal {F}\) over \(G\) .
In particular, there is an equivalent "Weak condition" if and only if there are "Büchi" and "co-Büchi" conditions equivalent to \(\mathcal {F}\) over \(G\) .
It is important to notice that the previous results are stated for non-labelled transition systems. We must be careful when translating these results to automata and formal languages. For instance, in [33]} there is an example of a non-deterministic automaton \(\mathcal {A}\) , such that we can put on top of it Rabin and Streett conditions \(R\) and \(S\) such that \(\mathcal {L}(\mathcal {A},R)=\mathcal {L}(\mathcal {A},S)\) , but we cannot put a parity condition on top of it recognising the same language. However, proposition REF allows us to obtain analogous results for "deterministic automata".
[[33]}]
Let \(\mathcal {A}\) be the "transition system graph" of a "deterministic automaton" with set of states \(Q\) . Let \(R\) be a Rabin condition over \(\mathcal {A}\) with \(r\) pairs and \(S\) a Streett condition over \(\mathcal {A}\) with \(s\) pairs such that \(\mathcal {L}(\mathcal {A},R)=\mathcal {L}(\mathcal {A},S)\) . Then, there exists a parity condition \(p: Q \times \Sigma \rightarrow \mathbb {N} \) over \(\mathcal {A}\) such that \(\mathcal {L}(\mathcal {A},p)=\mathcal {L}(\mathcal {A},R)=\mathcal {L}(\mathcal {A},S)\) .
Moreover,
if \(r\le s\) , we can take \(p\) to be a "\([1,2r+1]\) -parity condition".
if \(s\le r\) , we can take \(p\) to be a "\([0,2s]\) -parity condition".
Proposition REF implies that \((\mathcal {A},R)"\simeq "(\mathcal {A},S)\) , and after corollary REF , there is a parity condition \(p\) using the proclaimed priorities such that \((\mathcal {A},p)"\simeq "(\mathcal {A},R)\) . Therefore \(\mathcal {L}(\mathcal {A},p)=\mathcal {L}(\mathcal {A},R)\) (since \((\mathcal {A},p)"\simeq "(\mathcal {A},R)\) implies \(\mathcal {L}(\mathcal {A},p)=\mathcal {L}(\mathcal {A},R)\) , for both deterministic and non-deterministic automata).
Let \(\mathcal {A}\) be the "transition system graph" of a deterministic automaton and \(p\) and \(p^{\prime }\) be \([0,k]\) and "\([1,k+1]\) -parity conditions" respectively over \(\mathcal {A}\) such that \(\mathcal {L}(\mathcal {A},p)= \mathcal {L}(\mathcal {A},p^{\prime })\) . Then, there exists a \("\mathit {Weak}_k"\) condition \(W\) over \(\mathcal {A}\) such that \(\mathcal {L}(\mathcal {A},W)=\mathcal {L}(\mathcal {A},p)\) .
In particular, there is a \("\mathit {Weak}"\) condition \(W\) over \(\mathcal {A}\) such that \(\mathcal {L}(\mathcal {A},W)=L\) if and only if there are both "Büchi" and "co-Büchi" conditions \(B,B^{\prime }\) over \(\mathcal {A}\) such that \(\mathcal {L}(\mathcal {A},B)= \mathcal {L}(\mathcal {A},B^{\prime })=L\) .
It follows from proposition REF and corollary REF .
Conclusions
We have presented a transformation that, given a Muller "transition system", provides an equivalent "parity" transition system that has minimal size and uses an optimal number of priorities among those which admit a "locally bijective morphism" to the original Muller transition system. In order to describe this transformation we have introduced the "alternating cycle decomposition", a data structure that arranges all the information about the acceptance condition of the transition system and the interplay between this condition and the structure of the system.
We have shown in section how the alternating cycle decomposition can be useful to reason about acceptance conditions, and we hope that this representation of the information will be helpful in future works.
We have not discussed the complexity of effectively computing the "alternating cycle decomposition" of a Muller transition system. It is known that solving Muller games is \(\mathrm {PSPACE}\) -complete when the acceptance condition is given as a list of accepting sets of colours
[9]}. However, given a Muller game \(\mathcal {G}\) and the "Zielonka tree" of its Muller condition, we have a transformation into a parity game of polynomial size on the size of \(\mathcal {G}\) , so solving Muller games with this extra information is in \(\mathrm {NP}\cap \mathrm {co}\) -\(\mathrm {NP}\) . Also, in order to build \({\mathcal {ACD}}(\mathcal {T})\) we suppose that the Muller condition is expressed using as colours the set of edges of the game (that is, as an explicit Muller condition), and solving explicit Muller games is in \(\mathrm {PTIME}\) [8]}. Consequently, unless \(\mathrm {PSPACE}\) is contained in \(\mathrm {NP}\cap \mathrm {co}\) -\(\mathrm {NP}\) , we cannot compute the "Zielonka tree" of a Muller condition, nor the "alternating cycle decomposition" of a Muller transition system in polynomial time.
Given two "transition systems" \(\mathcal {T}=(V,E,\mathit {Source},\mathit {Target},I_0,\mathit {Acc})\) and \(\mathcal {T}^{\prime }=(V^{\prime },E^{\prime },\mathit {Source}^{\prime },\mathit {Target}^{\prime },I_0^{\prime },\mathit {Acc}^{\prime })\) , a ""morphism of transition systems"" is a pair of maps \(\varphi =(\varphi _V,\varphi _E)\) , \(\varphi _V: V\rightarrow V^{\prime }\) , \(\varphi _E: E\rightarrow E^{\prime }\) , verifying:
\(\varphi _V(v_0)\in I_0^{\prime }\) for every \(v_0\in I_0\) (initial states are preserved).
\(\mathit {Source}^{\prime }(\varphi _E(e))=\varphi _V(\mathit {Source}(e))\) for every \(e\in E\) (origins of edges are preserved).
\(\mathit {Target}^{\prime }(\varphi _E(e))=\varphi _V(\mathit {Target}(e))\) for every \(e\in E\) (targets of edges are preserved).
For every "run" \(\varrho \in {\mathpzc {Run}}_{\mathcal {T}}\) , \(\varrho \in \mathit {Acc}\; \Leftrightarrow \; \varphi _E(\varrho ) \in \mathit {Acc}^{\prime }\) (acceptance condition is preserved).
If \((\mathcal {T},l_V,l_E)\) , \((\mathcal {T}^{\prime },l_V^{\prime },l_E^{\prime })\) are "labelled transition systems", we say that \(\varphi \) is a ""morphism of labelled transition systems"" if in addition it verifies
\(l_V^{\prime }(\varphi _V(v))=l_V(v)\) for every \(v\in V\) (labels of states are preserved).
\(l_E^{\prime }(\varphi _E(e))=l_E(e)\) for every \(e\in E\) (labels of edges are preserved).
We remark that it follows from the first three conditions that if \(\varrho \in {\mathpzc {Run}}_{\mathcal {T}}\) is a "run" in \(\mathcal {T}\) , then \(\varphi _E(\varrho )\in {\mathpzc {Run}}_{\mathcal {T}^{\prime }}\) (it is a "run" in \(\mathcal {T}^{\prime }\) starting from some initial vertex). Given a "morphism of transition systems" \(\varphi : \mathcal {T}\rightarrow \mathcal {T}^{\prime }\) , we will denote both maps by \(\varphi \) whenever no confusion arises. We extend \(\varphi _E\) to \(E^{*}\) and \(E^{\omega }\) component wise.
A "morphism of transition systems" \(\varphi =(\varphi _V, \varphi _E)\) is unequivocally characterized by the map \(\varphi _E\) . Nevertheless, it is convenient to keep the notation with both maps.
Given two "transition systems" \(\mathcal {T}=(V,E,\mathit {Source},\mathit {Target},I_0,\mathit {Acc})\) , \(\mathcal {T}^{\prime }=(V^{\prime },E^{\prime },\mathit {Source}^{\prime },\mathit {Target}^{\prime },I_0^{\prime },\mathit {Acc}^{\prime })\) , a "morphism of transition systems" \(\varphi : \mathcal {T}\rightarrow \mathcal {T}^{\prime }\) is called
""Locally surjective"" if
For every \(v_0^{\prime }\in I_0^{\prime }\) there exists \(v_0\in I_0\) such that \(\varphi (v_0)=v_0^{\prime }\) .
For every \(v\in V\) and every \( e^{\prime }\in E^{\prime }\) such that \( \mathit {Source}^{\prime }(e^{\prime })=\varphi (v)\)
there exists \(e\in E \) such that \( \varphi (e)=e^{\prime } \) and \( \mathit {Source}(e)=v\) .
"Locally injective" if
For every \(v_0^{\prime }\in I_0^{\prime }\) , there is at most one \(v_0\in I_0\) such that \(\varphi (v_0)=v_0^{\prime }\) .
For every \( v\in V\) and every \( e^{\prime }\in E^{\prime } \) such that \( \mathit {Source}^{\prime }(e^{\prime })=\varphi (v) \)
if there are \( e_1,e_2\in E \) such that \( \varphi (e_i)=e^{\prime }\) and \( \mathit {Source}(e_i)=v\) , for \( i=1,2 \) , then \( e_1=e_2 \) .
"Locally bijective" if it is both "locally surjective" and "locally injective".
Equivalently, a "morphism of transition systems" \(\varphi \) is "locally surjective" (resp. injective) if the restriction of \(\varphi _E\) to \("\mathit {Out}"(v)\) is a surjection (resp. an injection) into \("\mathit {Out}"(\varphi (v))\) for every \(v\in V\) and the restriction of \(\varphi _V\) to \(I_0\) is a surjection (resp. an injection) into \(I_0^{\prime }\) .
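To make the local conditions concrete, here is a small Python sketch (all data structures and names are ours, not from the paper) that checks whether a pair of maps is a "locally bijective morphism" between two transition systems given by their out-edge maps:

```python
# Hypothetical sketch: a morphism is locally bijective iff phi_E restricted
# to Out(v) is a bijection onto Out(phi_V(v)) for every vertex v, and phi_V
# restricted to the initial states is a bijection onto the initial states.
def is_locally_bijective(out_t, out_tp, phi_v, phi_e, init_t, init_tp):
    # Initial states: phi_v must map init_t one-to-one onto init_tp.
    if sorted(phi_v[v] for v in init_t) != sorted(init_tp):
        return False
    # Out-edges: for each vertex v, the images of Out(v) must coincide with
    # Out(phi_v(v)): no repetition (injective) and no omission (surjective).
    for v, edges in out_t.items():
        if sorted(phi_e[e] for e in edges) != sorted(out_tp[phi_v[v]]):
            return False
    return True
```

For instance, mapping two distinct outgoing edges of the same vertex onto a single edge of the target system fails the injectivity half of the test.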
If we only consider the "underlying graph" of a "transition system", without the "accepting condition", the notion of "locally bijective morphism" is equivalent to the usual notion of bisimulation. However, when considering the accepting condition, we only impose that the acceptance of each "run" must be preserved (and not that the colouring of each transition is preserved). This allows us to compare transition systems using different classes of accepting conditions.
We state two simple, but key facts.
If \(\varphi : \mathcal {T}\rightarrow \mathcal {T}^{\prime }\) is a "locally bijective morphism", then \(\varphi \) induces a bijection between the runs in \({\mathpzc {Run}}_{\mathcal {T}}\) and \({\mathpzc {Run}}_{\mathcal {T}^{\prime }}\) that preserves their acceptance.
If \(\varphi \) is a "locally surjective morphism", then it is onto the "accessible part" of \(\mathcal {T}^{\prime }\) . That is, for every "accessible" state \(v^{\prime }\in V^{\prime }\) , there exists some state \(v\in V\) such that \(\varphi _V(v)=v^{\prime }\) . In particular, if every state of \(\mathcal {T}^{\prime }\) is "accessible", \(\varphi \) is surjective.
Intuitively, if we transform a "transition system" \(\mathcal {T}_1\) into \(\mathcal {T}_2\) “without adding non-determinism”, we will have a locally bijective morphism \(\varphi : \mathcal {T}_2 \rightarrow \mathcal {T}_1\) . In particular, if we consider the "composition" \(\mathcal {T}_2=\mathcal {B}\lhd \mathcal {T}_1\) of \(\mathcal {T}_1\) by some "deterministic automaton" \(\mathcal {B}\) , as defined in section , the projection over \(\mathcal {T}_1\) gives a "locally bijective morphism" from \(\mathcal {T}_2\) to \(\mathcal {T}_1\) .
Let \(\mathcal {A}\) be the "Muller automaton" presented in the example REF , and \(\mathcal {Z}_{\mathcal {F}_1}\) the "Zielonka tree automaton" for its Muller condition \(\mathcal {F}_1=\lbrace \lbrace a\rbrace ,\lbrace b\rbrace \rbrace \) as in the figure REF . We show them in figure REF and their "composition" \(\mathcal {Z}_{\mathcal {F}_1}\lhd \mathcal {A}\) in figure REF . If we name the states of \(\mathcal {A}\) with the letters \(A\) and \(B\) , and those of \(\mathcal {Z}_{\mathcal {F}_1}\) with \(\alpha ,\beta \) , there is a locally bijective morphism \(\varphi : \mathcal {Z}_{\mathcal {F}_1}\lhd \mathcal {A}\rightarrow \mathcal {A}\) given by the projection on the first component
\( \varphi _V((X,y))=X \; \text{ for } X\in \lbrace A,B\rbrace ,\, y\in \lbrace \alpha ,\beta \rbrace \)
and \(\varphi _E\) associates to each edge \(e\in \mathit {Out}(X,y)\) labelled by \(a\in \lbrace 0,1\rbrace \) the only edge in \(\mathit {Out}(X)\) labelled with \(a\) .
<FIGURE><FIGURE>We know that \(\mathcal {Z}_{\mathcal {F}_1}\) is a minimal automaton recognizing the "Muller condition" \(\mathcal {F}_1\) (theorem REF ). However, the "composition" \(\mathcal {Z}_{\mathcal {F}_1} \lhd \mathcal {A}\) has 4 states, and in the example REF (figure REF ) we have shown a parity automaton recognizing \(\mathcal {L}(\mathcal {A})\) with only 3 states. Moreover, there is a "locally bijective" morphism from this smaller parity automaton to \(\mathcal {A}\) (we only have to send the two states on the left to \(A\) and the state on the right to \(B\) ). In the next section we will show a transformation that produces the parity automaton with only 3 states starting from \(\mathcal {A}\) .
Morphisms of automata and games
Before presenting the optimal transformation of Muller transition systems, we will state some facts about "morphisms" in the particular case of "automata" and "games". When we speak about a "morphism" between two automata, we always refer implicitly to the morphism between the corresponding "labelled transition systems", as explained in "example REF ".
A "morphism" \(\varphi =(\varphi _V,\varphi _E)\) between two "deterministic automata" is always "locally bijective" and it is completely characterized by the map \(\varphi _V\) .
For each letter of the input alphabet and each state, there must be one and only one outgoing transition labelled with this letter.
Let \(\mathcal {A}=(Q,\Sigma , I_0, \Gamma , \delta , \mathit {Acc})\) , \(\mathcal {A}^{\prime }=(Q^{\prime },\Sigma , I_0^{\prime }, \Gamma , \delta ^{\prime }, \mathit {Acc}^{\prime })\) be two (possibly non-deterministic) "automata". If there is a "locally surjective morphism" \(\varphi : \mathcal {A}\rightarrow \mathcal {A}^{\prime }\) , then \("\mathcal {L}(\mathcal {A})"="\mathcal {L}(\mathcal {A}^{\prime })"\) .
Let \(u\in \Sigma ^\omega \) . If \(u\in \mathcal {L}(\mathcal {A})\) there is an accepting run, \(\varrho \) , over \(u\) in \(\mathcal {A}\) . By the definition of a "morphism of labelled transition systems", \(\varphi (\varrho )\) is also an accepting "run over \(u\) " in \(\mathcal {A}^{\prime }\) .
Conversely, if \(u\in \mathcal {L}(\mathcal {A}^{\prime })\) there is an accepting "run over \(u\) " \(\varrho ^{\prime }\) in \(\mathcal {A}^{\prime }\) . Since \(\varphi \) is locally surjective there is a run \(\varrho \) in \(\mathcal {A}\) , such that \(\varphi (\varrho )=\varrho ^{\prime }\) , and therefore \(\varrho \) is an accepting run over \(u\) .
The converse of the previous proposition does not hold: \("\mathcal {L}(\mathcal {A})"=\mathcal {L}(\mathcal {A}^{\prime })\) does not imply the existence of morphisms \(\varphi : \mathcal {A}\rightarrow \mathcal {A}^{\prime }\) or \(\varphi : \mathcal {A}^{\prime } \rightarrow \mathcal {A}\) , even if \(\mathcal {A}\) has minimal size among the Muller automata recognizing \(\mathcal {L}(\mathcal {A})\) .
If \(\mathcal {A}\) and \(\mathcal {A}^{\prime }\) are "non-deterministic automata" and \(\varphi :\mathcal {A}\rightarrow \mathcal {A}^{\prime }\) is a "locally bijective morphism", then \(\mathcal {A}\) and \(\mathcal {A}^{\prime }\) have to share some other important semantic properties. Two classes of automata that have been extensively studied are unambiguous and good for games automata. An automaton is ""unambiguous"" if for every input word \(w\in \Sigma ^\omega \) there is at most one accepting "run over" \(w\) . ""Good for games"" automata (GFG), first introduced by Henzinger and Piterman in [1]}, are automata that can resolve the non-determinism depending only on the prefix of the word read so far. These types of automata have many good properties and have been used in different contexts (as for example in the model checking of LTL formulas [2]} or in the theory of cost functions [3]}). Unambiguous automata can recognize \(\omega \) -regular languages using a "Büchi" condition (see [4]}) and GFG automata have strictly more expressive power than deterministic ones, being in some cases exponentially smaller (see [5]}, [6]}).
We omit the proof of the next proposition, as it is a consequence of fact REF and of the argument from the proof of proposition REF .
Let \(\mathcal {A}\) and \(\mathcal {A}^{\prime }\) be two "non-deterministic automata". If \(\varphi :\mathcal {A}\rightarrow \mathcal {A}^{\prime }\) is a "locally bijective morphism", then
\(\mathcal {A}\) is unambiguous if and only if \(\mathcal {A}^{\prime }\) is unambiguous.
\(\mathcal {A}\) is GFG if and only if \(\mathcal {A}^{\prime }\) is GFG.
Having a "locally bijective morphism" between two games implies that the "winning regions" of the players are preserved.
Let \(\mathcal {G}=(V, E, \mathit {Source}, \mathit {Target}, v_0, \mathit {Acc}, l_V)\) and \(\mathcal {G}^{\prime }=(V^{\prime },E^{\prime }, \mathit {Source}^{\prime }, \mathit {Target}^{\prime }, v_0^{\prime }, \mathit {Acc}^{\prime }, l_V^{\prime })\) be two "games" such that there is a "locally bijective morphism" \(\varphi :\mathcal {G}\rightarrow \mathcal {G}^{\prime }\) . Let \(P\in \lbrace Eve, Adam\rbrace \) be a player in those games. Then, \(P\) wins \(\mathcal {G}\) if and only if she/he wins \(\mathcal {G}^{\prime }\) . Moreover, if \(\varphi \) is surjective, the "winning region" of \(P\) in \(\mathcal {G}^{\prime }\) is the image by \(\varphi \) of her/his winning region in \(\mathcal {G}\) , \("\mathcal {W}_P"(\mathcal {G}^{\prime })=\varphi ("\mathcal {W}_P"(\mathcal {G}))\) .
Let \(S_P: {\mathpzc {Run}}_{\mathcal {G}}\cap E^* \rightarrow E\) be a winning "strategy" for player \(P\) in \(\mathcal {G}\) . Then, it is easy to verify that the strategy \(S_P^{\prime }: {\mathpzc {Run}}_{\mathcal {G}^{\prime }}\cap E^{\prime *} \rightarrow E^{\prime }\) defined as
\( S_P^{\prime }(\varrho ^{\prime }) = \varphi _E ( S_P(\varphi ^{-1}(\varrho ^{\prime }))) \)
is a winning "strategy" for \(P\) in \(\mathcal {G}^{\prime }\) . (Remark that thanks to fact REF , the morphism \(\varphi \) induces a bijection over "runs", allowing us to use \(\varphi ^{-1}\) in this case).
Conversely, if \(S_P^{\prime }: {\mathpzc {Run}}_{\mathcal {G}^{\prime }}\cap E^{\prime *} \rightarrow E^{\prime }\) is a winning "strategy" for \(P\) in \(\mathcal {G}^{\prime }\) , then \( S_P(\varrho ) = \varphi _E^{-1} ( S_P^{\prime }(\varphi (\varrho ))) \)
is a winning "strategy" for \(P\) in \(\mathcal {G}\) . Here \( \varphi _E^{-1} (e^{\prime })\) is the only edge \(e\in E\) in \("\mathit {Out}"(\mathit {Target}("\mathit {Last}"(\varrho )))\) such that \(\varphi _E(e)=e^{\prime }\) .
The equality \("\mathcal {W}_P"(\mathcal {G}^{\prime })=\varphi ("\mathcal {W}_P"(\mathcal {G}))\) stems from the fact that if we choose a different initial vertex \(v_1\) in \(\mathcal {G}\) , then \(\varphi \) is a "locally bijective morphism" to the game \(\mathcal {G}^{\prime }\) with initial vertex \(\varphi (v_1)\) . Conversely, if we take a different initial vertex \(v_1^{\prime }\) in \(\mathcal {G}^{\prime }\) , since \(\varphi \) is surjective we can take a vertex \(v_1\in \varphi ^{-1}(v_1^{\prime })\) , and \(\varphi \) remains a locally bijective morphism between the resulting games.
The alternating cycle decomposition
Most transformations of "Muller" into "parity" "transition systems" are based on the "composition" by some automaton converting the Muller condition into a parity one. These transformations act on the totality of the system uniformly, regardless of the local structure of the system and the "acceptance condition".
The transformation we introduce in this section takes into account the interplay between the particular "acceptance condition" and the "transition system", inspired by the alternating chains introduced in [7]}.
In the following we will consider "Muller transition systems" with the Muller acceptance condition using edges as colours. We can always suppose this, since given a transition system \(\mathcal {T}\) with edges coloured by \(\gamma : E\rightarrow C\) and a Muller condition \(\mathcal {F}\subseteq \mathcal {P}(C)\) , the condition \(\mathcal {F}^{\prime }\subseteq \mathcal {P}(E)\) defined as \(A\in \mathcal {F}^{\prime }\; \Leftrightarrow \; \gamma (A)\in \mathcal {F}\) is an "equivalent condition over" \(\mathcal {T}\) . However, the size of the representation of the condition \(\mathcal {F}\) might change. Making this assumption corresponds to consider what are called explicit Muller conditions. In particular, solving Muller games with explicit Muller conditions is in \(\mathrm {PTIME}\) [8]}, while solving general Muller games is \(\mathrm {PSPACE}\) -complete [9]}.
Given a "transition system" \(\mathcal {T}=(V,E,\mathit {Source},\mathit {Target},I_0,\mathit {Acc})\) , a loop is a subset of edges \(l\subseteq E\) such that there exists \(v\in V\) and a finite "run" \(\varrho \in {\mathpzc {Run}}_{\mathcal {T},v}\) such that \("\mathit {First}"(\varrho )="\mathit {Last}"(\varrho )=v\) and \({\mathit {App}}(\varrho )=l\) . The set of "loops" of \(\mathcal {T}\) is denoted \({\mathpzc {Loop}}(\mathcal {T})\) . For a "loop" \(l\in {\mathpzc {Loop}}(\mathcal {T})\) we write
\( ""\mathit {States}""(l):= \lbrace v\in V \; : \; \exists e\in l, \; \mathit {Source}(e)=v \rbrace .\)
Observe that there is a natural partial order in the set \({\mathpzc {Loop}}(\mathcal {T})\) given by set inclusion.
If \(l\) is a "loop" in \({\mathpzc {Loop}}(\mathcal {T})\) , for every \(q\in {\mathit {States}}(l)\) there is a run \(\varrho \in {\mathpzc {Run}}_{\mathcal {T},q}\) such that \(\mathit {App}(\varrho )=l\) .
The maximal loops of \({\mathpzc {Loop}}(\mathcal {T})\) (for set inclusion) are disjoint and in one-to-one correspondence with the "strongly connected components" of \(\mathcal {T}\) .
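The fact above gives a direct way to compute the maximal loops: they are the edge sets of the "strongly connected components". A hedged Python sketch (our own encoding of edges as (source, name, target) triples; Kosaraju's algorithm for the components):

```python
# Illustrative sketch, not from the paper: compute the maximal loops of a
# transition system as the edge sets of its strongly connected components.
from collections import defaultdict

def maximal_loops(edges):
    succ, pred, verts = defaultdict(list), defaultdict(list), set()
    for s, _, t in edges:
        succ[s].append(t)
        pred[t].append(s)
        verts |= {s, t}
    # First pass: record vertices by increasing finishing time.
    order, seen = [], set()
    def dfs1(v):
        seen.add(v)
        for w in succ[v]:
            if w not in seen:
                dfs1(w)
        order.append(v)
    for v in verts:
        if v not in seen:
            dfs1(v)
    # Second pass on the reversed graph yields the components.
    comp, assigned = {}, set()
    def dfs2(v, root):
        comp[v] = root
        assigned.add(v)
        for w in pred[v]:
            if w not in assigned:
                dfs2(w, root)
    for v in reversed(order):
        if v not in assigned:
            dfs2(v, v)
    # A maximal loop = the edges staying inside one component (non-empty
    # only for components containing at least one cycle).
    loops = defaultdict(set)
    for s, name, t in edges:
        if comp[s] == comp[t]:
            loops[comp[s]].add((s, name, t))
    return [l for l in loops.values() if l]
```

Edges outside every strongly connected component (such as those labelling \(t_0\) in the decomposition below) belong to no loop and are simply dropped.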
[Alternating cycle decomposition]
Let \(\mathcal {T}=(V,E,\mathit {Source},\mathit {Target},I_0, \mathcal {F})\) be a "Muller transition system" with "acceptance condition" given by \(\mathcal {F}\subseteq \mathcal {P}(E)\) . The alternating cycle decomposition (abbreviated ACD) of \(\mathcal {T}\) , noted \({\mathcal {ACD}}(\mathcal {T})\) , is a family of "labelled trees" \((t_1,\nu _1),\dots , (t_r,\nu _r)\) with nodes labelled by "loops" in \({\mathpzc {Loop}}(\mathcal {T})\) , \(\nu _i: t_i\rightarrow {\mathpzc {Loop}}(\mathcal {T})\) . We define it inductively as follows:
Let \(\lbrace l_1,\dots , l_r\rbrace \) be the set of maximal loops of \({\mathpzc {Loop}}(\mathcal {T})\) . For each \(i\in \lbrace 1,\dots , r\rbrace \) we consider a "tree" \(t_i\) and define \(\nu _i(\varepsilon )=l_i\) .
Given an already defined node \(\tau \) of a tree \(t_i\) , we consider the maximal loops of the set
\(\lbrace l\subseteq \nu _i(\tau ) \; : \; l\in {\mathpzc {Loop}}(\mathcal {T}) \text{ and } (l \in \mathcal {F}\; \Leftrightarrow \; \nu _i(\tau ) \notin \mathcal {F})\rbrace \)
and for each of these loops \(l\) we add a child to \(\tau \) in \(t_i\) labelled by \(l\) .
For notational convenience we add a special "tree" \((t_0,\nu _0)\) with a single node \(\varepsilon \) labelled with the edges not appearing in any other tree of the forest, i.e., \(\nu _0(\varepsilon )=E \setminus \bigcup _{i=1}^{r}l_i\) (remark that this is not a "loop").
We define \(\mathit {States}(\nu _0(\varepsilon )):= V\setminus \bigcup _{i=1}^{r}\mathit {States}(l_i)\) (remark that this does not follow the general definition of \("\mathit {States}"()\) for loops).
We call the trees \(t_1,\dots , t_r\) the ""proper trees"" of the "alternating cycle decomposition" of \(\mathcal {T}\) . Given a node \(\tau \) of \(t_i\) , we note \("\mathit {States}"_i(\tau ):={\mathit {States}}(\nu _i(\tau ))\) .
As for the "Zielonka tree", the "alternating cycle decomposition" of \(\mathcal {T}\) is not unique, since it depends on the order in which we introduce the children of each node. This will not affect the upcoming results, and we will refer to it as “the” alternating cycle decomposition of \(\mathcal {T}\) .
For the rest of the section we fix a "Muller transition system" \(\mathcal {T}=(V,E,\mathit {Source},\mathit {Target}, I_0, \mathcal {F})\) with the "alternating cycle decomposition" given by \((t_0,\nu _0), (t_1,\nu _1),\dots , (t_r,\nu _r)\) .
The "Zielonka tree" for a "Muller condition" \(\mathcal {F}\) over the set of colours \(C\) can be seen as a special case of this construction, for the automaton with a single state, input alphabet \(C\) , a transition for each letter in \(C\) and "acceptance condition" \(\mathcal {F}\) .
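Since the "Zielonka tree" is the single-state instance of this construction, the inductive alternation step can be illustrated on it directly. The following Python sketch (our own names; brute force over subsets, so only sensible for the small colour sets of these examples) builds the tree of a Muller condition given as a set of accepting subsets:

```python
# Hypothetical sketch of the alternation step: the children of a node
# labelled X are the maximal subsets of X on the other side of F.
# In the alternating cycle decomposition the same step runs over maximal
# sub-loops of a loop instead of arbitrary subsets of a set of colours.
from itertools import combinations

def zielonka_tree(X, F):
    """Return the Zielonka tree of F rooted at X as a pair (label, children),
    where F is a set of frozensets of colours."""
    in_F = frozenset(X) in F
    # Scan proper subsets from largest to smallest; keep the maximal ones
    # whose membership in F alternates with that of X.
    chosen = []
    for k in range(len(X) - 1, 0, -1):
        for Y in combinations(sorted(X), k):
            Ys = frozenset(Y)
            if (Ys in F) != in_F and not any(Ys <= c for c in chosen):
                chosen.append(Ys)
    return (frozenset(X), [zielonka_tree(set(Ys), F) for Ys in chosen])
```

For the condition \(\mathcal {F}_1=\lbrace \lbrace a\rbrace ,\lbrace b\rbrace \rbrace \) used in the examples, this returns a non-accepting root labelled \(\lbrace a,b\rbrace \) with two accepting leaves \(\lbrace a\rbrace \) and \(\lbrace b\rbrace \) .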
Each state and edge of \(\mathcal {T}\) appears in exactly one of the "trees" of \({\mathcal {ACD}}(\mathcal {T})\) .
The ""index"" of a state \(q\in V\) (resp. of an edge \(e\in E\) ) in \({\mathcal {ACD}}(\mathcal {T})\) is the only number \(j\in \lbrace 0,1,\dots ,r\rbrace \) such that \(q\in "\mathit {States}"_j(\varepsilon )\) (resp. \(e \in \nu _j(\varepsilon )\) ).
For each state \(q\in V\) of "index" \(j\) we define the ""subtree associated to the state \(q\) "" as the "subtree" \(t_q\) of \(t_j\) consisting of the set of nodes \(\lbrace \tau \in t_j \; : \; q\in {\mathit {States}}_j(\tau ) \rbrace \) .
We refer to figures REF and REF for an example of \("t_q"\) .
For each "proper tree" \(t_i\) of \("{\mathcal {ACD}}"(\mathcal {T})\) we say that \(t_i\) is (ACD)even if \(\nu _i(\varepsilon )\in \mathcal {F}\) and that it is (ACD)odd if \(\nu _i(\varepsilon )\notin \mathcal {F}\) .
We say that the "alternating cycle decomposition" of \(\mathcal {T}\) is even if all the trees of maximal "height" of \({\mathcal {ACD}}(\mathcal {T})\) are even; that it is odd if all of them are odd; and that it is (ACD)ambiguous if there are even and odd trees of maximal "height".
For each \(\tau \in t_i\) , \(i=1,\dots ,r\) , we define the (node)priority of \(\tau \) in \(t_i\) , written \(p_i(\tau )\) as follows:
If \({\mathcal {ACD}}(\mathcal {T})\) is (ACD)even or (ACD)ambiguous
If \(t_i\) is (ACD)even (\(\nu _i(\varepsilon )\in \mathcal {F}\) ), then \(p_i(\tau ):={\mathit {Depth}}(\tau )=|\tau |\) .
If \(t_i\) is (ACD)odd (\(\nu _i(\varepsilon )\notin \mathcal {F}\) ), then \(p_i(\tau ):={\mathit {Depth}}(\tau )+1=|\tau |+1\) .
If \({\mathcal {ACD}}(\mathcal {T})\) is (ACD)odd
If \(t_i\) is (ACD)even (\(\nu _i(\varepsilon )\in \mathcal {F}\) ), then \(p_i(\tau ):={\mathit {Depth}}(\tau )+2=|\tau |+2\) .
If \(t_i\) is (ACD)odd (\(\nu _i(\varepsilon )\notin \mathcal {F}\) ), then \(p_i(\tau ):={\mathit {Depth}}(\tau )+1=|\tau |+1\) .
For \(i=0\) , we define \(p_0(\varepsilon )=0\) if \("{\mathcal {ACD}}"(\mathcal {T})\) is (ACD)even or (ACD)ambiguous and \(p_0(\varepsilon )=1\) if \("{\mathcal {ACD}}"(\mathcal {T})\) is (ACD)odd.
The assignation of priorities to nodes produces a labelling of the levels of each tree. It will be used to determine the priorities needed by a parity "transition system" to simulate \(\mathcal {T}\) . The distinction between the cases \({\mathcal {ACD}}(\mathcal {T})\) even or odd is added only to obtain the minimal number of priorities in every case.
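The case analysis defining \(p_i(\tau )\) depends only on three parameters, as the following Python sketch (illustrative names of our own) makes explicit:

```python
# Sketch of the node priority assignment: `depth` is |tau|, `tree_even`
# says whether the root loop of t_i is accepting, and `acd_odd` whether
# the whole ACD is odd (an ambiguous ACD falls in the first case).
def node_priority(depth, tree_even, acd_odd):
    if not acd_odd:  # ACD even or ambiguous
        return depth if tree_even else depth + 1
    else:            # ACD odd: shift even trees by 2 so priorities start at 1
        return depth + 2 if tree_even else depth + 1
```

In the example below this is exactly why the levels of \(t_1\) are labelled starting from 2 rather than 0.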
In figure REF we represent a "transition system" \(\mathcal {T}=(V,E,\mathit {Source},\mathit {Target},q_0,\mathcal {F})\) with \(V=\lbrace q_0,q_1,q_2,q_3,q_4,q_5\rbrace \) , \(E=\lbrace a,b,\dots ,j,k\rbrace \) and using the "Muller condition"
\(\mathcal {F}=\lbrace \lbrace c,d,e \rbrace ,\lbrace e \rbrace ,\lbrace g,h,i \rbrace ,\lbrace l \rbrace ,\lbrace h,i,j,k \rbrace ,\lbrace j,k \rbrace \rbrace .\)
It has 2 strongly connected components (with vertices \(S_1=\lbrace q_1,q_2\rbrace , S_2=\lbrace q_3,q_4,q_5\rbrace \) ), and a vertex \(q_0\) that does not belong to any strongly connected component.
The "alternating cycle decomposition" of this transition system is shown in figure REF . It consists of two proper "trees", \(t_1\) and \(t_2\) , corresponding to the strongly connected components of \(\mathcal {T}\) , and the tree \(t_0\) that corresponds to the edges not appearing in the strongly connected components. We observe that \({\mathcal {ACD}}(\mathcal {T})\) is (ACD)odd (\(t_2\) is the highest tree, and it starts with a non-accepting "loop"). It is for this reason that we start labelling the levels of \(t_1\) from 2 (if we had assigned priorities \(0,1\) to the nodes of \(t_1\) we would have used 4 priorities, when only 3 are strictly necessary).
In figure REF we show the "subtree associated to" \(q_4\) .
<FIGURE><FIGURE><FIGURE>
The alternating cycle decomposition transformation
We proceed to show how to use the "alternating cycle decomposition" of a "Muller transition system" to obtain a "parity transition system". Let \(\mathcal {T}=(V,E,\mathit {Source},\mathit {Target},I_0, \mathcal {F})\) be a "Muller transition system" and \((t_0,\nu _0), (t_1, \nu _1),\dots , (t_r,\nu _r)\) its "alternating cycle decomposition".
First, we adapt the definitions of \(\mathit {Supp}\) and \(\mathit {Nextbranch}\) to the setting with multiple trees.
For an edge \(e\in E\) such that \(\mathit {Target}(e)\) has "index" \(j\) , for \(i\in \lbrace 0,1,\dots ,r\rbrace \) and a branch \(\beta \) in some subtree of \(t_i\) , we define the ""support"" of \(e\) from \(\beta \) as:
\( {\mathit {Supp}}(\beta ,i,e)={\left\lbrace \begin{array}{ll}\text{The maximal node (for } {\sqsubseteq }\text{) } \tau \in \beta \text{ such that } e\in \nu _i(\tau ), & \text{ if } i= j, \\[2mm]\text{The root } \varepsilon \text{ of } t_j, & \text{ if } i\ne j.\end{array}\right.} \)
Intuitively, \({\mathit {Supp}}(\beta ,i,e)\) is the highest node we visit if we want to go from the bottom of the branch \(\beta \) to a node of the tree that contains \(e\) “in an optimal trajectory” (going up as little as possible). If we have to jump to another tree, we define \({\mathit {Supp}}(\beta ,i,e)\) as the root of the destination tree.
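Under the assumption that a branch \(\beta \) is represented as the list of its nodes from the root to the leaf, each paired with its loop label, this computation can be sketched as follows (hypothetical Python; `index_of` maps each edge to the index of its tree and `root_of` each index to the root of its tree — both names are ours):

```python
def supp(beta, i, e, index_of, root_of):
    """The deepest node of beta whose loop contains e, or the root of the
    tree of e when e lives in a different tree than beta."""
    j = index_of[e]
    if i != j:
        return root_of[j]        # jump to the root of the destination tree
    for node, loop in reversed(beta):  # deepest node first (maximal for the
        if e in loop:                  # ancestor order)
            return node
    return root_of[i]            # e appears at least in the root loop of t_i
```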
Let \(i\in \lbrace 0,1,\dots ,r\rbrace \) , \(q\) be a state of "index" \(i\) , \(\beta \) be a branch of some "subtree" of \(t_i\) and \(\tau \in \beta \) be a node of \(t_i\) such that \(q\in {\mathit {States}}_i(\tau )\) . If \(\tau \) is not the deepest node of \(\beta \) , let \(\sigma _\beta \) be the unique child of \(\tau \) in \(t_i\) such that \(\sigma _\beta \in \beta \) . We define:
\( {\mathit {Nextchild}_{t_q}}(\beta ,\tau )={\left\lbrace \begin{array}{ll}\tau , & \text{ if } \tau \text{ is a leaf in } t_q.\\[3mm]\text{Smallest older sibling of } \sigma _\beta \text{ in } t_q, & \text{ if } \sigma _\beta \text{ is defined and has an older sibling in } t_q.\\[3mm]\text{Smallest child of } \tau \text{ in } t_q, & \text{ in any other case}.\end{array}\right.} \)
Let \(i\in \lbrace 0,1,\dots ,r\rbrace \) and \(\beta \) be a branch of some "subtree" of \(t_i\) . For a state \(q\) of "index" \(j\) and a node \(\tau \) such that \(q\in {\mathit {States}}_j(\tau )\) and such that \(\tau \in \beta \) if \(i=j\) , we define:
\( {\mathit {Nextbranch}_{t_q}}(\beta ,i,\tau )= {\left\lbrace \begin{array}{ll}\text{Leftmost branch in } t_q \text{ below } {\mathit {Nextchild}_{t_q}}(\beta ,\tau ), & \text{ if } i= j, \\[3mm]\text{The leftmost branch in } {\mathit {Subtree}}_{t_q}(\tau ), & \text{ if } i\ne j.\end{array}\right.} \)
[ACD-transformation]
Let \(\mathcal {T}=(V,E,\mathit {Source},\mathit {Target},I_0, \mathcal {F})\) be a "Muller transition system" with "alternating cycle decomposition" \({\mathcal {ACD}}(\mathcal {T})= \lbrace (t_0,\nu _0),(t_1,\nu _1),\dots ,(t_r,\nu _r)\rbrace \) . We define its ""ACD-parity transition system"" (or ACD-transformation) \({\mathcal {P}_{\mathcal {ACD}(\mathcal {T})}}=(V_P,E_P,\mathit {Source}_P,\mathit {Target}_P,I_0^{\prime }, p:E_P\rightarrow \mathbb {N} )\) as follows:
\(V_P=\lbrace (q,i,\beta ) \; : \; q\in V \text{ of "index" } i \text{ and } \beta \in {\mathit {Branch}}({t_q}) \rbrace \) .
For each node \((q,i,\beta )\in V_P\) and each edge \(e\in {\mathit {Out}}(q)\) we define an edge \(e_{i,\beta }\) from \((q,i,\beta )\) . We set
\(\mathit {Source}_P(e_{i,\beta })=(q,i,\beta )\) , where \(q=\mathit {Source}(e)\) .
\(\mathit {Target}_P(e_{i,\beta })=(q^{\prime },k,{\mathit {Nextbranch}_{t_{q^{\prime }}}}(\beta ,i,\tau ))\) , where \(q^{\prime }=\mathit {Target}(e)\) , \(k\) is its "index" and \(\tau ={\mathit {Supp}}(\beta ,i,e)\) .
\(p(e_{i,\beta })=(node){p_j}({\mathit {Supp}}(\beta ,i,e))\) , where \(j\) is the "index" of \({\mathit {Supp}}(\beta ,i,e)\) .
\(I_0^{\prime }=\lbrace (q_0,i,\beta _0) \; : \; q_0\in I_0, \, i \text{ the index of } q_0\) and \(\beta _0\) the leftmost branch in \({t_{q_0}}\rbrace \) .
If \(\mathcal {T}\) is labelled by \(l_V:V\rightarrow L_V\) , \(l_E:E\rightarrow L_E\) , we label \(\mathcal {P}_{\mathcal {ACD}(\mathcal {T})}\) by \(l_V^{\prime }((q,i,\beta ))=l_V(q)\) and \(l_E^{\prime }(e_{i,\beta })=l_E(e)\) .
The set of states of \(\mathcal {P}_{\mathcal {ACD}(\mathcal {T})}\) is built as follows: for each state \(q\in V\) we consider the subtree of \({\mathcal {ACD}}(\mathcal {T})\) consisting of the nodes with \(q\) in their label, and we add a state for each branch of this subtree. Intuitively, to define transitions in the transition system \({\mathcal {P}_{\mathcal {ACD}(\mathcal {T})}}\) we move simultaneously in \(\mathcal {T}\) and in \({\mathcal {ACD}}(\mathcal {T})\) . We start from \(q_0\in I_0\) and from the leftmost branch of \(t_{q_0}\) . When we take a transition \(e\) in \(\mathcal {T}\) while being in a branch \(\beta \) , we climb the branch \(\beta \) searching a node \(\tau \) with \(q^{\prime }=\mathit {Target}(e)\) and \(e\) in its label, and we produce the priority corresponding to the level reached. If no such node exists, we jump to the root of the tree corresponding to \(q^{\prime }\) . Then, we move to the next child of \(\tau \) on the right of \(\beta \) in the tree \(t_{q^{\prime }}\) , and we pick the leftmost branch under it in \(t_{q^{\prime }}\) . If we had jumped to the root of \(t_{q^{\prime }}\) from a different tree, we pick the leftmost branch of \(t_{q^{\prime }}\) .
The size of \({\mathcal {P}_{\mathcal {ACD}(\mathcal {T})}}\) is
\( |\mathcal {P}_{\mathcal {ACD}(\mathcal {T})}|=\sum \limits _{q\in V} |{\mathit {Branch}}(t_q)|. \)
The number of priorities used by \({\mathcal {P}_{\mathcal {ACD}(\mathcal {T})}}\) is the "height" of a maximal tree of \({\mathcal {ACD}}(\mathcal {T})\) if \({\mathcal {ACD}}(\mathcal {T})\) is (ACD)even or (ACD)odd, and the "height" of a maximal tree plus one if \({\mathcal {ACD}}(\mathcal {T})\) is (ACD)ambiguous.
In figure REF we show the "ACD-parity transition system" \({\mathcal {P}_{\mathcal {ACD}(\mathcal {T})}}\) of the transition system of example REF (figure REF ). States are labelled with the corresponding state \(q_j\) in \(\mathcal {T}\) , the tree of its "index" and a node of \(t_i\) that is a leaf in \(t_{q_j}\) (defining a branch of it). We have tagged the edges of \({\mathcal {P}_{\mathcal {ACD}(\mathcal {T})}}\) with the names of the edges of \(\mathcal {T}\) (even if it is not an automaton). These indicate the image of the edges by the "morphism" \(\varphi : \mathcal {P}_{\mathcal {ACD}(\mathcal {T})}\rightarrow \mathcal {T}\) , and make clear the bijection between "runs" in \(\mathcal {T}\) and in \(\mathcal {P}_{\mathcal {ACD}(\mathcal {T})}\) . In this example, we create one “copy” of states \(q_0,q_1\) and \(q_2\) , three “copies” of the state \(q_3\) and two “copies” of states \(q_4\) and \(q_5\) . The resulting "parity transition system" \({\mathcal {P}_{\mathcal {ACD}(\mathcal {T})}}\) has therefore 10 states.
<FIGURE>Let \(\mathcal {A}\) be the "Muller automaton" of example "REF ". Its "alternating cycle decomposition" has a single tree that coincides with the "Zielonka tree" of its "Muller acceptance condition" \(\mathcal {F}_1\) (shown in figure REF ). However, its "ACD-parity transition system" has only 3 states, fewer than the "composition" \(\mathcal {Z}_{\mathcal {F}_1} \lhd \mathcal {A}\) (figure REF ), as shown in figure REF .
<FIGURE>[Correctness]
Let \(\mathcal {T}=(V, E, \mathit {Source}, \mathit {Target}, I_0, \mathcal {F})\) be a "Muller transition system" and \({\mathcal {P}_{\mathcal {ACD}(\mathcal {T})}}=(V_P, E_P, \mathit {Source}_P, \mathit {Target}_P, I_0^{\prime }, p:E_P\rightarrow \mathbb {N} )\) its "ACD-transition system". Then, there exists a "locally bijective morphism" \(\varphi : {\mathcal {P}_{\mathcal {ACD}(\mathcal {T})}} \rightarrow \mathcal {T}\) . Moreover, if \(\mathcal {T}\) is a "labelled transition system", then \(\varphi \) is a "morphism of labelled transition systems".
We define \(\varphi _V : V_P \rightarrow V\) by \(\varphi _V((q,i,\beta ))=q\) and \(\varphi _E : E_P \rightarrow E\) by \(\varphi _E(e_{i,\tau })=e\) .
It is clear that this map preserves edges, initial states and labels. It is also clear that it is "locally bijective", since we have defined one initial state in \({\mathcal {P}_{\mathcal {ACD}(\mathcal {T})}}\) for each initial state in \(\mathcal {T}\) , and by definition the edges in \(\mathit {Out}((q,i,\beta ))\) are in bijection with \(\mathit {Out}(q)\) . It therefore induces a bijection between the runs of the two transition systems (fact REF ). Let us see that a "run" \(\varrho \) in \(\mathcal {T}\) is accepted if and only if \(\varphi ^{-1}(\varrho )\) is accepted in \({\mathcal {P}_{\mathcal {ACD}(\mathcal {T})}}\) . First, we remark that any infinite run \(\varrho \) of \(\mathcal {T}\) will eventually stay in a "loop" \(l\in {\mathpzc {Loop}}(\mathcal {T})\) such that \(\mathit {Inf}(\varrho )=l\) , and therefore we will eventually only visit states corresponding to the tree \(t_i\) such that \(l\subseteq \nu _i(\varepsilon )\) in the "alternating cycle decomposition". Let \(p_{\min }\) be the smallest priority produced infinitely often in the run \(\varphi ^{-1}(\varrho )\) in \(\mathcal {P}_{\mathcal {ACD}(\mathcal {T})}\) . As in the proof of proposition REF , there is a unique node \(\tau _p\) in \(t_i\) visited infinitely often such that \(p_i(\tau _p)=p_{\min }\) . Moreover, the states visited infinitely often in \(\mathcal {P}_{\mathcal {ACD}(\mathcal {T})}\) correspond to branches below \(\tau _p\) , that is, they are of the form
\( (q,i,\beta )\) , with \(\beta \in {\mathit {Subtree}}_{t_q}(\tau _p), \text{ for } q\in {\mathit {States}}_i(\tau _p)\) .
We claim that \(\tau _p\) verifies:
\(l\subseteq \nu _i(\tau _p)\) .
\(l\nsubseteq \nu _i(\sigma )\) for every "child" \(\sigma \) of \(\tau _p\) .
By definition of \({\mathcal {ACD}}(\mathcal {T})\) this implies
\( l\in \mathcal {F}\; \Longleftrightarrow \; \nu _i(\tau _p)\in \mathcal {F}\; \Leftrightarrow \; p_{\min } \text{ is even.}\)
We show that \(l\subseteq \nu _i(\tau _p)\) . For every edge \(e\notin \nu _i(\tau _p)\) of "index" \(i\) and for every branch \(\beta \in {\mathit {Subtree}}_{t_q}(\tau _p), \text{ for } q\in {\mathit {States}}_i(\tau _p)\) , we have that \(\tau ^{\prime }={\mathit {Supp}}(\beta ,i,e)\) is a strict "ancestor" of \(\tau _p\) in \(t_i\) . Therefore, if \(l\) was not contained in \(\nu _i(\tau _p)\) we would produce infinitely often priorities strictly smaller than \(p_{\min }\) .
Finally, we show that \(l\nsubseteq \nu _i(\sigma )\) for every "child" \(\sigma \) of \(\tau _p\) . Since we reach \(\tau _p\) infinitely often, we take transitions \(e_{i,\beta }\) such that \(\tau _p={\mathit {Supp}}(\beta ,i,e)\) infinitely often. Let us reason by contradiction and let us suppose that there is some child \(\sigma \) of \(\tau _p\) such that \(l\subseteq \nu _i(\sigma )\) . Then for each edge \(e\in l\) , \(\mathit {Target}(e)\in {\mathit {States}}_i(\sigma )\) , and therefore \(\sigma \in t_q\) for all \(q\in {\mathit {States}}(l)\) and for each transition \(e_{i,\beta }\) such that \(\tau _p={\mathit {Supp}}(\beta ,i,e)\) , some branches passing through \(\sigma \) are considered as destinations. Eventually, we will go to some state \((q,i,\beta ^{\prime })\) , for some branch \(\beta ^{\prime }\in {\mathit {Subtree}}_{t_q}(\sigma )\) . But since \(l\subseteq \nu _i(\sigma )\) , then for every edge \(e\in l\) and branch \(\beta ^{\prime }\in {\mathit {Subtree}}_{t_q}(\sigma )\) it is verified that \({\mathit {Supp}}(\beta ^{\prime },i,e)\) is a "descendant" of \(\sigma \) , so we would not visit again \(\tau _p\) and all priorities produced infinitely often would be strictly greater than \(p_{\min }\) .
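The acceptance criterion used throughout this proof is the min-even parity convention: a run is accepting exactly when the smallest priority it produces infinitely often is even. On an ultimately periodic sequence of priorities \(uv^\omega \) this can be checked directly; a small sketch under that convention (the function names are ours, for illustration):

```python
def min_inf_priority(prefix, cycle):
    # In the sequence prefix . cycle^omega, exactly the priorities of the
    # cycle occur infinitely often, so the relevant minimum is min(cycle).
    return min(cycle)

def parity_accepts(prefix, cycle):
    # Min-even parity acceptance, matching the equivalence above:
    # accepted  <=>  p_min is even.
    return min_inf_priority(prefix, cycle) % 2 == 0
```

For instance, a run producing the priorities \(5\,3\,(2\,4)^\omega \) is accepting, while one producing \((1\,2)^\omega \) is rejecting, since the odd priority 1 recurs.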
From the remarks at the end of section REF , we obtain:
If \(\mathcal {A}\) is a "Muller automaton" over \(\Sigma \) , the automaton \({\mathcal {P}_{\mathcal {ACD}(\mathcal {A})}}\) is a "parity automaton" recognizing \(\mathcal {L}(\mathcal {A})\) . Moreover,
\(\mathcal {A}\) is "deterministic" if and only if \(\mathcal {P}_{\mathcal {ACD}(\mathcal {A})}\) is deterministic.
\(\mathcal {A}\) is "unambiguous" if and only if \(\mathcal {P}_{\mathcal {ACD}(\mathcal {A})}\) is unambiguous.
\(\mathcal {A}\) is "GFG" if and only if \(\mathcal {P}_{\mathcal {ACD}(\mathcal {A})}\) is GFG.
If \(\mathcal {G}\) is a "Muller game", then \(\mathcal {P}_{\mathcal {ACD}(\mathcal {G})}\) is a "parity game" that has the same winner as \(\mathcal {G}\) .
The "winning region" of \(\mathcal {G}\) for a player \(P\in \lbrace Eve, Adam\rbrace \) is \({\mathcal {W}_P}(\mathcal {G})=\varphi ({\mathcal {W}_P}(\mathcal {P}_{\mathcal {ACD}(\mathcal {G})}))\) , where \(\varphi \) is the morphism of the proof of proposition REF .
Optimality of the alternating cycle decomposition transformation
In this section we prove the strong optimality of the "alternating cycle decomposition transformation", both for number of priorities (proposition REF ) and for size (theorem REF ). We use the same ideas as for proving the optimality of the "Zielonka tree automaton" in section REF .
[Optimality of the number of priorities]
Let \(\mathcal {T}\) be a "Muller transition system" such that all its states are "accessible" and let \({\mathcal {P}_{\mathcal {ACD}(\mathcal {T})}}\) be its "ACD-transition system". If \(\mathcal {P}\) is another "parity transition system" such that there is a "locally bijective morphism" \(\varphi : \mathcal {P}\rightarrow \mathcal {T}\) , then \(\mathcal {P}\) uses at least as many priorities as \({\mathcal {P}_{\mathcal {ACD}(\mathcal {T})}}\) .
We distinguish three cases depending on whether \({\mathcal {ACD}}(\mathcal {T})\) is even, odd or ambiguous.
We treat simultaneously the cases where \({\mathcal {ACD}}(\mathcal {T})\) is even and where it is odd. In these cases, the number \(h\) of priorities used by \({\mathcal {P}_{\mathcal {ACD}(\mathcal {T})}}\) coincides with the maximal "height" of a tree in \({\mathcal {ACD}}(\mathcal {T})\) . Let \(t_i\) be a tree of maximal "height" \(h\) in \({\mathcal {ACD}}(\mathcal {T})\) , \(\beta =\lbrace \tau _1,\dots ,\tau _{h}\rbrace \in {\mathit {Branch}}(t_i)\) a branch of \(t_i\) of maximal length (ordered as \(\tau _1 {\sqsupseteq } \tau _2 {\sqsupseteq } \dots {\sqsupseteq } \tau _{h}=\varepsilon \) ) and \(l_j=\nu _i(\tau _j)\) , \(j=1,\dots , h\) . We fix \(q\in {\mathit {States}}_i(\tau _1)\) , where \(\tau _1\) is the leaf of \(\beta \) , and we write
\(\mathit {Loop}_q(\mathcal {T})=\lbrace w\in {\mathpzc {Run}}_{\mathcal {T},q}\cap E^* \; : \; {\mathit {First}}(w)={\mathit {Last}}(w)=q \rbrace ,\)
and for each \(j=1,\dots , h\) we choose \(w_j \in \mathit {Loop}_q(\mathcal {T})\) such that \({\mathit {App}}(w_j)=l_j\) . Let \(\eta ^{\prime }\) be the maximal priority appearing in \(\mathcal {P}\) . We show as in the proof of proposition REF that for every \(v\in \mathit {Loop}_q(\mathcal {T})\) , the "run" \(\varphi ^{-1}((w_1\dots w_k v)^\omega )\) must produce a priority smaller than or equal to \(\eta ^{\prime }-k+1\) . Taking \(k=h\) , the "run" \(\varphi ^{-1}((w_1\dots w_h)^\omega )\) produces a priority smaller than or equal to \(\eta ^{\prime }-h+1\) , which is even if and only if \({\mathcal {ACD}}(\mathcal {T})\) is even. By lemma REF we can suppose that \(\mathcal {P}\) uses all priorities in \([\eta ^{\prime }-h+1, \eta ^{\prime }]\) . We conclude that \(\mathcal {P}\) uses at least \(h\) priorities, so at least as many as \({\mathcal {P}_{\mathcal {ACD}(\mathcal {T})}}\) .
In the case where \({\mathcal {ACD}}(\mathcal {T})\) is ambiguous, if \(h\) is the maximal "height" of a tree in \({\mathcal {ACD}}(\mathcal {T})\) , then \({\mathcal {P}_{\mathcal {ACD}(\mathcal {T})}}\) uses \(h+1\) priorities. We can repeat the previous argument with two different maximal branches of respective maximal even and odd trees. We conclude that \(\mathcal {P}\) uses priorities at least in a range \([\mu ,\mu +h]\cup [\eta ,\eta +h]\) , with \(\mu \) even and \(\eta \) odd, so it uses at least \(h+1\) priorities.
A similar proof, or an application of the results from [10]} gives the following result:
If \(\mathcal {A}\) is a deterministic automaton, the accessible part of \({\mathcal {P}_{\mathcal {ACD}(\mathcal {A})}}\) uses the optimal number of priorities to recognize \(\mathcal {L}(\mathcal {A})\) .
Finally, we state and prove the optimality of \({\mathcal {P}_{\mathcal {ACD}(\mathcal {A})}}\) for size.
[Optimality of the number of states]
Let \(\mathcal {T}\) be a (possibly "labelled") "Muller transition system" such that all its states are "accessible" and let \({\mathcal {P}_{\mathcal {ACD}(\mathcal {T})}}\) be its "ACD-transition system". If \(\mathcal {P}\) is another "parity transition system" such that there is a "locally bijective morphism" \(\varphi : \mathcal {P}\rightarrow \mathcal {T}\) , then
\( |\mathcal {P}_{\mathcal {ACD}(\mathcal {T})}|\le |\mathcal {P}| \) .
Proof of theorem REF
We follow the same steps as for proving theorem REF . We will suppose that all states of the "transition systems" considered are "accessible".
Let \(\mathcal {T}_1\) and \(\mathcal {T}_2\) be "transition systems" such that there is a "morphism of transition systems" \(\varphi : \mathcal {T}_1 \rightarrow \mathcal {T}_2\) . Let \(l\in {\mathpzc {Loop}}(\mathcal {T}_2)\) be a "loop" in \(\mathcal {T}_2\) . An ""\(l\) -SCC"" of \(\mathcal {T}_1\) (with respect to \(\varphi \) ) is a non-empty "strongly connected subgraph" \((V_l,E_l)\) of the subgraph \((\varphi _V^{-1}({\mathit {States}}(l)),\varphi _E^{-1}(l) )\) such that
\(\nonumber & \text{for every } q_1\in V_l \text{ and every } e_2\in "\mathit {Out}"(\varphi (q_1))\cap l\\ &\text{there is an edge } e_1\in \varphi ^{-1}(e_2)\cap "\mathit {Out}"(q_1) \text{ such that } e_1\in E_l.\)
That is, an \(l\) -SCC is a "strongly connected subgraph" of \(\mathcal {T}_1\) in which all states and transitions correspond via \(\varphi \) to states and transitions appearing in the "loop" \(l\) . Moreover, given a "run" staying in \(l\) in \(\mathcal {T}_2\) , we can simulate it in the \(l\) -SCC of \(\mathcal {T}_1\) (property (REF )).
Let \(\mathcal {T}_1\) and \(\mathcal {T}_2\) be two "transition systems" such that there is a "locally surjective" "morphism" \(\varphi : \mathcal {T}_1 \rightarrow \mathcal {T}_2\) . Let \(l\in {\mathpzc {Loop}}(\mathcal {T}_2)\) and let \(C_l=(V_l,E_l)\) be a non-empty "\(l\) -SCC" in \(\mathcal {T}_1\) . Then, for every "loop" \(l^{\prime }\in {\mathpzc {Loop}}(\mathcal {T}_2)\) such that \(l^{\prime }\subseteq l\) there is a non-empty "\(l^{\prime }\) -SCC" in \(C_l\) .
Let \((V^{\prime },E^{\prime })=(V_l,E_l)\cap (\varphi _V^{-1}({\mathit {States}}(l^{\prime })),\varphi _E^{-1}(l^{\prime }))\) . We first prove that \((V^{\prime },E^{\prime })\) is non-empty. Let \(q_1\in V_l \subseteq \varphi _V^{-1}({\mathit {States}}(l))\) . Let \(\varrho \in {\mathpzc {Run}}_{\mathcal {T}_2,\varphi (q_1)}\) be a finite run in \(\mathcal {T}_2\) from \(\varphi (q_1)\) , visiting only edges in \(l\) and ending in \(q_2\in {\mathit {States}}(l^{\prime })\) . By local surjectivity, we can obtain a run in \(\varphi ^{-1}(\varrho )\) that stays in \((V_l,E_l)\) and ends in a state of \(\varphi _V^{-1}({\mathit {States}}(l^{\prime }))\) , hence in \(V^{\prime }\) . The subgraph \((V^{\prime },E^{\prime })\) clearly has property (REF ) (for \(l^{\prime }\) ).
We prove by induction on the size that any non-empty subgraph \((V^{\prime },E^{\prime })\) verifying the property (REF ) (for \(l^{\prime }\) ) admits an \(l^{\prime }\) -SCC. If \(|V^{\prime }|=1\) , then \((V^{\prime },E^{\prime })\) forms by itself a "strongly connected graph". If \(|V^{\prime }|>1\) and \((V^{\prime },E^{\prime })\) is not strongly connected, then there are vertices \(q,q^{\prime }\in V^{\prime }\) such that there is no path from \(q\) to \(q^{\prime }\) following edges in \(E^{\prime }\) . We let
\( V^{\prime }_q=\lbrace p\in V^{\prime } \; : \; \text{there is a path from } q \text{ to } p \text{ in } (V^{\prime },E^{\prime })\rbrace \; ; \; E^{\prime }_q=E^{\prime }\cap "\mathit {Out}"(V^{\prime }_q)\cap "\mathit {In}"(V^{\prime }_q) .\)
Since \(q^{\prime }\notin V^{\prime }_q\) , the size \(|V^{\prime }_q|\) is strictly smaller than \(|V^{\prime }|\) .
Also, the subgraph \((V^{\prime }_q,E^{\prime }_q)\) is non-empty since \(q\in V^{\prime }_q\) .
The property (REF ) holds from the definition of \((V^{\prime }_q,E^{\prime }_q)\) . We conclude by induction hypothesis.
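The induction above only needs reachability and strongly connected subgraphs of a restriction. For concreteness, here is a sketch computing the strongly connected components of the induced subgraph \((\varphi _V^{-1}({\mathit {States}}(l)),\varphi _E^{-1}(l))\) with Kosaraju's algorithm; the graph encoding (vertex list plus edge pairs) is an assumption made for illustration, not the paper's data structure:

```python
from collections import defaultdict

def sccs(vertices, edges):
    # Kosaraju's algorithm: DFS post-order on G, then DFS on reversed G.
    fwd, bwd = defaultdict(list), defaultdict(list)
    for u, v in edges:
        fwd[u].append(v)
        bwd[v].append(u)
    order, seen = [], set()

    def dfs_order(u):
        seen.add(u)
        for v in fwd[u]:
            if v not in seen:
                dfs_order(v)
        order.append(u)

    for u in vertices:
        if u not in seen:
            dfs_order(u)

    comp = {}

    def dfs_assign(u, root):
        comp[u] = root
        for v in bwd[u]:
            if v not in comp:
                dfs_assign(v, root)

    for u in reversed(order):
        if u not in comp:
            dfs_assign(u, u)
    groups = defaultdict(set)
    for u, root in comp.items():
        groups[root].add(u)
    return list(groups.values())

def sccs_of_restriction(vertices, edges, states_of_l, edges_of_l):
    # Restrict to the preimage of the loop l (allowed states and edges),
    # as in the lemma, and return the SCCs of the induced subgraph.
    vs = [v for v in vertices if v in states_of_l]
    es = [(u, v) for (u, v) in edges
          if (u, v) in edges_of_l and u in states_of_l and v in states_of_l]
    return sccs(vs, es)
```

For example, in the graph with edges a→b, b→a, b→c, the components are {a, b} and {c}; restricting to the "loop" {a→b, b→a} keeps only the component {a, b}.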
Let \(\mathcal {T}\) be a "Muller transition system" with acceptance condition \(\mathcal {F}\) and let \(\mathcal {P}\) be a "parity transition system" such that there is a "locally bijective morphism" \(\varphi : \mathcal {P}\rightarrow \mathcal {T}\) . Let \(t_i\) be a "proper tree" of \({\mathcal {ACD}}(\mathcal {T})\) and \(\tau ,\sigma _1,\sigma _2\in t_i\) nodes in \(t_i\) such that \(\sigma _1,\sigma _2\) are different "children" of \(\tau \) , and let \(l_1=\nu _i(\sigma _1)\) and \(l_2=\nu _i(\sigma _2)\) . If \(C_1\) and \(C_2\) are two "\(l_1 \) -SCC" and "\(l_2\) -SCC" in \(\mathcal {P}\) , respectively, then \(C_1\cap C_2= \emptyset \) .
Suppose there is a state \(q\in C_1\cap C_2\) . Since \(\varphi _V(q)\in {\mathit {States}}(l_1)\cap {\mathit {States}}(l_2)\) and \(l_1, l_2\) are "loops", there are finite "runs" \(\varrho _1,\varrho _2 \in {\mathpzc {Run}}_{\varphi _V(q)}\) such that \({\mathit {App}}(\varrho _1)=l_1\) and \(\mathit {App}(\varrho _2)=l_2\) . We can “simulate” these runs in \(C_1\) and \(C_2\) thanks to property (REF ), producing runs \(\varphi ^{-1}(\varrho _1)\) and \(\varphi ^{-1}(\varrho _2)\) in \({\mathpzc {Run}}_{\mathcal {P},q}\) arriving at \(q_1=\mathit {Last}(\varphi ^{-1}(\varrho _1))\) and \(q_2=\mathit {Last}(\varphi ^{-1}(\varrho _2))\) . Since \(C_1\) and \(C_2\) are "\(l_1,l_2\) -SCC", there are finite runs \(w_1\in {\mathpzc {Run}}_{\mathcal {P},q_1}\) and \(w_2\in {\mathpzc {Run}}_{\mathcal {P},q_2}\) such that \(\mathit {Last}(w_1)=\mathit {Last}(w_2)=q\) , so the runs \(\varphi ^{-1}(\varrho _1)w_1\) and \(\varphi ^{-1}(\varrho _2)w_2\) start and end in \(q\) . We remark that in \(\mathcal {T}\) the runs \(\varphi (\varphi ^{-1}(\varrho _1)w_1)=\varrho _1\varphi _E(w_1)\) and \(\varphi (\varphi ^{-1}(\varrho _2)w_2)=\varrho _2\varphi _E(w_2)\) start and end in \(\varphi _V(q)\) and visit, respectively, all the edges in \(l_1\) and in \(l_2\) . From the definition of \({\mathcal {ACD}}(\mathcal {T})\) we have that \(l_1\in \mathcal {F}\; \Leftrightarrow \; l_2 \in \mathcal {F}\; \Leftrightarrow \; l_1\cup l_2 \notin \mathcal {F}\) . Since \(\varphi \) preserves the "acceptance condition", the minimal priority produced by \(\varphi ^{-1}(\varrho _1)w_1\) has the same parity as that of \(\varphi ^{-1}(\varrho _2)w_2\) , but concatenating both runs we must produce a minimal priority of the opposite parity, arriving at a contradiction.
Let \(\mathcal {T}\) be a "Muller transition system" and \({\mathcal {P}_{\mathcal {ACD}(\mathcal {T})}}\) its "ACD-parity transition system". For each tree \(t_i\) of \({\mathcal {ACD}}(\mathcal {T})\) , each node \(\tau \in t_i\) and each state \(q\in {\mathit {States}}_i(\tau )\) we write:
\( \psi _{\tau ,i,q}=|{\mathit {Branch}}({\mathit {Subtree}}_{t_q}(\tau ))|=|\lbrace (q,i,\beta )\in \mathcal {P}_{\mathcal {ACD}(\mathcal {T})} \; : \; \beta \, \text{ passes through } \tau \rbrace |,\)
\(\Psi _{\tau ,i}=\sum \limits _{q\in {\mathit {States}}_i(\tau )}\psi _{\tau ,i,q}=|\lbrace (q,i,\beta )\in \mathcal {P}_{\mathcal {ACD}(\mathcal {T})} \; : \;q\in V \text{ of "index" } i \text{ and } \beta \, \text{ passes through } \tau \rbrace | .\)
If we consider the roots of the trees in \({\mathcal {ACD}}(\mathcal {T})\) , then each \(\Psi _{\varepsilon ,i}\) is the number of states in \(\mathcal {P}_{\mathcal {ACD}(\mathcal {T})}\) associated to the tree \(t_i\) , i.e., \(\Psi _{\varepsilon ,i}=|\lbrace (q,i,\beta )\in \mathcal {P}_{\mathcal {ACD}(\mathcal {T})} \; : \; q\in V,\; \beta \in {\mathit {Branch}}(t_q)\rbrace |\) . Therefore
\( |\mathcal {P}_{\mathcal {ACD}(\mathcal {T})}|=\sum \limits _{i=1}^{r}\Psi _{\varepsilon ,i} .\)
[Proof of theorem REF]
Let \(\mathcal {T}=(V,E,\mathit {Source},\mathit {Target},I_0,\mathcal {F})\) be a "Muller transition system", \(\mathcal {P}_{\mathcal {ACD}(\mathcal {T})}\) the "ACD-parity transition system" of \(\mathcal {T}\) and \(\mathcal {P}=(V^{\prime },E^{\prime },\mathit {Source}^{\prime },\mathit {Target}^{\prime },I_0^{\prime },p^{\prime }:E^{\prime }\rightarrow \mathbb {N} )\) a parity transition system such that there is a "locally bijective morphism" \(\varphi : \mathcal {P}\rightarrow \mathcal {T}\) .
First of all, we construct two modified transition systems \(\widetilde{\mathcal {T}}=(V,\widetilde{E},\widetilde{\mathit {Source}},\widetilde{\mathit {Target}},I_0,\widetilde{\mathcal {F}})\) and \(\widetilde{\mathcal {P}}=(V^{\prime },\widetilde{E^{\prime }},\widetilde{\mathit {Source}}^{\prime },\widetilde{\mathit {Target}}^{\prime }, I_0^{\prime }, \widetilde{p^{\prime }}:\widetilde{E^{\prime }}\rightarrow \mathbb {N} )\) such that:
1. Each vertex of \(V\) belongs to a "strongly connected component".
2. All leaves \(\tau \in t_i\) verify \(|{\mathit {States}}_i(\tau )|=1\) , for every \(t_i\in {\mathcal {ACD}}(\widetilde{\mathcal {T}})\) .
3. Nodes \(\tau \in t_i\) verify \({\mathit {States}}_i(\tau )=\bigcup _{\sigma \in \mathit {Children}(\tau )}\mathit {States}_i(\sigma )\) , for every \(t_i\in {\mathcal {ACD}}(\widetilde{\mathcal {T}})\) .
4. There is a "locally bijective morphism" \(\widetilde{\varphi }: \widetilde{\mathcal {P}} \rightarrow \widetilde{\mathcal {T}}\) .
5. \(|\mathcal {P}_{\mathcal {ACD}(\widetilde{\mathcal {T}})}|\le |\widetilde{\mathcal {P}}| \; \Rightarrow \; |\mathcal {P}_{\mathcal {ACD}(\mathcal {T})}|\le |\mathcal {P}|\) .
We define the transition system \(\widetilde{\mathcal {T}}\) by adding for each \(q\in V\) two new edges \(e_{q,1}, e_{q,2}\) with \(\widetilde{\mathit {Source}}(e_{q,j})=\widetilde{\mathit {Target}}(e_{q,j})=q\) , for \(j=1,2\) . The modified "acceptance condition" \(\widetilde{\mathcal {F}}\) is given as follows: let \(C\subseteq \widetilde{E}\) .
If \(C\cap E\ne \emptyset \) , then \(C\in \widetilde{\mathcal {F}} \; \Leftrightarrow \; C\cap E \in \mathcal {F}\) (the occurrence of edges \(e_{q,j}\) does not change the "acceptance condition").
If \(C\cap E = \emptyset \) and some edge of the form \(e_{q,1}\) occurs in \(C\) , for some \(q\in V\) , then \(C\in \widetilde{\mathcal {F}}\) . If all edges of \(C\) are of the form \(e_{q,2}\) , then \(C\notin \widetilde{\mathcal {F}}\) .
It is easy to verify that the "transition system" \(\widetilde{\mathcal {T}}\) and \({\mathcal {ACD}}(\widetilde{\mathcal {T}})\) verify conditions 1, 2 and 3. We perform equivalent operations in \(\mathcal {P}\) , obtaining \(\widetilde{\mathcal {P}}\) : we add a pair of edges \(e_{q,1}, e_{q,2}\) for each vertex in \(\mathcal {P}\) , and we assign them priorities \(\widetilde{p}(e_{q,1})=\eta +\epsilon \) and \(\widetilde{p}(e_{q,2})=\eta +\epsilon +1\) , where \(\eta \) is the maximum of the priorities in \(\mathcal {P}\) and \(\epsilon =0\) if \(\eta \) is even, \(\epsilon =1\) if \(\eta \) is odd.
We extend the "morphism" \(\varphi \) to \(\widetilde{\varphi }: \widetilde{\mathcal {P}} \rightarrow \widetilde{\mathcal {T}}\) , conserving "local bijectivity", by setting \(\widetilde{\varphi }_E(e_{q,j})=e_{\varphi (q),j}\) for \(j=1,2\) . Finally, it is not difficult to verify that the underlying graphs of \(\mathcal {P}_{\mathcal {ACD}(\widetilde{\mathcal {T}})}\) and \(\widetilde{\mathcal {P}_{\mathcal {ACD}(\mathcal {T})}}\) are equal (the only differences are the priorities associated to the edges \(e_{q,j}\) ), so in particular \(|\mathcal {P}_{\mathcal {ACD}(\widetilde{\mathcal {T}})}|=|\widetilde{\mathcal {P}_{\mathcal {ACD}(\mathcal {T})}}|=|\mathcal {P}_{\mathcal {ACD}(\mathcal {T})}|\) . Consequently, \(|\mathcal {P}_{\mathcal {ACD}(\widetilde{\mathcal {T}})}|\le |\widetilde{\mathcal {P}}|\) implies \(|\mathcal {P}_{\mathcal {ACD}(\mathcal {T})}|\le |\widetilde{\mathcal {P}}|=|\mathcal {P}|\) . Therefore, it suffices to prove the theorem for the modified systems \(\widetilde{\mathcal {T}}\) and \(\widetilde{\mathcal {P}}\) . From now on, we suppose that \(\mathcal {T}\) verifies conditions 1, 2 and 3 above. In particular, all trees of \({\mathcal {ACD}}(\mathcal {T})\) are "proper trees". It also holds that for each \(q\in V\) and each \(\tau \in t_i\) that is not a leaf, \(\psi _{\tau ,i,q}=\sum \limits _{\sigma \in \mathit {Children}(\tau )}\psi _{\sigma ,i,q}\) .
Therefore, for each \(\tau \in t_i\) that is not a leaf, \(\Psi _{\tau ,i}=\sum \limits _{\sigma \in \mathit {Children}(\tau )}\Psi _{\sigma ,i}\) , and for each leaf \(\sigma \in t_i\) we have \(\Psi _{\sigma ,i}=1\) . Vertices of \(V^{\prime }\) are partitioned into the preimages by \(\varphi \) of the roots of the trees \(\lbrace t_1,\dots ,t_r\rbrace \) of \({\mathcal {ACD}}(\mathcal {T})\) :
\( V^{\prime }= \bigcup \limits _{i=1}^r\varphi _V^{-1}( {\mathit {States}}_i(\varepsilon )) \quad \text{ and } \quad \varphi _V^{-1}( {\mathit {States}}_i(\varepsilon )) \cap \varphi _V^{-1}( {\mathit {States}}_j(\varepsilon ))=\emptyset \text{ for } i\ne j .\)
Claim: for each \(i=1,\dots ,r\) and each \(\tau \in t_i\) , if \(C_\tau \) is a non-empty "\(\nu _i(\tau )\) -SCC", then \(|C_\tau |\ge \Psi _{\tau ,i}\) .
Let us suppose this claim holds. In particular, \((\varphi _V^{-1}( {\mathit {States}}_i(\varepsilon )),\varphi _E^{-1}( \nu _i(\varepsilon )))\) verifies property (REF ) from definition REF , so from the proof of lemma REF we deduce that it contains a \(\nu _i(\varepsilon )\) -SCC and therefore \(|\varphi _V^{-1}( {\mathit {States}}_i(\varepsilon ))| \ge \Psi _{\varepsilon ,i}\) , so
\(|\mathcal {P}|=\sum \limits _{i=1}^r |\varphi _V^{-1}( {\mathit {States}}_i(\varepsilon ))|\ge \sum \limits _{i=1}^r \Psi _{\varepsilon ,i}=|\mathcal {P}_{\mathcal {ACD}(\mathcal {T})}|,\)
concluding the proof.
[Proof of the claim]
Let \(C_\tau \) be a "\(\nu _i(\tau )\) -SCC". Let us prove \(|C_\tau |\ge \Psi _{\tau ,i}\) by induction on the "height of the node" \(\tau \) . If \(\tau \) is a leaf (in particular if its height is 1), \(\Psi _{\tau ,i}=1\) and the claim is clear.
If \(\tau \) of height \(h>1\) is not a leaf, then it has children \(\sigma _1,\dots , \sigma _k\) , all of them of height \(h-1\) . Thanks to lemmas REF and REF , for \(j=1,\dots ,k\) , there exist disjoint "\(\nu _i(\sigma _j)\) -SCC" included in \(C_\tau \) , named \(C_1,\dots ,C_k\) , so by induction hypothesis
\( |C_\tau | \ge \sum \limits _{j=1}^k |C_j| \ge \sum \limits _{j=1}^k \Psi _{\sigma _j,i}= \Psi _{\tau ,i}. \)
From the hypothesis of theorem REF we cannot deduce that there is a "morphism" from \(\mathcal {P}\) to \(\mathcal {P}_{\mathcal {ACD}(\mathcal {T})}\) or vice versa. To produce a counter-example it is enough to recall the “non-determinism” in the construction of \(\mathcal {P}_{\mathcal {ACD}(\mathcal {T})}\) : two different orderings of the nodes of the trees of \({\mathcal {ACD}}(\mathcal {T})\) will produce two incomparable, but minimal in size, parity transition systems that admit a "locally bijective morphism" to \(\mathcal {T}\) .
However, we can prove the following result:
If \(\varphi _1: \mathcal {P}_{\mathcal {ACD}(\mathcal {T})} \rightarrow \mathcal {T}\) is the "locally bijective morphism" described in the proof of proposition REF , then for every state \(q\) in \(\mathcal {T}\) of "index" \(i\) :
\( |\varphi _1^{-1}(q)|=\psi _{\varepsilon ,i,q}\le |\varphi ^{-1}(q)| \;, \; \text{ for every "locally bijective morphism" } \varphi : \mathcal {P}\rightarrow \mathcal {T}.\)
It is enough to remark that if \(q \in {\mathit {States}}_i(\tau )\) , then any "\(\nu _i(\tau )\) -SCC" \(C_\tau \) of \(\mathcal {P}\) will contain some state in \(\varphi ^{-1}(q)\) . We prove by induction as in the proof of the claim that \(\psi _{\tau ,i,q} \le |C_\tau \cap \varphi ^{-1}(q)|\) .
Applications
Determinisation of Büchi automata
In many applications, such as the synthesis of reactive systems from \(LTL\) formulas, we need "deterministic" automata. For this reason, the determinisation of automata is usually a crucial step. Since McNaughton showed in [11]} that Büchi automata can be transformed into deterministic Muller automata recognizing the same language, much effort has been put into finding an efficient way of performing this transformation. The first efficient solution was proposed by Safra in [12]}, producing a deterministic automaton using a "Rabin condition". Due to the many advantages of "parity conditions" (simplicity, easy complementation of automata, memoryless strategies for games, closure under union and intersection...), determinisation constructions towards parity automata have been proposed too. In [13]}, Piterman provides a construction producing a parity automaton that in addition improves the state-complexity of Safra's construction. In [14]}, Schewe breaks down Piterman's construction into two steps: the first one goes from a non-deterministic Büchi automaton \(\mathcal {B}\) to a "Rabin automaton" (\("\mathcal {R}_\mathcal {B}"\) ), and the second one gives Piterman's parity automaton (\("\mathcal {P}_\mathcal {B}"\) ).
In this section we prove that there is a "locally bijective morphism" from \("\mathcal {P}_\mathcal {B}"\) to \("\mathcal {R}_\mathcal {B}"\) , and therefore we would obtain a smaller parity automaton applying the "ACD-transformation" in the second step. We provide an example (example REF ) in which the "ACD-transformation" provides a strictly better parity automaton.
From non-deterministic Büchi to deterministic Rabin automata
In [14]}, Schewe presents a construction of a deterministic "Rabin automaton" \(""\mathcal {R}_\mathcal {B}""\) from a non-deterministic "Büchi automaton" \(\mathcal {B}\) . The set of states of the automaton \(\mathcal {R}_\mathcal {B}\) is formed of what he calls ""history trees"". The number of history trees for a Büchi automaton of size \(n\) is given by the function \(\mathit {hist}(n)\) , that is shown to be in \(o((1.65n)^n)\) in [14]}. This construction is presented starting from a state-labelled Büchi automaton. A construction starting from a transition-labelled Büchi automaton can be found in [17]}. In [18]}, Colcombet and Zdanowski proved the worst-case optimality of the construction.
[[14]}]
Given a non-deterministic "Büchi automaton" \(\mathcal {B}\) with \(n\) states, there is an effective construction of a deterministic "Rabin automaton" \(\mathcal {R}_\mathcal {B}\) with \(\mathit {hist}(n)\) states and using \(2^{n-1}\) Rabin pairs that recognizes the language \("\mathcal {L}(\mathcal {B})"\) .
[[18]}]
For every \(n\in \mathbb {N} \) there exists a non-deterministic Büchi automaton \(B_n\) of size \(n\) such that every deterministic Rabin automaton recognizing \(\mathcal {L}(\mathcal {B}_n)\) has at least \(\mathit {hist}(n)\) states.
From non-deterministic Büchi to deterministic parity automata
In order to build a deterministic "parity automaton" \(""\mathcal {P}_\mathcal {B}""\) that recognizes the language of a given "Büchi automaton" \(\mathcal {B}\) , Schewe transforms the automaton \("\mathcal {R}_\mathcal {B}"\) into a parity one using what he calls a later introduction record (LIR). The LIR construction can be seen as adding an ordering (satisfying some restrictions) to the nodes of the "history trees". States of \(\mathcal {P}_\mathcal {B}\) are therefore pairs consisting of a history tree and a LIR. In this way we obtain a parity automaton similar to the one given by Piterman's determinisation procedure [13]}.
The worst-case optimality of this construction was proved in [22]}, [17]}, generalising the methods of [18]}.
[[14]}]
Given a non-deterministic "Büchi automaton" \(\mathcal {B}\) with \(n\) states, there is an effective construction of a deterministic "parity automaton" \("\mathcal {P}_\mathcal {B}"\) with \(O(n!(n-1)!)\) states and using \(2n\) priorities that recognizes the language \("\mathcal {L}(\mathcal {B})"\) .
[[22]}, [17]}]
For every \(n\in \mathbb {N} \) there exists a non-deterministic Büchi automaton \(B_n\) of size \(n\) such that \("\mathcal {P}_\mathcal {B}"\) has less than \(1.5\) times as many states as a minimal deterministic parity automaton recognizing \(\mathcal {L}(\mathcal {B}_n)\) .
A locally bijective morphism from \(\mathcal {P}_\mathcal {B}\) to \(\mathcal {R}_\mathcal {B}\)
Given a "Büchi automaton" \(\mathcal {B}\) and its determinisations to Rabin and parity automata \("\mathcal {R}_\mathcal {B}"\) and \("\mathcal {P}_\mathcal {B}"\) , there is a "locally bijective morphism" \(\varphi : \mathcal {P}_\mathcal {B}\rightarrow \mathcal {R}_\mathcal {B}\) .
Observing the construction of \(\mathcal {R}_\mathcal {B}\) and \(\mathcal {P}_\mathcal {B}\) in [14]}, we see that the states of \(\mathcal {P}_\mathcal {B}\) are of the form \((T,\chi )\) with \(T\) a state of \(\mathcal {R}_\mathcal {B}\) (a "history tree"), and \(\chi : T \rightarrow \lbrace 1,\dots ,|\mathcal {B}|\rbrace \) a LIR (which can be seen as an ordering of the nodes of \(T\) ).
It is easy to verify that the mapping \(\varphi _V((T,\chi ))=T\) defines a morphism \(\varphi : \mathcal {P}_\mathcal {B}\rightarrow \mathcal {R}_\mathcal {B}\) (from fact REF there is only one possible definition of \(\varphi _E\) ). Since the automata are deterministic, \(\varphi \) is a "locally bijective morphism".
Let \(\mathcal {B}\) be a "Büchi automaton" and \("\mathcal {R}_\mathcal {B}"\) , \("\mathcal {P}_\mathcal {B}"\) the deterministic Rabin and parity automata obtained by applying the Piterman-Schewe construction to \(\mathcal {B}\) . Then, the parity automaton \(\mathcal {P}_{\mathcal {ACD}(\mathcal {R}_B)}\) verifies
\( |\mathcal {P}_{\mathcal {ACD}(\mathcal {R}_B)}| \le |\mathcal {P}_\mathcal {B}| \)
and \("\mathcal {P}_{\mathcal {ACD}(\mathcal {R}_B)}"\) uses no more priorities than \("\mathcal {P}_\mathcal {B}"\) .
It is a direct consequence of propositions REF , REF and theorem REF .
Furthermore, by proposition REF , \("\mathcal {P}_{\mathcal {ACD}(\mathcal {R}_B)}"\) uses the optimal number of priorities to recognize \("\mathcal {L}(\mathcal {B})"\) , and we obtain this information directly from the "alternating cycle decomposition" of \(\mathcal {R}_\mathcal {B}\) , \({\mathcal {ACD}}(\mathcal {R}_\mathcal {B})\) .
In example REF we show a case in which \(|\mathcal {P}_{\mathcal {ACD}(\mathcal {R}_B)}| < |\mathcal {P}_\mathcal {B}|\) and in which the gain in the number of priorities is clear.
In [18]} and [17]}, the lower bounds for the determinisation of "Büchi automata" towards Rabin and parity automata were shown using the family of ""full Büchi automata"" \(\lbrace \mathcal {B}_n\rbrace _{n\in \mathbb {N} }\) , \(|\mathcal {B}_n|=n\) . The automaton \(\mathcal {B}_n\) can simulate any other Büchi automaton of the same size. For these automata, the constructions \(\mathcal {P}_{\mathcal {B}_n}\) and \(\mathcal {P}_{\mathcal {ACD}(\mathcal {R}_{B_n})}\) coincide.
We present a non-deterministic Büchi automaton \(\mathcal {B}\) such that the "ACD-parity automaton" of \("\mathcal {R}_\mathcal {B}"\) has strictly fewer states and uses strictly fewer priorities than \("\mathcal {P}_\mathcal {B}"\) .
In figure REF we show the automaton \(\mathcal {B}\) over the alphabet \(\Sigma =\lbrace a,b,c\rbrace \) . Accepting transitions for the "Büchi condition" are represented with a black dot on them. An accessible "strongly connected component" \(\mathcal {R}_\mathcal {B}^{\prime }\) of the determinisation to a "Rabin automaton" \("\mathcal {R}_\mathcal {B}"\) is shown in figure REF . It has 2 states that are "history trees" (as defined in [14]}). There is a "Rabin pair" \((E_\tau ,F_\tau )\) for each node appearing in some "history tree" (four in total), and these are represented by an array with four positions. We mark each transition, at each position \(\tau \) of the array, with a green check mark, a red \(\mathbf {X}\) , or an orange \(\bullet \) depending on whether this transition belongs to \(E_\tau \) , to \(F_\tau \) or to neither of them, respectively (we can always suppose \(E_\tau \cap F_\tau = \emptyset \) ).
Figure REF shows the "alternating cycle decomposition" corresponding to \(\mathcal {R}_\mathcal {B}^{\prime }\) . We observe that the tree of \({\mathcal {ACD}}(\mathcal {R}_\mathcal {B}^{\prime })\) has a single branch of height 3.
That is, the Rabin condition over \(\mathcal {R}_\mathcal {B}^{\prime }\) is already a "\([1,3]\) -parity condition" and \(\mathcal {P}_{\mathcal {ACD}(\mathcal {R}_B^{\prime })}=\mathcal {R}_\mathcal {B}^{\prime }\) . In particular it has 2 states and uses priorities in \([1,3]\) .
On the other hand, in figure REF we show the automaton \(\mathcal {P}_\mathcal {B}^{\prime }\) , which has 3 states and uses priorities in \([3,7]\) . The full automata \(\mathcal {R}_\mathcal {B}\) and \(\mathcal {P}_\mathcal {B}\) are too large to be depicted here, but the three states shown in figure REF are indeed accessible from the initial state of \(\mathcal {P}_\mathcal {B}\) .
<FIGURE><FIGURE><FIGURE><FIGURE>
On relabelling of transition systems by acceptance conditions
In this section we use the information given by the "alternating cycle decomposition" to provide characterisations of "transition systems" that can be labelled with "parity", "Rabin", "Streett" or \("\mathit {Weak}_k"\) conditions, generalising the results of [32]}.
As a consequence, these yield simple proofs of two results about the possibility of defining different classes of acceptance conditions in a deterministic automaton. Theorem REF , first proven in [33]}, asserts that if we can define a Rabin and a Streett condition on top of an underlying automaton \(\mathcal {A}\) such that it recognises the same language \(L\) with both conditions, then we can define a parity condition in \(\mathcal {A}\) recognising \(L\) too. Theorem REF states that if we can define Büchi and co-Büchi conditions on top of an automaton \(\mathcal {A}\) recognising the language \(L\) , then we can define a \("\mathit {Weak}"\) condition over \(\mathcal {A}\) such that it recognises \(L\) .
First, we extend the definition REF of section REF to the "alternating cycle decomposition".
Given a Muller transition system \((V,E,\mathit {Source},\mathit {Target},I_0,\mathcal {F})\) , we say that its "alternating cycle decomposition" \({\mathcal {ACD}}(\) is a
""Rabin ACD"" if for every state \(q\in V\) , the tree \("t_q"\) has "Rabin shape".
""Streett ACD"" if for every state \(q\in V\) , the tree \("t_q"\) has "Streett shape".
""parity ACD"" if for every state \(q\in V\) , the tree \("t_q"\) has "parity shape".
""\([1,\eta ]\) -parity ACD"" (resp. \([0,\eta -1]\) -parity ACD) if it is a parity ACD, every tree has "height" at most \(\eta \) and trees of height \(\eta \) are (ACD)odd (resp. (ACD)even).
""Büchi ACD"" if it is a \([0,1]\) -parity ACD.
""co-Büchi ACD"" if it is a \([1,2]\) -parity ACD.
""\(\mathit {Weak}_k\) ACD"" if it is a parity ACD and every tree \((t_i,\nu _i) \in {\mathcal {ACD}}(\) has "height" at most \(k\) .
The next proposition follows directly from the definitions.
Let \( be a Muller transition system. Then: \({\mathcal {ACD}}(\) is a "parity ACD" if and only if it is a "Rabin ACD" and a "Streett ACD"; and \({\mathcal {ACD}}(\) is a "\(\mathit {Weak}_k\) ACD" if and only if it is a "\([0,k]\) -parity ACD" and a "\([1,k+1]\) -parity ACD".
Let \((V,E,\mathit {Source},\mathit {Target},I_0,\mathcal {F})\) be a Muller transition system. The following conditions are equivalent:
We can define a "Rabin condition" over \( that is "equivalent to" \) F\( over \) .
For every pair of loops \(l_1,l_2\in {\mathpzc {Loop}}(\) , if \(l_1\notin \mathcal {F}\) and \(l_2\notin \mathcal {F}\) , then \(l_1\cup l_2 \notin \mathcal {F}\) .
\({\mathcal {ACD}}(\) is a "Rabin ACD".
(\(1 \Rightarrow 2\) )
Suppose that \( uses a Rabin condition with Rabin pairs \) (E1,F1),...,(Er,Fr)\(. Let \) l1\( and \) l2\( be two rejecting loops. If \) l1l2\( was accepting, then there would be some Rabin pair \) (Ej,Fj)\( and some edge \) el1l2\( such that \) eEj\( and \) eFj\(. However, the edge \) e\( belongs to \) l1\( or to \) l2\(, and the loop it belongs to should be accepting too.\)
(\(2 \Rightarrow 3\) )
Let \(q\in V\) of "index" \(i\) , and \("t_q"\) be the subtree of \({\mathcal {ACD}}(\) associated to \(q\) . Suppose that there is a node \(\tau \in t_q\) such that \((node){p_i}(\tau )\) is even ("round" node) and that it has two different children \(\sigma _1\) and \(\sigma _2\) . The "loops" \(\nu _i(\sigma _1)\) and \(\nu _i(\sigma _2)\) are maximal rejecting loops contained in \(\nu _i(\tau )\) , and since they share the state \(q\) , their union is also a loop that must verify \( \nu _i(\sigma _1) \cup \nu _i(\sigma _2)\in \mathcal {F}\) , contradicting the hypothesis.
(\(3 \Rightarrow 1\) )
We define a "Rabin condition" over \(. For each tree \) ti\( in \) ACD(\( and each "round" node \) ti\( (\) (node)pi()\( even) we define the Rabin pair \) (Ei,,Fi,)\( given by:\)\( E_{i,\tau }=\nu _i(\tau )\setminus \bigcup _{\sigma \in "\mathit {Children}"(\tau )}\nu _i(\sigma ) \quad , \qquad F_{i,\tau }=E \setminus \nu _i(\tau ). \)\(\) Let us show that this condition is "equivalent to" \(\mathcal {F}\) over the transition system \(. We begin by proving the following consequence of being a "Rabin ACD":\begin{claim*}If \tau is a "round" node in the tree t_i of {\mathcal {ACD}}(, and l\in {\mathpzc {Loop}}( is a "loop" such that l\subseteq \nu _i(\tau ) and l\nsubseteq \nu _i(\sigma ) for any child \sigma of \tau , then there is some edge e\in l such that e\notin \nu _i(\sigma ) for any child \sigma of \tau .\end{claim*}\begin{claimproof}Since for each state q\in V the tree "t_q" has "Rabin shape", it is verified that {\mathit {States}}_i(\sigma )\cap {\mathit {States}}_i(\sigma ^{\prime })=\emptyset for every pair of different children \sigma , \sigma ^{\prime } of \tau . Therefore, the union of \nu _i(\sigma ) and \nu _i(\sigma ^{\prime }) is not a loop, and any loop l contained in this union must be contained either in \nu _i(\sigma ) or in \nu _i(\sigma ^{\prime }).\end{claimproof}\)
Let \(\varrho \in {\mathpzc {Run}}_{T}\) be a "run" in \(, let \) lLoop(\( be the loop of \) such that \(\mathit {Inf}(\varrho )=l\) and let \(i\) be the "index" of the edges in this loop. Let \(\tau \) be a maximal node in \(t_i\) (for \({\sqsubseteq }\) ) such that \(l\subseteq \nu _i(\tau )\) . If \(l\in \mathcal {F}\) , this node \(\tau \) is a round node, and from the previous claim it follows that there is some edge \(e\in l\) such that \(e\) does not belong to any child of \(\tau \) , so \(e\in E_{i,\tau }\) and \(e\notin F_{i,\tau }\) , and the "run" \(\varrho \) is accepted by the Rabin condition too. If \(l\notin \mathcal {F}\) , then for every round node \(\tau \) , if \(l\subseteq \nu _i(\tau )\) then \(l\subseteq \nu _i(\sigma )\) for some child \(\sigma \) of \(\tau \) . Therefore, for every Rabin pair \((E_{i,\tau },F_{i,\tau })\) and every \(e\in l\) , it is verified that \(e\in E_{i,\tau } \, \Rightarrow \, e\in F_{i,\tau }\) .
The Rabin condition presented in this proof does not necessarily use the optimal number of Rabin pairs required to define a Rabin condition "equivalent to" \(\mathcal {F}\) over \(.\)
Let \((V,E,\mathit {Source},\mathit {Target},I_0,\mathcal {F})\) be a Muller transition system. The following conditions are equivalent:
We can define a "Streett condition" over \( that is "equivalent to" \) F\( over \) .
For every pair of loops \(l_1,l_2\in {\mathpzc {Loop}}(\) , if \(l_1\in \mathcal {F}\) and \(l_2\in \mathcal {F}\) , then \(l_1\cup l_2 \in \mathcal {F}\) .
\({\mathcal {ACD}}(\) is a "Streett ACD".
We omit the proof of proposition REF , as it is the dual case of proposition REF .
Let \((V,E,\mathit {Source},\mathit {Target},I_0,\mathcal {F})\) be a Muller transition system. The following conditions are equivalent:
We can define a "parity condition" over \( that is "equivalent to" \) F\( over \) .
For every pair of loops \(l_1,l_2\in {\mathpzc {Loop}}(\) , if \(l_1\in \mathcal {F}\, \Leftrightarrow \, l_2\in \mathcal {F}\) , then \(l_1\cup l_2 \in \mathcal {F}\, \Leftrightarrow \,l_1\in \mathcal {F}\) . That is, union of loops having the same “accepting status” preserves their “accepting status”.
\({\mathcal {ACD}}(\) is a "parity ACD".
Moreover, the parity condition we can define over \( is a "\) [1,]\(-parity" (resp. \) [0,-1]\(-parity~/~\) "Weakk"\() condition if and only if \) ACD(\( is a "\) [1,]\(-parity ACD" (resp. \) [0,-1]\(-parity ACD~/~"\) Weakk\( ACD").\)
(\(1 \Rightarrow 2\) )
Suppose that \( uses a parity acceptance condition with the priorities given by \) p:E N \(. Then, since \) l1\( and \) l2\( are both accepting or both rejecting, \) p1=p(l1)\( and \) p2=p(l2)\( have the same parity, which is also the parity of \) p(l1 l2)=min{p1,p2}\(.\)
(\(2 \Rightarrow 3\) )
Let \(q\in V\) of "index" \(i\) , and \("t_q"\) be the subtree of \({\mathcal {ACD}}(\) associated to \(q\) . Suppose that there is a node \(\tau \in t_q\) with two different children \(\sigma _1\) and \(\sigma _2\) . The loops \(\nu _i(\sigma _1)\) and \(\nu _i(\sigma _2)\) are different maximal loops with the property \(\nu _i(\sigma )\subseteq \nu _i(\tau )\) and \(\nu _i(\sigma )\in \mathcal {F}\, \Leftrightarrow \, \nu _i(\tau ) \notin \mathcal {F}\) . Since they share the state \(q\) , their union is also a loop contained in \(\nu _i(\tau )\) and then
\( \nu _i(\sigma _1) \cup \nu _i(\sigma _2)\in \mathcal {F}\; \Leftrightarrow \; \nu _i(\tau ) \in \mathcal {F}\; \Leftrightarrow \; \nu _i(\sigma _1)\notin \mathcal {F}\)
contradicting the hypothesis.
(\(3 \Rightarrow 1\) )
From the construction of the "ACD-transformation", it follows that \("\mathcal {P}_{\mathcal {ACD}(}"\) is just a relabelling of \( with an equivalent parity condition.\)
For the implication from right to left of the last statement, we remark that if the trees of \({\mathcal {ACD}}(\) have priorities assigned in \([\mu ,\eta ]\) , then the parity transition system \("\mathcal {P}_{\mathcal {ACD}(}"\) will use priorities in \([\mu ,\eta ]\) . If \({\mathcal {ACD}}(\) is a "\(\mathit {Weak}_k\) ACD", then in each "strongly connected component" of \("\mathcal {P}_{\mathcal {ACD}(}"\) the number of priorities used will be the same as the "height" of the corresponding tree of \({\mathcal {ACD}}(\) (at most \(k\) ).
For the other implication it suffices to remark that the priorities assigned by \({\mathcal {ACD}}(\) are optimal (proposition REF ).
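Condition (2) of the three propositions above gives a directly checkable loop-closure test. The following is a minimal Python sketch over an explicitly listed family of loops; the toy loop family, given as a dict from frozensets of edge names to accepting flags, is our own illustrative example and not the ACD construction itself:

```python
from itertools import combinations

def is_rabin(loops):
    """Rabin check: the union of two rejecting loops (when the union is
    itself a loop of the family) must also be rejecting."""
    rejecting = [l for l, acc in loops.items() if not acc]
    for l1, l2 in combinations(rejecting, 2):
        u = l1 | l2
        if u in loops and loops[u]:
            return False
    return True

def is_streett(loops):
    """Dual Streett check: the union of two accepting loops must be accepting."""
    accepting = [l for l, acc in loops.items() if acc]
    for l1, l2 in combinations(accepting, 2):
        u = l1 | l2
        if u in loops and not loops[u]:
            return False
    return True

def is_parity(loops):
    # Mirrors the first item of the proposition: parity = Rabin and Streett.
    return is_rabin(loops) and is_streett(loops)

# Toy Muller condition over edges {a, b, c}: a loop is accepting iff it
# contains the edge 'a' (a Büchi, hence parity, condition).
loops = {
    frozenset('a'): True,
    frozenset('b'): False,
    frozenset('ab'): True,
    frozenset('bc'): False,
    frozenset('abc'): True,
}
```

On this example all three checks succeed, as expected for a Büchi-definable condition; a family where two rejecting loops union into an accepting one fails `is_rabin` but may still pass `is_streett`.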
Given a "transition system graph" \(G=(V,E,\mathit {Source},\mathit {Target},I_0)\) and a "Muller condition" \(\mathcal {F}\subseteq \mathcal {P}(E)\) , we can define a "parity condition" \(p:E\rightarrow \mathbb {N} \) "equivalent to" \(\mathcal {F}\) over \(G\) if and only if we can define a "Rabin condition" \(R\) and a "Streett condition" \(S\) over \(G\) such that
\( (G,\mathcal {F}) \,"\simeq "\, (G,R)\, "\simeq "\, (G,S) \) .
Moreover, if the Rabin condition \(R\) uses \(r\) Rabin pairs and the Streett condition \(S\) uses \(s\) Streett pairs, we can take the parity condition \(p\) using priorities in
\([1,2r+1]\) if \(r\le s\) .
\([0,2s]\) if \(s\le r\) .
The first statement is a consequence of the characterisations (2) or (3) from propositions REF , REF and REF .
For the second statement we remark that the trees of \({\mathcal {ACD}}(\) have "height" at most \(\min \lbrace 2r+1, 2s+1\rbrace \) . If \(r\ge s\) , then the height \(2r+1\) can only be reached by (ACD)odd trees, and if \(s\ge r\) , the height \(2s+1\) only by (ACD)even trees.
From the last statement of proposition REF and thanks to the second item of proposition REF , we obtain:
Given a "transition system graph" \(G\) and a Muller condition \(\mathcal {F}\) over \(G\) , there is an equivalent \("\mathit {Weak}_k"\) condition over \(G\) if and only if there are both \([0,k]\) and "\([1,k+1]\) -parity" conditions "equivalent to" \(\mathcal {F}\) over \(G\) .
In particular, there is an equivalent "Weak condition" if and only if there are "Büchi" and "co-Büchi" conditions equivalent to \(\mathcal {F}\) over \(G\) .
It is important to notice that the previous results are stated for non-labelled transition systems. We must be careful when translating these results to automata and formal languages. For instance, in [33]} there is an example of a non-deterministic automaton \(\mathcal {A}\) , such that we can put on top of it Rabin and Streett conditions \(R\) and \(S\) such that \(\mathcal {L}(\mathcal {A},R)=\mathcal {L}(\mathcal {A},S)\) , but we cannot put a parity condition on top of it recognising the same language. However, proposition REF allows us to obtain analogous results for "deterministic automata".
[[33]}]
Let \(\mathcal {A}\) be the "transition system graph" of a "deterministic automaton" with set of states \(Q\) . Let \(R\) be a Rabin condition over \(\mathcal {A}\) with \(r\) pairs and \(S\) a Streett condition over \(\mathcal {A}\) with \(s\) pairs such that \(\mathcal {L}(\mathcal {A},R)=\mathcal {L}(\mathcal {A},S)\) . Then, there exists a parity condition \(p: Q \times \Sigma \rightarrow \mathbb {N} \) over \(\mathcal {A}\) such that \(\mathcal {L}(\mathcal {A},p)=\mathcal {L}(\mathcal {A},R)=\mathcal {L}(\mathcal {A},S)\) .
Moreover,
if \(r\le s\) , we can take \(p\) to be a "\([1,2r+1]\) -parity condition".
if \(s\le r\) , we can take \(p\) to be a "\([0,2s]\) -parity condition".
Proposition REF implies that \((\mathcal {A},R)"\simeq "(\mathcal {A},S)\) , and after corollary REF , there is a parity condition \(p\) using the claimed priorities such that \((\mathcal {A},p)"\simeq "(\mathcal {A},R)\) . Therefore \(\mathcal {L}(\mathcal {A},p)=\mathcal {L}(\mathcal {A},R)\) (since, for both deterministic and non-deterministic automata, \((\mathcal {A},p)"\simeq "(\mathcal {A},R)\) implies \(\mathcal {L}(\mathcal {A},p)=\mathcal {L}(\mathcal {A},R)\) ).
Let \(\mathcal {A}\) be the "transition system graph" of a deterministic automaton and \(p\) and \(p^{\prime }\) be \([0,k]\) and "\([1,k+1]\) -parity conditions" respectively over \(\mathcal {A}\) such that \(\mathcal {L}(\mathcal {A},p)= \mathcal {L}(\mathcal {A},p^{\prime })\) . Then, there exists a \("\mathit {Weak}_k"\) condition \(W\) over \(\mathcal {A}\) such that \(\mathcal {L}(\mathcal {A},W)=\mathcal {L}(\mathcal {A},p)\) .
In particular, there is a \("\mathit {Weak}"\) condition \(W\) over \(\mathcal {A}\) such that \(\mathcal {L}(\mathcal {A},W)=L\) if and only if there are both "Büchi" and "co-Büchi" conditions \(B,B^{\prime }\) over \(\mathcal {A}\) such that \(\mathcal {L}(\mathcal {A},B)= \mathcal {L}(\mathcal {A},B^{\prime })=L\) .
It follows from proposition REF and corollary REF .
Conclusions
We have presented a transformation that, given a Muller "transition system", provides an equivalent "parity" transition system that has minimal size and uses an optimal number of priorities among those admitting a "locally bijective morphism" to the original Muller transition system. In order to describe this transformation we have introduced the "alternating cycle decomposition", a data structure that arranges all the information about the acceptance condition of the transition system and the interplay between this condition and the structure of the system.
We have shown in section REF how the alternating cycle decomposition can be useful for reasoning about acceptance conditions, and we hope that this representation of the information will be helpful in future work.
We have not discussed the complexity of effectively computing the "alternating cycle decomposition" of a Muller transition system. It is known that solving Muller games is \(\mathrm {PSPACE}\) -complete when the acceptance condition is given as a list of accepting sets of colours
[9]}. However, given a Muller game \(\mathcal {G}\) and the "Zielonka tree" of its Muller condition, we have a transformation into a parity game of size polynomial in the size of \(\mathcal {G}\) , so solving Muller games with this extra information is in \(\mathrm {NP}\cap \mathrm {co}\) -\(\mathrm {NP}\) . Also, in order to build \({\mathcal {ACD}}(\) we suppose that the Muller condition is expressed using as colours the set of edges of the game (that is, as an explicit Muller condition), and solving explicit Muller games is in \(\mathrm {PTIME}\) [8]}. Consequently, unless \(\mathrm {PSPACE}\) is contained in \(\mathrm {NP}\cap \mathrm {co}\) -\(\mathrm {NP}\) , we cannot compute the "Zielonka tree" of a Muller condition, nor the "alternating cycle decomposition" of a Muller transition system, in polynomial time.
| [18] | [[53596, 53600], [54009, 54013], [55047, 55051], [57921, 57925]] | https://openalex.org/W1507176903 |
9a31fc59-f7e5-446a-99a7-4f8678a3e95c | Layer Normalization (LN, [1]}).
Training instability is one major challenge for training LLMs [2]}, [3]}, [4]} (Cf. Figure REF in Appendix for collapses in training several 100B-scale models).
A proper choice of LNs can help stabilize the training of LLMs. We experiment with existing practices, e.g., Pre-LN [5]}, Post-LN [1]}, Sandwich-LN [7]}, which are unfortunately incapable of stabilizing our GLM-130B test runs (Cf. Figure REF (a) and Appendix REF for details).
| [4] | [[106, 109]] | https://openalex.org/W4224308101 |
f2462b2d-c018-46eb-ad7c-29c1be629fc3 | The 3D Parallel Strategy.
The data parallelism [1]} and tensor model parallelism [2]} are the de facto practices for training billion-scale models [3]}, [4]}.
To further handle the huge GPU memory requirement and the drop in overall GPU utilization resulting from applying tensor parallelism across nodes (as 40G rather than 80G A100s are used for training GLM-130B), we combine pipeline model parallelism with the other two strategies to form a 3D parallel strategy.
| [4] | [[153, 156]] | https://openalex.org/W4285294723 |
0d2f5fc0-0908-4665-97f7-81984e83ed44 | GLM-130B's few-shot (5-shot) performance on MMLU approaches GPT-3 (43.9) after viewing about 300B tokens in Figure REF .
It continues moving up as the training proceeds, achieving an accuracy of 44.8 when the training has to end (i.e., viewing 400B tokens in total). This aligns with the observation [1]} that most existing LLMs are far from adequately trained.
| [1] | [[300, 303]] | https://openalex.org/W4225591000 |
b974c183-ba8b-48d2-b4a6-845990172941 | BIG-bench [1]} benchmarks challenging tasks concerning models' ability on reasoning, knowledge, and commonsense.
Given evaluating on its 150 tasks is time-consuming for LLMs, we report the BIG-bench-lite—an official 24-task sub-collection—for now.
Observed from Figure REF and Table REF ,
GLM-130B outperforms GPT-3 175B and even PaLM 540B (4\(\times \) larger) in zero-shot setting.
This is probably owing to GLM-130B's bidirectional context attention and MIP, which has been proved to improve zero-shot results in unseen tasks [2]}, [3]}.
As the number of shots increases, GLM-130B's performance keeps going up, maintaining its outperformance over GPT-3
(Cf. Appendix REF and Table REF for details on each model and task).
| [3] | [[537, 540]] | https://openalex.org/W3205068155 |
272fa563-67df-43f1-b940-fc278539ab93 | Pre-Training.
Vanilla language modeling refers to decoder-only autoregressive models (e.g., GPT [1]}), but the term also covers other forms of self-supervised objectives on text. Recently, transformer-based [2]} language models have presented a fascinating scaling law: new abilities [3]} arise as models scale up, from 1.5B [4]} and 10B-scale language models [5]}, [6]}, [7]} to 100B-scale GPT-3 [8]}.
Later, many 100B-scale LLMs [9]}, [10]}, [11]}, [12]}, [13]}, [14]}, [15]}, [16]} emerged in both English and Chinese, but they are not available to the public or are only accessible via limited APIs.
This closedness severely stymies the development of LLMs.
GLM-130B's efforts, along with recent EleutherAI, OPT-175B [17]}, and BLOOM-176B [18]}, aim to offer high-quality open-sourced LLMs to our community.
What additionally distinguishes GLM-130B is its focus on making LLMs inclusive for all researchers and developers.
From the decision on its size and the choice of its architecture to the expenditure on enabling fast inference on popularized GPUs, GLM-130B is built on the belief that such inclusivity is the key to realizing LLMs' promised welfare for people.
| [11] | [[449, 453]] | https://openalex.org/W4226146865 |
79228fa2-eb51-4426-9cea-295af453c3c5 | Different from most existing LLMs including GPT-3 175B[1]}, PaLM 540B [2]}, Gopher [3]}, Chinchilla [4]}, LaMDA [5]}, FLAN [6]}, and many others, GLM-130B is open-sourced and aims to promote openness and inclusivity in LLM research.
| [2] | [[70, 73]] | https://openalex.org/W2473344385 |
04dc350e-050d-44ca-9389-7d1385895c03 | Pre-LN [1]}.
In contrast, Pre-LN places the normalization inside the residual blocks to reduce exploding gradients, and it has become dominant in existing language models, including all recent LLMs.
However, OPT-175B [2]}, BLOOM [3]}, and the text-to-image model CogView [4]} later observed that Pre-LN is still unable to stabilize training when models scale up to 100B parameters or meet multi-modal data.
This is also borne out in GLM-130B's preliminary experiments, where Pre-LN consistently crashes during the early training stage.
| [4] | [[246, 249]] | https://openalex.org/W3165647589 |
80c151f2-d653-4247-bffd-6473442a15b1 | Feed-forward Network.
Some recent efforts to improve the transformer architecture have focused on the FFN, including replacing it with GLU (adopted in PaLM). Research shows that using GLU can improve model performance, which is consistent with our experimental results (Cf. Table REF ). Specifically, we use GLU with the GeLU [1]} activation, as
\(\operatorname{FFN}_{\mathrm {GeGLU}}\left(x; W_1, V, W_{2}\right)=\left(\mathrm {GeLU}(x W_1) \otimes x V\right) W_{2}\)
| [1] | [[319, 322]] | https://openalex.org/W2899663614 |
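As a concrete illustration of the GeGLU formula above, here is a minimal NumPy sketch; the tanh approximation of GeLU and the toy weight shapes are our own illustrative choices, not GLM-130B's actual implementation:

```python
import numpy as np

def gelu(x):
    # Tanh approximation of GeLU (Hendrycks & Gimpel).
    return 0.5 * x * (1.0 + np.tanh(np.sqrt(2.0 / np.pi) * (x + 0.044715 * x**3)))

def ffn_geglu(x, W1, V, W2):
    """FFN_GeGLU(x; W1, V, W2) = (GeLU(x W1) * (x V)) W2, with an elementwise gate."""
    return (gelu(x @ W1) * (x @ V)) @ W2
```

With `x` of shape `(batch, d_model)`, both `W1` and `V` map to the hidden width and `W2` projects back, so the output shape matches the input's batch and model dimensions.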
ad8971a8-485c-4b35-a2c0-eb33debeee3a | As described in prior sections, GLM-130B's weight can be quantized into INT4 to reduce parameter redundancy in the inference drastically.
However, we also find that GLM-130B's activations (i.e., hidden states between layers) cannot be appropriately quantized, as they contain value outliers, as is also suggested in concurrent literature [1]}.
What is unique in GLM-130B is that 30% of its dimensions may present value outliers (Cf. Figure REF ), while other GPT-based LLMs (e.g., OPT-175B and BLOOM 176B) only have very few outlying dimensions [1]}.
Therefore, the solution to decompose matrix multiplication for higher-precision computation in outlying dimensions proposed in [1]} does not apply to GLM-130B.
<FIGURE> | [1] | [[338, 341], [545, 548], [678, 681]] | https://openalex.org/W4292119927 |
f399a3fc-6166-4666-a6d3-f03de17ca977 | Following practices in [1]}, [2]}, [3]}, [4]}, we include many prompted instruction datasets in GLM-130B's MIP training, which accounts for 5% of the training tokens.
All prompts for T0 datasets are from PromptSource [5]}, and prompts for DeepStruct datasets are newly created.
Their composition is shown in Table REF , comprising natural language understanding and generation datasets from T0 [3]} and PromptSource [5]}, and information extraction datasets from DeepStruct [8]}.
In GLM-130B's training, we calculate that approximately 36% of the samples in each dataset have been seen.
| [4] | [[41, 44]] | https://openalex.org/W3217187998 |
d003b355-f1d8-4955-a6f1-a17bad72e8b3 | All results on 57 MMLU [1]} datasets of GLM-130B and BLOOM 176B are shown in Table REF .
In Section REF , we report weighted average accuracy (i.e., accuracy average per sample) of GLM-130B, GPT-3 175B, and BLOOM 176B following the original benchmark setting [1]}.
| [1] | [[23, 26], [259, 262]] | https://openalex.org/W3121904249 |
bd7a4326-5155-48b0-8fb1-f98e6aa9d070 | Pile.
Pile evaluation [1]} is a comprehensive language modeling benchmark that initially includes 22 different text datasets from diverse domains.
We report our results on the subset of 18 datasets with previously reported baseline results [2]}.
Unlike traditional language modeling benchmarks, Pile evaluation reports BPB (bits-per-byte) perplexity to avoid mismatched comparisons between models with different vocabularies, since, in general, language models with larger vocabularies would otherwise be favored in perplexity comparisons.
In the evaluation, we strictly follow the setting in [1]}, leveraging [gMASK] and a context length of 1,024 with bidirectional attention, and the remaining 1,024 tokens to calculate BPB in an autoregressive manner.
The weighted average BPB is calculated based on each shared dataset's ratio in Pile training-set [1]}.
| [1] | [[21, 24], [603, 606], [859, 862]] | https://openalex.org/W3118781290 |
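The BPB metric above can be computed from token-level likelihoods. A small sketch of the standard conversion follows; the function and variable names are our own, and the formula divides the total negative log-likelihood in nats by the number of UTF-8 bytes times ln 2:

```python
import math

def bits_per_byte(total_nll_nats, n_bytes):
    """BPB = (total negative log-likelihood in nats) / (number of UTF-8 bytes * ln 2)."""
    return total_nll_nats / (n_bytes * math.log(2))

def bpb_from_token_ppl(ppl, n_tokens, n_bytes):
    """Equivalent form from token-level perplexity: BPB = (T / B) * log2(ppl)."""
    return (n_tokens / n_bytes) * math.log2(ppl)
```

Because the byte count is tokenizer-independent, two models with different vocabularies can be compared on the same text even though their per-token perplexities are not directly comparable.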
087d35a4-46a0-4a2b-93a6-50cac250697e | Here we elaborate the prompts we use for CLUE [1]} and FewCLUE [2]} evaluation.
For Chinese datasets, prompting poses challenges, as Chinese text is organized in single characters rather than words, which often leads to verbalizers of unequal length.
Although dataset-specific calibration [3]}, [4]} can help mitigate the issue, such a highly specialized technique can be complicated to implement.
Our evaluation in this paper adopts a simpler method that leverages GLM-130B's unique features.
As GLM-130B is a bilingual LLM with English MIP, we adopt English prompts and verbalizers from similar tasks in [5]} for Chinese dataset evaluation and find this strategy to be quite effective.
In terms of evaluation metrics, DRCD and CMRC2018, two question answering datasets, report EM, and all other datasets report accuracy.
<TABLE><TABLE><TABLE><TABLE> | [5] | [[620, 623]] | https://openalex.org/W4221159410 |
ab29b545-fde6-4b00-927a-6bc5e531527f |
Constant gap? As tab:comparison shows, so far no constant-gap sublinear-time algorithm is known. Maybe the one-sided preprocessing setting is more approachable for this challenge. We believe that our approach cannot achieve a constant gap, since we borrow from the recursive decomposition introduced in [1]}, which inherently incurs a polylogarithmic overhead in the approximation factor.
Improving the query time? The well-known \(\Omega (n/K)\) lower bound against the \((k, K)\) -gap Hamming distance problem (and therefore against edit distance) continues to hold in the one-sided preprocessing setting. In particular, the most optimistic hope is an algorithm with query time \(^*(n/k)\) for the \((k, ^*(k))\) -gap edit distance problem. Can this be achieved or is the extra \(+k\) in the query time of thm:one-sided necessary? For two-sided preprocessing, to the best of our knowledge no lower bound is known.
| [1] | [[312, 315]] | https://openalex.org/W2032869782 |
fcebdc43-134d-4b1f-88fd-dd1a57be8daf | We give two ways to answer queries. The first one relies in a black-box manner on any almost-linear-time algorithm to compute an edit distance approximation with multiplicative error \(n^{(1)}\) [1]}, [2]}, [3]}. Using the same trick as before to restrict the set of feasible shifts, it suffices to approximate \((Y_{v, s}, Y_{v, s^{\prime }})\) for all shifts \(s \in S_v\) and \(s^{\prime } \in {-k \cdot n^{(1)}, \dots , k \cdot n^{(1)}}\) . (We could also discretize the range of \(s^{\prime }\) , but this does not improve the performance here.) In this way, we incur an additive error of at most \((t_v)\) . The computation per node \(v\) takes time \(|Y_v| \cdot k \cdot k \cdot n^{(1)} / t_v\) , which becomes \(k n^{1+(1)}\) in expectation by lem:expected-precision and by summing over all nodes \(v\) .
| [2] | [[203, 206]] | https://openalex.org/W2019972981 |
9a53820b-bbf7-4204-ba19-2d585b0ce10d | QD aims at characterizing
all the quantumness in a quantum state.[1]}, [2]}
Its definition is based on the two quantum versions of the classical
mutual information.[3]} For a classical system (or a state)
\(AB\) , the total correlation between the subsystems \(A\) and \(B\)
can be expressed as \(I_{A,B}=H_{{A}}+H_{{B}}-H_{{AB}}\)
or \(J_{A,B}=H_{{A}}-H_{{A}|{B}}\) , where
\(H_{{A}}\) (\(H_{{B}}\) , \(H_{{AB}}\) ) is the Shannon
entropy, and \(H_{{A}|{B}}\) is the conditional entropy.
One can prove that \(I_{A,B}\) and \(J_{A,B}\) are
equal to each other. Now let's extend the definition of \(I_{A,B}\)
and \(J_{A,B}\) to a quantum state described by the density
matrix \(\rho _{{AB}}\) . For \(I_{A,B}\) , by replacing
the Shannon entropy with the von Neumann entropy \(S(\rho )\) , one can
easily obtain the quantum mutual information \(\mathcal {I}(\rho _{{AB}})\)
as[1]}, [5]}, [6]}, [7]}, [8]}
\(\mathcal {I}(\rho _{{AB}})=S(\rho _{{A}})+S(\rho _{{B}})-S(\rho _{{AB}}).\)
| [2] | [[71, 74]] | https://openalex.org/W1973948787 |
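The quantum mutual information \(\mathcal {I}(\rho _{AB})\) defined above can be evaluated numerically for a two-qubit state. A minimal NumPy sketch follows; the partial-trace helper assumes a fixed 2×2 bipartition and is our own illustrative implementation, not taken from the cited works:

```python
import numpy as np

def von_neumann_entropy(rho):
    """S(rho) = -Tr(rho log2 rho), computed from the eigenvalues."""
    w = np.linalg.eigvalsh(rho)
    w = w[w > 1e-12]  # drop numerical zeros; 0 log 0 = 0 by convention
    return float(-np.sum(w * np.log2(w)))

def partial_trace(rho, keep, dims=(2, 2)):
    """Trace out one qubit of a two-qubit state; keep=0 keeps A, keep=1 keeps B."""
    r = rho.reshape(dims[0], dims[1], dims[0], dims[1])
    return np.trace(r, axis1=1, axis2=3) if keep == 0 else np.trace(r, axis1=0, axis2=2)

def mutual_information(rho_ab):
    """I(rho_AB) = S(rho_A) + S(rho_B) - S(rho_AB)."""
    ra = partial_trace(rho_ab, 0)
    rb = partial_trace(rho_ab, 1)
    return von_neumann_entropy(ra) + von_neumann_entropy(rb) - von_neumann_entropy(rho_ab)
```

For a maximally entangled Bell state this yields 2 bits of mutual information, while a product state yields 0, matching the formula term by term.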
0b79318c-1145-4c6d-b158-b9b68241bec7 | QD aims at characterizing
all the quantumness in a quantum state.[1]}, [2]}
Its definition is based on the two quantum versions of the classical
mutual information.[3]} For a classical system (or a state)
\(AB\) , the total correlation between the subsystems \(A\) and \(B\)
can be expressed as \(I_{A,B}=H_{{A}}+H_{{B}}-H_{{AB}}\)
or \(J_{A,B}=H_{{A}}-H_{{A}|{B}}\) , where
\(H_{{A}}\) (\(H_{{B}}\) , \(H_{{AB}}\) ) is the Shannon
entropy, and \(H_{{A}|{B}}\) is the conditional entropy.
One can prove that \(I_{A,B}\) and \(J_{A,B}\) are
equal to each other. Now let's extend the definition of \(I_{A,B}\)
and \(J_{A,B}\) to a quantum state described by the density
matrix \(\rho _{{AB}}\) . For \(I_{A,B}\) , by replacing
the Shannon entropy with the von Neumann entropy \(S(\rho )\) , one can
easily obtain the quantum mutual information \(\mathcal {I}(\rho _{{AB}})\)
as[1]}, [5]}, [6]}, [7]}, [8]}
\(\mathcal {I}(\rho _{{AB}})=S(\rho _{{A}})+S(\rho _{{B}})-S(\rho _{{AB}}).\)
| [5] | [[891, 894]] | https://openalex.org/W2033729910 |
cc5ec342-813d-4c78-b680-64f9d3ce24ac | QD aims at characterizing
all the quantumness in a quantum state.[1]}, [2]}
Its definition is based on the two quantum versions of the classical
mutual information.[3]} For a classical system (or a state)
\(AB\) , the total correlation between the subsystems \(A\) and \(B\)
can be expressed as \(I_{A,B}=H_{{A}}+H_{{B}}-H_{{AB}}\)
or \(J_{A,B}=H_{{A}}-H_{{A}|{B}}\) , where
\(H_{{A}}\) (\(H_{{B}}\) , \(H_{{AB}}\) ) is the Shannon
entropy, and \(H_{{A}|{B}}\) is the conditional entropy.
One can prove that \(I_{A,B}\) and \(J_{A,B}\) are
equal to each other. Now let's extend the definition of \(I_{A,B}\)
and \(J_{A,B}\) to a quantum state described by the density
matrix \(\rho _{{AB}}\) . For \(I_{A,B}\) , by replacing
the Shannon entropy with the von Neumann entropy \(S(\rho )\) , one can
easily obtain the quantum mutual information \(\mathcal {I}(\rho _{{AB}})\)
as[1]}, [5]}, [6]}, [7]}, [8]}
\(\mathcal {I}(\rho _{{AB}})=S(\rho _{{A}})+S(\rho _{{B}})-S(\rho _{{AB}}).\)
| [6] | [[897, 900]] | https://openalex.org/W2055754097 |
b70f6c84-12c4-4a29-abd1-da0e571716a7 | The quantum generalization of \(J_{A,B}\) is slightly difficult
because of the conditional entropy \(H_{{A}|{B}}\) , thus,
firstly the quantum conditional entropy \(S(\rho |\lbrace B_{k}\rbrace )\) —the
quantum version of \(H_{{A}|{B}}\) —has to be defined,
where \(\lbrace B_{k}\rbrace \) is just a complete set of projectors.[1]}, [2]}
Then a variant of quantum mutual information can be defined as \(\mathcal {J}_{\lbrace B_{k}\rbrace }(\rho _{{AB}})=S(\rho _{{A}})-S(\rho |\lbrace B_{k}\rbrace )\) .
Following Ref. [3]},
the quantity
\(\mathcal {J}(\rho _{{AB}}):=\sup _{\lbrace B_{k}\rbrace }\mathcal {J}_{\lbrace B_{k}\rbrace }(\rho _{{AB}})\)
| [1] | [[329, 332]] | https://openalex.org/W2038645424 |
4d56067b-d19c-416f-922e-d1fa505a333f | The quantum generalization of \(J_{A,B}\) is slightly difficult
because of the conditional entropy \(H_{{A}|{B}}\) , thus,
firstly the quantum conditional entropy \(S(\rho |\lbrace B_{k}\rbrace )\) —the
quantum version of \(H_{{A}|{B}}\) —has to be defined,
where \(\lbrace B_{k}\rbrace \) is just a complete set of projectors.[1]}, [2]}
Then a variant of quantum mutual information can be defined as \(\mathcal {J}_{\lbrace B_{k}\rbrace }(\rho _{{AB}})=S(\rho _{{A}})-S(\rho |\lbrace B_{k}\rbrace )\) .
Following Ref. [3]},
the quantity
\(\mathcal {J}(\rho _{{AB}}):=\sup _{\lbrace B_{k}\rbrace }\mathcal {J}_{\lbrace B_{k}\rbrace }(\rho _{{AB}})\)
| [2] | [
[
335,
338
]
] | https://openalex.org/W2073595591 |
a9ed17fd-802f-4d6d-a115-622f33406f7a | The quantum generalization of \(J_{A,B}\) is more subtle
because of the conditional entropy \(H_{{A}|{B}}\) : one must
first define the quantum conditional entropy \(S(\rho |\lbrace B_{k}\rbrace )\) —the
quantum version of \(H_{{A}|{B}}\) —
where \(\lbrace B_{k}\rbrace \) is a complete set of projectors.[1]}, [2]}
Then a variant of quantum mutual information can be defined as \(\mathcal {J}_{\lbrace B_{k}\rbrace }(\rho _{{AB}})=S(\rho _{{A}})-S(\rho |\lbrace B_{k}\rbrace )\) .
Following Ref. [3]},
the quantity
\(\mathcal {J}(\rho _{{AB}}):=\sup _{\lbrace B_{k}\rbrace }\mathcal {J}_{\lbrace B_{k}\rbrace }(\rho _{{AB}})\)
| [3] | [
[
521,
524
]
] | https://openalex.org/W1571385165 |
80faf360-c14a-420d-a80f-ee671964c79f | QoE training data. Our QoE training data are gathered through a user study. In practice, this can be done through a
crowdsourcing platform like Amazon Mechanical Turk. Our user study involved 30 paid users (15 females) who were studying at our
institution. To minimize user involvement, we apply the k-means clustering algorithm [1]} to choose 100 representative
webpages from our training dataset. We ask each user to watch the screen update of each training webpage on a XiaoMi 9 smartphone under
various FPS settings. We also vary the incoming event by considering 5 commonly used interaction speeds per gesture [2]}. To help
our participants to correlate the generated events to finger movements, we invite them to interact with the device and show the resultant
FPS of their finger movements. For each training instance, we ask a user to select the lowest acceptable screen update rate. We then record
the corresponding minimum acceptable FPS on a per-webpage, per-speed, per-gesture and per-user basis. On average, it took a participant 2.5
hours to complete the study. Later, we extend this user study to all 1,000 webpages used for QoE evaluation using cross-validation.
<TABLE> | [1] | [
[
329,
332
]
] | https://openalex.org/W1663973292 |
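The selection of representative webpages via k-means can be sketched as follows. This is a toy illustration with random feature vectors (the record does not specify the feature space), using a plain NumPy implementation of Lloyd's algorithm; `representatives` returns the sample nearest each centroid.

```python
import numpy as np

def kmeans(X, k, iters=50, seed=0):
    """Plain Lloyd's algorithm: returns (centroids, labels)."""
    rng = np.random.default_rng(seed)
    centroids = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        # distance of every sample to every centroid, then nearest assignment
        d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):                 # skip empty clusters
                centroids[j] = X[labels == j].mean(axis=0)
    return centroids, labels

def representatives(X, k):
    """Indices of the samples closest to each cluster centroid."""
    centroids, _ = kmeans(X, k)
    d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
    return sorted(set(d.argmin(axis=0)))

# toy stand-in for webpage feature vectors (500 pages, 8 features)
X = np.random.default_rng(1).normal(size=(500, 8))
reps = representatives(X, 10)
print(len(reps))  # at most 10 representative pages
```

In the study above the same idea would be applied with k = 100 over the full training set of webpages.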
9438fdad-bef6-40a3-bcd2-ba86b5bc0a97 | To estimate if a QoE target prediction is wrong, we leverage the conformal prediction (CP) [1]}, [2]}. The CP
is a statistical assessment method for quantifying how much we could trust a model's prediction. This is done by learning a
nonconformity function from the model's training data. This function estimates the “strangeness” of a mapping from input features,
\(x\) , to a prediction output, \(y\) , by looking at the input and the probability distribution of the model prediction. In our case, the
function estimates the error bound of a QoE prediction. If the error bound is greater than a configurable threshold (20% in this work), we
then consider the model's prediction to be incorrect.
| [1] | [
[
91,
94
]
] | https://openalex.org/W2171585602 |
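One common instantiation of this idea is split conformal prediction, where the nonconformity score is the absolute residual on a held-out calibration set and the error bound is its finite-sample quantile. The sketch below is our own toy version (function names are ours; the paper's nonconformity function is learned, which we do not attempt to reproduce):

```python
import numpy as np

def conformal_bound(cal_residuals, alpha=0.1):
    """Split-conformal error bound: the (1 - alpha) quantile of the
    calibration nonconformity scores |y - y_hat|."""
    n = len(cal_residuals)
    q = np.ceil((n + 1) * (1 - alpha)) / n          # finite-sample correction
    return float(np.quantile(cal_residuals, min(q, 1.0)))

def trustworthy(pred, bound, threshold=0.20):
    """Flag a QoE prediction as untrusted when its relative error bound
    exceeds the configurable threshold (20% in the text above)."""
    return bound / max(abs(pred), 1e-9) <= threshold

# toy calibration set: model predictions vs. true minimum-acceptable FPS
rng = np.random.default_rng(0)
y_true = rng.uniform(20, 60, size=200)
y_pred = y_true + rng.normal(0, 1.0, size=200)       # a well-calibrated model
residuals = np.abs(y_true - y_pred)
bound = conformal_bound(residuals, alpha=0.1)
print(trustworthy(40.0, bound))    # small bound relative to 40 FPS → True
```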
c33b7d6c-45d8-4bbe-9413-72c72c27c846 | To solve MDSF through RADAR, we need to specify the RL algorithm and the arrival/departure stochastic process models; the remaining components of the framework are problem-independent. In [1]}, we showed that average-reward reinforcement learning, i.e., the R-Learning algorithm [2]}, is an efficient solution for the problem. In this algorithm, in addition to the values of the actions \(Q[s,a]\) , a parameter \(\rho \) , which is the average reward of the MDP, is also learned. So the TD-update rule of the RL algorithm is:
\(Q[s,a] \leftarrow (1 - \alpha )Q[s,a] + \alpha \Big (\mathfrak {R}(s,a) - \rho + \max _{a^{\prime }} Q[s^{\prime },a^{\prime }] \Big ),\)
| [1] | [
[
188,
191
]
] | https://openalex.org/W3135815554 |
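The TD-update rule above translates directly into code. Here is a sketch of a single R-Learning step (names are ours; the \(\rho \) update follows the common variant that adjusts the average-reward estimate only after greedy actions):

```python
from collections import defaultdict

def r_learning_update(Q, rho, s, a, r, s_next, actions, alpha=0.1, beta=0.01):
    """One TD update of R-Learning (average-reward RL):
    Q[s,a] <- (1 - alpha) * Q[s,a] + alpha * (r - rho + max_a' Q[s',a'])."""
    best_next = max(Q[(s_next, a2)] for a2 in actions)
    greedy = Q[(s, a)] == max(Q[(s, b)] for b in actions)
    Q[(s, a)] = (1 - alpha) * Q[(s, a)] + alpha * (r - rho + best_next)
    if greedy:  # adjust the average-reward estimate only on greedy actions
        rho += beta * (r + best_next - max(Q[(s, b)] for b in actions) - rho)
    return rho

Q = defaultdict(float)
rho = 0.0
rho = r_learning_update(Q, rho, s=0, a=1, r=1.0, s_next=0, actions=[0, 1])
print(round(Q[(0, 1)], 3))  # → 0.1
```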
5cfa0929-a728-4f9f-92b5-3ee89c22fb35 | Observations of the 6.4 keV flux from Sgr B2 have so far not found
any reliably evident stationary component, although, as predicted
by [1]}, the fast decrease of the 6.4 keV emission observed with
XMM-Newton for several molecular clouds suggested that the
emission generated by low energy cosmic rays, if present, might
become dominant in several years. Nevertheless, for several
clouds, including Sgr B, observations show temporary variations of
the 6.4 keV emission, both rises and decays in intensity (see
[2]}, [1]}). We interpreted this rise of emission as a
stage when the X-ray front ejected by Sgr A\(^\ast \) entered into
these clouds and the level of background generated by cosmic rays
as the 6.4 keV emission before the intensity jump.
| [2] | [
[
495,
498
]
] | https://openalex.org/W1548887662 |
ecd706b3-be74-4c7e-a1dc-78c962515570 | Arenas et al. [1]} examined Kuramoto oscillators that are coupled to each other on a network with a hierarchical community structure (with small, denser communities nested inside larger, sparser ones), and they investigated how the architecture of structural communities affects the formation of functional communities, as quantified by how long it takes oscillators to synchronize. Oscillators in denser communities synchronize faster than oscillators in sparser ones. Variations of coupled Kuramoto oscillators have also been used to study functional communities [2]}, [3]}. In these papers, the network architecture was fixed, and the key question concerned constructing partitions of networks that are based on oscillator dynamics.
| [1] | [
[
14,
17
]
] | https://openalex.org/W2100240966 |
11ac0401-10ca-4e66-915c-084ed900e70e | Our investigation of functional communities in the present paper departs from the perspective of the aforementioned papers, as we focus directly on detecting communities from output dynamics. We consider both the formation and the disappearance of functional communities. Instead of examining synchronization time scales as a way to partition a network of oscillators, we generate output data from coupled Kuramoto oscillators on networks (which we construct from random-graph models) and identify functional communities from such data. These functional communities, whose name takes inspiration from studies of functional brain networks in neuroscience [1]}, arise as communities in networks whose edges encode the time-series similarity between nodes. Another name for such communities is “behavioral communities”, and they were explored briefly in [2]} using methods from information theory and structural community detection.
| [1] | [
[
654,
657
]
] | https://openalex.org/W3106209931 |
3562d015-127c-4d69-99a0-cbd245140af4 | To study dynamics on a graph, suppose that each node \(j \in \lbrace 1, \ldots , N_s\rbrace \) is associated with a Kuramoto oscillator [1]}. This yields the following dynamical system:
\(\dot{\theta }_{j} = \omega _{j} + \frac{K}{N_{s}}\sum _{k=1}^{N_{s}}A_{jk}\sin (\theta _{k}-\theta _{j})\,, \quad \omega _{j} \sim \frac{1}{\gamma } g\left(\frac{x}{\gamma }\right)\,,\)
| [1] | [
[
137,
140
]
] | https://openalex.org/W2153706317 |
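The dynamical system above can be integrated directly. The following is a minimal Euler-step sketch (our own toy setup: an all-to-all network and Gaussian frequencies for simplicity, rather than a specific \(g\)); with strong coupling the order parameter \(r = |\langle e^{i\theta _j}\rangle |\) grows toward 1.

```python
import numpy as np

def kuramoto_step(theta, omega, A, K, dt=0.01):
    """One Euler step of the networked Kuramoto model:
    dtheta_j/dt = omega_j + (K/N) * sum_k A_jk sin(theta_k - theta_j)."""
    N = len(theta)
    coupling = (A * np.sin(theta[None, :] - theta[:, None])).sum(axis=1)
    return theta + dt * (omega + (K / N) * coupling)

def order_parameter(theta):
    """r = |mean(exp(i*theta))|: 1 = full synchrony, ~0 = incoherence."""
    return float(np.abs(np.exp(1j * theta).mean()))

rng = np.random.default_rng(0)
N, K = 50, 4.0
A = np.ones((N, N)) - np.eye(N)          # toy all-to-all network
theta = rng.uniform(0, 2 * np.pi, N)
omega = rng.normal(0, 0.5, N)            # Gaussian frequencies for illustration
r0 = order_parameter(theta)
for _ in range(5000):                    # integrate to t = 50
    theta = kuramoto_step(theta, omega, A, K)
r1 = order_parameter(theta)
print(r1 > r0)   # strong coupling drives the population toward synchrony
```

Replacing `A` with a community-structured adjacency matrix reproduces the setting studied in the records above.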
c12f4e14-7c72-4821-bb10-f8c81b5d4161 | Recent works which perform well on few-shot image classification have found that it is preferable to use a combination of metric-based feature extraction/classification combined with task-specific adaptation [1]}, [2]}.
Most relevant to this paper, the recently introduced CrossTransformer [3]} uses an attention mechanism to align the query and support set using image patch co-occurrences. This is used to create query-specific class prototypes before classification within a prototypical network [4]}. Whilst this is effective for few-shot image classification, one potential weakness is that relative spatial information is not encoded. For example, it would not differentiate between a bicycle and a unicycle.
This distinction is typically not needed in [3]}'s tested datasets [6]}, where independent part-based matching is sufficient to distinguish between the classes.
| [3] | [
[
290,
293
],
[
759,
762
]
] | https://openalex.org/W3099495704 |
f67724b8-eb79-4832-bd14-fdef0d9ea11a | Few-shot video action recognition methods have had success with a wide range of approaches, including memory networks of key frame representations [1]}, [2]} and
adversarial video-level feature generation [3]}.
Recent works have attempted to make use of temporal information. Notably, [4]} aligns variable-length query and support videos before calculating the similarity between the query and support set.
[5]} combines a variety of techniques, including spatial and temporal attention to enrich representations, and jigsaws for self-supervision. [6]} achieves state-of-the-art performance by calculating query to support-set frame similarities. They then enforce temporal consistency between a pair of videos by monotonic temporal ordering. Their method can be thought of as a differentiable generalisation of dynamic time warping.
Note that the above works either search for the single support video [5]} or average representation of a support class [4]}, [6]} that the query is closest to.
A concurrent work to ours attempts to resolve this through query-centred learning [10]}.
Importantly, all prior works perform attention operations on a frame level, as they tend to use single-frame representations.
| [4] | [
[
285,
288
],
[
953,
956
]
] | https://openalex.org/W2963996402 |
27c4ef78-b259-4aff-8104-835519a14524 | Few-shot video action recognition methods have had success with a wide range of approaches, including memory networks of key frame representations [1]}, [2]} and
adversarial video-level feature generation [3]}.
Recent works have attempted to make use of temporal information. Notably, [4]} aligns variable-length query and support videos before calculating the similarity between the query and support set.
[5]} combines a variety of techniques, including spatial and temporal attention to enrich representations, and jigsaws for self-supervision. [6]} achieves state-of-the-art performance by calculating query to support-set frame similarities. They then enforce temporal consistency between a pair of videos by monotonic temporal ordering. Their method can be thought of as a differentiable generalisation of dynamic time warping.
Note that the above works either search for the single support video [5]} or average representation of a support class [4]}, [6]} that the query is closest to.
A concurrent work to ours attempts to resolve this through query-centred learning [10]}.
Importantly, all prior works perform attention operations on a frame level, as they tend to use single-frame representations.
| [5] | [
[
407,
410
],
[
903,
906
]
] | https://openalex.org/W3095374178 |
dac12c9d-3881-4882-8dc4-6c95fe280905 | Few-shot video action recognition methods have had success with a wide range of approaches, including memory networks of key frame representations [1]}, [2]} and
adversarial video-level feature generation [3]}.
Recent works have attempted to make use of temporal information. Notably, [4]} aligns variable-length query and support videos before calculating the similarity between the query and support set.
[5]} combines a variety of techniques, including spatial and temporal attention to enrich representations, and jigsaws for self-supervision. [6]} achieves state-of-the-art performance by calculating query to support-set frame similarities. They then enforce temporal consistency between a pair of videos by monotonic temporal ordering. Their method can be thought of as a differentiable generalisation of dynamic time warping.
Note that the above works either search for the single support video [5]} or average representation of a support class [4]}, [6]} that the query is closest to.
A concurrent work to ours attempts to resolve this through query-centred learning [10]}.
Importantly, all prior works perform attention operations on a frame level, as they tend to use single-frame representations.
| [6] | [
[
548,
551
],
[
959,
962
]
] | https://openalex.org/W3035374961 |
b5c2a36e-86cb-4450-b693-fca6f2beab79 | where \(L\) is a standard layer normalisation [1]}.
We apply the Softmax operation to acquire the attention map
| [1] | [
[
47,
50
]
] | https://openalex.org/W3037932933 |
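A minimal sketch of this attention computation (our own NumPy illustration; the actual TRX operator also includes learned projections, which we omit): layer-normalise queries and keys, take scaled dot products, then apply Softmax so each query row is a distribution over support entries.

```python
import numpy as np

def layer_norm(x, eps=1e-5):
    """Standard layer normalisation over the last (feature) dimension."""
    mu = x.mean(axis=-1, keepdims=True)
    var = x.var(axis=-1, keepdims=True)
    return (x - mu) / np.sqrt(var + eps)

def attention_map(queries, keys):
    """Softmax over layer-normalised, scaled query/key dot products;
    each row sums to 1 over the support entries."""
    logits = layer_norm(queries) @ layer_norm(keys).T / np.sqrt(queries.shape[-1])
    logits -= logits.max(axis=-1, keepdims=True)          # numerical stability
    w = np.exp(logits)
    return w / w.sum(axis=-1, keepdims=True)

rng = np.random.default_rng(0)
q = rng.normal(size=(4, 64))    # e.g. 4 query tuple representations
k = rng.normal(size=(10, 64))   # e.g. 10 support tuple representations
attn = attention_map(q, k)
print(attn.shape, np.allclose(attn.sum(axis=1), 1.0))   # (4, 10) True
```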
1566e05b-990f-47bf-8e54-4ece8b1cb0bc | Datasets.
We evaluate our method on four datasets. The first two are Kinetics [1]} and Something-Something V2 (SSv2) [2]}, which have been frequently used to evaluate few-shot action recognition in previous works [3]}, [4]}, [5]}, [6]}. SSv2, in particular, has been shown to require temporal reasoning (e.g. [7]}, [8]}, [9]}). We use the few-shot splits for both datasets proposed by the authors of [3]}, [11]}, which are publicly accessible (https://github.com/ffmpbgrnn/CMN). In this setup, 100 videos from 100 classes are selected, with 64, 12 and 24 classes used for train/val/test. We also provide results for the few-shot split of SSv2 used by [6]}, which uses 10x more videos per class in the training set.
Additionally, we evaluate our method on HMDB51 [13]} and UCF101 [14]}, using splits from [5]}.
| [1] | [
[
78,
81
]
] | https://openalex.org/W2914084271 |
f7d21e74-9a39-4258-850e-9b5b15dac9d6 | Datasets.
We evaluate our method on four datasets. The first two are Kinetics [1]} and Something-Something V2 (SSv2) [2]}, which have been frequently used to evaluate few-shot action recognition in previous works [3]}, [4]}, [5]}, [6]}. SSv2, in particular, has been shown to require temporal reasoning (e.g. [7]}, [8]}, [9]}). We use the few-shot splits for both datasets proposed by the authors of [3]}, [11]}, which are publicly accessible (https://github.com/ffmpbgrnn/CMN). In this setup, 100 videos from 100 classes are selected, with 64, 12 and 24 classes used for train/val/test. We also provide results for the few-shot split of SSv2 used by [6]}, which uses 10x more videos per class in the training set.
Additionally, we evaluate our method on HMDB51 [13]} and UCF101 [14]}, using splits from [5]}.
| [2] | [
[
117,
120
]
] | https://openalex.org/W2625366777 |
bccbf44a-c193-4e15-a855-6398b2721a27 | Datasets.
We evaluate our method on four datasets. The first two are Kinetics [1]} and Something-Something V2 (SSv2) [2]}, which have been frequently used to evaluate few-shot action recognition in previous works [3]}, [4]}, [5]}, [6]}. SSv2, in particular, has been shown to require temporal reasoning (e.g. [7]}, [8]}, [9]}). We use the few-shot splits for both datasets proposed by the authors of [3]}, [11]}, which are publicly accessible (https://github.com/ffmpbgrnn/CMN). In this setup, 100 videos from 100 classes are selected, with 64, 12 and 24 classes used for train/val/test. We also provide results for the few-shot split of SSv2 used by [6]}, which uses 10x more videos per class in the training set.
Additionally, we evaluate our method on HMDB51 [13]} and UCF101 [14]}, using splits from [5]}.
| [3] | [
[
213,
216
],
[
400,
403
]
] | https://openalex.org/W2894873912 |
749cb634-1589-44b5-b136-e6b1c6519a21 | Datasets.
We evaluate our method on four datasets. The first two are Kinetics [1]} and Something-Something V2 (SSv2) [2]}, which have been frequently used to evaluate few-shot action recognition in previous works [3]}, [4]}, [5]}, [6]}. SSv2, in particular, has been shown to require temporal reasoning (e.g. [7]}, [8]}, [9]}). We use the few-shot splits for both datasets proposed by the authors of [3]}, [11]}, which are publicly accessible (https://github.com/ffmpbgrnn/CMN). In this setup, 100 videos from 100 classes are selected, with 64, 12 and 24 classes used for train/val/test. We also provide results for the few-shot split of SSv2 used by [6]}, which uses 10x more videos per class in the training set.
Additionally, we evaluate our method on HMDB51 [13]} and UCF101 [14]}, using splits from [5]}.
| [11] | [
[
406,
410
]
] | https://openalex.org/W3041485444 |
708e589a-33c5-44b7-a854-cffcdd3d82db | Acknowledgements Publicly-available datasets were used for this work.
This work was performed under the SPHERE Next Steps EPSRC Project EP/R005273/1.
Damen is supported by EPSRC Fellowship UMPIRE (EP/T004991/1).
Appendix
X-Shot results
<TABLE>In the main paper, we introduced Temporal-Relational CrossTransformers (TRX) for few-shot action recognition. They are designed specifically for \(K\) -shot problems where \(K>1\) , as TRX is able to match sub-sequences from the query against sub-sequences from multiple support set videos.
Table REF in the main paper shows results on the standard 5-way 5-shot benchmarks on Kinetics [1]}, Something-Something V2 (SSv2) [2]}, HMDB51 [3]} and UCF101 [4]}.
For completeness we also provide 1-, 2-, 3-, 4- and 5-shot results for TRX with \(\Omega {=}\lbrace 1\rbrace \) (frame-to-frame comparisons) and \(\Omega {=}\lbrace 2,3\rbrace \) (pair and triplet comparisons) on the large-scale datasets Kinetics and SSv2. These are in Table REF in this appendix, where we also list results from all other works which provide these scores.
For 1-shot, in Kinetics, TRX performs similarly to recent few-shot action-recognition methods [5]}, [6]}, [7]}, but these are all outperformed by OTAM [8]}. OTAM works by finding a strict alignment between the query and single support set video per class. It does not scale as well as TRX when \(K>1\) , shown by TRX performing better on the 5-shot benchmark.
This is because TRX is able to match query sub-sequences against similar sub-sequences in the support set, and importantly ignore sub-sequences (or whole videos) which are not as useful.
Compared to the strict alignment in OTAM [8]}, where the full video is considered in the alignment, TRX can exploit several sub-sequences from the same video, ignoring any distractors. Despite not being as well suited to 1-shot problems, on SSv2 TRX performs similarly to OTAM. 2-shot TRX even outperforms 5-shot OTAM.
Table REF again highlights the importance of tuples, shown in the main paper, where TRX with \(\Omega {=}\lbrace 2,3\rbrace \) consistently outperforms \(\Omega {=}\lbrace 1\rbrace \) .
Figure 5 in the main paper shows how TRX scales on SSv2 compared to CMN [5]}, [11]}, which also provides X-shot results (\(1 \le X \le 5\)). The equivalent graph for Kinetics is shown in Fig. REF here. This confirms TRX scales better as the shot increases. There is less of a difference between TRX with \(\Omega {=}\lbrace 1\rbrace \) and \(\Omega {=}\lbrace 2,3\rbrace \) , as Kinetics requires less temporal knowledge to discriminate between the classes than SSv2 (ablated in Sec. 4.3.1 and 4.3.2 in the main paper).
<FIGURE>
The impact of positional encoding
<TABLE>TRX adds positional encodings to the individual frame representations before concatenating them into tuples. Table REF shows that adding positional encodings improves SSv2 for both single frames and higher-order tuples (by +0.3% and +0.6% respectively).
For Kinetics, performance stays the same with single frames and improves slightly with tuples (+0.4%) for the proposed model.
Overall, positional encoding improves the results marginally for TRX.
| [3] | [
[
683,
686
]
] | https://openalex.org/W2126579184 |
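The record above does not spell out the encoding itself; a standard sinusoidal (Transformer-style) positional encoding, added to per-frame features before concatenating them into ordered tuples, could be sketched as follows (a hypothetical illustration, not the authors' implementation):

```python
import numpy as np

def positional_encoding(num_frames, dim):
    """Sinusoidal positional encoding: one vector per frame position,
    sin on even feature indices and cos on odd ones."""
    pos = np.arange(num_frames)[:, None]
    i = np.arange(dim)[None, :]
    angles = pos / np.power(10000.0, (2 * (i // 2)) / dim)
    return np.where(i % 2 == 0, np.sin(angles), np.cos(angles))

frames = np.random.default_rng(0).normal(size=(8, 64))   # 8 frame features
pe = positional_encoding(8, 64)
encoded = frames + pe                                    # add before tuple concat
pair = np.concatenate([encoded[1], encoded[4]])          # an ordered-pair tuple
print(pair.shape)   # → (128,)
```

The encoding is what lets a pair such as (frame 1, frame 4) be distinguished from (frame 4, frame 1) after concatenation.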
80240dbf-d978-49a7-a97e-aaffb8073157 | Acknowledgements Publicly-available datasets were used for this work.
This work was performed under the SPHERE Next Steps EPSRC Project EP/R005273/1.
Damen is supported by EPSRC Fellowship UMPIRE (EP/T004991/1).
Appendix
X-Shot results
<TABLE>In the main paper, we introduced Temporal-Relational CrossTransformers (TRX) for few-shot action recognition. They are designed specifically for \(K\) -shot problems where \(K>1\) , as TRX is able to match sub-sequences from the query against sub-sequences from multiple support set videos.
Table REF in the main paper shows results on the standard 5-way 5-shot benchmarks on Kinetics [1]}, Something-Something V2 (SSv2) [2]}, HMDB51 [3]} and UCF101 [4]}.
For completeness we also provide 1-, 2-, 3-, 4- and 5-shot results for TRX with \(\Omega {=}\lbrace 1\rbrace \) (frame-to-frame comparisons) and \(\Omega {=}\lbrace 2,3\rbrace \) (pair and triplet comparisons) on the large-scale datasets Kinetics and SSv2. These are in Table REF in this appendix, where we also list results from all other works which provide these scores.
For 1-shot, in Kinetics, TRX performs similarly to recent few-shot action-recognition methods [5]}, [6]}, [7]}, but these are all outperformed by OTAM [8]}. OTAM works by finding a strict alignment between the query and single support set video per class. It does not scale as well as TRX when \(K>1\) , shown by TRX performing better on the 5-shot benchmark.
This is because TRX is able to match query sub-sequences against similar sub-sequences in the support set, and importantly ignore sub-sequences (or whole videos) which are not as useful.
Compared to the strict alignment in OTAM [8]}, where the full video is considered in the alignment, TRX can exploit several sub-sequences from the same video, ignoring any distractors. Despite not being as well suited to 1-shot problems, on SSv2 TRX performs similarly to OTAM. 2-shot TRX even outperforms 5-shot OTAM.
Table REF again highlights the importance of tuples, shown in the main paper, where TRX with \(\Omega {=}\lbrace 2,3\rbrace \) consistently outperforms \(\Omega {=}\lbrace 1\rbrace \) .
Figure 5 in the main paper shows how TRX scales on SSv2 compared to CMN [5]}, [11]}, which also provides X-shot results (\(1 \le X \le 5\)). The equivalent graph for Kinetics is shown in Fig. REF here. This confirms TRX scales better as the shot increases. There is less of a difference between TRX with \(\Omega {=}\lbrace 1\rbrace \) and \(\Omega {=}\lbrace 2,3\rbrace \) , as Kinetics requires less temporal knowledge to discriminate between the classes than SSv2 (ablated in Sec. 4.3.1 and 4.3.2 in the main paper).
<FIGURE>
The impact of positional encoding
<TABLE>TRX adds positional encodings to the individual frame representations before concatenating them into tuples. Table REF shows that adding positional encodings improves SSv2 for both single frames and higher-order tuples (by +0.3% and +0.6% respectively).
For Kinetics, performance stays the same with single frames and improves slightly with tuples (+0.4%) for the proposed model.
Overall, positional encoding improves the results marginally for TRX.
| [4] | [
[
699,
702
]
] | https://openalex.org/W24089286 |
aedd368b-d7c6-4f2b-800a-4eec5ebab06e |
We construct an ontology that models the data of interest.
We create a virtual table operator for the data source at hand (if one is not already available for the kind of data source we want to access, e.g., the Twitter API), implementing Algorithm REF .
We
create the mappings, where the source part
comprises an extended-SQL query, i.e., an SQL query that uses the virtual table operator
for the selected data source along with the respective parameters. The caching parameter t is included optionally as a parameter of the respective virtual tables.
Given the ontology and the mappings, we set up a virtual RDF repository using our extended OBDA system in combination with an
SQL engine that is able to process the extended-SQL queries
included in the mappings.
Note that the selected OBDA system
should
be (made) “database-agnostic” in the sense that it does not require access to the data beforehand. This feature goes beyond the existing
RDB2RDF and OBDA systems, which require that the data to be mapped already reside in a database, to which they connect
in order to extract
meta-data a priori [1]}, [2]}.
In our case, we change the OBDA paradigm so that the data is fetched on-the-fly, after a SPARQL query is fired.
Once a SPARQL query arrives, the OBDA system translates it to SQL. The resulting SQL embeds the virtual table operator(s) involved in the query.
By the time these operators are invoked as part of the extended-SQL query evaluation, the extended SQL query is evaluated using a system that supports
extended-SQL queries and virtual tables. In our case, this system is MadIS.
According to the caching parameter f that is defined in the mappings,
MadIS decides whether results will be accessed on-the-fly from the data source
(Step 6a) or cached results will be returned instead (Step 6b).
Eventually, the query result
returns back to the OBDA system to be presented
as virtual RDF triples.
If applicable, reasoning is applied to the fetched data (e.g., OWL 2 QL reasoning [3]} is performed in [1]}).
| [2] | [
[
1100,
1103
]
] | https://openalex.org/W1987544386 |
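The on-the-fly fetching with an optional caching parameter t described above can be sketched as a small wrapper (a hypothetical illustration, not MadIS's actual virtual-table API): rows are fetched from the remote source only when queried, and refetched once the cached copy is older than t seconds.

```python
import time

class VirtualSource:
    """Sketch of a virtual-table-style operator: data is pulled from the
    source only at query time, reusing a cached copy while it is younger
    than the caching parameter t (seconds)."""

    def __init__(self, fetch_fn, t=60.0):
        self.fetch_fn = fetch_fn          # e.g. a web-API call returning rows
        self.t = t
        self._rows, self._stamp = None, -float("inf")

    def rows(self):
        if time.monotonic() - self._stamp > self.t:   # cache expired: refetch
            self._rows = self.fetch_fn()
            self._stamp = time.monotonic()
        return self._rows

calls = []
source = VirtualSource(lambda: calls.append(1) or [("tweet", 1), ("tweet", 2)], t=60)
source.rows(); source.rows()          # second call is served from the cache
print(len(calls))   # → 1
```

Setting t to 0 reproduces the purely on-the-fly behaviour (Step 6a in the text); a positive t corresponds to returning cached results (Step 6b).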
1742fdb8-c542-40ec-b40f-600d7a235074 | \(\bullet \) Ontop (https://github.com/ontop/ontop) [1]}
is a state-of-the-art, open-source OBDA system that supports R2RML and its native mapping language. We extended the MadIS JDBC connector so that it complies with Ontop, while Ontop was extended to use MadIS as a back-end.
The latter modification is the most significant one,
enabling Ontop to operate in a “database-agnostic” manner that supports non-materialized databases and relies on MadIS as back-end.
The reason is that Ontop, like all other OBDA systems, connects only to
populated and materialized databases, using their data for optimization, before querying them.
Instead, Ontop4theWeb retrieves the data to be queried only after the user fires a query, creating a virtual table on-the-fly. As a result, no prior knowledge of the data can be used.
| [1] | [
[
50,
53
]
] | https://openalex.org/W2299775049 |
076c8c19-8abe-4c95-b173-416ebcaafe87 | Proposition 3.4 ([1]})
Let \((A,\ast _A, a_A)\) be a pre-Lie algebroid. Define a skew-symmetric bilinear bracket operation \([-,-]_A\) on \(\Gamma (A)\) by
\([X,Y]_A=X\ast _A Y-Y\ast _A X,\quad \forall ~X,Y\in \Gamma (A). \)
| [1] | [
[
17,
20
]
] | https://openalex.org/W1663738177 |
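It may help to recall why this commutator is indeed a Lie bracket. For a (left-symmetric) pre-Lie product the associator \(a(X,Y,Z)=(X\ast _A Y)\ast _A Z-X\ast _A (Y\ast _A Z)\) is symmetric in its first two arguments, and a standard computation (sketched here under that convention; the right-symmetric case is analogous) shows that the Jacobiator vanishes:

```latex
% Jacobiator of [-,-]_A, rewritten in terms of the associator a(X,Y,Z):
\begin{aligned}
[[X,Y]_A,Z]_A + [[Y,Z]_A,X]_A + [[Z,X]_A,Y]_A
  &= \sum_{\mathrm{cyc}(X,Y,Z)} \bigl( a(X,Y,Z) - a(Y,X,Z) \bigr) \\
  &= 0 \qquad \text{since } a(X,Y,Z) = a(Y,X,Z).
\end{aligned}
```

Skew-symmetry of \([-,-]_A\) is immediate from its definition, so \((\Gamma (A),[-,-]_A)\) is a Lie algebra.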