Qualitative Modelling via Constraint
Programming: Past, Present and Future
Thomas W. Kelsey1, Lars Kotthoff1, Christopher A. Jefferson1, Stephen A.
Linton1, Ian Miguel1, Peter Nightingale1, and Ian P. Gent1
School of Computer Science,
University of St Andrews, KY16 9SX, UK
Abstract. Qualitative modelling is a technique integrating the fields of
theoretical computer science, artificial intelligence and the physical and
biological sciences. The aim is to be able to model the behaviour of systems without estimating parameter values and fixing the exact quantitative dynamics. Traditional applications are the study of the dynamics of
physical and biological systems at a higher level of abstraction than that
obtained by estimation of numerical parameter values for a fixed quantitative model. Qualitative modelling has been studied and implemented
to varying degrees of sophistication in Petri nets, process calculi and
constraint programming. In this paper we reflect on the strengths and
weaknesses of existing frameworks, we demonstrate how recent advances
in constraint programming can be leveraged to produce high quality qualitative models, and we describe the advances in theory and technology
that would be needed to make constraint programming the best option
for scientific investigation in the broadest sense.
Keywords: Constraint Programming, Qualitative Models, Compartmental Models, Dynamical Systems
1 Introduction
The standard approach for non-computer scientists when investigating dynamic
scientific systems is to develop a quantitative mathematical model. Differential equations are chosen in the belief that they best represent (for example)
convection-diffusion-reaction or population change, and parameter values are
estimated from empirical data. This approach suffers from several limitations
which are widely documented, and which we summarise with examples in Section 2.
In a standard modelling text [21, Chapter 5], qualitative model formulation
is described as
. . . the conversion of an objective statement and a set of hypotheses and
assumptions into an informal, conceptual model. This form does not contain explicit equations, but its purpose is to provide enough detail and
structure so that a consistent set of equations can be written. The qualitative model does not uniquely determine the equations, but does indicate
the minimal mathematical components needed. The purpose of a qualitative model is to provide a conceptual framework for the attainment of
the objectives. The framework summarizes the modeler’s current thinking concerning the number and identity of necessary system components
(objects) and the relationships among them.
For the computer scientist, a qualitative approach is more natural. The dynamics of the system under investigation are described in a formal language,
but with no (or few) a priori assumptions made about the specific mathematical
model that may be produced. This means working at a higher level of abstraction than usual, it requires the formalisation of complex system behaviour, and
it involves searching a large space of candidate models for those to be used to
generate numerical models. Computer scientists are, in general, trained to be
able to identify and work at the most suitable levels of abstraction; they also
design and use highly formal languages, and routinely develop algorithms for
NP-hard problem classes. Hence the computer scientist is ideally qualified to
undertake qualitative modelling. This is by no means a new observation, and in
Section 3 we give a critical evaluation of historic and current computer science
approaches to this problem. We focus on three particular approaches, constraint
programming (CP), temporal logics and process calculi. In our view, historic
CP approaches were hindered both by struggles to accommodate temporality
within constraints and by limitations in the CP languages and tools available at
the time. The process calculus and temporal logic approaches have been more
successful: the languages and tools used to model and verify computer system
behaviour have been (and are being) adapted to model important systems arising
in molecular and cell biology.
The CP approach has been recently revisited, using languages and tools
developed as part of the Constraint Solver Synthesiser research project at St
Andrews. We give a detailed worked example in Section 4 in which the application area is human cell population dynamics. A version of this example will
be presented at the forthcoming Workshop on Constraint Based Methods for
Bioinformatics [27]. We demonstrate the ability to
1. describe sophisticated qualitative dynamic behaviour in a non-temporal modelling language;
2. convert these descriptions into standard CP constraints;
3. explore the large solution spaces of the resulting constraint satisfaction problems (CSPs);
4. iterate using parameter estimates and/or subsidiary modelling assumptions
to converge on useable quantitative models.
However, fundamental problems remain. In particular, our exploration of solution spaces is neither truly stochastic nor targeted enough to reduce non-useful
search effort. Nor do we have any organised way to investigate the tradeoff between realism of qualitative model and computational complexity of quantitative
model. We explore these and other limitations in Section 5, and present them as
research opportunities for the CP community. Successful research activity would
be beneficial to the scientific community in the widest sense. Any scientific team
would be able to describe the system under investigation in terms of qualitative
system descriptions such as:
– behaviour A is required;
– behaviour B is forbidden;
– if C happens, it happens after D;
– the rate of change of the rate of change of E has exactly two minima in timescale F;
– the rate of change in the decline of G is no less than the rate of change in the increase in H.
CP technology would then be used to iteratively converge on suitable models
for use by the global scientific community. In our opinion, this would represent
an important transfer of CP expertise, languages and search to our colleagues
working in other scientific fields.
2 Quantitative mathematical models
Successful computer modelling in the physical, biological and economic sciences
is a difficult undertaking. Domains are often poorly measured due to ethical,
technical and/or financial constraints. In extreme instances the collection of
accurate longitudinal data is simply impossible using current techniques. This
adversely affects the production and assessment of hypothetical quantitative
models, since the incompleteness of the domain data necessitates the making
of assumptions that may or may not reflect ground truths. A second category
of assumptions are involved in the choice of quantitative modelling framework.
Hypothetical solutions can be ruled out by restricting the complexity of models,
and unrealistic models can be admitted by over-complex frameworks. For both types
of a priori assumption, mutually exclusive assumptions must be kept separate,
sometimes with no scientific justification.
The remainder of this section consists of two illustrative examples, both taken
from biology.
2.1 Nitric Oxide diffusion
Our first example (adapted from a paper by Degasperi and Calder presented at
a workshop on Process Algebra and Stochastically Timed Activities [11]) of the
limitations of starting the modelling process by selecting a mathematical model
involves modelling nitric oxide (NO) bioavailability in blood vessels. Models of
this scenario aim to determine the diffusion distance of NO along the radius of
a vessel, where NO is produced in a narrow region on the internal wall of the
vessel. Numerous models have been developed over the last decade and most
share underlying assumptions and use similar diffusion governing equations.
In particular, a vessel is modelled as a cylinder with partial differential equations
(PDEs), using Fick’s law of diffusion in cylindrical coordinates. Compartments
define areas such as endothelium (where NO is produced), vascular wall, and
lumen (i.e. where the blood flows). Another common assumption is that the diffusion operates only in the radial direction, while it can be considered negligible
in other directions. A complete review and critical evaluation of these models is
given in [40]. The author concludes:
The complexity of NO interactions in vivo makes detailed quantitative
analyses through mathematical modeling an invaluable tool in investigations of NO pathophysiology. Mathematical models can provide a different
perspective on the mechanisms that regulate NO signaling and transport
and can be utilized for the validation and screening of proposed hypotheses. At this point, however, the predictive ability of these models is limited by the lack of quantitative information for major parameters that
affect NO’s fate in the vascular wall. Further, the difficulties associated
with measuring NO directly in biological tissues and the scarcity of NO
measurements in the microcirculation present a significant obstacle in
model validation. Thus, caution is needed in interpreting the in silico
simulations and accepting model predictions when experimental data are
missing. Advances in both the experimental methodologies and in the
theoretical models are required to further elucidate NO’s roles in the
vasculature. [40, our emphases]
Fig. 1. Compartmental schematic of human ovarian follicular development.
2.2 Ovarian follicle dynamics
Our second example involves the modelling of human cell populations. The human ovary contains a population of primordial (or non-growing) follicles (F0 in
Figure 1). Some of these are recruited towards maturation and start to grow.
Many of these die off through atresia, but some become primary follicles (F1 in
Figure 1). Again, a proportion of these die off with the remainder growing into
secondary follicles (F2 in Figure 1). This continues until a very small proportion become eggs that are released from the ovary for potential fertilisation. For
the purposes of this study, we consider only the dynamics of follicle progression
(primordial to primary to secondary). Since there are well-defined physiological differences between the types, the obvious choice of quantitative model is
compartmental:
dF0/dt = −kT0 F0 − kL0 F0
dF1/dt = kT0 F0 − kT1 F1 − kL1 F1
dF2/dt = kT1 F1 − kT2 F2 − kL2 F2
Kinetic loss and transfer parameters – kLi and kTi respectively – are found in
principle by estimating populations at known ages, then fitting ODE solutions
that minimise residual errors [15].
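As a hedged illustration of this fitting step (a sketch added here, not the procedure of [15]; the population figures, starting guesses and bounds below are invented), the compartmental ODEs can be integrated with SciPy and the kinetic parameters estimated by least squares against population estimates at known ages:

import numpy as np
from scipy.integrate import odeint
from scipy.optimize import least_squares

# Right-hand side of the compartmental model given above.
def follicle_odes(F, t, kT0, kL0, kT1, kL1, kT2, kL2):
    F0, F1, F2 = F
    dF0 = -kT0 * F0 - kL0 * F0
    dF1 = kT0 * F0 - kT1 * F1 - kL1 * F1
    dF2 = kT1 * F1 - kT2 * F2 - kL2 * F2
    return [dF0, dF1, dF2]

# Hypothetical observations: ages (years) and primordial (F0) estimates.
ages = np.array([0.0, 10.0, 20.0, 30.0, 40.0, 50.0])
obs_F0 = np.array([300e3, 180e3, 100e3, 45e3, 12e3, 1e3])

def residuals(params):
    sol = odeint(follicle_odes, [300e3, 0.0, 0.0], ages, args=tuple(params))
    return sol[:, 0] - obs_F0          # fit only the F0 compartment here

fit = least_squares(residuals, x0=[0.05] * 6, bounds=(0.0, 1.0))
print("estimated kinetic parameters:", fit.x)

The point of the sketch is only that, once the model structure is fixed, parameter estimation reduces to routine numerical optimisation; the limitations discussed next concern the data and the choice of structure, not this step.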
There are several limitations to this approach. Empirical data is scarce for
primordial follicles [41], is calculated by inference for primary follicles [24], and
simply does not exist for secondary follicles. Mouse-model studies have produced
reasonable parameter estimates and validation [5], but it is not known how well
these results translate to humans.
As a direct result of these limitations, two entirely different compartmental
models have been published in the literature. In [5] there are no losses after
F0 , whereas in [15] there are no losses for F1 , losses for F2 , and losses for F0
after age 38. A third research group investigating the same cell dynamics but
with its own empirical data and modelling assumptions would be highly likely
to produce a third quantitative model being fundamentally different to those
already published. So there is an obvious problem: which (if any) of these models
should be used by the wider research community to describe and account for
changes in cell populations over time?
A more fundamental problem is that the loss–migration model may not be
the correct choice. Recent studies have shown that human ovarian stem-cells
exist, suggesting that further model parameters are needed to allow for regeneration of the primordial follicle pool. The resulting models suffer from biological
implausibility in the mouse model [5], and remain to be produced for humans. A
key methodological drawback is that the use of compartmental models leads to a
constrained class of solutions that excludes other plausible models. For example,
the dynamics could also be modelled by nonlinear reaction–diffusion equations
that lead to solutions that are unlikely to be obtained from a system of coupled
linear ODEs (Figure 2).
Fig. 2. Two hypothetical models of primordial follicle population from conception to
menopause. On the left, a peak model adapted from [41]. On the right, the solution of
a reaction–diffusion equation. Both are supported by existing physiological theory and
empirical evidence.
3 Existing approaches to QM
Qualitative modelling is a mature computer-science technique, with existing
methods and results for qualitative compartmental models [33,32,35] and for the
use of CSPs to describe and solve qualitative models [10,14]. However, these
latter studies either reported incomplete algorithms [10] or described complicated algebras with no associated CSP modelling language or optimised CSP
solver [14]. In 2002, a hybrid approach was presented in which concurrency was
described in terms of CP constraints [4].
A key observation is that these studies were published 10–20 years ago. It
appears that the limitations of CP technology at the time were collectively sufficient to stifle the development of languages, solvers and tools for CP-based
qualitative modelling.
Other approaches include process calculi and temporal logics, both of which
have been shown to be successful at the molecular level [6] and the protein
network level [7,38], but not as yet at inter- and intra-cellular levels. Despite
this, the process calculus and temporal logic communities are engaging in active
current research to improve their techniques and widen access to other scientific
areas. Of particular note are BIOCHAM (temporal logic) and BioPEPA (process
calculus).
BIOCHAM [8] consists of two languages (one rule-based, the other based on
either the CTL or LTL temporal logic languages) that allow the iterative development of quantitative models from qualitative ones. This answers the obvious
question posed by newcomers to qualitative modelling: “given a good qualitative
model, how do I derive a model that I can use for numeric studies?” BIOCHAM
has sophisticated tool support and is under active current development (version
3.3 released in October 2011).
BioPEPA [9] is a process algebra for the modelling and the analysis of biochemical networks. It is a modification of PEPA (originally defined for the performance
analysis of computer systems), in order to handle the use of general kinetic laws.
The Edinburgh-based BioPEPA research group has sought and received substantial funding to improve the accessibility of their framework by researchers at all
levels of systems biology. A cloud-based architecture is under development, as
is improved translation to and from SBML (System Biology Markup Language)
formats, thereby supporting easier exchange and curation of models.
In summary, from the competing candidates for a computer science basis for
successful qualitative modelling, CP has – as it were – fallen by the wayside, while
temporal logics and process calculi are providing real technology and support,
at least to the biomedical modelling communities. We see no obvious reason for
this: clearly time is a variable in all dynamical modelling, and therefore notions
of “liveness”, “before” and “after” needed to be incorporated into the qualitative
modelling framework. But this is perfectly possible in CP, as we demonstrate in
Section 4.
4 Case study: cell dynamics QM using constraints
Our case study is the compartmental modelling of NGFs described in Section
2.2. We use the Savile Row tool that converts constraint problem models formulated in the solver-independent modelling language Essence′ [17] to the input
format of the Constraint Satisfaction Problem (CSP) solver Minion [19]. Savile
Row converts Essence′ problem instances into Minion format and applies reformulations (such as common subexpression elimination) that enhance search.
As well as the standard variables and constraints expected of a CSP modelling
language, Essence′ allows the specification of “for all” and “exists” constraints,
that are then re-cast as basic logic constraints in Minion.
We expect our candidate qualitative models to be implemented as differential
equations or by non-linear curve-fitting. In both cases we need to specify the
notions of rate of change and smoothness. Suppose that X[0, . . . , n] is a series
of variables representing a follicle population at different ages. Then we can
approximate first derivatives by X′[1, . . . , n] where X′[i] = X[i] − X[i − 1], and
second derivatives by X″[1, . . . , n − 1] where X″[j] = X′[j + 1] − X′[j]. These
definitions allow us to post qualitative constraints about peak populations

∃p ∈ [1, . . . , n] such that ∀i > p, X′[i] < 0 ∧ ∀i < p, X′[i] > 0.

We can require or forbid smoothness by restricting the absolute value of the X″
variables, and require or forbid fast rates of population growth by restrictions
on the X′[i].
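These conditions are easy to prototype outside a CP solver; the following sketch (illustrative only, with an invented smoothness bound) checks whether a candidate integer series has a single strict peak and bounded second differences in the sense just described:

def first_diff(X):
    return [X[i] - X[i - 1] for i in range(1, len(X))]

def satisfies_peak_and_smoothness(X, smooth_bound):
    d1 = first_diff(X)                      # X'[1..n]
    d2 = first_diff(d1)                     # X''[1..n-1]
    peaks = [i for i in range(1, len(X) - 1) if X[i] == max(X)]
    if len(peaks) != 1:
        return False
    p = peaks[0]
    rising = all(d1[i - 1] > 0 for i in range(1, p))            # i < p: X'[i] > 0
    falling = all(d1[i - 1] < 0 for i in range(p + 1, len(X)))  # i > p: X'[i] < 0
    smooth = all(abs(v) <= smooth_bound for v in d2)
    return rising and falling and smooth

print(satisfies_peak_and_smoothness([0, 20, 60, 100, 70, 30, 5], smooth_bound=80))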
By having three sets of variables (primordial, primary and secondary follicles) each with up to two derivative approximations, we can model interactions
between the populations at different ages. For example, we can require a zero
population of secondary follicles until puberty, after which the population behaviour is similar to that of primary follicles, but on a smaller scale and with an
adjustable time-lag.
Essence′ statement                                       Qualitative description
find x : [int(0..max)] of int(0..100)                    percentage of peak population
find y : [int(1..max)] of int(−r..r)                     1st deriv. variables
find z : [int(1..max − 1)] of int(−r..r)                 2nd deriv. variables
forAll i : int(1..max). y[i] = x[i] − x[i − 1]           1st deriv. definition
forAll j : int(1..max − 1). z[j] = y[j + 1] − y[j]       2nd deriv. definition
exists k, j : int(2..birth).
  forAll i : int(birth..max).
    i < k ⇒ y[i] > 0                                     positive 1st deriv. pre-peak
    i > k ⇒ y[i] < 0                                     negative 1st deriv. post-peak
    x[k] = 100 ∧ y[k] = 0                                 it is a peak
    i > birth ⇒ |z[i]| < max                              smooth post-gestation

Table 1. An example of a simple qualitative model specified in Essence′. When supplied
with values for max, r, and birth, Savile Row will construct a Minion instance, the
solutions of which are all hypothetical models that respect the qualitative description.
To further abstract away from quantitative behaviour, populations can be
defined in terms of proportion of peak rather than absolute numbers of cells,
different time scales can be used for different age ranges (e.g. neonatal vs postmenopausal), and we can model the qualitative behaviour of values that are
normally log-adjusted in quantitative studies. Table 1 gives an illustrative example of a model involving one type of follicle.
Any solution of such a model is a candidate for the basis of a quantitative
model of actual cell dynamics, once boundary conditions and scale conditions are
supplied. For example, the population of each type of follicle is known to be zero
at conception, and can be assumed to be below 1,000 at menopause. Several
studies have reported that peak primordial population is about 300,000 per
ovary [41], and there is initial evidence that primary follicle population peaks at
13–15 years of age in humans [24]. Using a combination of facts and quantitative
information, a range of quantitative models can be produced for later empirical
validation.
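As a small illustration of this final scaling step (a hypothetical helper, not part of the published method; the peak count and menopause threshold are the figures quoted above), a percent-of-peak CSP solution can be turned into absolute counts and screened against the boundary conditions:

def rescale_and_screen(percent_of_peak, peak_count=300_000, menopause_limit=1_000):
    # Convert a percent-of-peak series into absolute follicle counts, then
    # check the boundary conditions: zero at conception, small at menopause.
    absolute = [round(p * peak_count / 100) for p in percent_of_peak]
    plausible = absolute[0] == 0 and absolute[-1] <= menopause_limit
    return absolute, plausible

solution = [0, 40, 100, 75, 30, 8, 0]       # one CSP solution, percent of peak
print(rescale_and_screen(solution))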
Each of our qualitative models represents a class of CSPs, a set of variables
with integer or Boolean domains together with a set of constraints involving
those variables. A solution is an assignment of domain values to variables such
that no constraint is violated. In our methods, solutions are found by Minion
using backtrack search with a variety of search heuristics. In general, there will be
many more solutions to the CSP than realistic models, and many more realistic
models than models that accurately describe reflect what happens in nature.
Moreover, the resulting quantitative models can be graded by their complexity
– linear ODE, piecewise-linear ODE, quadratic ODE, ..., non-linear PDE. Hence
the ideal situation would be a CSP solution leading to an easily solved quantitative
model that is biologically accurate. However, no such solution need exist, and we
need to investigate the tradeoff between model complexity and model accuracy.
We can sample the space of CSP solutions by randomly ordering the variables
before making value assignments, thereby constructing a different but logically
equivalent search tree at each attempt. This allows us to estimate the likelihood
of “good” models being found (i.e. cheap and accurate), and thereby estimate
the computational costs involved in attempting to find the best model that can
be derived from our qualitative descriptions.
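A minimal sketch of this sampling idea follows (illustrative only; it mirrors the effect of Minion's randomised heuristics rather than its implementation): randomising the variable and value orders before each backtracking run walks a different but logically equivalent search tree, so repeated runs sample different solutions of the same constraints.

import random

def sample_solution(domains, constraints, rng):
    variables = list(domains)
    rng.shuffle(variables)                      # random variable order

    def extend(assignment):
        if len(assignment) == len(variables):
            return dict(assignment)
        var = variables[len(assignment)]
        values = list(domains[var])
        rng.shuffle(values)                     # random value order
        for val in values:
            candidate = {**assignment, var: val}
            if all(check(candidate) for check in constraints):
                result = extend(candidate)
                if result is not None:
                    return result
        return None                             # backtrack

    return extend({})

# Toy instance: x < y < z over the domain 0..4.
doms = {"x": range(5), "y": range(5), "z": range(5)}
cons = [lambda a: all(a[u] < a[v] for u, v in [("x", "y"), ("y", "z")]
                      if u in a and v in a)]
print(sample_solution(doms, cons, random.Random()))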
In this case study we have utilised recent advances in CSP technology such
as solver-independent modelling frameworks, specification–solver interfaces that
enhance CSP instances, and the use of solvers that can quickly find all solutions
to large and complex CSP instances [28,13,12]. Taken together, these advances
allow us to easily specify qualitative behaviour of cell dynamics, obtain solutions
that generate quantitative models, and systematically investigate the tradeoffs
between computational expense, model complexity and biological accuracy in a
domain for which there is extremely limited direct empirical data. Our investigations utilise the search heuristics used to find CSP solutions: solvers proceed by
backtrack search in a tree constructed by explicit choices of the current search variable and the current value assignment; by randomising these choices we can explore
the space of candidate solutions.
The framework for ovarian cells treats primordial follicles as a source, and
the other types as both sinks and sources. There is no feedback in the dynamical
system, but we see no reason why this aspect could not be included if required.
Moreover, it is relatively simple to incorporate other indicators of ovarian reserve [26,25,16] thereby obtaining an integrated model involving cells, hormones
and physiology. We therefore believe that this initial study can generalise to other
domains at other levels of systems biology from population-based epidemiology
to steered molecular dynamics.
5 Future directions for CP
The case study in Section 4 was realised using languages and tools developed
in the Constraint Solver Synthesiser project at St Andrews. Currently, applying
constraint technology to a large, complex problem requires significant manual
tuning by an expert. Such experts are rare. The central aim of the project is to
improve dramatically the scalability of constraint technology, while simultaneously removing its reliance on manual tuning by an expert. It is our view that
there are many techniques in the literature that, although effective in a limited
number of cases, are not suitable for general use. Hence, they are omitted from
current general solvers and remain relatively undeveloped. QM is an excellent
example.
Recent advances in CP technology allow us to
1. describe complex qualitative system behaviour in a language accessible and
understandable by anyone with a reasonable level of scientific and/or mathematical training;
2. optimise the definition of CSPs based on qualitative descriptions via analysis
of the options for variables, values and constraints;
3. use machine-learning to build an optimised bespoke solver for the class of
CSPs derived from the descriptions;
4. efficiently search the solution spaces of large and complex CSP instances.
However, we are at the proof-of-concept stage for QM, having shown the ability in principle to produce useful results, rather than extensive and peer-reviewed
research output. We now present specific avenues of research that would allow
not only the production of high quality qualitative models, but also a robust
schema for deriving a suitable quantitative model from the space of solutions of
a CSP that represents a QM. The research areas are given in order of realisability: the first version of Savile Row (Section 5.1) was released in July 2012 and is
under current active development, whereas the systematic search for models that
are both realistic and lead to computationally inexpensive differential equations
(Section 5.4) is a completely unexplored research topic.
Several of the references for the research topics mentioned in the remainder
of this section are incomplete. This is due to the work being part of unfinished
investigations, or being planned and designed as future investigations.
5.1 Essence′ and Savile Row
Savile Row [39] is a modelling assistant tool that reads the language Essence′
and transforms it into the input format of a number of solvers (currently Minion [20], Gecode [18] and Dominion [3]). It was designed from the start to be
solver-independent and easily extended with new transformation rules. It is also
straightforward to add new output languages supported by an alternate sequence
of transformations. At present Savile Row is at an early stage of development
compared to other tools such as MiniZinc [34]. However it has some features that
are particularly relevant to qualitative modelling, and its extensibility makes it
suitable for the future work we describe below.
Uniquely, Savile Row can produce Minion's and Dominion's logical metaconstraints for conjunction and disjunction. This is highly relevant to qualitative
modelling because disjunctions arise from exists statements, and conjunctions
from forAll statements (when they are nested inside exists or some logical operator). Exists and forAll will be extensively used in qualitative modelling to
model time. Minion’s logical metaconstraints can be much more efficient than
other methods [23].
Savile Row also implements common subexpression elimination (CSE) [36].
This replaces two or more identical expressions in a model with a single auxiliary
variable. The auxiliary variable is then constrained to be equal to the common
expression. In many cases CSE will strengthen propagation. CSEs tend to arise
when quantifiers are unrolled, so we expect this feature to be very relevant to
QM. At present Savile Row will only exploit identical common subexpressions.
To fully exploit CSE for QM, we would need to identify the types of non-identical
CSEs that occur with QM (for example, common subsets of disjunctions) and
extend Savile Row to eliminate them.
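To make the idea concrete, here is a small illustrative sketch of identical-subexpression elimination over nested-tuple expressions (the representation and names are invented here; this is not Savile Row's algorithm): every repeated subtree is replaced by a fresh auxiliary variable that is defined to equal the shared expression.

from collections import Counter

def subexpressions(expr, acc):
    # Count every compound subexpression, represented as a nested tuple.
    if isinstance(expr, tuple):
        acc[expr] += 1
        for child in expr[1:]:
            subexpressions(child, acc)

def eliminate_cse(expr):
    counts = Counter()
    subexpressions(expr, counts)
    shared = {e for e, n in counts.items() if n > 1}
    aux, defs = {}, []

    def rewrite(e):
        if not isinstance(e, tuple):
            return e
        rewritten = (e[0],) + tuple(rewrite(c) for c in e[1:])
        if e in shared:
            if e not in aux:
                aux[e] = "aux%d" % len(aux)
                defs.append((aux[e], rewritten))    # aux variable = shared expr
            return aux[e]
        return rewritten

    return rewrite(expr), defs

print(eliminate_cse(("+", ("*", "x", "y"), ("*", "x", "y"))))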
Expressing complex QM problems well in Essence′ is likely to require extensions to the language. In particular we have identified comprehensions as an
interesting future direction. These allow more flexible expression of constraints
with respect to quantifier variables and parameters. For example, suppose we
have a one-dimensional matrix x and we want to state that there exists a midpoint such that all variables before the mid-point are different, and the mid-point
equals some parameter p. Using a variable comprehension, we can express this as
follows. The comprehension creates a list of variables for the allDiff constraint.
exists i : int(0..max). allDiff([x[j] | j : int(0..max), j < i]) ∧ x[i] = p
Comprehensions afford a great deal of flexibility. As a second example, they
would allow the tuple lists of table constraints to be constructed on the fly
based on parameters and quantifier variables. Therefore we expect them to be
an excellent addition to the language for QM and for many other problems.
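For intuition, the condition expressed by the comprehension above can be evaluated directly over a fixed assignment; the sketch below is purely illustrative and mirrors the intended semantics rather than any Essence′ implementation.

def exists_midpoint(x, p):
    # There is an index i such that the entries before i are pairwise different
    # (the allDiff over the comprehension) and the entry at i equals p.
    return any(len(set(x[:i])) == i and x[i] == p for i in range(len(x)))

print(exists_midpoint([3, 1, 4, 4, 2], p=4))     # True: i = 2 works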
5.2 Solver Generation
A major challenge facing constraints research is to deliver constraint solving
that scales easily to problems of practical size. Current constraint solvers, such
as Choco [31], Eclipse [1], Gecode [18], Ilog Solver [22], or Minion [20] are monolithic in design, accepting a broad range of models. This convenience comes at
the price of a necessarily complex internal architecture, resulting in significant
overheads and inhibiting efficiency and scalability. Each solver may thus incorporate a large number of features, many of which will not be required for most
constraint problems. The complexity of current solvers also means that it is often
prohibitively difficult to incorporate new techniques as they appear in the literature. A further drawback is that current solvers perform little or no analysis of
an input model and the features of an individual model cannot be exploited to
produce a more efficient solving process.
To mitigate these drawbacks, constraint solvers often allow manual tuning
of the solving process. However, this requires considerable expertise, preventing
the widespread adoption of constraints as a technique for solving the most challenging combinatorial problems. The components of a constraint solver are also
usually tightly coupled, with complex restrictions on how they may be linked
together, making automated generation of different solvers difficult.
We address these challenges in the Constraint Solver Synthesiser project.
The benefits achieved in the framework lead to faster and more scalable solvers.
In addition, the automated approach simplifies the task of modelling constraint
problems by removing the need to manually optimise specifications. As well as
architecture-driven development, we utilise concepts from generative programming, AI, domain-specific software engineering and product-lines in the Constraint Solver Synthesiser approach.
Initial results from comparing solvers generated by Dominion with an existing
solver are positive and indicate this approach is promising [2]. Dominion is in fact
expected to make bigger gains in the cases where there are many interdependent
decisions to be made from a large number of components, where traditional
solvers are limited by having to cater for the generic problem.
The Dominion approach improves performance and scalability of solving constraint problems as a result of:
– tuning the solver to characteristics of the problem
– making more informed choices by analysing the input model
– specialising the solver by only incorporating required components, and
– providing extra functionality that can be added easily and used when required.
A number of avenues are open for further work. In particular learning how
to automatically create high quality solvers quickly is a major open problem.
This is essentially an instance of the Algorithm Selection Problem [37]. A lot of
research has investigated ways of tackling this problem, but substantial challenges
remain. A prime example of a new challenge in Algorithm Selection is the set of issues
related to contemporary machine architectures with a large number of computing
elements with diverse capabilities (e.g. multiple CPU and GPU cores in modern
laptops). Research to date has largely focussed on using a single processor, with
some research into parallelisation on homogeneous hardware. Being able to run
several algorithms at once has a significant impact on how algorithms should be
selected. In particular, constraints on the types of algorithms that can be run at the
same time, for example because only one of them can use the GPU, as well as
collaboration between the algorithms pose promising directions for research.
All of these directions are highly relevant to qualitative modelling, as advances that speed up constraint solving in practice would enable us to tackle
practical problems that are currently beyond the reach of CP.
5.3 Exploring Search Spaces I
Current CP solvers are tailored towards finding a single solution to a problem,
or proving no solution exists. The solution found can be either the first one
discovered, or the “best” solution under a single optimisation condition. In many
situations this is insufficient, as users want to be able to understand and reason
about all solutions to their problem. For many such problems, current CP is
simply useless. We believe CP solvers must be extended to be able to solve
such problems, while maintaining and improving the efficiency and ease-of-use
of existing CP tools.
Groups are one of the most fundamental mathematical concepts, and problems whose solutions form a group occur in huge numbers of both research and
real-world applications. All groups include an “identity” element, so the problem
of finding a single solution to a problem whose solutions form a group is trivial.
Enumerating all solutions to such problems is impractical, as groups considered
“small” by mathematicians often have over 10^100 elements.
The reason groups with more than 10^100 members can be handled is that
groups are rarely represented by a complete enumeration. Instead, groups are
represented by a small subset of their elements, which can be used to generate
the whole group, utilising the fact that groups are closed under composition of
their members. Using a small number of members of a structure to generate the
complete structure occurs in many areas of mathematics, including algebraic
structures such as groups, semigroups, vector spaces and lattices.
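A minimal illustration of this generate-rather-than-enumerate idea (not tied to any particular solver): starting from a small generating set of permutations, repeatedly composing until the set is closed recovers the whole group without ever being handed its full list of elements.

def compose(p, q):
    # Composition of permutations given as tuples with p[i] = image of i.
    return tuple(p[q[i]] for i in range(len(q)))

def generate_group(generators):
    identity = tuple(range(len(generators[0])))
    elements, frontier = {identity}, [identity]
    while frontier:
        current = frontier.pop()
        for g in generators:
            nxt = compose(g, current)
            if nxt not in elements:
                elements.add(nxt)
                frontier.append(nxt)
    return elements

# A 4-cycle and a transposition generate all 24 permutations of 4 points.
cycle, swap = (1, 2, 3, 0), (1, 0, 2, 3)
print(len(generate_group([cycle, swap])))        # 24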
We plan on extending CP so it can generate efficient compact representations
of the solutions to problems, and allow users to explore and understand these
solutions. This will allow CP to be used to tackle many new classes of problems,
of interest to many different types of user.
A related issue is the parallel exploration of search spaces. This is an especially relevant issue as during the last few years, a dramatic paradigm shift from
ever faster processors to an ever increasing number of processors and processing
elements has occurred. Even basic contemporary machines have several generic
processing elements and specialised chips for e.g. graphics processing.
While many systems for parallel constraint solving have been developed, we
are not aware of any in current use that can be deployed easily by non-expert
users. Recent work at St Andrews started to address this problem [30] and the
latest released version of the Minion constraint solver (version 0.14, July 2012)
has preliminary support for the large-scale distributed solving of any constraint
problem. However, further research is required to make it easier to use and
evaluate its usefulness for qualitative models.
5.4 Exploring Search Spaces II
In Section 5.3 we described issues to do with the efficient search of large solution
spaces, which is clearly of fundamental importance for QM. However, even if
efficiency is assured, there are two further problems to overcome if high quality
QM is to be achieved. The first is the organisation of search in a controlled and
stochastic way – i.e. using the mathematical theory of probability to express and
utilise the inherent degrees of uncertainty about which qualitative model solutions
are likely to lead to “good” quantitative models. Existing CP search heuristics
allow the user to specify the order in which the variables and/or values are
selected during search. This order can be randomised (implemented for example
as the -randomiseorder and -randomseed heuristic options in Minion), but this is
far from a fully stochastic exploration of the search space. Both BIOCHAM and
BioPEPA (described in Section 3) fully support iterative stochastic simulation
allowing convergence to preferred numeric models.
The second issue relates to the tradeoff between scientific accuracy and plausibility of a QM (as determined by testing generalisation to empirical data) and
the mathematical and computational complexity of the preferred quantitative
Fig. 3. Simplified tradeoff between QM realism and numeric model complexity
model, as shown in Figure 3. Qualitative models can be ranked in terms of
realism in a continuum ranging from highly unrealistic to a highly accurate simulation of what we understand the system in question to be. The models can
also be ranked in terms of the type of differential equations needed to implement a numeric simulation. Many simple systems of linear ODEs are solvable in
polynomial time and space. Others are not (depending on Lipschitz conditions
and whether or not P = PSPACE [29]). Nonlinear ODEs are strictly harder
to solve as a class, and most PDEs have no closed form solution. The complexity of obtaining approximate solutions follows the same scale, in general. It is
clear that given two qualitative models that are roughly equivalent in terms of
assessed realism, the one that leads to the differential equations that are easier
to solve should normally be selected. The CP technology needed to make these
decisions does not exist, and its development is a completely unexplored avenue
of future research.
6 Conclusions
A large proportion of research effort in CP is directed inwards. Quite correctly,
researchers seek ways to improve the modelling of CSPs, the efficiency of constraint propagators, and the range and scope of constraints in a general sense.
This is as it should be, and the authors’ combined research effort is predominantly inwards in this sense. However, if technologies such as CP are not being
used by non-developers to solve problems in the wider domain, then they are of
intellectual interest to only a small number of insiders.
In this paper we describe an area of use for CP technologies that has fallen
into neglect, in our opinion for no good reason. The temporal logic and process
calculus research communities are achieving success in qualitative modelling by
publishing papers, being awarded grants, and by having the fruits of their research efforts used to solve real problems in systems biology. But dynamic systems can be perfectly well described in terms of finite difference relationships
that obviate the need for temporal and process components in the underlying
system description language. All finite difference methods rely on discretising
a function on a grid, and the discretisation can be readily expressed in terms
of CP variables and values with simple arithmetic constraints: in Section 4 we
described the standard backward-difference approximation of a derivative, using
unit step-length in order to maintain integer value domains. Forward and central
differences can be approximated using the same technique, as can derivatives to
any required higher order. The fact that time is the independent variable in our
models is unimportant: the discretisation works for an arbitrary choice of variable
representation. That the numeric error in finite difference approximations of
derivatives is proportional to a power of the step size (first order for our forward and backward
differences; second order for central differences) is also unimportant: our aim is to derive
a CSP with a larger than needed solution space, in order not to rule out realistic
models that would not result from an a priori choice of differential equation model.
In addition, it is our view that the CP framework is inherently more attractive
than temporal and process frameworks, since the ability to formally reason about
a timeline in terms of “until”, “since”, etc. is not needed, and, if present, makes
searching for solutions harder than necessary due to well-documented problems
with state-space explosion.
However, current CP technology is not well enough developed to compete
with (and ideally replace) the areas of computer science that have dedicated
more research effort and resource to this area of study. CP research effort into
qualitative modelling faltered in the early years of this century, and has not yet
recovered. The specific areas identified in Section 5 are a non-exhaustive set of
future research directions for the CP community that, if successful, would allow
our languages and tools to be routinely used by researchers from the physical,
biological and economic sciences.
Acknowledgments. The authors are supported by United Kingdom EPSRC
grant EP/H004092/1. LK is supported by a SICSA studentship and an EPSRC
fellowship.
References
1. Aggoun, A., Chan, D., Dufresne, P., Falvey, E., Grant, H., Harvey, W., Herold,
A., Macartney, G., Meier, M., Miller, D., Mudambi, S., Novello, S., Perez, B., van
Rossum, E., Schimpf, J., Shen, K., Tsahageas, P.A., de Villeneuve, D.H.: Eclipse
user manual release 5.10 (2006), http://eclipse-clp.org/
2. Balasubramaniam, D., Jefferson, C., Kotthoff, L., Miguel, I., Nightingale, P.: An
automated approach to generating efficient constraint solvers. In: 34th International Conference on Software Engineering (Jun 2012)
3. Balasubramaniam, D., de Silva, L., Jefferson, C., Kotthoff, L., Miguel, I., Nightingale, P.: Dominion: An architecture-driven approach to generating efficient constraint solvers. In: Proceedings of the 9th Working IEEE/IFIP Conference on
Software Architecture. IEEE (2011), (To appear)
4. Bockmayr, A., Courtois, A.: Using hybrid concurrent constraint programming to
model dynamic biological systems. In: 18th International Conference on Logic Programming. pp. 85–99. Springer (2002)
5. Bristol-Gould, S.K., Kreeger, P.K., Selkirk, C.G., Kilen, S.M., Mayo, K.E., Shea,
L.D., Woodruff, T.K.: Fate of the initial follicle pool: empirical and mathematical
evidence supporting its sufficiency for adult fertility. Developmental biology 298(1),
149–54 (Oct 2006)
6. Calder, M., Hillston, J.: Process algebra modelling styles for biomolecular processes. In: Priami, C., Back, R.J., Petre, I. (eds.) Transactions on Computational Systems Biology XI, pp. 1–25. Springer-Verlag, Berlin, Heidelberg (2009),
http://dx.doi.org/10.1007/978-3-642-04186-0_1
7. Calzone, L., Chabrier-Rivier, N., Fages, F., Soliman, S.: Machine learning biochemical networks from temporal logic properties. T. Comp. Sys. Biology pp. 68–94
(2006)
8. Calzone, L., Fages, F., Soliman, S.: BIOCHAM: an environment for modeling biological systems and formalizing experimental knowledge. Bioinformatics (Oxford,
England) 22(14), 1805–7 (Jul 2006)
9. Ciocchetta, F., Hillston, J.: Bio-pepa: A framework for the modelling and analysis
of biological systems. Theor. Comput. Sci. 410(33-34), 3065–3084 (Aug 2009)
10. Clancy, D.: Qualitative simulation as a temporally-extended constraint satisfaction
problem. Proc. AAAI 98 (1998)
11. Degasperi, A., Calder, M.: On the formalisation of gradient diffusion models of
biological systems. In: Proc. 8th Workshop on Process Algebra and Stochastically
Timed Activities. pp. 139–144 (2009)
12. Distler, A., Kelsey, T., Kotthoff, L., Jefferson, C.: The semigroups of order 10. In:
Principles and Practice of Constraint Programming - CP 2012, 18th International
Conference, CP 2012, Proceedings. Lecture Notes in Computer Science, vol. 7514,
pp. 883–899. Springer (2012)
13. Distler, A., Kelsey, T.: The monoids of orders eight, nine & ten. Annals of Mathematics and Artificial Intelligence 56(1), 3–21 (Jul 2009)
14. Escrig, M.T., Cabedo, L.M., Pacheco, J., Toledo, F.: Several Models on Qualitative
Motion as instances of the CSP. Revista Iberoamericana de Inteligencia Artificial
6(17), 55–71 (2002)
15. Faddy, M.J., Gosden, R.G.: A mathematical model of follicle dynamics in the
human ovary. Human reproduction (Oxford, England) 10(4), 770–5 (Apr 1995)
16. Fleming, R., Kelsey, T.W., Anderson, R.A., Wallace, W.H., Nelson, S.M.: Interpreting human follicular recruitment and antimüllerian hormone concentrations
throughout life. Fertil. Steril. (Aug 2012)
17. Frisch, A.M., Harvey, W., Jefferson, C., Martínez-Hernández, B., Miguel, I.:
Essence: A constraint language for specifying combinatorial problems. Constraints
13(3), 268–306 (Jun 2008)
18. http://www.gecode.org/
19. Gent, I.P., Jefferson, C., Miguel, I.: Minion: A fast scalable constraint solver. In:
Brewka, G., Coradeschi, S., Perini, A., Traverso, P. (eds.) The European Conference on Artificial Intelligence 2006 (ECAI 06). pp. 98–102. IOS Press (2006)
20. Gent, I.P., Jefferson, C.A., Miguel, I.: MINION: A fast scalable constraint solver.
In: Proceedings of the Seventeenth European Conference on Artificial Intelligence.
pp. 98–102 (2006)
21. Haefner, J.: Modeling Biological Systems. Springer-Verlag, New York (2005)
22. http://www.ilog.com/products/cp/
23. Jefferson, C., Moore, N., Nightingale, P., Petrie, K.E.: Implementing logical connectives in constraint programming. Artificial Intelligence 174, 1407–1429 (2010)
24. Kelsey, T.W., Anderson, R.A., Wright, P., Nelson, S.M., Wallace, W.H.B.: Data-driven assessment of the human ovarian reserve. Molecular human reproduction
18(2), 79–87 (Sep 2011)
25. Kelsey, T.W., Wallace, W.H.: Ovarian volume correlates strongly with the number
of nongrowing follicles in the human ovary. Obstet Gynecol Int 2012, 305025 (2012)
26. Kelsey, T.W., Wright, P., Nelson, S.M., Anderson, R.A., Wallace, W.H.: A validated model of serum anti-müllerian hormone from conception to menopause.
PLoS ONE 6(7), e22024 (2011)
27. Kelsey, T., Linton, S.: Qualitative models of cell dynamics as constraint satisfaction
problems. In: Backhoven, R., Will, S. (eds.) Proc. of the Workshop on Constraint
Based Methods for Bioinformatics (WCB12). pp. 16–22 (2012)
28. Kelsey, T., Linton, S., Roney-Dougal, C.M.: New developments in symmetry breaking in search using computational group theory. In: Buchberger, B., Campbell, J.A.
(eds.) AISC. Lecture Notes in Computer Science, vol. 3249, pp. 199–210. Springer
(2004)
29. Ko, K.I.: On the computational complexity of ordinary differential equations. Inf.
Control 58(1-3), 157–194 (Jul 1984)
30. Kotthoff, L., Moore, N.C.: Distributed solving through model splitting. In: 3rd
Workshop on Techniques for implementing Constraint Programming Systems
(TRICS). pp. 26–34 (2010)
31. Laburthe, F.: Choco: a constraint programming kernel for solving combinatorial
optimization problems, http://choco.sourceforge.net/
32. Menzies, T., Compton, P.: Applications of abduction: hypothesis testing of neuroendocrinological qualitative compartmental models. Artificial intelligence in
medicine 10(2), 145–75 (Jun 1997)
33. Menzies, T., Compton, P., Feldman, B., Toth, T.: Qualitative compartmental modelling. AAAI Technical Report SS-92-02 (1992)
34. Nethercote, N., Stuckey, P.J., Becket, R., Brand, S., Duck, G.J., Tack, G.: Minizinc:
Towards a standard cp modelling language. In: Proceedings of 13th International
Conference on Principles and Practice of Constraint Programming. pp. 529–543
(2007)
35. Radke-Sharpe, N., White, K.: The role of qualitative knowledge in the formulation
of compartmental models. IEEE Transactions on Systems, Man and Cybernetics,
Part C (Applications and Reviews) 28(2), 272–275 (May 1998)
36. Rendl, A., Miguel, I., Gent, I.P., Jefferson, C.: Automatically enhancing constraint
model instances during tailoring. In: Proceedings of Eighth Symposium on Abstraction, Reformulation, and Approximation (SARA) (2009)
37. Rice, J.R.: The algorithm selection problem. Advances in Computers 15, 65–118
(1976)
38. Rizk, A., Batt, G., Fages, F., Soliman, S.: Continuous valuations of temporal logic
specifications with applications to parameter optimization and robustness measures. Theor. Comput. Sci. 412(26), 2827–2839 (2011)
39. http://savilerow.cs.st-andrews.ac.uk/
40. Tsoukias, N.M.: Nitric oxide bioavailability in the microcirculation: insights from
mathematical models. Microcirculation (New York, N.Y. : 1994) 15(8), 813–34
(Nov 2008)
41. Wallace, W.H.B., Kelsey, T.W.: Human ovarian reserve from conception to the
menopause. PloS one 5(1), e8772 (Jan 2010)
BIG COHEN-MACAULAY MODULES,
MORPHISMS OF PERFECT COMPLEXES, AND
INTERSECTION THEOREMS IN LOCAL ALGEBRA
LUCHEZAR L. AVRAMOV, SRIKANTH B. IYENGAR, AND AMNON NEEMAN
Abstract. There is a well known link from the first topic in the title to the
third one. In this paper we thread that link through the second topic. The
central result is a criterion for the tensor nilpotence of morphisms of perfect
complexes over commutative noetherian rings, in terms of a numerical invariant of the complexes known as their level. Applications to local rings include a
strengthening of the Improved New Intersection Theorem, short direct proofs
of several results equivalent to it, and lower bounds on the ranks of the modules in every finite free complex that admits a structure of differential graded
module over the Koszul complex on some system of parameters.
1. Introduction
A big Cohen-Macaulay module over a commutative noetherian local ring R is a
(not necessarily finitely generated) R-module C such that some system of parameters of R forms a C-regular sequence. In [16] Hochster showed that the existence
of such modules implies several fundamental homological properties of finitely generated R-modules. In [17], published in [18], he proved that big Cohen-Macaulay
modules exist for algebras over fields, and conjectured their existence in the case
of mixed characteristic. This was recently proved by Y. André in [2]; as a major
consequence many “Homological Conjectures” in local algebra are now theorems.
A perfect R-complex is a bounded complex of finite projective R-modules. Its
level with respect to R, introduced in [6] and defined in 2.3, measures the minimal
number of mapping cones needed to assemble a quasi-isomorphic complex from
bounded complexes of finite projective modules with differentials equal to zero.
The main result of this paper, which appears as Theorem 3.3, is the following
Tensor Nilpotence Theorem. Let f : G → F be a morphism of perfect complexes
over a commutative noetherian ring R.
If f factors through a complex whose homology is I-torsion for some ideal I of
R with height I ≥ levelR HomR (G, F ), then the induced morphism
⊗nR f : ⊗nR G → ⊗nR F
is homotopic to zero for some non-negative integer n.
Date: November 15, 2017.
2010 Mathematics Subject Classification. 13D22 (primary); 13D02, 13D09 (secondary).
Key words and phrases. big Cohen-Macaulay module, homological conjectures, level, perfect
complex, rank, tensor nilpotent morphism.
Partly supported by NSF grants DMS-1103176 (LLA) and DMS-1700985 (SBI).
Big Cohen-Macaulay modules play an essential, if discreet, role in the proof, as a tool
for constructing special morphisms in the derived category of R; see Proposition 3.7.
In applications to commutative algebra it is convenient to use another property
of morphisms of perfect complexes: f is fiberwise zero if H(k(p) ⊗R f ) = 0 holds for
every p in Spec R. Hopkins [21] and Neeman [25] have shown that this is equivalent
to tensor nilpotence; this is a key tool for the classification of the thick subcategories
of perfect R-complexes.
It is easy to see that the level of a complex does not exceed its span, defined
in 2.1. Due to these remarks, the Tensor Nilpotence Theorem is equivalent to the
Morphic Intersection Theorem. If f is not fiberwise zero and factors through
a complex with I-torsion homology for an ideal I of R, then there are inequalities:
span F + span G − 1 ≥ levelR HomR (G, F ) ≥ height I + 1 .
In Section 4 we use this result to prove directly, and sometimes to generalize
and sharpen, several basic theorems in commutative algebra. These include the
Improved New Intersection Theorem, the Monomial Theorem and several versions
of the Canonical Element Theorem. All of them are equivalent, but we do not
know if they imply the Morphic Intersection Theorem; a potentially significant
obstruction to that is that the difference span F − levelR F can be arbitrarily big.
Another application, in Section 5, yields lower bounds for ranks of certain finite
free complexes, related to a conjecture of Buchsbaum and Eisenbud, and Horrocks.
In [7] a version of the Morphic Intersection Theorem for certain tensor triangulated categories is proved. This has implications for morphisms of perfect complexes
of sheaves and, more generally, of perfect differential sheaves, over schemes.
2. Perfect complexes
Throughout this paper R will be a commutative noetherian ring.
This section is a recap on the various notions and constructions, mainly concerning
perfect complexes, needed in this work. Pertinent references include [6, 26].
2.1. Complexes. In this work, an R-complex (a shorthand for ‘a complex of Rmodules’) is a sequence of homomorphisms of R-modules
X :=
∂X
X
∂n−1
n
→ Xn−1 −−−→ Xn−2 −→ · · ·
· · · −→ Xn −−−
such that ∂ X ∂ X = 0. We write X ♮ for the graded R-module underlying X. The ith
i
X
suspension of X is the R-complex Σi X with (Σi X)n = Xn−i and ∂nΣ X = (−1)i ∂n−i
for each n. The span of X is the number
span X := sup{i | X_i ≠ 0} − inf{i | X_i ≠ 0} + 1 .

Thus span X = −∞ if and only if X = 0, and span X = ∞ if and only if X_i ≠ 0
for infinitely many i ≥ 0. The span of X is finite if and only if span X is a natural
number. When span X is finite we say that X is bounded.
Complexes of R-modules are objects of two distinct categories.
In the category of complexes C(R) a morphism f : Y → X of R-complexes is a
family (f_i : Y_i → X_i)_{i∈Z} of R-linear maps satisfying ∂_i^X f_i = f_{i−1} ∂_i^Y . It is a quasi-isomorphism if H(f ), the induced map in homology, is bijective. Complexes that
can be linked by a string of quasi-isomorphisms are said to be quasi-isomorphic.
The derived category D(R) is obtained from C(R) by inverting all quasi-isomorphisms. For constructions of the localization functor C(R) → D(R) and of the
derived functors − ⊗LR − and RHomR (−, −), see e.g. [13, 31, 24]. When P is a complex
of projectives with P_i = 0 for i ≪ 0, the functors P ⊗LR − and RHomR (P, −) are
represented by P ⊗R − and HomR (P, −), respectively. In particular, the localization
functor induces for each n a natural isomorphism of abelian groups

(2.1.1)    H_{−n}(RHomR (P, X)) ≅ HomD(R) (P, Σn X) .
2.2. Perfect complexes. In C(R), a perfect R-complex is a bounded complex
of finitely generated projective R-modules. When P is perfect, the R-complex
P ∗ := HomR (P, R) is perfect and the natural biduality map
P −→ P ∗∗ = HomR (HomR (P, R), R)
is an isomorphism. Moreover for any R-complex X the natural map
P ∗ ⊗R X −→ HomR (P, X)
is an isomorphism. In the sequel these properties are used without comment.
2.3. Levels. A length l semiprojective filtration of an R-complex P is a sequence
of R-subcomplexes of finitely generated projective modules

0 = P (0) ⊆ P (1) ⊆ · · · ⊆ P (l) = P

such that P (i − 1)♮ is a direct summand of P (i)♮ and the differential of P (i)/P (i−1)
is equal to zero, for i = 1, . . . , l. For every R-complex F , we set

levelR F = inf{ l ∈ N | F is a retract of some R-complex P that has a semiprojective filtration of length l } .

By [6, 2.4], this number is equal to the level of F with respect to R, defined in [6, 2.1].
In particular, levelR F is finite if and only if F is quasi-isomorphic to some perfect
complex. When F is quasi-isomorphic to a perfect complex P , one has

(2.3.1)    levelR F ≤ span P .

Indeed, if P := 0 → P_b → · · · → P_a → 0, then consider the filtration by subcomplexes P (n) := P_{<n+a}. The inequality can be strict; see 2.7 below.
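For a concrete instance of the filtration behind (2.3.1) (an illustration added here, not taken from the source), consider a two-term perfect complex concentrated in degrees 0 and 1, where the filtration P(n) := P_{<n+a} with a = 0 gives:

\[
P : 0 \to P_1 \xrightarrow{\ d\ } P_0 \to 0, \qquad
0 = P(0) \subseteq P(1) = (0 \to P_0 \to 0) \subseteq P(2) = P .
\]

Both quotients P(1)/P(0) ≅ P_0 and P(2)/P(1) ≅ Σ1 P_1 have zero differential, so levelR P ≤ 2 = span P.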
When R is regular, any R-complex F with H(F ) finitely generated satisfies

(2.3.2)    levelR F ≤ dim R + 1 .
For R-complexes X and Y one has

(2.3.3)    levelR (Σi X) = levelR X for every integer i, and
           levelR (X ⊕ Y ) = max{levelR X, levelR Y } .

These equalities follow easily from the definitions.
Lemma 2.4. The following statements hold for every perfect R-complex P .
(1) levelR (P ∗ ) = levelR P .
(2) For each perfect R-complex Q there are inequalities
levelR (P ⊗R Q) ≤ levelR P + levelR Q − 1 .
levelR HomR (P, Q) ≤ levelR P + levelR Q − 1 .
Proof. (1) If P is a retract of P ′ , then levelR P ≤ levelR P ′ and P ∗ is a retract of
(P ′ )∗ . Thus, we can assume P itself has a finite semiprojective filtration {P (n)}_{n=0}^{l}.
The inclusions P (l − i) ⊆ P (l − i + 1) ⊆ P define subcomplexes

P ∗ (i) := Ker(P ∗ −→ P (l − i)∗ ) ⊆ Ker(P ∗ −→ P (l − i − 1)∗ ) =: P ∗ (i + 1)

of finitely generated projective modules. They form a length l semiprojective filtration of P ∗ , as P ∗ (i − 1)♮ is a direct summand of P ∗ (i)♮ and there are isomorphisms

P ∗ (n)/P ∗ (n − 1) ≅ (P (l − n + 1)/P (l − n))∗ .

This gives levelR P ∗ ≤ levelR P , and the reverse inequality follows from P ≅ P ∗∗ .
(2) Assume first that P has a semiprojective filtration {P (n)}ln=0 and Q has a
semiprojective filtration {Q(n)}m
n=0 . For all h, i, we identity P (h) ⊗R Q(i) with a
subcomplex of P ⊗R Q. For each non-negative integer n ≥ 0 form the subcomplex
X
C(n) :=
P (j + 1) ⊗R Q(n − j)
j>0
of P ⊗R Q. A direct computation yields an isomorphism of R-complexes
Q(n − j)
C(n) ∼ X P (j + 1)
⊗R
.
=
C(n − 1)
P (j)
Q(n − j − 1)
j>0
l+m−1
{C(n)}n=0
Thus
is a semiprojective filtration of P ⊗R Q.
The second inequality in (2) follows from the first one, given (1) and the isomorphism HomR (P, Q) ∼
= P ∗ ⊗R Q. Next we verify the first inequality. There is nothing
to prove unless the levels of P and Q are finite. Thus we may assume that P is a
retract of a complex P ′ with a semiprojective filtration of length l = levelR P and Q
is a retract of a complex Q′ with a semiprojective filtration of length m = levelR Q.
Then P ⊗R Q is a retract of P ′ ⊗R Q′ , and—by what we have just seen—this
complex has a semiprojective filtration of length l + m − 1, as desired.
2.5. Ghost maps. A ghost is a morphism g : X → Y in D(R) such that H(g) = 0;
see [11, §8]. Evidently a composition of morphisms one of which is ghost is a ghost.
The next result is a version of the “Ghost Lemma”; cf. [11, Theorem 8.3], [29,
Lemma 4.11], and [5, Proposition 2.9].
Lemma 2.6. Let F be an R-complex and c an integer with c ≥ levelR F .
When g : X → Y is a composition of c ghosts the following morphisms are ghosts
F ⊗LR g : F ⊗LR X −→ F ⊗LR Y
and
RHomR (F, g) : RHomR (F, X) −→ RHomR (F, Y )
Proof. For every R-complex W there is a canonical isomorphism
≃
RHomR (F, R) ⊗LR W −−→ RHomR (F, W ) ,
so it suffices to prove the first assertion. For that, we may assume that F has a
semiprojective filtration {F (n)}ln=0 , where l = levelR F . By hypothesis, g = h ◦ f
where f : X → W is a (c − 1)fold composition of ghosts and h : W → Y is a ghost.
Tensoring these maps with the exact sequence of R-complexes
ι
π
0 −→ F (1) −−→ F −−→ G −→ 0
TENSOR NILPOTENT MORPHISMS
5
where G := F/F (1), yields a commutative diagram of graded R-modules
H(F (1) ⊗R X)
// H(F ⊗R X)
H(F (1)⊗f )
// H(G ⊗R X)
H(F ⊗f )
H(F (1) ⊗R W )
H(ι⊗W )
// H(F ⊗R W )
H(F (1)⊗h)
H(G⊗f )
H(π⊗W )
// H(G ⊗R W )
H(F ⊗h)
H(F (1) ⊗R Y )
H(ι⊗Y )
// H(F ⊗R Y )
// H(G ⊗R Y )
where the rows are exact. Since levelR G ≤ l − 1 ≤ c − 1, the induction hypothesis
implies G ⊗ f is a ghost; that is to say, H(G ⊗ f ) = 0. The commutativity of the
diagram above and the exactness of the middle row implies that
Im H(F ⊗ f ) ⊆ Im H(ι ⊗ W )
This entails the inclusion below.
Im H(F ⊗ g) = H(F ⊗ h)(Im H(F ⊗ f ))
⊆ H(F ⊗ h)(Im H(ι ⊗ W ))
⊆ Im H(ι ⊗ Y ) H(F (1) ⊗ h))
=0
The second equality comes from the commutativity of the diagram. The last one
holds because F (1) is graded-projective and H(h) = 0 imply H(F (1) ⊗ h) = 0.
2.7. Koszul complexes. Let x := x1 , . . . , xn be elements in R.
♮
We write K(x) for the Koszul complex on x. Thus K(x) is the exterior algebra
K
on a free R-module K(x)1 with basis {e
x1 , . . . , x
en }, and ∂ is the unique R-linear
map that satisfies the Leibniz rule and has ∂(e
xi ) = xi for i = 1, . . . , n. In particular,
K(x) is a DG (differential graded) algebra, and so its homology H(K(x)) is a graded
algebra with H0 (K(x)) = R/(x). This implies (x) H(K(x)) = 0.
Evidently K(x) is a perfect R-complex; it is indecomposable when R is local;
see [1, 4.7]. As K(x)i is non-zero precisely for 0, . . . , n, from (2.3.1) one gets
levelR K(x) ≤ span K(x) = n + 1 .
Equality holds if R is local and x is a system of parameters; see Theorem 4.2 below.
However, span K(x) − levelR K(x) can be arbitrarily large; see [1, Section 3].
For any Koszul complex K on n elements, there are isomorphisms of R-complexes
n
M
n
Σi K ( i ) .
K∗ ∼
=
= Σ−n K and K ⊗R K ∼
i=0
See [8, Propositions 1.6.10 and 1.6.21]. It thus follows from (2.3.3) that
(2.7.1)
levelR HomR (K, K) = levelR (K ⊗R K) = levelR K .
In particular, the inequalities in Lemma 2.4(2) can be strict.
3. Tensor nilpotent morphisms
In this section we prove the Tensor Nilpotence Theorem announced in the introduction. We start by reviewing the properties of interest.
6
L. L. AVRAMOV, S. B. IYENGAR, AND A. NEEMAN
3.1. Tensor nilpotence. Let f : Y → X be a morphism in D(R).
The morphism f is said to be tensor nilpotent if for some n ∈ N the morphism
f ⊗L · · · ⊗L f : Y ⊗LR · · · ⊗LR Y −→ X ⊗LR · · · ⊗LR X
| R {z R }
n
is equal to zero in D(R); when the R-complexes X, Y are perfect this means that
the morphism ⊗n f : ⊗nR Y → ⊗nR X is homotopic to zero. When X is perfect and
f : X → Σl X is a morphism with ⊗n f homotopic to zero the n-fold composition
Σl f
Σ2l f
Σnl
X −→ Σl X −−−→ Σ2n X −−−−→ · · · −−−→ Σnl X
is also homotopic to zero. The converse does not hold, even when R is a field for
in that case tensor nilpotent morphisms are zero.
3.2. Fiberwise zero morphisms. A morphism f : Y → X that satisfies
k(p) ⊗LR f = 0
in D(k(p))
for every p ∈ Spec R
is said to be fiberwise zero. This is equivalent to requiring k ⊗LR f = 0 in D(k) for
every homomorphism R → k with k a field. In D(k) a morphism is zero if (and
only if) it is a ghost, so the latter condition is equivalent to H(k ⊗LR f ) = 0.
In D(k), a morphism is tensor nilpotent exactly when it is zero. Thus if f is
tensor nilpotent, it is fiberwise zero. There is a partial converse: If a morphism
f : G → F of perfect R-complexes is fiberwise zero, then it is tensor nilpotent. This
was proved by Hopkins [21, Theorem 10] and Neeman [25, Theorem 1.1].
The next result is the Tensor Nilpotence Theorem from the Introduction. Recall
that an R-module is said to be I-torsion if each one of its elements is annihilated
by some power of I.
Theorem 3.3. Let R be a commutative noetherian ring and f : G → F a morphism
of perfect R-complexes. If for some ideal I of R the following conditions hold
(1) f factors through some complex with I-torsion homology, and
(2) levelR HomR (G, F ) ≤ height I ,
then f is fiberwise zero. In particular, f is tensor nilpotent.
The proof of the theorem is given after Proposition 3.7.
Remark 3.4. Lemma 2.4 shows that the inequality (2) is implied by
levelR F + levelR G ≤ height I + 1 ;
the converse does not hold; see (2.7.1).
On the other hand, condition (2) cannot be weakened: Let (R, m, k) be a local
ring and G the Koszul complex on some system of parameters of R and let
f : G −→ (G/G6d−1 ) ∼
= Σd R
be the canonical surjection with d = dim R. Then G is an m-torsion complex and
levelR G = d + 1; see 2.7. Evidently H(k ⊗R f ) 6= 0, so f is not fiberwise zero.
In the proof of Theorem 3.3 we exploit the functorial nature of I-torsion.
TENSOR NILPOTENT MORPHISMS
7
3.5. Torsion complexes. The derived I-torsion functor assigns to every X in
D(R) an R-complex RΓI X; when X is a module it computes its local cohomology:
HnI (X) = H−n (RΓI X) holds for each integer n. There is a natural morphism
t : RΓI X −→ X in D(R) that has the following universal property: Every morphism
Y → X such that H(Y ) is I-torsion factors uniquely through t; see Lipman [24,
Section 1]. It is easy to verify that the following conditions are equivalent.
(1) H(X) is I-torsion.
(2) H(X)p = 0 for each prime ideal p 6⊇ I.
(3) The natural morphism t : RΓI X → X is a quasi-isomorphism.
When they hold, we say that X is I-torsion. Note a couple of properties:
(3.5.1)
(3.5.2)
If X is I-torsion, then X ⊗LR Y is I-torsion for any R-complex Y .
There is a natural isomorphism RΓI (X ⊗L Y ) ∼
= (RΓI X) ⊗L Y .
R
R
Indeed, H(Xp ) ∼
= H(X)p = 0 holds for each p 6⊇ I, giving Xp = 0 in D(R). Thus
(X ⊗LR Y )p ∼
= Xp ⊗LR Y ∼
=0
holds in D(R). It yields H(X ⊗LR Y )p ∼
= H((X ⊗LR Y )p ) = 0, as desired.
A proof of the isomorphism in (3.5.2) can be found in [24, 3.3.1].
3.6. Big Cohen-Macaulay modules. Let (R, m, k) be a local ring.
A (not necessarily finitely generated) R-module C is big Cohen-Macaulay if every
system of parameters for R is a C-regular sequence, in the sense of [8, Definition
1.1.1]. In the literature the name is sometimes used for R-modules C that satisfy
the property for some system of parameters for R; however, the m-adic completion
of C is then big Cohen-Macaulay in the sense above; see [8, Corollary 8.5.3].
The existence of big Cohen-Macaulay was proved by Hochster [16, 17] in case
when R contains a field as a subring, and by André [2] when it does not; for the
latter case, see also Heitmann and Ma [15].
In this paper, big Cohen-Macaulay modules are visible only in the next result.
Proposition 3.7. Let I be an ideal in R and set c := height I.
When C is a big Cohen-Macaulay R-module the following assertions hold.
(1) The canonical morphism t : RΓI C → C from the I-torsion complex RΓI C
(see 3.5) is a composition of c ghosts.
(2) If a morphism g : G → C of R-complexes with levelR G ≤ c factors through
some I-torsion complex, then g = 0.
Proof. (1) We may assume I = (x), where x = {x1 , . . . , xc } is part of a system of
parameters for R; see [8, Theorem A.2]. The morphism t factors as
RΓ(x1 ,...,xc ) (C) −→ RΓ(x1 ,...,xc−1 ) (C) −→ · · · −→ RΓ(x1 ) (C) −→ C .
Since the sequence x1 , . . . , xc is C-regular, we have Hi (RΓ(x1 ,...,xj ) (C)) = 0 for
i 6= −j; see [8, (3.5.6) and (1.6.16)]. Thus every one of the arrows above is a ghost,
so that t is a composition of c ghosts, as desired.
(2) Suppose g factors as G → X → C with X an I-torsion R-complex. As noted
in 3.5, the morphism X → C factors through t, so g factors as
g′
g′′
t
G −→ X −→ RΓI C −
→C.
8
L. L. AVRAMOV, S. B. IYENGAR, AND A. NEEMAN
In view of the hypothesis levelR G ≤ c and part (1), Lemma 2.6 shows that
RHomR (G, t) : RHomR (G, RΓI C) −→ RHomR (G, C)
is a ghost. Using brackets to denote cohomology classes, we get
[g] = [tg ′′ g ′ ] = H0 (RHomR (G, t))([g ′′ g ′ ]) = 0 .
Due to the isomorphism (2.1.1), this means that g is zero in D(R).
Lemma 3.8. Let f : G → F be a morphism of perfect R-complexes, where G is
finite free with Gi = 0 for i ≪ 0 and F is perfect. Let f ′ : F ∗ ⊗R G → R denote
the composed morphism in the next display, where e is the evaluation map:
F ∗ ⊗R f
e
F ∗ ⊗R G −−−−−−→ F ∗ ⊗R F −−→ R .
If f factors through some I-torsion complex, then so does f ′ .
The morphism f ′ is fiberwise zero if and only if so is f .
Proof. For the first assertion, note that if f factors through an I-torsion complex
X, then F ∗ ⊗R f factors through F ∗ ⊗R X, and the latter is I-torsion.
For the second assertion, let k be field and R → k be a homomorphism of rings.
Let (−) and (−)∨ stand for k ⊗R (−) and Homk (−, k), respectively. The goal is to
prove that f = 0 is equivalent to f ′ = 0.
Since F is perfect, there are canonical isomorphisms
∼
∨
=
F ∗ ⊗ k −−→ HomR (F, k) ∼
= Homk (k ⊗R F, k) = (F ) .
Given this, it follows that f ′ can be realized as the composition of morphisms
∨
(F ) ⊗k f
∨
∨
e
(F ) ⊗k G −−−−−−→ (F ) ⊗k F −−→ k .
If F is zero, then f = 0 and f ′ = 0 hold. When F is nonzero, it is easy to verify
6 0 is equivalent to f ′ 6= 0, as desired.
that f =
Proof of Theorem 3.3. Given morphisms of R-complexes G → X → F such that F
and G are perfect and X is I-torsion for an ideal I with
levelR HomR (G, F ) ≤ height I ,
we need to prove that f is fiberwise zero. This implies the tensor nilpotence of f ,
as recalled in 3.1.
By Lemma 3.8, the morphism f ′ : F ∗ ⊗R G → R factors through an I-torsion
complex, and if f ′ is fiberwise zero, so if f . The isomorphisms of R-complexes
(F ∗ ⊗R G)∗ ∼
= G∗ ⊗R F ∼
= HomR (G, F )
and Lemma 2.4 yield levelR (F ∗ ⊗R G) = levelR HomR (G, F ). Thus, replacing f
by f ′ , it suffices to prove that if f : G → R is a morphism that factors through an
I-torsion complex and satisfies levelR G ≤ height I, then f is fiberwise zero.
Fix p in Spec R. When p 6⊇ I we have Xp = 0, by 3.5(2). For p ⊇ I we have
levelRp Gp ≤ levelR G ≤ height I ≤ height Ip ,
where the first inequality follows directly from the definitions; see [6, Proposition
3.7]. It is easy to verify that Xp is Ip -torsion. Thus, localizing at p, we may further
assume (R, m, k) is a local ring, and we have to prove that H(k ⊗R f ) = 0 holds.
Let C be a big Cohen-Macaulay R-module. It satisfies mC 6= C, so the canonical
γ
ε
f
γ
map π : R → k factors as R −
→C −
→ k. The composition G −
→R−
→ C is zero in
TENSOR NILPOTENT MORPHISMS
9
D(R), by Proposition 3.7. We get πf = εγf = 0, whence H(k ⊗R π) H(k ⊗R f ) = 0.
Since H(k ⊗R π) is bijective, this implies H(k ⊗R f ) = 0, as desired.
The following consequence of Theorem 3.3 is often helpful.
Corollary 3.9. Let (R, m, k) be a local ring, F a perfect R-complex, and G an
R-complex of finitely generated free modules.
If a morphism of R-complexes f : G → F satisfies the conditions
(1) f factors through some m-torsion complex,
(2) sup F ♮ − inf G♮ ≤ dim R − 1, and
then H(k ⊗R f ) = 0.
Proof. An m-torsion complex X satisfies k(p) ⊗LR X = 0 for any p in Spec R \ {m}.
Thus a morphism, g, of R-complexes that factors through X is fiberwise zero if and
only if k ⊗LR g = 0. This remark will be used in what follows.
Condition (2) implies Gn = 0 for n ≪ 0. Let f ′ denote the composition
F ∗ ⊗R f
e
F ∗ ⊗R G −−−−−−→ F ∗ ⊗R F −−→ R ,
where e is the evaluation map. Since inf (F ∗ ⊗R G)♮ = − sup F ♮ +inf G♮ , Lemma 3.8
shows that it suffices to prove the corollary for morphisms f : G → R.
As f factors through some m-torsion complex, so does the composite morphism
f
G60 ⊆ G −−→ R
It is easy to check that if the induced map H(k ⊗R G60 ) → H(k ⊗R R) = k is zero,
then so is H(k ⊗R f ). Thus we may assume Gn = 0 for n 6∈ [−d + 1, 0], where
d = dim R. This implies levelR G ≤ d, so Theorem 3.3 yields the desideratum.
For some applications the next statement, with weaker hypothesis but also
weaker conclusion, suffices. The example in Remark 3.4 shows that the result
cannot be strengthened to conclude that f is fiberwise zero.
Theorem 3.10. Let R be a local ring and f : G → F a morphism of R-complexes.
If there exists an ideal I of R such that
(1) f factors through an I-torsion complex, and
(2) levelR F ≤ height I,
then H(C ⊗LR f ) = 0 for every big Cohen-Macaulay module C.
Proof. Set c := height I and let t : RΓI C → C be the canonical morphism. It
follows from (3.5.1) that C ⊗LR f also factors through an I-torsion R-complex. Then
the quasi-isomorphism (3.5.2) and the universal property of the derived I-torsion
functor, see 3.5, implies that C ⊗LR f factors as a composition of the morphisms:
t⊗L F
R
C ⊗LR G −→ (RΓI C) ⊗LR F −−−−
−→ C ⊗LR F
By Proposition 3.7(1) the morphism t is a composition of c ghosts. Thus condition
(2) and Lemma 2.6 imply t ⊗LR F is a ghost, and hence so is C ⊗LR f .
10
L. L. AVRAMOV, S. B. IYENGAR, AND A. NEEMAN
4. Applications to local algebra
In this section we record applications the Tensor Nilpotence Theorem to local
algebra. To that end it is expedient to reformulate it as the Morphic Intersection
Theorem from the Introduction, restated below.
Theorem 4.1. Let R be a commutative noetherian ring and f : G → F a morphism
of perfect R-complexes.
If f is not fiberwise zero and factors through a complex with I-torsion homology
for an ideal I of R, then there are inequalities:
span F + span G − 1 ≥ levelR HomR (G, F ) ≥ height I + 1 .
Proof. The inequality on the left comes from Lemma 2.4 and (2.3.1). The one on
the right is the contrapositive of Theorem 3.3.
Here is one consequence.
Theorem 4.2. Let R be a local ring and F a complex of finite free R-modules:
F :=
0 → Fd → Fd−1 → · · · → F0 → 0
For each ideal I such that I · Hi (F ) = 0 for i ≥ 1 and I · z = 0 for some element z
in H0 (F ) \ m H0 (F ), where m is the maximal ideal of R, one has
d + 1 ≥ span F ≥ levelR F ≥ height I + 1 .
Proof. Indeed, the first two inequalities are clear from definitions. As to third
one, pick ze ∈ F0 representing z in H0 (F ) and consider the morphism of complexes
f : R → F given by r 7→ re
z . Since z is not in m H0 (F ), one has
H0 (k ⊗R f ) = k ⊗R H0 (f ) 6= 0
for k = R/m. In particular, k⊗R f is nonzero. On the other hand, f factors through
the inclusion X ⊆ F , where X is the subcomplex defined by
(
Fi
i≥1
Xi =
Re
z + d(F1 ) i = 0
By construction, we have Hi (X) = Hi (F ) for i ≥ 1 and I H0 (X) = 0, so H(X) is
I-torsion. The desired inequality now follows from Theorem 4.1 applied to f .
The preceding result is a stronger form of the Improved New Intersection Theorem1 of Evans and Griffith [12]; see also [19, §2]. First, the latter is in terms of
spans of perfect complexes whereas the one above is in terms of levels with respect
to R; second, the hypothesis on the homology of F is weaker. Theorem 4.2 also
subsumes prior extensions of the New Intersection Theorem to statements involving
levels, namely [6, Theorem 5.1], where it was assumed that I · H0 (F ) = 0 holds,
and [1, Theorem 3.1] which requires Hi (F ) to have finite length for i ≥ 1.
In the influential paper [18], Hochster identified certain canonical elements in the
local cohomology of local rings, conjectured that they are never zero, and proved
that statement in the equal characteristic case. He also gave several reformulations
that do not involve local cohomology. Detailed discussions of the relations between
these statements and the histories of their proofs are presented in [28] and [20].
1This and the other statements in this section were conjectures prior to the appearance of [2].
TENSOR NILPOTENT MORPHISMS
11
Some of those statements concern properties of morphisms from the Koszul complex on some system of parameters to resolutions of various R-modules. This makes
them particularly amenable to approaches from the Morphic Intersection Theorem.
In the rest of this section we uncover direct paths to various forms of the Canonical
Element Theorem and related results.
We first prove a version of [18, 2.3]. The conclusion there is that fd is not zero,
but the remarks in [18, 2.2(6)] show that it is equivalent to the following statement.
Theorem 4.3. Let (R, m, k) be a local ring, x a system of parameters for R, and
K the Koszul complex on x, and F a complex of free R-modules.
If f : K → F is a morphism of R-complexes with H0 (k ⊗R f ) 6= 0, then one has
Hd (S ⊗R f ) 6= 0
for
S = R/(x)
and
d = dim R .
Proof. Recall from 2.7 that ∂(K) lies in (x)K, that K1 has a basis x
e1 , . . . , x
ed and K ♮
is the exterior algebra on K1 . Thus Kd is a free R-module with basis x = x
e1 · · · x
ed
and Hd (S ⊗R K) = S(1 ⊗ x), so we need to prove f (Kd ) 6⊆ (x)Fd + ∂(Fd+1 ).
Arguing by contradiction, we suppose the contrary. This means
f (x) = x1 y1 + · · · + xd yd + ∂ F (y)
with y1 , . . . , yd ∈ Fd and y ∈ Fd+1 . For i = 1, . . . , d set x∗i := x
e1 · · · x
ei−1 x
ei+1 · · · x
ed ;
thus {x∗1 , . . . , x∗d } is a basis of the R-module Kd−1 . Define R-linear maps
hd−1 : Kd−1 → Fd
by
hd : Kd → Fd+1
by
hd−1 (x∗i ) = (−1)i−1 yi
for i = 1, . . . , d .
hd (x) = y .
Extend them to a degree one map h : K → F with hi = 0 for i 6= d − 1, d. The map
g := f − ∂ F h − h∂ K : K → F
is easily seen to be a morphism of complexes that is homotopic to f and satisfies
gd = 0. This last condition implies that g factors as a composition of morphisms
g′
K −−→ F<d ⊆ F .
The complex K is m-torsion; see 2.7. Thus Corollary 3.9, applied to g ′ , yields
H(k ⊗R g ′ ) = 0. This gives the second equality below:
H0 (k ⊗R f ) = H0 (k ⊗R g) = H0 (k ⊗R g ′ ) = 0 .
The first one holds because f and g are homotopic, and the last one because g0 = g0′ .
The result of the last computation contradicts the hypotheses on H0 (k ⊗R f ).
A first specialization is the Canonical Element Theorem.
Corollary 4.4. Let I be an ideal in R containing a system of parameters x1 , . . . , xd .
With K the Koszul complex on x and F a free resolution of R/I, any morphism
f : K → F of R-complexes lifting the surjection R/(x) → R/I has fd (K) 6= 0.
As usual, when A is a matrix, Id (A) denotes the ideal of its minors of size d.
Corollary 4.5. Let R be a local ring, x a system of parameters for R, and y a
finite subset of R with (y) ⊇ (x).
If A is a matrix such that Ay = x, then Id (A) 6⊆ (x) for d = dim R.
12
L. L. AVRAMOV, S. B. IYENGAR, AND A. NEEMAN
Proof. Let K and F be the Koszul complexes on x and y, respectively. The matrix
A defines a unique morphism of DG R-algebras f : K → F . Evidently, H0 (k ⊗R f )
is the identity map on k, and hence is not zero. Since fd can be represented by a
column matrix whose entries are the various d×d minors of A, the desired statement
is a direct consequence of Theorem 4.3.
A special case of the preceding result yields the Monomial Theorem.
Corollary 4.6. When y1 , . . . , yd is a system of parameters for local ring, one has
(y1 · · · yd )n 6∈ (y1n+1 , . . . , ydn+1 )
Proof. Apply Corollary 4.5 to the inclusion
n
y1 0
0 y2n
A:= .
..
..
.
0
0
for every integer
n ≥ 1.
(y1n+1 , . . . , ydn+1 ) ⊆ (y1 , . . . , yd ) and
··· 0
··· 0
.. .
..
.
.
· · · ydn
We also deduce from Theorem 4.3 another form of the Canonical Element Theorem. Roberts [27] proposed the statement and proved that it is equivalent to the
Canonical Element Theorem; a different proof appears in Huneke and Koh [22].
Recall that for any pair (S, T ) of R-algebras the graded module TorR (S, T ) carries a natural structure of graded-commutative R-algebra, given by the ⋔-product
of Cartan and Eilenberg [10, Chapter XI.4].
Lemma 4.7. Let R be a commutative ring, I an ideal of R, and set S := R/I. Let
G → S be some R-free resolution, K be the Koszul complex on some generating set
of I, and g : K → G a morphisms of R-complexes lifting the identity of S.
For every surjective homomorphism ψ : S → T of of commutative rings there is
a commutative diagram of strictly graded-commutative S-algebras
S ⊗R K
V
S H1 (S ⊗R K)
α
V
ψ
// // V TorR
1 (S, S)
S
µS
TorR
1 (S,ψ)
V
T
TorR
1 (S, T )
// TorR (S, S)
TorR (S,ψ)
µT
// TorR (S, T )
where α1 = H1 (S⊗R g), the map α is defined by the functoriality of exterior algebras,
and the maps µ? are defined by the universal property of exterior algebras.
V
Proof. The equality follows from ∂ K (K) ⊆ IK and K ♮ = R K1 . The resolution
G can be chosen to have G61 = K61 ; this makes α1 surjective, and the surjectivity of α follows. The map TorR
1 (S, ψ) is surjective because it can be identified
with the natural map I/I 2 → I/IJ, where J = Ker(R → T ); the surjectivity of
V
R
ψ Tor1 (S, ψ) follows. The square commutes by the naturality of ⋔-products.
Theorem 4.8. Let (R, m, k) be a local ring, I a parameter ideal, and S := R/I.
For each surjective homomorphism S → T the morphism of graded T -algebras
V
R
µT : T TorR
1 (S, T ) −→ Tor (S, T )
has the property that µT ⊗T k is injective. In particular, µk is injective.
TENSOR NILPOTENT MORPHISMS
13
Proof. The functoriality of the construction of µ implies that the canonical surjection π : T → k induces a commutative diagram of graded-commutative algebras
V
T
V
π
TorR
1 (S, T )
µT
// TorR (S, T )
TorR
1 (S,π)
V
k
TorR (S,π)
TorR
1 (S, k)
µk
// TorR (S, k)
It is easy to verify that π induces a bijective map
R
R
TorR
1 (S, π) ⊗T k : Tor1 (S, T ) ⊗T k → Tor1 (S, k) ,
so (∧π TorR (S, π)) ⊗T k is an isomorphism. Thus it suffices to show µk is injective.
≃
Let K be the Koszul complex on a minimal generating set of I. Let G −
→ S and
≃
F −
→ k be R-free resolutions of S and k, respectively. Lift the identity map of S
and the canonical surjection ψ : S → k to morphisms g : K → G and h : G → F ,
respectively. We have µS α = H(S ⊗R g) and TorR (S, ψ) = H(S ⊗R h). This implies
the second equality in the string
R
S
µkd TorR
d (S, π)αd = Tord (S, ψ)µd αd = Hd (S ⊗R hg) 6= 0 .
The first equality comes from Lemma 4.7, with T = k, and the non-equality from
Theorem 4.3, with f = hg. In particular, we get µkd 6= 0. We have an isomorphism
∼ V k d of graded k-algebras, so µk is injective by the next remark.
TorR
1 (S, k) =
k
V
Remark 4.9. If Q is a field, d is a non-negative integer, and λ : Q Qd → B is a
homomorphism of graded Q-algebras with λd 6= 0, then λ is injective.
V
Vd
Indeed, the graded subspace Q Qd of exterior algebra Q Qd is contained in
every every non-zero ideal and has rank one, so λd 6= 0 implies Ker(λ) = 0.
5. Ranks in finite free complexes
This section is concerned with DG modules over Koszul complexes on sequences
of parameters. Under the additional assumptions that R is a domain and F is a
resolution of some R-module, the theorem below was proved in [3, 6.4.1], and earlier
for cyclic modules in [9, 1.4]; background is reviewed after the proof. The Canonical
Element Theorem, in the form of Theorem 4.3 above, is used in the proof.
Theorem 5.1. Let (R, m, k) be a local ring, set d = dim R, and let F be a complex
of finite free R-modules with H0 (F ) 6= 0 and Fi = 0 for i < 0.
If F admits a structure of DG module over the Koszul complex on some system
of parameters of R, then there is an inequality
d
(5.1.1)
rankR (Fn ) ≥
for each n ∈ Z .
n
Proof. The desired inequality is vacuous when d = 0, so suppose d ≥ 1. Let x be
the said system of parameters of R and K the Koszul complex on x. Since F is a
DG K-module, each Hi (F ) is an R/(x)-module, and hence of finite length.
First we reduce to the case when R is a domain. To that end, let p be a prime
ideal of R such that dim(R/p) = d. Evidently, the image of x in R is a system of
14
L. L. AVRAMOV, S. B. IYENGAR, AND A. NEEMAN
parameters for R/p. By base change, (R/p) ⊗R F is a DG module over (R/p) ⊗R K,
the Koszul complex on x with coefficients in R/p, with
∼ R/p ⊗ H0 (F ) 6= 0 .
H0 ((R/p) ⊗R F ) =
♮
Moreover, the rank of F ♮ as an R-module equals the rank of (R/p) ⊗R F as an
R/p-module. Thus, after base change to R/p we can assume R is a domain.
Choose a cycle z ∈ F0 that maps to a minimal generator of the R-module H0 (F ).
Since F is a DG K-module, this yields a morphism of DG K-modules
f: K →F
with
f (a) = az .
This is, in particular, a morphism of complexes. Since k ⊗R H0 (F ) 6= 0, by the
choice of z, Theorem 4.3 applies, and yields that f (Kd ) 6= 0. Since R is a domain,
this implies f (Q ⊗R Kd ) is non-zero, where Q is the field of fractions of R.
♮
Set Λ := (Q ⊗R K) and consider the homomorphism of graded Λ-modules
V
λ := Q ⊗R f ♮ : Λ → Q ⊗R F ♮ .
Qd , Remark 4.9 gives the inequality in the display
d
rankR (Fn ) = rankQ (Q ⊗R Fn ) ≥ rankQ (Λn ) =
.
n
As Λ is isomorphic to
Q
Both equalities are clear.
The inequalities (5.1.1) are related to a major topic of research in commutative
algebra. We discuss it for a local ring (R, m, k) and a bounded R-complex F of
finite free modules with F<0 = 0, homology of finite length, and H0 (F ) 6= 0.
5.2. Ranks of syzygies. The celebrated and still open Rank Conjecture of Buchsbaum and Eisenbud [9, Proposition 1.4], and Horrocks [14, Problem 24] predicts
that (5.1.1) holds whenever F is a resolution of some module ofPfinite length.
That conjecture is known for d ≤ 4. Its validity would imply n rankR Fn ≥ 2d .
For d = 5 and equicharacteristic R, this was proved in [4, Proposition 1] by using
Evans and Griffth’s Syzygy Theorem [12]; in view of [2], it holds for all R.
M. Walker [30] used methods from K-theory to prove that
P In a breakthrough,
d
rank
F
≥
2
holds
when R contains 12 and is complete intersection (in particR
n
n
ular, regular), and when R is an algebra over some field of positive characteristic.
5.3. Obstructions for DG module structures. Theorem 5.1 provides a series
of obstruction for the existence of any DG module structure on F . In particular, it
implies that if rankR F < 2d holds with d = dim R, then F supports no DG module
structure over K(x) for any system of parameters x. Complexes satisfying the
restriction on ranks were recently constructed in [23, 4.1]. These complexes have
nonzero homology in degrees 0 and 1, so they are not resolutions of R-modules.
5.4. DG module structures on resolutions. Let F be a minimal resolution of
an R-module M of nonzero finite length and x a parameter set for R with xM = 0.
When F admits a DG module structure over K(x) the Rank Conjecture holds,
by Theorem 5.1. It was conjectured in [9, 1.2′ ] that such a structure exists for all
F and x. An obstruction to its existence was found in [3, 1.2], and examples when
that obstruction is not zero were produced in [3, 2.4.2]. On the other hand, by [3,
1.8] the obstruction vanishes when x lies in m annR (M ).
TENSOR NILPOTENT MORPHISMS
15
It is not known if F supports some DG K(x)-module structure for special choices
of x; in particular, for high powers of systems of parameters contained in annR (M ).
References
[1] H. Altmann, E Grifo, J. Montaño, W. Sanders, T. Vu, Lower bounds on levels of perfect
complexes, J. Algebra 491 (2017), 343–356.
[2] Y. André, La conjecture du facteur direct, preprint, arXiv:1609.00345.
[3] L. L. Avramov, Obstructions to the existence of multiplicative structures on minimal free
resolutions, Amer. J. Math. 103 (1987), 1–31.
[4] L. L. Avramov, R.-O. Buchweitz, Lower bounds on Betti numbers, Compositio Math. 86
(1993), 147–158.
[5] L. L. Avramov, R.-O. Buchweitz, S. Iyengar, Class and rank of differential modules, Invent.
Math. 169 (2007), 1–35.
[6] L. L. Avramov, R.-O. Buchweitz, S. B. Iyengar, C. Miller, Homology of perfect complexes,
Adv. Math. 223 (2010), 1731–1781; Corrigendum, Adv. Math. 225 (2010), 3576–3578.
[7] L. L. Avramov, S. B. Iyengar, A. Neeman, work in progress.
[8] W. Bruns, J. Herzog, Cohen-Macaulay Rings (Revised ed.), Cambridge Stud. Adv. Math.
39, Cambridge Univ. Press, Cambridge, 1998.
[9] D. Buchsbaum, D. Eisenbud, Algebra structures for finite free resolutions, and some structure
theorems for ideals of codimension 3, Amer. J. Math. 99 (1977), 447–485.
[10] H. Cartan, S. Eilenberg, Homological algebra, Princeton University Press, Princeton, NJ,
1956.
[11] D. Christensen, Ideals in triangulated categories: Phantoms, ghosts and skeleta, Adv. Math.
136 (1998), 284–339.
[12] G. E. Evans, P. Griffith, The syzygy problem, Annals of Math. (2) 114 (1981), 323–333.
[13] S. I. Gelfand, Y. I. Manin, Methods of Homological Algebra, Annals of Math. (2) 114 (1981),
323–333.
[14] R. Hartshorne, Algebraic vector bundles on projective spaces: a problem list, Topology 18
(1979), 117–128.
[15] R. Heitmann, L. Ma, Big Cohen-Macaulay algebras and the vanishing conjecture for maps
of Tor in mixed characteristic, preprint, arXiv:1703.08281.
[16] M. Hochster, Cohen-Macaulay modules, Conference on Commutative Algebra (Univ. Kansas,
Lawrence, Kan., 1972), Lecture Notes in Math., 311, Springer, Berlin, 1973; pp. 120–152.
[17] M. Hochster, Deep local rings, Aarhus University preprint series, December 1973.
[18] M. Hochster, Topics in the Homological Theory of Modules over Commutative Rings, Conf.
Board Math. Sci. 24, Amer. Math. Soc., Providence, RI, 1975.
[19] M. Hochster, Canonical elements in local cohomology modules and the direct summand conjecture, J. Algebra 84 (1983), 503–553.
[20] M. Hochster, Homological conjectures and lim Cohen-Macaulay sequences, Homological and
Computational Methods in Commutative Algebra (Cortona, 2016), Springer INdAM Series,
Springer, Berlin, to appear.
[21] M. Hopkins, Global methods in homotopy theory. Homotopy Theory (Durham, 1985), 73–96,
London Math. Soc. Lecture Note Ser. 117, Cambridge Univ. Press, Cambridge, 1987.
[22] C. Huneke, J. Koh, Some dimension 3 cases of the Canonical Element Conjecture, Proc.
Amer. Math. Soc. 98 (1986), 394–398.
[23] S. B. Iyengar, M. E. Walker, Examples of finite free complexes of small rank and homology,
preprint, arXiv:1706.02156.
[24] J. Lipman, Lectures on local cohomology and duality, Local Cohomology and its Applications
(Guanajuato, 1999), Lecture Notes Pure Appl. Math. 226, Marcel Dekker, New York, 2002;
pp. 39–89.
[25] A. Neeman, The chromatic tower for D(R), Topology 31 (1992), 519–532.
[26] P. C. Roberts, Homological Invariants of Modules over Commutative Rings, Sém. Math. Sup.
72, Presses Univ. Montréal, Montréal, 1980.
[27] P. C. Roberts, The equivalence of two forms of the Canonical Element Conjecture, undated
manuscript.
[28] P. C. Roberts, The homological conjectures, Progress in Commutative Algebra 1, de Gruyter,
Berlin, 2012; pp. 199–230.
16
L. L. AVRAMOV, S. B. IYENGAR, AND A. NEEMAN
[29] R. Rouquier, Dimensions of triangulated categories, J. K-Theory 1 (2008), 193–256, 257–258.
[30] M. E. Walker, Total Betti numbers of modules of finite projective dimension, Annals of Math.,
to appear; preprint, arXiv:1702.02560.
[31] C. Weibel, An Introduction to Homological Algebra, Cambridge Stud. Adv. Math. 38, Cambridge Univ. Press, Cambridge, 1994.
Department of Mathematics, University of Nebraska, Lincoln, NE 68588, U.S.A.
E-mail address: [email protected]
Department of Mathematics, University of Utah, Salt Lake City, UT 84112, U.S.A.
E-mail address: [email protected]
Centre for Mathematics and its Applications, Mathematical Sciences Institute Australian National University, Canberra, ACT 0200, Australia.
E-mail address: [email protected]
| 0 |
arXiv:1307.7970v4 [] 3 Mar 2015
1
Short Term Memory Capacity in Networks via
the Restricted Isometry Property
Adam S. Charles, Han Lun Yap, Christopher J. Rozell
School of Electrical and Computer Engineering, Georgia Institute of Technology, Atlanta, GA
Abstract
Cortical networks are hypothesized to rely on transient network activity to support short
term memory (STM). In this paper we study the capacity of randomly connected recurrent linear networks for performing STM when the input signals are approximately
sparse in some basis. We leverage results from compressed sensing to provide rigorous non-asymptotic recovery guarantees, quantifying the impact of the input sparsity
level, the input sparsity basis, and the network characteristics on the system capacity. Our analysis demonstrates that network memory capacities can scale superlinearly
with the number of nodes, and in some situations can achieve STM capacities that are
much larger than the network size. We provide perfect recovery guarantees for finite
sequences and recovery bounds for infinite sequences. The latter analysis predicts that
network STM systems may have an optimal recovery length that balances errors due to
omission and recall mistakes. Furthermore, we show that the conditions yielding optimal STM capacity can be embodied in several network topologies, including networks
with sparse or dense connectivities.
1
Introduction
————————Short term memory (STM) is critical for neural systems to understand non-trivial
environments and perform complex tasks. While individual neurons could potentially
account for very long or very short stimulus memory (e.g., through changing synaptic
weights or membrane dynamics, respectively), useful STM on the order of seconds is
conjectured to be due to transient network activity. Specifically, stimulus perturbations
can cause activity in a recurrent network long after the input has been removed, and
recent research hypothesizes that cortical networks may rely on transient activity to
support STM (Jaeger & Haas, 2004; Maass et al., 2002; Buonomano & Maass, 2009).
Understanding the role of memory in neural systems requires determining the fundamental limits of STM capacity in a network and characterizing the effects on that
capacity of the network size, topology, and input statistics. Various approaches to quantifying the STM capacity of linear (Jaeger, 2001; White et al., 2004; Ganguli et al.,
2008; Hermans & Schrauwen, 2010) and nonlinear (Wallace et al., 2013) recurrent networks have been used, often assuming Gaussian input statistics (Jaeger, 2001; White
et al., 2004; Hermans & Schrauwen, 2010; Wallace et al., 2013). These analyses show
that even under optimal conditions, the STM capacity (i.e., the length of the stimulus
able to be recovered) scales only linearly with the number of nodes in the network.
While conventional wisdom holds that signal structure could be exploited to achieve
more favorable capacities, this idea has generally not been the focus of significant rigorous study.
Recent work in computational neuroscience and signal processing has shown that
many signals of interest have statistics that are strongly non-Gaussian, with low-dimensional
structure that can be exploited for many tasks. In particular, sparsity-based signal models (i.e., representing a signal using relatively few non-zero coefficients in a basis) have
recently been shown to be especially powerful. In the computational neuroscience lit-
2
erature, sparse encodings increase the capacity of associative memory models (Baum
et al., 1988) and are sufficient neural coding models to account for several properties of
neurons in primary visual cortex (i.e., response preferences (Olshausen & Field, 1996)
and nonlinear modulations (Zhu & Rozell, 2013)). In the signal processing literature,
the recent work in compressed sensing (CS) (Candes et al., 2006; Ganguli & Sompolinsky, 2012) has established strong guarantees on sparse signal recovery from highly
undersampled measurement systems.
Ganguli & Sompolinsky (2010) have previously conjectured that the ideas of CS
can be used to achieve STM capacities that exceed the number of network nodes in an
orthogonal recurrent network when the inputs are sparse in the canonical basis (i.e. the
input sequences have temporally localized activity). While these results are compelling
and provide a great deal of intuition, the theoretical support for this approach remains
an open question as the results in (Ganguli & Sompolinsky, 2010) use an asymptotic
analysis on an approximation of the network dynamics to support empirical findings.
In this paper we establish a theoretical basis for CS approaches in network STM by
providing rigorous non-asymptotic recovery error bounds for an exact model of the
network dynamics and input sequences that are sparse in any general basis (e.g., sinusoids, wavelets, etc.). Our analysis shows conclusively that the STM capacity can
scale superlinearly with the number of network nodes, and quantifies the impact of the
input sparsity level, the input sparsity basis, and the network characteristics on the system capacity. We provide both perfect recovery guarantees for finite inputs, as well as
bounds on the recovery performance when the network has an arbitrarily long input sequence. The latter analysis predicts that network STM systems based on CS may have
an optimal recovery length that balances errors due to omission and recall mistakes.
Furthermore, we show that the structural conditions yielding optimal STM capacity in
our analysis can be embodied in many different network topologies, including networks
with both sparse and dense connectivities.
3
2
Background
2.1
Short Term Memory in Recurrent Networks
Since understanding the STM capacity of networked systems would lead to a better understanding of how such systems perform complex tasks, STM capacity has
been studied in several network architectures, including discrete-time networks (Jaeger,
2001; White et al., 2004; Ganguli et al., 2008), continuous-time networks (Hermans &
Schrauwen, 2010; Büsing et al., 2010), and spiking networks (Maass et al., 2002; Mayor
& Gerstner, 2005; Legenstein & Maass, 2007; Wallace et al., 2013). While many different analysis methods have been used, each tries to quantify the amount of information
present in the network states about the past inputs. For example, in one approach taken
to study echo state networks (ESNs) (White et al., 2004; Ganguli et al., 2008; Hermans
& Schrauwen, 2010), this information preservation is quantified through the correlation
between the past input and the current state. When the correlation is too low, that input
is said to no longer be represented in the state. The results of these analyses conclude
that for Gaussian input statistics, the number of previous inputs that are significantly
correlated with the current network state is bounded by a linear function of the network
size.
In another line of analysis, researchers have sought to directly quantify the degree to
which different inputs lead to unique network states (Jaeger, 2001; Maass et al., 2002;
Legenstein & Maass, 2007; Strauss et al., 2012). In essence, the main idea of this work
is that a one-to-one relationship between input sequences and the network states should
allow the system to perform an inverse computation to recover the original input. A
number of specific properties have been proposed to describe the uniqueness of the
network state with respect to the input. In spiking liquid state machines (LSMs), in
work by Maass et al. (2002), a separability property is suggested that guarantees distinct network states for distinct inputs and follow up work (Legenstein & Maass, 2007)
relates the separability property to practical computational tasks through the Vapnik4
Chervonenkis (VC) dimension (Vapnik & Chervonenkis, 1971). More recent work analyzing similar networks using separation properties (Wallace et al., 2013; Büsing et al.,
2010) gives an upper bound for the STM capacity that scales like the logarithm of the
number of network nodes.
In discrete ESNs, the echo-state property (ESP) ensures that every network state at
a given time is uniquely defined by some left-infinite sequence of inputs (Jaeger, 2001).
The necessary condition for the ESP is that the maximum eigenvalue magnitude of the
system is less than unity (an eigenvalue with a magnitude of one would correspond to
a linear system at the edge of instability). While the ESP ensures uniqueness, it does
not ensure robustness and output computations can be sensitive to small perturbations
(i.e., noisy inputs). A slightly more robust property looks at the conditioning of the
matrix describing how the system acts on an input sequence (Strauss et al., 2012). The
condition number describes not only a one-to-one correspondence, but also quantifies
how small perturbations in the input affect the output. While work by Strauss et al.
(2012) is closest in spirit to the analysis in this paper, it ultimately concludes that the
STM capacity still scales linearly with the network size.
Determining whether or not a system abides by one of the separability properties
depends heavily on the network’s construction. In some cases, different architectures
can yield very different results. For example, in the case of randomly connected spiking networks, low connectivity (each neuron is connected to a small number of other
neurons) can lead to large STM capacities (Legenstein & Maass, 2007; Büsing et al.,
2010), whereas high connectivity leads to chaotic dynamics and smaller STM capacities (Wallace et al., 2013). In contrast, linear ESNs with high connectivities (appropriately normalized) (Büsing et al., 2010) can have relatively large STM capacities (on the
order of the number of nodes in the network) (Ganguli et al., 2008; Strauss et al., 2012).
Much of this work centers around using systems with orthogonal connectivity matrices,
which leads to a topology that robustly preserves information. Interestingly, such systems can be constructed to have arbitrary connectivity while preserving the information
5
2
0
−2
0
2
10
20
30
40
50
10
20
30
40
50
0
−2
0
2
2.5
2
0
−2
0
2
1.5
10
20
30
40
1
50
0.5
0
0
−2
0
2
−0.5
10
20
30
40
50
−1
−1.5
0
−2
0
−2
3
10
20
30
40
50
2
1
0
−1
−2
−2
0
2
4
Figure 1: The current state of the network encodes information about the stimulus history. Different stimuli (examples shown to the left), when perturbing the same system (in this figure a three neuron orthogonal network) result in distinct states x =
[x1 , x2 , x3 ]T at the current time (n = 50). The current state is therefore informative
for distinguishing between the input sequences.
preserving properties (Strauss et al., 2012).
While a variety of networks have been analyzed using the properties described
above, these analyses ignore any structure of the inputs sequences that could be used to
improve the analysis (Jaeger, 2001; Mayor & Gerstner, 2005). Conventional wisdom
has suggested that STM capacities could be increased by exploiting structure in the inputs, but formal analysis has rarely addressed this case. For example, work by Ganguli
& Sompolinsky (2010) builds significant intuition for the role of structured inputs in
increasing STM capacity, specifically proposing to use the tools of CS to study the case
when the input signals are temporally sparse. However, the analysis by Ganguli & Sompolinsky (2010) is asymptotic and focuses on an annealed (i.e., approximate) version of
the system that neglects correlations between the network states over time. The present
paper can be viewed as a generalization of this work to provide formal guarantees for
STM capacity of the exact system dynamics, extensions to arbitrary orthogonal sparsity
bases, and recovery bounds when the input exceeds the capacity of the system (i.e., the
input is arbitrarily long).
6
2.2
Compressed Sensing
In the CS paradigm, a signal s ∈ RN is sparse in a basis Ψ so that it can be approximately written as s ≈ Ψa, where most of the entries in a ∈ RN are zero. This signal
is observed through measurements x ∈ RM taken via a compressive (e.g., M N )
linear operator:
x = As + .
(1)
Sparse coefficients representing the signal are recovered by solving the convex optimization
b = arg min ||a||1
a
a
such that
||x − AΨa||2 ≤ ||||2 ,
(2)
where ||||2 is the magnitude of the measurement noise.
There is substantial evidence from the signal processing and computational neuroscience communities that many natural signals are sparse in an appropriate basis (Olshausen & Field, 1996; Elad et al., 2008). The recovery problem above requires that the
system knows the sparsity basis Ψ to perform the recovery, which neural systems may
not know a priori. We note that recent work has shown that appropriate sparsity bases
can be learned from example data (Olshausen & Field, 1996), even in the case where the
system only observes the inputs through compressed measurements (Isely et al., 2011).
While the analysis doesn’t depend on the exact method for solving the optimization in
equation (2), we also note that this type of optimization can be solved in biologically
plausible network architectures (e.g., (Rozell et al., 2010; Rhen & Sommer, 2007; Hu
et al., 2012; Balavoine et al., 2012, 2013; Shapero et al., 2011)).
The most common sufficient condition in CS for stable recovery is known as the
Restricted Isometry Property (RIP) (Candes & Tao, 2006). Formally, we say that RIP(2K, δ) holds for A in the basis Ψ if for any vector s that is 2K-sparse in Ψ we have
7
that
C (1 − δ) ≤ ||As||22 / ||s||22 ≤ C (1 + δ)
(3)
holds for constants C > 0 and 0 < δ < 1. Said another way, the RIP guarantees that all
pairs of vectors that are K-sparse in Ψ have their distances preserved after projecting
through the matrix A. This can be seen by observing that for a pair of K-sparse vectors,
their difference has at most 2K nonzeros. In this way, the RIP can be viewed as a type of
separation property for sparse signals that is similar in spirit to the separation properties
used in previous studies of network STM (Jaeger, 2001; White et al., 2004; Maass et al.,
2002; Hermans & Schrauwen, 2010).
When A satisfies the RIP-(2K, δ) in the basis Ψ with ‘reasonable’ δ (e.g. δ ≤
√
2 − 1) and the signal estimate is b
s = Ψb
a, canonical results establish the following
bound on signal recovery error:
||s − b
s||2 ≤ α ||||2 + β
ΨT (s − sK )
√
K
1
,
(4)
where α and β are constants and sK is the best K-term approximation to s in the basis
Ψ (i.e., using the K largest coefficients in a) (Candes, 2006). Equation (4) shows that
signal recovery error is determined by the magnitude of the measurement noise and
sparsity of the signal. In the case that the signal is exactly K-sparse and there is no
measurement noise, this bound guarantees perfect signal recovery.
While the guarantees above are deterministic and non-asymptotic, the canonical CS
results state that measurement matrices generated randomly from “nice” independent
distributions (e.g., Gaussian, Bernoulli) can satisfy RIP with high probability when
M = O(K log N ) (Rauhut, 2010). For example, random Gaussian measurement matrices (perhaps the most highly used construction in CS) satisfy the RIP condition for any
sparsity basis with probability 1−O(1/N ) when M ≥ Cδ −2 K log (N ). This extremely
8
favorable scaling law (i.e., linear in the sparsity level) for random Gaussian matrices is
in part due to the fact that Gaussian matrices have many degrees of freedom, resulting
in M statistically independent observations of the signal. In many practical examples,
there exists a high degree of structure in A that causes the measurements to be correlated. Structured measurement matrices with correlations between the measurements
have been recently studied due to their computational advantages. While these matrices
can still satisfy the RIP, they typically require more measurement to reconstruct a signal with the same fidelity and the performance may change depending on the sparsity
basis (i.e., they are no longer “universal” because they don’t perform equally well for
all sparsity bases). One example which arises often in the signal processing community
is the case of random circulant matrices (Krahmer et al., 2012), where the number of
measurements needed to assure that the RIP holds with high probability for temporally
sparse signals (i.e., Ψ is the identity) increases to M ≥ Cδ −2 K log4 (N ). Other structured systems analyzed in the literature include Toeplitz matrices (Haupt et al., 2010),
partial circulant matrices (Krahmer et al., 2012), block diagonal matrices (Eftekhari
et al., 2012; Park et al., 2011), subsampled unitary matrices (Bajwa et al., 2009), and
randomly subsampled Fourier matrices (Rudelson & Vershynin, 2008). These types
of results are used to demonstrate that signal recovery is possible with highly undersampled measurements, where the number of measurements scales linearly with the
“information level” of the signal (i.e., the number of non-zero coefficients) and only
logarithmically with the ambient dimension.
9
3
STM Capacity using the RIP
3.1
Network Dynamics as Compressed Sensing
We consider the same discrete-time ESN model used in previous studies (Jaeger, 2001;
Ganguli et al., 2008; Ganguli & Sompolinsky, 2010; White et al., 2004):
x[n] = f (W x[n − 1] + zs[n] + e[n]) ,
(5)
where x[n] ∈ RM is the network state at time n, W is the (M × M ) recurrent (feedback) connectivity matrix, s[n] ∈ R is the input sequence at time n, z is the (M × 1)
projection of the input into the network, e[n] is a potential network noise source, and
f : RM → RM is a possible pointwise nonlinearity. As in previous studies (Jaeger,
2001; White et al., 2004; Ganguli et al., 2008; Ganguli & Sompolinsky, 2010), this
paper will consider the STM capacity of a linear network (i.e., f (x) = x).
The recurrent dynamics of Equation (5) can be used to write the network state at
time N :
x[N ] = As + ,
(6)
where A is a M × N matrix, the k th column of A is W k−1 z, s = [s[N ], . . . , s[1]]T ,
the initial state of the system is x[0] = 0, and is the node activity not accounted for by
P
N −k
the input stimulus (e.g. the sum of network noise terms = N
e[k]). With
k=1 W
this network model, we assume that the input sequence s is K-sparse in an orthonormal
basis Ψ (i.e., there are only K nonzeros in a = ΨT s).
3.2
STM Capacity of Finite-Length Inputs
We first consider the STM capacity of a network with finite-length inputs, where a
length-N input signal drives a network and the current state of the M network nodes
10
at time N is used to recover the input history via Equation (2). If A derived from the
network dynamics satisfies the RIP for the sparsity basis Ψ, the bounds in Equation (4)
establish strong guarantees on recovering s from the current network states x[N ]. Given
the significant structure in A, it is not immediately clear that any network construction
can result in A satisfying the RIP. However, the structure in A is very regular and in
fact only depends on powers of W applied to z:
A = z | W z | W 2 z | . . . | W N −1 z .
Writing the eigendecomposition of the recurrent matrix W = U DU −1 , we re-write
the measurement matrix as
A = U z̃ | D z̃ | D 2 z̃ | . . . | D N −1 z̃ ,
where z̃ = U −1 z. Rearranging, we get
e d0 | d | d2 | . . . | dN −1 = U ZF
e
A = UZ
(7)
e =
where Fk,l = dl−1
is the k th eigenvalue of W raised to the (l − 1)th power and Z
k
diag (U −1 z).
While the RIP conditioning of A depends on all of the matrices in the decomposition of Equation 7, the conditioning of F is the most challenging because it is the only
matrix that is compressive (i.e., not square). Due to this difficulty, we start by specie that preserves the conditioning properties of F
fying a network structure for U and Z
(other network constructions will be discussed in Section 4). Specifically, as in (White
et al., 2004; Ganguli et al., 2008; Ganguli & Sompolinsky, 2010) we choose W to be
a random orthonormal matrix, assuring that the eigenvector matrix U has orthonormal
columns and preserves the conditioning properties of F . Likewise, we choose the feedforward vector z to be z =
√1 U 1M ,
M
where 1M is a vector of M ones (the constant
11
√
M simplifies the proofs but has no bearing on the result). This choice for z assures
√
e is the identity matrix scaled by M (analogous to (Ganguli et al., 2008) where
that Z
z is optimized to maximize the SNR in the system). Finally, we observe that the richest
information preservation apparently arises for a real-valued W when its eigenvalues
are complex, distinct in phase, have unit magnitude, and appear in complex conjugate
pairs.
For the above network construction, our main result shows that A satisfies the RIP
in the basis Ψ (implying the bounds from Equation (4) hold) when the network size
scales linearly with the sparsity level of the input. This result is made precise in the
following theorem:
Theorem 3.1. Suppose N ≥ M , N ≥ K and N ≥ O(1).1 Let U be any unitary
matrix of eigenvectors (containing complex conjugate pairs) and set z =
e = diag (U −1 z) =
that Z
√1 U 1M
M
so
√1 I.
M
For M an even integer, denote the eigenvalues of
jwm M/2
}
.
Let
the
first
M/2
eigenvalues
{e
W by {ejwm }M
m=1 be chosen uniformly at
m=1
M/2
random on the complex unit circle (i.e., we chose {wm }m=1 uniformly at random from
[0, 2π)) and the other M/2 eigenvalues as the complex conjugates of these values (i.e.,
for M/2 < m ≤ M , ejwm = e−jwm−M/2 ). Under these conditions, for a given RIP
conditioning δ < 1 and failure probability η, if
M ≥C
K 2
µ (Ψ) log4 (N ) log(η −1 ),
δ2
(8)
for a universal constant C, then for any s that is K-sparse (i.e., has no more than K
non-zero entries)
(1 − δ) ≤ ||AΨs||22 / ||s||22 ≤ (1 + δ)
1 The notation N ≥ O(1) means that N ≥ C for some constant C. For clarity, we do not keep track
of the constants in our proofs. The interested reader is referred to (Rauhut, 2010) for specific values of
the constants.
12
with probability exceeding 1 − η.
The proof of this statement is given in Appendix 6.1 and follows closely the approach in (Rauhut, 2010) by generalizing it to both include any basis Ψ and account for
the fact that W is a real-valued matrix. The quantity µ (·) (known as the coherence)
captures the largest inner product between the sparsity basis and the Fourier basis, and
is calculated as:
µ (Ψ) = max
sup
N
−1
X
n=1,...,N t∈[0,2π]
m=0
Ψm,n e−jtm .
(9)
In the result above, the coherence is lower (therefore the STM capacity is higher) when
the sparsity basis is more “different” from the Fourier basis.
The main observation of the result above is that STM capacity scales superlinearly
with network size. Indeed, for some values of K and µ (Ψ) it is possible to have STM
capacities much greater than the number of nodes (i.e., N M ). To illustrate the
perfect recovery of signal lengths beyond the network size, Figure 2 shows an example
recovery of a single long input sequence. Specifically, we generate a 100 node random
orthogonal connectivity matrix W and generate z =
√1 U 1M .
M
We then drive the
network with an input sequence that is 480 samples long and constructed using 24
non-zero coefficients (chosen uniformly at random) of a wavelet basis. The values at
the non-zero entries were chosen uniformly in the range [0.5,1.5]. In this example
we omit noise so that we can illustrate the noiseless recovery. At the end of the input
sequence, the resulting 100 network states are used to solve the optimization problem in
Equation 2 for recovering the input sequence (using the network architecture in (Rozell
et al., 2010)). The recovered sequence, as depicted in Figure 2, is identical to the input
sequence, clearly indicating that the 100 nodes were able to store the 480 samples of
the input sequence (achieving STM capacity higher than the network size).
Directly checking the RIP condition for specific matrices is NP-hard (one would need to check every possible 2K-sparse signal). In light of this difficulty in verifying recovery of all possible sparse signals (which the RIP implies), we will explore the qualitative behavior of the RIP bounds above by examining in Figure 3 the average recovery relative MSE (rMSE) in simulation for a network with M nodes when recovering input sequences of length N with varying sparsity bases. Figure 3 uses a plotting style similar to the Donoho-Tanner phase transition diagrams (Donoho & Tanner, 2005), where the average recovery rMSE is shown for each pair of variables under noisy conditions. While the traditional Donoho-Tanner phase transitions plot noiseless recovery performance to observe the threshold between perfect and imperfect recovery, here we also add noise to illustrate the stability of the recovery guarantees. The noise is generated as random additive Gaussian noise at the input to the system (the input noise term in Equation (5)), with zero mean and variance such that the total noise in the system (the noise term in Equation (6)) has a norm
of approximately 0.01. To demonstrate the behavior of the system, the phase diagrams
in Figure 3 sweep the ratio of measurements to the total signal length (M /N ) and the
ratio of the signal sparsity to the number of measurements (K/M ). Thus at the upper
left hand corner, the system is recovering a dense signal from almost no measurements
(which should almost certainly yield poor results) and at the right hand edge of the plots
the system is recovering a signal from a full set of measurements (enough to recover the
signal well for all sparsity ranges). We generate ten random ESNs for each combination
of ratios (M /N , K/M ). The simulated networks are driven with input sequences that
are sparse in one of four different bases (Canonical, Daubechies-10 wavelet, Symlet-3
wavelet and DCT) which have varying coherence with the Fourier basis. We use the
node values at the end of the sequence to recover the inputs.2
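A condensed sketch of such a sweep is shown below. The grid of ratios, the small signal length ($N = 200$) and trial count, the canonical sparsity basis, and the use of cvxpy in place of TFOCS are all our simplifications for a quick illustration, not the exact simulation protocol behind Figure 3.

```python
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(1)
N, trials = 200, 3   # small values for speed (the text uses length-1000 inputs, 10 trials)

def run_trial(M, K, noise_norm=0.01):
    W, _ = np.linalg.qr(rng.standard_normal((M, M)))      # random orthogonal connectivity
    _, U = np.linalg.eig(W)
    z = (U @ np.ones(M)).real / np.sqrt(M)
    s = np.zeros(N)                                        # K-sparse canonical-basis input
    s[rng.choice(N, size=K, replace=False)] = rng.uniform(0.5, 1.5, size=K)
    A = np.zeros((M, N))                                   # columns W^(N-1-n) z
    A[:, N - 1] = z
    for n in range(N - 2, -1, -1):
        A[:, n] = W @ A[:, n + 1]
    e = rng.standard_normal(M)
    x = A @ s + noise_norm * e / np.linalg.norm(e)         # noisy end state
    s_hat = cp.Variable(N)
    cp.Problem(cp.Minimize(cp.norm1(s_hat)),
               [cp.norm(A @ s_hat - x, 2) <= noise_norm]).solve()
    return np.sum((s_hat.value - s) ** 2) / np.sum(s ** 2)

for m_ratio in (0.2, 0.5, 0.8):
    for k_ratio in (0.2, 0.5, 0.8):
        M = int(m_ratio * N)
        K = max(1, int(k_ratio * M))
        rmse = np.mean([run_trial(M, K) for _ in range(trials)])
        print(f"M/N={m_ratio:.1f}  K/M={k_ratio:.1f}  rMSE={rmse:.3f}")
```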
In each plot of Figure 3, the dashed line denotes the boundary where the system is able to essentially perform perfect recovery (recovery error ≤ 1%) up to the noise floor. Note that the area under this line (the white area in the plot) denotes the region where the system is leveraging the sparse structure of the input to get capacities of N > M. We also observe that the dependence of the RIP bound on the coherence with the Fourier basis is clearly shown qualitatively in these plots, with the DCT sparsity basis showing much worse performance than the other bases.

2 For computational efficiency, we use the TFOCS software package (Becker et al., 2011) to solve the optimization problem in Equation (2) for these simulations.

[Figure 2: three panels showing the input sequence, the 100 network state values (plotted against state index) produced by the encoding network (Eq. [5]), and the sequence recovered by the decoding network (Eq. [2]).]
Figure 2: A length 480 stimulus pattern (left plot) that is sparse in a wavelet basis drives the encoding network defined by a random orthogonal matrix W and a feed-forward vector z. The 100 node values (center plot) are then used to recover the full stimulus pattern (right plot) using a decoding network which solves Equation (2).
3.3 STM Capacity of Infinite-Length Inputs
After establishing the perfect recovery bounds for finite-length inputs in the previous
section, we turn here to the more interesting case of a network that has received an
input beyond its STM capacity (perhaps infinitely long). In contrast to the finite-length
input case where favorable constructions for W used random unit-norm eigenvalues,
this construction would be unstable for infinitely long inputs. In this case, we take
W to have all eigenvalue magnitudes equal to q < 1 to ensure stability. The matrix
constructions we consider in this section are otherwise identical to that described in the
previous section.
In this scenario, the recurrent application of W in the system dynamics assures that
each input perturbation will decay steadily until it has zero effect on the network state.
While good for system stability, this decay means that each input will slowly recede
into the past until the network activity contains no useable memory of the event. In other words, any network with this decay can only hope to recover a proxy signal that accounts for the decay in the signal representation induced by the forgetting factor $q$. Specifically, we define this proxy signal to be $Qs$, where $Q = \mathrm{diag}([1, q, q^2, \ldots])$. Previous work (Ganguli et al., 2008; Jaeger, 2001; White et al., 2004) has characterized recoverability by using statistical arguments to quantify the correlation of the node values to each past input perturbation. In contrast, our approach is to provide recovery bounds on the rMSE for a network attempting to recover the $N$ past samples of $Qs$, which corresponds to the weighted length-$N$ history of $s$. Note that in contrast to the previous section where we established the length of the input that can be perfectly recovered, the amount of time we attempt to recall ($N$) is now a parameter that can be varied.

[Figure 3: four panels titled rMSE - Canonical, rMSE - Daubechies, rMSE - Symlets and rMSE - DCT, each plotting recovery rMSE over a grid of K/M (vertical axis) versus M/N (horizontal axis).]
Figure 3: Random orthogonal networks can have a STM capacity that exceeds the number of nodes. These plots depict the recovery relative mean square error (rMSE) for length-1000 input sequences from M network nodes where the input sequences are K-sparse. Each figure depicts recovery for a given set of ratios M/N and K/M. Recovery is near perfect (rMSE ≤ 1%; denoted by the dotted line) for large areas of each plot (to the left of the N = M boundary at the right of each plot) for sequences sparse in the canonical basis or various wavelet bases (shown here are 4 level decompositions in Symlet-3 wavelets and Daubechies-10 wavelets). For bases more coherent with the Fourier basis (e.g., discrete cosine transform-DCT), recovery performance above N = M can suffer significantly. All the recovery here was done for noise such that $\|\epsilon\|_2 \approx 0.01$.
Our technical approach to this problem comes from observing that activity due to
inputs older than N acts as interference when recovering more recent inputs. In other
words, we can group older terms (i.e., from farther back than N time samples ago)
with the noise term, resulting again in A being an M by N linear operation that can
satisfy RIP for length-N inputs. In this case, after choosing the length of the memory
to recover, the guarantees in Equation (4) hold when considering every input older than
N as contributing to the “noise” part of the bound.
Specifically, in the noiseless case where $s$ is sparse in the canonical basis ($\mu(I) = 1$) with a maximum signal value $s_{max}$, we can bound the first term of Equation (4) using a geometric sum that depends on $N$, $K$ and $q$. For a given scenario (i.e., a choice of $q$, $K$ and the RIP conditioning of $A$), a network can support signal recovery up to a certain sparsity level $K^*$, given by:
$$K^* = \frac{M\delta^2}{C\log^\gamma(N)}, \qquad (10)$$
where $\gamma$ is a scaling constant (e.g., $\gamma = 4$ using the present techniques, but $\gamma = 1$ is conjectured (Rudelson & Vershynin, 2008)). We can also bound the second term of Equation (4) by the sum of the energy in the past $N$ perturbations that are beyond this sparsity level $K^*$. Together these terms yield the bound on the recovery of the proxy signal:
$$\|Qs - Q\hat{s}\|_2 \leq \beta s_{max}\|U\|_2\frac{q^N}{1-q} + \frac{\beta s_{max}}{\sqrt{\min[K^*, K]}}\cdot\frac{q^{\min[K^*, K]} - q^K}{1-q} + \alpha\epsilon_{max}\|U\|_2\frac{q}{1-q}. \qquad (11)$$
The derivation of the first two terms in the above bound is detailed in Appendix 6.3, and
the final term is simply the accumulated noise, which should have bounded norm due
to the exponential decay of the eigenvalues of W .
Intuitively, we see that this approach implies the presence of an optimal value for
the recovery length N . For example, choosing N too small means that there is useful
signal information in the network that the system is not attempting to recover, resulting
in omission errors (i.e., an increase in the first term of Equation (4) by counting too
much signal as noise). On the other hand, choosing N too large means that the system is encountering recall errors by trying to recover inputs with little or no residual
information remaining in the network activity (i.e., an increase in the second term of
Equation (4) from making the signal approximation worse by using the same number
of nodes for a longer signal length).
The intuitive argument above can be made precise in the sense that the bound in
Equation (11) does have at least one local minimum for some value of 0 < N < ∞.
First, we note that the noise term (i.e., the third term on the right side of Equation (11))
does not depend on N (the choice in origin does not change the infinite summation),
implying that the optimal recovery length only depends on the first two terms. We
also note the important fact that K ∗ is non-negative and monotonically decreasing with
increasing N. It is straightforward to observe that the bound in Equation (11)
tends to infinity as N increases (due to the presence of K ∗ in the denominator of the
second term). Furthermore, for small values of N , the second term in Equation (11) is
zero (due to K ∗ > K), and the first term is monotonically decreasing with N . Taken
together, since the function is continuous in N , has negative slope for small N and tends
to infinity for large N, we can conclude that it must have at least one local minimum in
the range 0 < N < ∞. This result predicts that there is (at least one) optimal value for
the recovery length N .
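To make this predicted tradeoff concrete, the sketch below evaluates the first two terms of an Equation (11)-style bound on a grid of recovery lengths $N$ and locates the minimizer numerically. All constants ($C$, $\delta$, $\beta$, $\|U\|_2$, and the use of the conjectured $\gamma = 1$) are arbitrary placeholders chosen for illustration; only the qualitative U-shape and the existence of an interior minimum are the point.

```python
import numpy as np

# Arbitrary illustrative constants (not calibrated); gamma = 1 is the conjectured scaling.
M, K, q = 500, 400, 0.999
s_max, beta, U_norm = 1.0, 1.0, 1.0
C, delta, gamma = 0.05, 0.5, 1.0

def bound_first_two_terms(N):
    K_star = M * delta**2 / (C * np.log(N) ** gamma)      # Eq. (10)
    k = min(K_star, K)
    omission = beta * s_max * U_norm * q**N / (1.0 - q)   # first term of Eq. (11)
    approx = beta * s_max * (q**k - q**K) / ((1.0 - q) * np.sqrt(k))  # second term
    return omission + approx

N_grid = np.arange(100, 20001, 100)
values = np.array([bound_first_two_terms(N) for N in N_grid])
print("bound is minimized near N =", N_grid[np.argmin(values)])
```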
The prediction of an optimal recovery length above is based on the behavior of the error bound in Equation (11), and it is possible that the error itself will not actually show this behavior (since the bound may not be tight in all cases). To test the qualitative
intuition from Equation (11), we simulate recovery of input lengths and show the results
in Figure 4. Specifically, we generate 50 ESNs with 500 nodes and a decay rate of q
=0.999. The input signals are length-8000 sequences that have 400 nonzeros whose
locations are chosen uniformly at random and whose amplitudes are chosen from a
Gaussian distribution (zero mean and unit variance). After presenting the full 8000
samples of the input signal to the network, we use the network states to recover the input history with varying lengths and compare the resulting MSE to the bound in Equation (11). Note that while the theoretical bound may not be tight for large signal lengths, the recovery MSE matches the qualitative behavior of the bound by achieving a minimum value at N > M.

[Figure 4: the right panel plots MSE against recovered length N for the theoretical bound and the empirical recovery; the left panels show recoveries at N = 500 (high omission errors, rMSE = 0.8826), N = 4000 (approximately optimal, rMSE = 0.0964), and N = 8000 (high recall errors, rMSE = 0.1823).]
Figure 4: The theoretical bound on the recovery error for the past N perturbations to a network of size M has a minimum value at some optimal recovery length. This optimal value depends on the network size, the sparsity K, the decay rate q, and the RIP conditioning of A. Shown on the right is a simulation depicting the MSE for both the theoretical bound (red dashed line) and an empirical recovery for varying recovery lengths N. In this simulation K = 400, q = 0.999, M = 500. The error bars for the empirical curve show the maximum and minimum MSE. On the left we show recovery (in orange) of a length-8000 decayed signal (in black) when recovering the past 500 (top), 4000 (middle), and 8000 (bottom) most recent perturbations. As expected, at N = 4000 (approximately optimal) the recovery has the highest accuracy.
4 Other Network Constructions
4.1 Alternate Orthogonal Constructions
Our results in the previous section focus on the case where W is orthogonal and z
projects the signal evenly into all eigenvectors of W . When either W or z deviate from
this structure the STM capacity of the network apparently decreases. In this section
we revisit those specifications, considering alternate network structures allowed under
these assumptions as well as the consequences of deviating from these assumptions in
favor of other structural advantages for a system (e.g., wire length, etc.).
To begin, we consider the assumption of orthogonal network connectivity, where the eigenvalues have constant magnitude and the eigenvectors are orthonormal. Constructed in this way, $U$ exactly preserves the conditioning of $\tilde{Z}F$. While this construction may seem restrictive, orthogonal matrices are relatively simple to generate and encompass a number of distinct cases. For small networks, selecting the eigenvalues uniformly at random from the unit circle (and including their complex conjugates to ensure real connectivity weights) and choosing an orthonormal set of complex conjugate eigenvectors creates precisely these optimal properties. For larger matrices, the connectivity matrix can instead be constructed directly by choosing W at random and orthogonalizing the columns. Previous results on random matrices (Diaconis & Shahshahani, 1994) guarantee that as the size of W increases, the eigenvalue probability density approaches the uniform distribution as desired. Some recent work in STM capacity demonstrates an alternate method by which orthogonal matrices can be constructed while constraining the total connectivity of the network (Strauss et al., 2012). This method iteratively applies rotation matrices to obtain orthogonal matrices with varying degrees of connectivity. We note here that one special case of connectivity matrices not well-suited to the STM task, even when made orthogonal, is symmetric networks, where the strictly real-valued eigenvalues generate poor RIP conditioning for F.
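Both recipes are straightforward to realize numerically. The sketch below (our own illustration, with arbitrary sizes) builds a small real orthogonal $W$ directly from conjugate-pair eigenvalues drawn uniformly on the unit circle, realized as 2x2 rotation blocks in a random orthonormal basis, and builds a larger $W$ by QR-orthogonalizing a Gaussian matrix.

```python
import numpy as np

rng = np.random.default_rng(0)

def orthogonal_from_eigenvalues(M):
    """Real orthogonal W whose eigenvalues are exp(+-j*w), w uniform on the circle.
    Each conjugate pair is a 2x2 rotation block; a random orthogonal change of
    basis mixes the blocks so W is not itself block diagonal."""
    assert M % 2 == 0
    D = np.zeros((M, M))
    for i, w in enumerate(rng.uniform(0.0, np.pi, M // 2)):
        c, s = np.cos(w), np.sin(w)
        D[2 * i:2 * i + 2, 2 * i:2 * i + 2] = [[c, -s], [s, c]]
    Q, _ = np.linalg.qr(rng.standard_normal((M, M)))  # random orthogonal basis
    return Q @ D @ Q.T

def orthogonal_from_qr(M):
    """Real orthogonal W by orthogonalizing the columns of a Gaussian matrix."""
    W, _ = np.linalg.qr(rng.standard_normal((M, M)))
    return W

for W in (orthogonal_from_eigenvalues(10), orthogonal_from_qr(200)):
    eigs = np.linalg.eigvals(W)
    print("orthogonal:", np.allclose(W.T @ W, np.eye(W.shape[0])),
          " |eigenvalues| ~ 1:", np.allclose(np.abs(eigs), 1.0))
```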
While simple to generate in principle, the matrix constructions discussed above are
generally densely connected and may be impractical for many systems. However, many
other special network topologies that may be more biophysically realistic (i.e., block diagonal connectivity matrices and small-world3 networks (Mongillo et al., 2008)) can be
constructed so that W still has orthonormal columns. For example, consider the case of
a block diagonal connection matrix (illustrated in Figure 5), where many unconnected networks of at least two nodes each are driven by the same input stimulus and evolve separately. Such a structure lends itself to a modular framework, where more of these subnetworks can be recruited to recover input stimuli further in the past. In this case, each block can be created independently as above and pieced together. The columns of the block diagonal matrix will still have unit norm and will be both orthogonal to vectors within its own block (since each of the diagonal sub-matrices are orthonormal) and orthogonal to all columns in other blocks (since there is no overlap in the non-zero indices).

3 Small-world structures are typically taken to be networks where small groups of neurons are densely connected amongst themselves, yet sparse connections to other groups reduce the maximum distance between any two nodes.

[Figure 5: three schematic topologies (full network, modular network, small-world network).]
Figure 5: Possible network topologies which have orthogonal connectivity matrices. In the general case, all nodes are connected via non-symmetric connections. Modular topologies can still be orthogonal if each block is itself orthogonal. Small world topologies may also have orthogonal connectivity, especially when a few nodes are completely connected to a series of otherwise disjoint nodes.
Similarly, a small-world topology can be achieved by taking a few of the nodes in
every group of the block diagonal case and allowing connections to all other neurons
(either unidirectional or bidirectional connections). To construct such a matrix, a block
diagonal orthogonal matrix can be taken, a number of columns can be removed and
replaced with full columns, and the resulting columns can be made orthonormal with
respect to the remaining block-diagonal columns. In these cases, the same eigenvalue
distribution and eigenvector properties hold as the fully connected case, resulting in
the same RIP guarantees (and therefore the same recovery guarantees) demonstrated
earlier. We note that this is only one approach to constructing a network with favorable
STM capacity and not all networks with small-world properties will perform well.
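The sketch below illustrates these two constructions with arbitrarily chosen sizes: an orthogonal block-diagonal $W$ assembled from independent 2x2 rotation blocks, and a crude small-world variant obtained by replacing a few columns with dense ones and re-orthonormalizing them against the kept columns, which preserves orthogonality of the full matrix.

```python
import numpy as np

rng = np.random.default_rng(3)

def block_diagonal_orthogonal(num_blocks):
    """Orthogonal W made of independent 2x2 rotation blocks (disconnected subnetworks)."""
    M = 2 * num_blocks
    W = np.zeros((M, M))
    for i in range(num_blocks):
        w = rng.uniform(0.0, np.pi)
        c, s = np.cos(w), np.sin(w)
        W[2 * i:2 * i + 2, 2 * i:2 * i + 2] = [[c, -s], [s, c]]
    return W

def small_world_from_blocks(B, num_dense):
    """Replace a few columns of a block-diagonal orthogonal B with dense columns,
    re-orthonormalized against the kept columns so the result stays orthogonal."""
    M = B.shape[0]
    dense_idx = rng.choice(M, size=num_dense, replace=False)
    keep_idx = np.setdiff1d(np.arange(M), dense_idx)
    kept = B[:, keep_idx]                        # orthonormal columns we keep
    new = rng.standard_normal((M, num_dense))
    new -= kept @ (kept.T @ new)                 # project out the kept subspace
    new, _ = np.linalg.qr(new)                   # orthonormalize the remainder
    W = np.empty((M, M))
    W[:, keep_idx] = kept
    W[:, dense_idx] = new
    return W

B = block_diagonal_orthogonal(50)                # 100-node modular network
W_sw = small_world_from_blocks(B, num_dense=6)
for name, W in (("modular", B), ("small-world", W_sw)):
    print(name, "orthogonal:", np.allclose(W.T @ W, np.eye(W.shape[0])))
```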
Additionally, we note that as opposed to networks analyzed in prior work (in particular the work in (Wallace et al., 2013) demonstrating that random networks with high
connectivity have short STM), the average connectivity does not play a dominant role in
our analysis. Specifically, it has been observed in spiking networks that higher network
connectivity can reduce the STM capacity so that it scales only with log(M) (Wallace et al., 2013). However, in our ESN analysis, networks can have low connectivity
(e.g. 2x2 block-diagonal matrices - the extreme case of the block diagonal structure described above) or high connectivity (e.g. fully connected networks) and have the same
performance.
4.2 Suboptimal Network Constructions
Finally, we can also analyze some variations to the network structure assumed in this
paper to see how much performance decreases. First, instead of the deterministic construction for z discussed in the earlier sections, there has also been interest in choosing z as i.i.d. random Gaussian values (Ganguli et al., 2008; Ganguli & Sompolinsky,
2010). In this case, it is also possible to show that A satisfies the RIP (with respect
to the basis Ψ and with the same RIP conditioning δ as before) by paying an extra
log(N ) penalty in the number of measurements. Specifically, we have also established
the following theorem:
Theorem 4.1. Suppose $N \geq M$, $N \geq K$ and $N \geq O(1)$. Let $U$ be any unitary matrix of eigenvectors (containing complex conjugate pairs) and the entries of $z$ be i.i.d. zero-mean Gaussian random variables with variance $\frac{1}{M}$. For $M$ an even integer, denote the eigenvalues of $W$ by $\{e^{jw_m}\}_{m=1}^{M}$. Let the first $M/2$ eigenvalues ($\{e^{jw_m}\}_{m=1}^{M/2}$) be chosen uniformly at random on the complex unit circle (i.e., we chose $\{w_m\}_{m=1}^{M/2}$ uniformly at random from $[0, 2\pi)$) and the other $M/2$ eigenvalues as the complex conjugates of these values. Then, for a given RIP conditioning $\delta$ and failure probability $N^{-\log^4 N} \leq \eta \leq \frac{1}{e}$, if
$$M \geq C \frac{K}{\delta^2}\,\mu^2(\Psi)\log^5(N)\log(\eta^{-1}), \qquad (12)$$
$A$ satisfies RIP-$(K, \delta)$ with probability exceeding $1 - \eta$ for a universal constant $C$.
The proof of this theorem can be found in Appendix 6.2. The additional log factor in
the bound in (12) reflects that a random feed-forward vector may not optimally spread
the input energy over the different eigen-directions of the system. Thus, some nodes
may see less energy than others, making them slightly less informative. Note that while
this construction does perform worse than the optimal constructions from Theorem 3.1,
the STM capacity is still very favorable (i.e., a linear scaling in the sparsity level and
logarithmic scaling in the signal length).
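For comparison, the snippet below (illustrative only) draws the Gaussian feed-forward vector covered by Theorem 4.1 alongside the deterministic construction of Theorem 3.1, and shows how the two spread input energy over the eigen-directions of $W$; the network size and seed are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(7)
M = 100
W, _ = np.linalg.qr(rng.standard_normal((M, M)))   # random orthogonal connectivity
_, U = np.linalg.eig(W)

z_deterministic = (U @ np.ones(M)).real / np.sqrt(M)   # Theorem 3.1 construction
z_gaussian = rng.normal(0.0, 1.0 / np.sqrt(M), M)      # Theorem 4.1: variance 1/M

# The deterministic choice spreads input energy evenly over eigen-directions;
# the Gaussian choice only does so on average.
print(np.abs(np.linalg.inv(U) @ z_deterministic))  # ~ 1/sqrt(M) in every direction
print(np.abs(np.linalg.inv(U) @ z_gaussian))       # fluctuates around that level
```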
Second, instead of orthogonal connectivity matrices, there has also been interest
in network constructions involving non-orthogonal connectivity matrices (perhaps for
noise reduction purposes (Ganguli et al., 2008)). When the eigenvalues of W still lie on
the complex unit circle, we can analyze how non-orthogonal matrices affect the RIP results. In this case, the decomposition in Equation (7) still holds and Theorem 3.1 still applies to guarantee that F satisfies the RIP. However, the non-orthogonality changes the
conditioning of $U$ and subsequently the total conditioning of $A$. Specifically, the conditioning of $U$ (the ratio of the maximum and minimum singular values, $\sigma_{max}^2/\sigma_{min}^2 = \gamma$) will affect the total conditioning of $A$. We can use the RIP of $F$ and the extreme singular values of $U$ to bound how close $UF$ is to an isometry for sparse vectors, both above by
$$\|UFs\|_2^2 \leq \sigma_{max}^2\|Fs\|_2^2 \leq \sigma_{max}^2 C(1+\delta)\|s\|_2^2,$$
and below by
$$\|UFs\|_2^2 \geq \sigma_{min}^2\|Fs\|_2^2 \geq \sigma_{min}^2 C(1-\delta)\|s\|_2^2.$$
By consolidating these bounds, we find a new RIP statement for the composite matrix
$$C'(1-\delta')\|s\|_2^2 \leq \|UFs\|_2^2 \leq C'(1+\delta')\|s\|_2^2,$$
where $\sigma_{min}^2 C(1-\delta) = C'(1-\delta')$ and $\sigma_{max}^2 C(1+\delta) = C'(1+\delta')$. These relationships can be used to solve for the new RIP constants:
$$\delta' = \frac{\frac{\gamma-1}{\gamma+1}+\delta}{1+\delta\,\frac{\gamma-1}{\gamma+1}}, \qquad C' = \frac{C\left(\sigma_{max}^2+\sigma_{min}^2+\delta(\sigma_{max}^2-\sigma_{min}^2)\right)}{2}.$$
These expressions demonstrate that as the conditioning of U improves (i.e. γ → 1),
the RIP conditioning does not change from the optimal case of an orthogonal network
(δ 0 = δ). However, as the conditioning of U gets worse and γ grows, the constants
associated with the RIP statement also get worse (implying more measurements are
likely required to guarantee the same recovery performance).
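These relations are simple to evaluate; the helper below is a direct transcription of the algebra above, mapping the RIP constants $(C, \delta)$ of $F$ and the extreme singular values of $U$ to the composite constants $(C', \delta')$. The specific singular values swept over are arbitrary.

```python
def composite_rip_constants(C, delta, sigma_max, sigma_min):
    """Map RIP constants of F and the extreme singular values of U to the
    composite constants of UF, following the consolidation above."""
    gamma = sigma_max**2 / sigma_min**2
    r = (gamma - 1.0) / (gamma + 1.0)
    delta_prime = (r + delta) / (1.0 + delta * r)
    C_prime = C * (sigma_max**2 + sigma_min**2
                   + delta * (sigma_max**2 - sigma_min**2)) / 2.0
    return C_prime, delta_prime

for s_max in (1.0, 1.5, 3.0):   # sigma_min fixed at 1, so gamma = s_max**2
    print(s_max, composite_rip_constants(C=1.0, delta=0.3,
                                         sigma_max=s_max, sigma_min=1.0))
```

At gamma = 1 the output reproduces (C, delta) exactly; as gamma grows, delta' degrades toward 1, matching the qualitative conclusion above.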
The above analysis primarily concerns itself with constructions where the eigenvalues of $W$ are still unit norm, however $U$ is not orthogonal. Generally, when the eigenvalues of $W$ differ from unity and are not all of equal magnitude, the current approach becomes intractable. In one case, however, there are theoretical guarantees: when $W$ is rank deficient. If $W$ only has $\tilde{M}$ unit-norm eigenvalues, and the remaining $M - \tilde{M}$ eigenvalues are zero, then the resulting matrix $A$ is composed the same way, except that the bottom $M - \tilde{M}$ rows are all zero. This means that the effective measurements only depend on an $\tilde{M} \times N$ subsampled DTFT
$$x[N] = U\tilde{Z}Fs + \epsilon = U\tilde{Z}\begin{bmatrix}\tilde{F}\\ 0_{M-\tilde{M},N}\end{bmatrix}s + \epsilon = U\tilde{Z}_{1:\tilde{M}}\tilde{F}s + \epsilon,$$
where $\tilde{F}$ is the matrix consisting of the non-zero rows of $F$. In this case we can choose any $\tilde{M}$ of the nodes and the previous theorems will all hold, replacing the true number of nodes $M$ with the effective number of nodes $\tilde{M}$.
5 Discussion
We have seen that the tools of the CS literature can provide a way to quantify the STM
capacity in linear networks using rigorous non-asymptotic recovery error bounds. Of
particular note is that this approach leverages the non-Gaussianity of the input statistics
to show STM capacities that are superlinear in the size of the network and depend
linearly on the sparsity level of the input. This work provides a concrete theoretical
understanding for the approach conjectured in (Ganguli & Sompolinsky, 2010) along
with a generalization to arbitrary sparsity bases and infinitely long input sequences.
This analysis also predicts that there exists an optimal recovery length that balances
omission errors and recall mistakes.
In contrast to previous work on ESNs that leverage nonlinear network computations
for computational power (Jaeger & Haas, 2004), the present work uses a linear network
and nonlinear computations for signal recovery. Despite the nonlinearity of the recovery process, the fundamental results of the CS literature also guarantee that the recovery
process is stable and robust. For example, with access to only a subset of nodes (due to
failures or communication constraints), signal recovery generally degrades gracefully
by still achieving the best possible approximation of the signal using fewer coefficients.
Beyond signal recovery, we also note that the RIP can guarantee performance on many
tasks (e.g. detection, classification, etc.) performed directly on the network states (Davenport et al., 2010). Finally, we note that while this work only addresses the case where
a single input is fed to the network, there may be networks of interest that have a number
of input streams all feeding into the same network (with different feed-forward vectors).
We believe that the same tools utilized here can be used in the multi-input case, since
the overall network state is still a linear function of the inputs.
Acknowledgments
The authors are grateful to J. Romberg for valuable discussions related to this work.
This work was partially supported by NSF grant CCF-0905346 and DSO National Laboratories, Singapore.
6 Appendix
6.1 Proof of RIP
In this appendix, we show that the matrix $A = U\tilde{Z}F$ satisfies the RIP under the conditions stated in Equation (8) of the main text in order to prove Theorem 3.1. We note that (Rauhut, 2010) shows that for the canonical basis ($\Psi = I$), the bounds for $M$ can be tightened to $M \geq \max\left(C\frac{K}{\delta^2}\log^4 N,\; C'\frac{K}{\delta^2}\log\eta^{-1}\right)$ using a more complex proof technique than we will employ here. For $\eta = \frac{1}{N}$, the result in (Rauhut, 2010) represents an improvement of several $\log(N)$ factors when restricted to only the canonical basis for $\Psi$. We also note that the scaling constant $C$ found in the general RIP definition of Equation (3) of the main text is unity due to the $\sqrt{M}$ scaling of $z$.
While the proof of Theorem 3.1 is fairly technical, the procedure follows very closely the proof of Theorem 8.1 from (Rauhut, 2010) on subsampled discrete time
Fourier transform (DTFT) matrices. While the basic approach is the same, the novelty
in our presentation is the incorporation of the sparsity basis Ψ and considerations for a
real-valued connectivity matrix W .
Before beginning the proof of this theorem, we note that because $U$ is assumed unitary, $\|A\Psi s\|_2 = \|\tilde{Z}F\Psi s\|_2$ for any signal $s$. Thus, it suffices to establish the conditioning properties of the matrix $\hat{A} := \tilde{Z}F\Psi$. For the upcoming proof, it will be useful to write this matrix as a sum of rank-1 operators. The specific rank-1 operator that will be useful for our purposes is $X_l X_l^H$ with $X_l^H := F_l^H\Psi$, the conjugate of the $l$-th row of $F\Psi$, where $F_l^H := [1, e^{jw_l}, \cdots, e^{jw_l(N-1)}] \in \mathbb{C}^N$ is the conjugated $l$-th row of $F$. Because of the way the "frequencies" $\{w_m\}$ are chosen, for any $l > \frac{M}{2}$, $X_l = X^*_{l-\frac{M}{2}}$. The $l$-th row of $\hat{A}$ is $\tilde{z}_l X_l^H$ where $\tilde{z}_l$ is the $l$-th diagonal entry of the diagonal matrix $\tilde{Z}$, meaning that we can use the sum of rank-1 operators to write the decomposition $\hat{A}^H\hat{A} = \sum_{l=1}^{M}|\tilde{z}_l|^2 X_l X_l^H$. If we define the random variable $B := \hat{A}^H\hat{A} - I$ and the norm $\|B\|_K := \sup_{y\ \text{is}\ K\text{-sparse}} \frac{y^H B y}{y^H y}$, we can equivalently say that $\hat{A}$ has RIP conditioning $\delta$ if
$$\|B\|_K := \left\|\hat{A}^H\hat{A} - I\right\|_K = \left\|\sum_{l=1}^{M}|\tilde{z}_l|^2 X_l X_l^H - I\right\|_K \leq \delta.$$
To aid in the upcoming proof, we make a few preliminary observations and rewrite the quantities of interest in some useful ways. First, because of the correspondences between the summands in $\hat{A}^H\hat{A}$ (i.e., $X_l = X^*_{l-M/2}$), we can rewrite $\hat{A}^H\hat{A}$ as
$$\hat{A}^H\hat{A} = \sum_{l=1}^{M/2}|\tilde{z}_l|^2 X_l X_l^H + \left(\sum_{l=1}^{M/2}|\tilde{z}_l|^2 X_l X_l^H\right)^*,$$
making clear the fact that there are only $\frac{M}{2}$ independent $w_m$'s. Under the assumption of Theorem 3.1, $\tilde{z}_l = \frac{1}{\sqrt{M}}$ for $l = 1, \cdots, M$. Therefore,
$$E\left[\sum_{l=1}^{M/2}|\tilde{z}_l|^2 X_l X_l^H\right] = \sum_{l=1}^{M/2}|\tilde{z}_l|^2 E\left[X_l X_l^H\right] = \sum_{l=1}^{M/2}\frac{1}{M}\Psi^H E\left[F_l F_l^H\right]\Psi = \frac{1}{2}I,$$
where it is straightforward to check that $E\left[F_l F_l^H\right] = I$. By the same reasoning, we also have $E\left[\left(\sum_{l=1}^{M/2}|\tilde{z}_l|^2 X_l X_l^H\right)^*\right] = \frac{1}{2}I$. This implies that we can rewrite $B$ as
$$B = \sum_{l=1}^{M}|\tilde{z}_l|^2 X_l X_l^H - I = \left(\sum_{l=1}^{M/2}|\tilde{z}_l|^2 X_l X_l^H - \frac{1}{2}I\right) + \left(\sum_{l=1}^{M/2}|\tilde{z}_l|^2 X_l X_l^H - \frac{1}{2}I\right)^* =: B_1 + B_2.$$
The main proof of the theorem has two main steps. First, we will establish a bound
on the moments of the quantity of interest kBkK . Next we will use these moments to
derive a tail bound on kBkK , which will lead directly to the RIP statement we seek.
The following two lemmas from the literature will be critical for these two steps.
Lemma 6.1 (Lemma 8.2 of (Rauhut, 2010)). Suppose $M \geq K$ and suppose we have a sequence of (fixed) vectors $Y_l \in \mathbb{C}^N$ for $l = 1, \cdots, M$ such that $\kappa := \max_{l=1,\cdots,M}\|Y_l\|_\infty < \infty$. Let $\{\xi_l\}$ be a Rademacher sequence, i.e., a sequence of i.i.d. $\pm 1$ random variables. Then for $p = 1$ and for $p \in \mathbb{R}$ and $p \geq 2$,
$$\left(E\left[\left\|\sum_{l=1}^{M}\xi_l Y_l Y_l^H\right\|_K^p\right]\right)^{1/p} \leq C'C^{1/p}\kappa\sqrt{p}\sqrt{K}\log(100K)\sqrt{\log(4N)\log(10M)}\;\sqrt{\left\|\sum_{l=1}^{M}Y_l Y_l^H\right\|_K},$$
where $C, C'$ are universal constants.
Lemma 6.2 (Adapted from Proposition 6.5 of (Rauhut, 2010)). Suppose $Z$ is a random variable satisfying
$$\left(E[|Z|^p]\right)^{1/p} \leq \alpha\beta^{1/p}p^{1/\gamma},$$
for all $p \in [p_0, p_1]$, and for constants $\alpha, \beta, \gamma, p_0, p_1$. Then, for all $u \in [p_0^{1/\gamma}, p_1^{1/\gamma}]$,
$$P\left[|Z| \geq e^{1/\gamma}\alpha u\right] \leq \beta e^{-u^\gamma/\gamma}.$$
Armed with this notation and these lemmas, we now prove Theorem 3.1:
Proof. We seek to show that under the conditions on $M$ in Theorem 3.1, $P[\|B\|_K > \delta] \leq \eta$. Since $B = B_1 + B_2$ and $\{\|B_1\|_K \leq \delta/2\}\cap\{\|B_2\|_K \leq \delta/2\} \subset \{\|B\|_K \leq \delta\}$, then
$$P[\|B\|_K > \delta] \leq P[\|B_1\|_K > \delta/2] + P[\|B_2\|_K > \delta/2].$$
Thus, it will suffice to bound $P[\|B_1\|_K > \delta/2] \leq \eta/2$ since $B_2 = B_1^*$ implies that $P[\|B_2\|_K > \delta/2] \leq \eta/2$. In this presentation we let $C, C'$ be universal constants that may not be the same from line to line.
To begin, we use Lemma 6.1 to bound $E_p := (E[\|B_1\|_K^p])^{1/p}$ by setting $Y_l = \tilde{z}_l^* X_l$ for $l = 1, \cdots, \frac{M}{2}$. To meet the conditions of Lemma 6.1 we use a standard "symmetrization" manipulation (see Lemma 6.7 of (Rauhut, 2010)). Specifically, we can write:
$$E_p = \left(E[\|B_1\|_K^p]\right)^{1/p} \leq 2\left(E\left[\left\|\sum_{l=1}^{M/2}\xi_l Y_l Y_l^H\right\|_K^p\right]\right)^{1/p} = 2\left(E\left[\left\|\sum_{l=1}^{M/2}\xi_l|\tilde{z}_l|^2 X_l X_l^H\right\|_K^p\right]\right)^{1/p},$$
where now the expectation is over the old random sequence $\{w_l\}$, together with a newly added Rademacher sequence $\{\xi_l\}$. Applying the law of iterated expectation and Lemma 6.1, we have for $p \geq 2$:
$$\begin{aligned}
E_p^p := E[\|B_1\|_K^p] &\leq 2^p\,E\left[E\left[\left\|\sum_{l=1}^{M/2}\xi_l|\tilde{z}_l|^2 X_l X_l^H\right\|_K^p\,\Big|\,\{w_l\}\right]\right] \qquad (13)\\
&\leq \left(2C'C^{1/p}\sqrt{p}\,\kappa\sqrt{K}\log(100K)\sqrt{\log(4N)\log(5M)}\right)^p E\left[\left\|\sum_{l=1}^{M/2}|\tilde{z}_l|^2 X_l X_l^H\right\|_K^{p/2}\right] \qquad (14)\\
&\leq \left(C'C^{1/p}\sqrt{p}\,\kappa\sqrt{CK\log^4(N)}\right)^p E\left[\left(\left\|\sum_{l=1}^{M/2}|\tilde{z}_l|^2 X_l X_l^H - \frac{1}{2}I\right\|_K + \frac{1}{2}\|I\|_K\right)^{p/2}\right]\\
&\leq \left(C'C^{1/p}\sqrt{p}\,\kappa\sqrt{CK\log^4(N)}\right)^p \sqrt{E\left[\left(\left\|\sum_{l=1}^{M/2}|\tilde{z}_l|^2 X_l X_l^H - \frac{1}{2}I\right\|_K + \frac{1}{2}\right)^p\right]}\\
&\leq \left(C'C^{1/p}\sqrt{p}\,\kappa\sqrt{CK\log^4(N)}\right)^p\left(\sqrt{E_p + \frac{1}{2}}\right)^p.
\end{aligned}$$
In the first line above, the inner expectation is over the Rademacher sequence $\{\xi_l\}$ (where we apply Lemma 6.1) while the outer expectation is over the $\{w_l\}$. The third line uses the triangle inequality for the $\|\cdot\|_K$ norm, the fourth line uses Jensen's inequality, and the fifth line uses the triangle inequality for the moments norm (i.e., $(E[|X+Y|^p])^{1/p} \leq (E[|X|^p])^{1/p} + (E[|Y|^p])^{1/p}$). To get to $\log^4 N$ in the third line, we used our assumption that $N \geq M$, $N \geq K$ and $N \geq O(1)$ in Theorem 3.1. Now using the definition of $\kappa$
from Lemma 6.1, we can bound this quantity as:
$$\kappa := \max_l\|Y_l\|_\infty = \max_l|\tilde{z}_l|\|X_l\|_\infty = \frac{1}{\sqrt{M}}\max_l\|X_l\|_\infty = \frac{1}{\sqrt{M}}\max_{l,n}|\langle F_l, \Psi_n\rangle| \leq \frac{\mu(\Psi)}{\sqrt{M}}.$$
Therefore, we have the following implicit bound on the moments of the random variable of interest
$$E_p \leq C^{1/p}\sqrt{p}\sqrt{\frac{C'K\mu(\Psi)^2\log^4(N)}{M}}\sqrt{E_p + \frac{1}{2}}.$$
The above can be written as $E_p \leq a_p\sqrt{E_p + \frac{1}{2}}$, where $a_p = C^{1/p}\sqrt{p}\sqrt{\frac{4C'K\mu(\Psi)^2\log^4(N)}{M}}$. By squaring, rearranging the terms and completing the square, we have $E_p \leq \frac{a_p^2}{2} + a_p\sqrt{\frac{1}{2} + \frac{a_p^2}{4}}$. By assuming $a_p \leq \frac{1}{2}$, this bound can be simplified to $E_p \leq a_p$. Now, this assumption is equivalent to having an upper bound on the range of values of $p$:
$$a_p \leq \frac{1}{2} \;\Leftrightarrow\; \sqrt{p} \leq \frac{1}{2C^{1/p}}\sqrt{\frac{M}{4C'K\mu(\Psi)^2\log^4(N)}} \;\Leftrightarrow\; p \leq \frac{M}{16C^{2/p}C'K\mu(\Psi)^2\log^4(N)}.$$
Hence, by using Lemma 6.2 with $\alpha = \sqrt{\frac{C'K\mu(\Psi)^2\log^4(N)}{M}}$, $\beta = C$, $\gamma = 2$, $p_0 = 2$, and $p_1 = \frac{M}{16C^{2/p}C'K\mu(\Psi)^2\log^4(N)}$, we obtain the following tail bound for $u \in [\sqrt{2}, \sqrt{p_1}]$:
$$P\left[\|B_1\|_K \geq e^{1/2}\sqrt{\frac{C'K\mu(\Psi)^2\log^4(N)}{M}}\,u\right] \leq Ce^{-u^2/2}.$$
If we pick $\delta < 1$ such that
$$e^{1/2}\sqrt{\frac{C'K\mu(\Psi)^2\log^4(N)}{M}}\,u \leq \frac{\delta}{2} \qquad (15)$$
and $u$ such that
$$Ce^{-u^2/2} \leq \frac{\eta}{2} \;\Leftrightarrow\; u \geq \sqrt{2\log(2C\eta^{-1})},$$
then we have our required tail bound of $P[\|B_1\|_K > \delta] \leq \eta/2$. First, observe that Equation (15) is equivalent to having
$$M \geq \frac{CK\mu(\Psi)^2\log^4(N)\log(\eta^{-1})}{\delta^2}.$$
Also, because of the limited range of values $u$ can take (i.e., $u \in [\sqrt{2}, \sqrt{p_1}]$), we require that
$$\sqrt{2\log(2C\eta^{-1})} \leq \sqrt{\frac{M}{16C^{2/p}C'K\mu(\Psi)^2\log^4(N)}} = \sqrt{p_1} \;\Leftrightarrow\; M \geq CK\mu(\Psi)^2\log^4(N)\log(\eta^{-1}),$$
which, together with the earlier condition on $M$, completes the proof.
6.2 RIP with Gaussian feed-forward vectors
In this appendix we extend the RIP analysis of Appendix 6.1 to the case when $z$ is chosen to be a Gaussian i.i.d. vector, as presented in Theorem 4.1. It is unfortunate that with the additional randomness in the feed-forward vector, the same proof procedure as in Theorem 3.1 cannot be used. In the proof of Theorem 3.1, we showed that the random variable $\|Z_1\|_K$ has $p$-th moments that scale like $\alpha\beta^{1/p}p^{1/2}$ (through Lemma 6.1) for a range of $p$, which suggests that it has a sub-gaussian tail (i.e., $P[\|Z_1\|_K > u] \leq Ce^{-u^2/2}$) for a range of deviations $u$. We then used this tail bound to bound the probability that $\|Z_1\|_K$ exceeds a fixed conditioning $\delta$. With Gaussian uncertainties in the feed-forward vector $z$, Lemma 6.1 will not yield the required sub-gaussian tail but instead gives us moments estimates that result in sub-optimal scaling of $M$ with respect to $N$. Therefore, we will instead follow the proof procedure of Theorem 16 from (Tropp et al., 2009) that will yield the better measurement rate given in Theorem 4.1.
Let us begin by recalling a few notations from the proof of Theorem 3.1 and by introducing further notations that will simplify our exposition later. First, recall that we let $X_l^H$ be the $l$-th row of $F\Psi$. Thus, the $l$-th row of our matrix of interest $\hat{A} = \tilde{Z}F\Psi$ is $\tilde{z}_l X_l^H$ where $\tilde{z}_l$ is the $l$-th diagonal entry of the diagonal matrix $\tilde{Z}$. Whereas before, $\tilde{z}_l = \frac{1}{\sqrt{M}}$ for any $l = 1, \cdots, M$, here it will be a random variable. To understand the resulting distribution of $\tilde{z}_l$, first note that for the connectivity matrix $W$ to be real, we need to assume that the second $\frac{M}{2}$ columns of $U$ are complex conjugates of the first $\frac{M}{2}$ columns. Thus, we can write $U = [U_R\,|\,U_R] + j[U_I\,|\,-U_I]$, where $U_R, U_I \in \mathbb{R}^{M\times\frac{M}{2}}$. Because $U^H U = I$, we can deduce that $U_R^T U_I = 0$ and that the $\ell_2$ norms of the columns of both $U_R$ and $U_I$ are $\frac{1}{\sqrt{2}}$.4

With these matrices $U_R, U_I$, let us re-write the random vector $\tilde{z}$ to illustrate its structure. Consider the matrix $\hat{U} := [U_R\,|\,U_I] \in \mathbb{R}^{M\times M}$, which is a scaled unitary matrix (because we can check that $\hat{U}^T\hat{U} = \frac{1}{2}I$). Next, consider the random vector $\hat{z} := \hat{U}^T z$. Because $\hat{U}$ is (scaled) unitary and $z$ is composed of i.i.d. zero-mean Gaussian random variables of variance $\frac{1}{M}$, the entries of $\hat{z}$ are also i.i.d. zero-mean Gaussian random variables, but now with variance $\frac{1}{2M}$. Then, from our definition of $U$ in terms of $U_R$ and $U_I$, for any $l \leq \frac{M}{2}$, we have $\tilde{z}_l = \hat{z}_l - j\hat{z}_{l+\frac{M}{2}}$ and for $l > \frac{M}{2}$, we have $\tilde{z}_l = \hat{z}_{l-\frac{M}{2}} + j\hat{z}_l$. This clearly shows that each of the first $\frac{M}{2}$ entries of $\tilde{z}$ is made up of 2 i.i.d. random variables (one being the real component, the other imaginary), and that the other $\frac{M}{2}$ entries are just complex conjugates of the first $\frac{M}{2}$. Because of this, for $l \leq \frac{M}{2}$, $|\tilde{z}_l|^2 = |\tilde{z}_{l+\frac{M}{2}}|^2 = \hat{z}_l^2 + \hat{z}_{l+\frac{M}{2}}^2$ is the sum of squares of 2 i.i.d. Gaussian random variables.

From the proof of Theorem 3.1, we also denoted
$$Z := \hat{A}^H\hat{A} - I = \left(\sum_{l=1}^{M/2}|\tilde{z}_l|^2 X_l X_l^H - \frac{1}{2}I\right) + \left(\sum_{l=1}^{M/2}|\tilde{z}_l|^2 X_l X_l^H - \frac{1}{2}I\right)^* =: Z_1 + Z_2.$$
It is again easy to check that $E\left[\sum_{l=1}^{M/2}|\tilde{z}_l|^2 X_l X_l^H\right] = E\left[\left(\sum_{l=1}^{M/2}|\tilde{z}_l|^2 X_l X_l^H\right)^*\right] = \frac{1}{2}I$. Finally, $\hat{A}$ has RIP conditioning $\delta$ whenever $\|Z\|_K \leq \delta$, with $\|Z\|_K := \sup_{y\ \text{is}\ K\text{-sparse}}\frac{y^H Z y}{y^H y}$.

4 This can be shown by writing $U^H U = \left([U_R\,|\,U_R]^T - j[U_I\,|\,-U_I]^T\right)\left([U_R\,|\,U_R] + j[U_I\,|\,-U_I]\right)$, expanding the product into its real and imaginary block components, and then equating the result to $I + j0$, from which we arrive at our conclusion.
Before moving on to the proof, we first present a lemma regarding the random sequence $|\tilde{z}_l|^2$ that will be useful in the sequel.
Lemma 6.3. Suppose for $l = 1, \cdots, \frac{M}{2}$, $|\tilde{z}_l|^2 = \hat{z}_l^2 + \hat{z}_{l+M/2}^2$ where $\hat{z}_l$ for $l = 1, \cdots, M$ is a sequence of i.i.d. zero-mean Gaussian random variables of variance $\frac{1}{2M}$. Also suppose that $\eta \leq 1$ is a fixed probability. For the random variable $\max_{l=1,\cdots,M/2}|\tilde{z}_l|^2$, we have the following bounds on the expected value and tail probability of this extreme value:
$$E\left[\max_{l=1,\cdots,M/2}|\tilde{z}_l|^2\right] \leq \frac{1}{M}\left(\log\frac{C_1 M}{2} + 1\right), \qquad (16)$$
$$P\left[\max_{l=1,\cdots,M/2}|\tilde{z}_l|^2 > \frac{C_2\log(C_2'M\eta^{-1})}{M}\right] \leq \eta. \qquad (17)$$
Proof. To ease notation, every index $l$ used as a variable for a maximization will be taken over the set $l = 1, \ldots, \frac{M}{2}$ without explicitly writing the index set. To calculate $E[\max_l|\tilde{z}_l|^2]$, we use the following result that allows us to bound the expected value of a positive random variable by its tail probability (see Proposition 6.1 of (Rauhut, 2010)):
$$E\left[\max_l|\tilde{z}_l|^2\right] = \int_0^\infty P\left[\max_l|\tilde{z}_l|^2 > u\right]du. \qquad (18)$$
Using the union bound, we have the estimate $P[\max_l|\tilde{z}_l|^2 > u] \leq \frac{M}{2}P[|\tilde{z}_1|^2 > u]$ (since the $|\tilde{z}_l|^2$ are identically distributed). Now, because $|\tilde{z}_1|^2$ is a sum of squares of two Gaussian random variables and thus is a (generalized) $\chi^2$ random variable with 2 degrees of freedom (which we shall denote by $\chi_2$),5 we have
$$P\left[|\tilde{z}_1|^2 > u\right] = P[\chi_2 > 2Mu] = \frac{1}{\Gamma(1)}e^{-\frac{2Mu}{2}} = C_1 e^{-Mu},$$
where $\Gamma(\cdot)$ is the Gamma function and the $2Mu$ appears instead of $u$ in the exponential because of the standardization of the Gaussian random variables (initially of variance $\frac{1}{2M}$).
To proceed, we break the integral in (18) into 2 parts. To do so, notice that if $u < \frac{1}{M}\log\frac{C_1 M}{2}$, then the trivial upper bound of $P[\max_l|\tilde{z}_l|^2 > u] \leq 1$ is a better estimate than $\frac{C_1 M}{2}e^{-Mu}$. In other words, our estimate for the tail bound of $\max_l|\tilde{z}_l|^2$ is not very good for small $u$ but gets better with increasing $u$. Therefore, we have
$$E\left[\max_l|\tilde{z}_l|^2\right] \leq \int_0^{\frac{1}{M}\log(\frac{C_1 M}{2})}1\,du + \int_{\frac{1}{M}\log(\frac{C_1 M}{2})}^{\infty}\frac{C_1 M}{2}e^{-Mu}\,du = \frac{1}{M}\log\left(\frac{C_1 M}{2}\right) + \frac{C_1}{2}e^{-\log(\frac{C_1 M}{2})} = \frac{1}{M}\left(\log\left(\frac{C_1 M}{2}\right) + 1\right).$$
This is the bound in expectation that we seek for in Equation (18).

5 The pdf of a $\chi^2$ random variable $\chi_q$ with $q$ degrees of freedom is given by $p(x) = \frac{1}{2^{q/2}\Gamma(q/2)}x^{q/2-1}e^{-x/2}$. Therefore, its tail probability can be obtained by integration: $P[\chi_q > u] = \int_u^\infty p(x)dx$.
In the second part of the proof that follows, $C, C'$ denote universal constants. Essentially, we will want to apply Lemma 6.2 that is used in Appendix 6.1 to obtain our tail bound. In the lemma, the tail bound of a random variable $X$ can be estimated once we know the moments of $X$. Therefore, we require the moments of the random variable $\max_l|\tilde{z}_l|^2$. For this, for any $p > 0$, we use the following simple estimate:
$$E\left[\max_l|\tilde{z}_l|^{2p}\right] \leq \frac{M}{2}\max_l E\left[|\tilde{z}_l|^{2p}\right] = \frac{M}{2}E\left[|\tilde{z}_1|^{2p}\right], \qquad (19)$$
where the first step comes from writing the expectation as an integral of the cumulative distribution (as seen in Equation (18)) and taking the union bound, and the second step comes from the fact that the $|\tilde{z}_l|^2$ are identically distributed. Now, $|\tilde{z}_1|^2$ is a sub-exponential random variable since it is a sum of squares of Gaussian random variables (Vershynin, 2012).6 Therefore, for any $p > 0$, its $p$-th moment can be bounded by
$$\left(E\left[|\tilde{z}_1|^{2p}\right]\right)^{1/p} \leq \frac{C'}{M}C^{1/p}p,$$
where the division by $M$ comes again from the variance of the Gaussian random variables that make up $|\tilde{z}_1|^2$. Putting this bound with Equation (19), we have the following estimate for the $p$-th moments of $\max_l|\tilde{z}_l|^2$:7
$$\left(E\left[\max_l|\tilde{z}_l|^{2p}\right]\right)^{1/p} \leq \frac{C'}{M}\left(\frac{CM}{2}\right)^{1/p}p.$$
Therefore, by Lemma 6.2 with $\alpha = \frac{C'}{M}$, $\beta = \frac{CM}{2}$, and $\gamma = 1$, we have
$$P\left[\max_l|\tilde{z}_l|^2 > \frac{eC'u}{M}\right] \leq \frac{CM}{2}e^{-u}.$$
By choosing $u = \log\left(\frac{CM}{2}\eta^{-1}\right)$, we have our desired tail bound of
$$P\left[\max_l|\tilde{z}_l|^2 > \frac{C_2\log(C_2'M\eta^{-1})}{M}\right] \leq \eta.$$

6 A sub-exponential random variable is a random variable whose tail probability is bounded by $e^{-Cu}$ for some constant $C$. Thus, a $\chi^2$ random variable is a specific instance of a sub-exponential random variable.
Armed with this lemma, we can now turn our attention to the main proof. As stated earlier, this follows essentially the same form as (Tropp et al., 2009) with the primary difference of including the results from Lemma 6.3. As before, because $P[\|Z\|_K > \delta] \leq P[\|Z_1\|_K > \delta/2] + P[\|Z_2\|_K > \delta/2]$ with $Z_2 = Z_1^*$, we just have to consider bounding the tail bound $P[\|Z_1\|_K > \delta/2]$. This proof differs from that in Appendix 6.1 in that here, we will first show that $E[\|Z_1\|_K]$ is small when $M$ is large enough and then show that $Z_1$ does not differ much from $E[\|Z_1\|_K]$ with high probability.

7 We remark that this bound gives a worse estimate for the expected value than that calculated before because of the crude bound given by Equation (19).
Expectation
In this section, we will show that $E[\|Z_1\|_K]$ is small. This will basically follow from Lemma 6.1 in Appendix 6.1 and Equation (16) in Lemma 6.3. To be precise, the remainder of this section is to prove:
Theorem 6.1. Choose any $\delta' \leq \frac{1}{2}$. If $M \geq \frac{C_3 K\mu(\Psi)^2\log^5 N}{\delta'^2}$, then $E[\|Z\|_K] \leq \delta'$.
Proof. Again, $C$ is some universal constant that may not be the same from line to line. We follow the same symmetrization step found in the proof in Appendix 6.1 to arrive at:
$$E := E[\|Z_1\|_K] \leq 2\,E\left[E\left[\left\|\sum_{l=1}^{M/2}\xi_l|\tilde{z}_l|^2 X_l X_l^H\right\|_K\,\Big|\,\{w_l\},\tilde{z}\right]\right],$$
where the outer expectation is over the Rademacher sequence $\{\xi_l\}$ and the inner expectation is over the random "frequencies" $\{w_l\}$ and feed-forward vector $\tilde{z}$. As before, for $l = 1, \cdots, \frac{M}{2}$, we set $Y_l = \tilde{z}_l^* X_l$. Observe that by definition $\kappa := \max_{l=1,\cdots,M/2}\|Y_l\|_\infty = \max_l|\tilde{z}_l|\|X_l\|_\infty$ and thus is a random variable. We then use Lemma 6.1 with $p = 1$ to get
$$\begin{aligned}
E &\leq 2C\sqrt{K}\log(100K)\sqrt{\log(4N)\log(5M)}\;E\left[\kappa\sqrt{\left\|\sum_{l=1}^{M/2}|\tilde{z}_l|^2 X_l X_l^H\right\|_K}\right]\\
&\leq \sqrt{4CK\log^4(N)E[\kappa^2]}\;\sqrt{E\left[\left\|\sum_{l=1}^{M/2}|\tilde{z}_l|^2 X_l X_l^H\right\|_K\right]}\\
&\leq \sqrt{4CK\log^4(N)E[\kappa^2]}\;\sqrt{E + \frac{1}{2}},
\end{aligned} \qquad (20)$$
where the second line uses the Cauchy-Schwarz inequality for expectations and the third line uses the triangle inequality. Again, to get to $\log^4 N$ in the second line, we used our assumption that $N \geq M$, $N \geq K$ and $N \geq O(1)$ in Theorem 4.1. It therefore remains to calculate $E[\kappa^2]$. Now, $\kappa = \max_l|\tilde{z}_l|\|X_l\|_\infty \leq \max_l|\tilde{z}_l|\max_l\|X_l\|_\infty$. First, we have $\max_l\|X_l\|_\infty = \max_{l,n}|\langle F_l,\Psi_n\rangle| \leq \mu(\Psi)$. Next, (16) in Lemma 6.3 tells us that $E\left[\max_{l=1,\cdots,M/2}|\tilde{z}_l|^2\right] \leq \frac{1}{M}\left(\log\frac{C_1 M}{2}+1\right)$. Thus, we have $E[\kappa^2] \leq \frac{\mu(\Psi)^2}{M}\left(\log\frac{C_1 M}{2}+1\right)$. Putting everything together, we have
$$E = E[\|Z_1\|_K] \leq \sqrt{\frac{CK\log^4(N)\left(\log\left(\frac{C_1 M}{2}\right)+1\right)\mu(\Psi)^2}{M}}\sqrt{E + \frac{1}{2}}.$$
Now, the above can be written as $E \leq a\sqrt{E + \frac{1}{2}}$, where $a = \sqrt{\frac{CK\log^4(N)\left(\log\left(\frac{C_1 M}{2}\right)+1\right)\mu(\Psi)^2}{M}}$. By squaring it, rearranging the terms and completing the squares, we have $E \leq \frac{a^2}{2} + a\sqrt{\frac{1}{2}+\frac{a^2}{4}}$. By supposing $a \leq \frac{1}{2}$, this can be simplified as $E \leq a$. To conclude, let us choose $M$ such that $a \leq \delta'$ where $\delta' \leq \frac{1}{2}$ is our pre-determined conditioning (which incidentally fulfills our previous assumption that $a \leq \frac{1}{2}$). By applying the formula for $a$, we have that if $M \geq \frac{C_3 K\mu(\Psi)^2\log^5(N)}{\delta'^2}$, then $E \leq \delta'$.
Tail Probability
To give a probability tail bound estimate to $Z_1$, we use the following lemma found in (Tropp et al., 2009; Rauhut, 2010):
Lemma 6.4. Suppose $Y_l$ for $l = 1, \cdots, M$ are independent, symmetric random variables such that $\|Y_l\|_K \leq \zeta < \infty$ almost surely. Let $Y = \sum_{l=1}^{M}Y_l$. Then for any $u, t > 1$, we have
$$P\left[\|Y\|_K > C\left(uE[\|Y\|_K] + t\zeta\right)\right] \leq e^{-u^2} + e^{-t}.$$
The goal of this section is to prove:
Theorem 6.2. Pick any $\delta \leq \frac{1}{2}$, and suppose $N^{-\log^4(N)} \leq \eta \leq \frac{1}{e}$. Suppose $M \geq \frac{C_4 K\mu(\Psi)^2\log^5 N\log\eta^{-1}}{\delta^2}$, then $P[\|Z_1\|_K > \delta] \leq 8\eta$.
Proof. To use Lemma 6.4, we want $Y_l$ to look like the summands of
$$Z_1 = \sum_{l=1}^{M/2}\left(|\tilde{z}_l|^2 X_l X_l^H - E\left[|\tilde{z}_l|^2 X_l X_l^H\right]\right).$$
However, this poses several problems. First, they are not symmetric8 and thus, we need to symmetrize it by defining
$$\tilde{Y}_l = |\tilde{z}_l|^2 X_l X_l^H - |\tilde{z}_l'|^2 X_l'(X_l')^H \sim \xi_l\left(|\tilde{z}_l|^2 X_l X_l^H - |\tilde{z}_l'|^2 X_l'(X_l')^H\right),$$
where $\tilde{z}', X_l'$ are independent copies of $\tilde{z}$ and $X_l$ respectively, and $\xi_l$ is an independent Rademacher sequence. Here, the relation $X \sim Y$ for two random variables $X, Y$ means that $X$ has the same distribution as $Y$. To form $\tilde{Y}_l$, what we have done is take each summand of $Z_1$ and take its difference with an independent copy of itself. Because $\tilde{Y}_l$ is symmetric, adding a Rademacher sequence does not change its distribution and this sequence is only introduced to resolve a technicality that will arise later on. If we let $\tilde{Y} := \sum_{l=1}^{M/2}\tilde{Y}_l$, then the random variables $\tilde{Y}$ (symmetrized) and $Z_1$ (un-symmetrized) are related via the following estimates (Rauhut, 2010):
$$E\left[\|\tilde{Y}\|_K\right] \leq 2E\left[\|Z_1\|_K\right], \qquad (21)$$
$$P\left[\|Z_1\|_K > 2E\left[\|Z_1\|_K\right] + u\right] \leq 2P\left[\|\tilde{Y}\|_K > u\right]. \qquad (22)$$
However, a second condition imposed on $Y_l$ in Lemma 6.4 is that $\|Y_l\|_K \leq \zeta < \infty$ almost surely. Because of the unbounded nature of the Gaussian random variables $\tilde{z}_l$ and $\tilde{z}_l'$ in $\tilde{Y}_l$, this condition is not met. Therefore, we need to define a $Y_l$ that is conditioned on the event that these Gaussian random variables are bounded. To do so, define the following event:
$$F = \left\{\max\left[\max_l|\tilde{z}_l|^2, \max_l|\tilde{z}_l'|^2\right] \leq \frac{C_2\log(C_2'M\eta^{-1})}{M}\right\}.$$
Using Equation (17) in Lemma 6.3, we can calculate $P[F^c]$, where $F^c$ is the complementary event of $F$:
$$P[F^c] = P\left[\max\left[\max_l|\tilde{z}_l|^2, \max_l|\tilde{z}_l'|^2\right] > \frac{C_2\log(C_2'M\eta^{-1})}{M}\right] \leq P\left[\max_l|\tilde{z}_l|^2 > \frac{C_2\log(C_2'M\eta^{-1})}{M}\right] + P\left[\max_l|\tilde{z}_l'|^2 > \frac{C_2\log(C_2'M\eta^{-1})}{M}\right] \leq 2\eta.$$

8 A random variable $X$ is symmetric if $X$ and $-X$ have the same distribution.
Conditioned on event $F$, the $\|\cdot\|_K$ norm of $\tilde{Y}_l$ is well-bounded:
$$\begin{aligned}
\left\|\tilde{Y}_l\right\|_K &= \left\||\tilde{z}_l|^2 X_l X_l^H - |\tilde{z}_l'|^2 X_l'(X_l')^H\right\|_K \leq 2\max\left[\max_l|\tilde{z}_l|^2, \max_l|\tilde{z}_l'|^2\right]\left\|X_l X_l^H\right\|_K\\
&= \frac{2C_2\log(C_2'M\eta^{-1})}{M}\sup_{y\ \text{is}\ K\text{-sparse}}\frac{y^H X_l X_l^H y}{y^H y}\\
&\leq \frac{2C_2\log(C_2'M\eta^{-1})}{M}\sup_{y\ \text{is}\ K\text{-sparse}}\|X_l\|_\infty^2\frac{\|y\|_1^2}{\|y\|_2^2}\\
&\leq \frac{2KC_2\log(C_2'M\eta^{-1})}{M}\max_l\|X_l\|_\infty^2 \leq \frac{CK\mu(\Psi)^2\log(C_2'M\eta^{-1})}{M} := \zeta,
\end{aligned}$$
where in the last line we used the fact that the ratio between the $\ell_1$ and $\ell_2$ norms of a $K$-sparse vector is $K$, and the estimate we derived for $\max_l\|X_l\|_\infty^2$ in Appendix 6.1.
We now define a new random variable that is a truncated version of $\tilde{Y}_l$ which takes for value 0 whenever we fall under event $F^c$, i.e.,
$$Y_l := \tilde{Y}_l\,\mathbb{I}_F = \xi_l\left(|\tilde{z}_l|^2 X_l X_l^H - |\tilde{z}_l'|^2 X_l'(X_l')^H\right)\mathbb{I}_F,$$
where $\mathbb{I}_F$ is the indicator function of event $F$. If we define $Y = \sum_{l=1}^{M/2}Y_l$, then the random variables $Y$ (truncated) and $\tilde{Y}$ (un-truncated) are related by (Tropp et al., 2009) (see also Lemma 1.4.3 of (De La Peña & Giné, 1999))
$$P\left[\|\tilde{Y}\|_K > u\right] \leq P\left[\|Y\|_K > u\right] + P[F^c]. \qquad (23)$$
When $\tilde{z}, \tilde{z}', X_l, X_l'$ are held constant so only the Rademacher sequence $\xi_l$ is random, then the contraction principle (Tropp et al., 2009; Ledoux & Talagrand, 1991) tells us that $E[\|Y\|_K] \leq E[\|\tilde{Y}\|_K]$. Note that the sole reason for introducing the Rademacher sequences is for this use of the contraction principle. As this holds point-wise for all $\tilde{z}, \tilde{z}', X_l, X_l'$, we have
$$E[\|Y\|_K] \leq E\left[\|\tilde{Y}\|_K\right]. \qquad (24)$$
We now have all the necessary ingredients to apply Lemma 6.4. First, by choosing $\delta' \leq \frac{1}{2}$, from Theorem 6.1, we have that $E[\|Z\|_K] \leq \delta'$ whenever $M \geq \frac{C_3 K\mu(\Psi)^2\log^5 N}{\delta'^2}$. Thus, by chaining (24) and (21), we have
$$E[\|Y\|_K] \leq E[\|\tilde{Y}\|_K] \leq 2E[\|Z_1\|_K] \leq 2\delta'.$$
Also, with this choice of $M$, we have
$$\zeta = \frac{CK\mu(\Psi)^2\log(C_2'M\eta^{-1})}{M} \leq \frac{C\delta'^2\log(C_2'M\eta^{-1})}{\log^5 N}.$$
Using these estimates for $\zeta$ and $E[\|Y\|_K]$, and choosing $u = \sqrt{\log\eta^{-1}}$ and $t = \log\eta^{-1}$, Lemma 6.4 says that
$$P\left[\|Y\|_K > C\left(2\delta'\sqrt{\log\eta^{-1}} + \frac{C\delta'^2\log(C_2'M\eta^{-1})\log\eta^{-1}}{\log^5 N}\right)\right] \leq 2\eta.$$
Then, using the relation between the tail probabilities of $Y$ and $\tilde{Y}$ (23) together with our estimate for $P[F^c]$, we have
$$P\left[\|\tilde{Y}\|_K > C\left(2\delta'\sqrt{\log\eta^{-1}} + \frac{C\delta'^2\log(C_2'M\eta^{-1})\log\eta^{-1}}{\log^5 N}\right)\right] \leq 2\eta + P[F^c] \leq 4\eta.$$
Finally, using the relation between the tail probabilities of $\tilde{Y}$ and $Z$ (22), we have
$$P\left[\|Z_1\|_K > 2\delta' + 2C'\delta'\sqrt{\log\eta^{-1}} + \frac{CC'\delta'^2\log(C_2'M\eta^{-1})\log\eta^{-1}}{\log^5 N}\right] \leq 8\eta,$$
where we used the fact that $E[\|Z_1\|_K] \leq \delta'$. Then, for a pre-determined conditioning $\delta \leq \frac{1}{2}$, pick $\delta' = \frac{\delta}{3C''\sqrt{\log\eta^{-1}}}$ for a constant $C''$ which will be chosen appropriately later. With this choice of $\delta'$ and with our assumptions that $\delta \leq \frac{1}{2}$ and $\eta \leq \frac{1}{e}$, the three terms in the tail bound become
$$\begin{aligned}
2\delta' &= \frac{2\delta}{3C''\sqrt{\log\eta^{-1}}} \leq \frac{1}{C''}\frac{\delta}{3},\\
2C'\delta'\sqrt{\log\eta^{-1}} &= \frac{2C'}{C''}\frac{\delta}{3},\\
\frac{CC'\delta'^2\log(C_2'M\eta^{-1})\log\eta^{-1}}{\log^5 N} &= \frac{CC'\delta^2\left(\log(C_2'M)+\log\eta^{-1}\right)}{9(C'')^2\log^5 N} \leq \frac{CC'\left(\log(C_2'M)+\log\eta^{-1}\right)}{3(C'')^2\log^5 N}\frac{\delta}{3}.
\end{aligned}$$
As for the last term, if $\eta \geq \frac{1}{C_2'M}$, then $\frac{CC'(\log(C_2'M)+\log\eta^{-1})}{3(C'')^2\log^5 N} \leq \frac{2CC'\log(C_2'M)}{3(C'')^2\log^5 N} \leq \frac{2CC'}{3(C'')^2}$ (where we further supposed that $N \geq O(1)$). If $N^{-\log^4 N} \leq \eta \leq \frac{1}{C_2'M}$ (where the lower bound is from the theorem assumptions), then $\frac{CC'(\log(C_2'M)+\log\eta^{-1})}{3(C'')^2\log^5 N} \leq \frac{2CC'\log\eta^{-1}}{3(C'')^2\log^5 N} \leq \frac{2CC'}{3(C'')^2}$. By choosing $C''$ appropriately large, we then have
$$P\left[\|Z_1\|_K > \frac{\delta}{3} + \frac{\delta}{3} + \frac{\delta}{3}\right] \leq 8\eta.$$
Putting the formula for $\delta'$ into $M \geq \frac{C_3 K\mu(\Psi)^2\log^5 N}{\delta'^2}$ completes the proof.
6.3 Derivation of recovery bound for infinite length inputs
In this appendix we derive the bound in Equation (11) of the main text. The approach
we take is to bound the individual components of Equation (4) of the main text. As
the noise term due to noise in the inputs is unaffected, we will bound the noise term
due to the unrecovered signal (the first term in Equation (4) of the main text) by the
component of the input history that is beyond the attempted recovery, and we will bound
the signal approximation term (the second term in Equation (4) of the main text) by the
quality of the signal recovery possible in the attempted recovery length. In this way
we can observe how different properties of the system and input sequence affect signal
recovery.
To bound the first term in Equation (4) of the main text (i.e., the omission errors due
to inputs beyond the recovery window), we first write the current state at any time N ∗
as
$$x[N^*] = \sum_{n=0}^{N^*}W^{N^*-n}zs[n].$$
We only wish to recover the past $N \leq N^*$ time steps, so we break up the summation into components of the current state due to "signal" (i.e., signal we attempt to recover) and "noise" (i.e., older signal we omit from the recovery):
$$x[N^*] = \sum_{n=N^*-N+1}^{N^*}W^{N^*-n}zs[n] + \sum_{n=0}^{N^*-N}W^{N^*-n}zs[n] = \sum_{n=N^*-N+1}^{N^*}W^{N^*-n}zs[n] + \epsilon_2 = As + \epsilon_2.$$
From here we can see that the first summation is the matrix multiply $As$ as is discussed in the paper. The second summation here, $\epsilon_2$, essentially acts as an additional noise term in the recovery. We can further analyze the effect of this noise term by understanding that $\epsilon_2$ is bounded for well behaved input sequences $s[n]$ (in fact all that is needed is that the maximum value or the expected value and variance are reasonably bounded) when the eigenvalues of $W$ are of magnitude $q \leq 1$. We can explicitly calculate the worst case scenario bounds on the norm of $\epsilon_2$,
$$\left\|\sum_{n=0}^{N^*-N}W^{N^*-n}zs[n]\right\|_2 \leq \left\|\sum_{n=0}^{N^*-N}U(qD)^{N^*-n}U^{-1}zs[n]\right\|_2 \leq \|U\|_2\left\|\sum_{n=0}^{N^*-N}(qD)^{N^*-n}U^{-1}zs[n]\right\|_2,$$
where $D = \mathrm{diag}(d_1, \ldots, d_M)$ is the diagonal matrix containing the normalized eigenvalues of $W$. If we assume that $z$ is chosen as mentioned in Section 3.2 so that $U^{-1}z = \frac{1}{\sqrt{M}}1$, the eigenvalues of $W$ are uniformly spread around a complex circle of radius $q$, and that $s[n] \leq s_{max}$ for all $n$, then we can bound this quantity as
$$\begin{aligned}
\left\|\sum_{n=0}^{N^*-N}W^{N^*-n}zs[n]\right\|_2 &\leq \frac{s_{max}}{\sqrt{M}}\|U\|_2\left\|\sum_{n=0}^{N^*-N}(qD)^{N^*-n}1\right\|_2\\
&= \frac{s_{max}}{\sqrt{M}}\|U\|_2\left\|\begin{bmatrix}\sum_{n=0}^{N^*-N}q^{N^*-n}d_1^{N^*-n}\\ \vdots\\ \sum_{n=0}^{N^*-N}q^{N^*-n}d_M^{N^*-n}\end{bmatrix}\right\|_2\\
&= \frac{s_{max}}{\sqrt{M}}\|U\|_2\sqrt{\sum_{k=1}^{M}\left|\sum_{n=0}^{N^*-N}q^{N^*-n}d_k^{N^*-n}\right|^2}\\
&\leq s_{max}\|U\|_2\sum_{n=0}^{N^*-N}q^{N^*-n} \leq s_{max}\|U\|_2\frac{q^N - q^{N^*}}{1-q},
\end{aligned}$$
where $d_k$ is the $k$-th normalized eigenvalue of $W$. In the limit of large input signal lengths ($N^* \to \infty$), we have $N^* \gg N$ and so $q^{N^*} \ll q^N$, which leaves the approximate expression
$$\|\epsilon_2\|_2 \leq s_{max}\|U\|_2\frac{q^N}{1-q}.$$
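As a quick numerical sanity check (our own illustration, with an arbitrary small network, decay rate, and bounded uniform inputs), one can accumulate the contribution of the inputs older than $N$ steps directly and compare its norm against $s_{max}\|U\|_2 q^N/(1-q)$:

```python
import numpy as np

rng = np.random.default_rng(2)
M, N, N_star, q, s_max = 20, 50, 2000, 0.95, 1.0

W0, _ = np.linalg.qr(rng.standard_normal((M, M)))   # orthogonal part
W = q * W0                                          # all eigenvalue magnitudes equal q
_, U = np.linalg.eig(W0)
z = (U @ np.ones(M)).real / np.sqrt(M)              # so that U^{-1} z = (1/sqrt(M)) * 1

s = rng.uniform(0.0, s_max, N_star + 1)             # bounded inputs, |s[n]| <= s_max

# State contribution of inputs older than N steps: sum_{n=0}^{N*-N} W^(N*-n) z s[n].
v = np.zeros(M)
for n in range(N_star - N + 1):
    v = W @ v + z * s[n]
eps2 = np.linalg.matrix_power(W, N) @ v

bound = s_max * np.linalg.norm(U, 2) * q**N / (1.0 - q)
print(np.linalg.norm(eps2), "<=", bound)
```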
To bound the second term in Equation (4) of the main text (i.e., the signal approximation errors due to imperfect recovery), we must characterize the possible error between the signal (which is $K$-sparse) and the approximation to the signal with the $K^*$ largest coefficients. In the worst case scenario, there are $K - K^* + 1$ coefficients that cannot be guaranteed to be recovered by the RIP conditions, and these coefficients all take the maximum value $s_{max}$. In this case, we can bound the signal approximation error as stated in the main text:
$$\frac{\beta}{\sqrt{K^*}}\|s - s_{S^*}\|_1 \leq \frac{\beta}{\sqrt{K^*}}\sum_{n=K^*+1}^{K}|q^n s_{max}| = \frac{\beta s_{max}}{\sqrt{K^*}}\cdot\frac{q^{K^*} - q^K}{1-q}.$$
In the case where noise is present, we can also bound the total power of the noise term,
$$\alpha\left\|\sum_{k=0}^{N+N^*}W^k z\epsilon[k]\right\|_2,$$
using similar steps. Taking $\epsilon_{max}$ as the largest possible input noise into the system, we obtain the bound
$$\alpha\left\|\sum_{k=0}^{N+N^*}W^k z\epsilon[k]\right\|_2 < \alpha\epsilon_{max}\|U\|_2\frac{q}{1-q}.$$
References
Bajwa, W. U., Sayeed, A. M., & Nowak, R. (2009). A restricted isometry property for
structurally-subsampled unitary matrices. In 47th Annual Allerton Conference on
Communication, Control, and Computing, pp. 1005–1012. IEEE.
Balavoine, A., Romberg, J., & Rozell, C. J. (2012). Convergence and rate analysis of
neural networks for sparse approximation. IEEE Transactions on Neural Networks
and Learning Systems, 23, 1377–1389.
Balavoine, A., Rozell, C. J., & Romberg, J. (2013). Convergence speed of a dynamical
system for sparse recovery. IEEE Transactions on Signal Processing, 61, 4259–4269.
Baum, E. B., Moody, J., & Wilczek, F. (1988). Internal representations for associative
memory. Biological Cybernetics, 92, 217–228.
Becker, S., Candes, E. J., & Grant, M. (2011). Templates for convex cone problems with
applications to sparse signal recovery. Mathematical Programming Computation, 3.
Buonomano, D. V. & Maass, W. (2009). State-dependent computations: spatiotemporal
processing in cortical networks. Nature Reviews Neuroscience, 10, 113–125.
Büsing, L., Schrauwen, B., & Legenstein, R. (2010). Connectivity, dynamics, and
memory in reservoir computing with binary and analog neurons. Neural computation,
22, 1272–1311.
Candes, E. J. (2006). Compressive sampling. Proc. Int. Congr. Mathematicians, 3,
1433–1452.
Candes, E. J., Romberg, J., & Tao, T. (2006). Robust uncertainty principles: exact signal
reconstruction from highly incomplete frequency information. IEEE Transactions on
Information Theory, 52, 489–509.
46
Candes, E. J. & Tao, T. (2006). Near-optimal signal recovery from random projections:
Universal encoding strategies? IEEE Transactions on Information Theory, 52, 5406–
5425.
Davenport, M. A., Boufounos, P. T., Wakin, M. B., & Baraniuk, R. G. (2010). Signal
processing with compressive measurements. IEEE J. Sel. Topics Signal Process., 4,
445–460.
De La Peña, V. H. & Giné, E. (1999). Decoupling: From Dependence to Independence.
(Springer Verlag).
Diaconis, P. & Shahshahani, M. (1994). On the eigenvalues of random matrices. Journal
of Applied Probability, 31, 49–62.
Donoho, D. & Tanner, J. (2005). Sparse nonnegative solution of underdetermined linear
equations by linear programming. Proceedings of the National Academy of Sciences
of the United States of America, 102, 9446.
Eftekhari, A., Yap, H. L., Rozell, C. J., & Wakin, M. B. (2012). The restricted isometry
property for random block diagonal matrices. Submitted.
Elad, M., Figueiredo, M., & Ma, Y. (2008). On the role of sparse and redundant representations in image processing. IEEE Proceedings - Special Issue on Applications of
Compressive Sensing & Sparse Representation.
Ganguli, S., Huh, D., & Sompolinsky, H. (2008). Memory traces in dynamical systems.
Proceedings of the National Academy of Sciences, 105, 18970.
Ganguli, S. & Sompolinsky, H. (2010). Short-term memory in neuronal networks
through dynamical compressed sensing. Conference on Neural Information Processing Systems.
47
Ganguli, S. & Sompolinsky, H. (2012). Compressed sensing, sparsity, and dimensionality in neuronal information processing and data analysis. Annual Review of
Neuroscience, 35, 485–508.
Haupt, J., Bajwa, W. U., Raz, G., & Nowak, R. (2010). Toeplitz compressed sensing matrices with applications to sparse channel estimation. IEEE Transactions on
Information Theory, 56, 5862–5875.
Hermans, M. & Schrauwen, B. (2010). Memory in linear recurrent neural networks in
continuous time. Neural Networks, 23, 341–355.
Hu, T., Genkin, A., & Chklovskii, D. B. (2012). A network of spiking neurons for
computing sparse representations in an energy-efficient way. Neural Computation,
24, 2852–2872.
Isely, G., Hillar, C. J., & Sommer, F. T. (2011). Deciphering subsampled data: adaptive
compressive sampling as a principle of brain communication. Proceedings of NIPS.
Jaeger, H. (2001). Short term memory in echo state networks. GMD Report 152 German
National Research Center for Information Technology.
Jaeger, H. & Haas, H. (2004). Harnessing nonlinearity: predicting chaotic systems and
saving energy in wireless communication. Science, 304, 78–80.
Krahmer, F., Mendelson, S., & Rauhut, H. (2012). Suprema of chaos processes and the
restricted isometry property. arXiv preprint arXiv:1207.0235.
Ledoux, M. & Talagrand, M. (1991). Probability in Banach Spaces: isoperimetry and
processes, vol. 23. (Springer).
Legenstein, R. & Maass, W. (2007). Edge of chaos and prediction of computational
performance for neural circuit models. Neural Networks, 20, 323–334.
48
Maass, W., Natschläger, T., & Markram, H. (2002). Real-time computing without stable
states: A new framework for neural computation based on perturbations. Neural
computation, 14, 2531–2560.
Mayor, J. & Gerstner, W. (2005). Signal buffering in random networks of spiking neurons: Microscopic versus macroscopic phenomena. Physical Review E, 72, 051906.
Mongillo, G., Barak, O., & Tsodyks, M. (2008). Synaptic theory of working memory.
Science, 319, 1543–1546.
Olshausen, B. A. & Field, D. (1996). Emergence of simple-cell receptive field properties by learning a sparse code for natural images. Nature, 381, 607–609.
Park, J. Y., Yap, H. L., Rozell, C. J., & Wakin, M. B. (2011). Concentration of measure
for block diagonal matrices with applications to compressive signal processing. IEEE
Transactions on Signal Processing, 59, 5859–5875.
Rauhut, H. (2010). Compressive sensing and structured random matrices. Theoretical
Found. and Numerical Methods for Sparse Recovery, pp. 1–92.
Rhen, M. & Sommer, F. T. (2007). A network that uses few active neurones to code
visual input predicts the diverse shapes of cortical receptive fields. Journal of Computational Neuroscience, 22, 135–146.
Rozell, C. J., Johnson, D. H., Baraniuk, R. G., & Olshausen, B. A. (2010). Sparse
coding via thresholding and local competition in neural circuits. Neural Computation,
20, 2526–2563.
Rudelson, M. & Vershynin, R. (2008). On sparse reconstruction from fourier and gaussian measurements. Comms. Pure and Applied Math., 61, 1025–1045.
Shapero, S., Charles, A. S., Rozell, C., & Hasler, P. (2011). Low power sparse approximation on reconfgurable analog hardware. IEEE Jour. on Emer. and Sel. Top. in Circ.
and Sys., 2, 530–541.
49
Strauss, T., Wustlich, W., & Labahn, R. (2012). Design strategies for weight matrices
of echo state networks. Neural Computation, 24, 3246–3276.
Tropp, J. A., Laska, J. N., Duarte, M. F., Romberg, J. K., & Baraniuk, R. G. (2009). Beyond Nyquist: efficient sampling of sparse bandlimited signals. IEEE Trans. Inform.
Theory, 56.
Vapnik, V. N. & Chervonenkis, A. Y. (1971). On the uniform convergence of relative
frequencies of events to their probabilities. Theory of Probability & Its Applications,
16, 264–280.
Vershynin, R. (2012). Introduction to the non-asymptotic analysis of random matrices.
In Compressed Sensing, Theory and Applications, Y. Eldar & G. Kutyniok, eds.
(Cambridge Univ. Pr.), chap. 5, pp. 210–268.
Wallace, E., Hamid, R. M., & Latham, P. E. (2013). Randomly connected networks
have short temporal memory. Neural Computation, 25, 1408–1439.
White, O. L., Lee, D. D., & Sompolinsky, H. (2004). Short-term memory in orthogonal
neural networks. Physical Review Lett., 92, 148102.
Zhu, M. & Rozell, C. (2013). Visual nonclassical receptive field effects emerge from
sparse coding in a dynamical system. PLoS Computational Biology, 9, e1003191.
50
| 9 |
Checks and Balances: A Low-complexity High-gain
Uplink Power Controller for CoMP
arXiv:1610.08491v1 [] 26 Oct 2016
Fangzhou Chen†§ , Yin Sun†§ , Yiping Qin‡ , and C. Emre Koksal†
† Dept. of ECE, The Ohio State University, Columbus, OH
‡ Huawei Technologies Co., Shanghai, China
§ Co-primary authors
Abstract—Coordinated Multipoint (CoMP) promises substantial throughput gains for next-generation cellular systems. However,
realizing this gain is costly in terms of pilots and backhaul
bandwidth, and may require substantial modifications to physical-layer hardware. Targeting an efficient throughput gain, we develop
a novel coordinated power control scheme for uplink cellular
networks called Checks and Balances (C&B), which checks the
received signal strength of one user and its generated interference
to neighboring base stations, and balances the two. C&B has some
highly attractive advantages: C&B (i) can be implemented easily
in software, (ii) does not require upgrading non-CoMP physical-layer hardware, (iii) allows for fully distributed implementation
for each user equipment (UE), and (iv) does not need extra pilots
or backhaul communications. We evaluate the throughput performance of C&B on an uplink LTE system-level simulation platform,
which is carefully calibrated with Huawei. Our simulation results
show that C&B achieves much better throughput performance,
compared to several widely-used power control schemes.
Index Terms—Uplink power control, coordinated multipoint
(CoMP), LTE, system-level simulation, throughput improvement
I. INTRODUCTION
The next generation of cellular communication, e.g., Long Term
Evolution Advanced (LTE-A) and 5G, is expected to significantly improve the average throughput and cell-edge throughput
for serving user equipments (UEs). One important candidate
technique for achieving such throughput improvement is Coordinated Multipoint (CoMP), which refers to the cooperation
among different Base Stations (BSs).
The promised benefits of CoMP are hard to realize because
of many issues in practical systems [1], [2]. In particular,
the uplink CoMP techniques (such as distributed interference
cancellation/alignment, joint detection, coordinated scheduling)
all require nearby BSs to communicate received signals, control
messages, and channel state information through backhaul
links. In addition, these CoMP techniques are costly in terms
of power and pilot resources, which considerably decreases
the resources allocated for data transmissions. Therefore, the
realized throughput performance is greatly degraded.
Aiming to realize the potential benefits of CoMP in practical
systems, we propose a novel coordinated power control design
for uplink cellular networks. The task of uplink power control
is to make the signal received at the base station sufficiently
This work was supported in part by Huawei, Inc. under Agreement YB
2013110091, National Science Foundation under Grants CNS-1054738 and
CNS-1514260.
TABLE I: Performance of FPC [5], Max Power [6], RLPC [7] and C&B in Macrocell system-level simulations.

                                 FPC     Max Power   RLPC    C&B
Average Throughput (Mbits/s)     8.05    12.01       9.78    12.23
5%-Edge Throughput (Mbits/s)     0.23    0.09        0.22    0.23
Power Efficiency (Mbits/J)       751     6.77        226     387
strong, while keeping the interference generated to nearby base stations from becoming severe. In practice, overly high
and low transmission powers are both harmful. Specifically,
increasing the transmission power of one UE can increase its
throughput, but it causes some strong interference to nearby
cells, which will degrade the throughput of other UEs. Hence,
finding the correct balance between a UE’s own performance
and its incurred cost to the other UEs is crucial to achieving satisfactory performance.
In practical uplink cellular networks, each BS receiver experiences interference from hundreds of UEs in neighboring cells. Even if perfect CSI of the signal and
interference channels is available, the optimal power control
problem is strongly NP-hard [8], [9]. To make things worse,
the base station in current systems typically estimates the signal
channel of its served UEs, but the channel coefficients of interfering UEs are mostly unavailable. These practical limitations
make the power control problem even more challenging.
In our research, we develop a low-complexity Coordinated
power control scheme that provides significant throughput
gains, with minimum cost and modifications. To that end, the
following are the contributions of this paper:
• We develop a novel coordinated power control scheme,
named Checks and Balances (C&B). C&B requires very
little information, including the large-scale path loss from
one UE to several nearby BSs, the coarse power level of co-channel interference, and the throughput vs. SINR curve
of Adaptive Modulation and Coding (AMC). Based on
this information, C&B checks the SNR of one UE and its
generated INR to nearby BSs, and balances the two.
• C&B has some highly attractive advantages: C&B (i) can
be implemented easily in software, (ii) does not require upgrading non-CoMP physical-layer hardware, (iii) allows
for fully distributed implementation for each UE, and (iv)
does not need extra pilots or backhaul communications.
• We evaluate the throughput performance of C&B on a
system-level simulation platform for LTE uplink, which
is carefully calibrated with Huawei. As shown in Table
I, C&B increases the average throughput by 51.9% over
Fractional Power Control (FPC) [5] and 21.5% over
Reverse Link Power Control (RLPC) [7], and achieves
similar cell-edge throughput with FPC and RLPC. Compared to Max Power Control [6], C&B increases the
average throughput and cell-edge throughput by 1.8% and
156%, respectively, together with greatly improved power
efficiency.
We expect C&B to achieve even better throughput performance when working with physical-layer CoMP techniques,
which will be considered in future work.
Related Studies: Non-coordinated power control is standardized in 3GPP protocols [3] and has attracted vast research interest [5], [6], [10]–[15]. There are three mainstream schemes:
1) Full Compensation Power Control (FCPC) [10] allocates
transmission power to one UE by making full compensation
of its large-scale path loss such that all UEs have the same
received signal strength, which results in poor performance
in per-cell average throughput and inter-cell interference management. 2) The Max Power scheme [6] lets all UEs transmit at
their maximum allowable power. It provides high average per-cell throughput, but performs poorly in power efficiency and
throughput of UEs at the cell edge. 3) Fractional Power Control
(FPC) [5] is currently the most widely adopted scheme [12]–
[15], which allocates transmission power by making fractional
compensation of UEs’ large-scale path losses, such that UEs in
the interior of one cell have stronger received signal strength
than UEs at the cell edge. The key drawback remains in the
unsatisfying average throughput per cell.
Two coordinated power control schemes have been proposed
[7], [16], which make additional compensation for large-scale
path losses from one UE to its neighboring cells. These schemes
are essentially variations of FCPC. Hence, they partially inherit
the drawbacks of FCPC which significantly limits the throughput gain.
II. SYSTEM MODEL AND PROBLEM DESCRIPTION
We consider an LTE uplink multicellular network. In such
network, each UE transmits to its serving cell and meanwhile
generates interference to its neighboring cells. Suppose UE u transmits signal x at power P on a single subcarrier; its received signal y at the serving cell c can be expressed as:

y = √P · h x + Σ_j √Pj · hj xj + n,    (1)
where h and hj denote the instantaneous complex channel gains from UE u and uj (served in neighboring cell cj) to cell c, respectively, and n denotes the experienced noise. Hence the total inter-cell interference experienced by UE u is Σ_j √Pj · hj xj.
Assuming the transmitted signals x and xj have unit variance,
the signal-to-interference-plus-noise ratio (SINR) can thereby
be calculated as:
sinr = P · |h|^2 / ( Σ_j Pj · |hj|^2 + σn^2 ),    (2)

where σn^2 denotes the noise variance.
The power controller decides the transmission power of UEs
across all cells, which heavily affects their SINR. Unlike non-coordinated power control, which only exploits the CSI of UEs
to their serving cells, coordinated power control additionally
utilizes the CSI of UEs to multiple neighboring cells. The
problem we are tackling is to come up with a coordinated power
control design with low complexity and high throughput gain
over all existing solutions.
Besides the strong impact from power controller, the
throughput performance is also influenced by how each cell
picks the modulation and coding scheme (MCS) for active
UEs. In LTE networks, the existing Turbo-coded modulation
techniques are paired with associated bit rate selections to
form 29 different available MCS options [3]. For each SINR
value, one of these MCS options is chosen by an Adaptive
Modulation and Coding (AMC) module. The selected MCS
should provide a sufficient high throughput and meanwhile
guarantee a low decoding error probability. Usually, the block
decoding error rate is required to be less than 10%, which
will be later compensated by hybrid automatic repeat request
(HARQ). The stairs in the throughput curve are due to the AMC
module. In particular, when the SINR is lower than −6.5 dB,
no MCS can decode successfully and hence the throughput is
zero. When the SINR is higher than 18 dB, the maximum MCS
can decode perfectly, achieving a maximum throughput. Hence,
the SINR region for effective AMC selection is [−6.5 dB, 18
dB].
III. CHECKS AND BALANCES: A POWER CONTROL DESIGN
In this section, we present a novel power controller design,
called Checks and Balances (C&B), for uplink cellular systems.
This power controller requires very little information, including
the throughput versus SINR curve of the receiver design, the
large-scale path loss from one UE to several nearby BSs,
and some coarse distribution information of the co-channel
interference in the cellular system. The key idea in C&B is
to cooperatively balance the SNR of a UE and its generated
INR to nearby BSs. The complexity of C&B is very low, and
the throughput gain is huge. One can consider C&B as the
simplest implementation of CoMP, which provides significant
throughput improvement without incurring huge cost in pilots
or backhaul. There is no upgrade of the physical layer design,
except for the change of uplink transmission power.
A. Approximations of SINR and Throughput
C&B can operate in the open-loop mode, in which only large-scale CSI is utilized; small-scale CSI is unavailable due to the lack of instantaneous channel estimation pilots. The large-scale path loss between UE uj and cell c is defined as:

PLj ≜ ( E[ |hj|^2 ] )^{−1},    (3)
Fig. 1: Throughput Curve Approximation.
Fig. 2: Time-average SNR and IoT distributions.
where E[·] denotes the expectation. When we only consider
such large-scale path losses, u’s received signal-to-noise ratio
(SNR) and interference over thermal noise (IoT) at c, and generated interference-to-noise ratio (INR) to cj can be respectively
approximated as:
SNR(P) = PL^{−1} · P / N0,    (4)
IoT = ( N0 + Σ_j Pj · PLj^{−1} ) / N0,    (5)
INRj(P) = PLu→j^{−1} · P / N0,    (6)
where N0 denotes the average noise power and is assumed to
be the same and known for all UEs. We also define PLu→j as
the large-scale path loss from UE u to cell cj . Derived from
Eq. (4) and (5), we can approximate the received SINR of UE
u as:
SINR = SNR(P) / IoT.    (7)
As C&B only acquires large scale CSI and approximated
SINR information, there is no need to use an accurate throughput curve to determine the transmission power. Therefore, we
introduce a piece-wise function to approximate the foregoing
throughput curve:
f(SINR) = min{ Tmax, a · log2(1 + b · SINR) },    (8)
where Tmax = 4.18, a = 0.7035 and b = 0.7041. These
parameters are achieved by curve fitting as shown in Fig. 1.
Note that for different physical layer designs, we can always
find such an approximated throughput function f (SINR). Next,
we use function f (SINR) to evaluate the throughput of one UE
u and the throughput of the UEs interfered by UE u.
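For illustration, the piece-wise approximation (8) can be evaluated with a short Python sketch; the function names and the dB-conversion helper below are illustrative choices rather than part of the scheme, while the constants are the fitted values quoted above.

import math

T_MAX = 4.18       # maximum spectral efficiency (bits/s/Hz) from the curve fit
A = 0.7035         # fitted scale parameter a
B = 0.7041         # fitted SINR scaling parameter b

def f_throughput(sinr_linear):
    # Piece-wise approximation of the AMC throughput curve, Eq. (8)
    return min(T_MAX, A * math.log2(1.0 + B * sinr_linear))

def db_to_linear(x_db):
    return 10.0 ** (x_db / 10.0)

if __name__ == "__main__":
    for sinr_db in (-6.5, 0.0, 10.0, 18.0, 25.0):
        print(f"SINR = {sinr_db:5.1f} dB -> {f_throughput(db_to_linear(sinr_db)):.2f} bits/s/Hz")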
B. Checking the Influence of SNR and INR
Consider an arbitrarily chosen UE u; its uplink throughput can be approximated as

RS(P) = f( SNR(P) / IoTS ),    (9)
where IoTS denotes a prior estimate of the interference experienced by UE u. In fully distributed power control, the instantaneous interference power that UE u will experience during its transmissions is not available at the power controller. Hence, we propose a method to estimate IoTS. In the simulation environment described in Section IV, the distribution of the IoT in the uplink cellular system is illustrated in Fig. 2. We select IoTS to be the 95th percentile of the IoT experienced by an arbitrary UE, which gives IoTS = 9 dB. Note that this statistical distribution can be collected at the BS. In practical applications, we recommend initializing the system with this value and updating it according to the measured IoT distribution. The approximated throughput RS(P) is plotted in Fig. 3a; it increases with the transmission power P.
Consider the UEs that are notably interfered with by UE u; their sum uplink throughput can be approximated as:

RI(P) = Σ_{PLu→j < PLth} f( SNRI / (IoTI + INRj(P)) ).    (10)
Note that we only consider the non-negligible interference generated by UE u. To this end, we set the threshold PLth in (10) as the path loss of an interference link such that the interference power generated by UE u at the maximum power Pmax is stronger than noise, i.e.,

PLth^{−1} · Pmax / N0 > 1,    (11)
where Pmax = 200 mW (23 dBm) [18]. Furthermore, in fully
distributed power control, the received SNR and IoT (excluding
the interference generated by u) of these UEs, denoted by SNRI
and IoTI , are unknown to the power controller of UE u. We
also propose methods to estimate SNRI and IoTI by using
the SINR region [−6.5 dB, 18 dB] in Fig. 1 for effective
AMC selection. Their values are given by SNRI = 24 dB
and IoTI = 5 dB. Due to space limitations, the details for
computing these parameters will be explained in the journal
version. The approximated sum throughput RI (P ) is plotted
in Fig. 3b, which decreases with the transmission power P .
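To make the two approximated throughput terms concrete, the following Python sketch assembles (4)–(6), (9) and (10) for a single UE. The constants IoTS = 9 dB, SNRI = 24 dB and IoTI = 5 dB are the estimates quoted above, whereas the function names, the noise power and the example path-loss values are illustrative assumptions.

import math

IOT_S_DB = 9.0     # prior estimate of the IoT experienced by UE u
SNR_I_DB = 24.0    # assumed received SNR of the interfered UEs
IOT_I_DB = 5.0     # assumed IoT (excluding UE u) of the interfered UEs
T_MAX, A, B = 4.18, 0.7035, 0.7041   # piece-wise throughput fit, Eq. (8)

def db_to_lin(x):
    return 10.0 ** (x / 10.0)

def f(sinr):
    # Eq. (8)
    return min(T_MAX, A * math.log2(1.0 + B * sinr))

def r_s(p_dbm, pl_db, n0_dbm):
    # Approximate own throughput of UE u, Eq. (9): f(SNR(P) / IoT_S)
    snr = db_to_lin(p_dbm - pl_db - n0_dbm)          # Eq. (4) in dB form
    return f(snr / db_to_lin(IOT_S_DB))

def r_i(p_dbm, pl_to_neighbors_db, n0_dbm):
    # Approximate sum throughput of the UEs interfered by u, Eq. (10)
    total = 0.0
    for pl_db in pl_to_neighbors_db:                 # only non-negligible links
        inr = db_to_lin(p_dbm - pl_db - n0_dbm)      # Eq. (6) in dB form
        total += f(db_to_lin(SNR_I_DB) / (db_to_lin(IOT_I_DB) + inr))
    return total

if __name__ == "__main__":
    # Illustrative numbers only: 100 dB serving-cell path loss, two neighbor links.
    print(r_s(10.0, 100.0, -114.0), r_i(10.0, [110.0, 120.0], -114.0))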
C. Balancing the Two
In order to find a proper balance between SNR and INR, we
consider the following weighted sum throughput maximization
problem:
max_{0 ≤ P ≤ Pmax}  RS(P) + ζ · RI(P),    (12)
Fig. 3: Approximated throughput functions for C&B power controller design. (a) Approximated throughput of UE u. (b) Approximated total throughput of UEs that are interfered by UE u. (c) Weighted sum throughput approximation.
where ζ is the weight parameter that adjusts the relative
importance of SNR and INR, which is chosen around 1. Large ζ
(e.g. ζ > 1) indicates more focus on mitigating INR rather than
enhancing SNR, which leads to more conservative transmission
power. This benefits the UEs with poor channel states, owing to the reduced interference, while constraining the transmission power of UEs with good channel states, which prevents them from achieving higher throughput. It works the other way around when we choose a small ζ (e.g. ζ < 1). To suppress strong interference, we initially select ζ = 1.3. As shown in Fig. 3c, the weighted sum throughput is maximized at a unique transmission power, i.e., the balance we choose between SNR and INR.

It is easy to show that Problem (12) is a one-dimensional quasi-convex optimization problem, so its solution can be obtained by solving

RS′(P) + ζ · RI′(P) = 0.    (13)

We use the bisection method to find the solution of (13), with the detailed steps presented in Algorithm 1. This algorithm is easy to implement in software, and the number of iterations required for convergence is no more than 10. We note that Algorithm 1 is fully distributed: each UE can choose its transmission power independently.
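A compact Python sketch of this selection rule is given below. It solves (13) by bisection as in Algorithm 1; the derivative is taken by a central finite difference, which is an implementation choice made here rather than something specified by the scheme, and the surrogate throughput functions in the example are purely illustrative.

def cb_power(r_s, r_i, zeta=1.3, p_min=-10.0, p_max=23.0, eps=0.1, h=1e-3):
    # Checks-and-Balances power selection (dBm): bisection on
    # d/dP [ r_s(P) + zeta * r_i(P) ] = 0, cf. Eq. (13) and Algorithm 1.
    def deriv(p):
        return (r_s(p + h) + zeta * r_i(p + h)
                - r_s(p - h) - zeta * r_i(p - h)) / (2.0 * h)
    l, r = p_min, p_max
    m = p_max
    while r - l >= eps:
        m = 0.5 * (l + r)
        if deriv(m) > 0.0:     # objective still increasing: maximizer is to the right
            l = m
        else:
            r = m
    return m

if __name__ == "__main__":
    import math
    # Toy unimodal surrogate: r_s increasing, r_i decreasing in transmit power (dBm).
    r_s = lambda p: math.log2(1.0 + 10.0 ** ((p - 10.0) / 10.0))
    r_i = lambda p: 3.0 - math.log2(1.0 + 10.0 ** ((p - 15.0) / 10.0))
    print(f"C&B power: {cb_power(r_s, r_i):.2f} dBm")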
IV. PERFORMANCE EVALUATION ON SYSTEM-LEVEL SIMULATION PLATFORM
A. Simulation Platform Configuration
We consider an LTE uplink cellular network [19], where the
BSs are located on a typical hexagonal lattice including 19
BS sites, 3 sectors per site. Each sector is regarded as a cell.
The minimal distance between two neighboring sites is 500
meters (Macrocell) [19]. The UEs are uniformly distributed in
the entire network area. The distance from a UE to a nearest
BS is no smaller than 35 meters [18]. Each UE has a single
antenna, and the BSs are equipped with 2 antennas per sector.
The wireless channel coefficients and BS antenna pattern are
generated by following the SCM model for Urban Macro environments in 3GPP TR 25.996 [18], where 3D antenna pattern
and Rayleigh fading [23] are adopted. We set the maximum
Doppler shift frequency at 7 Hz according to a moving speed of 3 km/h and carrier frequency of 2.5 GHz. The receivers employ non-CoMP minimum mean square error (MMSE) estimation [25]
with interference rejection combination (IRC) techniques [24].
To obtain the channel coefficients, we estimate the pilots, i.e.,
demodulation reference signals (DMRS) [3], with DFT-based
estimation [22]. Adaptive Transmission Bandwidth (ATB) [21]
non-CoMP packet scheduling scheme is implemented. The
frequency bandwidth is 10MHz, and the noise figure of each BS
receiver is 5 dB [3]. We set a uniform penetration loss of 20 dB
[18] for all users. The delay of control signaling is uniformly
set as 6 ms (i.e., 6 time slots) [17]. The wrap around technique
is employed to avoid the border effect. The parameters of our
system level simulation platform are listed in Table II.
We use the proportional-fair (PF) policy [25] in stochastic
network control in which the weight of UE u used in the ATB
scheduler at time-slot t is
(ru[t])^α / (r̄u)^β,    (14)
where ru [t] and r̄u denote the UE u’s potentially achieved data
rate in time slot t and long-term average data rate, respectively.
We set the two associated parameters at: α = 1, β = 1.
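For completeness, the PF weight (14) used by the ATB scheduler is a one-liner; the variable names in this Python sketch are illustrative.

def pf_weight(instant_rate, avg_rate, alpha=1.0, beta=1.0):
    # Proportional-fair weight of a UE, Eq. (14): (r_u[t])^alpha / (r_bar_u)^beta
    return (instant_rate ** alpha) / (avg_rate ** beta)

if __name__ == "__main__":
    # A UE with a high instantaneous rate but low long-term average gets a large weight.
    print(pf_weight(instant_rate=5.0e6, avg_rate=1.0e6))   # -> 5.0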
Algorithm 1: Bisection method for solving (13).
  Given l = −10 dBm, r = Pmax, tolerance ε = 0.1;
  if r − l < ε then
      m := Pmax;
  else
      while r − l ≥ ε do
          m := (l + r)/2;
          if RS′(m) + ζ · RI′(m) > 0 then
              l := m;          // maximizer lies to the right of m
          else
              r := m;          // maximizer lies to the left of m
          end
      end
  end
  return P := m.
TABLE II: SYSTEM-LEVEL SIMULATION PARAMETERS

Parameter                           Setting
Deployment Scenario                 19 BS sites, 3 sectors (cells) per site, wrap-around
Inter-site Distance                 500 m (Macrocell)
System Bandwidth                    10 MHz [50 PRBs, 2 used for control]
Avg. UEs per Cell                   10
UE/BS Antennas                      1/2 per cell
Distance-dependent Path Loss        According to 3GPP 36.814 [19]
Shadowing Standard Deviation        8 dB
Antenna Pattern                     3D
Penetration Loss                    20 dB
Scheduling Decision Delay           6 ms
Target BLER                         10%
Traffic Model                       Fully backlogged queues
Scheduling Algorithm                ATB
Power Control (PC) scheme           FPC, Max Power, RLPC, C&B
Stochastic Network Control Scheme   Proportional Fair (PF)
Link Adaptation                     AMC, based on 3GPP TS 36.213 [3]
BS Receiver Type                    IRC MMSE
Channel Estimation                  DFT-based Estimation
Maximum Doppler Shift               7 Hz
α (PF)                              1
β (PF)                              1
Pmax                                23 dBm
P0^FPC (FPC)                        −87 dBm
κ (FPC)                             0.8
P0^RL (RLPC)                        −102 dBm
φ (RLPC)                            0.8
We compare C&B with three reference policies. The first is the widely used fractional power control (FPC) scheme, which determines the transmission power P^FPC of UE u for each resource block by [3]:

P^FPC = min( Pmax, P0^FPC + κ · PL ),    (15)
where Pmax is the maximum power constraint, P0^FPC is the default transmission power, and PL is the large-scale path loss from UE u to its serving cell. The values of the parameters are Pmax = 23 dBm, P0^FPC = −87 dBm and κ = 0.8. The second policy is Max Power, which sets all UEs at their maximum allowable transmission power Pmax. The third reference policy is the coordinated Reverse Link Power Control (RLPC) scheme, where the transmission power P^RL is decided by [7]:

P^RL = min( Pmax, P0^RL + φ · PL + (1 − φ) · PLmin ),    (16)

where PLmin denotes the measured minimum path loss from u to its neighboring cells. The remaining parameters are selected as P0^RL = −102 dBm and φ = 0.8. The parameters of C&B are chosen as we discussed in Section III.
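The three reference policies can likewise be written in a few lines. The Python sketch below follows (15), (16) and the Max Power rule with the parameter values quoted above; the function names and the example path losses are illustrative.

P_MAX_DBM = 23.0          # maximum UE transmit power

def fpc_power(pl_db, p0_dbm=-87.0, kappa=0.8):
    # Fractional Power Control, Eq. (15)
    return min(P_MAX_DBM, p0_dbm + kappa * pl_db)

def max_power():
    # Max Power: always transmit at the maximum allowable power
    return P_MAX_DBM

def rlpc_power(pl_db, pl_min_db, p0_dbm=-102.0, phi=0.8):
    # Reverse Link Power Control, Eq. (16)
    return min(P_MAX_DBM, p0_dbm + phi * pl_db + (1.0 - phi) * pl_min_db)

if __name__ == "__main__":
    # Illustrative path losses: 110 dB to the serving cell, 105 dB minimum to neighbors.
    print(fpc_power(110.0), max_power(), rlpc_power(110.0, 105.0))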
B. Simulation Results
We compare the performance of different power control
schemes in terms of three key metrics: cell average throughput
(i.e. sum average throughput per cell), cell edge throughput and
power efficiency. In particular, cell-edge throughput is defined
as the 5th percentile throughput performance among all UEs,
denoted as 5%-Edge, which is widely used in evaluating the
performance of UE fairness [6], [11], [15].
1) Performance Comparison in Macrocell Scenario: First,
we investigate the performance in enhancing received signal
strength, which is evaluated by time-average SNR as illustrated
in Fig. 4a. It can be seen that C&B, with the weight parameter
set as ζ = 1.3, is able to significantly boost UEs' time-average SNR over FPC. In particular, more than 40% of UEs gain at least 3.5 dB and approximately 20% of UEs even achieve more than a 10 dB SNR increase. C&B also improves the SNR of the top 20% of UEs compared to RLPC, which greatly benefits the UEs with good channel conditions. It is
clear that Max Power has the best SNR performance due to the
maximum transmission power. Nevertheless, it simultaneously
incurs severe interference which is quite undesirable.
Next, we turn our attention to the interference mitigation performance, which is represented by the time-average IoT as shown in Fig. 4b. Compared with FPC, C&B only slightly increases the UEs' average IoT, by 1.5 dB. On the other hand,
C&B performs a lot better than Max Power and shows clear
advantage over RLPC in suppressing the inter-cell interference.
Now we concentrate on the throughput performance, which is the combined result of signal enhancement and interference mitigation. In comparison with FPC and RLPC, C&B noticeably improves the throughput of UEs with good channel conditions (the top 30%), as illustrated in Fig. 4c. In particular, the top 20% of UEs achieve a throughput gain of at least
1 Mbits/s. As for UEs with poor channel condition (bottom
10%), C&B shows great advantage over Max Power.
Finally, we present C&B’s advantages over all other candidate power control schemes based on the detailed performance
summary in Table I. C&B shows a desirable advantage over FPC, providing a 51.9% improvement in cell average throughput while keeping the same 5%-Edge throughput and only reducing the power efficiency by 48%. Further, C&B beats Max Power by significantly improving 5%-Edge throughput and power efficiency, by 156% and 5,716% respectively, with slightly increased cell average throughput. Compared with the coordinated power control scheme RLPC, C&B achieves appreciable gains in cell average throughput and power efficiency of 25.1% and 71.2%, respectively, while slightly improving the 5%-Edge throughput.
Note that the performance improvement of C&B is solely
obtained by power control. One can further incorporate coordinated transceiver and scheduling techniques to achieve even
higher gain. For example, it is known that CoMP receiving techniques can enhance the edge throughput significantly [2]. As
shown in Table I, the throughput gain of C&B is more evident
in the high throughput UEs. Hence, additional improvement is
promising by combining C&B with other CoMP techniques.
2) Tradeoff between cell average throughput and cell edge
throughput: Recalling the utility maximization problem (12), we can vary the weight parameter ζ to adjust the balance between received signal enhancement and interference mitigation. As we gradually decrease ζ from 1.3 to 0.7, the cell average throughput keeps improving, in tandem with a continuously degrading 5%-Edge throughput, as summarized in Table III.
This tradeoff between average and edge throughput can be
explained as follows: (a) the UEs with good channel conditions
contribute most of the average throughput. Increasing such UEs' transmission power by decreasing ζ greatly helps them to achieve a stronger received signal, which results in their
throughput gains. Such gains lead the cell average throughput
Fig. 4: Simulation results in Macrocell scenario. (a) Time-average received SNR. (b) Time-average experienced IoT. (c) Time-average throughput.
to improve. (b) In contrast, boosting transmission power incurs
stronger inter-cell interference, which especially jeopardizes
those vulnerable UEs with poor channel states. As a result,
the 5%-Edge throughput is decreased.
TABLE III: Throughput performance comparison between different weight factor ζ selections.

                                 ζ = 1.3   ζ = 1.1   ζ = 0.9   ζ = 0.7
Average Throughput (Mbits/s)     12.23     12.41     12.95     13.17
5%-Edge Throughput (Mbits/s)     0.23      0.21      0.18      0.15
Note that when ζ = 0.7, C&B is remarkably better than
the Max Power policy in terms of both average and edge throughput. In addition, when ζ = 1.3, C&B is significantly better than FPC and RLPC in terms of cell average throughput. Therefore, one can adjust the weight parameter in order to adapt to different system requirements.
V. CONCLUSION AND FUTURE WORK
We investigate how to use limited large scale CSI to achieve
the throughput gain of CoMP through the design of a power
controller named C&B. The optimal power control problem by itself is NP-hard and remains open to date. Further, as very limited coordination is possible in the open-loop mode, the power controllers of different BSs are not
allowed to communicate. C&B satisfies all practical constraints
in cellular systems, and can significantly improve the average
and edge throughput over existing power control schemes with
very low complexity and almost no cost. Further throughput
enhancement is promising by combining C&B with other
coordinated transceiver and scheduling techniques.
REFERENCES
[1] A. Lozano, R. W. Heath, and J. G. Andrews, “ Fundamental limits of
cooperation, ” IEEE Trans. Inform. Theory, vol. 59, no. 9, pp. 5213-5226,
Sep. 2013.
[2] M. H. C. Suh and D. N. C. Tse, “ Downlink interference alignment, ”
IEEE Trans. Commun., vol. 59, no. 9, pp. 2616-2626, Sep. 2011.
[3] 3GPP TS 36.213, “ E-UTRA Physical Layer Procedures.”
[4] A. Barbieri, P. Gaal, S. Geirhofer, T. Ji, D. Malladi, Y. Wei, and F. Xue,
“ Coordinate downlink multi-point communications in heterogeneous 4g
cellular networks, ” in Proc. Inform. Theory and Appl. Workshop, San
Diego, CA, Feb. 2012.
[5] J. F. Whitehead, “ Signal-Level-Based Dynamic Power Control for Cochannel Interference Management, ” 43rd IEEE Vehicular Technology
Conference, pp. 499-502, May 1993.
[6] A. Simonsson and A. Furuskar, “ Uplink Power Control in LTE Overview and Performance, Subtitle: Principles and Benefits of Utilizing
rather than Compensating for SINR Variations, ” Vehicular Technology
Conference, 2008.
[7] A. M. Rao, “ Reverse Link Power Control for Managing Inter-cell Interference in Orthogonal Multiple Access Systems, ” Vehicular Technology
Conference, 2007.
[8] D. Shah, D. N. C. Tse, and J. N. Tsitsiklis, “ Hardness of low delay
network scheduling, ” IEEE Trans. Inf. Theory, vol. 57, no. 12, pp. 78107817, Dec. 2011.
[9] Z.-Q. Luo and S. Zhang, “ Dynamic spectrum management: Complexity
and duality, ” IEEE Journal of Selected Topics in Signal Processing, vol.
2, no. 1, pp. 57-73, Feb 2008.
[10] J.G.Gipson, “ The Communication Handbook, ” CRC Press, IEEE Press
1996.
[11] C. U. Castellanos, D. L. Villa, C. Rosa, K. I. Pedersen, F. D. Calabrese,
P. Michaelsen, and J. Michel, “ Performance of Uplink Fractional Power
Control in UTRAN LTE, ” Vehicular Technology Conference, 2008.
[12] T. D. Novlan, H. S. Dhillon and J. G. Andrews, “ Analytical Modeling
of Uplink Cellular Networks, ” IEEE Transactions on Wireless Communications, vol. 12, no. 6, Jun 2013.
[13] T. D. Novlan and J. G. Andrews, “ Analytical Evaluation of Uplink
Fractional Frequency Reuse, ” IEEE Transactions on Communications,
vol. 61, no. 5, May 2013.
[14] W. Xiao, R. Ratasuk, A. Ghosh, R. Love, Y. Sun, and R. Nory, “ Uplink
Power Control, Interference Coordination and Resource Allocation for
3GPP E-UTRA, ” Vehicular Technology Conference, 2006.
[15] M. Coupechoux and J. -M. Kelif, “ How to Set the Fractional Power
Control Compensation Factor in LTE, ” IEEE Sarnoff Symposium, 2011.
[16] R. Yang, A. Papathanassiou, W. Guan, A. T. Koc, H. Yin, Y. Chio, “
Supporting Material for UL OLPC Proposal, ” IEEE C80216m-09-0844,
2009.
[17] 3GPP TS 36.321, “ E-UTRA Medium Access Control (MAC) Protocol
Specification. ”
[18] 3GPP TR 25.996 v9.0.0., “ Channel Model for Multiple Input Multiple
Output (MIMO) Simulations, ” 2009.
[19] 3GPP TR 36.814, “ E-UTRA Further Advancements for E-UTRA Physical Layer Aspects. ”
[20] 3GPP TS 36.133, “ E-UTRA Requirements for Support of Radio Resource Management. ”
[21] F. Calabrese, C. Rosa, M. Anas, P. Michaelsen, K. Pedersen, and P.
Mogensen, “ Adaptive transmission bandwidth based packet scheduling
for lte uplink, ” in VTC 2008-Fall, Sept 2008, pp. 1-5.
[22] G. Huang, A. Nix, and S. Armour, “ DFT-Based Channel Estimation and
Noise Variance Estimation Techniques for Single-Carrier FDMA, ”in VTC
2010-Fall, 2010.
[23] Y. R. Zheng and C. Xiao, “ Simulation Models with Correct Statistical Properties for Rayleigh Fading Channels, ” IEEE Transactions on
Communications, vol. 51, issue 6, pp. 920-928, 2003.
[24] Y. Leost, M. Abdi, R. Richter and M. Jeschke, “ Interference Rejection
Combining in LTE Networks, ” Bell Labs Technical Journal, pp. 25-50,
2012.
[25] D. Tse and P. Viswanath, “ Fundamentals of Wireless Communication, ”
Cambridge University Press, 2005.
| 3 |
PERFECT SNAKE-IN-THE-BOX CODES
FOR RANK MODULATION
arXiv:1602.08073v3 [math.CO] 14 Oct 2016
ALEXANDER E. HOLROYD
Abstract. For odd n, the alternating group on n elements is generated by the permutations that jump an element from any odd position to position 1. We prove Hamiltonicity
of the associated directed Cayley graph for all odd n ≠ 5. (A result of Rankin implies that
the graph is not Hamiltonian for n = 5.) This solves a problem arising in rank modulation
schemes for flash memory. Our result disproves a conjecture of Horovitz and Etzion, and
proves another conjecture of Yehezkeally and Schwartz.
1. Introduction
The following questions are motivated by applications involving flash memory. Let Sn
be the symmetric group of permutations π = [π(1), . . . , π(n)] of [n] := {1, . . . , n}, with
composition defined by (πρ)(i) = π(ρ(i)). For 2 ≤ k ≤ n let
τk := [k, 1, 2, . . . , k − 1, k + 1, . . . , n] ∈ Sn
be the permutation that jumps element k to position 1 while shifting elements 1, 2, . . . , k−1
right by one place. Let Sn be the directed Cayley graph of Sn with generators τ2 , . . . , τn ,
i.e. the directed graph with vertex set Sn and a directed edge, labelled τi , from π to πτi
for each π ∈ Sn and each i = 2, . . . , n.
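For readers who wish to experiment, the generators and the edges of Sn are easy to write down in code. The short Python sketch below uses helper names of our own choosing; permutations are stored as 1-indexed lists, matching the notation above.

def tau(k, n):
    # The generator tau_k: jump element k to position 1, shifting 1, ..., k-1 right
    return [k] + list(range(1, k)) + list(range(k + 1, n + 1))

def compose(pi, rho):
    # (pi rho)(i) = pi(rho(i)), with permutations stored as 1-indexed lists
    return [pi[rho[i - 1] - 1] for i in range(1, len(pi) + 1)]

def out_edges(pi, n):
    # Outgoing edges of pi in the Cayley graph S_n: one per generator tau_2, ..., tau_n
    return {k: compose(pi, tau(k, n)) for k in range(2, n + 1)}

if __name__ == "__main__":
    n = 5
    identity = list(range(1, n + 1))
    print(tau(3, n))                  # [3, 1, 2, 4, 5]
    print(out_edges(identity, n)[4])  # the tau_4-edge from the identity leads to [4, 1, 2, 3, 5]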
We are concerned with self-avoiding directed cycles (henceforth referred to simply as
cycles except where explicitly stated otherwise) in Sn . (A cycle is self-avoiding if it visits
each vertex at most once). In applications to flash memory, a permutation represents the
relative ranking of charges stored in n cells. Applying τi corresponds to the operation of
increasing the ith charge to make it the largest, and a cycle is a schedule for visiting a
set of distinct charge rankings via such operations. Schemes of this kind were originally
proposed in [10].
One is interested in maximizing the length of such a cycle, since this maximizes the
information that can be stored. It is known that Sn has a directed Hamiltonian cycle,
i.e. one that includes every permutation exactly once; see e.g. [8, 10, 11, 13]. However,
for the application it is desirable that the cycle should not contain any two permutations
that are within a certain fixed distance r of each other, with respect to some metric d on
Sn . The motivation is to avoid errors arising from one permutation being mistaken for
another [10, 14]. The problem of maximizing cycle length for given r, d combines notions
of Gray codes [18] and error-detecting/correcting codes [2], and is sometimes known as a
snake-in-the-box problem. (This term has its origins in the study of analogous questions
involving binary strings as opposed to permutations; see e.g. [1]).
Date: 24 February 2016 (revised 10 October 2016).
Key words and phrases. Hamiltonian cycle; Cayley graph; snake-in-the-box; Gray code; rank
modulation.
The main result of this article is that, in the case that has received most attention
(described immediately below) there is a cycle that is perfect, i.e. that has the maximum
size even among arbitrary sets of permutations satisfying the distance constraint.
More precisely, our focus is the following case considered in [9, 24, 25]. Let r = 1 and let
d be the Kendall tau metric [12], which is defined by setting d(π, σ) to be the inversion
number of π^{−1}σ, i.e. the minimum number of elementary transpositions needed to get from
π to σ. (The ith elementary transposition swaps the permutation elements in positions
i and i + 1, where 1 ≤ i ≤ n − 1). Thus, the cycle is not allowed to contain any two
permutations that are related by a single elementary transposition. The primary object of
interest is the maximum possible length Mn of such a directed cycle in Sn .
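The distance constraint is also easy to check mechanically. The following Python sketch (helper names ours) computes d(π, σ) as the inversion number of π^{−1}σ.

def inverse(pi):
    # Inverse of a permutation stored as a 1-indexed list
    inv = [0] * len(pi)
    for pos, val in enumerate(pi, start=1):
        inv[val - 1] = pos
    return inv

def compose(pi, rho):
    return [pi[rho[i] - 1] for i in range(len(pi))]

def kendall_tau(pi, sigma):
    # d(pi, sigma): inversion number of pi^{-1} sigma
    rho = compose(inverse(pi), sigma)
    return sum(1 for i in range(len(rho))
                 for j in range(i + 1, len(rho)) if rho[i] > rho[j])

if __name__ == "__main__":
    # [2, 1, 3] is one elementary transposition away from the identity, so d = 1:
    print(kendall_tau([1, 2, 3], [2, 1, 3]))   # 1
    print(kendall_tau([1, 2, 3], [3, 2, 1]))   # 3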
It is easy to see that Mn ≤ n!/2. Indeed, any set of permutations satisfying the above
distance constraint includes at most one from the pair {π, πτ2 } for every π, but these pairs
partition Sn . To get a long cycle, an obvious approach is to restrict to the alternating
group An of all even permutations. Since an elementary transposition changes the parity
of a permutation, this guarantees that the distance condition is satisfied. The generator
τk lies in An if and only if k is odd. Therefore, if n is odd, this approach reduces to
the problem of finding a maximum directed cycle in the directed Cayley graph An of An
with generators τ3 , τ5 , . . . , τn . Yehezkeally and Schwartz [24] conjectured that for odd n
the maximum cycle length Mn is attained by a cycle of this type; our result will imply
this. (For even n this approach is less useful, since without using τn we can access only
permutations that fix n.) As in [9, 24, 25], we focus mainly on odd n.
For small odd n, it is not too difficult to find cycles in An with length reasonably close
to the upper bound n!/2, by ad-hoc methods. Finding systematic approaches that work
for all n is more challenging. Moreover, getting all the way to n!/2 apparently involves a
fundamental obstacle, but we will show how it can be overcome.
Specifically, it is obvious that M3 = 3!/2 = 3. For general odd n ≥ 5, Yehezkeally and Schwartz [24] proved the inductive bound Mn ≥ n(n − 2)Mn−2, leading to Mn ≥ Ω(n!/√n)
asymptotically. They also showed by computer search that M5 = 5!/2 − 3 = 57. Horowitz
and Etzion [9] improved the inductive bound to Mn ≥ (n2 −n−1)Mn−2 , giving Mn = Ω(n!).
They also proposed an approach for constructing a longer cycle of length n!/2 − n + 2(=
(1 − o(1))n!/2), and showed by computer search that it works for n = 7 and n = 9. They
conjectured that this bound is optimal for all odd n. Zhang and Ge [25] proved that the
scheme of [9] works for all odd n, establishing Mn ≥ n!/2 − n + 2, and proposed another
scheme aimed at improving the bound by 2 to n!/2 − n + 4. Zhang and Ge proved that
their scheme works for n = 7, disproving the conjecture of [9] in this case, but were unable
to prove it for general odd n.
The obvious central question here is whether there exists a perfect cycle, i.e. one of
length n!/2, for any odd n > 3. As mentioned above, Horovitz and Etzion [9] conjectured
a negative answer for all such n, while the authors of [24, 25] also speculate that the answer
is negative. We prove a positive answer for n ≠ 5.
Theorem 1. For all odd n ≥ 7, there exists a directed Hamiltonian cycle of the directed
Cayley graph An of the alternating group An with generators τ3 , τ5 , . . . , τn . Thus, Mn =
n!/2.
Besides being the first of optimal length, our cycle has a somewhat simpler structure
than those in [9, 25]. It may in principle be described via an explicit rule that specifies
which generator should immediately follow each permutation π, as a function of π. (See
[8, 22] for other Hamiltonian cycles of Cayley graphs that can be described in this way).
While the improvement from n!/2 − n + 2 to n!/2 is in itself unlikely to be important for
applications, our methods are quite general, and it is hoped that they will prove useful for
related problems.
We briefly discuss even n. Clearly, one approach is to simply leave the last element of the
permutation fixed, and use a cycle in An−1 , which gives Mn ≥ Mn−1 for even n. Horovitz
and Etzion [9] asked for a proof or disproof that this is optimal. In fact, we expect that
one can do much better. We believe that Mn ≥ (1 − o(1))n!/2 asymptotically as n → ∞
(an n-fold improvement over (n − 1)!/2), and perhaps even Mn ≥ n!/2 − O(n^2). We will
outline a possible approach to showing bounds of this sort, although it appears that a full
proof for general even n would be rather messy. When n = 6 we use this approach to show
M6 ≥ 315 = 6!/2 − 45, improving the bound M6 ≥ 57 of [9] by more than a factor of 5.
Hamiltonian cycles of Cayley graphs have been extensively studied, although general
results are relatively few. See e.g. [4, 13, 15, 23] for surveys. In particular, it is unknown
whether every undirected Cayley graph is Hamiltonian. Our key construction (described
in the next section) appears to be novel in the context of this literature also.
Central to our proof are techniques having their origins in change ringing (Englishstyle church bell ringing). Change ringing is also concerned with self-avoiding cycles in
Cayley graphs of permutations groups (with a permutation representing an order in which
bells are rung), and change ringers discovered key aspects of group theory considerably
before mathematicians did – see e.g. [5, 7, 20, 21]. As we shall see, the fact that A5 has no
Hamiltonian cycle (so that we have the strict inequality M5 < 5!/2) follows from a theorem
of Rankin [16, 19] that was originally motivated by change ringing.
2. Breaking the parity barrier
In this section we explain the key obstruction that frustrated the previous attempts at
a Hamiltonian cycle of An in [9, 24, 25]. We then explain how it can be overcome. We will
then use these ideas to prove Theorem 1 in Sections 3 and 4.
By a cycle cover of a directed Cayley graph we mean a set of self-avoiding directed
cycles whose vertex sets partition the vertex set of the graph. A cycle or a cycle cover can
be specified in several equivalent ways: we can list the vertices or edges encountered by a
cycle in order, or we can specify a starting vertex of a cycle and list the generators it uses
in order, or we can specify which generator immediately follows each vertex – i.e. the label
of the unique outgoing edge that belongs to the cycle or cycle cover. It will be useful to
switch between these alternative viewpoints.
A standard approach to constructing a Hamiltonian cycle is to start with a cycle cover,
and then successively make local modifications that unite several cycles into one, until we
have a single cycle. (See [3, 4, 7–9, 15, 17, 22–25] for examples.) However, in An and many
other natural cases, there is a serious obstacle involving parity, as we explain next.
The order order(g) of a group element g is the smallest t ≥ 1 such that g^t = id, where
id is the identity. In our case, let τk, τℓ be two distinct generators of An, and observe that
their ratio ρ := τℓ τk^{−1} is simply the permutation that jumps element ℓ to position k while
shifting the intervening elements by 1. For example, when n = 9 we have τ9 = [912345678]
and τ7^{−1} = [234567189], so τ9 τ7^{−1} = [123456978] (element 9 jumps first to position 1 and
then back to position 7). In general, the ratio ρ has order q := |k − ℓ| + 1, which is odd.
In the example, q = 3.
The fact that order(ρ) = q corresponds to the fact that in the Cayley graph An , starting
from any vertex, there is a cycle of length 2q consisting of directed edges oriented in
alternating directions and with alternating labels τℓ and τk . Consider one such alternating
cycle Q, and suppose that we have a cycle cover that includes all q of the τk -edges of Q.
Consequently, it includes none of the τℓ -edges of Q (since it must include only one outgoing
edge from each vertex). An example is the cycle cover that uses the outgoing τk -edge from
every vertex of An . Then we may modify the cycle cover as follows: delete all the τk -edges
of Q, and add all the τℓ -edges of Q. This results in a new cycle cover, because each vertex
of the graph still has exactly one incoming edge and one outgoing edge present.
Suppose moreover that all the τk -edges of Q lay in distinct cycles in the original cycle
cover. Then the effect of the modification is precisely to unite these q cycles into one new
cycle (having the same vertices). The new cycle alternately traverses the new τℓ -edges
and the remaining parts of the q original cycles. All other cycles of the cycle cover are
unaffected. See Figure 1 (left) for the case (k, ℓ) = (n − 2, n) (with q = 3), and Figure 1
(right) for the permutations at the vertices of the alternating cycle Q.
A modification of the above type reduces the total number of cycles in the cycle cover
by q − 1, and therefore, since q is odd, it does not change the parity of the total number of
cycles. Less obviously, it turns out that this parity is preserved by such a modification even
if we relax the assumption that the q deleted edges lie in distinct cycles. (See [16] or [19]
for proofs.) This is a problem, because many cycle covers that one might naturally start
with have an even number of cycles. This holds in particular for the cycle cover that uses
a single generator τk everywhere (for n ≥ 5), and also for the one that arises in an obvious
inductive approach to proving Theorem 1 (comprising |An |/|An−2| = n(n − 1) cycles each
of length |An−2 |). Thus we can (apparently) never get to a Hamiltonian cycle (i.e. a cycle
cover of one cycle) by this method.
The above ideas in fact lead to the following rigorous condition for non-existence of
directed Hamiltonian cycles. The result was proved by Rankin [16], based on an 1886
proof by Thompson [20] of a special case arising in change ringing; Swan [19] later gave a
simpler version of the proof.
Theorem 2. Consider the directed Cayley graph G of a finite group with two generators
a, b. If order(ab^{−1}) is odd and |G|/ order(a) is even, then G has no directed Hamiltonian
cycle.
An immediate consequence is that A5 has no directed Hamiltonian cycle (confirming the
computer search result of [9]), and indeed An has no directed Hamiltonian cycle using only
two generators for odd n ≥ 5.
To break the parity barrier, we must use at least three generators in a fundamental
way. The problem with the previous approach was that order(τℓ τk^{−1}) is odd: we need an
analogous relation involving composition of an even number of ratios of two generators. In
terms of the graph An , we need a cycle of length a multiple of 4 whose edges are oriented
in alternating directions. It is clear that such a thing must exist for all odd n ≥ 7, because
the ratios τk τℓ^{−1} generate the alternating group on the n − 2 elements {3, . . . , n}, which
contains elements of even order. We will use the example:
(1)    order(ζ) = 2,  where  ζ := τn τn−2^{−1} τn τn−4^{−1} τn τn−4^{−1}.
Figure 1. Left: linking 3 cycles by replacing generator τn−2 with generator
τn in 3 places. We start with the 3 thin blue cycles, each of which comprises a
dotted edge labeled with generator τn−2 , and a curved arc that represents the
remaining part of the cycle. We delete the dotted edges and replace them
with the thick solid black edges (labelled τn ), to obtain one (solid) cycle,
containing the same vertices as the original 3 cycles. Right: the permutations
at the six vertices that are marked with solid discs in the left picture. The
permutation at the (green) circled vertex is [. . . . . . , a, b, c], where a, b, c ∈ [n],
and the permutations are listed in clockwise order around the inner hexagon
starting and finishing there. The ellipsis · · · · · · represents a sequence of n−3
distinct elements of [n], the same sequence everywhere it occurs. A solid
black curve indicates that the ratio between the two successive permutations
is τn (so that an element jumps from position n to 1), while a dotted blue
curve indicates τn−2^{−1} (with a jump from 1 to n − 2).
Figure 2. The key construction. Left: replacing a suitable combination
of generators τn−2 and τn−4 with τn links 6 cycles into one, breaking the
parity barrier. We start with the 2 blue and 4 red thin cycles, and replace
the dotted edges with the thick black solid edges to obtain the solid cycle.
Right: the permutations appearing at the vertices marked with solid discs,
listed in clockwise order starting and ending at the circled vertex, which is
[. . . . , a, b, c, d, e]. The ellipsis · · · · represents the same sequence everywhere
it occurs.
It is a routine matter to check (1): the ratio τn τn−s^{−1}
is the permutation that jumps an
element from position n to n − s (while fixing 1, . . . , n − s − 1 and shifting n − s, . . . , n − 1
right one place), so to compute the composition ζ of three such ratios we need only keep
track of the last 5 elements. Figure 2 (right) shows the explicit computation: starting
from an arbitrary permutation π = [. . . , a, b, c, d, e] ∈ An , the successive compositions
−1
π, πτn, πτn τn−2^{−1}, πτn τn−2^{−1} τn, . . . , πζ^2 = π are listed – the ellipsis · · · · represents the same
useful later.
We can use the above observation to link 6 cycles into one, as shown in Figure 2 (left).
Let Q′ be a length-12 cycle in An with edges in alternating orientations that corresponds
to the identity (1). That is to say, every alternate edge in Q′ has label τn , and is oriented
in the same direction around Q′ . The other 6 edges are oriented in the opposite direction,
and have successive labels τn−2 , τn−4 , τn−4 , τn−2 , τn−4 , τn−4 . Suppose that we start with a
cycle cover in which the two τn−2 -edges and the four τn−4 -edges of Q′ all lie in distinct
cycles. Then we can delete these 6 edges and replace them with the six τn -edges of Q′ .
This results in a new cycle cover in which these 6 cycles have been united into one, thus
reducing the number of cycles by 5 and changing its parity. See Figure 2 (left) – the old
cycles are in thin red and blue, while the new cycle is shown by solid lines and arcs.
We will prove Theorem 1 by induction. The inductive step will use one instance of
the above 6-fold linkage to break the parity barrier, together with many instances of the
simpler 3-fold linkage described earlier with (k, ℓ) = (n − 2, n). The base case n = 7 will
use the 6-fold linkage in the reverse direction (replacing six τn -edges with τn−2 , τn−4 , . . .),
together with the cases (k, ℓ) = (7, 5), (7, 3) of the earlier linkage.
3. Hypergraph spanning
The other main ingredient for our proof is a systematic way of organizing the various
linkages. For this the language of hypergraphs will be convenient. Similar hypergraph
constructions were used in [9, 25]. A hypergraph (V, H) consists of a vertex set V and a
set H of nonempty subsets of V , which are called hyperedges. A hyperedge of size r is
called an r-hyperedge.
The incidence graph of a hypergraph (V, H) is the bipartite graph with vertex set
V ∪ H, and with an edge between v ∈ V and h ∈ H if v ∈ h. A component of a
hypergraph is a component of its incidence graph, and a hypergraph is connected if it has
one component. We say that a hypergraph is acyclic if its incidence graph is acyclic. Note
that this a rather strong condition: for example, if two distinct hyperedges h and h′ share
two distinct vertices v and v ′ then the hypergraph is not acyclic. (Several non-equivalent
notions of acyclicity for hypergraphs have been considered – the notion we use here is
sometimes called Berge-acyclicity – see e.g. [6]).
We are interested in hypergraphs of a particular kind that are related to the linkages
considered in the previous section. Let [n](k) be the set of all n!/(n − k)! ordered k-tuples of distinct elements of [n]. If t = (a, b, c) ∈ [n](3) is a triple, define the triangle
∆(t) = ∆(a, b, c) := {(a, b), (b, c), (c, a)} ⊂ [n](2) of pairs that respect the cyclic order.
(Note that ∆(a, b, c) = ∆(c, a, b) 6= ∆(c, b, a).) In our application to Hamiltonian cycles,
∆(a, b, c) will encode precisely the linkage of 3 cycles shown in Figure 1. The following fact
and its proof are illustrated in Figure 3.
Figure 3. The hypergraph of Proposition 3, when n = 9. The vertices are
all the ordered pairs (a, b) = ab ∈ [n](2) , and the hyperedges are triangles of
the form {ab, bc, ca}. Hyperedges are colored according to the step of the
induction at which they are added. In the last step from n = 8 to n = 9, all
the white hyperedges are added, i.e. those incident to vertices that contain
9.
Proposition 3. Let n ≥ 3. There exists an acyclic hypergraph with vertex set [n](2) , with
all hyperedges being triangles ∆(t) for t ∈ [n](3) , and with exactly two components: one
containing precisely the 3 vertices of ∆(3, 2, 1), and the other containing all other vertices.
Proof. We give an explicit inductive construction. When n = 3 we simply take as hyperedges the two triangles ∆(3, 2, 1) and ∆(1, 2, 3).
Now let n ≥ 4, and assume that ([n − 1](2) , H) is a hypergraph satisfying the given
conditions for n − 1. Consider the larger hypergraph ([n](2) , H) with the same set of hyperedges, and note that its components are precisely: (i) ∆(3, 2, 1); (ii) an acyclic component
which we denote K that contains all vertices of [n − 1](2) \ ∆(3, 2, 1); and (iii) the 2n − 2
isolated vertices {(i, n), (n, i) : i ∈ [n − 1]}.
We will add some further hyperedges to ([n](2) , H). For i ∈ [n − 1], write i+ for the
integer in [n − 1] that satisfies i+ ≡ (i + 1) mod (n − 1), and define
D := ∆(i, i+ , n) : i ∈ [n − 1]
= ∆(1, 2, n), ∆(2, 3, n), . . . , ∆(n − 2, n − 1, n), ∆(n − 1, 1, n) .
Any element ∆(i, i+ , n) of D has 3 vertices. One of them, (i, i+ ), lies in K, while the
others, (i+ , n) and (n, i), are isolated vertices of ([n](2) , H). Moreover, each isolated vertex
of ([n](2) , H) appears in exactly one hyperedge in D. Therefore, ([n](2) , H ∪ D) has all the
claimed properties.
We remark that the above hypergraph admits a simple (non-inductive) description – it
consists of all ∆(a, b, c) such that max{a, b} < c and b ≡ (a + 1) mod (c − 1).
In order to link cycles into a Hamiltonian cycle we will require a connected hypergraph.
For n ≥ 3 there is no connected acyclic hypergraph of triangles with vertex set [n](2) .
(This follows from parity considerations: an acyclic component composed of m triangles
has 1 + 2m vertices, but |[n](2) | is even.) Instead, we simply introduce a larger hyperedge,
as follows.
Corollary 4. Let n ≥ 5 and let a, b, c, d, e ∈ [n] be distinct. There exists a connected acyclic
hypergraph with vertex set [n](2) such that one hyperedge is the 6-hyperedge ∆(a, b, e) ∪
∆(c, d, e), and all others are triangles ∆(t) for t ∈ [n](3) .
Proof. By symmetry, it is enough to prove this for any one choice of (a, b, c, d, e); we choose
(2, 1, 4, 5, 3). The result follows from Proposition 3, on noting that ∆(3, 4, 5) = ∆(4, 5, 3)
is a hyperedge of the hypergraph constructed there: we simply unite it with ∆(3, 2, 1) =
∆(2, 1, 3) to form the 6-hyperedge.
4. The Hamiltonian cycle
We now prove Theorem 1 by induction on (odd) n. We give the inductive step first,
followed by the base case n = 7. The following simple observation will be used in the
inductive step.
Lemma 5. Let n ≥ 3 be odd, and consider any Hamiltonian cycle of An . For every i ∈ [n]
there exists a permutation π ∈ An with π(n) = i that is immediately followed by a τn -edge
in the cycle.
Proof. Since the cycle visits all permutations of An , it must contain a directed edge from
a permutation π satisfying π(n) = i to a permutation π ′ satisfying π ′ (n) 6= i. This is a
τn -edge, since any other generator would fix the rightmost element.
Proof of Theorem 1, inductive step. We will prove by induction on odd n ≥ 7 the statement:
(2)
there exists a Hamiltonian cycle of An that includes at least one τn−2 -edge.
As mentioned above, we postpone the proof of the base case n = 7. For distinct a, b ∈ [n]
define the set of permutations of the form [. . . , a, b]:
n
o
An (a, b) := π ∈ An : π(n − 1), π(n) = (a, b) .
Let n ≥ 9, and let L = (τs(1) , τs(2) , . . . , τs(m) ) be the sequence of generators used by a
Hamiltonian cycle of An−2 , as guaranteed by the inductive hypothesis, in the order that
they are encountered in the cycle starting from id ∈ An−2 (where m = (n − 2)!/2, and
s(i) ∈ {3, 5, . . . , n − 2} for each i). Now start from any permutation π ∈ An (a, b) and
apply the sequence of generators L (where a generator τk ∈ An−2 is now interpreted as
the generator τk ∈ An with the same name). This gives a cycle in An whose vertex set is
precisely An (a, b). (The two rightmost elements a, b of the permutation are undisturbed,
because L does not contain τn .) Note that, for given a, b, different choices of the starting
permutation π ∈ An (a, b) in general result in different cycles.
We next describe the idea of the proof, before giving the details. Consider a cycle cover
C comprising, for each (a, b) ∈ [n](2) , one cycle C(a, b) with vertex set An (a, b) of the form
described above (so n(n − 1) cycles in total). We will link the cycles of C together into a
single cycle by substituting the generator τn at appropriate points, in the ways discussed
in Section 2. The linking procedure will be encoded by the hypergraph of Corollary 4. The
vertex (a, b) of the hypergraph will correspond to the initial cycle C(a, b). A 3-hyperedge
∆(a, b, c) will indicate a substitution of τn for τn−2 in 3 of the cycles of C, linking them
together in the manner of Figure 1. The 6-hyperedge will correspond to the parity-breaking
linkage in which τn is substituted for occurrences of both τn−2 and τn−4 , linking 6 cycles as
in Figure 2. One complication is that the starting points of the cycles of C must be chosen
so that τn−2 - and τn−4 -edges occur in appropriate places so that all these substitutions are
possible. To address this, rather than choosing the cycle cover C at the start, we will in fact
build our final cycle sequentially, using one hyperedge at a time, and choosing appropriate
cycles C(a, b) as we go. We will start with the 6-hyperedge, and for each subsequent
3-hyperedge we will link in two new cycles. Lemma 5 will ensure enough τn−2 -edges for
subsequent steps: for any (a, b, c) ∈ [n](3) , there is a vertex of the form [. . . , a, b, c] in C(b, c)
followed by a τn−2 -edge. The inductive hypothesis (2) will provide the τn−4 -edges needed for
the initial 6-fold linkage.
We now give the details. In preparation for the sequential linking procedure, choose an
acyclic connected hypergraph ([n](2) , H) according to Corollary 4, with the 6-hyperedge
being ∆0 ∪ ∆′0 , where ∆0 := ∆(c, d, e) and ∆′0 := ∆(a, b, e), and where we write
(3)
(a, b, c, d, e) = (n − 4, n − 3, n − 2, n − 1, n).
Let N = |H| − 1, and order the hyperedges as H = {h0 , h1 , . . . , hN } in such a way that
h0 = ∆0 ∪ ∆′0 is the 6-hyperedge, and, for each 1 ≤ i ≤ N, the hyperedge hi shares
exactly one vertex with ⋃_{ℓ=0}^{i−1} hℓ . (To see that this is possible, note that for any choice of
h0 , . . . , hi−1 satisfying this condition, connectedness of the hypergraph implies that there
exists hi that shares at least one vertex with one of its predecessors; acyclicity then implies
that it shares exactly one.)
We will construct the required Hamiltonian cycle via a sequence of steps j = 0, . . . , N.
At the end of step j we will have a self-avoiding directed cycle Cj in An with the following
properties.
(i) The vertex set of Cj is the union of An (x, y) over all (x, y) ∈ ⋃_{i=0}^{j} hi .
(ii) For every (x, y, z) ∈ [n](3) such that (y, z) ∈ ⋃_{i=0}^{j} hi but ∆(x, y, z) ∉ {∆0 , ∆′0 , h1 , h2 , . . . , hj }, there exists a permutation π ∈ An of the form
[. . . , x, y, z] that is followed immediately by a τn−2 -edge in Cj .
We will check by induction on j that the above properties hold. The final cycle CN will
be the required Hamiltonian cycle. The purpose of the technical condition (ii) is to ensure
that suitable edges are available for later linkages; the idea is that the triple (x, y, z) is
available for linking in two further cycles unless it has already been used.
We will describe the cycles Cj by giving their sequences of generators. Recall that L is
the sequence of generators of the Hamiltonian cycle of An−2 . Note that L contains both
τn−2 and τn−4 , by Lemma 5 and the inductive hypothesis (2) respectively. For each of
k = n − 2, n − 4, fix some location j where τk occurs in L (so that s(j) = k), and let L[τk ]
be the sequence obtained by starting at that location and omitting this τk from the cycle:
L[τk ] := (τs(j+1) , τs(j+2) , . . . , τs(m) , τs(1) , . . . , τs(j−1) ).
Note that the composition in order of the elements of L[τk ] is τk^{−1} .
For step 0, let C0 be the cycle that starts at id ∈ An and uses the sequence of generators
τn , L[τn−2 ], τn , L[τn−4 ], τn , L[τn−4 ],
τn , L[τn−2 ], τn , L[τn−4 ], τn , L[τn−4 ],
(where commas denote concatenation). This cycle is precisely of the form illustrated in
Figure 2 (left) by the solid arcs and lines. The curved arcs represent the paths corresponding to the L[·] sequences. The vertex set of each such path is precisely An (u, v) for some
pair (u, v); we denote this path P (u, v). The solid lines represent the τn -edges. Moreover, since Figure 2 (right) lists the vertices (permutations) at the beginning and end of
each path P (u, v), we can read off the pairs (u, v). With a, . . . , e as in (3), the pairs are
{(d, e), (c, d), (e, c), (b, e), (a, b), (e, a)}. This set equals ∆0 ∪ ∆′0 = h0 , so property (i) above
holds for the cycle C0 .
We next check that C0 satisfies (ii). Let (x, y, z) ∈ [n](3) be such that (y, z) ∈ h0 . The
cycle C0 includes a path P (y, z) with vertex set An (y, z) and generator sequence L[τk ]
(where k is n − 2 or n − 4). Let C(y, z) be the cycle that results from closing the gap,
i.e. appending a τk -edge f to the end of P (y, z). Note that P (y, z) and C(y, z) both have
vertex set An (y, z). By Lemma 5 applied to An−2, the cycle C(y, z) contains a permutation
of the form [. . . , x, y, z] immediately followed by a τn−2 -edge, g say. Edge g is also present
in C0 unless g = f . Consulting Figure 2, and again using the notation in (3), we see that
this happens only in the two cases (x, y, z) = (e, c, d), (e, a, b). But in these cases we have
∆(x, y, z) = ∆0 , ∆′0 respectively. Thus condition (ii) is satisfied at step 0.
Now we inductively describe the subsequent steps. Suppose that step j − 1 has been
completed, giving a cycle Cj−1 that satisfies (i) and (ii) (with parameter j −1 in place of j).
We will augment Cj−1 to obtain a larger cycle Cj , in a manner encoded by the hyperedge
hj . Let
hj = ∆(a, b, c) = {(a, b), (b, c), (c, a)}
(where we no longer adopt the notation (3)). By our choice of the ordering of H, exactly
one of these pairs belongs to ⋃_{i=0}^{j−1} hi ; without loss of generality, let it be (b, c). By property
(ii) of the cycle Cj−1, it contains a vertex of the form [. . . , a, b, c] immediately followed by
a τn−2 -edge, f say. Delete edge f from Cj−1 to obtain a directed path Pj−1 with the same
vertex set. Append to Pj−1 the directed path that starts at the endvertex of Pj−1 and then
uses the sequence of generators
τn , L[τn−2 ], τn , L[τn−2 ], τn .
Since order(τn τn−2^{−1}) = 3, this gives a cycle, which we denote Cj .
The new cycle Cj has precisely the form shown in Figure 1 (left) by the solid arcs and
lines, where Cj−1 is the thin blue cycle in the upper left, containing the circled vertex,
which is the permutation [. . . , a, b, c]. The arc is Pj−1, and the dotted edge is f . As
before, the permutations at the filled discs may be read from Figure 1 (right). Thus, Cj
consists of the path Pj−1 , together with two paths P (a, b), P (c, a) with respective vertex
sets An (a, b), An (c, a) (the other two thin blue arcs in the figure), and three τn -edges (thick
black lines) connecting these three paths. Hence Cj satisfies property (i).
We now check that Cj satisfies (ii). The argument is similar to that used in step 0. Let
(x, y, z) satisfy the assumptions in (ii). We consider two cases. First suppose (y, z) ∉ hj .
Then (y, z) ∈ ⋃_{i=0}^{j−1} hi , and so property (ii) of Cj−1 implies that Cj−1 has a vertex of the
form [. . . , x, y, z] followed by a τn−2 -edge g, say. Then g is also present in Cj unless g = f .
But in that case we have (x, y, z) = (a, b, c), and so ∆(x, y, z) = hj , contradicting the
assumption on (x, y, z). On the other hand, suppose (y, z) ∈ hj . Then (y, z) equals (a, b)
or (c, a). Suppose the former; the argument in the latter case is similar. Let C(a, b) be the
row | permutations | generator
1 | 6777∗∗∗, 7776∗∗∗ | τ5
2 | 67∗∗∗∗∗, 76∗∗∗∗∗ | τ3
3 | 5671∗∗∗, 576∗∗∗∗ | τ5
4 | 2567∗∗∗, 4576∗∗∗ | τ5
5 | 5671234, 5612347, 5623714, 5637142 | τ3
6 | 5623471, 5671423 | τ5
7 | otherwise | τ7
Table 1. Rules for generating a directed Hamiltonian cycle of A7 . Permutations of the given forms should be followed by the generator in the same
row of the table. The symbol ∗ denotes an arbitrary element of [7], and ā
denotes any element other than a.
cycle obtained by appending a τn−2 -edge to P (a, b). Applying Lemma 5 shows that C(a, b)
contains a vertex of the form [. . . , x, a, b] followed by a τn−2 -edge g, say. Then g is also
present in P (a, b) unless x = c, but then ∆(x, y, z) = hj , contradicting the assumption in
(ii). Thus, property (ii) is established.
To conclude the proof, note that the final cycle CN is Hamiltonian, by property (i) and
the fact that the hypergraph of Corollary 4 has vertex set [n](2) . To check that it includes
some τn−2 -edge as required for (2), recall that hN has only one vertex in common with
h0 , . . . , hN −1 , so there exist x, y, z with (y, z) ∈ hN but ∆(x, y, z) ∉ H. Hence property (ii)
implies that CN contains a τn−2 -edge.
Proof of Theorem 1, base case. For the base case of the induction, we give an explicit directed Hamiltonian cycle of A7 that includes τ5 at least once. (In fact the latter condition
must necessarily be satisfied, since, as remarked earlier, Theorem 2 implies that there is
no Hamiltonian cycle using only τ3 and τ7 .)
Table 1 specifies which generator the cycle uses immediately after each permutation of
A7 , as a function of the permutation itself. The skeptical reader may simply check by
computer that these rules generate the required cycle. But the rules were constructed by
hand; below we briefly explain how.
First suppose that from every permutation of A7 we apply the τ7 generator, as specified
in row 7 of the table. This gives a cycle cover comprising |A7 |/7 = 360 cycles of size 7.
Now consider the effect of replacing some of these τ7 ’s according to rows 1–6 in succession.
Each such replacement performs a linkage, as in Figures 1 and 2. Row 1 links the cycles
in sets of 3 to produce 120 cycles of length 21, each containing exactly one permutation
of the form 67∗∗∗∗∗ or 76∗∗∗∗∗. Row 2 then links these cycles in sets of 5 into 24 cycles
of length 105, each containing exactly one permutation of the form 675∗∗∗∗ or 765∗∗∗∗.
Rows 3 and 4 link various sets of three cycles, permuting elements 1234, to produce 6
cycles. Finally, rows 5 and 6 break the parity barrier as discussed earlier, uniting these 6
cycles into one.
5. Even size
We briefly discuss a possible approach for even n. Recall that Mn is the maximum length
of a cycle in Sn in which no two permutations are related by an adjacent transposition.
To get a cycle longer than Mn−1 we must use τn . But this is an odd permutation, so
we cannot remain in the alternating group An . We suggest following τn immediately by
another odd generator, say τn−2 , in order to return to An (note that τ2 is forbidden). In
order to include permutations of the form [. . . , j] for every j ∈ [n], we need to perform
such a transition (at least) n times in total in our cycle. In the ith transition we visit one
odd permutation, αi say, between the generators τn and τn−2 . For the remainder of the
cycle we propose using only generators τk for odd k, so that we remain in An .
In fact, one may even try to fix the permutations α1 , . . . , αn in advance. The problem
then reduces to that of finding long self-avoiding directed paths in An−1 , with specified start
and end vertices, and avoiding certain vertices – those that would result in a permutation
that is related to some αi by an elementary transposition. Since there are n αi ’s and n − 1
elementary transpositions, there are O(n2 ) vertices to be avoided in total.
Since, for large n, the number of vertices to be avoided is much smaller than |An−1 |, we
think it very likely that paths of length (1 − o(1))|An−1| exist, which would give Mn ≥
(1 − o(1))n!/2 as n → ∞. It is even plausible that Mn ≥ n!/2 − O(n2 ) might be achievable.
The graph An−1 seems to have a high degree of global connectivity, as evidenced by the
diverse constructions of cycles of close to optimal length in [9, 24, 25]. For a specific
approach (perhaps among others), one might start with a short path linking the required
start and end vertices, and then try to successively link in short cycles (say those that use a
single generator such as τn−1 ) in the manner of Figure 1, leaving out the relatively few short
cycles that contain forbidden vertices. It is conceivable that the forbidden permutations
might conspire to prevent such an approach, for example by blocking even short paths
between the start and end vertices. However, this appears unlikely, especially given the
additional flexibility in the choice of α1 , . . . , αn .
While there appear to be no fundamental obstacles, a proof for general even n along the
above lines might be rather messy. (Of course, this does not preclude some other more
elegant approach). Instead, the approach was combined with a computer search to obtain
a cycle of length 315(= 6!/2−45) for n = 6, which is presented below, answering a question
of [9], and improving the previous record M6 ≥ 57 [9] by more than a factor of 5. The case
n = 6 is in some respects harder than larger n: the forbidden vertices form a larger fraction
of the total, and A5 has only two generators, reducing available choices. (On the other
hand, the search space is of course relatively small). Thus, this result also lends support
to the belief that Mn ≥ (1 − o(1))n!/2 as n → ∞.
The search space was reduced by quotienting the graph S6 by a group of order 3 to
obtain a Schreier graph, giving a cycle in which the sequence of generators is repeated 3
times. The cycle uses the sequence of generators (τk(i) ), where (k(i)), i = 1, . . . , 315, is the sequence
(64 553̄53̄3̄55553̄5553̄5553̄3̄55553̄55553̄5553̄55553̄3̄5553̄55553̄3̄ 64 5553̄3̄53̄553̄3̄553̄3̄53̄5553̄55553̄5553̄5553̄3̄55553̄5553̄55553̄3̄5)³ .
(Here, commas are omitted, the superscript indicates that the sequence is repeated three
times, and 3’s are marked as an aid to visual clarity).
References
[1] H. L. Abbott and M. Katchalski. On the snake in the box problem. J. Combin. Theory
Ser. B, 45(1):13–24, 1988.
[2] J. Baylis. Error-correcting codes, a mathematical introduction. Chapman and Hall
Mathematics Series. Chapman & Hall, London, 1998.
[3] R. C. Compton and S. G. Williamson. Doubly adjacent Gray codes for the symmetric
group. Linear and Multilinear Algebra, 35(3-4):237–293, 1993.
[4] S. J. Curran and J. A. Gallian. Hamiltonian cycles and paths in Cayley graphs and
digraphs – a survey. Discrete Mathematics, 156(1):1–18, 1996.
[5] R. Duckworth and F. Stedman. Tintinnalogia, or, the art of ringing. London, 1667.
www.gutenberg.org/ebooks/18567.
[6] R. Fagin. Degrees of acyclicity for hypergraphs and relational database schemes. J.
Assoc. Comput. Mach., 30(3):514–550, 1983.
[7] D. Griffiths. Twin bob plan composition of Stedman Triples: Partitioning of graphs
into Hamiltonian subgraphs. Research Report 94(37), Univ. of Sydney, School of Math.
and Stat., 1994. www.maths.usyd.edu.au/res/CompAlg/Gri/2bob-sted.html.
[8] A. E. Holroyd, F. Ruskey, and A. Williams. Shorthand universal cycles for permutations. Algorithmica, 64(2):215–245, 2012.
[9] M. Horovitz and T. Etzion. Constructions of snake-in-the-box codes for rank modulation. IEEE Trans. Inform. Theory, 60(11):7016–7025, 2014.
[10] A. Jiang, R. Mateescu, M. Schwartz, and J. Bruck. Rank modulation for flash memories. IEEE Trans. Inform. Theory, 55(6):2659–2673, 2009.
[11] J. R. Johnson. Universal cycles for permutations. Discrete Math., 309(17):5264–5270,
2009.
[12] M. G. Kendall. A new measure of rank correlation. Biometrika, 30(1/2):81–93, 1938.
[13] D. E. Knuth. The art of computer programming. Vol. 4, Fasc. 2. Addison-Wesley,
Upper Saddle River, NJ, 2005. Generating all tuples and permutations.
[14] A. Mazumdar, A. Barg, and G. Zémor. Constructions of rank modulation codes. IEEE
Trans. Inform. Theory, 59(2):1018–1029, 2013.
[15] I. Pak and R. Radoičić. Hamiltonian paths in Cayley graphs. Discrete Mathematics,
309(17):5501–5508, 2009.
[16] R. A. Rankin. A campanological problem in group theory. Proc. Cambridge Philos.
Soc., 44:17–25, 1948.
[17] E. Rapaport-Strasser. Cayley color groups and Hamilton lines. Scripta Math., 24:51–
58, 1959.
[18] C. Savage. A survey of combinatorial Gray codes. SIAM Rev., 39(4):605–629, 1997.
[19] R. G. Swan. A simple proof of Rankin’s campanological theorem. Amer. Math.
Monthly, 106(2):159–161, 1999.
[20] W. H. Thompson. A note on Grandsire Triples. Macmillan and Bowes, Cambridge,
1886. Reprinted in W. Snowdon, Grandsire, London 1905.
[21] A. T. White. Fabian Stedman: the first group theorist? Amer. Math. Monthly,
103(9):771–778, 1996.
[22] A. Williams. Hamiltonicity of the Cayley digraph on the symmetric group generated
by σ = (12 · · · n) and τ = (12), 2013. arXiv:1307.2549.
[23] D. Witte and J. A. Gallian. A survey: Hamiltonian cycles in Cayley graphs. Discrete
Math., 51(3):293–304, 1984.
[24] Y. Yehezkeally and M. Schwartz. Snake-in-the-box codes for rank modulation. IEEE
Trans. Inform. Theory, 58(8):5471–5483, 2012.
[25] Y. Zhang and G. Ge. Snake-in-the-box codes for rank modulation under Kendall’s
τ -metric, 2015. arXiv:1506.02740.
Alexander E. Holroyd, Microsoft Research, 1 Microsoft Way, Redmond, WA 98052,
USA
E-mail address: holroyd at microsoft.com
URL: http://research.microsoft.com/~holroyd/
| 4 |
A polynomial identity via differential operators
arXiv:1703.04167v1 [] 12 Mar 2017
Anurag K. Singh
Dedicated to Professor Winfried Bruns, on the occasion of his 70th birthday
Abstract We give a new proof of a polynomial identity involving the minors of a
matrix, that originated in the study of integer torsion in a local cohomology module.
1 Introduction
Our study of integer torsion in local cohomology modules began in the paper [Si],
where we constructed a local cohomology module that has p-torsion for each prime
integer p, and also studied the determinantal example H^3_{I_2}(Z[X]) where X is a 2 × 3
matrix of indeterminates, and I_2 the ideal generated by its size 2 minors. In that
paper, we constructed a polynomial identity that shows that the local cohomology
module H^3_{I_2}(Z[X]) has no integer torsion; it then follows that this module is a rational vector space. Subsequently, in joint work with Lyubeznik and Walther, we
showed that the same holds for all local cohomology modules of the form H^k_{I_t}(Z[X]),
where X is a matrix of indeterminates, I_t the ideal generated by its size t minors,
and k an integer with k > height I_t , [LSW, Theorem 1.2]. In a related direction,
in joint work with Bhatt, Blickle, Lyubeznik, and Zhang, we proved that the local
cohomology of a polynomial ring over Z can have p-torsion for at most finitely
many p; we record a special case of [BBLSZ, Theorem 3.1]:
Theorem 1. Let R be a polynomial ring over the ring of integers, and let f1 , . . . , fm
be elements of R. Let n be a nonnegative integer. Then each prime integer that is a
nonzerodivisor on the Koszul cohomology module H^n(f_1 , . . . , f_m ; R) is also a nonzerodivisor on the local cohomology module H^n_{(f_1 , . . . , f_m)}(R).
A. K. Singh
Department of Mathematics, University of Utah, 155 S 1400 E, Salt Lake City, UT 84112, USA
e-mail: [email protected]
These more general results notwithstanding, a satisfactory proof or conceptual
understanding of the polynomial identity from [Si] had previously eluded us; extensive calculations with Macaulay2 had led us to a conjectured identity, which we
were then able to prove using the hypergeometric series algorithms of Petkovšek,
Wilf, and Zeilberger [PWZ], as implemented in Maple. The purpose of this note is
to demonstrate how techniques using differential operators underlying the papers
[BBLSZ] and [LSW] provide the “right” proof of the identity, and, indeed, provide
a rich source of similar identities.
We remark that there is considerable motivation for studying local cohomology
of rings of polynomials with integer coefficients such as H^k_{I_t}(Z[X]): a matrix of indeterminates X specializes to a given matrix of that size over an arbitrary commutative
noetherian ring (this is where Z is crucial), which turns out to be useful in proving
vanishing theorems for local cohomology supported at ideals of minors of arbitrary
matrices. See [LSW, Theorem 1.1] for these vanishing results, that build upon the
work of Bruns and Schwänzl [BS].
2 Preliminary remarks
We summarize some notation and facts. As a reference for Koszul cohomology and
local cohomology, we mention [BH]; for more on local cohomology as a D-module,
we point the reader towards [Ly1] and [BBLSZ].
Koszul and Čech cohomology
For an element f in a commutative ring R, the Koszul complex K • ( f ; R) has a
natural map to the Čech complex C• ( f ; R) as follows:
K^•(f; R) := 0 −→ R −−f−→ R −→ 0
                  ↓            ↓ 1/f
C^•(f; R) := 0 −→ R −−−−→ R_f −→ 0.
For a sequence of elements f = f1 , . . . , fm in R, one similarly obtains
K^•(f; R) := ⊗_i K^•(f_i ; R) −→ ⊗_i C^•(f_i ; R) =: C^•(f; R),
and hence, for each n > 0, an induced map on cohomology modules
H^n(f; R) −→ H^n_{(f)}(R).    (1)
A polynomial identity via differential operators
3
Now suppose R is a polynomial ring over a field F of characteristic p > 0. The
Frobenius endomorphism ϕ of R induces an additive map
H^n_{(f)}(R) −→ H^n_{(f^p)}(R) = H^n_{(f)}(R),
where f^p = f_1^p , . . . , f_m^p . Set R{ϕ} to be the extension ring of R obtained by adjoining
the Frobenius operator, i.e., adjoining a generator ϕ subject to the relations ϕ r = r p ϕ
for each r ∈ R; see [Ly2, Section 4]. By an R{ϕ}-module we will mean a left R{ϕ}-module. The map displayed above gives H^n_{(f)}(R) an R{ϕ}-module structure. It is not
hard to see that the image of H^n(f; R) in H^n_{(f)}(R) generates the latter as an R{ϕ}-module; what is much more surprising is a result of Àlvarez, Blickle, and Lyubeznik,
[ABL, Corollary 4.4], by which the image of H^n(f; R) in H^n_{(f)}(R) generates the
latter as a D(R, F)-module; see below for the definition. The result is already notable
in the case m = 1 = n, where the map (1) takes the form
H^1(f; R) = R/fR −→ R_f /R = H^1_{(f)}(R),    [1] ↦ [1/f] .
By [ABL], the element 1/f generates R_f as a D(R, F)-module. It is of course evident that 1/f generates R_f as an R{ϕ}-module, since the elements ϕ^e(1/f) = 1/f^{p^e}
with e ≥ 0 serve as R-module generators for R_f . See [BDV] for an algorithm to
explicitly construct a differential operator δ with δ(1/f) = 1/f^{p^e} , along with a
Macaulay2 implementation.
Differential operators
Let A be a commutative ring, and x an indeterminate; set R = A[x]. The divided
power partial differential operator
    (1/k!) ∂^k/∂x^k
is the A-linear endomorphism of R with
    (1/k!) ∂^k/∂x^k (x^m) = \binom{m}{k} x^{m−k}    for m ≥ 0,
where we use the convention that the binomial coefficient \binom{m}{k} vanishes if m < k.
Note that
    (1/r!) ∂^r/∂x^r · (1/s!) ∂^s/∂x^s = \binom{r+s}{r} (1/(r+s)!) ∂^{r+s}/∂x^{r+s} .
For the purposes of this paper, if R is a polynomial ring over A in the indeterminates x1 , . . . , xd , we define the ring of A-linear differential operators on R, de-
noted D(R, A), to be the free R-module with basis
    (1/k_1!) ∂^{k_1}/∂x_1^{k_1} · · · (1/k_d!) ∂^{k_d}/∂x_d^{k_d}    for k_i ≥ 0,
with the ring structure coming from composition. This is consistent with more general definitions; see [Gr, 16.11]. By a D(R, A)-module, we will mean a left D(R, A)-module; the ring R has a natural D(R, A)-module structure, as do localizations
of R. For a sequence of elements f in R, the Čech complex C^•(f; R) is a complex of D(R, A)-modules, and hence so are its cohomology modules H^n_{(f)}(R). Note
that for m ≥ 1, one has
    (1/k!) ∂^k/∂x^k (1/x^m) = (−1)^k \binom{m+k−1}{k} · 1/x^{m+k} .
We also recall the Leibniz rule, which states that
    (1/k!) ∂^k/∂x^k (fg) = Σ_{i+j=k} (1/i!) ∂^i/∂x^i (f) · (1/j!) ∂^j/∂x^j (g).
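As a quick illustration (our own check, not part of the text), the negative-power formula above can be confirmed symbolically with sympy for small k and m:

```python
# Sketch (ours): verify (1/k!) d^k/dx^k (1/x^m) = (-1)^k C(m+k-1, k) / x^(m+k)
# for small k and m, using symbolic differentiation.
from sympy import symbols, diff, factorial, binomial, simplify

x = symbols('x')
for k in range(1, 4):
    for m in range(1, 5):
        lhs = diff(x**(-m), x, k) / factorial(k)
        rhs = (-1)**k * binomial(m + k - 1, k) * x**(-(m + k))
        assert simplify(lhs - rhs) == 0
print("divided power formula checked for small k, m")
```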
3 The identity
Let R be the ring of polynomials with integer coefficients in the indeterminates
    u  v  w
    x  y  z .
The ideal I generated by the size 2 minors of the above matrix has height 2; our
interest is in proving that the local cohomology module H^3_I(R) is a rational vector
space. We label the minors as ∆1 = vz − wy, ∆2 = wx − uz, and ∆3 = uy − vx. Fix a
prime integer p, and consider the exact sequence
    0 −→ R −−p−→ R −→ R̄ −→ 0,
where R̄ = R/pR. This induces an exact sequence of local cohomology modules
    −→ H^2_I(R) −−π−→ H^2_I(R̄) −→ H^3_I(R) −−p−→ H^3_I(R) −→ H^3_I(R̄) −→ 0.
The ring R̄/IR̄ is Cohen-Macaulay of dimension 4, so [PS, Proposition III.4.1] implies that H^3_I(R̄) = 0. As p is arbitrary, it follows that H^3_I(R) is a divisible abelian
group. To prove that it is a rational vector space, one needs to show that multiplication by p on H^3_I(R) is injective, equivalently that π is surjective. We first prove this
using the identity (2) below, and then proceed with the proof of the identity.
For each k ≥ 0, one has
    Σ_{i,j≥0} \binom{k}{i+j} \binom{k+i}{k} \binom{k+j}{k} (−wx)^i (vx)^j u^{k+1} / (∆2^{k+1+i} ∆3^{k+1+j})
    + Σ_{i,j≥0} \binom{k}{i+j} \binom{k+i}{k} \binom{k+j}{k} (−uy)^i (wy)^j v^{k+1} / (∆3^{k+1+i} ∆1^{k+1+j})
    + Σ_{i,j≥0} \binom{k}{i+j} \binom{k+i}{k} \binom{k+j}{k} (−vz)^i (uz)^j w^{k+1} / (∆1^{k+1+i} ∆2^{k+1+j}) = 0.    (2)
Since the binomial coefficient \binom{k}{i+j} vanishes if i or j exceeds k, this equation may
be rewritten as an identity in the polynomial ring Z[u, v, w, x, y, z] after multiplying
by (∆1 ∆2 ∆3 )2k+1 .
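A direct symbolic check of (2) for small k (our own sketch using sympy; the paper's proof is the differential-operator argument of the next section):

```python
# Sketch (ours): symbolic verification of identity (2) for k = 0, 1, 2.
from sympy import symbols, binomial, cancel

u, v, w, x, y, z = symbols('u v w x y z')
D1, D2, D3 = v*z - w*y, w*x - u*z, u*y - v*x

def summand(A, B, num, Da, Db, k):
    # one of the three double sums in identity (2)
    return sum(binomial(k, i + j) * binomial(k + i, k) * binomial(k + j, k)
               * A**i * B**j * num**(k + 1) / (Da**(k + 1 + i) * Db**(k + 1 + j))
               for i in range(k + 1) for j in range(k + 1))

for k in range(3):
    total = (summand(-w*x, v*x, u, D2, D3, k)
             + summand(-u*y, w*y, v, D3, D1, k)
             + summand(-v*z, u*z, w, D1, D2, k))
    assert cancel(total) == 0
print("identity (2) holds for k = 0, 1, 2")
```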
Computing H^2_I(R) as the cohomology of the Čech complex C^•(∆1 , ∆2 , ∆3 ; R),
equation (2) gives a 2-cocycle in
    C^2(∆1 , ∆2 , ∆3 ; R) = R_{∆1∆2} ⊕ R_{∆1∆3} ⊕ R_{∆2∆3} ;
we denote the cohomology class of this cocycle in H^2_I(R) by η_k . When k = p^e − 1,
one has
    \binom{k}{i+j} \binom{k+i}{k} \binom{k+j}{k} ≡ 0 mod p    for (i, j) ≠ (0, 0),
so (2) reduces modulo p to
    u^{p^e} / (∆2^{p^e} ∆3^{p^e}) + v^{p^e} / (∆3^{p^e} ∆1^{p^e}) + w^{p^e} / (∆1^{p^e} ∆2^{p^e}) ≡ 0    mod p,
and the cohomology class η_{p^e −1} has image
    π(η_{p^e −1}) = [ ( w^{p^e} / (∆1^{p^e} ∆2^{p^e}) , −v^{p^e} / (∆1^{p^e} ∆3^{p^e}) , u^{p^e} / (∆2^{p^e} ∆3^{p^e}) ) ]
in H^2_I(R̄).
Since R̄ is a regular ring of positive characteristic, H^2_I(R̄) is generated as an R̄{ϕ}-module by the image of
    H^2(∆1 , ∆2 , ∆3 ; R̄) −→ H^2_I(R̄).
The Koszul cohomology module H^2(∆1 , ∆2 , ∆3 ; R̄) is readily seen to be generated,
as an R̄-module, by elements corresponding to the relations
u∆1 + v∆2 + w∆3 = 0
and
x∆1 + y∆2 + z∆3 = 0.
These two generators of H^2(∆1 , ∆2 , ∆3 ; R̄) map, respectively, to
    α := ( w/(∆1∆2) , −v/(∆1∆3) , u/(∆2∆3) )    and    β := ( z/(∆1∆2) , −y/(∆1∆3) , x/(∆2∆3) )
in H^2_I(R̄). Thus, H^2_I(R̄) is generated over R̄ by ϕ^e(α) and ϕ^e(β) for e ≥ 0. But
    ϕ^e(α) = π(η_{p^e −1})
is in the image of π, and hence so is ϕ^e(β) by symmetry. Thus, π is surjective.
The proof of the identity
We start by observing that C^2(∆1 , ∆2 , ∆3 ; R) is a D(R, Z)-module. The element
    ( w/(∆1∆2) , −v/(∆1∆3) , u/(∆2∆3) )
is a 2-cocycle in C^2(∆1 , ∆2 , ∆3 ; R) since
    w/(∆1∆2) + v/(∆1∆3) + u/(∆2∆3) = 0.    (3)
We claim that the identity (2) is simply the differential operator
    D = (1/k!) ∂^k/∂u^k · (1/k!) ∂^k/∂y^k · (1/k!) ∂^k/∂z^k
applied termwise to (3); we first explain the choice of this operator: set k = p^e − 1,
and consider D̄ = D mod p as an element of
    D(R, Z)/pD(R, Z) = D(R/pR, Z/pZ).
It is an elementary verification that
    D(u ∆2^{p^e −1} ∆3^{p^e −1}) ≡ u^{p^e} ,    D(v ∆3^{p^e −1} ∆1^{p^e −1}) ≡ v^{p^e} ,    D(w ∆1^{p^e −1} ∆2^{p^e −1}) ≡ w^{p^e}    mod p.
Since k < p^e , the differential operator D̄ is R̄^{p^e}-linear; dividing the above equations
by ∆2^{p^e} ∆3^{p^e} , ∆3^{p^e} ∆1^{p^e} , and ∆1^{p^e} ∆2^{p^e} respectively, we obtain
    D( w/(∆1∆2) , −v/(∆1∆3) , u/(∆2∆3) ) ≡ ( w^{p^e}/(∆1^{p^e}∆2^{p^e}) , −v^{p^e}/(∆1^{p^e}∆3^{p^e}) , u^{p^e}/(∆2^{p^e}∆3^{p^e}) )    mod p,
which maps to the desired cohomology class ϕ^e(α) in H^2_I(R̄). Of course, the operator D is not unique in this regard.
Using elementary properties of differential operators recorded in §2, we have
    D( v/(∆3∆1) )
    = (1/k!) ∂^k/∂u^k · (1/k!) ∂^k/∂y^k · (1/k!) ∂^k/∂z^k [ v / ((uy − vx)(vz − wy)) ]
    = (1/k!) ∂^k/∂u^k · (1/k!) ∂^k/∂y^k [ v(−v)^k / ((uy − vx)(vz − wy)^{k+1}) ]
    = (1/k!) ∂^k/∂y^k [ v(−v)^k(−y)^k / ((uy − vx)^{k+1}(vz − wy)^{k+1}) ]
    = v^{k+1} (1/k!) ∂^k/∂y^k [ y^k / ((uy − vx)^{k+1}(vz − wy)^{k+1}) ]
    = v^{k+1} Σ_{i,j} (1/i!) ∂^i/∂y^i [ 1/(uy − vx)^{k+1} ] · (1/j!) ∂^j/∂y^j [ 1/(vz − wy)^{k+1} ] · (1/(k−i−j)!) ∂^{k−i−j}/∂y^{k−i−j} [ y^k ]
    = v^{k+1} Σ_{i,j} \binom{k+i}{i} (−u)^i/(uy − vx)^{k+1+i} · \binom{k+j}{j} w^j/(vz − wy)^{k+1+j} · \binom{k}{i+j} y^{i+j}
    = v^{k+1} Σ_{i,j} \binom{k}{i+j} \binom{k+i}{i} \binom{k+j}{j} (−uy)^i (wy)^j / (∆3^{k+1+i} ∆1^{k+1+j}) .
A similar calculation shows that
    D( w/(∆1∆2) ) = w^{k+1} Σ_{i,j} \binom{k}{i+j} \binom{k+i}{i} \binom{k+j}{j} (−vz)^i (uz)^j / (∆1^{k+1+i} ∆2^{k+1+j}) .
It remains to evaluate D( u/(∆2∆3) ); we reduce this to the previous calculation as
follows. First note that the differential operators (∂/∂u)·(∂/∂y) and (∂/∂v)·(∂/∂x) commute; it
is readily checked that they agree on u/(∆2∆3). Consequently the operators
    (1/k!) ∂^k/∂u^k · (1/k!) ∂^k/∂y^k · (1/k!) ∂^k/∂z^k    and    (1/k!) ∂^k/∂v^k · (1/k!) ∂^k/∂z^k · (1/k!) ∂^k/∂x^k
agree on u/(∆2∆3) as well. But then
    D( u/(∆2∆3) ) = (1/k!) ∂^k/∂v^k · (1/k!) ∂^k/∂z^k · (1/k!) ∂^k/∂x^k [ u / ((wx − uz)(uy − vx)) ]
which, using the previous calculation and symmetry, equals
    u^{k+1} Σ_{i,j} \binom{k}{i+j} \binom{k+i}{i} \binom{k+j}{j} (−wx)^i (vx)^j / (∆2^{k+1+i} ∆3^{k+1+j}) .
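For a small value of k, the closed form derived above can be confirmed by brute-force symbolic differentiation; the following sketch (ours, using sympy, with k = 2 chosen only for illustration) does this for D(v/(∆3∆1)):

```python
# Sketch (ours): confirm the closed form for D(v/(Delta_3 Delta_1)) for k = 2
# by applying the divided power operator directly.
from sympy import symbols, diff, factorial, binomial, cancel

u, v, w, x, y, z = symbols('u v w x y z')
D1, D2, D3 = v*z - w*y, w*x - u*z, u*y - v*x
k = 2

# the operator (1/k!)^3 * d^k/du^k d^k/dy^k d^k/dz^k applied to v/(D3*D1)
applied = diff(v / (D3 * D1), u, k, y, k, z, k) / factorial(k)**3

closed_form = v**(k + 1) * sum(
    binomial(k, i + j) * binomial(k + i, i) * binomial(k + j, j)
    * (-u*y)**i * (w*y)**j / (D3**(k + 1 + i) * D1**(k + 1 + j))
    for i in range(k + 1) for j in range(k + 1))

assert cancel(applied - closed_form) == 0
print("closed form for D(v/(D3*D1)) confirmed for k =", k)
```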
Identities in general
Suppose f = f1 , . . . , fm are elements of a polynomial ring R over Z, and g1 , . . . , gm
are elements of R such that
g1 f1 + · · · + gm fm = 0.
Then, for each prime integer p and e > 0, the Frobenius map on R̄ = R/pR gives
    g_1^{p^e} f_1^{p^e} + · · · + g_m^{p^e} f_m^{p^e} ≡ 0 mod p.    (4)
Now suppose p is a nonzerodivisor on the Koszul cohomology module H^m(f; R).
Then Theorem 1 implies that (4) lifts to an equation
    G_1 f_1^N + · · · + G_m f_m^N = 0    (5)
in R, in the sense that the cohomology class corresponding to (5) in H^{m−1}_{(f)}(R) maps
to the cohomology class corresponding to (4) in H^{m−1}_{(f)}(R̄).
Acknowledgements NSF support under grant DMS 1500613 is gratefully acknowledged. This
paper owes an obvious intellectual debt to our collaborations with Bhargav Bhatt, Manuel Blickle,
Gennady Lyubeznik, Uli Walther, and Wenliang Zhang; we take this opportunity to thank our
coauthors.
References
ABL. J. Àlvarez Montaner, M. Blickle, and G. Lyubeznik, Generators of D-modules in characteristic p > 0, Math. Res. Lett. 12 (2005), 459–473.
BBLSZ. B. Bhatt, M. Blickle, G. Lyubeznik, A. K. Singh, and W. Zhang, Local cohomology modules of a smooth Z-algebra have finitely many associated primes, Invent. Math. 197 (2014), 509–519.
BDV. A. F. Boix, A. De Stefani, and D. Vanzo, An algorithm for constructing certain differential operators in positive characteristic, Matematiche (Catania) 70 (2015), 239–271.
BH. W. Bruns and J. Herzog, Cohen-Macaulay rings, revised edition, Cambridge Stud. Adv. Math. 39, Cambridge Univ. Press, Cambridge, 1998.
BS. W. Bruns and R. Schwänzl, The number of equations defining a determinantal variety, Bull. London Math. Soc. 22 (1990), 439–445.
Gr. A. Grothendieck, Éléments de géométrie algébrique IV, Étude locale des schémas et des morphismes de schémas IV, Inst. Hautes Études Sci. Publ. Math. 32 (1967), 5–361.
Ly1. G. Lyubeznik, Finiteness properties of local cohomology modules (an application of D-modules to commutative algebra), Invent. Math. 113 (1993), 41–55.
Ly2. G. Lyubeznik, F-modules: applications to local cohomology and D-modules in characteristic p > 0, J. Reine Angew. Math. 491 (1997), 65–130.
LSW. G. Lyubeznik, A. K. Singh, and U. Walther, Local cohomology modules supported at determinantal ideals, J. Eur. Math. Soc. 18 (2016), 2545–2578.
PS. C. Peskine and L. Szpiro, Dimension projective finie et cohomologie locale, Inst. Hautes Études Sci. Publ. Math. 42 (1973), 47–119.
PWZ. M. Petkovšek, H. S. Wilf, and D. Zeilberger, A = B, with a foreword by Donald E. Knuth, A K Peters Ltd., Wellesley, MA, 1996.
Si. A. K. Singh, p-torsion elements in local cohomology modules, Math. Res. Lett. 7 (2000), 165–176.
| 0 |
Debiasing the Debiased Lasso with Bootstrap
Sai Li∗
arXiv:1711.03613v1 [] 9 Nov 2017
Department of Statistics and Biostatistics, Rutgers University
November 13, 2017
Abstract
In this paper, we prove that under proper conditions, bootstrap can further debias the
debiased Lasso estimator for statistical inference of low-dimensional parameters in highdimensional linear regression. We prove that the required sample size for inference with
bootstrapped debiased Lasso, which involves the number of small coefficients, can be of smaller
order than the existing ones for the debiased Lasso. Therefore, our results reveal the benefits
of having strong signals. Our theory is supported by results of simulation experiments, which
compare coverage probabilities and lengths of confidence intervals with and without bootstrap,
with and without debiasing.
1 Introduction
High-dimensional linear regression is a highly active area of research in statistics and machine
learning. When the dimension p of the model is larger than the sample size n, regularized least square
estimators are typically used when the signal is believed to be sparse. Properties of regularized least
square estimators in prediction, coefficient estimation and variable selection have been extensively
studied. However, regularized methods do not directly provide valid inference procedures, such as
confidence intervals and hypothesis testing.
Among the regularized regression procedures, the Lasso (Tibshirani, 1996) is one of the most
popular methods as it is computationally manageable and theoretically well-understood. However,
the limiting distribution of the Lasso estimator (Knight and Fu, 2000) depends on unknown
parameters in low-dimensional settings and is not available in high-dimensional settings. Chatterjee
and Lahiri (2010) showed the inconsistency of residual bootstrap for the Lasso if at least one true
coefficient is zero in fixed-dimensional settings. Thus, there is substantial difficulty in drawing valid
inference based on the Lasso estimates directly.
1.1 Debiased Lasso
In the p ≫ n scenario, Zhang and Zhang (2014) proposed to construct confidence intervals for
regression coefficients and their low-dimensional linear combinations by “debiasing” regularized
estimators, such as the Lasso. Such estimators are known as the “debiased Lasso” or the “desparsified Lasso”.
∗ Email: [email protected]
Along this line of research, many recent papers study computational algorithms and theories
for the debiased Lasso and its extensions beyond linear models. Van de Geer et al. (2014) proved
asymptotic efficiency of the debiased Lasso estimator in linear models and for convex loss functions.
Javanmard and Montanari (2014a) carefully studied a quadratic programming in Zhang and Zhang
(2014) to generate a direction for debiasing the Lasso and demonstrated its benefits. Jankova and
Van De Geer (2015) and Ren et al. (2015) proved asymptotic efficiency of the debiased Lasso in
estimating individual entries of a precision matrix. Mitra and Zhang (2016) proposed to debias a
scaled group Lasso for chi-squared-based statistical inference for large variable groups. Fang et al.
(2016) considered statistical inference with the debiased Lasso in high-dimensional Cox model.
Chernozhukov et al. (2017) studied debiased method in a semiparametric model with machine
learning approaches.
The sample size requirement for asymptotic normality in aforementioned papers is typically
n ≫ (s log p)^2 , where s is the number of nonzero regression coefficients. However, it is known that
point estimation consistency of the Lasso estimators holds with n ≫ s log p. Therefore, it becomes
an intriguing question whether it is possible to conduct statistical inference of individual coefficients
in the regime s log p ≪ n ≲ (s log p)^2 .
Very little work has been done in this direction. Cai and Guo (2017) proved that adaptivity
in s is infeasible for statistical inference with random design when n ≲ (s log p)^2 in a minimax
sense. However, for standard Gaussian design, Javanmard and Montanari (2014b) proved that the
debiased estimator is asymptotically Gaussian in an average sense if s = O(n/ log p) with s/p,
n/p constant, but they did not provide theoretical results when the covariance of the design is
unknown. Javanmard and Montanari (2015), denoted as [JM15], proved that asymptotic normality
for the debiased Lasso holds when s ≪ n/(log p)^2 , sj ≪ n/ log p and min{s, sj } ≪ √n/ log p under
Gaussian design and other technical conditions, where sj is the number of nonzero elements in j-th
column of the precision matrix of the design. In this paper, we show that the sample size conditions
for the debiased Lasso can be improved by bootstrap if a significant proportion of signals are strong,
for both deterministic and random designs.
1.2
Bootstrap
Bootstrap has been widely studied in high-dimensional models for conducting inference. For the
debiased Lasso procedure, Zhang and Cheng (2017) proposed a Gaussian bootstrap method to
conduct simultaneous inference with non-Gaussian errors. Dezeure et al. (2016) proposed residual,
paired and wild multiplier bootstrap for the debiased Lasso estimators, which demonstrates the
benefits of bootstrap for heteroscedastic errors as well as simultaneous inference. However, the
aforementioned papers do not provide improvement on the sample size conditions.
For fixed number of covariates p, Chatterjee and Lahiri (2011) proposed to apply bootstrap to a
modified Lasso estimator as well as to the Adaptive Lasso estimator (Zou, 2006). In a closely related
paper, Chatterjee and Lahiri (2013) showed the consistency of bootstrap for Adaptive Lasso when
p increases with n under some conditions which guarantee sign consistency. They also proved the
second-order correctness for a studentized pivot with a bias-correction term. It is worth mentioning
that a beta-min condition is required in their theorems as sign consistency is used to prove bootstrap
consistency.
In this paper, we prove that the bias of the debiased Lasso estimator can be further removed by
bootstrap without assuming the beta-min condition. We provide a refined analysis to distinguish
the effects of small and large coefficients and show that bootstrap can remove the bias caused
by strong coefficients. Under deterministic designs, the sample size requirement is reduced to
n ≫ max{s log p, (s̃ log p)^2 }, where s̃ is the number of coefficients whose size is no larger than
√(C log p/n) for some constant C > 0 (Theorem 3.4 in Section 3). One can see that the condition on
the overall sparsity s, s ≪ n/ log p, provides the rate of point estimation. If a majority of signals are
strong, say s̃ ≪ s, our sample size condition is weaker than the usual n ≫ (s log p)^2 . Comparable
results are also proved for Gaussian designs, which involve the sparsity of the j-th column of the
precision matrix (Theorem 4.3 in Section 4).
1.3 Some other related literature
In the realm of high-dimensional inference, many other topics have been studied. For bootstrap
theories, Mammen (1993) considered estimating the distribution of linear contrasts and of F-test
statistics when p increases with n. Chernozhukov et al. (2013) and Deng and Zhang (2017) developed
theories for multiplier bootstrap to approximate the maximum of a sum of high-dimensional random
vectors. Belloni et al. (2014) proposed to construct confidence regions for instrumental median
regression estimator and other Z-estimators based on Neyman’s orthogonalization, which is firstorder equivalent to the bias correction. Inference based on selected model has been considered in
many recent papers (Berk et al., 2013; Lockhart et al., 2014; Lee et al., 2016; Tibshirani et al.,
2016). Barber and Candes (2016) considered false discovery rate control via a knockoff filter in
high-dimensional setting.
1.4 Notations
For vectors u and v, let ‖u‖_q denote the ℓ_q norm of u, ‖u‖_0 the number of nonzero entries of u,
⟨v, u⟩ = u^T v the inner product. For a set T , let |T | denote the cardinality of T and u_T the subvector
of u with components in T . We use e_j to refer to the j-th standard basis element, for example,
e_1 = (1, 0, . . . , 0). For a matrix A ∈ R^{k1×k2} , let ‖A‖_q denote the ℓ_q operator norm of A. In particular,
‖A‖_∞ = max_{j≤k1} ‖A_{j,·}‖_1 . Let Λ_max(A) and Λ_min(A) be the largest and smallest singular values
of A, and A_{T1,T2} the submatrix of A consisting of rows in T_1 and columns in T_2 . For a vector b ∈ R^p ,
let sgn(b) be an element of the sub-differential of the ℓ_1 norm of b. Specifically, (sgn(b))_j = b_j/|b_j|
if b_j ≠ 0 and (sgn(b))_j ∈ [−1, 1] if b_j = 0. We use a_n ≪ b_n to refer to |a_n/b_n| → 0. The notation
a_n ≫ b_n is defined analogously. Let Φ(x) = P(Z ≤ x), where Z is a standard normal random
variable.
We use v0 , v1 , v2 , c1 , c2 , . . . to denote generic constants which may vary from one appearance to
the other.
1.5 Outline of the paper
The remainder of this paper is organized as follows. In Section 2, we describe the procedure under
consideration and layout the main ideas of the proof. In Section 3 and 4, we prove the bootstrap
consistency for the debiased Lasso and the asymptotic normality of a bias-corrected debiased Lasso
estimator under fixed designs and Gaussian designs, respectively. We illustrate our theoretical
results with simulation experiments in Section 5 and conclude with a discussion in Section 6. Proofs
of the main theorems and lemmas are provided in Section 7.
2 Main contents
In this section, we describe the procedure of bootstrapping the debiased Lasso under consideration
and the main ideas of this paper.
3
2.1 Bootstrapping the debiased Lasso
Consider a linear regression model
    y_i = x_i^T β + ε_i ,
where β ∈ R^p is the true unknown parameter and ε_1 , . . . , ε_n are i.i.d. random variables with mean 0
and variance σ^2 . We assume the true β is sparse in the sense that the number of nonzero entries of β
is relatively small compared with min{n, p}. For simplicity, we also assume that the x_j 's are normalized,
s.t. ‖x_j‖_2^2 = n, for j = 1, . . . , p.
The Lasso estimator (Tibshirani, 1996) is defined as
    β̂ = arg min_{b∈R^p} (1/2n) ‖y − Xb‖_2^2 + λ‖b‖_1 ,    (1)
where λ > 0 is a tuning parameter.
Suppose that we are interested in making inference of a single coordinate βj , j = 1, . . . , p. The
debiased Lasso (Zhang and Zhang, 2014) corrects the Lasso estimator by a term calculated from
residuals. Specifically, it takes the form
    β̂_j^{(DB)} = β̂_j + z_j^T (y − X β̂) / (z_j^T x_j) ,    (2)
where zj is an estimate of the least favorable direction (Zhang, 2011). For the construction of zj , it
can be computed either as the residual of another `1 -penalized regression of xj on X−j (Zhang and
Zhang, 2014; Van de Geer et al., 2014) or by a quadratic optimization (Zhang and Zhang, 2014;
Javanmard and Montanari, 2014a). We adopt the first procedure in this paper. Formally,
    z_j = x_j − X_{−j} γ̂_{−j} ,    (3)
with
    γ̂_{−j} = arg min_{γ_{−j} ∈ R^{p−1}} (1/2n) ‖x_j − X_{−j} γ_{−j}‖_2^2 + λ_j ‖γ_{−j}‖_1 ,    λ_j > 0.    (4)
While it is also possible to debias other regularized estimators of β, such as Dantzig selector (Candes
and Tao, 2007), SCAD (Fan and Li, 2001) and MCP (Zhang, 2010), we restrict our attention to
bootstrapping the debiased Lasso.
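To make the construction concrete, here is a minimal sketch (ours, not the authors' code) of (1)-(4) using scikit-learn; it assumes the columns of X are normalized as above, and the helper names and tuning values are placeholders:

```python
# Minimal sketch (ours) of the debiased Lasso (2) with the nodewise
# construction (3)-(4) of z_j; lambda values are illustrative only.
import numpy as np
from sklearn.linear_model import Lasso

def lasso(X, y, lam):
    # (1): argmin_b ||y - Xb||_2^2 / (2n) + lam * ||b||_1
    # (sklearn's Lasso uses exactly this 1/(2n) scaling)
    fit = Lasso(alpha=lam, fit_intercept=False, max_iter=100000).fit(X, y)
    return fit.coef_

def debiased_lasso(X, y, j, lam, lam_j):
    beta_hat = lasso(X, y, lam)
    X_minus_j = np.delete(X, j, axis=1)
    gamma_hat = lasso(X_minus_j, X[:, j], lam_j)          # (4)
    z_j = X[:, j] - X_minus_j @ gamma_hat                  # (3)
    correction = z_j @ (y - X @ beta_hat) / (z_j @ X[:, j])
    return beta_hat[j] + correction, beta_hat, z_j         # (2)
```

In the theory below, λ and λ_j are taken of order √(log p/n).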
We consider Gaussian bootstrap although the noise ε_i are not necessarily assumed to be normally
distributed. We generate the bootstrapped response vector as
    y_i^∗ = x_i^T β̂ + ε̂_i^∗ ,    i = 1, . . . , n,    (5)
where x_i are unchanged and ε̂_i^∗ are i.i.d. standard Gaussian random variables multiplied by an
estimated standard deviation σ̂. Namely,
    ε̂_i^∗ = σ̂ ξ_i ,    i = 1, . . . , n,    (6)
where the ξ_i are i.i.d. standard normal. For the choice of variance estimator, we use
    σ̂^2 = ‖y − X β̂‖_2^2 / (n − ‖β̂‖_0)    (7)
(Sun and Zhang, 2012; Reid et al., 2016; Zhang and Cheng, 2017; Dezeure et al., 2016). This is the
same proposal of bootstrapping the residuals as in Zhang and Cheng (2017). However, we do not
directly use ˆ∗i in (6) to simulate the distribution of the debiased estimator. Instead, we recompute
the debiased Lasso based on (X, y ∗ ) as follows:
    β̂_j^{(∗,DB)} = β̂_j^∗ + z_j^T (y^∗ − X β̂^∗) / (z_j^T x_j) ,    (8)
where zj is the same as the sample version in (3) and β̂ ∗ is the bootstrap version of the Lasso
estimator computed via (1) with (X, y ∗ ) instead of the original sample.
We construct the confidence interval for βj as
    ( β̂_j^{(DB)} − q_{1−α/2}(β̂_j^{(∗,DB)} − β̂_j) ,  β̂_j^{(DB)} − q_{α/2}(β̂_j^{(∗,DB)} − β̂_j) ) ,    (9)
where qc (u) is the c-quantile of the distribution of u.
We prove that under proper conditions, the approximation error of the debiased Lasso estimator
β̂_j^{(DB)} in (2) is dominated by a constant term. We propose to estimate this dominating constant
bias by the median of the bootstrapped approximation errors and construct a double debiased Lasso
(DDB) estimator
    β̂_j^{(DDB)} = β̂_j^{(DB)} − median( β̂_j^{(∗,DB)} − β̂_j ) ,    (10)
which is asymptotically normal under proper conditions.
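The bootstrap steps (5)-(10) can be sketched in the same style (again ours; it reuses lasso and debiased_lasso from the previous sketch, and the number of replications B is an arbitrary choice):

```python
# Sketch (ours) of the Gaussian bootstrap for the debiased Lasso, following
# (5)-(10): variance estimate (7), B bootstrap replications of the debiased
# estimator (8), percentile interval (9), and the DDB estimator (10).
import numpy as np

def bootstrap_debiased(X, y, j, lam, lam_j, B=500, alpha=0.05, seed=0):
    rng = np.random.default_rng(seed)
    n, p = X.shape
    dbeta_j, beta_hat, z_j = debiased_lasso(X, y, j, lam, lam_j)
    s_hat = np.count_nonzero(beta_hat)
    sigma_hat2 = np.sum((y - X @ beta_hat) ** 2) / (n - s_hat)          # (7)
    draws = np.empty(B)
    for b in range(B):
        y_star = X @ beta_hat + np.sqrt(sigma_hat2) * rng.standard_normal(n)  # (5)-(6)
        beta_star = lasso(X, y_star, lam)
        # (8): the direction z_j from the original sample is reused
        dbeta_star = beta_star[j] + z_j @ (y_star - X @ beta_star) / (z_j @ X[:, j])
        draws[b] = dbeta_star - beta_hat[j]
    lo, hi = np.quantile(draws, [alpha / 2, 1 - alpha / 2])
    ci = (dbeta_j - hi, dbeta_j - lo)                                    # (9)
    ddb_j = dbeta_j - np.median(draws)                                   # (10)
    return ddb_j, ci
```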
2.2 Main ideas
Our analysis is based on a different error decomposition for the debiased Lasso from the one originally
introduced. In Zhang and Zhang (2014), the error of the debiased Lasso is decomposed into two
terms, a noise term and a remainder term:
    β̂_j^{(DB)} − β_j = \underbrace{ z_j^T ε / (z_j^T x_j) }_{Orig.noise} − \underbrace{ ( e_j^T − z_j^T X/(z_j^T x_j) ) (β̂ − β) }_{Orig.remainder} .    (11)
This is the starting point of many existing analysis of the debiased Lasso (Van de Geer et al., 2014;
Javanmard and Montanari, 2014a; Dezeure et al., 2016). Typically, the Orig.remainder term is
bounded by O_P(sλλ_j) through an ℓ_∞-ℓ_1 splitting with λ_j in (4).
Our analysis is motivated by the following observations. Let S and Ŝ be the support of β and
β̂ respectively. It follows from the KKT condition of Lasso (1) that
    β̂_Ŝ = β_Ŝ + ( (1/n) X_Ŝ^T X_Ŝ )^{−1} (1/n) X_Ŝ^T ε − λ ( (1/n) X_Ŝ^T X_Ŝ )^{−1} sgn(β̂_Ŝ)    and    β̂_{Ŝ^c} = 0,
assuming that XŜT XŜ /n is invertible. Our idea is to approximate β̂ by an oracle estimator β̂ o in
the analysis, where
−1
−1
1 T
1 T
1 T
β̂So = βS +
XS XS
XS − λ
XS XS
sgn(βS ) and β̂So c = 0,
(12)
n
n
n
when XST XS /n is invertible. This estimator β̂ o is oracle as it requires the knowledge of the true
support of β. However, it is different from the oracle least square estimator as the last term in β̂So
is added to mimic the Lasso estimator. In fact, β̂ o = β̂ when the Lasso estimator is sign consistent.
5
Inference based on the oracle estimator β̂ o (12) is relatively easy, because its approximation
error does not involve random support selection. In fact, its approximation error is linear in ε with
an unknown intercept. Our idea is that when the difference between the oracle estimator β̂ o and
the Lasso estimator β̂ is small, the approximation error of the debiased Lasso in (11) is dominated
by a bias term associated with this intercept. Therefore, bootstrap can be used to remove this main
bias term. Specifically, we decompose the error of the debiased Lasso in (2) as
    β̂_j^{(DB)} − β_j = ( z_j^T ε/(z_j^T x_j) − ( e_j^T − z_j^T X/(z_j^T x_j) ) (β̂^o − β) ) − ( e_j^T − z_j^T X/(z_j^T x_j) ) (β̂ − β̂^o)
    = \underbrace{ z_j^T ε/(z_j^T x_j) − ( e_j^T − z_j^T X/(z_j^T x_j) )_S ( (1/n) X_S^T X_S )^{−1} (1/n) X_S^T ε }_{Noise}
    + \underbrace{ λ ( e_j^T − z_j^T X/(z_j^T x_j) )_S ( (1/n) X_S^T X_S )^{−1} sgn(β_S) }_{Bias} + \underbrace{ ( e_j^T − z_j^T X/(z_j^T x_j) ) (β̂^o − β̂) }_{Remainder} .    (13)
Here the N oise term is the sum of the Orig.noise in (11) and a noise term associated with the
oracle estimator β̂ o in (12). The Bias term is from the intercept of the oracle estimator β̂ o in (12),
which is a constant of order OP (sλλj ) (Remark 3.1 in Section 3). The Remainder term arises from
the difference between the oracle estimator β̂ o and the Lasso estimator β̂ in (1). We prove that the
consistency of bootstrap when the Remainder is of order o(n−1/2 ), even if the Bias term is of larger
order than n−1/2 . The error decomposition in (13) will demonstrate benefits over the decomposition
in (11) when the Remainder term in (13) is of smaller order than the Orig.remainder term in (11).
One way to bound the Remainder term in (13) is by considering the event that the selected
support by Lasso is inside the true support and the an `∞ -bound exists for its estimation error:
n
o
Ω0 = Ŝ ⊆ S and kβ̂ − βk∞ ≤ Cn,p λ, Cn,p > 0 .
(14)
Recall that when the Lasso estimator β̂ is sign consistent, β̂ o = β̂ and Remainder in (13) is zero.
Let S̃ be a set of “small” coefficients, such that S̃ = {j : 0 < |βj | ≤ Cn,p λ}. In Ω0 , we can get
sgn(βj ) = sgn(β̂j ), for j ∈ S\S̃. And hence the sign inconsistency only occurs on S̃. Formally,
ksgn(β̂S ) − sgn(βS )k1 ≤ 2|S̃|.
(15)
We show that the Remainder term in (13) is associated with the order of |S̃|. This leads to the
improvement in sample size requirement when |S̃| is of smaller order than |S|.
3 Main results: deterministic design
In this section, we carry out detailed analysis for deterministic designs. We first provide sufficient
conditions for our theorems. For ease of notation, let Σn = X T X/n.
Condition 3.1. X is deterministic with
Λmin (ΣnS,S ) = Cmin > 0 and
6
max |Xi,j | ≤ K0 .
i≤n,j≤p
Condition 3.2.
ΣnS c ,S (ΣnS,S )−1
∞
≤ κ < 1.
Condition 3.3.
(ΣnS,S )−1
∞
≤ K1 < ∞.
Condition 3.4. i , i = 1, . . . , n, are i.i.d. random variables from a distribution with E[1 ] =
0, E[21 ] = σ 2 and E[|1 |4 ] ≤ M0 .
Condition 3.5. For any j ≤ p, kzj k44 = o(kzj k42 ) and kzj k22 /n ≥ K2 > 0.
As K1 is assumed to be a constant in Condition 3.3, the eigenvalue condition in Condition 3.1 is
redundant in the sense that Λmin (ΣnS,S ) ≥ 1/K1 . Note that the eigenvalue condition and Condition
3.3 are only required on a block of the Gram matrix consisting of rows and columns in the true
support. The quantity in Condition 3.2 is called incoherence parameter (Wainwright, 2009). This
condition is equivalent to the uniformity of the strong irrepresentable condition (Zhao and Yu, 2006)
over all sign vectors. Another related condition, the neighborhood stability condition (Meinshausen
and Bühlmann, 2006), has been studied for model selection in Gaussian graphical models. Condition
3.3 is required for establishing an `∞ -bound of estimation error of the Lasso estimator. Condition
3.4 involves only first four moments of allowing some heavy-tailed distributions. Condition 3.5
contains some regularity conditions on zj , which are verifiable after the calculation of zj .
3.1 Preliminary lemmas
We first prove that event Ω0 in (14) holds true with large probability for deterministic designs.
Lemma 3.1. Suppose that Conditions 3.1 - 3.4 are satisfied and (n, p, s, λ) satisfies that
r
32σ 2
2 log p
16σ
n≥ 2
and λ >
.
2
λ (1 − κ)
1−κ
n
Then it holds that
(16)
r
2 log p
Ŝ ⊆ S and kβ̂S − βS k∞ ≤ K1 λ + 8σ
,
Cmin n
{z
}
|
(17)
g1 (λ)
with probability greater than 1 − 4 exp(−c1 log p) − c2 /n for some c1 , c2 > 0.
Lemma 3.1 is proved in Section 7.1. Lemma 3.1 asserts that the Lasso estimator does not have
false positive selection with large probability under Conditions 3.1 - 3.4. It is known that Condition
3.3 and beta-min condition together imply the selection consistency of the Lasso estimator. However,
we do not impose the beta-min
condition but distinguish the effects of small and large signals. Note
p
that g1 (λ) λ for λ log p/n.
Next we show that analogous results of Lemma 3.1 hold for the bootstrap version of the Lasso
estimator β̂ ∗ .
Lemma 3.2. Assume that Conditions 3.1 - 3.4 are satisfied. If (n, p, s, λ) satisfies (16), n s log p
and
r
p
2 log p
4σ
≤ λ log p/n,
(18)
1−κ
n
then with probability going to 1,
r
2 log p
∗
∗
Ŝ ⊆ S and kβ̂S − β̂S k∞ ≤ K1 λ + 2σ
.
(19)
Cmin n
|
{z
}
g10 (λ)
7
Lemma 3.2 is proved in Section 7.2. We mention that the condition n s log p is required for
the consistency of σ̂ 2 (see Lemma A.4 for details). Note that it is Ŝ instead of S that is the true
support under the bootstrap resampling proposal and Ŝ ⊆ S with large probability by Lemma 3.1.
However, (19) is sufficient for the error of the bootstrapped debiased Lasso to approximate the error
decomposition in (13), which can be seen from the next Lemma.
In the following lemma, we consider the error decomposition in (13) and bound the Remainder
term for the debiased Lasso and its bootstrap analogue. Let N oise∗ be the bootstrap version of
N oise, which is
!
−1 T ∗
zjT ˆ∗
zjT X
XS ˆ
1 T
∗
T
N oise = T − ej − T
XS XS
.
(20)
n
n
zj x j
zj xj
S
Let s̃ be the number of small coefficients, such that
s̃ = j : 0 < |βj | < g1 (λ) + g10 (λ)
,
(21)
for g1 (λ) and g10 (λ) defined in (17) and (19) respectively.
Lemma 3.3 (Bounding the remainder terms). Suppose that Conditions 3.1 - 3.4 hold true, λ
p
(DB)
(∗,DB)
log p/n satisfies (16) and (18) and n s log p. For β̂j
and β̂j
defined in (2) and (8)
respectively, we have
!
s̃λλj
(DB)
= o(1),
(22)
P β̂j
− βj − N oise − Bias > 2K1 T
zj xj /n
!
s̃λλ
j
(∗,DB)
P β̂j
− β̂j − N oise∗ − Bias > 2K1 T
= o(1),
(23)
zj xj /n
where N oise and Bias are defined in (13), N oise∗ is defined in (20) and s̃ is defined in (21).
Lemma 3.3 is proved in Section 7.3. The factor zjT xj /n is calculable and can be treated as a
positive constant typically. In fact, this factor is proportional to the standard deviation of N oise
and N oise∗ , so that it will be cancelled in the analysis of the asymptotic normality. Therefore, we
have proved that the Remainder term in (13) is of order OP (s̃λλj ).
p
Remark 3.1. Under Conditions 3.1 - 3.4 and λ λj log p/n, we can get a natural upper
bound on Bias in (13):
!
sλλj
s log p
Bias = OP
= OP
= oP (1).
n
(zjT xj /n)Cmin
Note that the order of Bias is not guaranteed to be o(n−1/2 ) under the sample size conditions of
Lemma 3.3. There will be no guarantee of improvement on the sample size requirement if we do not
remove the Bias term.
3.2 Consistency of bootstrap approximation
Inference for βj is based on the following pivotal statistics
Rj =
zjT xj (DB)
zjT xj (∗,DB)
(β̂j
− βj ) and Rj∗ =
(β̂
− β̂j ),
kzj k2
kzj k2 j
8
(24)
where β̂ (DB) and β̂ (∗,DB) are defined in (2) and (8) respectively. We show the consistency of
bootstrap approximation of Rj∗ to Rj as well as the asymptotic normality of a pivot based on the
(DDB)
in (10):
(DDB)
=
double debiased Lasso estimator β̂j
Rj
zjT xj
(DDB)
(β̂j
− βj ).
σ̂kzj k2
(25)
We specify the sample size conditions as following:
n
p
A1 = (n, p, s, s̃, λ, λj ) : (n, p, s, λ) satisfies (16) and (18), λ λj log p/n and
o
n max{s log p, (s̃ log p)2 } for s̃ in (21) .
(26)
As discussed in Section 1, the condition on the overall sparsity recovers the rate of point estimation.
If s̃ s, our sample size condition is weaker than the typical one n (s log p)2 .
Theorem 3.4. Assume that Conditions 3.1 - 3.5 hold true and (n, p, s, s̃, λ, λj ) ∈ A1 . Then for Rj
and Rj∗ defined in (24), it holds that
sup
P{Rj ≤ qα (Rj∗ )} − α = oP (1),
α∈(0,1)
(DDB)
where qα (Rj∗ ) is the α-quantile of the distribution of Rj∗ . For Rj
sup
α∈(0,1)
(DDB)
P(Rj
defined in (25),
≤ zα ) − α = oP (1).
Theorem 3.4 is proved in Section 7.4. Based on Theorem 3.4, a two-sided 100 × (1 − α)%
confidence interval for βj can be constructed as in (9).
For the double debiased estimator (10), the Biasin (48) is estimated
by the median of the
(∗,DB)
(∗,DB)
distribution of β̂j
− β̂j . In practice, the median β̂j
− β̂j can be approximated by the
sample median of bootstrap realizations.
Remark 3.2. Suppose we are interested in making inference for a linear combination of regression
coefficients ha0 , βi for a0 ∈ Rp . It is not hard to see that Gaussian bootstrap remains consistent
under the conditions of Theorem 3.4 if ka0 k1 /ka0 k2 is bounded.
4 Main results: Gaussian designs
This section includes main results in the case of Gaussian design. The proof follows similar steps
as for deterministic designs. We first describe conditions we impose in our theorems.
Condition 4.1. X has independent Gaussian rows with mean 0 and covariance Σ.
Condition 4.2.
ΣS c ,S Σ−1
S,S
Condition 4.3.
−1/2 2
ΣS,S
∞
∞
≤ κ < 1.
≤ K1 < ∞.
9
Condition 4.4.
Λmin (ΣS,S ) ≥ Cmin , 1/ max(Σ−1 )j,j ≥ C∗ and max Σj,j ≤ C ∗ < ∞.
j≤p
j≤p
Condition 4.5. i , i = 1, . . . , n, are i.i.d from Gaussian distribution with mean 0 and variance σ 2 .
Condition 4.2 - Condition 4.3 are population versions of Condition 3.2 - Condition 3.3. In
Condition 4.4, we require that the largest diagonal element of Σ−1 is upper bounded, in order to
lower bound kzj k22 /n asymptotically. It is also worth mentioning that Condition 4.3 is related to
condition (iii) in [JM15]. Specifically, [JM15] required that
ρ(Σ, C0 s0 ) =
max
T ⊆[p],|T |≤C0 s
Σ−1
T,T
∞
< ρ, for C0 ≥ 33.
This condition is on a set T , which is actually the support of the estimation error of a perturbed
Lasso estimator, while Condition 4.3 is assumed on the true support S.
4.1 Preliminary lemmas
Lemma 4.1. Assume that Conditions 4.1 - 4.5 are satisfied and (n, p, s, λ) satisfies
    sλ² ≤ σ²Cmin/2 and λ ≥ (8σ/(1 − κ))√(C^∗ log p/n).    (27)
Then for
    g2(λ) = (1 + Cn)K1λ + 4σ√(log p/(Cmin n)), with Cn = O((s ∨ √(s log p))/√n),    (28)
it holds that
    Ŝ ⊆ S and ‖β̂S − βS‖∞ ≤ g2(λ),    (29)
with probability greater than 1 − c1/n − 2 exp(−c2 log p) − 2 exp(−c3 n) for some c1, c2, c3 > 0.
Lemma 4.1 is proved in Section 7.5. Note that g2(λ) = o(1) if n ≫ s log p. If s² ∨ s log p = O(n),
then g2(λ) = O(λ), which is the same as in the deterministic design case. Theorem 3 in Wainwright (2009)
considers the same scenario, but their results require s → ∞ and their upper bound on ‖β̂ − β‖∞
only holds in the sign consistency case.
In the next lemma, we prove a bootstrap analogue of Lemma 4.1.
Lemma 4.2. Assume that Conditions 4.1 - 4.5 hold true. If (n, p, s, λ) satisfies (27), n ≫ s log p and
    (4σ/(1 − κ))√(log p/n) ≤ λ ≲ √(log p/n),    (30)
then with probability going to 1,
    Ŝ* ⊆ S and ‖β̂*S − β̂S‖∞ ≤ g2(λ), for g2(λ) in (28).    (31)
Lemma 4.2 is proved in Section 7.6. As in the deterministic design case, the condition
n ≫ s log p is required for the consistency of σ̂².
4.2 Consistency of bootstrap approximation
Under Conditions 4.1 - 4.5, we prove the consistency of Gaussian bootstrap under Gaussian designs.
For g2(λ) defined in (28), define
    s̃ = |{j : 0 < |βj| < 2g2(λ)}|.    (32)
We specify the required sample size condition as follows:
    A2 = { (n, p, s, s̃, sj, λ, λj) : (n, p, s, λ) satisfies (27) and (30), λ ≍ λj ≍ √(log p/n) and
          n ≫ max{ss̃ log p, (s̃ log p)², sj log p} for s̃ in (32) }.    (33)
Theorem 4.3. Suppose that Conditions 4.1 - 4.5 are satisfied and (n, p, s, s̃, sj, λ, λj) ∈ A2. Then
it holds that
    sup_{α∈(0,1)} |P{Rj ≤ qα(Rj*)} − α| = oP(1).
For Rj^(DDB) defined in (25),
    sup_{α∈(0,1)} |P(Rj^(DDB) ≤ zα) − α| = oP(1).
It can be seen from the proof (Section 7.7) that the condition n ≫ ss̃ log p in (33) is used to achieve
the desired rates for |Remainder| and its bootstrap analogue, such that ‖(ΣnS,S)⁻¹‖∞ s̃λ² = oP(n^(−1/2)).
The condition n ≫ sj log p is required to prove that ‖zj‖₂²/n is asymptotically bounded away from
zero.
In terms of the sparsity requirements, A2 (33) implies that it is sufficient to require s = O(√n)
and s̃ = o(√n/log p). Compared to the typical condition, s = o(√n/log p), our condition allows at
least an extra order of log p. Moreover, if s̃ is constant, our requirement on s is s ≪ n/log p, which
recovers the rate of point estimation. Comparing with the sparsity condition assumed in [JM15] for
the unknown Gaussian design case, our analysis still benefits when s̃ is sufficiently small:
• If the sparsity of the j-th column of the precision matrix is much larger than the sparsity of β, i.e.
s̃ ≤ s ≪ sj, [JM15] required n ≫ max{(s log p)², sj log p}, which is no better than the rate in
A2 (33) as discussed above. If s̃ ≪ s, A2 is weaker than the sparsity conditions assumed in
[JM15].
• If the j-th column of the precision matrix is much sparser, i.e. s ≫ sj, [JM15] required
that n ≫ max{s(log p)², (sj log p)²}. If s̃ ≪ log p, then ss̃ log p ≪ s(log p)² and hence the
sample size condition in A2 is weaker. If s̃ ≫ log p, [JM15] required a weaker condition on s
but a stronger condition on sj.
5 Simulations
In this section, we report the performance of the debiased Lasso with Gaussian bootstrap and other
comparable methods in simulation experiments.
Consider the deterministic design case with n = 100, p = 500, Xi ∼ N(0, Ip) and εi ∼ N(0, 1). We
consider a relatively large sparsity level, s = 20, and two levels of true regression coefficients as
follows.
(i) All the signals are strong: β1 = · · · = β20 = 2.
(ii) A large proportion of signals are strong: β1 = · · · = β5 = 1, β6 = · · · = β20 = 2.
We compare the performance of bootstrapping the debiased Lasso (BS-DB), the debiased Lasso
without bootstrap (DB) and the Adaptive Lasso with residual bootstrap (BS-ADP). For BS-DB,
we generate 100 × (1 − α)% confidence intervals (CIs) according to (9) with 500 bootstrap resamples. We
take λ = λj at the universal level for the Lasso procedures. For DB, we estimate the noise level by
(7) and take λ = λj at the universal level for the Lasso procedures. 100 × (1 − α)% confidence intervals
are generated according to
    ( β̂j^(DB) + σ̂ zα/2 ‖zj‖2/(zjᵀxj),  β̂j^(DB) + σ̂ z1−α/2 ‖zj‖2/(zjᵀxj) ).
For BS-ADP, we consider the pivot defined in (4.2) of Chatterjee and Lahiri (2013), which can
achieve second-order correctness under some conditions. Such estimators also have a bias-correction
term, which can be explicitly calculated assuming sign consistency. The choices of λ1,n and λ2,n are
according to Section 6 of Chatterjee and Lahiri (2013). Each confidence interval is generated with
500 bootstrap resamples.
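For readers who want to reproduce a setting like the one above, the sketch below outlines one way to form BS-DB-style intervals with the Gaussian bootstrap; it is not the authors' code. The score vector zj is assumed to be precomputed from the debiasing step, scikit-learn's Lasso is used as a stand-in for the Lasso solver, the universal-level choice of λ is only illustrative, and the percentile-type inversion of the pivot Rj* stands in for the exact construction in (9).

```python
import numpy as np
from sklearn.linear_model import Lasso

def gaussian_bootstrap_ci(X, y, j, z_j, beta_db_j, alpha=0.05, B=500, rng=None):
    """Percentile-type CI for beta_j from the Gaussian bootstrap (rough sketch)."""
    rng = np.random.default_rng() if rng is None else rng
    n, p = X.shape
    lam = np.sqrt(2 * np.log(p) / n)                  # universal-level lambda (illustrative)
    beta_hat = Lasso(alpha=lam, fit_intercept=False).fit(X, y).coef_
    resid = y - X @ beta_hat
    s_hat = np.count_nonzero(beta_hat)
    sigma_hat = np.sqrt(resid @ resid / (n - s_hat))  # noise estimate as in (7)

    scale = (z_j @ X[:, j]) / np.linalg.norm(z_j)
    pivots = np.empty(B)
    for b in range(B):
        # Gaussian bootstrap resample: y* = X beta_hat + sigma_hat * xi, xi ~ N(0, I)
        y_star = X @ beta_hat + sigma_hat * rng.standard_normal(n)
        beta_star = Lasso(alpha=lam, fit_intercept=False).fit(X, y_star).coef_
        # one-step debiased estimate of beta_j on the bootstrap sample
        beta_star_db_j = beta_star[j] + z_j @ (y_star - X @ beta_star) / (z_j @ X[:, j])
        pivots[b] = scale * (beta_star_db_j - beta_hat[j])   # bootstrap pivot R_j* as in (24)
    lo, hi = np.quantile(pivots, [alpha / 2, 1 - alpha / 2])
    # invert the pivot around the debiased estimate computed on the original data
    return beta_db_j - hi / scale, beta_db_j - lo / scale
```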
We construct two-sided 95% confidence intervals using each of the aforementioned methods.
Each setting is replicated with 1000 independent realizations. In the following table, we report
the average coverage probability on S and Sᶜ (ĉovS and ĉovSᶜ, respectively) as well as the average
length of CIs on S and Sᶜ (ℓS and ℓSᶜ, respectively) for the identity covariance matrix and the equicorrelated
covariance matrix with Σj,j = 1 and Σj,k = 0.2 (j ≠ k).
β     Methods   |  Σj,k = 0 (j ≠ k)                  |  Σj,k = 0.2 (j ≠ k)
                |  ĉovS   ĉovSᶜ   ℓS      ℓSᶜ        |  ĉovS   ĉovSᶜ   ℓS      ℓSᶜ
(i)   BS-DB     |  0.997  0.999   1.127   0.539      |  0.893  0.997   1.074   0.436
(i)   DB        |  0.940  0.982   0.886   0.885      |  0.627  0.934   0.760   0.778
(i)   BS-ADP    |  0.274  0.950   0.181   0.201      |  0.241  0.945   0.432   0.319
(ii)  BS-DB     |  0.974  0.998   1.057   0.554      |  0.820  0.987   0.963   0.424
(ii)  DB        |  0.939  0.982   0.887   0.886      |  0.649  0.925   0.716   0.733
(ii)  BS-ADP    |  0.279  0.951   0.195   0.187      |  0.280  0.943   0.638   0.280
One can see that BS-DB always gives larger coverage probabilities than DB across the different
settings. We mention that the noise level is overestimated (see (7)). For example, in settings (i) and
(ii) with the identity covariance matrix, the average of σ̂ is 2.240 and 2.244, respectively. The CIs
given by BS-DB are longer than those computed with DB on S, but on Sᶜ the CIs given by BS-DB
are shorter than the ones given by DB. On the other hand, BS-ADP exhibits the overconfidence
phenomenon: the average lengths of the CIs are small, which results in low coverage probabilities on
S. In the presence of equicorrelation, which is a harder case, BS-DB is significantly better than DB
and BS-ADP in terms of coverage probability.
[Figure 1 appears here: three rows of boxplots comparing DB, DDB and Las, with the centers of the CIs on the vertical axis; only the caption below is recoverable from the extracted text.]
Figure 1: Boxplots of the double debiased Lasso (DDB) (10), the debiased Lasso (DB) (2) and the
Lasso (Las) (1) with the identity covariance matrix in setting (ii). First row consists of estimates
for weak signals: β1 = β2 = β3 = 1. Second row consists of estimates for strong signals: β6 = β7 =
β8 = 2. Third row consists of estimates for zeros: β21 = β22 = β23 = 0. Each boxplot is based on
1000 independent replications.
Figure 1 demonstrates the bias-correction effects of debiasing and bootstrap across different
levels of signal strength. Concerning the overall performance, DDB is better than DB in terms of
bias-correction, which is in line with our theoretical results. For j ∈ S, DDB and DB are less biased
than the Lasso estimators. On Sᶜ, the Lasso estimates the regression coefficients as zero with a
large probability. Thus, the boxplot degenerates to a point at zero with a few outliers. Comparing
row-wise, one can see that the bootstrap has a more significant correction effect on strong signals (second
row) than on weak signals (first row). When the true coefficients are zeros, DDB is also less biased than
DB.
6 Discussion
We consider the bias-correction effect of the bootstrap for statistical inference with the debiased Lasso under
proper conditions. Our analysis of the approximation error of the debiased Lasso admits sample size
conditions in terms of the number of weak signals. Our results contribute to the inference problem
in the regime s log p ≪ n ≲ (s log p)², but also demonstrate the benefits of having strong signals for
the debiased Lasso procedure. We establish the consistency of Gaussian bootstrap and show that
confidence intervals can be constructed based on bootstrap samples.
Besides Gaussian bootstrap, we also considered residual bootstrap, which is robust in the
presence of heteroscedastic errors. However, the proof involves a more technical analysis and may
impair the sample size conditions. To focus on the main idea, this is omitted from the paper. We
also considered the proof techniques in [JM15], which construct a perturbed version of the Lasso
estimator assuming βj is known and utilize its independence of xj. However, these techniques
cannot be directly applied to the bootstrapped debiased Lasso, since the "true" parameters β̂ and
σ̂ under the bootstrap resampling plan are not independent of xj for j ∈ S.
7 Proofs of main lemmas and theorems
To simplify our notation in this section, let ûS = β̂S − βS, û*S = β̂*S − β̂S, WSn = XSᵀε/n, WS* = XSᵀε̂*/n and S\j = S\{j}.
7.1 Proof of Lemma 3.1
Proof. Firstly, we use Lemma 1 - Lemma 3 in Wainwright (2009) to prove that (17) holds with large
probability. Consider a restricted Lasso problem
β̌S = arg min
b∈Rs
1
ky − XS bS k22 + λkbS k1 and β̌S c = 0.
2n
(34)
PS⊥
T
−1
XS (XS XS ) sgn(β̌S ) +
.
nλ
(35)
Define Tj as
Tj =
xTj
By Lemma 1 of Wainwright (2009), if ΣnS,S is invertible and |Tj | < 1 for ∀j ∈ S c , then the β̌ is the
unique solution to the Lasso with Ŝ ⊆ S. Note that
maxc |Tj | ≤ kXSTc XS (XST XS )−1 sgn(β̌S )k∞ + maxc
j∈S
j∈S
≤
XSTc XS (XST XS )−1 ∞
xTj PS⊥
nλ
xTj PS⊥
+ maxc
.
j∈S
nλ
|
{z
}
Q1
We use standard symmetrization techniques to prove that Q1 ≤ (1 − κ)/2 with large probability
(see Lemma A.1 for detailed results). By Condition 3.2 and (76) in Lemma A.1, there exists some
c1 , c2 > 0 such that
κ+1
1−κ
c2
P maxc |Tj | >
≤ P Q1 >
≤ 4 exp (−c1 log p) + ,
j∈S
2
2
n
for λ in (16). Together with Condition 3.1, we have Ŝ ⊆ S with probability greater than
4 exp (−c1 log p) + c2 /n. By the KKT condition of the Lasso (1), Ŝ ⊆ S implies that
ûS = (ΣnS,S )−1 WSn − λ(ΣnS,S )−1 sgn(β̂S ).
(36)
Then we have
kûS k∞ = k(ΣnS,S )−1 WSn − λ(ΣnS,S )−1 sgn(β̂S )k∞
≤ k(ΣnS,S )−1 WSn k∞ +λk(ΣnS,S )−1 k∞ .
|
{z
}
Q2
By (77) in Lemma A.1 and Condition 3.3, there exists some c3 , c4 > 0 such that
!
r
2 log p
c4
≤ 4 exp(−c3 log p) + .
P kûS k∞ > K1 λ + 8σ
Cmin n
n
7.2 Proof of Lemma 3.2
Proof. Formally, the bootstrapped Lasso estimator β̂ ∗ is defined via
β̂ ∗ = arg min
b∈Rp
1 ∗
ky − Xbk22 + λkbk1 .
2n
(37)
Define a restricted Lasso problem with observations (XS , y ∗ ):
β̌S∗ = arg min
b∈Rs
1 ∗
ky − XS bS k22 + λkbS k1 and β̌S∗ c = 0.
2n
(38)
Define Tj∗ as
Tj∗
xTj
XS (XST XS )−1 sgn(β̌S∗ )
P ⊥ ˆ∗
+ S
nλ
.
(39)
P(Ŝ ∗ ⊆ S) ≥ P maxc |Tj∗ | < 1, Ŝ ⊆ S
j∈S
∗
= P maxc |Tj | < 1 − P(Ŝ 6⊆ S).
(40)
=
By Condition 3.1 and Lemma A.2,
j∈S
Note that
maxc |Tj∗ | ≤ maxc xTj XS (XST XS )−1 sgn(β̌S∗ )
j∈S
j∈S
+ maxc
1
j∈S
xTj PS⊥ ˆ∗
.
nλ
| {z }
Q∗1,j
By construction (5), Q∗1,j is a Gaussian variable with mean zero and variance no larger than
σ̂ 2 /(nλ2 ), ∀j ∈ S c , conditioning on σ̂ 2 . Thus,
1−κ
1−κ 2
∗
∗
2
P maxc |Q1,j | ≥
≤ P maxc |Q1,j | ≥
|σ̂ ≤ 2σ + P(σ̂ 2 ≥ 2σ 2 )
j∈S
j∈S
2
2
nλ2 (1 − κ)2
≤ 2(p − s) exp −
+ o(1),
16σ 2
where the last step is due to the consistency of σ̂ 2 in (7) (see
q Lemma A.4). Condition 3.2 implies
2 log p
4σ
that maxj∈S c kxTj XS (XST XS )−1 sgn(β̌S∗ )k1 ≤ κ. For λ > 1−κ
and some c1 > 0, we have
n
1+κ
∗
< 1 ≥ 1 − 2 exp(−c1 log p) − o(1).
P |Tj | ≤
2
By Lemma 3.1 and (40), P(Ŝ ⊆ S) → 1 and hence
P (Ŝ ∗ ⊆ S) ≥ 1 − 2 exp(−c1 log p) − o(1).
By the KKT condition of the bootstrapped Lasso (37), in the event that {Ŝ ∗ ⊆ S},
û∗S = (ΣnS,S )−1 WS∗ − λ(ΣnS,S )−1 sgn(β̂S∗ ).
(41)
Therefore,
kû∗S k∞ ≤ k(ΣnS,S )−1 WS∗ k∞ + kλ(ΣnS,S )−1 sgn(β̂S∗ )k∞
≤ k(ΣnS,S )−1 WS∗ k∞ + λK1 .
(42)
Again using the Gaussian property of ˆ∗ , there exists some c2 > 0 such that
!
r
log
p
2σ
P k(ΣnS,S )−1 WS∗ k∞ ≥ √
n
Cmin
!
r
2σ
log
p
≤ P k(ΣnS,S )−1 WS∗ k∞ ≥ √
σ̂ 2 ≤ 2σ 2 , Ŝ ∗ ⊆ S + P(σ̂ 2 > 2σ 2 ) + P(Ŝ ∗ 6⊆ S)
n
Cmin
≤ 2 exp(−c2 log p) + o(1).
Together with (42), the proof is completed.
7.3 Proof of Lemma 3.3
Proof. The KKT condition of (4) is
1 T
z X−j
n j
as
≤ λj .
(43)
∞
In the event that (17) holds true, (36) holds true and hence we can rewrite Remainder in (13)
!
zjT X
T
Remainder = λ ej − T
(ΣnS,S )−1 [sgn(β̂S ) − sgn(βS )].
(44)
zj x j
S
Let S̃ be the set of small coefficients, such that S̃ = {j : 0 < |βj | < g1 (λ) + g10 (λ)} for g1 (λ) and
g10 (λ) in (17) and (19) respectively. kûS k∞ ≤ g1 (λ) further implies that for ∀j ∈ S\S̃,
β̂j = βj + ûj > βj − max |ûj | > βj − g1 (λ) > g10 (λ), for βj > g1 (λ) + g10 (λ).
j
β̂j = βj + ûj < βj + max |ûj | < βj + g1 (λ) < −g10 (λ), for βj < −[g1 (λ) + g10 (λ)].
j
Therefore, if (17) holds true, then
sgn(βj ) = sgn(β̂j ) and |β̂j | > g 0 (λ) for j ∈ S\S̃.
(45)
The sign inconsistency of the Lasso estimator β̂ only occurs on S̃ and hence
sgn(β̂S ) − sgn(βS )
1
≤ 2s̃.
(46)
By (44), we have
|Remainder| ≤ λ
≤λ
≤
eTj −
zjT XS
(ΣnS,S )−1
zjT xj
zjT XS\j
zjT xj
!
S
sgn(β̂S ) − sgn(βS )
∞
(ΣnS,S )−1
∞
sgn(β̂S ) − sgn(βS )
∞
1
1
2K1 λλj s̃
,
zjT xj /n
where the last step is due to Condition 3.3, (43) and (46). The proof for (22) is completed by the
fact that (17) holds with probability going to 1.
For the bootstrap version, define an oracle Lasso estimator computed with the bootstrap samples.
Formally,
−1
(∗,o)
(∗,o)
β̂S = β̂S + ΣnS,S
(WS∗ − λsgn(βS )) and β̂S c = 0.
(47)
(∗,o)
If Ŝ ⊆ S and Ŝ ∗ ⊆ S, we can plug in β̂S
(∗,DB)
β̂j
− β̂j =
=
zjT ˆ∗
zjT xj
zjT ˆ∗
+
eTj −
zjT X
zjT xj
zjT X
and obtain that
!
(∗,o)
(β̂S
− β̂S ) +
eTj −
!S
zjT X
!
zjT xj
(∗,o)
(β̂S∗ − β̂S
)
S
(ΣnS,S )−1 WS∗ + Bias
zjT xj
{z S
}
N oise∗
!
zjT X
T
+ λ ej − T
(ΣnS,S )−1 sgn(β̂S∗ ) − sgn(βS ) ,
zj x j
S
|
{z
}
zjT xj
|
−
eTj −
(48)
Remainder∗
where N oise∗ is in (20) and Bias is in (13).
In view of (19) and (45) , we have sgn(β̂j∗ ) = sgn(β̂j ) = sgn(βj ) for j ∈ S\S̃. Hence,
ksgn(β̂S∗ ) − sgn(βS )k1 ≤ 2s̃.
Together with (43) and Condition 3.3, it holds that
|Remainder∗ | ≤
2K1 s̃λλj
= oP (1),
zjT xj /n
in the event that {Ŝ ⊆ S, Ŝ ∗ ⊆ S}, which holds with large probability by Lemma 3.1 and 3.2.
(49)
7.4 Proof of Theorem 3.4
We simplify the notations for the terms in (13) and (48). Let
bj = Bias in (13), Remj = Remainder in (13), ηj = N oise in (13),
Rem∗j = Remainder∗ in (48), ηj∗ = N oise∗ in (20).
(50)
Proof. Define a version of pivots in (24) which is standardized and bias-removed:
Rjo =
zjT xj
zjT xj
(DB)
(∗,o)
(∗,DB)
(β̂j
− βj − bj ) and Rj
=
(β̂j
− β̂j − bj ).
σkzj k2
σkzj k2
(∗,o)
We first find the limiting distribution for Rjo and Rj
Let ζj be the normalized version of ηj in (50):
ζj =
(51)
.
n
X
zjT xj
ηj =
ζi,j ,
σkzj k2
(52)
i=1
T /n .
where ζi,j = σkz1j k2 zi,j i − (zjT xj eTj − zjT XS )(ΣnS,S )−1 Xi,S
i
Statement (22) in Lemma 3.3 implies that
√
zjT xj
nK1 s̃λλj
nK1 s̃λλj
Rjo = OP
+
ηj = OP
+ ζj = oP (1) + ζj ,
σkzj k2
σkzj k2
σK2
(53)
where the second step is due to Condition 3.5 and the last step is by our sample size condition.
Note that ζj is a random variable with mean zero and variance s2n , where
s2n = V ar(ζj )
1
=
kzj − (zjT xj eTj − zjT X)S (ΣnS,S )−1 XST /nk22
kzj k22
1
2
=1+
(z T xj eTj − zjT X)S (ΣnS,S )−1 (zjT xj ej − X T zj )S −
z T XS (ΣnS,S )−1 (zjT xj ej − X T zj )S
nkzj k22 j
nkzj k22 j
3
2
=1+
(zjT xj eTj − zjT X)S (ΣnS,S )−1 (zjT xj ej − X T zj )S −
z T xj eTj (ΣnS,S )−1 (zjT xj ej − X T zj )S .
2
nkzj k2
nkzj k22 j
|
{z
} |
{z
}
H1
H2
(54)
Note that H1 in (54) can be bounded by
|H1 | ≤
≤
3 (zjT xj eTj − zjT X)S /n
2
2
(ΣnS,S )−1
2
kzj k22 /n
3skzjT XS\j /nk2∞
K2 Cmin
≤
3sλ2j
,
K2 Cmin
(55)
where the second last step is by Conditions 3.1 and 3.5 and the last step is by (43). Similarly, H2
in (54) can be bounded by
√
sλj
2
T
n
−1
T
|H2 | ≤
zj XS\j /n
(ΣS,S )
xj zj ≤ √
.
(56)
2
2
2
kzj k2
K2 Cmin
Thus, for n s log p, we have
s2n = 1 + o(1).
(57)
Now we check the Lyapunov condition, which is
n
1 X
E[|ζi,j |4 ] = 0.
lim
n→∞ s4
n
i=1
Using Condition 3.4, we can obtain that
n
X
E[ζi,j |4 ] ≤
i=1
n
E[||4 ] X
T
|zi,j − (zjT xj eTj − zjT X)S (ΣnS,S )−1 Xi,S
/n|4
σ 4 kzj k42 i=1
M0
≤ 4
23
σ kzj k42
n
X
4
|zi,j | +
i=1
n
X
!
|(zjT xj eTj
−
T
zjT X)S (ΣnS,S )−1 Xi,S
/n|4
.
(58)
i=1
T /n, i = 1, . . . , n. Then we have
For ease of notation, let ci = (zjT xj eTj − zjT X)S (ΣnS,S )−1 Xi,S
kck22
nsλ2j
kzj k22
H1 ≤
,
=
3
Cmin
max |ci | ≤ s zjT XS\j /n
∞
i∈[n]
(ΣnS,S )−1
max |xi,j | ≤ K1 K0 sλj .
∞ i,j
As a consequence,
n
X
4
|ci | ≤ max |ci |
2
i∈[n]
i=1
n
X
i=1
nK12 K02 s3 λ4j
nsλ2j
=
.
|ci | ≤ (K1 K0 sλj )
Cmin
Cmin
2
2
In view of (58), it holds that
n
M0 23 (nK12 K02 s3 λ4j /Cmin + kzj k44 )
1 X
4
E[ζ
|
]
≤
lim
i,j
n→∞ s4
n→∞
σ 4 kzj k42 (1 − o(1))2
n
lim
i=1
≤ lim
M0 23 K12 K02 s3 λ4j
= 0,
σ 4 K22 Cmin n
q
as long as s3 λ4 n. For s n/ log p and λj logn p , it is easy to check that s3 λ4 = O(n/ log p)
n.
We have proved that
D
ζj /sn −
→ Z, for Z ∼ N (0, 1).
n→∞
Together with (53) and (57), we have
sup P(Rjo ≤ c) − Φ(c) = oP (1).
(59)
c∈R
(∗,o)
For the bootstrap version, consider Rj
defined in (51). By Lemma A.4 and (23) in Lemma
3.3,
(∗,o)
Rj
= OP
nK1 s̃λλj
σkzj k2
zjT xj
+
ηj + oP (1) = OP
σkzj k2
√
nK1 s̃λλj
σK2
+ ζj∗ + oP (1) = oP (1) + ζj∗ ,
where
ζj∗ =
zjT xj ∗
η
σkzj k2 j
(60)
is a Gaussian variable with mean zero and variance 1 + oP (1). This implies that
sup P(ζj∗ ≤ c) − Φ(c) = oP (1).
(61)
c∈R
Let F∗ (c) be the cumulative distribution function of ζj∗ , i.e. F∗ (c) = P(ζj∗ ≤ c). For ∀v1 , v2 > 0 and
∀α ∈ (0, 1),
n
o
(∗,o)
(∗,o)
P qα (Rj ) − zα > v1 ≤ P F∗ qα Rj
> F∗ (zα + v1 )
≤ P {α > F∗ (zα + v1 )}
≤ P {α + v2 > Φ(zα + v1 )} + P {F∗ (zα + v1 ) ≤ Φ(zα + v1 ) − v2 } + o(1)
= P {α + v2 > Φ(zα + v1 )} + o(1),
where the first step is due to the monotonicity of F∗ , the second step is by the definition of quantile
function, and the last step is due to (61). By first taking v2 → 0, we have proved that for ∀α ∈ (0, 1)
and ∀v1 > 0,
n
o
(∗,o)
P qα Rj
− zα > v1 = o(1).
A matching lower bound can be proved by a completely analogous argument. Thus,
(∗,o)
sup qα Rj
− zα = oP (1).
(62)
α∈(0,1)
To complete our proof, note that by Lemma A.4,
Rj =
σRjo
zjT xj
zjT xj
(∗,o)
∗
+
bj + oP (1) and Rj = σRj
+
bj + oP (1).
kzj k2
kzj k2
Together with (59) and (62), it holds that
sup P Rj ≤ qα (Rj∗ ) − α ≤ sup
α∈(0,1)
α∈(0,1)
≤ sup
(63)
n
o
(∗,o)
P Rjo ≤ qα Rj
−α
P{Rjo ≤ zα } − α + oP (1)
α∈(0,1)
= oP (1).
(DDB)
Next, we prove the asymptotic normality of Rj
(∗,DB)
β̂j
(DDB)
in (25). Note that β̂j
in (10) corrects
with an estimated bias
(∗,DB)
b̂j = median β̂j
− β̂j .
Due to (48), we can easily obtain that
zjT xj
zjT xj
(∗,o)
b̂j =
bj + median Rj
+ oP (1)
σkzj k2
σkzj k2
zjT xj
=
bj + z0.5 + oP (1)
σkzj k2
=
zjT xj
bj + oP (1),
σkzj k2
(64)
(DDB)
where the second step is due to (62). By definition of β̂j
(10) and Rj defined in (24),
zjT xj
zjT xj
1
(DDB)
(β̂j
− βj ) = Rj −
b̂j = Rjo + oP (1) = Z + oP (1),
σkzj k2
σ
σkzj k2
(65)
(DDB)
where the last step is due to (59) for Z ∼ N (0, 1). For Rj
defined in (25), (65) and Lemma
A.4 implies that
(DDB)
P Rj
≤ c = P(Z ≤ c) + oP (1) = Φ(c) + oP (1).
7.5 Proof of Lemma 4.1
Proof. (i) Let
c
toj = xj − XS Σ−1
S,S ΣS,j , for j ∈ S .
(66)
For β̌S and Tj defined in (34) and (35) respectively, we can rewrite Tj as
Tj =
(toj )T
XS (XST XS )−1 sgn(β̌S )
|
P ⊥
+ S
nλ
{z
+Σj,S Σ−1
S,S .
}
E1,j
Conditioning on XS and , toj is a Gaussian random variable with mean 0 and variance at most Σj,j .
Thus,
Var(E1,j |XS , ) ≤ Σj,j
(XS (XST XS )−1 sgn(β̌S )
P ⊥
+ S
nλ
2
2
T
T
−1
≤ Σj,j (sgn(β̌S )) (XS XS ) sgn(β̌S ) + kk22 /(nλ)2 .
Define an event
B1 = kk22 /n ≤ 2σ 2 , Λmax ((ΣnS,S )−1 ) ≤
4
Cmin
, k(ΣnS,S )−1 k∞
(67)
≤ (1 + Cn )K1 for Cn in Lemma A.3 (ii) .
B1 implies that
sgn(β̌S )T (XST XS )−1 sgn(β̌S ) ≤ ksgn(β̌S )k22 k(XST XS )−1 k2 ≤
s
4s
Λmax ((ΣnS,S )−1 ) ≤
.
n
nCmin
Thus, by (67) and Condition 4.4, in B1 ,
maxc Var(E1,j ) ≤ C
∗
j∈S
4s
2σ 2
+
nCmin nλ2
.
(68)
Thus, by Lemma A.3 and Condition 4.5,
P maxc |E1,j | > x ≤ P maxc |E1,j | > x, B1 + P(B1c )
j∈S
j∈S
(
)
x2
c1
≤ 2(p − s) exp −
+
+ 2 exp(−c2 log p) + 2 exp(−c3 n),
4s
2σ 2
∗
n
2C ( nCmin + nλ2 )
for some constant c1 , c2 , c3 > 0. Let x = (1 − κ)/2 and solve
x2
2C ∗ ( nC4smin +
2σ 2
)
nλ2
≥ 2 log p.
For
r
σ 2 Cmin
8σ
C ∗ log p
sλ ≤
and λ ≥
,
2
1−κ
n
there exists some c1 > 0, such that
c1
1+κ
≤
P maxc |Tj | >
+ 2 exp(−c4 log p) + 2 exp(−c3 n),
j∈S
2
n
2
for some constant c1 , c3 , c4 > 0.
(ii) The second task is to bound kûS k∞ . In the event that {Ŝ ⊆ S},
kûS k∞ ≤ k(ΣnS,S )−1 WSn k∞ + λk(ΣnS,S )−1 sgn(β̂S )k∞
≤ k(ΣnS,S )−1 WSn k∞ + λk(ΣnS,S )−1 k∞ .
{z
} |
{z
}
|
E2
E3
For E2 , conditioning on X, (ΣnS,S )−1 WSn is a Gaussian random vector with mean 0 and variance
(σ 2 /n)(ΣnS,S )−1 . And hence,
P (E2 > x) ≤ P (E2 > x, B1 ) +
P (B1c )
nCmin x2
≤ 2s exp −
8σ 2
+ P (B1c ) .
Lemma A.3 implies that
P (E3 > λ(1 + Cn )K1 ) ≤ 2 exp(−c5 log p),
for some c5 > 0. Using part (i) of the proof, we can obtain that for some c6 , c7 , c8 > 0,
!
r
log p
c6
P kûS k∞ ≤ 4σ
+ λ(1 + Cn )K1 ≥ 1 −
− 2 exp(−c7 n) − 2 exp(−c8 log p).
Cmin n
n
7.6 Proof of Lemma 4.2
Proof. (i) Define an event
B2 = maxc |Tj | < 1 for Tj defined in (35) and Λmax ((ΣnS,S )−1 ) ≤
j∈S
Since B2 implies {Ŝ ⊆ S}, for Tj∗ defined in (39), Lemma A.2 implies that
∗
P(Ŝ ⊆ S) ≥ P
maxc |Tj∗ |
j∈S
< 1, B2 .
4
Cmin
.
For toj defined in (66), we have
maxc |Tj∗ |
j∈S
PS⊥ ˆ∗
≤ maxc
+
j∈S
nλ
PS⊥ ˆ∗
T
−1
∗
o T
XS (XS XS ) sgn(β̌S ) +
+ ΣS c ,S Σ−1
≤ maxc (tj )
S,S
j∈S
nλ
|
{z
}
xTj
XS (XST XS )−1 sgn(β̌S∗ )
∞
.
(69)
∗
E1,j
Recall that under the bootstrap resampling plan (5) and (6), yi∗ ∼ N (xi,Ŝ β̂Ŝ , σ̂ 2 ) conditioning on
(X, β̂, Ŝ, σ̂ 2 ).
∗ , we first show that X (X T X )−1 sgn(β̌ ∗ ) is independent of to in (66) ∀j ∈ S c , in the
For E1,j
S
j
S S
S
event of B2 . Note that by Lemma 1 in Wainwright (2009), B2 implies that β̌ in (34) is the unique
solution to the Lasso (1). As a result, β̂ is a function of (XS , ) and Ŝ ⊆ S. Ŝ ⊆ S further implies
that β̌S∗ in (38) is a function of (XS , β̂, ˆ∗ ). Therefore, the following arguments hold true:
B2 ⊆ {β̂ is a function of (XS , ), β̌S∗ in (38) is a function of (XS , β̂, ˆ∗ )}
⊆ {β̂ is a function of (XS , ), σ̂ 2 in (7) is a function of (XS , ), β̌S∗ in (38) is a function of (XS , , σ̂, ξ)}
⊆ {β̂ is a function of (XS , ), σ̂ 2 in (7) is a function of (XS , ), β̌S∗ in (38) is a function of (XS , , ξ)}
⊆ {XS (XST XS )−1 sgn(β̌S∗ ) is a function of (XS , , ξ)}.
(70)
B2 ∩ {σ̂ 2 ≤ 2σ 2 , kξk22 ≤ 2n} implies that
XS (XST XS )−1 sgn(β̌S∗ )
P ⊥ ˆ∗
+ S
nλ
2
≤
2
4s
Cmin n
+
σ̂ 2 kξk22
4s
4σ 2
≤
+
.
n2 λ 2
Cmin n nλ2
Thus,
1−κ
∗
2
2
2
P maxc E1,j ≥
, B2 , σ̂ ≤ 2σ , kξk2 ≤ 2n
j∈S
2
1−κ
∗
≤ P maxc E1,j ≥
, (70) and (71) hold true
j∈S
2
(
)
(1 − κ)2
≤ 2(p − s) exp −
σ2
8C ∗ ( nCsmin + nλ
2)
≤ 2 exp(−c1 log p),
q
log p
4σ
for n s log p and λ ≥ 1−κ
n .
q
log p
We conclude that for n s log p and λ >
n ,
1+κ
1−κ
∗
∗
P maxc |Tj | ≤
, B2 ≥ P maxc |E1,j | <
, B2
j∈S
j∈S
2
2
1−κ
∗
= P(B2 ) − P maxc |E1,j | ≥
, B2
j∈S
2
1−κ
∗
≥ P(B2 ) − P maxc |E1,j
|≥
, B2 , σ̂ 2 ≤ 2σ 2 , kξk22 ≤ 2n
j∈S
2
2
2
2
− P σ̂ > 2σ − P kξk2 > 2n
4σ
1−κ
= 1 − o(1).
(71)
(ii) Let
B3 = Ŝ ∗ ⊆ S, Λmax ((ΣnS,S )−1 ) ≤
4
Cmin
, k(ΣnS,S )−1 k∞ ≤ K1 (1 + Cn ) for Cn in Lemma A.3 (ii),
σ̂ 2 ≤ 2σ 2 .
(72)
In B3 , (41) holds true and we have
kû∗S k∞ ≤ k(ΣnS,S )−1 WS∗ k∞ + λk(ΣnS,S )−1 sgn(β̂S∗ )k∞
T
n
−1
≤ max σ̂(ΣnS,S )−1
j,S XS ξ/n +λk(ΣS,S ) k∞ .
j∈S
|
{z
}
∗
E2,j
In B3 , for ∀j ∈ S,
n
X
−1 T
T
2
n
2
((ΣnS,S )−1
j,. Xi,S /n) = k(ΣS,S )j,. XS /nk2 ≤
i=1
1
4
Λmax ((ΣnS,S )−1 ) ≤
.
n
nCmin
By the Gaussian property of ξ, in B3 ,
nCmin x2
∗
P max E2,j
> x ≤ 2s exp(−
).
j∈S
16σ 2
B3 is a large probability event due to part (i) of the proof, Lemma A.3 and Lemma A.4. Putting
these pieces together, we have
!
r
log p
∗
+ K1 (1 + Cn )λ ≤ 2 exp(−c2 log p) + o(1) → 0,
P kûS k∞ > 4σ
Cmin n
for some c2 > 0 and λ satisfying (30).
7.7 Proof of Theorem 4.3
Proof. Under Gaussian designs, we still consider error decompositions as in (13) and (48). We use
simplified notations described in (50).
In the event that (29) holds, we can obtain that
!
zjT XS
T
|Remj | ≤ λ ej − T
(ΣnS,S )−1 [sgn(β̂S ) − sgn(βS )]
zj x j
S
≤λ
= OP
zjT XS\j
zjT xj
1
(ΣnS,S )−1
∞
sgn(β̂S ) − sgn(βS )
∞
K1 (1 + Cn )ns̃λλj
zjT xj
1
!
,
where the last step is by (43), Lemma 4.1, Lemma A.3 and the definition of s̃ in (32).
By Lemma 5.3 of Van de Geer et al. (2014), if n sj log p,
kzj k22 /n = 1/(Σ−1 )j,j + oP (1),
(73)
where maxj≤p (Σ−1 )j,j ≤ 1/C∗ by Condition 4.4. Thus, for Rjo defined in (51) and ζj in (52) we
have
nK1 (1 + Cn )s̃λλj
o
+ ζj + oP (1)
Rj = OP
σkzj k2
√
nK1 (1 + Cn )s̃λλj
√
= OP
+ ζj + oP (1)
σ C∗
= oP (1) + ζj ,
(74)
where the last step is due to in A2 ,
√
p
s̃ log p
ss̃ log p
ns̃λλj =
= o(1), ss̃λλj =
= o(1) and s log ps̃λλj =
n
n
r
s log p s̃ log p
= o(1).
n
n
Next we show that for ζj in (74),
sup |P(ζj ≤ c) − Φ(c)| = oP (1).
c∈R
Conditioning on X, ζj is a Gaussian random variable with mean 0 and variance of the form (54).
By (73) and Lemma A.3, we can similarly prove (55) and (56) by replacing K2 with C∗ (1 − oP (1))
√
and replacing Cmin with Cmin /4 + oP (1). Hence, |s2n − 1| = OP ( sλj ) = oP (1). Then we have,
P(ζj ≤ c) = E[P(ζj ≤ c|X)] = E[Φ(c/sn )] = Φ(c) + oP (1).
(∗,o)
in (51) and ζj∗ in (60), Lemma 4.2 implies that
√
nK1 s̃λλj
√
= OP
+ ζj∗ + oP (1) = oP (1) + ζj∗ .
σ C∗
For the bootstrap version Rj
(∗,o)
Rj
(75)
Conditioning on X and , we can similarly prove ζj∗ is a Gaussian random variable with mean
0 and variance 1 + oP (1). Thus,
sup P(ζj∗ ≤ c) − Φ(c) = oP (1).
c∈R
Together with (63), (74), (75) and Lemma A.4,
sup
P Rj ≤ qα (Rj∗ ) − α ≤ sup
α∈(0,1)
α∈(0,1)
≤ sup
n
o
(∗,o)
P Rjo ≤ qα Rj
−α
P{Rjo ≤ zα } − α + oP (1)
α∈(0,1)
= oP (1).
(DDB)
The asymptotic normality of Rj
omitted here.
can be similarly proved as for the fixed design case and is
Acknowledgements
The author would like to thank Professor Cun-Hui Zhang for insightful and constructive discussions
on the technical part of the paper as well as the presentation.
References
R. F. Barber and E. J. Candes. A knockoff filter for high-dimensional selective inference. 2016.
preprint, http://arxiv.org/abs/1602.03574.
A. Belloni, V. Chernozhukov, and K. Kato. Uniform post-selection inference for least absolute
deviation regression and other z-estimation problems. Biometrika, 102(1):77–94, 2014.
R. Berk, L. Brown, A. Buja, et al. Valid post-selection inference. The Annals of Statistics, 41(2):
802–837, 2013.
P. Bühlmann and S. Van De Geer. Statistics for high-dimensional data: methods, theory and
applications. Springer Science & Business Media, 2011.
T. T. Cai and Z. Guo. Confidence intervals for high-dimensional linear regression: Minimax rates
and adaptivity. The Annals of Statistics, 45(2):615–646, 2017.
E. Candes and T. Tao. The dantzig selector: Statistical estimation when p is much larger than n.
The Annals of Statistics, pages 2313–2351, 2007.
A. Chatterjee and S. N. Lahiri. Asymptotic properties of the residual bootstrap for lasso estimators.
Proceedings of the American Mathematical Society, 138(12):4497–4509, 2010.
A. Chatterjee and S. N. Lahiri. Bootstrapping lasso estimators. Journal of the American Statistical
Association, 106(494):608–625, 2011.
A. Chatterjee and S. N. Lahiri. Rates of convergence of the adaptive lasso estimators to the oracle
distribution and higher order refinements by the bootstrap. Annals of Statistics, 41(3):1232–1259,
2013.
V. Chernozhukov, D. Chetverikov, and K. Kato. Gaussian approximations and multiplier bootstrap
for maxima of sums of high-dimensional random vectors. The Annals of Statistics, 41(6):2786–
2819, 2013.
V. Chernozhukov, D. Chetverikov, M. Demirer, et al. Double/debiased machine learning for
treatment and structural parameters. The Econometrics Journal, 2017.
H. Deng and C.-H. Zhang. Beyond gaussian approximation: Bootstrap for maxima of sums of
independent random vectors. 2017. preprint, http://arxiv.org/abs/1705.09528.
R. Dezeure, P. Bühlmann, and C.-H. Zhang. High-dimensional simultaneous inference with the
bootstrap. 2016. preprint, http://arxiv.org/abs/1606.03940.
J. Fan and R. Li. Variable selection via nonconcave penalized likelihood and its oracle properties.
Journal of the American statistical Association, 96(456):1348–1360, 2001.
E. X. Fang, Y. Ning, and H. Liu. Testing and confidence intervals for high dimensional proportional
hazards models. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 2016.
J. Jankova and S. Van De Geer. Confidence intervals for high-dimensional inverse covariance
estimation. Electronic Journal of Statistics, 9(1):1205–1229, 2015.
A. Javanmard and A. Montanari. Confidence intervals and hypothesis testing for high-dimensional
regression. Journal of Machine Learning Research, 15:2869–2909, 2014a.
A. Javanmard and A. Montanari. Hypothesis testing in high-dimensional regression under the
gaussian random design model: Asymptotic theory. IEEE Transactions on Information Theory,
60(10):6522–6554, 2014b.
A. Javanmard and A. Montanari. De-biasing the Lasso: Optimal Sample Size for Gaussian Designs.
pages 1–32, 2015. preprint, http://arxiv.org/abs/1508.02757.
K. Knight and W. Fu. Asymptotics for lasso-type estimators. The Annals of Statistics, 28(5):
1356–1378, 2000.
J. D. Lee, D. L. Sun, Y. Sun, et al. Exact post-selection inference, with application to the lasso.
The Annals of Statistics, 44(3):907–927, 2016.
R. Lockhart, J. Taylor, R. J. Tibshirani, et al. A significance test for the lasso. Annals of statistics,
42(2):413, 2014.
E. Mammen. Bootstrap and wild bootstrap for high dimensional linear models. The Annals of
Statistics, pages 255–285, 1993.
N. Meinshausen and P. Bühlmann. High-dimensional graphs and variable selection with the lasso.
The annals of statistics, pages 1436–1462, 2006.
R. Mitra and C.-H. Zhang. The benefit of group sparsity in group inference with de-biased scaled
group lasso. Electronic Journal of Statistics, 10(2):1829–1873, 2016.
S. Reid, R. Tibshirani, and J. Friedman. A study of error variance estimation in lasso regression.
Statistica Sinica, 26:35–67, 2016.
Z. Ren, T. Sun, C.-H. Zhang, et al. Asymptotic normality and optimalities in estimation of large
gaussian graphical models. The Annals of Statistics, 43(3):991–1026, 2015.
T. Sun and C.-H. Zhang. Scaled sparse linear regression. Biometrika, 99(4):879–898, 2012.
R. Tibshirani. Regression selection and shrinkage via the lasso. Journal of the Royal Statistical
Society B, 58(1):267–288, 1996.
R. J. Tibshirani, J. Taylor, R. Lockhart, et al. Exact post-selection inference for sequential regression
procedures. Journal of the American Statistical Association, 111(514):600–620, 2016.
S. Van de Geer, P. Bühlmann, Y. Ritov, et al. On asymptotically optimal confidence regions and
tests for high-dimensional models. Annals of Statistics, 42(3):1166–1202, 2014.
R. Vershynin. Introduction to the non-asymptotic analysis of random matrices. pages 210–268,
2010. preprint, http://arxiv.org/abs/1011.3027.
M. J. Wainwright. Sharp thresholds for high-dimensional and noisy sparsity recovery using ℓ1-constrained quadratic programming (Lasso). IEEE Transactions on Information Theory, 55(5):
2183–2202, 2009.
C.-H. Zhang. Nearly unbiased variable selection under minimax concave penalty. The Annals of
statistics, 38(2):894–942, 2010.
C.-H. Zhang. Statistical inference for high-dimensional data. In Very High Dimensional
Semiparametric Models, Mathematisches Forschungsinstitut Oberwolfach, (48), 2011.
C.-H. Zhang and S. Zhang. Confidence intervals for low dimensional parameters in high dimensional
linear models. Journal of the Royal Statistical Society. Series B: Statistical Methodology, 76(1):
217–242, 2014.
X. Zhang and G. Cheng. Simultaneous inference for high-dimensional linear models. Journal of the
American Statistical Association, 0(0):1–12, 2017.
P. Zhao and B. Yu. On model selection consistency of Lasso. The Journal of Machine Learning
Research, 7:2541–2563, 2006.
H. Zou. The adaptive lasso and its oracle properties. Journal of the American statistical association,
101(476):1418–1429, 2006.
A Some technical lemmas
Lemma A.1 (Symmetrization). Assume that Conditions 3.1 - 3.4 hold true,
r
2 log p
32σ 2
16σ
.
and n ≥ 2
λ>
1−κ
n
λ (1 − κ)2
Then we have
xTj PS⊥
1−κ
P maxc
>
j∈S
nλ
2
!
n
P max (ΣnS,S )−1
j,. WS > 8σ
≤ 4 exp(−c1 log p) +
r
j∈S
2 log p
Cmin n
c2
,
n
!
≤ 4 exp(−c1 log p) +
(76)
c2
,
n
(77)
for some c1 , c2 > 0.
Proof. In this proof, we apply standard symmetrization techniques.
Let
n
xTj PS⊥
1 X (xTj PS⊥ )i i
=
,
Q1,j =
nλ
n
λ
i=1
(˜
1 , . . . , ˜n ) be an independent copy of (1 , . . . , n ). Q̃1,j = xTj PS⊥ ˜/(nλ) and ω1 , . . . , ωn be a
Rademacher sequence. Note that
"
#
!2
n
T P ⊥)
2
X
2
(xTj PS⊥ )i i
(x
1
j S i i
= 1 PS⊥ xj σ 2 ≤ σ ≡ C1 .
maxc E
= 0, maxc
E
j∈S
j∈S n
λ
λ
nλ2
λ2
2
i=1
We apply symmetrization inequalities (Problem 14.5 in Bühlmann and Van De Geer (2011)),
which gives for ∀ 0 < η < 1,
P max c |Q − Q̃ | > (1 − η)t
1,j
1,j
j∈S
P maxc |Q1,j | > t ≤
j∈S
1 − C1 /nη 2 t2
Pn (xTj PS⊥ )i i ωi
1
2P maxj∈S c n i=1
> (1 − η)t/2
λ
≤
.
1 − C1 /nη 2 t2
For n ≥ 2C1 /(η 2 t2 ), we have
!
n
1 X (xTj PS⊥ )i i ωi
P maxc |Q1,j | > t ≤ 4P maxc
> (1 − η)t/2 .
j∈S n
j∈S
λ
(78)
i=1
Conditioning on , by McDiarmid’s inequality, we have
P maxc
j∈S
1
n
n
X
i=1
(xTj PS⊥ )i i ωi
λ
!
> (1 − η)t/2
X
2((1 − η)t/2)2
≤
exp − P h
i2 . (79)
n
T
⊥
c
j∈S
i=1 2(xj PS )i i /(nλ)
Moreover, by Chebyshev’s inequality, ∀j ∈ S c
n h
X
maxc P
j∈S
2(xTj PS⊥ )i i /(nλ)
i=1
i2
!
16kPS⊥ xj k44 E[4i ]
4σ 2
>
t
≤
max
−
j∈S c
nλ2
(nλ)4 t2
≤
16(K0 + K0 κ)2 M0
,
n3 λ4 t2
where the last step is due to
maxc kPS⊥ xj k44 ≤ max
(xTj PS⊥ )2i kPS⊥ xj k22
c
j∈S
j∈S ,i≤n
2
xi,j − xi,S (XST XS )−1 XST xj
≤ n max
j∈S c ,i≤n
2
n
n
−1
≤ n max |xi,j | + max |xi,j |kΣS c ,S (ΣS,S ) k∞
i,j
i,j
2
≤ n(K0 + K0 κ) .
Thus, for any constant C2 ,
P
n h
X
2(xTj PS⊥ )i i /(nλ)
i=1
i2
!
√
4σ 2
C2 (K0 + K0 κ) M0
16
>
+
≤
→ 0.
nλ2
nλ2
nC22
And hence by (78) and (79), for n ≥ 8σ 2 /(λ2 η 2 (1 − κ)2 ),
1−κ
2[(1 − η)(1 − κ)/4]2 nλ2
16
√
P maxc |Q1,j | >
≤ 4(p − s) exp − 2
+
.
j∈S
2
nC22
4σ + C2 (K0 + K0 κ) M0
q
2
2
2 log p
16σ
and n ≥ λ232σ
, we have
Take C2 = (K +K4σκ)√M and η = 1/2. Then for λ > 1−κ
n
(1−κ)2
0
0
0
1−κ
c2
P maxc |Q1,j | >
≤ 4 exp (−c1 log p) + ,
j∈S
2
n
for some c1 , c2 > 0.
P
T
(ii) Now consider Q2,j = ni=1 (ΣnS,S )−1
j,S Xi,S i /n. Previous arguments still applies with
" n
#
X
2
1
1
)2 2i ≤ max XS (ΣnS,S )−1
max E
(Xi,S (ΣnS,S )−1
E 2i
.,j
S,j
n j∈S
n j∈S
2
i=1
σ2
,
j∈S
Cmin
" n
#
n
2
4
X
M0 X
−1 T
T
n
max Var
2(ΣnS,S )−1
X
/n
≤
max
2(Σ
)
X
i
S,S j,S i,S
j,S i,S
j∈S n4
j∈S
= max eTj (ΣnS,S )−1 ej σ 2 ≤
i=1
i=1
≤ max
j∈S
≤
n
4
16M0 X
−1
n
T
k(Σ
)
k
kX
k
1
∞
S,S j,S
i,S
n4
i=1
4
4
16M0 K1 K0
.
n3
Thus,
r
P max |Q2,j | > 8σ
j∈S
2 log p
Cmin n
!
≤ 4 exp(−c3 log p) +
c4
.
n
Lemma A.2 (Selection consistency of the bootstrapped Lasso). For β̌ ∗ defined in (38) and Tj∗
defined in (39), if maxj∈S c |Tj∗ | < 1, Ŝ ⊆ S and ΣnS,S is invertible, then β̌ ∗ is the unique solution to
the bootstrapped Lasso and Ŝ ∗ ⊆ S.
Proof. In the event that {Ŝ ⊆ S}, β̂S c = 0. By the KKT condition of β̌S∗ in (38),
ΣnS,S (β̌S∗ − β̂S ) − WS∗ + λsgn(β̌S∗ ) = 0.
If |Tj∗ | < 1 for ∀ j ∈ S c , then there exists sgn(β̌S∗ c ) such that
ΣnS c ,S (β̌S∗ − β̂S ) − WS∗c + λsgn(β̌S∗ c ) = 0.
And hence there exists sgn(β̌ ∗ ) such that β̌ ∗ in (38) is a solution to
Σn (β̌ ∗ − β̂) − W ∗ + λsgn(β̌ ∗ ) = 0,
which is the KKT condition of the bootstrapped Lasso (37). By Lemma 1 in Wainwright (2009), β̌ ∗
is an optimal solution to the bootstrapped Lasso problem (37). Moreover, β̌ ∗ is the unique solution,
since ΣnS,S is invertible and |Tj∗ | < 1 for ∀ j ∈ S c . This implies that Ŝ ∗ ⊆ S.
Lemma A.3. Under Conditions 4.1 - 4.4, we have the following results.
(i) Let c1 > 4, c2 > 0. For n > c1 s, with probability at least 1 − 2 exp(−c2 n) → 1,
Λmax ((ΣnS,S )−1 ) ≤
4
.
Cmin
(80)
√
√
√
√
√
√
(ii) Let cn = ( s ∨ log p)/ n and Cn = 4 scn /(1 − 2cn )2 = O((s ∨ s log p)/ n). With
probability at least 1 − 2 exp(− log p/2),
k(ΣnS,S )−1 k∞ ≤ K1 (1 + Cn ) .
(81)
Proof. Let X̃ = XΣ−1/2 and Σ̃n = X̃ T X̃/n. Then Σ̃n = Ip×p .
By Corollary 5.35 of Vershynin (2010), with probability at least 1 − 2 exp(−x2 /2),
r
r
2
2
s
s
x
x
n
n
1−
−√
+√
≤ Λmin (Σ̃S,S ) ≤ Λmax (Σ̃S,S ) ≤ 1 +
.
(82)
n
n
n
n
√
√
√
√
For x = n/2 − s and n 4s, with probability at least 1 − 2 exp(−( n/2 − s)2 /2) → 1,
Λmin (Σ̃nS,S ) ≥ 1/4.
And hence,
n
Λmax ((Σ̃nS,S )−1 ) = Λ−1
min (Σ̃S,S ) ≤ 4
−1/2
−1/2
Λmax ((ΣnS,S )−1 ) = k(ΣnS,S )−1 k2 ≤ kΣS,S k2 k(Σ̃nS,S )−1 k2 kΣS,S k2 ≤
Moreover,
k(Σ̃nS,S )−1 k∞ ≤ 1 + k(Σ̃nS,S )−1 − Ik∞
√
≤ 1 + sk(Σ̃nS,S )−1 − Ik2
√
≤ 1 + sΛmax ((Σ̃nS,S )−1 ) − I)
√
n
≤ 1 + s(Λ−1
min (Σ̃S,S ) − 1).
4
Cmin
.
Taking x =
√
s∨
√
log p in (82), we have with probability 1 − exp(− log p/2),
√
−2
√
s ∨ log p
−1
n
√
−1
Λmin (Σ̃S,S ) − 1 ≤ 1 − 2
n
√
√
√
√
√
4( s ∨ log p)/ n − 4( s ∨ log p)2 /n
=
2
√ √
1 − 2 s∨√nlog p
√
≤ Cn / s.
Putting these arguments together, we have
−1/2
(ΣnS,S )−1
∞
−1/2
≤ ΣS,S (Σ̃nS,S )−1 ΣS,S
≤
−1/2
kΣS,S k2∞
(Σ̃nS,S )−1
∞
∞
= K1 (1 + Cn ).
q
Lemma A.4 (Consistency of variance estimator in (7)). Assume that n s log p and λ logn p .
If either (i) Conditions 3.1 - 3.5 hold true, or (ii) Conditions 4.1 - 4.5 hold true, then we have
σ̂ 2 = σ 2 + oP (1).
(83)
Proof. (i) For deterministic designs with satisfying Condition 3.4, we have
kXST /nk∞ = OP (λ),
By (77) in Lemma A.1, In the event that (17) holds, we have
kûS k22 ≤ 2k(ΣnS,S )−1 WSn k22 + 2λ2 k(ΣnS,S )−1 sgn(β̂S )k22
2λ2
≤ 2sk(ΣnS,S )−1 WSn k2∞ + 2 ksgn(β̂S )k22
Cmin
s log p
= OP
.
n
p
√
Therefore, kûS k1 ≤ skûS k2 = OP (s log p/n) and |T XS ûS | ≤ nkWSn k∞ kûS k1 = OP (s log p), by
a similar argument as for (77). Moreover, by the KKT condition of the Lasso (1),
s log p
T n
T
n
n
ûS ΣS,S ûS = ûS WS − λsgn(β̂S ) ≤ kûS k1 kWS − λsgn(β̂S )k∞ = OP
.
(84)
n
Note that |Ŝ| ≤ |S| n. Hence,
σ̂ 2 =
1
n − |Ŝ|
1
ky − X β̂k22
kk22 + kX ûk22 − 2T X û
n − |Ŝ|
1 s log p
= σ 2 + OP
+
+ oP (1)
n
n
=
= σ 2 + oP (1).
(85)
(ii) For the Gaussian designs with Gaussian errors (Condition 4.4), we have
x2
T
.
P kXS k∞ > x|XS ≤ s exp −
2nσ 2
In the event that (29) holds,
kûS k22 ≤ 2k(ΣnS,S )−1 WSn k22 + 2λ2 k(ΣnS,S )−1 sgn(β̂S )k22
8
n 2
2
= OP
skWS k∞ + sλ
Cmin
s log p
= OP
.
nCmin
Hence, kûS k1 = OP (sλ). By (84), (85) and (80) in Lemma A.3,
1 s log p
2
2
σ̂ = σ + OP
+
= σ 2 + oP (1).
n
n
Deep Health Care Text Classification
Vinayakumar R, Barathi Ganesh HB, Anand Kumar M, Soman KP
Center for Computational Engineering and Networking (CEN), Amrita School of
Engineering Coimbatore, Amrita Vishwa Vidyapeetham, Amrita University, India,
[email protected],[email protected],
m [email protected], kp [email protected]
arXiv:1710.08396v1 [cs.CL] 23 Oct 2017
Abstract
Health-related social media mining is a valuable apparatus for the early recognition of diverse adverse
medical conditions. Most existing methods are based on machine learning with knowledge-based learning.
This working note presents Recurrent neural network (RNN) and Long short-term memory (LSTM) based embeddings for automatic health text classification in social media mining. For each task, two systems are built that
classify tweets at the tweet level. RNN and LSTM are used for extracting features, and a non-linear activation function at the last layer facilitates distinguishing tweets of different categories. The experiments are conducted on the 2nd
Social Media Mining for Health Applications Shared Task at AMIA 2017. The experimental results are considerable;
however, the proposed method is appropriate for health text classification. This is primarily because
it does not rely on any feature engineering mechanisms.
Introduction
With the expansion of microblogging platforms such as Twitter, the Internet is progressively being utilized to spread
health information, rather than merely serving as a source of data1, 2 . Twitter allows users to share status messages,
typically called tweets, restricted to 140 characters. Most of the time, these tweets express opinions about
topics. Thus the analysis of tweets has been considered a significant task in many applications, here for health-related
applications.
Health text classification is considered a special case of text classification. The existing methods have used
machine learning with feature engineering. The most commonly used features are n-grams, parts-of-speech tags,
term frequency-inverse document frequency, semantic features such as mentions of chemical substances and diseases,
WordNet synsets, adverse drug reaction lexicons, etc.3–6, 16 In6, 7 , ensemble-based approaches were proposed for classifying
adverse drug reaction tweets.
Recently, deep learning methods have performed well8 and have been used in many tasks, mainly because they do not rely on
any feature engineering mechanism. However, the performance of deep learning methods implicitly relies on large
amounts of raw data. To make use of unlabeled data,9 proposed a semi-supervised approach based on Convolutional
neural networks for adverse drug event detection. Though the data sets of task 1 and task 2 are limited, this paper
proposes an RNN and LSTM based embedding method.
Background and hyper parameter selection
This section discusses the concepts of tweet representation and deep learning algorithms particularly recurrent neural
network (RNN) and long short-term memory (LSTM) in a mathematical way.
Tweet representation
Representation of tweets is typically called tweet encoding. This contains two steps. The tweets are tokenized to
words during the first step. Moreover, all words are transformed to lower-case. In the second step, a dictionary is formed
by assigning a unique key to each word in a tweet. The unknown words in a tweet are assigned the default key 0.
To retain the word order in a tweet, each word is replaced by a unique number according to the dictionary. Each tweet
vector sequence is made the same length by choosing a particular length. Tweet sequences that are longer than
the particular length are discarded and shorter ones are padded with zeros. This type of word vector representation is passed
as input to the word embedding layer. For task 1, the maximum tweet sequence length is 35. Thus the train matrix
of shape 6725*35 and the validation matrix of shape 3535*35 are passed as input to an embedding layer. For task 2, the maximum
tweet sequence length is 34. Thus the train matrix of shape 1065*34 and the validation matrix of shape 712*34 are passed as input to
an embedding layer. The word embedding layer transforms the word vector to the word embedding by using the following
mathematical operation.
    input-shape ∗ weights-of-word-embedding = (nb-words, word-embedding-dimension)    (1)
where input-shape = (nb-words, vocabulary-size), nb-words denotes the number of top words, vocabulary-size denotes
the number of unique words, weights-of-word-embedding = (vocabulary-size, word-embedding-dimension), and word-embedding-dimension denotes the size of the word embedding vector. This kind of mathematical operation transforms
the discrete numbers into vectors of continuous numbers. The word embedding layer captures the semantic meaning
of the tweet sequence by mapping it into a high dimensional geometric space. This high dimensional geometric
space is called an embedding space. If an embedding properly learns the semantics of a tweet by encoding it as
a real valued vector, then similar tweets appear in the same cluster, close to each other in the high dimensional
geometric space. To select an optimal value for the embedding size, two trials of experiments are run with embedding
sizes 128, 256 and 512. For each experiment, the learning rate is set to 0.01. The experiments with embedding size 512
performed well in both the RNN and LSTM networks. Thus for the rest of the experiments the embedding size is set
to 512. The embedding layer output vector is further passed to an RNN and its variant LSTM layer. The RNN and LSTM
obtain the optimal feature representation, and those feature representations are passed to the dropout layer. The dropout
layer uses a rate of 0.1, which removes neurons and their connections randomly. This acts as a regularization parameter.
In task 1 the output layer contains a sigmoid activation function, and a softmax activation function for task 2.
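As a rough sketch (not from the paper) of the encoding step just described, the snippet below uses Keras utilities to build the dictionary, map words to integer keys and pad every tweet to the fixed length; the vocabulary cap num_words is an assumed value.

```python
from tensorflow.keras.preprocessing.text import Tokenizer
from tensorflow.keras.preprocessing.sequence import pad_sequences

MAX_LEN = 35   # maximum tweet sequence length for task 1 (use 34 for task 2)

def encode_tweets(train_texts, valid_texts, num_words=20000):
    """Tokenize, lower-case and index tweets, then pad/truncate them to MAX_LEN."""
    tokenizer = Tokenizer(num_words=num_words, lower=True)
    tokenizer.fit_on_texts(train_texts)          # build the word -> integer dictionary
    x_train = pad_sequences(tokenizer.texts_to_sequences(train_texts),
                            maxlen=MAX_LEN, padding='post', truncating='post')
    x_valid = pad_sequences(tokenizer.texts_to_sequences(valid_texts),
                            maxlen=MAX_LEN, padding='post', truncating='post')
    return x_train, x_valid, tokenizer           # e.g. 6725x35 and 3535x35 for task 1
```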
Recurrent neural network (RNN) and its variant
Recurrent neural network (RNN) was an enhanced model of the feed forward network (FFN) introduced in 199010. The
input sequences xT of arbitrary length are passed to the RNN and a transition function tf maps them into a hidden state
vector hit recursively. The hidden state vector hit is calculated by applying the transition function tf to the present
input xt and the previous hidden state vector hit−1 . This can be mathematically formulated as follows
    hit = 0 if t = 0, and hit = tf (hit−1 , xt ) otherwise.    (2)
This kind of transition function results in the vanishing and exploding gradient issue while training11. To alleviate this, LSTM
was introduced11–13 . An LSTM network contains a special unit typically called a memory block. A memory block is
composed of a memory cell m and a set of gating functions such as the input gate (ig), forget gate (f g) and output gate
(og) to control the states of the memory cell. The transition function tf for each LSTM unit is defined below
igt = σ(wig xt + Pig hit−1 + Qig mt−1 + big )
(3)
f gt = σ(wf g xt + Pf g hit−1 + Qf g mt−1 + bf g )
(4)
ogt = σ(wog xt + Pog hit−1 + Qog mt−1 + bog )
(5)
m1t = tanh(wm xt + Pm hit−1 + bm )
(6)
mt = f gt ⊙ mt−1 + igt ⊙ m1t
(7)
hit = ogt ⊙ tanh(mt )
(8)
where xt is the input at time step t, P and Q are weight parameters, σ is the sigmoid activation function, and ⊙ denotes
element-wise multiplication.
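A minimal Keras sketch of the pipeline described above (embedding of size 512, an RNN/LSTM layer, dropout of 0.1 and a sigmoid or softmax output) is given below; the number of recurrent units, the vocabulary size and the choice of optimizer are assumptions — only the embedding size, dropout rate and learning rate are taken from the text.

```python
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Embedding, LSTM, SimpleRNN, Dropout, Dense
from tensorflow.keras.optimizers import Adam

def build_model(vocab_size, max_len=35, n_classes=1, use_lstm=True):
    """Embedding -> LSTM (or RNN) -> Dropout(0.1) -> Dense output."""
    recurrent = LSTM(128) if use_lstm else SimpleRNN(128)   # 128 units is an assumption
    model = Sequential([
        Embedding(input_dim=vocab_size, output_dim=512, input_length=max_len),
        recurrent,
        Dropout(0.1),
        # task 1: one sigmoid unit; task 2: 3-way softmax
        Dense(n_classes, activation='sigmoid' if n_classes == 1 else 'softmax'),
    ])
    loss = 'binary_crossentropy' if n_classes == 1 else 'categorical_crossentropy'
    model.compile(optimizer=Adam(learning_rate=0.01), loss=loss, metrics=['accuracy'])
    return model
```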
Experiments
This section discusses the data set details of task 1 and task 2, followed by experiments related to parameter tuning.
Task 1 aims at classifying Twitter posts according to whether an adverse drug reaction (ADR) is mentioned or not. Task 2
aims at classifying Twitter posts into personal medication intake, possible medication intake or non-intake. The data
sets for both tasks are provided by the shared task committee and their detailed statistics are reported in Table 1
and Table 2. Each task data set is composed of train, validation and test data sets.
Table 1: Task 1 Data Statistics
Data         Total # Tweets   Total # Classes   # ADR Mentioned Tweets   # ADR not Mentioned Tweets
Training     6725             2                 721                      6004
Validation   3535             2                 240                      3295
Testing      9961             2                 9190                     771
Table 2: Task 2 Data Statistics
Data         Total # Tweets   Total # Classes   Personal Medicine Intake   Possible Medicine Intake   Non Intake
Training     1065             3                 192                        373                        500
Validation   712              3                 125                        230                        357
Testing      7513             3                 1731                       2697                       3085
Results
All experiments are trained using backpropagation through time (BPTT)14 on a Graphics processing unit (GPU) enabled
TensorFlow15 computational framework in conjunction with the Keras framework on Ubuntu 14.04. We have submitted
one run based on LSTM for task 1 and two runs, one based on RNN and the other based on LSTM,
for task 2. The evaluation results given by the shared task committee are reported in Tables 3 and 4.
Table 3: Task 1 Results
Run   ADR Precision   ADR Recall   ADR F-score
1     0.078           0.17         0.107
Table 4: Task 2 Results
Run   Micro-averaged precision   Micro-averaged recall   Micro-averaged F-score
      for classes 1 and 2        for classes 1 and 2     for classes 1 and 2
1     0.414                      0.107                   0.171
2     0.843                      0.487                   0.617
Conclusion
Social media mining is an important source of information in many health applications. This working
note presents an RNN and LSTM based embedding system for social media health text classification. Due to the limited
number of tweets, the performance of the proposed method is low. However, the obtained results are considerable
and open the way for future application to social media health text classification. Moreover, the performance of the
LSTM based embedding for task 2 is good in comparison to task 1. This is primarily due to the fact that the target
classes of the task 1 data set are imbalanced. Hence, the proposed method can be applied to a larger corpus of tweets in
order to attain the best performance.
References
1. Lee, Kathy, Ankit Agrawal, and Alok Choudhary, Real-time disease surveillance using twitter data: Demonstration
on flu and cancer, In Proceedings of the 19th ACM SIGKDD International Conference on Knowledge Discovery
and Data Mining, KDD 13, pages 14741477, New York, NY, USA, 2013. ACM.
2. Lee, Kathy, Ankit Agrawal, and Alok Choudhary, Mining social media streams to improve public health allergy
surveillance, In 2015 IEEE/ACM International Conference on Advances in Social Networks Analysis and Mining
(ASONAM), pages 815822, Aug 2015.
3. Sarker, Abeed, and Graciela Gonzalez Portable automatic text classification for adverse drug reaction detection via
multi-corpus training, Journal of Biomedical Informatics, 53:196 207, 2015.
4. Jonnagaddala, J. I. T. E. N. D. R. A., TONI ROSE Jue, and H. J. Dai Binary classification of Twitter posts for
adverse drug reactions, Proceedings of the Social Media Mining Shared Task Workshop at the Pacific Symposium
on Biocomputing, Big Island, HI, USA. 2016.
5. Dai, Hong-Jie, Musa Touray, Jitendra Jonnagaddala, and Shabbir Syed-Abdul Feature engineering for recognizing
adverse drug reactions from twitter posts, Information 7, no. 2 (2016): 27
6. Rastegar-Mojarad, M. A. J. I. D., Ravikumar Komandur Elayavilli, Yue Yu, and Hongfang Liu Detecting signals
in noisy data-can ensemble classifiers help identify adverse drug reaction in tweets In Proceedings of the Social
Media Mining Shared Task Workshop at the Pacific Symposium on Biocomputing. 2016.
7. Zhang, Zhifei, J. Y. Nie, and Xuyao Zhang An ensemble method for binary classification of adverse drug reactions
from social media,In Proceedings of the Social Media Mining Shared Task Workshop at the Pacific Symposium on
Biocomputing. 2016.
8. Nguyen, Huy, and Minh-Le Nguyen A Deep Neural Architecture for Sentence-level Sentiment Classification in
Twitter Social Networking,arXiv preprint arXiv:1706.08032 (2017).
9. Lee, Kathy, Ashequl Qadir, Sadid A. Hasan, Vivek Datla, Aaditya Prakash, Joey Liu, and Oladimeji Farri Adverse
Drug Event Detection in Tweets with Semi-Supervised Convolutional Neural Networks, In Proceedings of the 26th
International Conference on World Wide Web, pp. 705-714. International World Wide Web Conferences Steering
Committee, 2017.
10. Elman, Jeffrey L. Finding structure in time, Cognitive science 14.2 (1990): 179-211.
11. Hochreiter, Sepp, and Jürgen Schmidhuber. Long short-term memory, Neural computation 9, no. 8 (1997): 1735-1780.
12. Gers, Felix A., Jürgen Schmidhuber, and Fred Cummins. Learning to forget: Continual prediction with
LSTM, (1999): 850-855.
13. Gers, Felix A., Nicol N. Schraudolph, and Jürgen Schmidhuber. Learning precise timing with LSTM recurrent
networks, Journal of machine learning research 3.Aug (2002): 115-143.
14. Werbos, Paul J Backpropagation through time: what it does and how to do it, Proceedings of the IEEE 78.10
(1990): 1550-1560.
15. Abadi, M., Barham, P., Chen, J., Chen, Z., Davis, A., Dean, J., Devin, M., Ghemawat, S., Irving, G., Isard, M.
and Kudlur, M. TensorFlow: A System for Large-Scale Machine Learning, In OSDI, vol. 16, pp. 265-283. 2016.
16. Barathi Ganesh HB, Anand Kumar M and Soman KP. ”Distributional Semantic Representation in Health Care
Text Classification.” (2016).
Fixed-point Characterization of Compositionality Properties
of Probabilistic Processes Combinators
Daniel Gebler
Department of Computer Science, VU University Amsterdam,
De Boelelaan 1081a, NL-1081 HV Amsterdam, The Netherlands
[email protected]
Simone Tini
Department of Scienza e Alta Tecnologia,
University of Insubria, Via Valleggio 11, I-22100, Como, Italy
[email protected]
Bisimulation metric is a robust behavioural semantics for probabilistic processes. Given any SOS
specification of probabilistic processes, we provide a method to compute for each operator of the
language its respective metric compositionality property. The compositionality property of an operator is defined as its modulus of continuity which gives the relative increase of the distance between
processes when they are combined by that operator. The compositionality property of an operator
is computed by recursively counting how many times the combined processes are copied along their
evolution. The compositionality properties allow to derive an upper bound on the distance between
processes by purely inspecting the operators used to specify those processes.
Keywords: SOS, probabilistic transition systems, bisimulation metric, compositionality, continuity
1
Introduction
Over the last decade a number of researchers have started to develop a theory of structural operational
semantics for probabilistic transition systems (PTSs). Several rule formats for various PTSs were proposed that ensure compositionality of bisimilarity [3, 10, 27] and of approximate bisimilarity [24, 31].
We will consider specifications with rules of the probabilistic GSOS format [3, 9, 29] in order to describe
nondeterministic probabilistic transition systems [30].
Bisimilarity is very sensitive to the exact probabilities of transitions. The slightest perturbation of the
probabilities can destroy bisimilarity. Bisimulation metric [6, 7, 13–16, 25] provides a robust semantics
for probabilistic processes. It is the quantitative analogue to bisimulation equivalence and assigns to
each pair of processes a distance which measures the proximity of their quantitative properties. The
distances form a pseudometric with bisimilar processes at distance 0. Alternative approaches towards a
robust semantics for probabilistic processes are approximate bisimulation [17, 25, 32] and bisimulation
degrees [33]. We consider bisimulation metrics as convincingly argued in e.g. [6, 15, 25].
For compositional specification and reasoning it is necessary that the considered behavioral semantics
is compatible with all operators of the language. For bisimulation metric semantics this is the notion
of uniform continuity. Intuitively, an operator is uniformly continuous if processes composed by that
operator stay close whenever their respective subprocesses are replaced by close subprocesses.
In the 1990s, rule formats that guarantee compositionality of the specified operators have been proposed by (reasonable) argumentation for admissible rules. Prominent examples are the GSOS format [5]
and the ntyft/ntyxt [26] format. More recently, the development of compositional proof systems for the
satisfaction relation of HML-formulae [18, 23] allowed to derive rule formats from the logical characterization of the behavioral relation under investigation [4, 19–21].
We propose a new approach that allows to derive for any given specification the compositionality
property of each of its specified operators. The compositionality properties are derived from an appropriate denotational model of the specified language. First, we develop for a concrete process algebra an
appropriate denotational model. The denotation of an open process term describes for each resolution of
the nondeterministic choices how many instances of each process variable are spawned while the process evolves. The number of spawned process replicas is weighted by the likelihood of its realization just
like the bisimulation metric weights the distance between target states by their reachability. We derive
from the denotation of an open process term an upper bound on the bisimulation distance between the
closed instances of the denoted process. Then we generalize this method to arbitrary processes whose
operational semantics is specified by probabilistic GSOS rules. In fact, the upper bound on the bisimulation distance between closed instances of f (x1 , . . . , xr( f ) ) is a modulus of continuity of operator f if the
denotation of f (x1 , . . . , xr( f ) ) is finitely bounded. In this case the operator f is uniformly continuous and
admits for compositional reasoning wrt. bisimulation metric.
This paper continues our research programme towards a theory of robust specifications for probabilistic processes. Earlier work [24] investigated compositional process combinators with respect to approximate bisimulation. Besides the different semantics considered in this paper, we extend substantially on the approach of [24] by using the newly developed denotational approach. The denotational model separates clearly between nondeterministic choice, probabilistic choice, and process replication. This also answers the open question of [24] of how the distance of processes composed by process combinators with a nondeterministic operational semantics can be approximated.
2 Preliminaries
2.1 Probabilistic Transition Systems
A signature is a structure Σ = (F, r), where (i) F is a countable set of operators, and (ii) r : F → N is
a rank function. r( f ) gives the arity of operator f . We write f ∈ Σ for f ∈ F. We assume an infinite
set of state variables Vs disjoint from F. The set of Σ-terms (also called state terms) over V ⊆ Vs ,
notation T (Σ, V), is the least set satisfying: (i) V ⊆ T (Σ, V), and (ii) f (t1 , . . . , tr( f ) ) ∈ T (Σ, V) for f ∈ Σ and
t1, . . . , tr(f) ∈ T(Σ, V). T(Σ, ∅) is the set of all closed terms, abbreviated as T(Σ). T(Σ, Vs) is the set of open terms, abbreviated as 𝕋(Σ). We may refer to operators as process combinators, to variables as
process variables, and to closed terms as processes. Var(t) denotes the set of all state variables in t.
Probability distributions are mappings π : T(Σ) → [0, 1] with ∑_{t∈T(Σ)} π(t) = 1 that assign to each closed term t ∈ T(Σ) its respective probability π(t). By ∆(T(Σ)) we denote the set of all probability distributions on T(Σ). We let π range over ∆(T(Σ)). The probability mass of T ⊆ T(Σ) in π is defined by π(T) = ∑_{t∈T} π(t). Let δt for t ∈ T(Σ) denote the Dirac distribution, i.e., δt(t) = 1 and δt(t′) = 0 if t and t′ are syntactically not equal. The convex combination ∑_{i∈I} qi πi of a family {πi}_{i∈I} of probability distributions πi ∈ ∆(T(Σ)) with qi ∈ (0, 1] and ∑_{i∈I} qi = 1 is defined by (∑_{i∈I} qi πi)(t) = ∑_{i∈I} (qi πi(t)). By f(π1, . . . , πr(f)) we denote the distribution defined by f(π1, . . . , πr(f))(f(t1, . . . , tr(f))) = ∏_{i=1}^{r(f)} πi(ti). We may write π1 f π2 for f(π1, π2).
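To make these definitions concrete, the following minimal sketch (not part of the paper; the helper names and the plain-string encoding of closed terms are our own simplifying assumptions) represents finitely supported distributions as Python dictionaries and implements the Dirac distribution, convex combination, and the lifting f(π1, . . . , πn).

    # Sketch only: a finitely supported distribution over closed terms is a dict
    # mapping terms (here: strings) to probabilities that sum to 1.

    def dirac(t):
        """Dirac distribution delta_t."""
        return {t: 1.0}

    def convex(pairs):
        """Convex combination sum_i q_i * pi_i of distributions pi_i with weights q_i."""
        out = {}
        for q, pi in pairs:
            for t, p in pi.items():
                out[t] = out.get(t, 0.0) + q * p
        return out

    def lift(f, *pis):
        """Distribution f(pi_1, ..., pi_n): f(t_1, ..., t_n) gets the product of the pi_i(t_i)."""
        out = {}
        def rec(prefix, prob, rest):
            if not rest:
                term = f + "(" + ", ".join(prefix) + ")"
                out[term] = out.get(term, 0.0) + prob
                return
            for t, p in rest[0].items():
                rec(prefix + [t], prob * p, rest[1:])
        rec([], 1.0, list(pis))
        return out

    pi = convex([(0.9, dirac("a.0")), (0.1, dirac("0"))])
    print(lift("+", pi, dirac("0")))   # {'+(a.0, 0)': 0.9, '+(0, 0)': 0.1}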
In order to describe probabilistic behavior, we need expressions that denote probability distributions. We assume an infinite set of distribution variables Vd . We let µ range over Vd , and x, y range
over V = Vs ∪ Vd . The set of distribution terms over state variables V s ⊆ Vs and distribution variables
D. Gebler & S. Tini
65
Vd ⊆ Vd, notation T(Γ, Vs, Vd), with Γ denoting the signature extending Σ by operators to describe distributions, is the least set satisfying: (i) Vd ∪ {δ(t) | t ∈ T(Σ, Vs)} ⊆ T(Γ, Vs, Vd), (ii) ∑_{i∈I} qi θi ∈ T(Γ, Vs, Vd) if θi ∈ T(Γ, Vs, Vd) and qi ∈ (0, 1] with ∑_{i∈I} qi = 1, and (iii) f(θ1, . . . , θr(f)) ∈ T(Γ, Vs, Vd) if f ∈ Σ and θi ∈ T(Γ, Vs, Vd). A distribution variable µ ∈ Vd is a variable that takes values from ∆(T(Σ)). An instantiable Dirac distribution δ(t) is an expression that takes as value the Dirac distribution δt′ when the variables in t are substituted so that t becomes the closed term t′. Case (ii) allows us to construct convex combinations of distributions. We write θ1 ⊕q θ2 for qθ1 + (1 − q)θ2. Case (iii) lifts the structural inductive construction of state terms to distribution terms. 𝕋(Γ) denotes T(Γ, Vs, Vd). Var(θ) denotes the set of all state and distribution variables in θ.
A substitution is a mapping σ : V → 𝕋(Σ) ∪ 𝕋(Γ) such that σ(x) ∈ 𝕋(Σ) if x ∈ Vs, and σ(µ) ∈ 𝕋(Γ) if µ ∈ Vd. A substitution extends to a mapping from state terms to state terms as usual. A substitution extends to distribution terms by σ(δ(t)) = δσ(t), σ(∑_{i∈I} qi θi) = ∑_{i∈I} qi σ(θi) and σ(f(θ1, . . . , θr(f))) = f(σ(θ1), . . . , σ(θr(f))). Notice that closed instances of distribution terms are probability distributions.
Probabilistic transition systems generalize labelled transition systems (LTSs) by allowing for probabilistic choices in the transitions. We consider nondeterministic probabilistic LTSs (Segala-type systems) [30] with countable state spaces.
Definition 1 (PTS) A nondeterministic probabilistic labeled transition system (PTS) is given by a triple (T(Σ), A, →), where Σ is a signature, A is a countable set of actions, and → ⊆ T(Σ) × A × ∆(T(Σ)) is a transition relation.
We write t −a→ π for (t, a, π) ∈ →, and t −a→ if t −a→ π for some π ∈ ∆(T(Σ)).
2.2 Specification of Probabilistic Transition Systems
We specify PTSs by SOS rules of the probabilistic GSOS format [3] and adapt from [29] the language to describe distributions. We do not consider quantitative premises because they are incompatible¹ with compositional approximate reasoning.
Definition 2 (PGSOS rule) A PGSOS rule has the form:

    { xi −ai,m→ µi,m | i ∈ I, m ∈ Mi }      { xi −bi,n↛ | i ∈ I, n ∈ Ni }
    ───────────────────────────────────────────────────────────────────
                        f(x1, . . . , xr(f)) −a→ θ

with I = {1, . . . , r(f)} the indices of the arguments of operator f ∈ Σ, finite index sets Mi, Ni, actions ai,m, bi,n, a ∈ A, state variables xi ∈ Vs, distribution variables µi,m ∈ Vd, distribution term θ ∈ T(Γ), and constraints:
1. all µi,m for i ∈ I, m ∈ Mi are pairwise different;
2. all x1, . . . , xr(f) are pairwise different;
3. Var(θ) ⊆ {µi,m | i ∈ I, m ∈ Mi} ∪ {x1, . . . , xr(f)}.
The expressions xi −ai,m→ µi,m (resp. xi −bi,n↛) above the line are called positive (resp. negative) premises. We call µi,m in xi −ai,m→ µi,m a derivative of xi. We denote the set of positive (resp. negative) premises of rule r by pprem(r) (resp. nprem(r)). The expression f(x1, . . . , xr(f)) −a→ θ below the line is called the conclusion, notation conc(r), f(x1, . . . , xr(f)) is called the source, notation src(r), the xi are called the source variables, notation xi ∈ src(r), and θ is called the target, notation trgt(r).

¹ Cases 8 and 9 in [24] show that rules with quantitative premises may define operators that are not compositional w.r.t. approximate bisimilarity. The same holds for metric bisimilarity.
A probabilistic transition system specification (PTSS) in PGSOS format is a triple P = (Σ, A, R), where Σ is a signature, A is a countable set of actions and R is a countable set of PGSOS rules. Rf is the set of those rules of R with source f(x1, . . . , xr(f)). A supported model of P is a PTS (T(Σ), A, →) such that t −a→ π ∈ → iff for some rule r ∈ R and some closed substitution σ all premises of r hold, i.e. for all xi −ai,m→ µi,m ∈ pprem(r) we have σ(xi) −ai,m→ σ(µi,m) ∈ →, for all xi −bi,n↛ ∈ nprem(r) we have σ(xi) −bi,n→ π ∉ → for all π ∈ ∆(T(Σ)), and the conclusion conc(r) = f(x1, . . . , xr(f)) −a→ θ instantiates to σ(f(x1, . . . , xr(f))) = t and σ(θ) = π. Each PGSOS PTSS has exactly one supported model [2, 5], which is moreover finitely branching.
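For readers who prefer a computational view, a PGSOS rule can be represented as a simple record. The sketch below is ours (the field names and the string encoding of terms are assumptions, not part of the format); as an example it encodes the synchronization rule of parallel composition ∥B given in Section 3.

    # Illustrative sketch: a PGSOS rule as a plain record; positive premises are
    # (variable, action, derivative) triples, negative premises (variable, action) pairs.
    from dataclasses import dataclass, field
    from typing import List, Tuple

    @dataclass
    class PGSOSRule:
        operator: str                                   # f
        arity: int                                      # r(f)
        pos_premises: List[Tuple[str, str, str]]        # x_i --a--> mu
        neg_premises: List[Tuple[str, str]] = field(default_factory=list)
        action: str = ""                                # label of the conclusion
        target: str = ""                                # distribution term theta

    # x1 --a--> mu1   x2 --a--> mu2
    # ------------------------------  (a in B)
    # x1 ||_B x2 --a--> mu1 ||_B mu2
    sync = PGSOSRule(
        operator="||_B", arity=2,
        pos_premises=[("x1", "a", "mu1"), ("x2", "a", "mu2")],
        action="a", target="mu1 ||_B mu2",
    )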
2.3 Bisimulation metric on Probabilistic Transition Systems
Behavioral pseudometrics are the quantitative analogue to behavioral equivalences and formalize the
notion of behavioral distance between processes. A 1-bounded pseudometric is a function d : T(Σ) ×
T(Σ) → [0, 1] with (i) d(t, t) = 0, (ii) d(t, t′ ) = d(t′ , t), and (iii) d(t, t′ ) ≤ d(t, t′′ ) + d(t′′ , t′ ), for all terms
t, t′ , t′′ ∈ T(Σ).
We now define bisimulation metrics as the quantitative analogue of bisimulation equivalences. As for bisimulation, we need to lift the behavioral pseudometric on states T(Σ) to distributions ∆(T(Σ)) and to sets of distributions P(∆(T(Σ))). A matching ω ∈ ∆(T(Σ) × T(Σ)) for (π, π′) ∈ ∆(T(Σ)) × ∆(T(Σ)) is given if ∑_{t′∈T(Σ)} ω(t, t′) = π(t) and ∑_{t∈T(Σ)} ω(t, t′) = π′(t′) for all t, t′ ∈ T(Σ). We denote the set of all matchings for (π, π′) by Ω(π, π′). The Kantorovich pseudometric K(d) : ∆(T(Σ)) × ∆(T(Σ)) → [0, 1] is defined for a pseudometric d : T(Σ) × T(Σ) → [0, 1] by

    K(d)(π, π′) = min_{ω∈Ω(π,π′)} ∑_{t,t′∈T(Σ)} d(t, t′) · ω(t, t′)

for π, π′ ∈ ∆(T(Σ)). The Hausdorff pseudometric H(d̂) : P(∆(T(Σ))) × P(∆(T(Σ))) → [0, 1] is defined for a pseudometric d̂ : ∆(T(Σ)) × ∆(T(Σ)) → [0, 1] by

    H(d̂)(Π1, Π2) = max{ sup_{π1∈Π1} inf_{π2∈Π2} d̂(π1, π2), sup_{π2∈Π2} inf_{π1∈Π1} d̂(π2, π1) }

for Π1, Π2 ⊆ ∆(T(Σ)), whereby inf ∅ = 1 and sup ∅ = 0.
A bisimulation metric is a pseudometric on states such that for two states each transition from one
state can be mimicked by a transition from the other state and the distance between the target distributions
does not exceed the distance of the source states.
Definition 3 (Bisimulation metric) A 1-bounded pseudometric d on T(Σ) is a bisimulation metric if for all t, t′ ∈ T(Σ) with d(t, t′) < 1, if t −a→ π then there exists a transition t′ −a→ π′ with K(d)(π, π′) ≤ d(t, t′).
We order bisimulation metrics by d1 ⊑ d2 iff d1(t, t′) ≤ d2(t, t′) for all t, t′ ∈ T(Σ). The smallest bisimulation metric, denoted d, is called the bisimilarity metric and assigns to each pair of processes the least possible distance. We also call the bisimilarity metric distance the bisimulation distance. Bisimilarity equivalence [28, 30] is the kernel of the bisimilarity metric [13], i.e. d(t, t′) = 0 iff t and t′ are bisimilar. We say that processes t and t′ do not totally disagree if d(t, t′) < 1.
D. Gebler & S. Tini
67
Remark 1 Let t, t′ be processes that do not totally disagree. Then t −a→ iff t′ −a→ for all a ∈ A, i.e. t and t′ agree on the actions they can perform immediately.
Bisimulation metrics can alternatively be defined as prefixed points of a monotone function. Let ([0, 1]^{T(Σ)×T(Σ)}, ⊑) be the complete lattice defined by d ⊑ d′ iff d(t, t′) ≤ d′(t, t′), for all t, t′ ∈ T(Σ). We define the function B : [0, 1]^{T(Σ)×T(Σ)} → [0, 1]^{T(Σ)×T(Σ)} for d : T(Σ) × T(Σ) → [0, 1] and t, t′ ∈ T(Σ) by:

    B(d)(t, t′) = sup_{a∈A} H(K(d))(der(t, a), der(t′, a))

with der(t, a) = {π | t −a→ π}.
Proposition 1 ([13]) The bisimilarity metric d is the least fixed point of B.
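As an illustration of the three liftings, the following sketch computes K(d) as a small optimal-transport linear program (using scipy), computes H(d̂) directly from its definition, and approximates the bisimilarity metric d from below by iterating B starting from the constant-0 function on a finite, finitely branching PTS. The helper names, the dictionary encoding of distributions, and the use of scipy are our own simplifying assumptions, not part of the paper.

    import itertools
    import numpy as np
    from scipy.optimize import linprog

    def kantorovich(d, pi1, pi2):
        """K(d)(pi1, pi2): least expected distance over all matchings (couplings)."""
        xs, ys = list(pi1), list(pi2)
        cost = np.array([d(x, y) for x, y in itertools.product(xs, ys)])
        n, m = len(xs), len(ys)
        A_eq, b_eq = [], []
        for i in range(n):                        # row marginals equal pi1
            row = np.zeros(n * m); row[i * m:(i + 1) * m] = 1.0
            A_eq.append(row); b_eq.append(pi1[xs[i]])
        for j in range(m):                        # column marginals equal pi2
            col = np.zeros(n * m); col[j::m] = 1.0
            A_eq.append(col); b_eq.append(pi2[ys[j]])
        return linprog(cost, A_eq=np.array(A_eq), b_eq=b_eq).fun

    def hausdorff(dhat, Pi1, Pi2):
        """H(dhat)(Pi1, Pi2), with inf over the empty set = 1 and sup over it = 0."""
        def one_way(A, B):
            return max((min((dhat(p, q) for q in B), default=1.0) for p in A), default=0.0)
        return max(one_way(Pi1, Pi2), one_way(Pi2, Pi1))

    def bisimilarity_metric(states, actions, der, rounds=50):
        """Iterate d_{k+1} = B(d_k) from d_0 = 0; der(t, a) is the list of reachable distributions."""
        d = {(s, t): 0.0 for s in states for t in states}
        for _ in range(rounds):
            dist = lambda s, t: d[(s, t)]
            d = {(s, t): max(hausdorff(lambda p, q: kantorovich(dist, p, q),
                                       der(s, a), der(t, a)) for a in actions)
                 for s in states for t in states}
        return d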
3 Denotational model
We develop now a denotational model for open terms. Essentially, the denotation of an open term t
describes for each variable in t how many copies are spawned while t evolves. The denotation of t allows
us to formulate an upper bound on the bisimulation distance between closed instances of t. In this section
we consider a concrete process algebra. In the next section we generalize our method to arbitrary PGSOS
specifications.
Let ΣPA be the signature of the core operators of the probabilistic process algebra in [10], defined by the stop process 0, a family of n-ary prefix operators a.([q1]_ ⊕ · · · ⊕ [qn]_) with a ∈ A, n ≥ 1, q1, . . . , qn ∈ (0, 1] and ∑_{i=1}^{n} qi = 1, alternative composition _ + _, and parallel composition _ ∥B _ for each B ⊆ A. We write a.⊕_{i=1}^{n}[qi]_ for a.([q1]_ ⊕ · · · ⊕ [qn]_), and a._ for a.[1]_ (deterministic prefix operator). Moreover, we write ∥ for ∥A (synchronous parallel composition). The PTSS PPA = (ΣPA, A, RPA) is given by the following PGSOS rules in RPA:
    ──────────────────────────────────────────────
    a.⊕_{i=1}^{n}[qi]xi −a→ ∑_{i=1}^{n} qi δ(xi)

    x1 −a→ µ1
    ──────────────────
    x1 + x2 −a→ µ1

    x2 −a→ µ2
    ──────────────────
    x1 + x2 −a→ µ2

    x1 −a→ µ1    x2 −a→ µ2
    ───────────────────────────  (a ∈ B)
    x1 ∥B x2 −a→ µ1 ∥B µ2

    x1 −a→ µ1
    ───────────────────────────  (a ∉ B)
    x1 ∥B x2 −a→ µ1 ∥B δ(x2)

    x2 −a→ µ2
    ───────────────────────────  (a ∉ B)
    x1 ∥B x2 −a→ δ(x1) ∥B µ2
We call the open terms 𝕋(ΣPA) nondeterministic probabilistic process terms. We define two important subclasses of 𝕋(ΣPA) that allow for a simpler approximation of the distance of their closed instances. Let Tdet(ΣPA) be the set of deterministic process terms, which are those terms of 𝕋(ΣPA) that are built exclusively from the stop process 0, deterministic prefix a._, and synchronous parallel composition ∥ (no nondeterministic and no probabilistic choices). We call the open terms in Tdet(ΣPA) deterministic because all probabilistic or nondeterministic choices in the operational semantics of the closed instances σ(t), with σ : Vs → T(ΣPA) any closed substitution, arise exclusively from the processes in σ. Let Tprob(ΣPA) be the set of probabilistic process terms, which are those terms of 𝕋(ΣPA) that are built exclusively from the stop process 0, probabilistic prefix a.⊕_{i=1}^{n}[qi]_, and synchronous parallel composition ∥ (no nondeterministic choices). Again, all nondeterministic choices in σ(t) arise exclusively from the processes in σ.
The denotation of a deterministic process term t ∈ Tdet(ΣPA) is a mapping m : V → N∞ that describes for each process variable x ∈ Var(t) how many copies of x or some derivative of x are spawned while t evolves. We call m the multiplicity of t. Let M be the set of all mappings V → N∞. The denotation of t, notation ⟦t⟧M, is defined by ⟦0⟧M(x) = 0, ⟦x⟧M(x) = 1, ⟦x⟧M(y) = 0 if x ≠ y, ⟦t1 ∥ t2⟧M(x) = ⟦t1⟧M(x) + ⟦t2⟧M(x), and ⟦a.t′⟧M(x) = ⟦t′⟧M(x).
We use the notation 0 ∈ M for the multiplicity that assigns 0 to each x ∈ V, and nV ∈ M with V ⊆ V for the multiplicity such that nV(x) = n if x ∈ V and nV(x) = 0 if x ∉ V. We write nx for n{x}. As will become clear in the next sections, we need the denotation m(x) = ∞ for (unbounded) recursion and replication.
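Concretely, for deterministic process terms encoded as small syntax trees the multiplicity can be computed by structural recursion. The tuple encoding below is our own illustration, not the paper's notation.

    # Sketch: deterministic process terms as nested tuples
    #   ("0",), ("var", x), ("pref", a, t), ("par", t1, t2)
    from collections import Counter

    def multiplicity(t):
        """[[t]]_M as a Counter mapping each variable to the number of spawned copies."""
        kind = t[0]
        if kind == "0":
            return Counter()
        if kind == "var":
            return Counter({t[1]: 1})
        if kind == "pref":                   # a.t'  has the multiplicity of t'
            return multiplicity(t[2])
        if kind == "par":                    # t1 || t2 sums the multiplicities
            return multiplicity(t[1]) + multiplicity(t[2])
        raise ValueError(kind)

    t = ("par", ("var", "x"), ("var", "x"))  # x || x
    print(multiplicity(t))                   # Counter({'x': 2})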
We will approximate the bisimulation distance between σ1 (t) and σ2 (t) for closed substitutions
σ1 , σ2 using the denotation of t and the bisimulation distances between processes σ1 (x) and σ2 (x) of
variables x ∈ Var(t). The bisimulation distance of variables is represented by a mapping e : V → [0, 1).
We call e a process distance. Let E be the set of all process distances V → [0, 1). We henceforth assume
closed substitutions σ1 , σ2 with a bisimulation distance between σ1 (x) and σ2 (x) that is strictly less than
1. Practically, this is a very mild restriction because for any (non-trivial) process combinator the composition of processes that totally disagree (i.e. which are in bisimulation distance 1) may lead to composed
processes that again totally disagree. For any d : T(Σ) × T(Σ) → [0, 1] and any closed substitutions σ1 , σ2
we define the associated process distance d(σ1 , σ2 ) ∈ E by d(σ1 , σ2 )(x) = d(σ1 (x), σ2 (x)).
Definition 4 For a multiplicity m ∈ M and a process distance e ∈ E we define the deterministic distance approximation from above as

    D(m, e) = 1 − ∏_{x∈V} (1 − e(x))^{m(x)}
To understand the functional D, recall that e(x) is the distance between the processes σ1(x) and σ2(x). In other words, processes σ1(x) and σ2(x) disagree by e(x) on their behavior. Hence, σ1(x) and σ2(x) agree by 1 − e(x). Thus, m(x) copies of σ1(x) and m(x) copies of σ2(x) agree by at least ∏_{x∈V} (1 − e(x))^{m(x)}, and disagree by at most 1 − ∏_{x∈V} (1 − e(x))^{m(x)}.
Example 1 Consider the deterministic process term t = x ∥ x and substitutions σ1(x) = a.a.0 and σ2(x) = a.([0.9]a.0 ⊕ [0.1]0). In this and all following examples we assume that σ1 and σ2 coincide on all other variables for which the substitution is not explicitly defined, i.e. σ1(y) = σ2(y) if x ≠ y in this example. It is clear that d(σ1(x), σ2(x)) = 0.1. Then, d(σ1(t), σ2(t)) = 0.1 · 0.9 + 0.9 · 0.1 + 0.1 · 0.1 = 0.19, which is the likelihood that either the first, the second, or both arguments of σ2(x ∥ x) can perform action a only once. The denotation of t is ⟦t⟧M(x) = 2. Then, D(⟦t⟧M, d(σ1, σ2)) = 1 − (1 − 0.1)² = 0.19.
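The deterministic distance approximation is a one-liner; the sketch below (with multiplicities and process distances encoded as dicts, an encoding of ours) reproduces the numbers of Example 1.

    def D(m, e):
        """D(m, e) = 1 - prod_x (1 - e(x))^m(x); finite multiplicities only."""
        prod = 1.0
        for x, count in m.items():
            prod *= (1.0 - e.get(x, 0.0)) ** count
        return 1.0 - prod

    # Example 1: t = x || x with d(sigma1(x), sigma2(x)) = 0.1
    print(D({"x": 2}, {"x": 0.1}))   # 0.18999999999999995, i.e. 0.19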
The functional D defines an upper bound on the bisimulation distance of deterministic processes.
Proposition 2 Let t ∈ Tdet(ΣPA) be a deterministic process term and σ1, σ2 be closed substitutions. Then d(σ1(t), σ2(t)) ≤ D(⟦t⟧M, d(σ1, σ2)).
The distance d(σ1 , σ2 ) abstracts from the concrete reactive behavior of terms σ1 (x) and σ2 (x). It
is not hard to see that for deterministic process terms without parallel composition the approximation
functional D gives the exact bisimulation distance. However, the parallel composition of processes may
lead to an overapproximation if the bisimulation distance of process instances arises (at least partially)
from reactive behavior on which the processes cannot synchronize.
Example 2 Consider t = x ∥ a.a.0 and substitutions σ1(x) = a.b.0 and σ2(x) = a.([0.9]b.0 ⊕ [0.1]0) with d(σ1(x), σ2(x)) = 0.1. We have d(σ1(t), σ2(t)) = 0 since both σ1(t) and σ2(t) make an a-move to a distribution over parallel compositions, either b.0 ∥ a.0 or 0 ∥ a.0, none of which can proceed. Note that the bisimulation distance between σ1(x) and σ2(x) arises from the difference in performing action b, which cannot synchronize with a. The denotation of t is ⟦t⟧M(x) = 1, which gives in this case an overapproximation of the distance: d(σ1(t), σ2(t)) = 0 < D(⟦t⟧M, d(σ1, σ2)) = 1 − (1 − 0.1) = 0.1. However, for σ′1(x) = a.a.0 and σ′2(x) = a.([0.9]a.0 ⊕ [0.1]0) with d(σ′1(x), σ′2(x)) = 0.1 we get d(σ′1(t), σ′2(t)) = 0.1 = D(⟦t⟧M, d(σ′1, σ′2)).
We remark that the abstraction of the closed substitutions to process distances is intentional and very
much in line with common compositionality criteria that relate the distance of composed processes with
the distance of the process components.
The denotation of a probabilistic process term t ∈ Tprob(ΣPA) is a distribution p ∈ ∆(M) that describes for each multiplicity m ∈ M the likelihood p(m) that for each process variable x ∈ Var(t) exactly m(x) copies of x or some derivative of x are spawned while t evolves. We call p the probabilistic multiplicity of t. Let P be the set of all distributions ∆(M). The denotation of t, notation ⟦t⟧P, is defined by ⟦0⟧P = δm with m = 0, ⟦x⟧P = δm with m = 1x,

    ⟦t1 ∥ t2⟧P(m) = ∑_{m1,m2∈M, m(x)=m1(x)+m2(x) for all x∈V} ⟦t1⟧P(m1) · ⟦t2⟧P(m2),

and ⟦a.⊕_{i=1}^{n}[qi]ti⟧P = ∑_{i=1}^{n} qi ⟦ti⟧P. Notice that ⟦t⟧P = δ_{⟦t⟧M} for all t ∈ Tdet(ΣPA).
For important probabilistic multiplicities we use the same symbols as for multiplicities but it will
always be clear from the context if we refer to probabilistic multiplicities or multiplicities. By 0 ∈ P we
mean the probabilistic multiplicity that gives probability 1 to the multiplicity 0 ∈ M. By nV ∈ P we mean
the probabilistic multiplicity that gives probability 1 to the multiplicity nV ∈ M.
Definition 5 For a probabilistic multiplicity p ∈ P and a process distance e ∈ E we define the probabilistic distance approximation from above as

    P(p, e) = ∑_{m∈M} p(m) · D(m, e)
Example 3 Consider t = a.([0.5](x ∥ x) ⊕ [0.5]0) and substitutions σ1(x) = a.a.0 and σ2(x) = a.([0.9]a.0 ⊕ [0.1]0) with d(σ1(x), σ2(x)) = 0.1. It holds that d(σ1(t), σ2(t)) = 0.5(1 − (1 − 0.1)²). The probabilistic multiplicity of t is ⟦t⟧P(2x) = 0.5 and ⟦t⟧P(0) = 0.5. Then, D(2x, d(σ1, σ2)) = 1 − (1 − 0.1)² and D(0, d(σ1, σ2)) = 0. Hence, we get the probabilistic distance approximation P(⟦t⟧P, d(σ1, σ2)) = 0.5(1 − (1 − 0.1)²).
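A probabilistic multiplicity with finite support can be encoded as a list of (probability, multiplicity) pairs; the short sketch below (encoding and names are ours, with D repeated so the snippet stands alone) reproduces Example 3.

    def D(m, e):
        prod = 1.0
        for x, count in m.items():
            prod *= (1.0 - e.get(x, 0.0)) ** count
        return 1.0 - prod

    def P(p, e):
        """Probabilistic distance approximation: expected value of D(m, e) under p."""
        return sum(prob * D(m, e) for prob, m in p)

    # Example 3: t = a.([0.5](x || x) (+) [0.5]0) with d(sigma1(x), sigma2(x)) = 0.1
    p = [(0.5, {"x": 2}), (0.5, {})]
    print(P(p, {"x": 0.1}))   # 0.5 * (1 - 0.9**2), i.e. about 0.095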
Remark 2 The functional P reveals a very important interaction between probabilistic choice and process replication. Consider again the process term t = a.([0.5](x ∥ x) ⊕ [0.5]0) and any closed substitutions σ1, σ2 with d(σ1(x), σ2(x)) = ε for any ε ∈ [0, 1). In the probabilistic distance approximation P(⟦t⟧P, d(σ1, σ2)) the deterministic distance approximation D(2x, d(σ1, σ2)) = 1 − (1 − ε)² of the synchronous parallel execution x ∥ x of two instances of x is weighted by the likelihood 0.5 of its realization. Hence, P(⟦t⟧P, d(σ1, σ2)) = 0.5(1 − (1 − ε)²). From Bernoulli's inequality, (1/m)(1 − (1 − ε)ⁿ) ≤ ε whenever m ≥ n, we get 0.5(1 − (1 − ε)²) ≤ ε. Hence, the distance between instances of two copies running synchronously in parallel with probability 0.5 is at most the distance between those instances running (non-replicated) with probability 1.0.
Notice that P(⟦t⟧P, d(σ1, σ2)) = D(⟦t⟧M, d(σ1, σ2)) for all t ∈ Tdet(ΣPA). The functional P defines an upper bound on the bisimulation distance of probabilistic processes.
Proposition 3 Let t ∈ Tprob(ΣPA) be a probabilistic process term and σ1, σ2 be closed substitutions. Then d(σ1(t), σ2(t)) ≤ P(⟦t⟧P, d(σ1, σ2)).
Before we can introduce the denotation of nondeterministic probabilistic processes, we need to order the denotations of probabilistic processes. Let π : M → [0, 1] with ∑_{m∈M} π(m) ≤ 1 be a subdistribution over multiplicities. We define the weighting of π as the mapping π̄ : V → R≥0 given by π̄(x) = (1/|π|) ∑_{m∈M} π(m) · m(x) if |π| > 0, with |π| = ∑_{m∈M} π(m) the size of π, and π̄(x) = 0 if |π| = 0. Intuitively, the number of process copies m(x) is weighted by the probability π(m) of realization of that multiplicity. We order probabilistic multiplicities p1 ⊑ p2 if p1 can be decomposed into subdistributions such that each multiplicity in p2 is above some weighted subdistribution of p1. The order is now defined by:

    p1 ⊑ p2 iff there is an ω ∈ Ω(p1, p2) with ω̄(·, m) ⊑ m for all m ∈ M
    m1 ⊑ m2 iff m1(x) ≤ m2(x) for all x ∈ V
The denotation of a nondeterministic probabilistic process term t ∈ 𝕋(ΣPA) is a set of probabilistic multiplicities P ⊆ P that describes by p ∈ P some resolution of the nondeterministic choices in t such that the process evolves as a probabilistic process described by p. We construct a Hoare powerdomain over the probabilistic multiplicities P and use as canonical representation for any set of probabilistic multiplicities P ⊆ P the downward closure defined as ↓P = {p ∈ P | p ⊑ p′ for some p′ ∈ P}. Let D be the set of non-empty downward closed sets of probabilistic multiplicities {P ⊆ P | P ≠ ∅ and ↓P = P}. We use downward closed sets so that D forms a complete lattice with the order defined below (in particular, the order satisfies antisymmetry, cf. Proposition 4). The denotation of t, notation ⟦t⟧, is defined by ⟦0⟧ = {⟦0⟧P}, ⟦x⟧ = ↓{⟦x⟧P}, p ∈ ⟦t1 ∥B t2⟧ iff there are p1 ∈ ⟦t1⟧ and p2 ∈ ⟦t2⟧ such that p ⊑ p′ with p′ defined by p′(m) = ∑_{m1,m2∈M, m(x)=m1(x)+m2(x) for all x∈V} p1(m1) · p2(m2) for all m ∈ M, p ∈ ⟦a.⊕_{i=1}^{n}[qi]ti⟧ iff there are pi ∈ ⟦ti⟧ such that p ⊑ p′ with p′ defined by p′ = ∑_{i=1}^{n} qi · pi, and ⟦t1 + t2⟧ = ⟦t1⟧ ∪ ⟦t2⟧. Notice that ⟦t⟧ = ↓{⟦t⟧P} for all t ∈ Tprob(ΣPA). By 0 ∈ D we mean the singleton set containing the probabilistic multiplicity 0 ∈ P, and by nV ∈ D the downward closure of the singleton set with element nV ∈ P.
Definition 6 For a nondeterministic probabilistic multiplicity P ∈ D and a process distance e ∈ E we define the nondeterministic probabilistic distance approximation from above as

    A(P, e) = sup_{p∈P} P(p, e)
Example 4 Consider the nondeterministic probabilistic process term t = a.([0.5](x ∥ x) ⊕ [0.5]0) + b.y, and substitutions σ1(x) = a.a.0, σ2(x) = a.([0.9]a.0 ⊕ [0.1]0) and σ1(y) = b.b.0, σ2(y) = b.([0.8]b.0 ⊕ [0.2]0). It is clear that d(σ1(x), σ2(x)) = 0.1 and d(σ1(y), σ2(y)) = 0.2. Now, d(σ1(t), σ2(t)) = max{0.5(1 − (1 − 0.1)²), 0.2}. The nondeterministic probabilistic multiplicity of t is ⟦t⟧ = ↓{p1, p2}, for p1(2x) = 0.5, p1(0) = 0.5 and p2(1y) = 1.0. Thus A(⟦t⟧, d(σ1, σ2)) = max(P(p1, d(σ1, σ2)), P(p2, d(σ1, σ2))) = max(0.5(1 − (1 − 0.1)²), 0.2).
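Extending the previous sketches by one more layer, a set of resolutions can be encoded as a list of probabilistic multiplicities; the snippet below (encodings and names are ours, helpers repeated for self-containment) reproduces Example 4.

    def D(m, e):
        prod = 1.0
        for x, count in m.items():
            prod *= (1.0 - e.get(x, 0.0)) ** count
        return 1.0 - prod

    def P(p, e):
        return sum(prob * D(m, e) for prob, m in p)

    def A(P_set, e):
        """Nondeterministic probabilistic approximation: sup over the resolutions in P_set."""
        return max(P(p, e) for p in P_set)

    # Example 4: t = a.([0.5](x || x) (+) [0.5]0) + b.y with e(x) = 0.1, e(y) = 0.2
    p1 = [(0.5, {"x": 2}), (0.5, {})]   # resolution taking the a-branch
    p2 = [(1.0, {"y": 1})]              # resolution taking the b-branch
    print(A([p1, p2], {"x": 0.1, "y": 0.2}))   # max(0.095, 0.2) = 0.2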
Notice that A(⟦t⟧, d(σ1, σ2)) = P(⟦t⟧P, d(σ1, σ2)) for all t ∈ Tprob(ΣPA). Moreover, A(P, e) = A(↓P, e) for any P ⊆ P. The functional A defines an upper bound on the bisimulation distance of nondeterministic probabilistic process terms.
Theorem 1 Let t ∈ 𝕋(ΣPA) be a nondeterministic probabilistic process term and σ1, σ2 be closed substitutions. Then d(σ1(t), σ2(t)) ≤ A(⟦t⟧, d(σ1, σ2)).
Theorem 1 shows that the denotation of a process term is adequate to define an upper bound on the distance of closed instances of that process term. The converse notion is full abstraction, in the sense that d(σ1(t), σ2(t)) = A(⟦t⟧, d(σ1, σ2)) (no over-approximation). As demonstrated in Example 2, achieving this would require the approximation functionals to use, for process variables x ∈ Var(t), not only the bisimulation distance between σ1(x) and σ2(x) but also information about the reactive behavior and the branching. However, for our objective of studying the distance of composed processes in relation to the distance of their components, the bisimulation distance is the right level of abstraction.
D. Gebler & S. Tini
71
We introduce now an order on D that ensures monotonicity of both the approximation functional
A and the functional F introduced in the next section to compute the denotation of arbitrary terms of a
PGSOS PTSS. The order is defined by
P1 ⊑ P2 iff for all p1 ∈ P1 there is a p2 ∈ P2 such that p1 ⊑ p2 .
Proposition 4 (D, ⊑) is a complete lattice.
We order process distances by e1 ⊑ e2 iff e1 (x) ≤ e2 (x) for all x ∈ V. The nondeterministic probabilistic distance approximation A is monotone in both arguments.
Proposition 5 Let P, P′ ∈ D and e, e′ ∈ E. Then A(P, e) ≤ A(P′ , e) if P ⊑ P′ , and A(P, e) ≤ A(P, e′ ) if
e ⊑ e′ .
We will see in the following section that the denotations developed for terms of PPA are sufficient for
terms of any PGSOS PTSS.
4 Distance of composed processes
Now we provide a method to determine the denotation of an arbitrary term. In line with the former section
this gives an upper bound on the bisimulation distance of closed instances of that term. In particular, the
denotation for the term f (x1 , . . . , xr( f ) ) gives an upper bound on the distance of processes composed by
the process combinator f . This allows us in the next section to formulate a simple condition to decide if a
process combinator is uniformly continuous, and hence if we can reason compositionally over processes
combined by that process combinator.
4.1 Operations on process denotations
We start by defining two operations on process denotations that allow us to compute the denotation of
process terms by induction over the term structure. We define the operations first on M and then lift
them to D.
The composition of two processes t1 and t2 which both proceed requires that their multiplicities are summed up (cf. parallel composition in the prior section). We define the summation of multiplicities by:

    (m1 ⊕ m2)(x) = m1(x) + m2(x)

In order to define by structural induction the multiplicity of a term f(t1, . . . , tr(f)), we need an operation that composes the multiplicity denoting the operator f with the multiplicity of ti. We define the pointed multiplication of multiplicities with respect to variable y ∈ V by:

    (m1 ⊙y m2)(x) = m1(y) · m2(x)

Then, the multiplicity of a state term f(t1, . . . , tr(f)) is given by:

    ⟦f(t1, . . . , tr(f))⟧M = ⊕_{i=1}^{r(f)} ⟦f(x1, . . . , xr(f))⟧M ⊙xi ⟦ti⟧M
Example 5 Consider the open term t = a.x ∥ y. From Section 3 we get ⟦a.x⟧M = 1x, ⟦y⟧M = 1y and ⟦x1 ∥ x2⟧M = 1{x1,x2}. Then, we have ⟦t⟧M = (⟦x1 ∥ x2⟧M ⊙x1 ⟦a.x⟧M) ⊕ (⟦x1 ∥ x2⟧M ⊙x2 ⟦y⟧M) = ((1x1 ⊕ 1x2) ⊙x1 1x) ⊕ ((1x1 ⊕ 1x2) ⊙x2 1y) = 1{x,y}.
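Both operations are small dictionary manipulations; the sketch below (a Counter encoding of ours) reproduces Example 5.

    from collections import Counter

    def msum(m1, m2):
        """(m1 (+) m2)(x) = m1(x) + m2(x)"""
        return m1 + m2

    def pointed(m1, y, m2):
        """(m1 (.)_y m2)(x) = m1(y) * m2(x)"""
        return Counter({x: m1.get(y, 0) * v for x, v in m2.items()})

    # Example 5: t = a.x || y
    par = Counter({"x1": 1, "x2": 1})          # [[x1 || x2]]_M
    ax, y = Counter({"x": 1}), Counter({"y": 1})
    print(msum(pointed(par, "x1", ax), pointed(par, "x2", y)))   # Counter({'x': 1, 'y': 1})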
It remains to define the multiplicity of f(x1, . . . , xr(f)) for operators f with an operational semantics defined by some rule r. We define the multiplicity of f(x1, . . . , xr(f)) in terms of the multiplicity of the target of r. Let µ be a derivative of the source variable x in rule r. We use the property (m ⊙µ 1x)(x) = m(µ) in order to express the multiplicity m(µ) as a multiplicity of x. Then, the multiplicity of f(x1, . . . , xr(f)) is defined for any variable x as the summation of the multiplicity of x and its derivatives in the rule target:

    ⟦trgt(r)⟧M ⊕ ( ⊕_{xi −ai,m→ µi,m ∈ pprem(r)} ⟦trgt(r)⟧M ⊙µi,m 1xi )
Example 6 Consider t = f(x) and the following rule r:

    x −a→ µ
    ────────────────
    f(x) −a→ µ ∥ µ

The operator f mimics the action a of its argument, replicates the derivative µ, and proceeds as a process that runs two instances of the derivative in parallel. Consider again the closed substitutions σ1(x) = a.a.0 and σ2(x) = a.([0.9]a.0 ⊕ [0.1]0) with d(σ1(x), σ2(x)) = 0.1. Then, d(σ1(t), σ2(t)) = 1 − (1 − 0.1)². The denotation of the target of r is ⟦trgt(r)⟧M = 2µ. Hence, the denotation of t is 2µ ⊕ (2µ ⊙µ 1x) = 2{µ,x}. Thus, D(⟦t⟧M, d(σ1, σ2)) = 1 − (1 − 0.1)² by d(σ1, σ2)(x) = 0.1 and d(σ1, σ2)(µ) = 0.
Operations op ∈ {⊕, ⊙y} over M lift to D by

    (p1 op p2)(m) = ∑_{m1,m2∈M, m=m1 op m2} p1(m1) · p2(m2)

    p ∈ (P1 op P2) iff there are p1 ∈ P1 and p2 ∈ P2 such that p ⊑ p1 op p2
4.2 Approximating the distance of composed processes
Let (Σ, A, R) be any PGSOS PTSS. We compute the denotation of terms and rules as least fixed point of
a monotone function. Let S = S T × S R with S T = T(Σ) ∪ T(Γ) → D and S R = R → D. A pair (τ, ρ) ∈ S
assigns to each term t ∈ T(Σ) ∪ T(Γ) its denotation τ(t) ∈ D and to each rule r ∈ R its denotation ρ(r) ∈ D.
Let S = (S , ⊑) be a poset with ordering (τ, ρ) ⊑ (τ′ , ρ′ ) iff τ(t) ⊑ τ′ (t) and ρ(r) ⊑ ρ′ (r) for all t ∈ T(Σ) ∪ T(Γ)
and r ∈ R. S forms a complete lattice with least element (⊥T , ⊥R ) defined by ⊥T (t) = ⊥R (r) = 0 for all
t ∈ T(Σ) ∪ T(Γ) and r ∈ R.
Proposition 6 S is a complete lattice.
We assume that for all rules r ∈ R the source variable of argument i is called xi. Let Xr be the set of source variables xi for which r tests the reactive behavior, i.e. xi ∈ Xr iff r has either some positive premise xi −ai,m→ µi,m or some negative premise xi −bi,n↛.
The mapping F : S → S defined in Figure 1 computes iteratively the nondeterministic probabilistic
multiplicities for all terms and rules. As expected, the denotation of a state term f (t1 , . . . , tr( f ) ) is defined
as the application of all rules R f to the denotation of the arguments. However, for distribution terms the
application of the operator needs to consider two peculiarities. First, different states in the support of a
distribution term f (θ1 , . . . , θr( f ) ) may evolve according to different rules of R f .
D. Gebler & S. Tini
73
Function F : S → S is defined by F(τ, ρ) = (τ′, ρ′) with

    τ′(t) = 1x                                        if t = x
    τ′(t) = ⊕_{i=1}^{r(f)} ρf ⊙xi τ(ti)               if t = f(t1, . . . , tr(f)),   where ρf = ∪_{r∈Rf} ρ(r)

    τ′(θ) = 1µ                                        if θ = µ
    τ′(θ) = τ(t)                                      if θ = δ(t)
    τ′(θ) = ∑_{i∈I} qi · τ(θi)                         if θ = ∑_{i∈I} qi θi
    τ′(θ) = ⊕_{i=1}^{r(f)} ρf ⊙xi τ(θi)               if θ = f(θ1, . . . , θr(f)),   where ρf = ↓{ sup_{r∈Rf} sup(sup ρ(r), 1Xr) }

    ρ′(r) = { p ⊕ ( ⊕_{xi −ai,m→ µi,m ∈ pprem(r)} p ⊙µi,m 1xi )  |  p ∈ τ(trgt(r)) }

Figure 1: Computation of the denotation of arbitrary terms
Example 7 Consider the operator f defined by the following rule:

    x −a→ µ
    ────────────────
    f(x) −a→ µ + µ

Operator f replicates the derivative of x and evolves as the alternative composition of both process copies. Consider the closed substitutions σ1(x) = a.([0.9]a.a.0 ⊕ [0.1]0) and σ2(x) = a.([0.9]a.0 ⊕ [0.1]0) with d(σ1(x), σ2(x)) = 0.9. Then, d(σ1(f(x)), σ2(f(x))) = 1 − 0.1² = 0.99. The denotations for the two rules defining the alternative composition (see Section 3) are the downward closed sets with maximal elements 1{x1,µ1} and 1{x2,µ2}. Since sup 1{x1,µ1} = 1{x1,µ1} ∈ P, sup 1{x2,µ2} = 1{x2,µ2} ∈ P, and the tested variables of the two rules r1, r2 of + are Xr1 = {x1} and Xr2 = {x2}, we get ρ+ = ↓{sup(1{x1,µ1}, 1{x2,µ2})} = 1{x1,x2,µ1,µ2} ∈ D. Hence, the denotation for the target of the f-defining rule is ⟦µ + µ⟧ = (1{x1,x2,µ1,µ2} ⊙x1 1µ) ⊕ (1{x1,x2,µ1,µ2} ⊙x2 1µ) = 2µ. Thus, ⟦f(x)⟧ = 2x. Then, D(2x, d(σ1, σ2)) = 0.99.
Second, in the distribution term f (θ1 , . . . , θr( f ) ) the operator f may discriminate states in derivatives
belonging to θi solely on the basis that in some rule r ∈ R f the argument xi ∈ Xr gets tested on the ability
to perform or not perform some action.
Example 8 Consider the operators f and g defined by the following rules:

    x −a→ µ                  y −a→ µ′
    ────────────────         ────────────────
    f(x) −a→ g(µ)            g(y) −a→ δ(0)

Operator f mimics the first move of its argument and then, by operator g, only tests the states in the derivative for their ability to perform action a. Consider first operator g. We get d(σ1(g(y)), σ2(g(y))) = 0
for all closed substitutions σ1, σ2. Clearly, ⟦g(y)⟧ = 0. Consider now t = f(x) and substitutions σ1(x) = a.a.0 and σ2(x) = a.([0.9]a.0 ⊕ [0.1]0) with d(σ1(x), σ2(x)) = 0.1. The distance between σ1(f(x)) and σ2(f(x)) is the distance between the distributions δg(a.0) and 0.9δg(a.0) + 0.1δg(0). From d(g(a.0), g(0)) = 1 we get d(f(σ1(x)), f(σ2(x))) = K(d)(δg(a.0), 0.9δg(a.0) + 0.1δg(0)) = 0.1.
If we ignored the fact that g tests its argument on the reactive behavior, then the denotation of g(µ) would be ⟦g(µ)⟧ = ⟦g(x)⟧ ⊙x 1µ = 0, and the denotation of f(x) would be ⟦g(µ)⟧ ⊕ (⟦g(µ)⟧ ⊙µ 1x) = 0. Then D(0, d(σ1, σ2)) = 0 < 0.1 = d(f(σ1(x)), f(σ2(x))).
Because the operator g tests its argument on the ability to perform action a, it can discriminate instances of the derivative µ in the same way as if the process would progress (without replication). Thus, the denotation of operator g when applied in the rule target is ρg = ↓{sup(sup 0, 1Xrg)} = 1x, as Xrg = {x}. Hence, ⟦g(µ)⟧ = ρg ⊙x 1µ = 1µ. Thus, ⟦f(x)⟧ = 1x. It follows that d(f(σ1(x)), f(σ2(x))) ≤ D(⟦f(x)⟧, d(σ1, σ2)) = 0.1.
To summarize Examples 7 and 8: The nondeterministic probabilistic multiplicity for operator f applied to some distribution term is given by ρf = ↓{ sup_{r∈Rf} sup(sup ρ(r), 1Xr) } (Figure 1). We explain this expression stepwise. For any rule r we define by sup ρ(r) ∈ P the least probabilistic multiplicity which covers all nondeterministic choices represented by the probabilistic multiplicities in ρ(r) ∈ D. By sup(sup ρ(r), 1Xr) ∈ P we capture the case that premises of r only test source variables in Xr on their ability to perform an action (Example 8). By sup_{r∈Rf} sup(sup ρ(r), 1Xr) ∈ P we define the least probabilistic multiplicity which covers all choices of rules r ∈ Rf (Example 7). Finally, by the downward closure ↓{ sup_{r∈Rf} sup(sup ρ(r), 1Xr) } ∈ D we gain the nondeterministic probabilistic multiplicity ρf that can be applied to the distribution term (Figure 1).
Proposition 7 F is order-preserving and upward ω-continuous.
From Propositions 6 and 7 and the Knaster-Tarski fixed point theorem we derive the existence and uniqueness of the least fixed point of F. We denote by (ωT, ωR) the least fixed point of F. We write ⟦t⟧ for ωT(t) and ⟦t⟧τ for τ(t). We call ⟦t⟧ the canonical denotation of t. It is not hard to verify that all denotations presented in Section 3 for PPA are canonical.
A denotation of terms τ ∈ ST is compatible with a distance function d ∈ [0, 1]^{T(Σ)×T(Σ)} if d(σ1(t), σ2(t)) ≤ A(⟦t⟧τ, d(σ1, σ2)) for all t ∈ T(Σ) and all closed substitutions σ1, σ2. Now we can show that the functional B to compute the bisimulation distance and the functional F to compute the denotations preserve compatibility (Proposition 8). A simple inductive argument then allows us to show that the canonical denotation of terms ⟦·⟧ is compatible with the bisimilarity metric d (Theorem 2).
Proposition 8 Let d ∈ [0, 1]^{T(Σ)×T(Σ)} with d ⊑ B(d) = d′ and (τ, ρ) ∈ S with (τ, ρ) ⊑ F(τ, ρ) = (τ′, ρ′). If d is compatible with ⟦·⟧τ, then d′ is compatible with ⟦·⟧τ′.
Theorem 2 Let P be any PGSOS PTSS with d the bisimilarity metric on the associated PTS and ⟦·⟧ the canonical denotation of terms according to P. Then d is compatible with ⟦·⟧.
Proof sketch. Recall that d is the least fixed point of B : [0, 1]^{T(Σ)×T(Σ)} → [0, 1]^{T(Σ)×T(Σ)} defined by B(d)(t, t′) = sup_{a∈A} H(K(d))(der(t, a), der(t′, a)), with H the Hausdorff metric functional (Proposition 1). Let dn = Bⁿ(0) and (τn, ρn) = Fⁿ(⊥T, ⊥R). Proposition 8 shows that dn is compatible with ⟦·⟧τn, by reasoning inductively over the transitions specified by the rules. Monotonicity and upward ω-continuity (Proposition 7) ensure that this property is also preserved in the limit.
5 Compositional Reasoning
In order to reason compositionally over probabilistic processes it is enough if the distance of the composed processes can be related to the distance of their parts. This property is known as uniform continuity.
D. Gebler & S. Tini
75
In essence, compositional reasoning over probabilistic processes is possible whenever a small variance
in the behavior of the parts leads to a bounded small variance in the behavior of the composed processes.
Technically this boils down to the existence of a modulus of continuity. Uniform continuity generalizes
earlier proposals of non-expansiveness [15] and non-extensiveness [1].
Definition 7 (Modulus of continuity) Let f ∈ Σ be any process combinator. A mapping z : [0, 1]^{r(f)} → [0, 1] is a modulus of continuity for operator f if z(0, . . . , 0) = 0, z is continuous at (0, . . . , 0), and

    d(f(t1, . . . , tr(f)), f(t1′, . . . , tr(f)′)) ≤ z(d(t1, t1′), . . . , d(tr(f), tr(f)′))

for all closed terms ti, ti′ ∈ T(Σ).
Definition 8 (Uniformly continuous operator) A process combinator f ∈ Σ is uniformly continuous if
f admits a modulus of continuity.
Intuitively, a continuous binary operator f ensures that for any non-zero bisimulation distance ǫ
(understood as the admissible tolerance from the operational behavior of the composed process f (t1 , t2 ))
there are non-zero bisimulation distances δ1 and δ2 (understood as the admissible tolerances from the
operational behavior of the processes t1 and t2 , respectively) such that the distance between the composed
processes f (t1 , t2 ) and f (t1′ , t2′ ) is at most ǫ = z(δ1 , δ2 ) whenever the component t1′ (resp. t2′ ) is in distance
of at most δ1 from t1 (resp. at most δ2 from t2 ). We consider the uniform notion of continuity because
we aim for universal compositionality guarantees.
The denotation of f(x1, . . . , xr(f)) allows us to derive a candidate modulus of continuity for the operator f as follows.
Definition 9 (Derived modulus of continuity) Let P be any PGSOS PTSS. For any operator f ∈ Σ we define

    z_f(ε1, . . . , εr(f)) = min( ∑_{i=1}^{r(f)} m_f(xi) · εi , 1 )

with m_f = sup ⟦f(x1, . . . , xr(f))⟧.
Trivially, we have z_f(0, . . . , 0) = 0 and d(f(t1, . . . , tr(f)), f(t1′, . . . , tr(f)′)) ≤ z_f(d(t1, t1′), . . . , d(tr(f), tr(f)′)) for all closed terms ti, ti′ ∈ T(Σ) by Theorem 2. However, z_f is continuous at (0, . . . , 0) only if the multiplicities in the denotation ⟦f(x1, . . . , xr(f))⟧ assign to each variable a finite value.
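For a finitely bounded denotation, the derived modulus of continuity is immediate to evaluate. The sketch below (our own encoding; the tuple (2,) plays the role of m_f for the unary operator of Example 6, whose denotation is 2x) illustrates the bound.

    def z_f(m_f, epsilons):
        """Derived modulus of continuity: min(sum_i m_f(x_i) * eps_i, 1)."""
        return min(sum(m_f[i] * e for i, e in enumerate(epsilons)), 1.0)

    print(z_f((2,), (0.1,)))   # 0.2, an upper bound on the distance 0.19 computed in Example 6
    print(z_f((2,), (0.6,)))   # capped at 1.0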
Theorem 3 Let P be any PGSOS PTSS. A process combinator f ∈ Σ is uniformly continuous if

    ⟦f(x1, . . . , xr(f))⟧ ⊑ n{x1,...,xr(f)}

for some n ∈ N.
Example 9 We show that unbounded recursion operators may fail to be uniformly continuous. We consider the replication operator of the π-calculus, specified by the rule:

    x −a→ µ
    ─────────────────────
    !x −a→ µ ∥ δ(!x)

The replication operator is not uniformly continuous since no z with d(!t, !t′) ≤ z(d(t, t′)) and z(0) = 0 can be continuous at 0, because z(δ) = 1 for any δ > 0. The denotation ⟦!x⟧ = ∞x shows that the argument x is replicated infinitely often. Hence, the replication operator is not uniformly continuous.
Moreover, for uniformly continuous operators f the function z_f is a modulus of continuity.
Theorem 4 Let P be any PGSOS PTSS. A uniformly continuous process combinator f ∈ Σ satisfies

    d(f(t1, . . . , tr(f)), f(t1′, . . . , tr(f)′)) ≤ z_f(d(t1, t1′), . . . , d(tr(f), tr(f)′))

for all closed terms ti, ti′ ∈ T(Σ).
Conversely, for a given modulus of continuity (understood as the specification of some process combinator), we can derive the maximal process replication allowed for this operator.
Definition 10 (Derived multiplicity) Let z : [0, 1]ⁿ → [0, 1] be a mapping with z(0, . . . , 0) = 0 and z continuous at (0, . . . , 0). Let m : V → R∞≥0 be defined by

    m = sup { m′ : V → R∞≥0 | ∀e ∈ E. ∑_{i=1}^{n} m′(xi) · e(xi) ≤ z(e(x1), . . . , e(xn)) }

where mappings m1, m2 : V → R∞≥0 are ordered by m1 ⊑ m2 iff m1(x) ≤ m2(x) for all x ∈ V. We call m the derived multiplicity of z.
Theorem 5 Let P be any PGSOS PTSS, z : [0, 1]ⁿ → [0, 1] be a mapping with z(0, . . . , 0) = 0 and z continuous at (0, . . . , 0), and m the derived multiplicity of z. Then, an operator f ∈ Σ with r(f) = n has z as modulus of continuity if

    sup ⟦f(x1, . . . , xr(f))⟧ ⊑ m.
To conclude, the methods provided in Sections 3 and 4 to compute an upper bound on the distance between instances of the term f(x1, . . . , xr(f)) can be used to derive the individual compositionality property of operator f, given by its modulus of continuity z_f. Note that z_f depends on all those rules which define operators of processes to which an instance of f(x1, . . . , xr(f)) may evolve. Traditional rule formats define syntactic criteria on single rules in order to guarantee a desired compositionality property of the specified operator. In contrast, our approach derives the compositionality property of an operator from the syntactic properties of those rules which define the operational behavior of processes composed by that operator.
6 Conclusion and Future Work
We presented a method to approximate the bisimulation distance of arbitrary process terms (Theorems 1 and 2). This allows us to decide for any given PTSS which operators allow for compositional metric reasoning, i.e. which operators are uniformly continuous (Theorem 3). Moreover, our method allows us to compute for any given PTSS a modulus of continuity of each uniformly continuous operator (Theorem 4). Additionally, for any given modulus of continuity (understood as the required compositionality property of some operator) we provide a sufficient condition to decide whether an operator satisfies that modulus of continuity (Theorem 5). The condition characterizes the maximal number of times that processes combined by the operator may be replicated during their evolution in order to guarantee the modulus of continuity.
We will continue this line of research as follows. First, we will investigate the compositionality
of process combinators with respect to convex bisimulation metric [12], discounted bisimulation metric [15], and generalized bisimulation metric [8]. Second, we will explore compositionality with respect
D. Gebler & S. Tini
77
to behavioral pseudometrics based on trace semantics [11] and testing semantics. Finally, we will investigate how the denotational approach to decide the compositionality properties of operators relates to the
logical approach to derive rule formats of [4, 22]. Besides this general line, we want to investigate how
our structural syntactic approach to compositionality relates to the algorithmic computational approach
in [1].
References
[1] Giorgio Bacci, Giovanni Bacci, Kim G Larsen & Radu Mardare (2013): Computing Behavioral Distances,
Compositionally. In: Proc. MFCS’13, Springer, pp. 74–85, doi:10.1007/978-3-642-40313-2_9.
[2] Falk Bartels (2002): GSOS for probabilistic transition systems. In: Proc. CMCS’02, ENTCS 65, Elsevier,
pp. 29–53, doi:10.1016/S1571-0661(04)80358-X.
[3] Falk Bartels (2004): On Generalised Coinduction and Probabilistic Specification Formats. Ph.D. thesis, VU
University Amsterdam.
[4] Bard Bloom, Wan Fokkink & Rob J. van Glabbeek (2004): Precongruence formats for decorated trace
semantics. ACM TOCL 5, pp. 26–78, doi:10.1145/963927.963929.
[5] Bard Bloom, Sorin Istrail & Albert R. Meyer (1995): Bisimulation can’t be traced. J. ACM 42, pp. 232–268,
doi:10.1145/200836.200876.
[6] Franck van Breugel & James Worrell (2005): A Behavioural Pseudometric for Probabilistic Transition Systems. TCS 331(1), pp. 115–142, doi:10.1016/j.tcs.2004.09.035.
[7] Franck van Breugel & James Worrell (2006): Approximating and computing behavioural distances in probabilistic transition systems. TCS 360(1), pp. 373–385, doi:10.1016/j.tcs.2006.05.021.
[8] Konstantinos Chatzikokolakis, Daniel Gebler, Catuscia Palamidessi & Lili Xu: Generalized bisimulation
metrics. In: Proc. CONCUR’14, LNCS, Springer, To appear.
[9] Pedro R. D’Argenio, Daniel Gebler & Matias David Lee (2014): Axiomatizing Bisimulation Equivalences
and Metrics from Probabilistic SOS Rules. In: Proc. FoSSaCS’14, LNCS 8412, Springer, pp. 289–303,
doi:10.1007/978-3-642-54830-7_19.
[10] Pedro R. D’Argenio & Matias David Lee (2012): Probabilistic Transition System Specification: Congruence
and Full Abstraction of Bisimulation. In: Proc. FoSSaCS’12, LNCS 7213, Springer, pp. 452–466, doi:10.
1007/978-3-642-28729-9_30.
[11] L. De Alfaro, M. Faella & M. Stoelinga (2004): Linear and Branching Metrics for Quantitative Transition
Systems. In: Proc. ICALP’04, LNCS 3142, Springer, pp. 97–109, doi:10.1007/978-3-540-27836-8_11.
[12] L. De Alfaro, R. Majumdar, V. Raman & M. Stoelinga (2007): Game relations and metrics. In: Proc. LICS’07, IEEE, pp. 99–108, doi:10.1109/LICS.2007.22.
[13] Yuxin Deng, Tom Chothia, Catuscia Palamidessi & Jun Pang (2006): Metrics for Action-labelled Quantitative
Transition Systems. ENTCS 153(2), pp. 79–96, doi:10.1016/j.entcs.2005.10.033.
[14] Yuxin Deng & Wenjie Du (2011): Logical, Metric, and Algorithmic Characterisations of Probabilistic Bisimulation. Technical Report CMU-CS-11-110, CMU.
[15] Josée Desharnais, Vineet Gupta, Radha Jagadeesan & Prakash Panangaden (2004): Metrics for Labelled
Markov Processes. TCS 318(3), pp. 323–354, doi:10.1016/j.tcs.2003.09.013.
[16] Josée Desharnais, Radha Jagadeesan, Vineet Gupta & Prakash Panangaden (2002): The Metric Analogue of
Weak Bisimulation for Probabilistic Processes. In: Proc. LICS’02, IEEE, pp. 413–422, doi:10.1109/LICS.
2002.1029849.
[17] Josée Desharnais, Francois Laviolette & Mathieu Tracol (2008): Approximate Analysis of Probabilistic Processes: Logic, Simulation and Games. In: Proc. QEST’08, IEEE, pp. 264–273, doi:10.1109/QEST.2008.
42.
[18] Wan Fokkink, Rob J. van Glabbeek & Paulien de Wind (2006): Compositionality of Hennessy-Milner logic
by structural operational semantics. TCS 354, pp. 421–440, doi:10.1016/j.tcs.2005.11.035.
[19] Wan Fokkink, Rob J. van Glabbeek & Paulien de Wind (2006): Divide and Congruence Applied to η-Bisimulation. ENTCS 156, pp. 97–113, doi:10.1016/j.entcs.2005.10.029.
[20] Wan Fokkink, Rob J. van Glabbeek & Paulien de Wind (2006): Divide and Congruence: From Decomposition of Modalities to Preservation of Branching Bisimulation. In: Proc. FMCO’05, LNCS 4111, Springer,
pp. 195–218, doi:10.1007/11804192_10.
[21] Wan Fokkink, Rob J. van Glabbeek & Paulien de Wind (2012): Divide and congruence: From decomposition
of modal formulas to preservation of branching and η-bisimilarity. I&C 214, pp. 59–85, doi:10.1016/j.
ic.2011.10.011.
[22] Maciej Gazda & Wan Fokkink (2010): Congruence from the Operator’s Point of View: Compositionality
Requirements on Process Semantics. In: Proc. SOS’10, EPTCS 32, pp. 15–25, doi:10.4204/EPTCS.32.2.
[23] Daniel Gebler & Wan Fokkink (2012): Compositionality of Probabilistic Hennessy-Milner Logic through
Structural Operational Semantics. In: Proc. CONCUR’12, LNCS 7454, Springer, pp. 395–409, doi:10.
1007/978-3-642-32940-1_28.
[24] Daniel Gebler & Simone Tini (2013): Compositionality of Approximate Bisimulation for Probabilistic Systems. In: Proc. EXPRESS/SOS’13, EPTCS 120, OPA, pp. 32–46, doi:10.4204/EPTCS.120.4.
[25] Alessandro Giacalone, Chi-Chang Jou & Scott A. Smolka (1990): Algebraic Reasoning for Probabilistic
Concurrent Systems. In: Proc. IFIP TC2 Working Conf. on Prog. Concepts and Methods, pp. 443–458.
[26] Jan Friso Groote (1993): Transition System Specifications with Negative Premises. TCS 118(2), pp. 263–299,
doi:10.1016/0304-3975(93)90111-6.
[27] Ruggero Lanotte & Simone Tini (2009): Probabilistic Bisimulation as a Congruence. ACM TOCL 10, pp.
1–48, doi:10.1145/1462179.1462181.
[28] Kim G. Larsen & Arne Skou (1991): Bisimulation Through Probabilistic Testing. I&C 94, pp. 1–28, doi:10.
1016/0890-5401(91)90030-6.
[29] Matias David Lee, Daniel Gebler & Pedro R. D’Argenio (2012): Tree Rules in Probabilistic Transition
System Specifications with Negative and Quantitative Premises. In: Proc. EXPRESS/SOS’12, EPTCS 89,
pp. 115–130, doi:10.4204/EPTCS.89.9.
[30] Roberto Segala (1995): Modeling and Verification of Randomized Distributed Real-Time Systems. Ph.D.
thesis, MIT.
[31] Simone Tini (2010): Non-expansive ǫ-bisimulations for Probabilistic Processes. TCS 411, pp. 2202–2222,
doi:10.1016/j.tcs.2010.01.027.
[32] Mathieu Tracol, Josée Desharnais & Abir Zhioua (2011): Computing Distances between Probabilistic Automata. In: Proc. QAPL’11, EPTCS 57, pp. 148–162, doi:10.4204/EPTCS.57.11.
[33] Mingsheng Ying (2002): Bisimulation indexes and their applications. TCS 275(1), pp. 1–68, doi:10.1016/
S0304-3975(01)00124-4.
Speculative Staging for Interpreter Optimization
Stefan Brunthaler
University of California, Irvine
[email protected]
arXiv:1310.2300v1 [] 8 Oct 2013
Abstract
Interpreters have a bad reputation for having lower performance than just-in-time compilers. We present a new way of building high performance interpreters that is particularly effective for executing dynamically typed programming languages. The key idea is to combine speculative staging of optimized interpreter instructions with a novel technique of incrementally and iteratively concerting them at run-time. This paper introduces the concepts behind deriving optimized instructions from existing interpreter instructions—incrementally peeling off layers of complexity. When compiling the interpreter, these optimized derivatives will be compiled along with the original interpreter instructions. Therefore, our technique is portable by construction since it leverages the existing compiler’s backend. At run-time we use instruction substitution from the interpreter’s original and expensive instructions to optimized instruction derivatives to speed up execution.
Our technique unites high performance with the simplicity and portability of interpreters—we report that our optimization makes the CPython interpreter up to more than four times faster, where our interpreter closes the gap between and sometimes even outperforms PyPy’s just-in-time compiler.

General Terms Design, Languages, Performance

Keywords Interpreter, Optimization, Speculative Staging, Partial Evaluation, Quickening, Python

1. Introduction

The problem with interpreters for dynamically typed programming languages is that they are slow. The fundamental lack of performance is due to the following reasons. First, their implementation is simple and does not perform known interpreter optimizations, such as threaded code [3, 4, 14] or superinstructions [19, 20, 39]. Second, even if the interpreters apply these known techniques, their performance potential is severely constrained by expensive interpreter instruction implementations [6].

Unfortunately, performance-conscious implementers suffer from having only a limited set of options at their disposal to solve this problem. For peak performance, the current best practice is to leverage results from dynamic compilation. But implementing a just-in-time, or JIT, compiler is riddled with many problems, e.g., a lot of tricky details, hard to debug, and substantial implementation effort. An alternative route is to explore the area of purely interpretative optimization instead. These are optimizations that preserve innate interpreter characteristics, such as ease-of-implementation and portability, while offering important speedups. Prior work in this area already reports the potential of doubling the execution performance [7, 8]. As a result, investigating a general and principled strategy for optimizing high abstraction-level interpreters is particularly warranted.

Interpreting a dynamically typed programming language, such as JavaScript, Python, or Ruby, has its own challenges. Frequently, these interpreters use one or a combination of the following features:

• dynamic typing to select type-specific operations,
• reference counting for memory management, and
• modifying boxed data object representations.

To cope with these features, interpreter instructions naturally become expensive in terms of assembly instructions required to implement their semantics. Looking at successful research in just-in-time compilation, we know that in order to achieve substantial performance improvements, we need to reduce the complexity of the interpreter instructions’ implementation. Put differently, we need to remove the overhead introduced by dynamic typing, reference counting, and operating on boxed data objects.

In this paper we combine ideas from staged compilation with partial evaluation and interpreter optimization to devise a general framework for interpreter optimization. From staged compilation, we take the idea that optimizations can spread out across several stages. From partial evaluation, we take inspiration from the Futamura projections to optimize programs. From interpreter optimization, we model a general theory of continuous optimization based on rewriting instructions. These ideas form the core of our framework, which is purely interpretative, i.e., it offers ease-of-implementation and portability while avoiding dynamic code generation, and delivers high performance. As a result, close to three decades after Deutsch and Schiffman [16] described the ideas of what would eventually become the major field of just-in-time compilation, our framework presents itself as an alternative to implementing JIT compilers.
Summing up, this paper makes the following contributions.
work at compile time, link-time, load-time, or finally at runtime. The problem with staged optimizations for high-level
languages such as JavaScript, Python, and Ruby is that they
require at least partial knowledge about the program. But, as
the example of the sum function illustrates, only at run-time
we will actually identify the concrete type τ for parameters a
and b.
For our target high-level languages and their interpreters,
staged compilation is not possible, primarily due to two
reasons. First, none of these interpreters have a JIT compiler,
i.e., preventing staged partial optimizations. Second, the
stages of staged compilation and interpreters are separate.
The traditional stages listed above need to be partitioned into
stages that we need to assemble the interpreter (viz. compiletime, link-time, and load-time), and separate stages of running
the program: at interpreter run-time, it compiles, potentially
links, loads and runs hosted programs.
• We introduce a general theory for optimizing interpreter
instructions that relies on speculatively staging of optimized interpreter instructions at interpreter compile-time
and subsequent concerting at run-time (see Section 3).
• We present a principled procedure to derive optimized
interpreter instructions via partial evaluation. Here, speculation allows us to remove previous approaches’ requirement to specialize towards a specific program (see Section 3.2).
• We apply the general theory to the Python interpreter
and describe the relevant implementation details (see Section 4).
• We provide results of a careful and detailed evaluation
of our Python based implementation (see Section 5), and
report substantial speedups of up to more than four times
faster than the CPython implementation for the spectralnorm benchmark. For this benchmark our technique outperforms the current state-of-the-art JIT compiler, PyPy
1.9, by 14%.
2.  Example

In this section we walk through a simple example that illustrates how interpreters address—or rather fail to address—the challenge of efficiently executing a high-level language. The following listing shows a Python function sum that "adds" its parameters and returns the result of this operation.

def sum(a, b):
    return a + b

In fact, this code does not merely "add" its operands: depending on the actual types of the parameters a and b, the interpreter will select a matching operation. In Python, this means that it will either concatenate strings, or perform arithmetic addition on either integers, floating point numbers, or complex numbers; or the interpreter could even invoke custom Python code—which is possible due to Python's support for ad-hoc polymorphism.

In 1984, Deutsch and Schiffman [16] report that there exists a "dynamic locality of type usage," which enables speculative optimization of code for any arbitrary but fixed and observed type τ. Subsequent research into dynamic compilation capitalizes on this observed locality by speculatively optimizing code using type feedback [26, 27]. From their very beginning, these dynamic compilers—or just-in-time compilers as they are referred to frequently—had to operate within a superimposed latency time. Put differently, dynamic compilers traditionally sacrifice known complex optimizations for predictable compilation times.

Staged compilation provides a solution to the latency problem of JIT compilers by distributing work needed to assemble optimized code among separate stages. For example, a staged compiler might break up an optimization to perform work at compile time, link-time, load-time, or finally at run-time.

3.  Speculative Staged Interpreter Optimization

The previous section sketches the problem of performing traditional staged optimizations for interpreters. In this section we are first going to dissect interpreter performance to identify bottlenecks. Next, we are going to describe which steps are necessary to formalize the problem, and subsequently use speculative staged interpreter optimizations to conquer them and achieve high performance.

3.1  Dissecting Example Performance Obstacles

Python's compiler will emit the following sequence of interpreter instructions, often called bytecodes, when compiling the sum function (ignoring argument bytes for the LOAD_FAST instructions):

LOAD_FAST
LOAD_FAST
BINARY_ADD
RETURN_VALUE

We see that the interpreter emits untyped, polymorphic instructions that rely on dynamic typing to actually select the operation. Furthermore, we see that Python's virtual machine interpreter implements a stack to pass operand data between instructions.
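The listing can be reproduced on a stock CPython build with the standard library's dis module; opcode names and offsets may differ slightly across Python versions, so treat the snippet as a sketch rather than verbatim output.

import dis

def sum(a, b):
    return a + b

# Prints one line per bytecode, essentially:
# LOAD_FAST a, LOAD_FAST b, BINARY_ADD, RETURN_VALUE
dis.dis(sum)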
Let us consider the application sum(3, 4), i.e., sum is
used with the specific type Int×Int → Int. In this case, the
BINARY_ADD instruction will check operand types and select
the proper operation for the types. More precisely, assuming absence of ad-hoc polymorphism, the Python interpreter
will identify that both Int operands are represented by a C
struct called PyLong_Type. Next, the interpreter will determine that the operation to invoke is PyLong_Type->tp_as_number->nb_add, which points to the long_add function. This operation implementation function will then unbox
operand data, perform the actual integer arithmetic addition,
and box the result. In addition, necessary reference counting
operations enclose these operations, i.e., we need to decrease
the reference count of the arguments, and increase the reference count of the result. Both (un-)boxing and reference count adjustments add to the execution overhead of the interpreter.
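To make the hidden cost concrete, the following plain-Python sketch models roughly what BINARY_ADD does on every execution; the helper names (boxed_add, unbox_long, box_long) are illustrative only and are not CPython API functions.

# Illustrative model of the work behind BINARY_ADD; not CPython code.
def box_long(value):
    return {"type": "long", "value": value}      # PyLong_FromLong-style boxing

def unbox_long(obj):
    return obj["value"]                           # PyLong_AS_LONG-style unboxing

def boxed_add(v, w):
    # 1. inspect the dynamic types of both operands
    if v["type"] == "long" and w["type"] == "long":
        # 2. unbox operands, 3. native addition, 4. box the result
        return box_long(unbox_long(v) + unbox_long(w))
    raise TypeError("unsupported operand types")

result = boxed_add(box_long(3), box_long(4))      # models sum(3, 4)
assert unbox_long(result) == 7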
Contrary to the interpreter, a state-of-the-art JIT compiler would generate something like this:

1  movq %rax, -8(%rsp)
2  movq %rbx, -16(%rsp)
3  addq %rax, %rbx
4  ret

The first two lines assume a certain stack layout to where to find operands a and b, both of which we assume to be unboxed. Hence, we can use the native machine addition operation (line 3) to perform arithmetic addition and return the operation result in %rax.

Bridging the gap between the high abstraction-level representation of computation in Python bytecodes and the low abstraction-level representation of native machine assembly instructions holds the key for improving interpreter performance. To that end, we classify both separate instruction sets accordingly:

• Python's instruction set is untyped and operates exclusively on boxed objects.
• Native-machine assembly instructions are typed and directly modify native machine data.

An efficient, low-level interpreter instruction set allows us to represent the sum function's computation for our assumed type in the following way:

LOAD INT
LOAD INT
INT ADD
RETURN INT

In this lower-level instruction set the instructions are typed, which allows using a different operand data passing convention, and directly modifying unboxed data—essentially operating at the same semantic level as the assembly instructions shown above; disregarding the different architectures, i.e., register vs. stack.

3.2  Systematic Derivation

The previous section informally discusses the goal of our speculatively staged interpreter optimization: optimizing from high to low abstraction-level instructions. This section systematically derives required components for enabling interpreters to use this optimization technique. In contrast with staged compilation, staged interpreter optimization is purely interpretative, i.e., it avoids dynamic code generation altogether. The key idea is that we:

• stage and compile new interpreter instructions at interpreter compile-time,
• concert optimized instruction sequences at run-time by the interpreter.

The staging process involves the ahead-of-time compiler that is used to compile the interpreter. Therefore, this process exercises the compiler's backend to have portable code generation and furthermore allows the interpreter implementation to remain simple. The whole process is, however, speculative: only by actually interpreting a program, we know for sure which optimized interpreter instructions are required. As a result, we restrict ourselves to generate optimized code only for instructions that have a high likelihood of being used.

To assemble optimized instruction sequences at run-time, we rely on a technique known as quickening [34]. Quickening means that we replace instructions with optimized derivatives of the exact same instruction at run-time. Prior work focuses on using only one level of quickening, i.e., replacing one instruction with another instruction derivative. In this work, we introduce multi-level quickening, i.e., the process of iteratively and incrementally rewriting interpreter instructions to ever more specialized derivatives of the generic instruction.
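As a rough illustration of this two-step idea in plain Python (the table and the INCA_* derivative names below are illustrative assumptions, not CPython internals): the derivatives exist before any program runs, and the run-time part merely selects among them.

# Staged at interpreter compile-time: one derivative per likely operand type.
DERIVATIVES = {
    ("BINARY_ADD", float): "INCA_FLOAT_ADD",
    ("BINARY_ADD", int):   "INCA_LONG_ADD",
    ("BINARY_ADD", str):   "INCA_UNICODE_ADD",
}

def concert(instruction, observed_type):
    # Run-time concerting: pick a staged derivative if one was generated,
    # otherwise keep the generic, type-dispatching instruction.
    return DERIVATIVES.get((instruction, observed_type), instruction)

assert concert("BINARY_ADD", float) == "INCA_FLOAT_ADD"
assert concert("BINARY_ADD", dict) == "BINARY_ADD"   # nothing staged for dict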
3.2.1  Prerequisites

In this section we present a simplified substrate of a dynamically typed programming language interpreter, where we illustrate each of the required optimization steps.

data Value = VBool Bool
           | VInt Int
           | VFloat Float
           | VString String

type Stack = [Value]
type Instr = Stack → Stack

We use Value to model a small set of types for our interpreter. The operand stack Stack holds intermediate values, and an instruction Instr is a function modifying the operand stack Stack. Consequently, the following implementation of the interpreter keeps evaluating instructions until the list of instructions is empty, whereupon it returns the top of the operand stack element as its result:

interp :: Stack → [Instr] → Value
interp (x:xs) [] = x
interp s (i:is) =
  let
    s' = i s
  in
    interp s' is

Using interp as the interpreter, we can turn our attention to the actual interpreter instructions. The following example illustrates a generic implementation of a binary interpreter instruction, which we can instantiate, for example for an arithmetic add operation:

 1  binaryOp :: (Value → Value → Value) → Stack → Stack
 2  binaryOp f s =
 3    let
 4      (x, s')  = pop s
 5      (y, s'') = pop s'
 6    in
 7      (f x y) : s''
 8
 9  addOp :: Stack → Stack
10  addOp = binaryOp dtAdd
11
12  dtAdd :: Value → Value → Value
13  dtAdd (VInt x)    (VInt y)    = (VInt    (x + y))
14  dtAdd (VFloat x)  (VFloat y)  = (VFloat  (x + y))
15  dtAdd (VString x) (VString y) = (VString (x ++ y))
16  dtAdd (VBool x)   (VBool y)   = (VBool   (x && y))
17  dtAdd ...

The generic implementation binaryOp shows the operand stack modifications all binary operations need to follow. The addOp function implements the actual resolving logic of the dynamic types of Values via pattern matching starting on line 13 in function dtAdd.

3.2.2  First-level Quickening: Type Feedback

Definition 1 (Instruction Derivative). An instruction I′ is an instruction derivative of an instruction I, if and only if it implements the identical semantics for an arbitrary but fixed subset of I's functionality.

Example 1. In our example interpreter interp, the addOp instruction has type Value × Value → Value. An instruction derivative intAdd would implement the subset case of integer arithmetic addition only, i.e., where operands have type VInt. Analogous cases are for all combinations of possible types, e.g., for string concatenation (VString).

To obtain all possible instruction derivatives I′ for any given interpreter instruction I, we rely on insights obtained by partial evaluation [30]. The first Futamura projection [23] states that we can derive a compiled version of an interpreted program Π by partially evaluating its corresponding interpreter interp written in language L:

    compiled_Π := ⟦mix⟧_L [interp, Π]                        (1)

In our scenario, this is not particularly helpful, because we do not know program Π a priori. The second Futamura projection tells us how to derive a compiler by applying mix to itself with the interpreter as its input:

    compiler := ⟦mix⟧_L [mix, interp]                        (2)

By using the interpreter interp as its input program, the second Futamura projection eliminates the dependency on the input program Π. However, for a dynamically typed programming language, a compiler derived by applying the second Futamura projection without further optimizations is unlikely to emit efficient code—because it lacks information on the types [43].

Our idea is to combine these two Futamura projections in a novel way:

    ∀ I ∈ interp : I′_τ := ⟦mix⟧_L [I, τ]                    (3)

That means that for all instructions I of an interpreter interp, we derive an optimized instruction derivative I′ specialized to a type τ by partially evaluating an instruction I with the type τ. Hence, we speculate on the likelihood of the interpreter operating on data of type τ but eliminate the need to have a priori knowledge about the program Π. Using this derivation step, we effectively create a typed interpreter instruction set for an instruction set originally only consisting of untyped interpreter instructions.

To preserve semantics, we need to add a guard statement to I′ that ensures that the actual operand types match up with the specialized ones. If the operand types do not match, we need to take corrective measures and redirect control back to I. For example, we get the optimized derivative intAdd from dtAdd by fixing the operand type to VInt:

    intAdd := ⟦mix⟧_L [dtAdd, VInt]                          (4)

1  intAdd :: Value → Value → Value
2  intAdd (VInt x) (VInt y) = (VInt (x + y))
3  intAdd x y = dtAdd x y

The last line in our code example acts as a guard statement, since it enables the interpreter to execute the type-generic dtAdd instruction whenever the speculation of intAdd fails.

The interpreter can now capitalize on the "dynamic locality of type usage" [16] and speculatively eliminate the overhead of dynamic typing by rewriting an instruction I at position p of a program Π to its optimized derivative I′:

    (Π)[I′/I_p]                                              (5)

It is worth noting that this rewriting, or quickening as it is commonly known, is purely interpretative, i.e., it does not require any dynamic code generation—simply updating the interpreted program suffices:

quicken :: [Instr] → Int → Instr → [Instr]
quicken Π p derivative =
  let
    (x, y:ys) = splitAt p Π
    r = x ++ derivative : ys
  in
    r
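A minimal Python sketch of this first-level quickening, assuming a program represented as a mutable list of instruction functions (all names are illustrative; the actual interpreter rewrites opcodes in the bytecode array):

def generic_add(program, pc, stack):
    # Generic, type-dispatching addition; also the fallback target.
    b, a = stack.pop(), stack.pop()
    if isinstance(a, float) and isinstance(b, float):
        program[pc] = float_add          # quicken this occurrence
    stack.append(a + b)

def float_add(program, pc, stack):
    b, a = stack.pop(), stack.pop()
    if not (isinstance(a, float) and isinstance(b, float)):
        program[pc] = generic_add        # guard failed: undo the speculation
        stack.extend([a, b])
        return generic_add(program, pc, stack)
    stack.append(a + b)

program, stack = [generic_add], [1.5, 2.5]
program[0](program, 0, stack)            # first execution quickens the instruction
assert program[0] is float_add and stack == [4.0]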
3.2.3  Second-level Quickening: Type Propagation

Taking a second look at the optimized derivative instruction intAdd shows that it still contains residual overhead: unboxing the VInt operands and boxing the VInt result (cf. line three). It turns out that we can do substantially better by identifying complete expressions that operate on the same type and subsequently optimizing the whole sequence.

For identifying a sequence of instructions that operate on the same type, we leverage the type information collected at run-time via the first-level quickening step described in the previous section. During interpretation, the interpreter will optimize programs it executes on the fly and single instruction occurrences will carry type information. To expand this information to bigger sequences, we use an abstract interpreter that propagates the collected type information. Once we have identified a complete sequence of instructions operating on the same type, we then quicken the complete sequence to another set of optimized derivatives that directly modify unboxed data. Since these instructions operate directly on unboxed data, we need to take care of several pending issues.
First, operating on unboxed data requires modifying the operand stack data passing convention. The original instruction set, as well as the optimized typed instruction set, operates on boxed objects, i.e., all elements on the operand stack are just pointers to the heap, having the type of one machine word (uint64 on modern 64-bit architectures). If we use unboxed data elements, such as native machine integers and floats, we need to ensure that all instructions follow the same operand passing convention.

Definition 2 (Operand Stack Passing Convention). All instructions operating on untyped data need to follow the same operand stack data passing convention. Therefore, we define a conversion function c to map data to and from at least one native machine word:

    c_τ : τ → uint64+                                        (6)
    c_τ^{-1} : uint64+ → τ                                   (7)

Second, we need to provide and specify dedicated (un-)boxing operations to unbox data upon entering a sequence of optimized interpreter instructions, and box results when leaving an optimized sequence.

Definition 3 (Boxing and Unboxing of Objects). We define a function m to map objects to at least one machine word and conversely from at least one machine word back to proper language objects of type π.

    m_π : π → τ+                                             (8)
    m_π^{-1} : τ+ → π                                        (9)

Third, this optimization is speculative, i.e., we need to take precautions to preserve semantics and recover from misspeculation. Preserving semantics of operating on unboxed data usually requires to use a tagged data format representation, where we reserve a set of bits to hold type information. But, we restrict ourselves to sequences where we know the types a priori, which allows us to remove the restrictions imposed by using a tagged data format, i.e., additional checking code and decreasing range of representable data. In general, whenever our speculation fails we need to generalize specialized instruction occurrences back to their more generic instructions and resume regular interpretation.

In all definitions, we use the + notation to indicate that a concrete instantiation of either c or m is able to project data onto multiple native machine words. For example, the following implementation section will detail one such case where we represent Python's complex numbers by two native machine words.

Abstract Interpretation   Taking inspiration from Leroy's description of Java bytecode verification [33], we also use an abstract interpreter that operates over types instead of values. Using the type information captured in the previous step, for example from quickening from the type-generic addOp to the optimized instruction derivative intAdd, we can propagate type information from operation instructions to the instructions generating its operands. For example, we know that the intAdd instruction expects its operands to have type Int and produces an operand of type Int:

    intAdd : (VInt.VInt.S) → (VInt.S)

Similar to our actual interpreter, the abstract interpreter uses a transition relation i : S → S′ to represent an instruction i's effect on the operand stack. All interpreter instructions of the original instruction set are denoted by type-generic rules that correspond to the top element of the type lattice, i.e., in our case Value. The following rules exemplify the representation, where we only separate instructions by their arity:

    unaryOp  : (Value.S) → (Value.S)
    binaryOp : (Value.Value.S) → (Value.S)

The set of types our abstract interpreter operates on corresponds to the set of types we generated instruction derivatives for in the first-level quickening step, i.e., (Int, Bool, Float, String). For simplicity, our abstract interpreter ignores branches in the code, which limits our scope to propagate types along straight-line code, but on the other hand avoids data-flow analysis and requires only one linear pass to complete. This is important insofar as we perform this abstract interpretation at run-time and therefore are interested to keep latency introduced by this step at a minimum.

Type propagation proceeds as follows. The following example shows an original example program representation as emitted by some other program, e.g., another compiler:

    [..., push0, push1, addOp2, push3, addOp4, pop5, ...]

After executing this example program, the first-level quickening captures types encountered during execution:

    [..., push0, push1, intAdd2, push3, intAdd4, pop5, ...]

Now, we propagate the type information by abstract interpretation. Since intAdd expects operands of type Int, we can infer that the first two push instructions must push operands of type Int onto the operand stack. Analogously, the second occurrence of intAdd allows us to infer that the result of the first intAdd has type Int, as does the third occurrence of the push instruction. Finally, by inspecting the type stack when the abstract interpreter reaches the pop instruction, we know that it must pop an operand of type Int off the stack.

Therefore, after type propagation our abstract interpreter will have identified that the complete sequence of instructions actually operates exclusively on data of type Int:

    [..., push0_Int, push1_Int, intAdd2, push3_Int, intAdd4, pop5_Int, ...]

We denote the start and end instructions of a candidate sequence by S and E, respectively. In our example, the six-element sequence starts with the first push instruction and terminates with the pop instruction:

    S := push0_Int
    E := pop5_Int
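The straight-line propagation pass can be sketched in a few lines of Python; the instruction names and the single Int type below are illustrative assumptions rather than the actual implementation.

def propagate(sequence):
    # Abstractly interpret over types: track which instruction produced each
    # stack slot and mark producers/consumers of intAdd operands as Int.
    producers, inferred = [], {}
    for i, op in enumerate(sequence):
        if op == "push":
            producers.append(i)                  # type unknown so far
        elif op == "intAdd":                     # Int x Int -> Int
            for p in (producers.pop(), producers.pop()):
                inferred[p] = "Int"
            producers.append(i)
            inferred[i] = "Int"
        elif op == "pop":
            inferred[i] = inferred.get(producers.pop(), "Value")
    return inferred

seq = ["push", "push", "intAdd", "push", "intAdd", "pop"]
# Every offset 0..5 is inferred to operate on Int, i.e. the whole
# sequence S..E is a candidate for unboxed derivatives.
print(propagate(seq))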
Unboxed Instruction Derivatives   Analogous to the previous partial evaluation step we use to obtain the typed instructions operating exclusively on boxed objects, we can use the same strategy to derive the even more optimized derivatives. For regular operations, such as our intAdd example, this is simple and straightforward, as we just replace the boxed object representation Value by its native machine equivalent Int:

intAdd' :: Int → Int → Int
intAdd' x y = x + y

As a result, the compiler will generate an efficient native machine addition instruction and completely sidestep the resolving of dynamic types, as well as (un-)boxing and reference counting operations.

The problem with this intAdd' instruction derivative is, however, that we cannot perform a type check on the operands anymore, as the implementation only allows operands of type Int. In consequence, for preserving semantics of the original interpreter instructions, we need a new strategy for type checking. Our solution to this problem is to bundle type checks and unboxing operations together via function composition and move them to the load instructions which push unboxed operands onto the stack. Assuming that we modify the declaration of Stack to contain heterogeneous data elements (i.e., not only a list of Value, but also unboxed native machine words, denoted by S), pushing an unboxed integer value onto the operand stack looks like this:

pushInt :: Value → S → S
pushInt v s =
  let
    unboxConvert = c_int64 · m_VInt
  in
    case v of
      VInt value → (unboxConvert value) : s
      _          → -- misspeculation, generalize

Since the operand passing convention is type dependent, we cannot use the previous implementation of binaryOp anymore, and need a typed version of it:

binaryIntOp :: (Int → Int → Int) → S → S
binaryIntOp f s =
  let
    popInt   = c_int64^{-1} · pop
    pushInt  = c_int64 · f
    (x, s')  = popInt s
    (y, s'') = popInt s'
  in
    (pushInt x y) : s''

Finally, we need to make sure that once we leave an optimized sequence of instructions, the higher level instruction sets continue to function properly. Hence, we need to box all objects the sequence computes at the end of the optimized sequence. For example, if we have a store instruction that saves a result into the environment, we need to add boxing to its implementation:

storeIntOp :: S → Env → String → S
storeIntOp s e ident =
  let
    boxPopInt = m_VInt^{-1} · c_int64^{-1} · pop
    (obj, s') = boxPopInt s
  in
    (λx (update e ident obj)) → s'

Generalizing When Speculation Fails   In our example of the pushInt interpreter instruction, we see on the last line that there is a case when speculation fails. A specific occurrence of pushInt verifies that the operand matches an expected type such that subsequent instructions can modify on its unboxed representation. Therefore, once the interpreter detects the misspeculation, we know that we have to invalidate all subsequent instructions that speculate on that specific type.

The interpreter can back out of the misspeculation and resume sound interpretation by i) finding the start of the speculatively optimized sequence, and ii) generalizing all specialized instructions up by at least one level. Both of these steps are trivial to implement, particularly since the instruction derivation steps result in having three separate instruction sets. Hence, the first step requires to identify the first instruction i that does not belong to the current instruction set, which corresponds to the predecessor of the start instruction S identified by our abstract interpreter. The second step requires that we map each instruction starting at offset i + 1 back to its more general parent instruction—a mapping we can create and save when we create the initial mapping from an instruction I to its derivative I′.

To be sound, this procedure requires that we further restrict our abstract interpreter to identify only sequences that have no side-effects. As a result, we eliminate candidate sequences that have call instructions in between. This is however only an implementation limitation and not an approach limitation, since all non-local side-effects possible through function calls do not interfere with the current execution. Therefore, we would need to ensure that we do not re-execute already executed functions and instead push the boxed representation of a function call's result onto the operand stack.
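A hedged Python sketch of this recovery step is shown below; PARENT is an assumed mapping recorded when the derivatives are generated, and the instruction names follow the naming used later in Section 4.

# Map each specialized derivative back to a more generic parent (one level up).
PARENT = {
    "NAMA_FLOAT_LOAD_FAST": "LOAD_FAST",
    "NAMA_FLOAT_ADD":       "INCA_FLOAT_ADD",
    "INCA_FLOAT_ADD":       "BINARY_ADD",
}

def generalize(program, miss_offset):
    # i) find the start of the speculatively optimized sequence
    start = miss_offset
    while start > 0 and program[start - 1] in PARENT:
        start -= 1
    # ii) lift every specialized instruction from there up by one level
    for i in range(start, len(program)):
        program[i] = PARENT.get(program[i], program[i])
    return program

prog = ["NAMA_FLOAT_LOAD_FAST", "NAMA_FLOAT_LOAD_FAST", "NAMA_FLOAT_ADD"]
print(generalize(prog, 2))   # ['LOAD_FAST', 'LOAD_FAST', 'INCA_FLOAT_ADD']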
[Figure 1: Speculatively staged interpreter optimization using multi-level quickening. Panels: Original Instructions; 1st-level: Type Feedback; Abstract Interpretation (identifying a typed sequence S ... E); 2nd-level: Unboxed Data (S'' ... E'', with boxing m_π and conversion c_τ at the sequence boundaries).]

3.3  Putting It All Together

Figure 1 shows, in a general form, how speculative staging of interpreter optimizations works. The first part of our technique requires speculatively staging optimized interpreter instructions at interpreter compile-time. In Figure 1, the enclosing shaded shape highlights these staged instructions. We systematically derive these optimized interpreter instructions from the original interpreter instructions. The second part of our technique requires run-time information and concerts the speculatively staged interpreter instructions such that optimized interpretation preserves semantics.

The interpreter starts out executing instructions belonging to its original instruction set (see step 0 in Figure 1). During execution, the interpreter can capture type feedback by rewriting itself to optimized instruction derivatives: Figure 1 shows this in step 1, where instruction I'j:π replaces the more generic instruction Ij, thereby capturing the type information π at offset j. The next step, 2, propagates the captured type information π to a complete sequence of instructions, starting with instruction S and terminating in instruction E. We use an abstract interpreter to identify candidate sequences operating on the same type. Once identified, step 3 of Figure 1 illustrates how we rewrite a complete instruction sequence S, ..., E to optimized instruction derivatives S'', ..., E''. This third instruction set operates on unboxed native machine data types, and therefore requires generic ways to handle (un-)boxing of objects of type π (cf. m_π and m_π^{-1}), as well as converting data to and from the operand stack (cf. c_τ and c_τ^{-1}). All instructions circled by the dotted line of Figure 1 are instruction derivatives that we speculatively stage at interpreter compile-time.

4.  Implementation

This section presents implementation details of how we use speculative staging of optimized interpreter instructions to substantially optimize execution of a Python 3 series interpreter. This interpreter is an implementation vehicle that we use to demonstrate concrete instantiations of our general optimization framework. We use an example sequence of Python instructions to illustrate both the abstract interpretation as well as deriving the optimized interpreter instructions. Python itself is implemented in C and we use casts to force the compiler to use specific semantics.

Figure 2 shows our example Python instruction sequence and how we incrementally and iteratively rewrite this sequence using our speculatively staged optimized interpreter instruction derivatives. We use the prefix INCA to indicate optimized interpreter instruction derivatives used for inline caching, i.e., the first-level quickening. The third instruction set uses the NAMA prefix, which abbreviates native machine execution, since all instructions directly operate on machine data and hence use efficient machine instructions to implement their operation semantics. This corresponds to the second-level quickening. Note that the NAMA instruction set is portable by construction, as it leverages the back-end of the ahead-of-time compiler at interpreter compile-time.

The LOAD_FAST instruction pushes a local variable onto the operand stack, and conversely STORE_FAST pops an object off the operand stack and stores it in the local stack frame. Table 1 lists the set of eligible start and end instructions for our abstract interpreter, and Figure 3 illustrates the data flow of the instruction sequence as assembled by the abstract interpreter. In our example, the whole instruction sequence operates on a single data type, but in general the abstract interpreter needs to be aware of the type lattice implemented by the Python interpreter. For example, dividing two long numbers results in a float number and comparing two complex numbers results in a long number. We model these type conversions as special rules in our abstract interpreter.

    Start Instr.    End Instr.
    LOAD_CONST      POP_JUMP_IF_FALSE
    LOAD_FAST       POP_JUMP_IF_TRUE
                    RETURN_VALUE
                    STORE_FAST
                    YIELD_VALUE

    Table 1: Valid start and end instructions used for abstract interpretation.

Having identified a complete sequence of instructions that all operate on operands of type PyFloat_Type, we can replace all instructions of this sequence with optimized derivatives directly operating on native machine float numbers.

4.1  Deriving Optimized Python Interpreter Instructions

In this section we present details to implement speculative staging of optimized interpreter instructions in a methodological fashion. First, we are going to describe the functions we use to (un-)box Python objects and the conventions we use to pass operand data on the mixed-value operand stack. We treat arithmetic operations for Python's integers, floating point and complex numbers, though our technique is not limited
to these. Second, we illustrate the derivation steps in concrete Python code examples—while in theory we could use a partial evaluator to generate the derivatives, the manual implementation effort is so low that it does not justify using a partial evaluator.

    Original Python bytecode   After Type Feedback    After Type Propagation
                               via Quickening         and Quickening
    LOAD_FAST                  LOAD_FAST              NAMA_FLOAT_LOAD_FAST
    LOAD_FAST                  LOAD_FAST              NAMA_FLOAT_LOAD_FAST
    LOAD_FAST                  LOAD_FAST              NAMA_FLOAT_LOAD_FAST
    BINARY_MULT                INCA_FLOAT_MULT        NAMA_FLOAT_MULT
    LOAD_FAST                  LOAD_FAST              NAMA_FLOAT_LOAD_FAST
    LOAD_FAST                  LOAD_FAST              NAMA_FLOAT_LOAD_FAST
    BINARY_MULT                INCA_FLOAT_MULT        NAMA_FLOAT_MULT
    BINARY_ADD                 INCA_FLOAT_ADD         NAMA_FLOAT_ADD
    LOAD_FAST                  LOAD_FAST              NAMA_FLOAT_LOAD_FAST
    LOAD_FAST                  LOAD_FAST              NAMA_FLOAT_LOAD_FAST
    BINARY_MULT                INCA_FLOAT_MULT        NAMA_FLOAT_MULT
    BINARY_ADD                 INCA_FLOAT_ADD         NAMA_FLOAT_ADD
    LOAD_CONST                 LOAD_CONST             NAMA_FLOAT_LOAD_CONST
    BINARY_POWER               INCA_FLOAT_POWER       NAMA_FLOAT_POWER
    BINARY_MULT                INCA_FLOAT_MULT        NAMA_FLOAT_MULT
    STORE_FAST                 STORE_FAST             NAMA_FLOAT_STORE_FAST

    Figure 2: Concrete Python bytecode example and its step-wise optimization using multi-level quickening for concerting at run-time.

[Figure 3: Data-flow of the sequence of instructions of Figure 2 in abstract interpretation. Arrows indicate direction of type propagation.]

4.1.1  (Un-)boxing Python Objects

The Python interpreter supports uniform procedures to box and unbox Python objects to native machine numbers, and we just briefly mention which functions to use and give their type.

Integers   Python represents its unbounded range integers by the C struct PyLong_Type.

    m_PyLong_Type      := PyLong_AS_LONG  : PyLongObject* → int64
    m_PyLong_Type^{-1} := PyLong_FromLong : int64 → PyLongObject*

Floating Point Numbers   Floating point numbers are represented by the C struct PyFloat_Type.

    m_PyFloat_Type      := PyFloat_AS_DOUBLE  : PyFloatObject* → double
    m_PyFloat_Type^{-1} := PyFloat_FromDouble : double → PyFloatObject*

Complex Numbers   Complex numbers are represented by the C struct PyComplex_Type, but we cannot directly access a native machine representation of its data. This is due to a complex number consisting of two parts, a real and an imaginary part:

typedef struct {
    double real;
    double imag;
} complex_t;

Furthermore, we need to know something about the internals of the PyComplex_Type implementation to access the native machine data. Internally, Python uses a similar struct to complex_t (Py_complex) to hold the separate parts and we can access the data via ((PyComplexObject*)x)->cval.

    m_PyComplex_Type      := PyComplexObject* → (double, double)
    m_PyComplex_Type^{-1} := (double, double) → PyComplexObject*

4.1.2  Operand Stack Data Passing Convention

Across all three instruction sets, instructions pass operands using the operand stack. The original Python instruction set operates exclusively on Python objects which reside on the heap, i.e., the operand stack passes pointers to the heap, of type uint64 on a 64-bit machine. The lowest level instruction set operates on native machine data, i.e., we need to map native machine data types to the existing operand stack. We rely on C constructs of explicit casts and unions to ensure that we attach the proper semantics to the bits passed around.
Passing Integers   Naturally, it is trivial in C to support passing integers on the mixed operand stack: we simply cast from the signed integer representation to the unsigned representation used for pointers, too.

    c_int64(o : int64)       := (uint64) o
    c_int64^{-1}(o : uint64) := (int64) o

Passing Floats   To pass floating point numbers, we need to avoid implicit casting a C compiler would insert when using a cast from double to uint64. A solution to circumvent this is to use a C union that allows to change the semantics the compiler attaches to a set of bits. More precisely, we use the following union:

typedef union { uint64 word; double dbl; } map_t;

Map_t allows us to change the semantics by using the corresponding field identifier and thus suffices to map doubles to uint64 representation in a transparent and portable fashion.

    c_double(o : double)      := (map_t m; m.dbl = o; (uint64) m.word)
    c_double^{-1}(o : uint64) := (map_t m; m.word = o; m.dbl)

Passing Complex Numbers   As previously described, we represent a complex number using two double numbers. Therefore, we can reuse the functions that map floating point numbers:

    c_complex(o : complex_t) := (c_double(o.real), c_double(o.imag))
    c_complex^{-1}(i : uint64, r : uint64) := (complex_t c; c.real = c_double^{-1}(r); c.imag = c_double^{-1}(i))

But, the operand stack passing convention alone does not suffice since passing native machine complex numbers requires two stack slots instead of one. Consequently, we double the operand stack size such that all instruction operands on the stack could be two-part complex numbers. Since the abstract interpreter identifies whole sequences of instructions, there is always a termination instruction that boxes the two-part double numbers into a PyComplexObject instance. As a result, this temporary use of two operand stack slots is completely transparent to all predecessor instructions of the sequence as well as all successor instructions of the sequence.
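The effect of the map_t union can be mimicked in pure Python with the standard struct module, which makes the double-to-uint64 reinterpretation easy to experiment with; this only illustrates the convention, it is not how the C interpreter passes operands.

import struct

def c_double(value):
    # Reinterpret the 8 bytes of an IEEE-754 double as an unsigned 64-bit word.
    return struct.unpack("<Q", struct.pack("<d", value))[0]

def c_double_inv(word):
    # Inverse mapping: view the 64-bit word as a double again.
    return struct.unpack("<d", struct.pack("<Q", word))[0]

def c_complex(z):
    # A native complex value occupies two operand stack slots (real, imag).
    return (c_double(z.real), c_double(z.imag))

word = c_double(3.5)
assert c_double_inv(word) == 3.5
assert c_complex(1.0 + 2.0j) == (c_double(1.0), c_double(2.0))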
4.1.3  Example Instructions

The previous section contains all details necessary to inductively construct all optimized interpreter instructions that modify native machine data types. In this section, we give concrete Python interpreter instruction implementation examples for completeness.

Original Python Instruction   The following program excerpt shows Python's original implementation of the arithmetic subtraction operation:

case BINARY_SUBTRACT:
    w = POP();
    v = TOP();
BINARY_SUBTRACT_MISS:
    x = PyNumber_Subtract(v, w);
    Py_DECREF(v);
    Py_DECREF(w);
    SET_TOP(x);
    if (x != NULL) DISPATCH();
    goto on_error;

The resolving procedure of dynamic types is not visible and resides in the PyNumber_Subtract function. The resolving is much more complicated than our simplified interpreter substrate suggests, in particular due to the presence of ad-hoc polymorphism and inheritance. In our simplified interpreter substrate all dynamic types were known at compile-time and we could use pattern matching to express semantics properly and exhaustively. For languages such as Python, this is in general not possible. For example, one could use operator overloading for integers to perform subtraction, or perform the traditional integer addition by inheriting from the system's integers.

First-level Quickening   By fixing operands v and w to the type PyFloat_Type, we can derive the following optimized instruction derivative, expressed as INCA_FLOAT_SUBTRACT.

 1  case INCA_FLOAT_SUBTRACT:
 2      w = POP();
 3      v = TOP();
 4      if (!T(v, w, PyFloat_Type)) {
 5          /* misspeculation, generalize */
 6          goto BINARY_SUBTRACT_MISS;
 7      } // if
 8      x = PyFloat_Type.tp_as_number->nb_subtract(v, w);
 9      Py_DECREF(w);
10      Py_DECREF(v);
11      SET_TOP(x);
12      if (x != NULL) DISPATCH();
13      goto on_error;

On lines four to seven, we see what happens on misspeculation. After the type check (stylized by symbol T) fails, we resume execution of the general instruction that INCA_FLOAT_SUBTRACT derives from, BINARY_SUBTRACT in this case. We change the implementation of BINARY_SUBTRACT to add another label that we can use for resuming correct execution.

Directly calling nb_subtract on PyFloat_Type optimizes the complex resolving of dynamic typing hinted at before. Furthermore, this type-specialized instruction derivative illustrates that we can in fact infer that both operands as well as the result (x) are of type PyFloat_Type.

Second-level Quickening   The second-level quickening step optimizes sequences of interpreter instructions to directly modify native machine data. Hence, we move the required type check to the corresponding load instruction:

case NAMA_FLOAT_LOAD_FAST:
    PyObject *x = fastlocals[oparg];
    map_t result;
    if (T(x, PyFloat_Type))
        result.dbl = PyFloat_AS_DOUBLE(x);
    else
        /* misspeculation, generalize */ ;
    PUSH(result.word);
    NEXT_INSTR();

The corresponding floating point subtract operation need not perform any type checks, reference counting, or (un-)boxing operations anymore:

case NAMA_FLOAT_SUBTRACT:
    w = POP();
    v = TOP();
    {
        map_t s, t;
        s.word = (uint64) v;
        t.word = (uint64) w;
        s.dbl -= t.dbl;
        SET_TOP(s.word);
        DISPATCH();
    }

Both of the native floating point interpreter instructions make use of the map_t union as previously explained to avoid implicit conversions as emitted by the compiler.

In general, the type-specific load instructions have to validate all assumptions needed to manipulate unboxed native machine data. For example, integers in Python 3 are by default unbounded, i.e., they can exceed native machine bounds. As a result, we modify the corresponding integer load instruction to check whether it is still safe to unbox the integer. However, it is worth noting that these are only implementation limitations, as for example we could expand this technique to use two machine words and perform a 128-bit addition because we already doubled the stack size to accommodate complex numbers.

5.  Evaluation

Systems and Procedure   We ran the benchmarks on an Intel Nehalem i7-920 based system running at a frequency of 2.67 GHz, on Linux kernel version 3.0.0-26 and gcc version 4.6.1. To minimize perturbations by third party systems, we take the following precautions. First, we disable Intel's Turbo Boost [28] feature to avoid frequency scaling based on unpublished, proprietary heuristics. Second, we use nice -n -20 to minimize operating system scheduler effects. Third, we use 30 repetitions for each pairing of a benchmark with an interpreter to get stable results; we report the geometric mean of these repetitions, thereby minimizing bias towards outliers.
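The per-benchmark numbers reported below are geometric means over these 30 repetitions; the following sketch shows that aggregation (the timing values are made up for illustration).

import math

def geometric_mean(values):
    # n-th root of the product, computed via logarithms to avoid overflow
    return math.exp(sum(math.log(v) for v in values) / len(values))

baseline  = [2.10, 2.12, 2.09]     # seconds, switch-dispatch CPython (made up)
optimized = [0.52, 0.50, 0.51]     # seconds, MLQ interpreter (made up)

speedups = [b / o for b, o in zip(baseline, optimized)]
print(round(geometric_mean(speedups), 3))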
Benchmarks We use several benchmarks from the computer language benchmarks game [22]. This is due to the
following reasons. First, since we are using a Python 3 series interpreter, we cannot use programs written for Python 2
to measure performance. Unfortunately, many popular third
party libraries and frameworks, such as Django, twisted, etc.
have not released versions of their software officially supporting Python 3. Compatibility concerns aside, the Python
community has no commonly agreed upon comprehensive
set of benchmarks identified to assess Python performance.
Second, some popular libraries have custom C code modules
that perform computations. Effectively, benchmarking these
programs corresponds to measuring time not spent in the
interpreter, and therefore would skew the results in the wrong
direction. Instead, we use the following benchmarks that
stress raw interpreter performance: binarytrees, fannkuch,
fasta, mandelbrot, nbody, and spectralnorm.
Finally, we rely on those benchmarks because they allow
comparison with other implementations, such as PyPy. PyPy
officially only supports Python 2, but since none of those
benchmarks use Python 3 specifics—with the notable exception of fannkuch, which required minor changes—we run
the identical programs under PyPy. This may sound like a
contradiction, but is in fact only possible with the chosen set
of programs and cannot in general be expected to hold for
other programs.
Implementation Remarks The first-level quickening step
has already been explored in 2010 by Brunthaler [7, 8], and
we used his publicly available implementation [9] as a basis
for ours. In the remainder of this paper, we refer to the original
interpreter as INCA (an abbreviation for inline caching), and
the modified interpreter that supports the new optimizations
as MLQ, which is short for multi-level quickening.
We use a simple code generator written in Python that
generates the C code for all instructions of the Python interpreter. We run our code generator as a pre-compile step
when compiling the interpreter, and rely on the existing build
infrastructure to build the interpreter. In consequence, all of
the instruction derivatives are speculatively added to the interpreter and available for concerting at run-time, sidestepping dynamic code generation altogether.
We provide templates of the C instructions using the
language of the Mako template engine [2]. The semantics of
all instruction derivatives is essentially identical, e.g., adding
numbers, which is why derivative instructions are merely
optimized copies operating on specialized structures or types.
Hence, these templates capture all essential details and help keep redundancy at bay. If we were to create a domain-specific language for generating interpreters, similar to the VMgen/Tiger [10, 21] project, we could express the derivative relationship for instructions, thereby reducing the costs for creating these templates. The next section provides lines-of-code data regarding our interpreter implementation generator (see Section 5.2).
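To give a flavour of the template-driven generation, the following is a hypothetical, heavily simplified Mako template expanded from Python; the real templates produce the complete C instruction bodies.

from mako.template import Template

# Hypothetical template for a family of typed subtract derivatives; the
# ${...} holes are filled per (prefix, type) pair at interpreter build time.
instr_template = Template("""
case ${prefix}_${TYPE}_SUBTRACT:
    w = POP();
    v = TOP();
    x = ${impl}(v, w);
    SET_TOP(x);
    DISPATCH();
""")

print(instr_template.render(prefix="INCA", TYPE="FLOAT",
                            impl="PyFloat_Type.tp_as_number->nb_subtract"))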
5.1  Benchmark Results
Figure 4 shows the performance results obtained on our test
system. We report a maximum speedup by up to a factor
of 4.222 over the CPython 3.2.3 interpreter using switch
dispatch. INCA itself improves performance by up to a factor
of 1.7362. As a result, the MLQ system improves upon the
previous maximum speedups by 143%.
PyPy Comparison Even though our speculatively staged
MLQ interpreter is no match in comparison to the multi-year,
multi-person effort that went into the PyPy implementation
(see implementation measurements in the discussion in Section 5.3), we want to give a realistic perspective of the potential of MLQ interpretation. Therefore, we evaluated the
[Figure 4: Detailed speedups per benchmark normalized by the CPython 3.2.3 interpreter using switch-dispatch. Interpreters compared: Threaded Code, INCA, and MLQ; y-axis: speedup over switch-dispatch, x-axis: benchmark.]
performance against PyPy version 1.9 [40]. Note that the times we measured include start-up and warm-up times for PyPy, since it is not clear at which point in time the benefits of JIT compilation are visible.

Table 2 lists the geometric mean of speedups per benchmark that we measured on our Intel Nehalem system. During our experiment we also measured overall memory consumption and report that our system uses considerably less memory at run-time: PyPy uses about 20 MB, whereas our MLQ interpreter uses less than 7 MB. This is primarily due to the systems using different memory management techniques: MLQ uses CPython's standard reference counting, whereas PyPy offers several state-of-the-art garbage collectors, such as a semi-space copying collector and a generational garbage collector. Surprisingly, we find that using a more powerful memory management technique does not automatically translate to higher performance. Since MLQ is particularly effective at eliminating the overhead of reference counting—reducing required reference count operations, as well as using native-machine data instead of boxed objects—we can take full advantage of its benefits: determinism and space-efficiency [15].

    Benchmark       PyPy 1.9      MLQ         PyPy / MLQ
    binarytrees      3.2031 ×     1.8334 ×      1.7471
    fannkuch         8.2245 ×     1.5240 ×      5.2884
    fasta           13.4537 ×     1.6906 ×      7.9579
    mandelbrot       6.3224 ×     1.9699 ×      3.2095
    nbody           12.3592 ×     2.0639 ×      5.9883
    spectralnorm     3.5563 ×     4.0412 ×      0.8800

    Table 2: Speedups of PyPy 1.9 and our MLQ system normalized by the CPython 3.2.3 interpreter using switch-dispatch.

5.2  Interpreter Data

We rely on David Wheeler's sloccount utility [44] to measure lines of code. For calculating the number of interpreter instructions, we use a regular expression to select the beginning of the instructions and wc -l to count their occurrences.

Instruction-Set Extension   The CPython 3.2.3 interpreter has 100 interpreter instructions spanning 1283 lines of code. The INCA instruction set of the INCA interpreter adds another 53 instructions to the interpreter, totaling 3050 lines of code (i.e., a plus of 138% or 1767 lines of code). NAMA itself requires additional 134 interpreter instructions adding another 1240 lines of code (increase by 41% over INCA). We adapted the existing Python code generator of the INCA system to generate the NAMA instruction derivatives' C implementation. The original code generator has 2225 lines of Python code, where 1700 lines of code just reflect the type structure code extracted from the C structs of Python objects via gdb. The required changes were about 600 lines of Python code, resulting in the updated code generator having 2646 lines of Python code. The INCA code generator uses 2225 lines of C code templates. To support the NAMA instruction set, we added another 1255 lines of templatized C code—giving 3480 lines of template code in total. In addition to this, our abstract interpreter identifying eligible sequences requires around 400 lines of C code.
Portability   As we have briefly mentioned before, our speculative staging leverages the existing backend of the ahead-of-time compiler that is used to compile the interpreter. Therefore, our technique is portable by construction, i.e., since we implemented our optimized derivatives in C, the interpreter is as portable as any other C program. We confirm this by compiling the optimized interpreter on a PowerPC system. This did not require changing a single line of code.

Space Requirements   Table 3 presents the effect of implementing our speculatively staged MLQ Python interpreter on the binary size of the executable. We see that going from a switch-based interpreter to a threaded code interpreter requires additional 12 kB of space. Finally, we see that adding two additional instruction sets to our Python interpreter requires less than 110 kB of additional space (when discounting the space requirement from threaded code).

    Interpreter                     Binary Size (bytes)   Increase (kB)   Increase (%)
    Python 3.2.3/switch-dispatch              2,135,412               0            0.0
    Python 3.2.3/threaded code                2,147,745              12            0.6
    MLQ Python interpreter                    2,259,616             121            5.8

    Table 3: MLQ binary size increase without debug information on Intel Nehalem i7-920.

6.  Related Work

Partial Evaluation   In 1996, Leone and Lee [31] present their implementation of an optimizing ML compiler that relies on run-time feedback. Interestingly, they mention the basic idea for our system:

    It is possible to pre-compile several alternative templates for the same code sequence and choose between them at run time, but to our knowledge this has never been attempted in practice.
Substituting “interpreter instructions”—or derivatives, as
we frequently refer to them—for the term “templates” in
the quote, reveals the striking similarity. In addition, both
approaches leverage the compiler back-end of the aheadof-time compiler assembling the run-time system—in our
case the interpreter. This approach therefore automatically
supports all target architectures of the base-compiler and
hence there is no need for building a custom back-end.
In similar vein to Leone and Lee, other researchers addressed the prohibitive latency requirements of dynamic compilation [12, 13, 18, 25, 36] by leveraging ideas from partial
evaluation. While we take inspiration from these prior results,
we address the latency problem superimposed by dynamic
code generation by avoiding it altogether. Instead, we speculate on the likelihood of the interpreter using certain kinds
of types and derive optimized instructions for them. At runtime, we rely on our novel procedure of concerting these
optimized derivatives via abstract interpretation driven multilevel quickening. That being said, since these approaches are
orthogonal, we believe that there are further advancements
to be had by combining these approaches. For example ‘C,
or the recently introduced Terra/Lua [17], could be used to
either stage the optimized derivatives inside of the interpreter
source code, or generate the necessary derivatives at run-time,
thereby eliminating the speculation part.
The initial optimization potential of partial evaluation
applied to interpreters goes back to Futamura in 1971 [23].
But, prior work has repeatedly revisited this specific problem.
In particular, Thibault et al. [43] analyze the performance
potential of partially evaluated interpreters and report a
speedup of up to four times for bytecode interpreters. This
result is intimately related to our work, in particular since
they note that partial evaluation primarily targets instruction
dispatch when optimizing interpreters—similar to the first
Futamura projection. In 2009, Brunthaler established that
instruction dispatch is not a major performance bottleneck
for our class of interpreters [6]. Instead, our approach targets
5.3  Discussion
The most obvious take-away from Figure 4 is that there
is clearly a varying optimization potential when using our
optimization. Upon close investigation, we found that this is
due to our minimal set of eligible start and end instructions
(see Table 1). For example, there are other candidates for
start instructions that we do not currently support, such as
LOAD_ATTR, LOAD_NAME, LOAD_GLOBAL, LOAD_DEREF. In
consequence, expanding the abstract interpreter to cover
more cases, i.e., more instructions and more types, will
improve performance even further. Spectralnorm performs
best, because our abstract interpreter finds that all of the
instructions of its most frequently executed function (eval_A) can be optimized.
Finally, we were surprised about the performance comparison with PyPy. First, it is striking that we outperform PyPy
1.9 on the spectralnorm benchmark. Since we include startup and warm-up times, we decided to investigate whether this
affects our result. We timed successive runs with higher argument numbers (1000, 1500, 2000, and 4000) and verified that
our interpreter maintains its performance advantage. Besides
this surprising result, we think that the performance improvement of our interpreter lays a strong foundation for further
optimizations. For example, we believe that implementing
additional instruction-dispatch based optimizations, such as
superinstructions [19, 39] or selective inlining [37], should
have a substantial performance impact.
Second, we report that the interpreter data from Section 5.2
compares favorably with PyPy, too. Using sloccount on
the pypy directory on branch version-1.9 gives the following results. For the interpreter directory, sloccount computes 25,991, and for the jit directory 83,435 lines of Python code. The reduction between the 100kLOC of PyPy and the 6.5kLOC of MLQ is by a factor of almost 17×. This is a testament to the ease-of-implementation property of interpreters, and also of purely interpretative optimizations in general.
forms PyPy by up to 14% on the spectralnorm benchmark,
and requires substantially less implementation effort.
known bottlenecks in instruction implementation: dynamic
typing, reference counting operations, and modifying boxed
value representations.
Glück and Jørgensen also connect interpreters with partial
evaluation [24], but as a means to optimize results obtained
by applying partial evaluation. Our technique should achieve
similar results, but since it is speculative in nature, it does not
need information of the actual program P that is interpreted,
which is also a difference between our work and Thibault et
al. [43].
Miscellaneous Prior research addressed the importance
of directly operating on unboxed data [32, 35]. There are
certain similarities, e.g., Leroy’s use of the wrap and unwrap
operators are related to our (un-)boxing functions, and there
exist similar concerns in how to represent bits in a uniform
fashion. The primary difference to the present work is that
we apply this to a different language, Python, which has a
different sets of constraints and is dynamically typed.
In 1998, Shields et al. [42] address overhead in dynamic
typing via staged type inference. This is an interesting approach, but it is unclear if or how efficient this technique
scales to Python-like languages. Our technique is much simpler, but we believe it could very well benefit of a staged
inference step.
Interpreter Optimization The most closely related work in
optimizing high-level interpreters is due to Brunthaler [7,
8]. In fact, the first-level quickening step to capture type
feedback goes back to the discovery by Brunthaler, and we
have compared his publicly available system against our
new technique. In addition to the second-level quickening
that targets the overheads incurred by using boxed object
representations, we also describe a principled approach to
using partial evaluation for deriving instructions.
7.  Conclusions
We present a general theory and framework to optimize interpreters for high-level languages such as JavaScript, Python,
and Ruby. Traditional optimization techniques such as aheadof-time compilation and partial evaluation only have limited
success in optimizing the performance of these languages.
This is why implementers usually resort to the expensive
implementation of dynamic compilers—evidenced by the
substantial industry efforts on optimizing JavaScript. Our
technique preserves interpreter characteristics, such as portability and ease of implementation, while at the same time
enabling substantial performance speedups.
This important speedup is enabled by peeling off layers
of redundant complexity that interpreters conservatively reexecute instead of capitalizing on the “dynamic locality of
type usage”—almost three decades after Deutsch and Schiffman described how to leverage this locality for great benefit.
We capitalize on the observed locality by speculatively staging optimized interpreter instruction derivatives and concerting them at run-time.
First, we describe how speculation allows us to decouple
the partial evaluation from any concrete program. This enables a principled approach to deriving the implementation of
optimized interpreter instruction derivatives by speculating
on types the interpreter will encounter with a high likelihood.
Second, we present a new technique of concerting optimized interpreter instructions at run-time. At the core, we
use a multi-level quickening technique that enables us to optimize untyped instructions operating on boxed objects down
to typed instructions operating on native machine data.
From a practical perspective, our implementation and
evaluation of the Python interpreter confirms that there is
a huge untapped performance potential waiting to be set
free. Regarding the implementation, we were surprised how
easy it was to provide optimized instruction derivatives
even without automated support by partial evaluation. The
evaluation indicates that our technique is competitive with
Just-in-time compilers Type feedback has a long and successful history in just-in-time compilation. In 1994, Hölzle
and Ungar [26, 27] discuss how the compiler uses type feedback to inline frequently dispatched calls in a subsequent
compilation run. This reduces function call overhead and
leads to a speedup by up to a factor of 1.7. In general, subsequent research gave rise to adaptive optimization in just-intime compilers [1]. Our approach is similar, except that we
use type feedback for optimizing the interpreter.
In 2012, there has been work on “repurposed JIT compilers,” or RJITs, which take an existing just-in-time compiler for a statically typed programming language and add
support for a dynamically typed programming language on
top [11, 29]. This approach is interesting, because it tries to
leverage an existing just-in-time compilation infrastructure
to enable efficient execution of higher abstraction-level programming languages—similar to what has been described
earlier in 2009 by Bolz et al. [5] and Yermolovich et al. [46],
but more invasive. Unfortunately, the RJIT work is unaware
of recent advances in optimizing interpreters, and therefore
misses some important optimization opportunities available
to a repurposed just-in-time compiler. Würthinger et al. [45]
found that obtaining information from the interpreter has
substantial potential to optimize JIT compilation, and we anticipate that this is going to have major impact on the future
of dynamic language implementation.
Regarding traditional just-in-time compilers, Python nowadays only has one mature project: PyPy [41]. PyPy follows a
trace-based JIT compilation strategy and achieves substantial
speedups over standard CPython. However, PyPy has downsides, too: because its internals differ from CPython, it is not
compatible with many third party modules written in C. Our
comparison to PyPy finds that it is a much more sophisticated
system offering class-leading performance on some of our
benchmarks. Surprisingly, we find that our technique outper-
a dynamic compiler w.r.t. performance and implementation
effort: besides the speedups by a factor of up to 4.222, we
report a reduction in implementation effort by about 17×.
[11] J. G. Castanos, D. Edelsohn, K. Ishizaki, P. Nagpurkar,
T. Nakatani, T. Ogasawara, and P. Wu. On the benefits and
pitfalls of extending a statically typed language jit compiler
for dynamic scripting languages. In Proceedings of the 27th
ACM SIGPLAN Conference on Object Oriented Programming:
Systems, Languages, and Applications, Tucson, AZ, USA, October 21-25, 2012 (OOPSLA ’12), pages 195–212, 2012. doi:
http://doi.acm.org/10.1145/2384616.2384631.
References
[1] J. Aycock. A brief history of just-in-time. ACM Computing
Surveys, 35(2):97–113, 2003. ISSN 0360-0300. doi: http:
//doi.acm.org/10.1145/857076.857077.
[2] M. Bayer. Mako, April 2013. URL http://www.makotemplates.org.
[12] C. Chambers. Staged compilation. In Proceedings of the ACM SIGPLAN Workshop on Partial Evaluation and Semantics-Based Program Manipulation, Portland, OR, USA, January 14-15, 2002 (PEPM ’02), pages 1–8, 2002. doi: http://doi.acm.org/10.1145/503032.503045.
[3] J. R. Bell. Threaded code. Communications of the ACM, 16
(6):370–372, 1973. ISSN 0001-0782. doi: http://doi.acm.org/
10.1145/362248.362270.
[13] C. Consel and F. Noël. A General Approach for Run-Time Specialization and its Application to C. In POPL ’96 [38], pages
145–156. doi: http://doi.acm.org/10.1145/237721.237767.
[4] M. Berndl, B. Vitale, M. Zaleski, and A. D. Brown. Context
threading: A flexible and efficient dispatch technique for virtual machine interpreters. In Proceedings of the 3rd IEEE /
ACM International Symposium on Code Generation and Optimization, San Jose, CA, USA, March 20-23, 2005 (CGO ’05),
pages 15–26, 2005.
[14] E. H. Debaere and J. M. van Campenhout. Interpretation and
instruction path coprocessing. Computer systems. MIT Press,
1990. ISBN 978-0-262-04107-2.
[15] L. P. Deutsch and D. G. Bobrow. An efficient, incremental,
automatic garbage collector. Communications of the ACM, 19
(9):522–526, 1976. ISSN 0001-0782. doi: http://doi.acm.org/
10.1145/360336.360345.
[5] C. F. Bolz, A. Cuni, M. Fijałkowski, and A. Rigo. Tracing
the meta-level: PyPy’s tracing JIT compiler. In Proceedings
of the 4th Workshop on the Implementation, Compilation,
Optimization of Object-Oriented Languages and Programming
Systems (ICOOOLPS ’09), Lecture Notes in Computer Science,
pages 18–25. Springer, 2009. ISBN 978-3-642-03012-3. doi:
http://doi.acm.org/10.1145/1565824.1565827.
[16] L. P. Deutsch and A. M. Schiffman. Efficient implementation of the Smalltalk-80 system. In Proceedings of the SIGPLAN ’84 Symposium on Principles of Programming Languages (POPL ’84), pages 297–302, New York, NY, USA,
1984. ACM. ISBN 0-89791-125-3. doi: http://doi.acm.org/10.
1145/800017.800542.
[6] S. Brunthaler. Virtual-machine abstraction and optimization
techniques. In Proceedings of the 4th International Workshop
on Bytecode Semantics, Verification, Analysis and Transformation, York, United Kingdom, March 29, 2009 (BYTECODE ’09),
volume 253(5) of Electronic Notes in Theoretical Computer
Science, pages 3–14, Amsterdam, The Netherlands, December
2009. Elsevier. doi: http://dx.doi.org/10.1016/j.entcs.2009.11.
011.
[17] Z. DeVito, J. Hegarty, A. Aiken, P. Hanrahan, and J. Vitek.
Terra: a multi-stage language for high-performance computing. In Proceedings of the ACM SIGPLAN Conference on
Programming Language Design and Implementation, Seattle,
WA, USA, June 16-22, 2013 (PLDI ’13), pages 105–116, 2013.
doi: http://doi.acm.org/10.1145/2462156.2462166.
[18] D. R. Engler, W. C. Hsieh, and M. F. Kaashoek. ‘C: A
Language for High-Level, Efficient, and Machine-Independent
Dynamic Code Generation. In POPL ’96 [38], pages 131–144.
doi: http://doi.acm.org/10.1145/237721.237765.
[7] S. Brunthaler. Inline caching meets quickening. In Proceedings
of the 24th European Conference on Object-Oriented Programming, Maribor, Slovenia, June 21-25, 2010 (ECOOP ’10), volume 6183/2010 of Lecture Notes in Computer Science, pages
429–451. Springer, 2010. ISBN 978-3-642-03012-3. doi:
http://dx.doi.org/10.1007/978-3-642-03013-0.
[19] M. A. Ertl and D. Gregg. The structure and performance of
efficient interpreters. Journal of Instruction-Level Parallelism,
5:1–25, November 2003.
[8] S. Brunthaler. Efficient interpretation using quickening. In
Proceedings of the 6th Symposium on Dynamic Languages,
Reno, NV, USA, October 18, 2010 (DLS ’10), pages 1–14, New
York, NY, USA, 2010. ACM Press. ISBN 978-3-642-03012-3.
doi: http://dx.doi.org/10.1007/978-3-642-03013-0.
[20] M. A. Ertl and D. Gregg. Combining stack caching with dynamic superinstructions. In Proceedings of the 2004 Workshop
on Interpreters, Virtual Machines and Emulators (IVME ’04),
pages 7–14, New York, NY, USA, 2004. ACM. ISBN 1-58113909-8. doi: http://doi.acm.org/10.1145/1059579.1059583.
[9] S. Brunthaler. Python quickening based optimizations for
cpython 3.3a0, 2012. URL http://www.ics.uci.edu/
~sbruntha/pydev.html.
[10] K. Casey, D. Gregg, and M. A. Ertl. Tiger – an interpreter
generation tool. In Proceedings of the 14th International
Conference on Compiler Construction, Edinburgh, United
Kingdom, April 4-8, 2005 (CC ’05), volume 3443/2005 of
Lecture Notes in Computer Science, pages 246–249. Springer,
2005. ISBN 3-540-25411-0. doi: http://dx.doi.org/10.1007/
978-3-540-31985-6 18.
[21] M. A. Ertl, D. Gregg, A. Krall, and B. Paysan. Vmgen: a
generator of efficient virtual machine interpreters. Software
Practice & Experience, 32:265–294, March 2002. ISSN 00380644. doi: 10.1002/spe.434. URL http://portal.acm.
org/citation.cfm?id=776235.776238.
[22] B. Fulgham. The computer language benchmarks game, 2013.
URL http://shootout.alioth.debian.org/.
[23] Y. Futamura. Partial Evaluation of Computation Process–An Approach to a Compiler-Compiler. Systems.Computers.Controls, 2(5):45–50, 1971.
[24] R. Glück and J. Jørgensen. Generating Optimizing Specializers.
In Proceedings of the IEEE Computer Society International
Conference on Computer Languages, Toulouse, France, May
16-19, 1994 (ICCL ’94), pages 183–194, 1994.
[36] M. Philipose, C. Chambers, and S. J. Eggers. Towards Automatic Construction of Staged Compilers. In Proceedings
of the 29th ACM SIGPLAN-SIGACT Symposium on Principles of Programming Languages, Portland, OR, USA, January 16-18, 2002 (POPL ’02), pages 113–125, 2002. doi:
http://doi.acm.org/10.1145/503272.503284.
[25] B. Grant, M. Mock, M. Philipose, C. Chambers, and S. J.
Eggers. Annotation-Directed Run-Time Specialization in C.
In Proceedings of the ACM SIGPLAN Workshop on Partial
Evaluation and Semantics-Based Program Manipulation, Amsterdam, The Netherlands, June 12-13, 1997 (PEPM ’97),
pages 163–178, 1997. doi: http://doi.acm.org/10.1145/258993.
259016.
[37] I. Piumarta and F. Riccardi. Optimizing direct threaded code
by selective inlining. In Proceedings of the ACM SIGPLAN
Conference on Programming Language Design and Implementation, Montréal, QC, Canada, June 17-19, 1998 (PLDI ’98),
pages 291–300, New York, NY, USA, 1998. ACM. ISBN 089791-987-4. doi: http://doi.acm.org/10.1145/277650.277743.
[26] U. Hölzle. Adaptive Optimization for SELF: Reconciling
High Performance with Exploratory Programming. PhD thesis,
Stanford University, Stanford, CA, USA, 1994.
[27] U. Hölzle and D. Ungar. Optimizing dynamically-dispatched
calls with run-time type feedback. In Proceedings of the
ACM SIGPLAN Conference on Programming Language Design
and Implementation, Orlando, FL, USA, June 20-24, 1994
(PLDI ’94), pages 326–336, 1994. ISBN 0-89791-662-X. doi:
http://doi.acm.org/10.1145/178243.178478.
[38] POPL ’96. Proceedings of the 23rd ACM SIGPLAN-SIGACT
Symposium on Principles of Programming Languages, St. Petersburg Beach, FL, USA, January 21-24, 1996 (POPL ’96),
1996.
[39] T. A. Proebsting. Optimizing an ANSI c interpreter with
superoperators. In Proceedings of the 22nd ACM SIGPLANSIGACT Symposium on Principles of Programming Languages,
San Francisco, CA, USA, January 23-25, 1995 (POPL ’95),
pages 322–332, 1995. doi: http://doi.acm.org/10.1145/199448.
199526.
[28] Intel. Intel turbo boost technology – on-demand processor performance, 2012. URL http://www.intel.com/
content/www/us/en/architecture-and-technology/
turbo-boost/turbo-boost-technology.html.
[29] K. Ishizaki, T. Ogasawara, J. G. Castanos, P. Nagpurkar,
D. Edelsohn, and T. Nakatani. Adding dynamically-typed
language support to a statically-typed language compiler: Performance evaluation, analysis, and tradeoffs. In Proceedings of the 8th ACM SIGPLAN International Conference on
Virtual Execution Environments, London, United Kingdom,
March 3-4, 2012 (VEE ’12), pages 169–180, 2012. doi:
http://doi.acm.org/10.1145/2151024.2151047.
[40] PyPy. The PyPy project, 2013. URL http://pypy.org/.
[41] A. Rigo and S. Pedroni. PyPy’s approach to virtual machine construction. In Proceedings of the 21st ACM SIGPLAN Conference on Object Oriented Programming: Systems, Languages, and Applications, Portland, OR, USA, October 22-26, 2006 (OOPSLA ’06), pages 944–953, 2006. doi:
http://doi.acm.org/10.1145/1176617.1176753. OOPSLA Companion.
[30] N. D. Jones, C. K. Gomard, and P. Sestoft. Partial evaluation
and automatic program generation. Prentice Hall international
series in computer science. Prentice Hall, 1993. ISBN 978-013-020249-9.
[42] M. Shields, T. Sheard, and S. L. Peyton Jones. Dynamic
Typing as Staged Type Inference. In Proceedings of the
25th ACM SIGPLAN-SIGACT Symposium on Principles of
Programming Languages, San Diego, CA, USA, January 19-21,
1998 (POPL ’98), pages 289–302, 1998. doi: http://doi.acm.
org/10.1145/268946.268970.
[31] P. Lee and M. Leone. Optimizing ML with run-time code generation. In Proceedings of the ACM SIGPLAN Conference on
Programming Language Design and Implementation, Philadephia, PA, USA, May 21-24, 1996 (PLDI ’96), pages 540–553,
1996. doi: http://doi.acm.org/10.1145/989393.989448.
[43] S. Thibault, C. Consel, J. L. Lawall, R. Marlet, and G. Muller.
Static and Dynamic Program Compilation by Interpreter
Specialization. Higher-Order and Symbolic Computation,
13(3):161–178, 2000. ISSN 1388-3690. doi: 10.1023/A:
1010078412711.
[32] X. Leroy. Unboxed Objects and Polymorphic Typing. In
Proceedings of the 19th ACM SIGPLAN-SIGACT Symposium
on Principles of Programming Languages, Albuquerque, NM,
USA, January 19-22, 1992 (POPL ’92), pages 177–188, 1992.
doi: http://doi.acm.org/10.1145/143165.143205.
[44] D. Wheeler. sloccount, May 2010. URL http://www.dwheeler.com/sloccount/.
[33] X. Leroy. Java bytecode verification: Algorithms and formalizations. Journal of Automated Reasoning, 30(3-4):235–269, 2003. ISSN 0168-7433. doi: http://dx.doi.org/10.1023/A:1025055424017.
[45] T. Würthinger, A. Wöß, L. Stadler, G. Duboscq, D. Simon, and
C. Wimmer. Self-optimizing AST interpreters. In Proceedings
of the 8th Symposium on Dynamic Languages, Tucson, AZ,
USA, October 22, 2012 (DLS ’12), pages 73–82, 2012. doi:
http://doi.acm.org/10.1145/2384577.2384587.
[34] T. Lindholm and F. Yellin. The Java Virtual Machine Specification. Addison-Wesley, Boston, MA, USA, first edition,
1996.
[35] S. L. Peyton Jones and J. Launchbury. Unboxed Values as First Class Citizens in a Non-Strict Functional Language. In Proceedings of the 5th ACM Conference on Functional Programming Languages and Computer Architecture, Cambridge, UK, September 1991, pages 636–666, 1991. ISBN 3-540-54396-1. doi: http://dl.acm.org/citation.cfm?id=645420.652528.
[46] A. Yermolovich, C. Wimmer, and M. Franz. Optimization of dynamic languages using hierarchical layering of virtual machines. In Proceedings of the 5th Symposium on Dynamic Languages, Orlando, FL, USA, October 26, 2009 (DLS ’09), pages 79–88, New York, NY, USA, 2009. ACM. ISBN 978-1-60558-769-1. doi: http://doi.acm.org/10.1145/1640134.1640147.
| 6 |
The quadratic regulator problem and the Riccati
equation for a process governed by a linear Volterra
integrodifferential equation∗
arXiv:1610.07127v1 [math.OC] 23 Oct 2016
L. Pandolfi†
March 31, 2018
Abstract: In this paper we study the quadratic regulator problem for a process
governed by a Volterra integral equation in IRn . Our main goal is the proof that
it is possible to associate a Riccati differential equation to this quadratic control
problem, which leads to the feedback form of the optimal control. This is in contrast
with previous papers on the subject, which confine themselves to study the Fredholm
integral equation which is solved by the optimal control.
Key words: Quadratic regulator problem, Volterra integrodifferential equations, Riccati equation
AMS classification: 93B22, 45D05, 49N05, 49N35
1
Introduction
The quadratic regulator problem for control processes regulated by linear differential
equations both in finite and infinite dimensional spaces has been at the center
of control theory at least during the last eighty years, after the proof that the
synthesis of dissipative systems amounts to the study of a (singular) quadratic
∗ This paper fits into the research program of the GNAMPA-INDAM and has been written in
the framework of the “Groupement de Recherche en Contrôle des EDP entre la France et l’Italie
(CONEDP-CNRS)”.
† Dipartimento di Scienze Matematiche “Giuseppe Luigi Lagrange”, Politecnico di Torino, Corso
Duca degli Abruzzi 24, 10129 Torino, Italy ([email protected])
control problem (see [2]). In this period, the theory reached a high level of maturity
and the monographs [1, 8] contain the crucial ideas used in the study of the quadratic
regulator problems for lumped and distributed systems (see [3, 4, 11, 12, 13] for the
singular quadratic regulator problem for distributed systems).
In recent times, the study of controllability of systems described by Volterra
integrodifferential equations (in Hilbert spaces) has been stimulated by several applications (see [14]), while the theory of the quadratic regulator problem for these
systems is still at a basic level. In essence, we can cite only the paper [15] and
some applications of the results in this paper, see for example [7]. In these papers, the authors study a standard regulator problem for a system governed by a
Volterra integral equation (in a Hilbert space and with bounded operators; the
paper [7] and some other applications of the results in [15] study a stochastic
system), and the synthesis of the optimal control is given by relying on the usual
variational approach and the Fredholm integral equation for the optimal control. The
authors of these papers do not develop a Riccati differential equation, and this is our
goal here. In order to avoid the technicalities inevitably introduced by the presence
of unbounded operators which are introduced by the action of boundary controls,
we confine ourselves to the study of Volterra integral equations in IRn.
The control problem we consider is described by
    x′(t) = ∫_0^t N(t − s)x(s) ds + Bu(t),   x(0) = x0,   (1)
where x ∈ IRn, u ∈ IRm, B is a constant n × m matrix and N(t) is a continuous
n × n matrix (the extension to B = B(t) and N = N(t, s) is simple). Our goal is the
study of the minimization of the standard quadratic cost
    ∫_0^T [x∗(t)Qx(t) + |u(t)|²] dt + x∗(T)Q0x(T),   (2)
where Q = Q∗ ≥ 0, Q0 = Q0∗ ≥ 0.
Existence of a unique optimal control in L²(0, T; IRm) for every fixed x0 ∈ IRn
is obvious.
The plan of the paper is as follows: in order to derive a Riccati differential
equation, we need a suitable “state space” in which our system evolves. In fact, a
Volterra integral equation is a semigroup system in a suitable infinite dimensional
space (see [10, Ch. 6]) and we could rely on this representation of the Volterra
equation to derive a theory of the Riccati equation in a standard way, but the shortcoming is that the “state space” is IRn × L²(0, +∞; IRn) and the Riccati differential
equation so obtained would have to be solved in a space with infinite memory, even if the
process is considered on a finite time interval [0, T]. We want a “Riccati differential
equation” in a space which has a “short memory”, say of duration at most T, as
required by the optimization problem. So, we need the introduction of a different
“state space approach” to Eq. (1). This is done in Sect. 2 where, using dynamic
programming, we prove that the minimum of the cost is a quadratic form which
satisfies a (suitable) version of the Linear Operator Inequality (LOI).
Differentiability properties of the cost are studied in Section 3 (using a variational
approach to the optimal control related to the arguments in [15]). The regularity
properties we obtain finally allow us to write explicitly a system of partial differential equations (with a quadratic nonlinearity) on [0, T], which is the version of the
Riccati differential equation for our system.
We believe that the introduction of the state space in Sect. 2 is a novelty of this
paper.
2
The state of the Volterra integral equation, and
the (LOI)
According to the general definition in [9], the state at time τ is the information
at time τ needed to uniquely solve the equation for t > τ (assuming the control is
known for t > τ).
It is clear that if τ = 0 then the sole vector x0 is sufficient to solve equation (1)
in the future, and the state space at τ = 0 is IRn. Things are different if we solve
the equation up to time τ and we want to solve it in the future. In this case, Eq. (1)
for t > τ takes the form
    x′(t) = ∫_τ^t N(t − s)x(s) ds + Bu(t) + ∫_0^τ N(t − s)x(s) ds.   (3)
In order to solve this equation for t > τ we must know the pair¹ Xτ = (x(τ), xτ(·)),
where xτ(s) = x(s), s ∈ (0, τ).
¹ Remark on the notation: xτ = xτ(s) is a function on (0, τ) while Xτ (upper case letter) is the pair (x(τ), xτ).
Note that in order to uniquely solve (3), xτ(·) need not be a segment of a previously computed trajectory. It can be an “arbitrary” function. This observation
suggests the definition of the following state space at time τ:
    M²τ = IRn × L²(0, τ; IRn)
(to be compared with the state space of differential equations with a fixed delay h,
which is IRn × L²(−h, 0; IRn)).
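As an aside, the role of the pair (x(τ), x restricted to (0, τ)) as a state can be illustrated numerically. The following sketch is not part of the paper; it assumes the scalar case n = m = 1, an explicit Euler discretization, and an arbitrarily chosen kernel N and control u, and checks that restarting the integration of (1) at τ from this pair reproduces the original trajectory:

```python
import numpy as np

# Minimal sketch (scalar case, explicit Euler, made-up N and u): the pair
# X_tau = (x(tau), x on (0, tau)) suffices to continue Eq. (1) past tau, as in Eq. (3).
h, steps, tau_idx = 0.01, 300, 150          # tau = 1.5
N = lambda t: np.exp(-2.0 * t)
B, u, x0 = 1.0, (lambda t: np.cos(t)), 1.0

def step(x_hist, k):
    """One Euler step of x' = int_0^t N(t-s) x(s) ds + B u(t) at t = k*h."""
    t = k * h
    s = np.arange(k) * h
    memory = h * np.sum(N(t - s) * x_hist[:k]) if k else 0.0
    return x_hist[k] + h * (memory + B * u(t))

x = np.zeros(steps + 1); x[0] = x0          # reference run on [0, T]
for k in range(steps):
    x[k + 1] = step(x, k)

y = np.zeros(steps + 1)                     # restart at tau from the state X_tau
y[:tau_idx + 1] = x[:tau_idx + 1]
for k in range(tau_idx, steps):
    y[k + 1] = step(y, k)

print(np.max(np.abs(x - y)))                # 0.0: same trajectory
```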
Eq. (3) defines, for every fixed u and τ1 > τ, a solution map from M²τ to M²τ1
which is affine linear and continuous. An explicit expression of this map can be
obtained easily. Let us fix an initial time τ ≥ 0. Let t ≥ τ and let Z(t, τ) be the
n × n matrix solution of
    (d/dt) Z(t, τ) = ∫_τ^t Z(ξ, τ)N(t − ξ) dξ,   Z(τ, τ) = I.   (4)
Then,
    x(t) = Z(t, τ)x̂ + ∫_0^τ Y(t, s; τ)x̃(s) ds + ∫_τ^t Z(t − r + τ, τ)Bu(r) dr,   (5)
where
    Y(t, s; τ) = ∫_τ^t Z(t − ξ + τ, τ)N(ξ − s) dξ.
This way, for every τ1 > τ we define two linear continuous transformations:
E(τ1; τ) from M²τ to M²τ1 (when u = 0) and Λ(τ1; τ) from L²(τ, τ1; IRm) to M²τ1
(when Xτ = 0), as follows:
    E(τ1; τ)(x̂, x̃(·)) = (x(τ1), y),   where y(t) = x(t) given by (5) if τ < t < τ1,   y(t) = x̃(t) if t ∈ (0, τ).
The operator Λ(τ1; τ) is defined by the same formula as E(τ1; τ), but with Xτ = 0
and u ≠ 0.
The evolution of the system is described by the operator
    E(t1; τ)Xτ + Λ(t1; τ)u.   (6)
The evolutionary properties of this operator follow from the unicity of solutions
of the Volterra integral equation. Let us consider Eq. (3) on [τ, T] with initial
condition (x̂, x̃(·)), whose solution is given by (5). Let τ1 ∈ (τ, T) and let us consider
Eq. (3) on [τ1, T] but with initial condition (x(τ1), xτ1). Eq. (3) on [τ1, T] with this
initial condition takes the form
    x′(t) = ∫_τ1^t N(t − s)x(s) ds + Bu(t) + ∫_0^τ1 N(t − s)xτ1(s) ds,
and so, on [τ1, T] we have
    x(t) = Z(t, τ1)x(τ1−) + ∫_0^τ1 Y(t, s; τ1)xτ1(s) ds + ∫_τ1^t Z(t − s + τ1, τ1)Bu(s) ds,   x(τ1+) = x(τ1−).
Unicity of the solutions of the Volterra integral equation shows that, for t ∈ (τ1, T],
the following equality holds:
    E(t, τ)(x̂, x̃) + Λ(t; τ)u = E(t, τ1)[E(τ1, τ)(x̂, x̃) + Λ(τ1; τ)u] + Λ(t; τ1)u.
Remark 1 The solution Z(t, τ) of Eq. (4) solves the following Volterra integral
equation on [τ, T]:
    Z(t) = 1 + ∫_τ^t Z(ξ)M(t − ξ) dξ,   M(t) = ∫_0^t N(s) ds.
The usual Picard iteration gives
    Z(t, τ) = 1 + ∫_τ^t M(t − ξ) dξ + ∫_τ^t [ ∫_τ^ξ M(ξ − ξ1) dξ1 ] M(t − ξ) dξ + · · ·
            = 1 + ∫_τ^t M(t − ξ) dξ + ∫_τ^t ∫_0^{t−s} M(t − s − r)M(r) dr ds + · · · .
The relevant property of these integrals is that, once the order of integration is exchanged, we have
    Z(t, τ) = 1 + ∫_τ^t H(t − s) ds,
where H(t) does not depend on τ and is differentiable. It follows that the function
(τ, t) ↦ Z(t, τ) is continuously differentiable on 0 < τ < t < T and the derivative
has a continuous extension to 0 ≤ τ ≤ t ≤ T.
Now we begin our study of the quadratic regulator problem and of the Riccati
equation.
One of the possible ways to derive an expression of the optimal control and
possibly a Riccati differential equation for the quadratic regulator problem is via
dynamic programming. We follow this way. For every fixed τ < T we introduce
    Jτ(Xτ, u) = ∫_τ^T [x∗(t)Qx(t) + |u(t)|²] dt + x∗(T)Q0x(T),
where x(t) is the solution of (3) (given by (5)), and we define
    W(τ; Xτ) = min_{u ∈ L²(τ,T;IRm)} Jτ(Xτ, u).   (7)
Existence of the minimum is obvious and we denote by u⁺(t) = u⁺(t; τ, Xτ) the optimal
control. The corresponding solution is denoted x⁺(t) = x⁺(t; τ, Xτ), while we put
    X⁺_t = (x⁺(t), x⁺_t(·)).
Let us fix any τ1 ∈ (τ, T) and let u(t) = u1(t) if t ∈ (τ, τ1), u(t) = u2(t) if
t ∈ (τ1, T), while
    X¹_t = E(t, τ)Xτ + Λ(t, τ)u1,   t ∈ [τ, τ1],
    X²_t = E(t, τ1)X¹_τ1 + Λ(t, τ1)u2,   t ∈ [τ1, T].
We note that X(t; τ, Xτ) given by (6) on [τ, T] is equal to X¹_t on [τ, τ1] and to X²_t
on [τ1, T].
Let x^i be the IRn component of X^i. Then, for every u we have (we use angle
brackets to denote the inner product instead of the more cumbersome notation
x¹(t)∗Qx¹(t))
    W(τ, Xτ) ≤ ∫_τ^τ1 [⟨Qx¹(t), x¹(t)⟩ + |u1(t)|²] dt + Jτ1(X¹_τ1, u2).   (8)
This inequality holds for every u1 and u2, and equality holds when u1 and u2 are
restrictions of the optimal control u⁺.
We keep u1 fixed and we compute the minimum of the right hand side with respect
to u2. We get the Linear Operator Inequality (LOI):
    W(τ, Xτ) ≤ ∫_τ^τ1 [⟨Qx¹(t), x¹(t)⟩ + |u1(t)|²] dt + W(τ1, X¹_τ1).   (9)
This inequality holds for every control u1 ∈ L²(τ, τ1; IRm). Let in particular u1 be the
restriction to (τ, τ1) of u⁺(·) = u⁺(·; τ, Xτ). Inequality (8) shows that the minimum
of Jτ1(X¹_τ1, u2) cannot be strictly less than Jτ1(X¹_τ1, u⁺), i.e. the optimal control
of the cost Jτ1(X¹_τ1, u2) is the restriction to (τ1, T) of u⁺(t), the optimal control of
Jτ(Xτ, u).
Equality holds in (9) if u1 = u⁺.
In conclusion, we divide by τ1 − τ (which is positive) and we find the following
inequality, which holds with equality if u = u⁺:
    (1/(τ1 − τ)) [W(τ1; X¹_τ1) − W(τ; Xτ)] ≥ −(1/(τ1 − τ)) ∫_τ^τ1 [⟨Qx¹(t), x¹(t)⟩ + |u(t)|²] dt.
So, the following inequality holds when τ is a Lebesgue point of u(t) (every τ if u
is continuous):
    lim inf_{τ1→τ+} (1/(τ1 − τ)) [W(τ1; X¹_τ1) − W(τ; Xτ)] ≥ −[⟨Qx(τ), x(τ)⟩ + |u(τ)|²].   (10)
Equality holds if u = u⁺ and τ is a Lebesgue point of u⁺, and in this case we can
even replace lim inf with lim, i.e. t ↦ W(t; X⁺_t) is differentiable at τ if τ is a Lebesgue
point of u⁺.
The previous argument can be repeated for every τ, so that the previous inequalities/equalities hold a.e. on [0, T] and we might even replace τ with the generic
notation t.
Remark 2 If it happens that ker N (t) = S, a subspace of IRn , we might also
consider as the second component of the “state” Xτ the projection of x̃ on (any
fixed) complement of S, similar to the theory developed in [5, 6]. We don’t pursue
this approach here.
3  The regularity properties of the value function, the synthesis of the optimal control and the Riccati equation
We prove that W is a continuous quadratic form with smooth coefficients and
we prove that u⁺(t) is continuous (so that every time t is a Lebesgue point of
u⁺(t)). We arrive at this result via the variational characterization of the optimal
pair (u⁺, x⁺) (x⁺ is the IRn-component of X⁺) in the style of [15]. The standard
perturbation approach gives a representation of the optimal control (and a definition
of the adjoint state p(t)):
    u⁺(t) = −B∗ [ ∫_t^T Z∗(s − t + τ, τ)Qx⁺(s) ds + Z∗(T − t + τ, τ)Q0x⁺(T) ] = −B∗p(t),   (11)
where p, the function in the bracket, solves the adjoint equation
    p′(t) = −Qx⁺(t) − ∫_t^T N∗(s − t)p(s) ds,   p(T) = Q0x⁺(T).   (12)
Note that p depends on τ and that Eq. (12) has to be solved (backward) on the
interval [τ, T ].
The simplest way to derive the differential equation (12) is to note that the
function q(t) = p(T − t) is given by
    q(t) = ∫_{T−t}^T Z∗(s − T + τ + t, τ)Qx⁺(s) ds + Z∗(t + τ, τ)Q0x⁺(T)
         = ∫_0^t Z∗(t − r + τ, τ)Qx⁺(T − r) dr + Z∗(t + τ, τ)Q0x⁺(T).
Comparison with (5) shows that q(t) solves
    q′(t) = ∫_0^t N∗(t − s)q(s) ds + Qx⁺(T − t),   q(0) = Q0x⁺(T),
from which the equation for p(t) is easily obtained.
We recapitulate: the system of equations which characterizes (x⁺, u⁺) when the initial time
is τ and Xτ = (x̂, x̃(·)) is the following, on the interval [τ, T]:
    x′(t) = ∫_τ^t N(t − s)x(s) ds − BB∗p(t) + ∫_0^τ N(t − s)x̃(s) ds,   x(τ) = x̂,
    p′(t) = −Qx(t) − ∫_t^T N∗(s − t)p(s) ds,   p(T) = Q0x(T),
    u⁺(t) = −B∗p(t).   (13)
We replace u⁺(t) = u⁺(t; τ, Xτ) in (5). The solution is x⁺(t). Then we replace
the resulting expression in (11). We get the Fredholm integral equation for u⁺(t):
    u⁺(t) + B∗Z∗(T − t + τ, τ)Q0 ∫_τ^T Z(T − r + τ, τ)Bu⁺(r) dr
          + B∗ ∫_t^T Z∗(s − t + τ, τ)Q [ ∫_τ^s Z(s − r + τ, τ)Bu⁺(r) dr ] ds
    = −B∗ [ Z∗(T − t + τ, τ)Q0 F(T, τ) + ∫_t^T Z∗(s − t + τ, τ)Q F(s, τ) ds ],
where
    F(t, τ) = Z(t, τ)x̂ + ∫_0^τ Y(t, s; τ)x̃(s) ds.
This Fredholm integral equation has to be solved on [τ, T ].
By solving the Fredholm integral equation we find an expression for u⁺(t) of
the following form:
    u⁺(t) = u⁺(t; τ, Xτ) = Φ1(t, τ)x̂ + ∫_0^τ Φ2(t, s; τ)x̃(s) ds,   t ≥ τ,   (14)
and so also
    x⁺(t) = x⁺(t; τ, Xτ) = Z1(t, τ)x̂ + ∫_0^τ Z2(t, r; τ)x̃(r) dr,   t ≥ τ.   (15)
The explicit form of the matrices Φ1(t, τ), Φ2(t, s; τ), Z1(t, τ), Z2(t, r; τ) (easily derived using the resolvent operator of the Fredholm integral equation) is not needed.
The important fact is that these matrices have continuous partial derivatives with respect
to their arguments t, s and τ. In particular, u⁺(t) = u⁺(t; τ, Xτ) is a continuous
function of t for t ≥ τ. The derivatives have continuous extensions to s = τ and to
t = τ. Differentiability with respect to τ follows from Remark 1.
We replace (14) and (15) in (7) and we get
    W(τ; Xτ) = ∫_τ^T | Q^{1/2}Z1(s, τ)x̂ + Q^{1/2} ∫_0^τ Z2(s, r; τ)x̃(r) dr |² ds
              + ∫_τ^T | Φ1(s; τ)x̂ + ∫_0^τ Φ2(s, r; τ)x̃(r) dr |² ds.   (16)
This equality shows that Xτ ↦ W(τ, Xτ) is a continuous quadratic form of Xτ ∈ M²τ.
We use dynamic programming again, in particular the fact that u⁺(·; τ1, X⁺_τ1) is
the restriction to [τ1, T] of u⁺(·; τ, Xτ). Hence, for every τ1 ≥ τ we have
    W(τ1; X⁺_τ1) = ∫_τ1^T | Q^{1/2}Z1(s, τ1)x⁺(τ1) + Q^{1/2} ∫_0^τ1 Z2(s, r; τ1)x⁺(r) dr |² ds
                 + ∫_τ1^T | Φ1(s; τ1)x⁺(τ1) + ∫_0^τ1 Φ2(s, r; τ1)x⁺(r) dr |² ds.   (17)
We simplify the notations: from now on we drop the ⁺ and we replace τ1 with
t, but we must recall that we are computing for t ≥ τ and, when we use equality
in (9), on the optimal evolution.
By expanding the squares we see that W(τ1; Xτ1) has the following general form:
    W(t; Xt) = x∗(t)P0(t)x(t) + x∗(t) ∫_0^t P1(t, s)x(s) ds
             + [ ∫_0^t P1(t, s)x(s) ds ]∗ x(t) + ∫_0^t ∫_0^t x∗(r)K(t, ξ, r)x(ξ) dξ dr.   (18)
For example,
    P0(t) = ∫_t^T [Z1∗(s, t)QZ1(s, t) + Φ1∗(s, t)Φ1(s, t)] ds.
Note that P0 (t) is a selfadjoint differentiable matrix.
Now we consider the matrix K(t, ξ, r). We consider the contribution of the first
line in (17) (the contribution of the second line is similar). Exchanging the order
of integration and the names of the variables of integration, we see that
    ∫_0^t ∫_0^t x∗(r)K(t, ξ, r)x(ξ) dξ dr = ∫_0^t ∫_0^t x∗(r) [ ∫_t^T Z2∗(s, r, t)QZ2(s, ξ, t) ds ] x(ξ) dξ dr
        = ∫_0^t ∫_0^t x∗(ξ) [ ∫_t^T Z2∗(s, ξ, t)QZ2(s, r, t) ds ] x(r) dξ dr = ∫_0^t ∫_0^t x∗(ξ)K∗(t, r, ξ)x(r) dξ dr,
so that we have
    K(t, ξ, r) = K∗(t, r, ξ),
and this matrix function is differentiable with respect to its arguments t, r and ξ.
Analogously we see the differentiability of P1(t, s).
We wish to obtain differential equations for the matrix functions P0(t), P1(t, s),
K(t, s, r). In order to achieve this goal, we compute the right derivative of W(t; Xt)
(along any continuous control) for t > τ and we use inequality (10). We use explicitly that equality holds in (10) when the derivative is computed along an optimal
evolution.
3.1  The Riccati equation
In order to derive a set of differential equations for the matrices P0(t), P1(t, s),
K(t, ξ, r) we proceed as follows: we fix (any) τ ∈ [0, T] and the initial condition
Xτ = (x̂, x̃(·)). We consider (18) with any continuous control u(t) on [τ, T] (the corresponding solution of the Volterra equation is x(t)). We consider the quadratic form
W with the control u(t) and the corresponding solution Xt given in (18). In this
form we separate the contribution of the functions on (0, τ) and the contribution on
[τ, t]. For example, x∗(t)P0(t)x(t) remains unchanged while x∗(t) ∫_0^t P1(t, ξ)x(ξ) dξ
is written as
    x∗(t) ∫_0^t P1(t, ξ)x(ξ) dξ = x∗(t) ∫_0^τ P1(t, s)x̃(s) ds + x∗(t) ∫_τ^t P1(t, s)x(s) ds.
The other addenda are treated analogously.
We obtain a function of t which is continuously differentiable. Its derivative at
t = τ is the left hand side of (10) and so it satisfies the inequality (10), with equality
if it happens that we compute with u = u⁺. So, the function of u ∈ IRm
    u ↦ (d/dt)W(τ; Xτ) + u∗(τ)u(τ)
reaches a minimum at u = u⁺(τ). Note that τ ∈ [0, T] is arbitrary, and so by computing
this minimum we get an expression for u⁺(τ), for every τ ∈ [0, T].
It turns out that (d/dt)W(τ; Xτ) + u∗(τ)u(τ) is the sum of several terms. Some
of them do not depend on u and the minimization concerns solely the terms which
depend on u. We get (we recall that P0(τ) is selfadjoint)
    u⁺(τ) = arg min_u { u∗B∗P0(τ)x̂ + u∗B∗ ∫_0^τ P1(τ, s)x̃(s) ds
                        + x̂∗P0(τ)Bu + [ ∫_0^τ x̃∗(s)P1∗(τ, s) ds ] Bu + u∗u }.   (19)
The minimization gives
    u⁺(τ) = −B∗ [ P0(τ)x̂ + ∫_0^τ P1(τ, s)x̃(s) ds ].   (20)
If the system is solved up to time t along an optimal evolution (so that x⁺(s) is
equal to x̃(s) when s < τ, and for larger times it is the solution which corresponds to the optimal
control), we have
    u⁺(t) = −B∗ [ P0(t)x⁺(t) + ∫_0^t P1(t, s)x⁺(s) ds ],
and this is the feedback form of the optimal control (compare [15]).
We replace (20) in the brace in (19) and we see that the minimum is
    − x̂∗P0(τ)BB∗P0(τ)x̂ − x̂∗P0(τ)BB∗ ∫_0^τ P1(τ, ξ)x̃(ξ) dξ
    − [ ∫_0^τ x̃∗(r)P1∗(τ, r) dr ] BB∗P0(τ)x̂ − ∫_0^τ ∫_0^τ x̃∗(r)P1∗(τ, r)BB∗P1(τ, ξ)x̃(ξ) dξ dr.   (21)
Now we compute the derivative of the function τ ↦ W(τ; Xτ) along an optimal
evolution and we consider its limit for t → τ+. We insert this quantity in (10), which
is an equality since we are computing the limit along an optimal evolution. We take
into account that the terms which contain u sum up to the expression (21) and we
get the following equality. In this equality, a superimposed dot denotes the derivative
with respect to the variable τ:
    Ṗ0(τ) = (d/dτ)P0(τ),   Ṗ1(τ, ξ) = (∂/∂τ)P1(τ, ξ),   K̇(τ, ξ, r) = (∂/∂τ)K(τ, ξ, r).
The equality is:
    − x̂∗P0(τ)BB∗P0(τ)x̂ − x̂∗P0(τ)BB∗ ∫_0^τ P1(τ, ξ)x̃(ξ) dξ
    − [ ∫_0^τ x̃∗(r)P1∗(τ, r) dr ] BB∗P0(τ)x̂ − ∫_0^τ ∫_0^τ x̃∗(r)P1∗(τ, r)BB∗P1(τ, ξ)x̃(ξ) dξ dr
    + [ ∫_0^τ x̃∗(r)N∗(τ − r) dr ] P0(τ)x̂ + x̂∗Ṗ0(τ)x̂ + x̂∗P0(τ) ∫_0^τ N(τ − ξ)x̃(ξ) dξ + x̂∗P1(τ, τ)x̂
    + x̂∗P1∗(τ, τ)x̂ + [ ∫_0^τ x̃∗(r)N∗(τ − r) dr ] [ ∫_0^τ P1(τ, s)x̃(s) ds ] + x̂∗ ∫_0^τ Ṗ1(τ, ξ)x̃(ξ) dξ
    + [ ∫_0^τ x̃∗(r)Ṗ1∗(τ, r) dr ] x̂ + [ ∫_0^τ x̃∗(r)P1∗(τ, r) dr ] [ ∫_0^τ N(τ − ξ)x̃(ξ) dξ ]
    + [ ∫_0^τ x̃∗(r)K(τ, τ, r) dr ] x̂ + x̂∗ ∫_0^τ K(τ, ξ, τ)x̃(ξ) dξ
    + ∫_0^τ x̃∗(r) [ ∫_0^τ K̇(τ, ξ, r)x̃(ξ) dξ ] dr + x̂∗Qx̂ = 0.
The vector x̂ and the function x̃(·) are arbitrary. So, we first impose x̃(·) = 0
and x̂ arbitrary, then the converse, and finally both nonzero and arbitrary. We find that
the three matrix functions P0(τ), P1(τ, r), K(τ, ξ, r) solve the following system of
differential equations in the variable τ. The variables r and ξ belong to
[0, τ] for every τ ∈ [0, T]:
    P0′(τ) − P0(τ)BB∗P0(τ) + Q + P1(τ, τ) + P1∗(τ, τ) = 0,
    (∂/∂τ)P1(τ, ξ) − P0(τ)BB∗P1(τ, ξ) + P0(τ)N(τ − ξ) + K(τ, ξ, τ) = 0,
    (∂/∂τ)K(τ, ξ, r) − P1∗(τ, r)BB∗P1(τ, ξ) + P1∗(τ, r)N(τ − ξ) + N∗(τ − r)P1(τ, ξ) = 0,
    P0(T) = Q0,   P1(T, ξ) = 0,   K(T, ξ, r) = 0.   (22)
The final conditions are obtained by noting that when τ = T, i.e. with XT =
(x̂, x̃T(·)) arbitrary in M²T = IRn × L²(0, T; IRn), the expression W(T, XT) in (18)
is equal to JT(XT; u) = x̂∗Q0x̂ for every XT.
This is the Riccati differential equation of our optimization problem.
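For illustration only, system (22) can be integrated backward in time on a grid. The sketch below is not part of the paper: it assumes small placeholder data (dimensions, kernel N, matrices B, Q, Q0) and uses a crude explicit Euler step, storing P1 and K on the triangle ξ, r ≤ τ.

```python
import numpy as np

# Illustrative backward Euler integration of system (22); all data are
# placeholder assumptions, not taken from the paper.
n, m, T, steps = 2, 1, 1.0, 50
h = T / steps
taus = np.linspace(0.0, T, steps + 1)

def Nmat(t):                       # memory kernel N(t), just an example
    return np.exp(-t) * np.eye(n)

B, Q, Q0 = np.ones((n, m)), np.eye(n), np.eye(n)
BBt = B @ B.T

P0 = np.zeros((steps + 1, n, n))
P1 = np.zeros((steps + 1, steps + 1, n, n))            # P1[i, j] ~ P1(tau_i, xi_j)
K  = np.zeros((steps + 1, steps + 1, steps + 1, n, n)) # K[i, j, k] ~ K(tau_i, xi_j, r_k)
P0[steps] = Q0                     # final conditions of (22)

for i in range(steps, 0, -1):      # step from tau_i down to tau_{i-1}
    tau = taus[i]
    # dP0/dtau = P0 BB* P0 - Q - P1(tau,tau) - P1*(tau,tau)
    P0[i - 1] = P0[i] - h * (P0[i] @ BBt @ P0[i] - Q - P1[i, i] - P1[i, i].T)
    for j in range(i + 1):         # xi_j <= tau
        # dP1/dtau = P0 BB* P1 - P0 N(tau-xi) - K(tau, xi, tau)
        dP1 = P0[i] @ BBt @ P1[i, j] - P0[i] @ Nmat(tau - taus[j]) - K[i, j, i]
        P1[i - 1, j] = P1[i, j] - h * dP1
        for k in range(i + 1):     # r_k <= tau
            # dK/dtau = P1*(tau,r) BB* P1(tau,xi) - P1*(tau,r) N(tau-xi) - N*(tau-r) P1(tau,xi)
            dK = (P1[i, k].T @ BBt @ P1[i, j]
                  - P1[i, k].T @ Nmat(tau - taus[j])
                  - Nmat(tau - taus[k]).T @ P1[i, j])
            K[i - 1, j, k] = K[i, j, k] - h * dK

# The feedback law then reads u(t) = -B^T [ P0(t) x(t) + int_0^t P1(t, s) x(s) ds ].
print(P0[0])
```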
Remark 3 We note the following facts:
• We take into account the fact that P0 is selfadjoint and K∗(τ, ξ, τ) = K(τ, τ, ξ). We compute the adjoint of the second line in (22) and we find:
    (∂/∂τ)P1∗(τ, r) − P1∗(τ, r)BB∗P0(τ) + N∗(τ − r)P0(τ) + K(τ, τ, r) = 0.
• The form of the Riccati differential equations we derived for the Volterra
integral equation (1) has to be compared with the Riccati differential equation
“in decoupled form” which was once fashionable in the study of the quadratic
regulator problem for systems with finite delays, see [16].
References
[1] Bittanti, S., Laub, A.J., Willems, J.C., Eds., The Riccati equation, Springer-Verlag, Berlin, 1991.
[2] Brune, O., Synthesis of a finite two-terminal network whose driving-point
impedance is a prescribed function of frequency, Journal of Mathematics and
Physics, 10 191-236, 1931.
[3] Bucci, F. Pandolfi, L., The value function of the singular quadratic regulator
problem with distributed control action. SIAM J. Control Optim. 36 115-136
(1998).
[4] Bucci, F. Pandolfi, L., The regulator problem with indefinite quadratic cost for
boundary control systems: the finite horizon case. Systems Control Lett. 39
79-86 (2000).
[5] Delfour, M.C., Manitius, A., The structural operator F and its role in the
theory of retarded systems. J. Math. Analysis Appl. Part I: 73 466-490 (1980);
Part II 74 359-381 (1980).
[6] Fabrizio M., Giorgi C., Pata V., A New Approach to Equations with Memory,
Arch. Rational Mech. Anal. 198 189-232 (2010).
[7] Huang, J., Li, X., Wang, T., Mean-Field Linear-Quadratic-Gaussian (LQG)
Games for Stochastic Integral Systems, IEEE Transactions on Automatic Control 61 2670-2675 (2016).
[8] Lasiecka, I., Triggiani, R., Control theory for partial differential equations: continuous and approximation theories. (Vol. 1 Abstract parabolic systems and
Vol. 2 Abstract hyperbolic-like systems over a finite time horizon.) Cambridge
University Press, Cambridge, 2000.
[9] Kalman, R. E., Falb, P. L., Arbib, M. A., Topics in mathematical system
theory. McGraw-Hill Book Co., New York-Toronto, 1969
[10] Engel, K.-J., Nagel, R. One-parameter semigroups for linear evolution equations. Springer-Verlag, New York, 2000.
[11] Pandolfi, L. Dissipativity and the Lur’e problem for parabolic boundary control
systems. SIAM J. Control Optim. 36 2061-2081 (1998)
[12] Pandolfi, L. The Kalman-Yakubovich-Popov theorem for stabilizable hyperbolic boundary control systems. Integral Equations Operator Theory 34 478-493 (1999)
[13] Pandolfi, L. The Kalman-Popov-Yakubovich theorem: an overview and new
results for hyperbolic control systems. Nonlinear Anal. 30 735-745 (1997).
[14] Pandolfi, L., Distributed systems with persistent memory. Control and moment
problems. Springer Briefs in Electrical and Computer Engineering. Control,
Automation and Robotics. Springer, Cham, 2014.
[15] Pritchard, A.J., You Y., Causal feedback Optimal control for Volterra integral
equations. SIAM J. Control Optim. 34 1874-1890, 1996.
[16] Ross, D. W., Flügge-Lotz, I., An optimal control problem for systems with
differential-difference equation dynamics. SIAM J. Control 7 609-623, 1969.
| 3 |
Ideal Theory in Rings
(Idealtheorie in Ringbereichen)
Emmy Noether
Translated by Daniel Berlyne
January 14, 2014
arXiv:1401.2577v1 [math.RA] 11 Jan 2014
Contents.
§1.
§2.
§3.
§4.
§5.
§6.
§7.
§8.
§9.
§10.
§11.
§12.
Introduction.
Ring, ideal, finiteness condition.
Representation of an ideal as the least common multiple of finitely
many irreducible ideals.
Equality of the number of components in two different
decompositions into irreducible ideals.
Primary ideals. Uniqueness of the prime ideals belonging to two
different decompositions into irreducible ideals.
Representation of an ideal as the least common multiple of maximal
primary ideals. Uniqueness of the associated prime ideals.
Unique representation of an ideal as the least common multiple of
relatively prime irreducible ideals.
Uniqueness of the isolated ideals.
Unique representation of an ideal as the product of coprime
irreducible ideals.
Development of the study of modules. Equality of the number of
components in decompositions into irreducible modules.
Special case of the polynomial ring.
Examples from number theory and the theory of differential
expressions.
Example from elementary divisor theory.
Translator’s notes.
Acknowledgements.
As this paper was originally written in the early twentieth century, there
are a number of mathematical terms used that do not have an exact modern
equivalent, and as such may be ambiguous in meaning. Such terms are
underlined as they appear in the text, and an explanation is given at the end
of the paper to clarify their meanings.
Introduction.
This paper aims to convert the decomposition theorems for the integers
or the decomposition of ideals in algebraic number fields into theorems for
ideals in arbitrary integral domains (and rings in general). To understand
this correspondence, we consider the decomposition theorems for the integers
in a form somewhat different from the commonly given formulation.
In the equation
    a = p1^ϱ1 p2^ϱ2 · · · pσ^ϱσ = q1 q2 · · · qσ
take the prime powers qi to be components of the decomposition with the
following characteristic properties (a small concrete illustration follows the list):
1. They are pairwise coprime ; and no q can be written as a product of
pairwise coprime numbers, so in this sense irreducibility holds.
2. Each two components qi and qk are relatively prime; that is to say,
if bqi is divisible by qk , then b is divisible by qk . Irreducibility also holds in
this sense.
3. Every q is primary; that is to say, if a product bc is divisible by q, but
b is not divisible by q, then a power1 of c is divisible by q. The representation
furthermore consists of maximal primary components, since the product of
two different q is no longer primary. The q are also irreducible in relation
to the decomposition into maximal primary components.
4. Each q is irreducible in the sense that it cannot be written as the
least common multiple of two proper divisors.
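The four properties above can be checked concretely for a small integer; the following sketch (an editorial illustration, not part of Noether's text) does so for a = 360 with primary components 8, 9 and 5, verifying pairwise coprimality (property 1) and the primary property (property 3) by brute force:

```python
# Illustration only: for a = 360 = 2^3 * 3^2 * 5 the components q are 8, 9, 5.
from math import gcd

a = 360
qs = [8, 9, 5]                       # q_i = p_i^rho_i

# Property 1: pairwise coprime, and their product recovers a.
assert all(gcd(q, r) == 1 for q in qs for r in qs if q != r)
assert qs[0] * qs[1] * qs[2] == a

# Property 3: each q is primary: if q | b*c and q does not divide b,
# then some power of c is divisible by q (checked over small b, c).
def is_primary(q, bound=40):
    for b in range(1, bound):
        for c in range(1, bound):
            if (b * c) % q == 0 and b % q != 0:
                if not any(c**k % q == 0 for k in range(1, 10)):
                    return False
    return True

assert all(is_primary(q) for q in qs)
print("2^3, 3^2 and 5 are pairwise coprime primary components of 360")
```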
The connection between these primary numbers q and the prime numbers p is that for every q there is one and (disregarding sign) only one p that
is a divisor of q and a power of which is divisible by q: the associated prime
number. If p^ϱ is the lowest such power — ϱ being the exponent of q —
then in particular, p^ϱ is equal to q here. The uniqueness theorem can now
be stated as follows:
¹ If this power is always the first, then as is well-known it concerns a prime number.
Given two different decompositions of an integer into irreducible maximal primary components q, each decomposition has the same number of
components, the same associated prime numbers (up to sign) and the same
exponents. Because p^ϱ = q, it also follows that the q themselves are the same
(up to sign).
As is well-known, the uncertainty brought about by the sign is eradicated
if instead of numbers, the ideals derived from them (all numbers divisible
by a) are considered; the formulation then holds exactly for the unique
decomposition of ideals of (finite) algebraic number fields into powers of
prime ideals.
In the following (§1) a general ring is considered, which must only satisfy
the finiteness condition that every ideal of the domain has a finite ideal basis.
Without such a finiteness condition, irreducible and prime ideals need not
exist, as shown by the ring of all algebraic integers, in which there is no
decomposition into prime ideals.
It is shown that — corresponding to the four characteristic properties
of the component q — in general four separate decompositions exist, which
follow successively from each other through subdivision. Thereby the decomposition into coprime irreducible ideals behaves as a factorisation, and the
remaining three decompositions behave as a reduced (§2) representation as
the least common multiple. Also, the connection between a primary ideal —
the irreducible ideals are also primary — and the corresponding prime ideal
is preserved: every primary ideal Q uniquely determines a corresponding
prime ideal P which is a divisor of Q and a power of which is divisible by
Q. If P^ϱ is the lowest such power — ϱ being the exponent of Q — then P^ϱ
does not need to coincide with Q here, however. The uniqueness theorem
can now be expressed as follows:
The decompositions 1 and 2 are unique; given two different decompositions 3 or 4, the number of components and the corresponding prime ideals are the same;² the isolated ideals (§7) occurring in the components are
uniquely determined.
² Moreover, the exponents are presumably also the same, and more generally the corresponding components are isomorphic.
For the proofs of the decomposition theorems, the notable “Theorem of
the Finite Chain” for finite modules, first by Dedekind, is followed using the
finiteness condition, and from this we deduce representation 4 of each ideal
as the least common multiple of finitely many irreducible ideals. By revising
the statement of the reducibility of a component, the fundamental uniqueness theorem for decomposition 4 into irreducible ideals is produced. By
concluding that each of the remaining decompositions are given by finitely
many components, the uniqueness theorems for these emerge as a result of
uniqueness theorem 4.
Finally it is shown (§9) that the representation through finitely many
irreducible components also holds under weaker requirements; the commutativity of the ring is not necessary, and it suffices to consider a module
in relation to the ring instead of an ideal. In this more general case the
equality of the number of components for two different decompositions still
holds, while the notions of prime and primary are restricted to commutativity and the concept of an ideal; in contrast, the notion of coprime for ideals
is retained in non-commutative rings.
The simplest ring for which the four separate decompositions actually
occur is the ring of all polynomials in n variables with arbitrary complex
coefficients. The particular decompositions can be deduced here to be irrational due to the properties of algebraic figures, and the uniqueness theorem for the corresponding prime ideals is equivalent to the Fundamental
Theorem of Elimination Theory regarding the unique decomposability of algebraic figures into irreducible elements. Further examples are given by all
finite integral domains of polynomials (§10). In fact, the simple ring of all
even numbers, or more generally all numbers divisible by a given number, is
also an example for the separate decompositions (§11). An example of ideal
theory in non-commutative rings is provided by elementary divisor theory
(§12), where unique decomposition into irreducible ideals, or classes, holds.
These irreducible classes characterise completely the irreducible parts of elementary divisors, and in rings where the usual elementary divisor theory
breaks down can perhaps be considered as their equivalent.
In the available literature the following are to be noted: the decomposition into maximal primary ideals is given by Lasker for the polynomial
ring with arbitrary complex or integer coefficients, and taken further by
Macaulay at particular points.3 Both concern themselves with elimination
3
E. Lasker, Zur Theorie der Moduln und Ideale. Math. Ann. 60 (1905), p20, Theorems
VII and XIII. — F. S. Macaulay, On the Resolution of a given Modular System into
Primary Systems including some Properties of Hilbert Numbers. Math. Ann. 74 (1913),
4
theory, therefore using the fact that a polynomial can be expressed uniquely
as the product of irreducible polynomials. In fact, the decomposition theorems for ideals are independent of this hypothesis, as ideal theory in algebraic number fields allows one to suppose and as this paper shows. The
primary ideal is also defined by Lasker and Macaulay using concepts from
elimination theory.
The decomposition into irreducible ideals and into relatively prime irreducible ideals appears also not to be remarked upon for the polynomial ring
in the available literature; only a remark by Macaulay on the uniqueness of
the isolated primary ideals can be found.
The decomposition into coprime irreducible ideals is given by Schmeidler4 for the polynomial ring, using elimination theory for the proof of finiteness. However, here the uniqueness theorem is stated only for classes of ideals, not for the ideals themselves. This last uniqueness theorem can be found
in a joint paper,5 where it refers to ideals in non-commutative polynomial
rings. Only the finite ideal basis is made use of here, so theorems and methods for general rings remain to be addressed, becoming more of a problem
through this paper in terms of equality in size (§11). The present researches
give a strong generalisation and further development of the underlying concepts of both of these works. The basis of both works is the transition from
the expression as the least common multiple to an additive decomposition
of the system of residue classes. Here the least common multiple is kept for
the sake of a simpler representation; the additive decomposition then corresponds to the conversion of the idea of reducibility into a property of the
complement (§3). Therefore all given theorems are left to be reflected upon
and understood in the form of additive decomposition theorems for the system of residue classes and known subsystems, which is essentially equivalent
to the reflections of the joint paper. This system of residue classes forms a
ring of the same generality as originally laid down; that is, every ring can
be regarded as a system of residue classes of the ideal which corresponds to
the collection of all identity relations between the elements of the ring; or
also a subsystem of these relations, by assuming the remaining relations are
p66.
4
W. Schmeidler, Über Moduln und Gruppen hyperkomplexer Größen. Math. Zeitschr.
3 (1919), p29.
5
E. Noether - W. Schmeidler, Moduln in nichtkommutativen Bereichen, insbesondere
aus Differential- und Differenzenausdrücken. Math. Zeitschr. 8 (1920), p1.
5
also satisfied in the ring.
This remark also gives the classification in Fraenkel’s papers.6 Fraenkel
considers additive decompositions of rings that depend on such restrictive
conditions (existence of regular elements, division by these, decomposability
requirement), which coincide with the four decompositions for the corresponding ideal. Because of this concurrence, its finiteness condition also
means that the ideal only has finitely many proper divisors — apart from
some exceptional cases — a restriction no stricter than ours. Fraenkel’s
starting point is different, dependent on the essentially algebraic goals of his
work; through algebraic extension he achieved more general rings with fewer
restricting conditions.
§1. Ring, ideal, finiteness requirement.
1. Let Σ be a (commutative) ring in an abstract definition;7 that is
to say, Σ consists of a system of elements a, b, c, . . . , f, g, h, . . . in which a
relation satisfying the usual requirements is defined as equality, and in which
each two ring elements a and b combine uniquely through two operations,
addition and multiplication, to give a third element, given by the sum a ` b
and the product a¨b. The ring and the otherwise entirely arbitrary operations
must satisfy the following laws:
1. The associative law of addition: (a + b) + c = a + (b + c).
2. The commutative law of addition: a + b = b + a.
3. The associative law of multiplication: (a · b) · c = a · (b · c).
4. The commutative law of multiplication: a · b = b · a.
5. The distributive law: a · (b + c) = a · b + a · c.
6. The law of unrestricted and unique subtraction.
In Σ there is a single element x which satisfies the equation a + x = b.
(Written x = b − a.)
6
A. Fraenkel, Über die Teiler der Null und die Zerlegung von Ringen. J. f. M. 145
(1914), p139. Über gewisse Teilbereiche und Erweiterungen von Ringen. Professorial
dissertation, Leipzig, Teubner, 1916. Über einfache Erweiterungen zerlegbarer Ringe. J.
f. M. 151 (1920), p121.
7
The definition is taken from Fraenkel’s professorial dissertation, with omission of the
more restrictive requirements 6, I and II; instead the commutative law of addition must
be incorporated. It therefore concerns the laws defining a field, with the omission of the
multiplicative inverse.
6
The existence of the zero element follows from these properties; however,
a ring is not required to possess a unit, and the product of two elements can
be zero without either of the factors being zero. Rings for which a product
being zero implies one of the factors is zero, and which in addition possess a
unit, are called integral domains. For the finite sum a` a` ¨ ¨ ¨` a we use the
usual abbreviated notation na, where the integers n are solely considered as
shortened notation, not as ring elements, and are defined recursively through
a “ 1 ¨ a, na ` a “ pn ` 1qa.
2. Let an ideal M8 in Σ be understood to be a system of elements of Σ
such that the following two conditions are satisfied:
1. If M contains f , then M also contains a ¨ f , where a is an arbitrary
element of Σ.
2. If M contains f and g, then M also contains the difference f ´ g;
so if M contains f , then M also contains nf for all integers n.
If f is an element of M, then we write f ≡ 0 (M), as is usual; and we say
that f is divisible by M. If every element of N is equal to some element of M,
and so divisible by M, then we say N is divisible by M, written N ≡ 0 (M).
M is called a proper divisor of N if it contains elements not in N, and so is
not conversely divisible by N. If N ≡ 0 (M) and M ≡ 0 (N), then N = M.
The remaining familiar notions also remain valid, word for word. By the
greatest common divisor D “ pA, Bq of two ideals A and B, we understand
this to be all elements which are expressible in the form a ` b, where a is an
element of A and b is an element of B; D is also itself an ideal. Likewise, the
greatest common divisor D “ pA1 , A2 , . . . , Aν , . . . q of infinitely many ideals
is defined as all elements d which are expressible as the sum of the elements
of each of finitely many ideals: d “ ai1 ` ai2 ` ¨ ¨ ¨ ` ain ; here D is again itself
an ideal.
Should the ideal M contain in particular a finite number of elements
f1, f2, . . . , fϱ such that
    M = (f1, . . . , fϱ);
that is,
    f = a1 f1 + · · · + aϱ fϱ + n1 f1 + · · · + nϱ fϱ for all f ≡ 0 (M),
8
Ideals are denoted by capital letters. The use of M brings to mind the example of the
ideals composed of polynomials usually called “modules”. Incidentally, §§1-3 use only the
properties of modules and not the properties of ideals; compare §9 as well.
7
where the ai are elements of the ring, the ni are integers, and so M forms a
finite ideal; f1 , . . . , f̺ forms an ideal basis.
In the following we now consider solely rings Σ which satisfy the finiteness condition: every ideal in Σ is finite, and so has an ideal basis.
3. The following underlying ideas all follow directly from the finiteness
condition:
Theorem I (Theorem of the Finite Chain):9 If M, M1 , M2 , . . . , Mν , . . .
is a countably infinite system of ideals in Σ in which each ideal is divisible
by the following one, then all ideals after a finite index n are identical;
Mn “ Mn`1 “ . . . . In other words: If M, M1 , M2 , . . . , Mν , . . . gives a
simply ordered chain of ideals such that each ideal is a proper divisor of its
immediate predecessor, then the chain terminates in a finite number of steps.
In particular, let D “ pM1 , M2 , . . . , Mν , . . . q be the greatest common
divisor of the system, and let f1 . . . fk be an always existing basis of D
resulting from the finiteness condition. Then it follows from the divisibility
assumption that every element of D is equal to an element of an ideal in the
chain; then it follows from
f “ g ` h,
g ” 0 pMr q,
h ” 0 pMs q,
pr ď sq
that g ” 0 pMs q and so f ” 0 pMs q. The corresponding statement holds if
f is the sum of several components. There is therefore also a finite index n
such that
f1 ” 0 pMn q; . . . ; fk ” 0 pMn q; D “ pf1 , . . . , fk q ” 0 pMn q.
Because conversely Mn ” 0 pDq, it follows that Mn “ D; and because
furthermore
Mn`i ” 0 pDq;
D “ Mn ” 0 pMn`i q,
it also follows that Mn`i “ D “ Mn for all i, whereupon the theorem is
proved.
9
Initially stated for modules by Dedekind: Zahlentheorie, Suppl. XI, §172, Theorem
VIII (4th condition); our proof and the term ”chain” is taken from there. For ideals of
polynomials by Lasker, loc. cit. p56 (lemma). In both cases the theorem finds only
specific applications, however. Our applications depend without exception on the axiom
of choice.
8
Note that conversely the existence of the ideal basis follows from this
theorem, so the finiteness condition could also have been stated in this basisfree form.
§2. Representation of an ideal as the least common
multiple of finitely many irreducible ideals.
Let the least common multiple rB1 , B2 , . . . , Bk s of the ideals B1 , B2 ,
. . . , Bk be defined as usual as the collection of elements which are divisible
by each of B1 , B2 , . . . , Bk , written as:
f ” 0 pBi q, pi “ 1, 2, . . . , kq implies f ” 0 prB1 , B2 , . . . , Bk sq
and vice versa. The least common multiple is itself an ideal; we call the
ideals Bi the components of the decomposition.
Definition I. A representation M “ rB1 , . . . , Bk s is called a reduced
representation if no Bi appears in the least common multiple Ai of the
remaining ideals, and if no Bi can be replaced with a proper divisor.10
If the conditions are only satisfied for the ideal Bi , then the representation is called reduced with respect to Bi . The least common multiple
Ai “ rB1 , . . . , Bi´1 , Bi`1 , . . . , Bk s is called the complement of Bi . Representations in which only the first condition is satisfied are called shortest
representations.
It suffices now, when considering the representation of an ideal as a lowest common multiple, to restrict ourselves to reduced representations, due
to the following lemma:
Lemma I. Every representation of an ideal as the least common multiple
of finitely many ideals can be replaced in at least one way by a reduced rep10
An example of a non-reduced representation is:
px2 , xyq “ rpxq, px2 , xy, y λ qs
for all exponents λ ě 2; the case λ “ 1, corresponding to the representation rpxq, px2 , yqs,
gives a reduced representation. (K. Hentzelt, who was killed in the war, gave this representation to me as the simplest example of a non-unique decomposition into primary
ideals.)
9
resentation; in particular, one such representation can be obtained through
successive decomposition.
Let M “ rB˚1 , . . . , B˚l s be an arbitrary representation of M. We can then
omit those B˚i which go into the least common multiple of the remaining
ideals. Because the remaining ideals still give M, the resulting representation
M “ rB1 , . . . , Bk s “ rAi , Bi s
satisfies the first condition, and so this is a shortest representation; and
this condition remains satisfied if some Bi is replaced with a proper divisor.
But the second condition is always satisfiable, by the Theorem of the Finite
Chain (Theorem I). For suppose
p1q
pνq
M “ rAi , Bi s “ rAi , Bi s “ ¨ ¨ ¨ “ rAi , Bi s, . . . ,
pνq
where each Bi is a proper divisor of its immediate predecessor, so that
p1q
pνq
the chain Bi , Bi , . . . , Bi , . . . must terminate in finitely many steps; in
pnq
pnq
the representation M “ rAi , Bi s, Bi can therefore not be replaced with
a proper divisor, and this holds a fortiori if Ai is replaced with a proper
divisor. Therefore if the algorithm is applied to each Bi by at each stage
using the complement together with the already reduced B, then a reduced
representation is formed.11
In order to obtain such a representation successively, it must be shown
that it follows from the individual reduced representations
M “ rB1 , C1 s, C1 “ rB2 , C2 s, . . . , Cl´1 “ rBl , Cl s
that the representation M “ rB1 , . . . , Bl , Cl s resulting from these is also
reduced. For this it is sufficient to show that if the representations M “
rB1 , . . . , B̺ , Cs and C “ rC1 , C2 s are reduced, then M “ rB1 , . . . , B̺ , C1 , C2 s
is also reduced. Indeed, by assumption no B appears in its complement;
were this the case for a Ci , then in the first representation, C would be replaceable with a proper divisor, contrary to the assumption, because due
to the second reduced representation, C1 and C2 are proper divisors of C;
the representation is therefore a shortest representation. Furthermore, by
11
The previous example shows that the given representation does not uniquely define
such a reduced representation. For px2 , xyq “ rpxq, px2 , xy, y λ qs, where λ ě 2, both
rpxq, px2 , yqs and rpxq, px2 , µx ` yqs for arbitrary µ are reduced representations.
10
assumption no B can be replaced with a proper divisor; were this the case
for a Ci , then this would correspond to the substitution of C with a proper
divisor, in contradiction with the assumption, because the representation
for C is reduced. The lemma is thus proved.
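As an illustration of Lemma I (not part of Noether's text), the reduction in footnote 10 can be checked mechanically. The sketch below uses the Python library sympy; the helper lcm_ideal and the choice λ = 2 are ours, and the least common multiple of two polynomial ideals is computed by the standard elimination trick over the rationals.

```python
from sympy import symbols, groebner

x, y, t = symbols('x y t')

def lcm_ideal(F, G, gens):
    """Least common multiple (= intersection) of the ideals generated by F and G,
    obtained by eliminating t from the ideal t*F + (1 - t)*G (lex order, t first)."""
    mixed = [t*f for f in F] + [(1 - t)*g for g in G]
    gb = groebner(mixed, t, *gens, order='lex')
    return [p for p in gb.exprs if not p.has(t)]

# Footnote 10 with lambda = 2: [(x), (x^2, x*y, y^2)] represents (x^2, x*y), but the
# second component can be replaced by its proper divisor (x^2, y), so the first
# representation is shortest without being reduced.
print(lcm_ideal([x], [x**2, x*y, y**2], (x, y)))   # expected: generators of (x^2, x*y)
print(lcm_ideal([x], [x**2, y], (x, y)))           # expected: the same ideal
```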
Definition II. An ideal M is called reducible if it can be expressed as
the least common multiple of two proper divisors; otherwise M is called irreducible.
We now prove the following via the Theorem of the Finite Chain (Theorem I) and using the reduced representation:
Theorem II. Every ideal can be expressed as the least common multiple
of finitely many irreducible ideals.12
An arbitrary ideal M is either irreducible, in which case M “ rMs is a
representation in the form required by Theorem II, or it is M “ rB1 , C1 s,
where B1 , C1 are proper divisors of M, and by Lemma I the representation
can be assumed to be reduced. The same alternative holds for C1 ; either it is
irreducible or it has a reduced representation C1 “ rB2 , C2 s.
Continuing in this way, the following series of reduced representations is
obtained:
M “ rB1 , C1 s; C1 “ rB2 , C2 s; . . . ; Cν´1 “ rBν , Cν s; . . .   (1)
In the chain C1 , C2 , . . . , Cν , . . . , each Ci is a proper divisor of its immediate predecessor, and therefore the chain terminates in finitely many steps;
there is an index n such that Cn is irreducible. Furthermore, by Lemma
I the representation M “ rB1 , . . . , Bn , Cn s is reduced; Cn can therefore
not go into its complement An , and Cn cannot be replaced by any proper
divisor in the representation M “ rAn , Cn s. Replacing An with a proper
divisor13 if necessary so that the representation is reduced, it has therefore
12
The previous example shows that such a representation is in general not unique:
px2 , xyq “ rpxq, px2 , µx `yqs. Both components are irreducible for arbitrary µ. All divisors
of pxq are of the form px, gpyqq, where gpyq denotes a polynomial in y; therefore if the least
common multiple of two divisors also has this form, then it is a proper divisor of pxq.
px2 , µx ` yq has only one divisor px, yq, and so is necessarily also irreducible.
13
Indeed, the representation is also reduced with respect to An , as will be shown in §3
(Lemma IV) as the converse of Lemma I.
been shown that every reducible ideal admits a reduced representation as the
least common multiple of an irreducible ideal and an ideal complementary to
it. All Bi in series (1) can therefore without loss of generality be assumed
to be irreducible. Iteration of the above argument gives the existence of an
irreducible Cn , which proves Theorem II.
§3. Equality of the number of components in two
different decompositions into irreducible ideals.
To prove the equality of the number of components, it is first necessary
to express the reducibility and irreducibility of an ideal through properties
of its complement, via
Theorem III.14 Let the shortest representation M “ rA, Cs be reduced
with respect to C. Then a necessary and sufficient condition for C to be
reducible is the existence of two ideals N1 and N2 which are proper divisors
of M such that
N1 ” 0 pAq;
N2 ” 0 pAq;
rN1 , N2 s “ M.
(2)
From this also follows: If the conditions (2) are satisfied, and C is irreducible, then at least one of the Ni is not a proper divisor of M; Ni “ M.
Let C “ rC1 , C2 s, where C1 , C2 are proper divisors of C. Then it follows
that
M “ rA, Cs “ rA, C1 , C2 s “ rrA, C1 s, rA, C2 ss.
Here the ideals rA, Ci s are proper divisors of M, because otherwise rA, Cs
would not be reduced with respect to C. Because the divisibility by M is
also satisfied, condition (2) is proved to be necessary. (The representation
(2) is not reduced, because a rA, Ci s can be replaced with Ci .)
Conversely, now suppose (2) holds. We construct the ideals:
C1 “ pC, N1 q;   C2 “ pC, N2 q;   C˚ “ rC1 , C2 s.
14
Theorem III corresponds to the transition from modules to quotient groups in the
works of Schmeidler and Noether-Schmeidler (cf. the introduction). Here A corresponds
to the quotient group, and N1 and N2 to the subgroups into which the quotient group is
decomposed.
Then C is divisible by both C1 and C2 , and therefore also by the least common
multiple C˚ . In order to show the divisibility of C˚ by C, let f ” 0 pC˚ q;
therefore f ” 0 pC1 q and f ” 0 pC2 q, or also f “ c`n1 and f “ c̄`n2 , where
c, c̄, n1 , n2 are elements of C, N1 , N2 , and so in particular ni is divisible by
A. Therefore the difference
g “ c ´ c̄ “ n2 ´ n1
is divisible both by C and by A, and so by M. Because n1 “ n2 ` m, it
further holds that n1 (likewise n2 ) is divisible both by N1 and by N2 , and
so by M. Therefore
f “ c ` m;
f ” 0 pCq;
C˚ “ C.
Here C1 and C2 are proper divisors of C, since if Ci “ pC, Ni q “ C, then Ni
would be divisible by C, and so because of the divisibility by A, Ni would
be equal to M, in contradiction with the assumption. Hence C “ rC1 , C2 s is
identified as reducible; Theorem III is proved.
Note that almost identical reasoning also shows the following:
Lemma II. If the ideal C in a shortest representation M “ rA, Cs can
be replaced with a proper divisor, then C is reducible.
Let
M “ rA, Cs “ rA, C1 s,
and set
C˚ “ rC1 , pA, Cqs.
Then C is again divisible by C˚ . Furthermore, it follows from f ” 0 pC˚ q
that
f “ c1 “ a ` c.
The difference a “ c1 ´ c is therefore divisible both by A and by C1 , and
therefore by M; therefore c1 “ c ` m; f ” 0 pCq; C˚ “ C.
Because both C1 and pA, Cq are proper divisors of C by assumption,
C “ C˚ is thus identified as reducible.
An irreducible C can therefore not be replaced with a proper divisor.
Now let the following be two different shortest representations of M as
the least common multiple of (finitely many) irreducible ideals:
M “ rB1 , . . . , Bk s “ rD1 , . . . , Dl s.
These representations are both reduced according to the remark following
Lemma II. Now we shall first prove
Lemma III. For every complement Ai “ rB1 . . . Bi´1 Bi`1 . . . Bk s there
exists an ideal Dj such that M “ rAi , Dj s.
Set M “ rD1 , C1 s, C1 “ rD2 , C12 s and so forth, so that
M “ rAi , Ms “ rAi , D1 , C1 s “ rrAi D1 s, rAi C1 ss.
Here the conditions (2) from Theorem III are satisfied for N1 “ rAi , D1 s, N2 “
rAi , C1 s, because M “ rAi , Bi s is reduced with respect to Bi and the representation is a shortest one. Because Bi was assumed to be irreducible, one
Ni must necessarily be equal to M.
For N1 “ M the lemma would be proved; for N2 “ M it follows respectively that M “ rrAi D2 s, rAi C12 ss, where, by the same result, one component must again be equal to M. Continuing in this way, it follows either that
M “ rAi , Dj s, where j ă l, or that M “ rAi , C1...l´1 s; because C1...l´1 “ Dl ,
the lemma is thus proven.
Now, as a result of this, we have
Theorem IV. For two different shortest representations of an ideal as
the least common multiple of irreducible ideals, the number of components
is the same.
It follows from the lemma for the particular case i “ 1:
M “ rA1 , B1 s “ rA1 , Dj1 s “ rDj1 , B2 , . . . , Bk s.
Now consider both of the decompositions
M “ rDj1 , B2 , . . . , Bk s “ rD1 , D2 , . . . , Dl s,
and recall the earlier result with reference to the complement Ā2 “
rDj1 , B3 , . . . , Bk s of B2 . It then follows that
M “ rĀ2 , B2 s “ rĀ2 , Dj1 s “ rDj1 , Dj2 , B3 , . . . , Bk s;
and by extension of this procedure:
M “ rDj1 , Dj2 , . . . , Djk s.
Because now the representation M “ rD1 , . . . , Dl s is a shortest one by assumption, and so no D can be omitted, the different ones among the Dji
must exhaust all of the D; therefore it follows that k ě l. Should we swap
the B with the D throughout the lemma and the subsequent results, it then
follows accordingly that l ě k, and therefore that k “ l, which proves the
equality of the number of components. As a result of this it follows that the
ideals Dji are all different from each other, because otherwise there would
be fewer than k components in a shortest representation using the Dji ; the
notation can therefore be chosen so that Dji “ Di . By the same reasoning,
all intermediate representations M “ rDj1 , . . . , Dji , Bi`1 , . . . , Bk s are also
shortest and so, by the remark following Lemma II, reduced.
The equality of the number of components leads us to a converse of
Lemma I through
Lemma IV. If the components in a reduced representation are collected
into groups and the least common multiples of them constructed, then the
resulting representation is reduced. In other words: Given a reduced representation
M “ rC11 , . . . , C1µ1 ; . . . ; Cσ1 , . . . , Cσµσ s,
it follows that M “ rN1 , . . . , Nσ s “ rNi , Li s is also reduced, where Ni “
rCi1 , . . . , Ciµi s.
Firstly, note that Ni cannot go into its complement Li , since this is not
the case for any of its divisors Cij ; the representation is therefore a shortest
one. In order to show that Ni cannot be replaced with any proper divisor,
we split the C into their irreducible ideals B,15 so that the (by Lemma I)
15
By this we understand the B to always be shortest, and therefore reduced, representations.
reduced representations arise:
M “ rB11 , . . . , B1λ1 ; . . . ; Bσ1 , . . . , Bσλσ s;
Ni “ rBi1 , . . . , Biλi s.
Now let M “ rN˚i , Li s be reduced with respect to N˚i , where N˚i is a proper
divisor of Ni . Then by Lemma II:
Ni “ rN˚i , pNi , Li qs;
and this representation is necessarily reduced with respect to N˚i , because
otherwise N˚i can also be replaced with a proper divisor in M. Now also
replace pNi , Li q with a proper divisor where appropriate, so that a reduced
representation for Ni is achieved. If both components of Ni are now decomposed into irreducible ideals, then the number λi of irreducible ideals in Ni is
composed additively of those of the components; the number of irreducible
ideals corresponding to the proper divisor N˚i is therefore necessarily smaller
than λi . Then, however, the decomposition of M “ rN˚i , Li s into irreducible
ideals leads to fewer than ∑i λi ideals, in contradiction with the equality of
number of components. In the special case σ “ 2, it also follows that the
representation M “ rNi , Li s is reduced with respect to the complement Li .
§4. Primary ideals. Uniqueness of the prime ideals belonging to two different decompositions into
irreducible ideals.
The following concerns the connection between primary and irreducible
ideals.
Definition III. An ideal Q is called primary if a ¨ b ” 0 pQq, a ı 0 pQq
implies bx ” 0 pQq, where the exponent x is a finite number.
The definition can also be restated as follows: if a product a¨b is divisible
by Q, then either one factor is divisible by Q or a power of the other is. If
in particular x is always equal to 1, then the ideal is called a prime ideal.
From the definition of a primary (respectively prime) ideal follows, by
virtue of the existence of a basis, the definition using only products of ideals:16
Definition IIIa. An ideal Q is called primary if A ¨ B ” 0 pQq,
A ı 0 pQq necessarily implies Bλ ” 0 pQq. If λ is always equal to 1,
then the ideal is called a prime ideal. For a prime ideal P, it therefore always follows from A ¨ B ” 0 pPq, A ı 0 pPq that B ” 0 pPq.
Because Definition III is contained in IIIa for A “ paq, B “ pbq as a
special case, every ideal which is primary by IIIa is also primary by III.
Conversely, suppose Q is primary by III, and let the assumption of IIIa be
satisfied: A ¨ B ” 0 pQq, so that it therefore follows either that A ” 0 pQq,
or alternatively that there is at least one element a ” 0 pAq such that
a ¨ B ” 0 pQq and a ı 0 pQq. If now b1 , . . . , br is an ideal basis of B, then
by Definition III, since a ¨ bi ” 0 pQq, the following holds:
b1^x1 ” 0 pQq; . . . ; br^xr ” 0 pQq.
Because b “ f1 b1 `¨ ¨ ¨`fr br `n1 b1 `¨ ¨ ¨`nr br , for λ “ x1 `¨ ¨ ¨`xr the product
of λ elements b is therefore divisible by Q, which proves the fulfilment of
Definition IIIa for ideals primary by III. In particular, for prime ideals P
it follows from a ¨ B ” 0 pPq, and therefore also a ¨ b ” 0 pPq for every
b ” 0 pBq and a ı 0 pPq, that b ” 0 pPq and thus B ” 0 pPq. We have
therefore shown the two definitions to be equivalent.
The connection between primary and prime ideals shall be established
through the remark that the collection P of all elements p with the property
that a power of p is divisible by Q forms a prime ideal. It is immediately clear
that P is an ideal; because if the given property holds for p1 and p2 , it also
holds for ap1 and p1 ´ p2 . Furthermore, by the inference used in Definition
IIIa regarding the basis, there exists a number λ such that Pλ ” 0 pQq.
Now let
a ¨ b ” 0 pPq;
a ı 0 pPq,
so that it follows from the definition of P that
aλ ¨ bλ ” 0 pQq;   aλ ı 0 pQq;
therefore by the definition of Q:
bλx ” 0 pQq
and hence
b ” 0 pPq,
which proves P to be a prime ideal.
16
The product A ¨ B of two ideals is understood to mean, as usual, the ideal consisting of the collection of elements a ¨ b and their finite sums.
P is also defined as the greatest common
divisor of all ideals B with the property that a power of B is divisible by
Q. Every such B is by definition divisible by P; thus the greatest common
divisor D of these B is too. Conversely, P is itself one of the ideals B, so
is divisible by D, which proves that P “ D. P is therefore a prime ideal
which is a divisor of Q and a power of which is divisible by Q. It is uniquely
defined by these properties, because it follows from
Q ” 0 pPq;   Pλ ” 0 pQq;   Q ” 0 pP̄q;   P̄µ ” 0 pQq
that
Pλ ” 0 pP̄q;   P̄µ ” 0 pPq
and so, by the properties of prime ideals,
P ” 0 pP̄q;
P̄ ” 0 pPq;
P “ P̄.
In conclusion, we have
Theorem V. For every primary ideal Q there exists one, and only one,
prime ideal P which is a divisor of Q and a power of which is divisible by
Q; P shall be referred to as the “associated prime ideal”.17 P is defined as
the greatest common divisor of all ideals B with the property that a power
of B is divisible by Q. If ̺ is the smallest number such that P̺ ” 0 pQq,
then ̺ shall be referred to as the exponent of Q.18
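The relations in Theorem V can be checked on the example of footnote 18 below, Q = (x^2, y) with associated prime P = (x, y) and exponent 2. The following sketch is an illustration only (not part of the original); it relies on sympy's Groebner-basis membership test and on the assumption that checking the generators suffices for the divisibility of one finitely generated ideal by another.

```python
from sympy import symbols, groebner

x, y = symbols('x y')

Q  = groebner([x**2, y], x, y)           # the primary ideal Q = (x^2, y)
P2 = groebner([x**2, x*y, y**2], x, y)   # P^2, where P = (x, y)

# P^2 is divisible by Q: every generator of P^2 lies in Q ...
print(all(Q.contains(g) for g in (x**2, x*y, y**2)))   # expected: True
# ... but Q is not divisible by P^2, since y lies in Q and not in P^2.
print(P2.contains(y))                                    # expected: False
# x is not in Q, while x^2 is; so x belongs to the associated prime P, and the exponent is 2.
print(Q.contains(x), Q.contains(x**2))                   # expected: False True
```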
We now prove, as the connection between primary and irreducible:
17
The example M “ px2 , xyq shows that the converse does not hold. The prime ideal
pxq satisfies all requirements, but M is not primary.
18
It is not in general true that P̺ “ Q, as it is in the rings of the integers and the
algebraic integers; for example:
Q “ px2 , yq;
P “ px, yq;
P2 “ px2 , xy, y 2 q ” 0 pQq; but Q ı 0 pP2 q;
therefore Q differs from P2 .
Theorem VI. Every non-primary ideal is reducible; in other words, every irreducible ideal is primary.19
Let K be a non-primary ideal, so that by Definition III there exists at
least one pair of elements a, b such that
a ¨ b ” 0 pKq;
a ı 0 pKq;
bx ı 0 pKq for every x.
(3)
We now construct the two ideals
L0 “ pK, aq;
N0 “ pK, bq,
which by (3) are proper divisors of K, and by (3) it holds that
L0 ¨ N0 ” 0 pKq.
(4)
For the elements f of the least common multiple K0 “ rL0 , N0 s, the following
options now arise:
Either from
f ” 0 pL0 q;
f ” 0 pN0 q, that is f ” a1 ¨ b pKq
always follows a representation
f ” l0 ¨ b pKq;
l0 ” 0 pL0 q;
therefore, by (4), f ” 0 pKq, which implies K0 ” 0 pKq, and because K ”
0 pK0 q, it also holds that K “ K0 , by which K is proven to be reducible.
Or there is at least one f ” 0 pK0 q for which there exists no such l0 .
Using the a1 belonging to this f we then construct:
L1 “ pL0 , a1 q “ pK, a, a1 q;   N1 “ pK, b2 q.
19
The following example shows that the converse does not hold here:
Q “ px2 , xy, y λ q “ rpx2 , yq, px, y λ qs,
where λ ě 2. Here Q is primary, but reducible. (Q is primary because it contains all
products of powers of x and y with a total dimension of λ; for every polynomial without
a constant term, one power is therefore divisible by Q. However, should the polynomial
b in a ¨ b ” 0 pQq contain a constant term – so bx ı 0 pQq for every x – then a must be
divisible by Q, because every homogeneous component of a ¨ b is divisible by Q due to the
homogeneity of the basis polynomials of Q.)
Then, because a1 ¨ b ” 0 pL0 q by (4), it also holds that
L1 ¨ N1 ” 0 pKq;
(4’)
and L1 is a proper divisor of L0 .
For the elements f of K1 “ rL1 , N1 s, the same options occur:
Either from
f ” 0 pL1 q;
f ” 0 pN1 q, that is f ” a2 ¨ b2 pKq
it always follows that
f ” l1 ¨ b2 ;
l1 ” 0 pL1 q;
and so by (4’):
K “ K1 .
Or there is at least one f for which there exists no such l1 , which leads
to the construction of L2 “ pL1 , a2 q, N2 “ pK, b^(2^2) q, with L2 ¨ N2 ” 0 pKq,
where L2 is a proper divisor of L1 . Therefore, continuing in this way, in
general we define
L0 “ pK, aq; L1 “ pL0 , a1 q; . . . ; Lν “ pLν´1 , aν q; . . . ;
N0 “ pK, bq; N1 “ pK, b2 q; . . . ; Nν “ pK, b^(2^ν) q; . . . ,
where the ai are defined so that there exists an f such that
f ” 0 pLi´1 q;
f ” 0 pNi´1 q,
that is
f ” ai ¨ b^(2^(i´1)) pKq;
but ai ı 0 pLi´1 q.
Subsequently it holds in general that Li ¨ Ni ” 0 pKq, that by (3) Ni is a
proper divisor of K, and that Li is a proper divisor of Li´1 . By Theorem I
of the Finite Chain, the chain of the L must therefore terminate in finitely
many steps, say at Ln . For each f ” 0 pLn q, f ” 0 pNn q, it follows that
f ” ln ¨ b^(2^n) pKq, with ln ” 0 pLn q, and consequently by the above conclusion
K “ rLn , Nn s, which proves that K is reducible.
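The construction in the proof of Theorem VI can be traced on the ideal K = (x^2, x*y) of footnote 17, which is not primary (x·y lies in K, x does not, and no power of y does). With a = x and b = y the process stops at once, giving K = [(K, x), (K, y)] = [(x), (x^2, y)]. The sketch below (ours, not Noether's) checks this with sympy; only finitely many powers of y can be tested, of course.

```python
from sympy import symbols, groebner

x, y, t = symbols('x y t')

def intersect(F, G):
    """[F, G]: intersection of the two ideals, via elimination of t (lex order)."""
    gb = groebner([t*f for f in F] + [(1 - t)*g for g in G], t, x, y, order='lex')
    return [p for p in gb.exprs if not p.has(t)]

K  = [x**2, x*y]
KG = groebner(K, x, y)

# K is not primary: x*y lies in K, x does not, and no small power of y does.
print(KG.contains(x*y), KG.contains(x), [KG.contains(y**n) for n in (1, 2, 3)])
# expected: True False [False, False, False]

# The construction of Theorem VI with a = x, b = y stops at the first step:
L0 = K + [x]   # (K, a) = (x)
N0 = K + [y]   # (K, b) = (x^2, y)
print(intersect(L0, N0))   # expected: generators of K = (x^2, x*y), so K is reducible
```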
As a result of the preceding proofs, the uniqueness of the associated
prime ideals emerges as follows:
Let
M “ rB1 , . . . , Bk s “ rD1 , . . . , Dk s
be two shortest, and therefore reduced, representations of M as the least
common multiple of irreducible ideals, the numbers of components of which
are equal by Theorem IV. Then, by this theorem, the intermediate representations
M “ rD1 , . . . , Di´1 , Bi , Bi`1 , . . . , Bk s “ rD1 , . . . , Di´1 , Di , Bi`1 , . . . , Bk s
“ rĀi , Bi s “ rĀi , Di s
occurring there (where, as was remarked there, the index ji “ i can be set)
are also shortest representations. It therefore comes about that
Āi ¨ Bi ” 0 pDi q, Āi ı 0 pDi q;
Āi ¨ Di ” 0 pBi q, Āi ı 0 pBi q.
Because now by Theorem IV the irreducible ideals Bi and Di are primary,
it follows that there exist two numbers λi and µi such that
Bλi i ” 0 pDi q;   Dµi i ” 0 pBi q.   (5)
Now let Pi and P̄i denote the corresponding prime ideals of Bi and Di
respectively; therefore P̺i i ” 0 pBi q and P̄σi i ” 0 pDi q, and so by (5)
Piλi ̺i ” 0 pP̄i q;
P̄iµi σi ” 0 pPi q,
and from this, by the property of prime ideals:
Pi ” 0 pP̄i q;
P̄i ” 0 pPi q;
Pi “ P̄i .
With this we have proven
Theorem VII. For two different shortest representations of an ideal as
the least common multiple of irreducible ideals, the associated prime ideals
agree, and in fact the same associated prime ideals 20 also occur for every
decomposition thereof. The ideals themselves can be paired up in at least one
way such that a power of the ideal Bi is divisible by the associated Di and
vice versa. The numbers of ideals agree by Theorem IV.21
20
This is shown, for instance, in the example from footnote 19:
px2 , xy, y λ q “ rpx2 , yq, px, y λ qs,
where λ ě 2. Both corresponding prime ideals are px, yq here.
21
For the uniqueness of the “isolated” ideals found among the irreducible ideals, see §7.
§5. Representation of an ideal as the least common
multiple of maximal primary ideals. Uniqueness of
the associated prime ideals.
Definition IV. A shortest representation M “ rQ1 , . . . , Qα s is called
the least common multiple of maximal primary ideals if all Q are primary,
but the least common multiple of two Q is no longer primary.
That at least one such representation always exists follows from the representation of M as the least common multiple of irreducible ideals. This is
because these ideals are primary; either there now already exists a representation through maximal primary ideals, or else the least common multiple
of some two ideals is again primary. Because taking this least common multiple decreases the number of ideals by one, continuation of this procedure
leads to the desired representation in finitely many steps.
This representation is reduced by Lemma IV. Conversely, every reduced
representation arises through maximal primary ideals in this way, as the
decomposition of the Q into irreducible ideals shows.
In order to achieve an appropriate theorem of uniqueness here from Theorem VII, the connection with the associated prime ideals must be investigated, via
Theorem VIII. Should the primary ideals N1 , N2 , . . . , Nλ all have the
same associated prime ideal P, then their least common multiple Q “
rN1 , N2 , . . . , Nλ s is also primary and has P as its associated prime ideal.
If conversely Q “ rN1 , . . . , Nλ s is a reduced representation for the primary
ideal Q, then all Ni are primary and have as their associated prime ideal
the associated prime ideal P of Q.
To prove the first part of the statement, first note that P̺i ” 0 pNi q for
each i also implies Pτ ” 0 pQq, where τ denotes the largest of the indices ̺i .
Because P is furthermore also a divisor of Q, P is necessarily the associated
prime ideal, if Q is primary. It follows from
A ¨ B ” 0 pQq;
Bk ı 0 pQq (for every k)
that consequently
B ı 0 pPq; so Bk ı 0 pNi q (for every k)
and as a result
A ” 0 pNi q; so A ” 0 pQq,
which proves that Q is primary, and that P is the associated prime ideal.
Conversely, first let
Q “ rN1 , . . . , Nλ s “ rNi , Li s
be a shortest representation of Q (where Q is primary) using primary ideals
Ni , and let Pi be the respective associated prime ideals. It follows from
Li ¨ Ni ” 0 pQq;
Li ı 0 pQq
(because of the shortest representation) that
Nσi i ” 0 pQq; or P̺i i σi ” 0 pQq.
Because the Pi are all divisors of Q, Pi is therefore equal to the associated
prime ideal P of Q for all i.
It remains to show that for every reduced representation Q “ rN1 , . . . , Nλ s,
the Ni are primary.22 For this, decompose the Ni into their irreducible ideals B; in the resulting shortest representation Q “ rB1 , . . . , Bµ s, each ideal
is then primary and has P as its associated prime ideal by the previous
parts of the proof. The same then holds for each Ni by the first part of the
theorem, proved above, which completes the proof of Theorem VIII.
Remark. From this it follows that a prime ideal is necessarily irreducible. This is because the reduced representation P “ rN, Ls gives
N ” 0 pPq by Theorem VIII, and so, because P ” 0 pNq, this also gives
P “ N and similarly P “ L. The irreducibility of P also follows directly,
22
The example Q “ px2 , xy, y λ q “ rpx2 , xy, y 2 , yzq, px, y λ qs, where λ ě 2, which is not reduced, shows that the reduced representation is crucial here. Here px2 , xy, y 2 , yzq “ rpx2 , yq, px, y 2 , zqs is not primary by the above proof; this is because the last representation is a shortest one through primary ideals, but the associated prime ideals px, yq and px, y, zq are different. (Q is primary by footnote 19.)
because P “ rN, Ls implies N ¨ L ” 0 pPq, N ı 0 pPq, L ı 0 pPq, in contradiction with the defining property of a prime ideal.
Now let
M “ rQ1 , . . . , Qα s “ rQ̄1 , . . . , Q̄β s
be two reduced representations of M as the least common multiple of maximal primary ideals. By decomposing the Q into their irreducible ideals B
and the Q̄ respectively into the irreducible ideals D, two reduced representations of M as the least common multiple of irreducible ideals are produced,
in which by Theorem VII both the numbers of components and the associated prime ideals are the same. By Theorem VIII, all irreducible ideals
B for a fixed Qi have the same associated prime ideal Pi , while the Pk
associated with Qk is necessarily different from this, because otherwise by
Theorem VIII no representation using maximal primary ideals exists. The
number α of Q is therefore equal to the number of different associated prime
ideals P of the B; these different P construct the associated prime ideals of
the Q. The same applies to the Q̄ with respect to their decomposition into
the D. From Theorem VII it therefore follows that the number of Q and
Q̄ are the same, and that their corresponding prime ideals are the same.
Together these show that the summary given at the start of the section
about the irreducible ideals among maximal primary ideals holds true; that
is, these, and only these, all have the same associated prime ideal. Theorem
VII further shows the property of irreducibility of the maximal primary ideals: they admit no reduced representation as the least common multiple of
maximal primary ideals.
In summary the following has been proved:
Theorem IX. For two reduced representations of an ideal as the least
common multiple of maximal primary ideals, the numbers of components
and the associated prime ideals (which are all different from each other) are
the same. In other words, each Q can be uniquely associated with a Q̄ so
that a power of Q is divisible by Q̄, and vice versa.23 The Q and Q̄ have
23
One example of different representations is that given in footnote 12 for Theorem II:
px2 , xyq “ rpxq, px2 , µx`yqs for arbitrary µ. Because the associated prime ideals P1 “ pxq,
P2 “ px, yq are different, it consists of maximal primary ideals. For the uniqueness of the
“isolated” maximal primary ideals, see §7.
the property of irreducibility with respect to the decomposition into maximal
primary ideals.
Remark. Note that Theorem IX remains for the most part true if instead of reduced representations, only shortest representations are required.
If for instance M “ rQ1 , . . . , Q˚i , . . . , Qα s is then reduced with respect to Q˚i ,
and Q˚i is a proper divisor of Qi , then by Lemma IV Qi “ rQ˚i , pLi , Qi qs,
where Li is the complement of Qi ; this representation is reduced with respect
to Q˚i . By Theorem VIII, where in this application pLi , Qi q is replaced with
a proper divisor if necessary, Q˚i is therefore primary and has the same associated prime ideal Pi as Qi . Continuation of this procedure shows that every
such representation can be assigned a reduced representation through maximal primary ideals such that the number of components and the associated
prime ideals are the same. It therefore also holds for shortest representations that in two different representations the number of components and the
associated prime ideals are the same.
The thus uniquely defined associated prime ideals that are different from
each other shall be called “the associated prime ideals of M” for short.
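Footnote 23's two decompositions of (x^2, x*y) into maximal primary ideals can be compared computationally, here with μ = 3 as a concrete instance. The sketch is an illustration only: it verifies, with sympy, that both least common multiples reproduce the ideal, while the second components differ; the associated prime ideals (x) and (x, y) are the same in both cases, as Theorem IX requires.

```python
from sympy import symbols, groebner

x, y, t = symbols('x y t')

def intersect(F, G):
    """[F, G] via elimination of t (lex order, t first)."""
    gb = groebner([t*f for f in F] + [(1 - t)*g for g in G], t, x, y, order='lex')
    return [p for p in gb.exprs if not p.has(t)]

M = groebner([x**2, x*y], x, y)

# Two different representations through maximal primary ideals (footnote 23, mu = 3):
for second in ([x**2, y], [x**2, 3*x + y]):
    gens = intersect([x], second)
    back = all(M.contains(g) for g in gens) and \
           all(groebner(gens, x, y).contains(m) for m in (x**2, x*y))
    print(gens, back)   # expected: generators of (x^2, x*y) and True, in both cases
```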
§6. Unique representation of an ideal as the least
common multiple of relatively prime irreducible ideals.
Definition V. An ideal R is called relatively prime to S if T¨R ” 0 pSq
necessarily implies T ” 0 pSq. If R is relatively prime to S and S is also
relatively prime to R, then R and S are called mutually relatively prime.24
An ideal is called relatively prime irreducible if it cannot be expressed as the
least common multiple of mutually relatively prime proper divisors.
In particular, if instead of T the greatest common divisor T0 of all T
for which T ¨ R ” 0 pSq is taken, then we also have T0 ¨ R ” 0 pSq and
S ” 0 pT0 q. Therefore T0 “ S if R is relatively prime to S, and T0 is a
24
The relation of being relatively prime is not symmetric. For example, R “ px2 , yq is
relatively prime to S “ pxq, but S is not relatively prime to R, because S2 ” 0 pRq,
whereas T “ S ı 0 pRq.
proper divisor of S if R is not relatively prime to S.25
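Noether's T0 is the residual module of footnote 25, i.e. the ideal quotient (S : R). It can be computed for footnote 24's pair by the usual intersect-and-divide method; the sketch below is an illustration in sympy (the helper names are ours), not part of the original text.

```python
from sympy import symbols, groebner, div

x, y, t = symbols('x y t')
GENS = (x, y)

def intersect(F, G):
    """Intersection of two ideals, via elimination of t (lex order, t first)."""
    gb = groebner([t*f for f in F] + [(1 - t)*g for g in G], t, *GENS, order='lex')
    return [p for p in gb.exprs if not p.has(t)]

def residual(F, G):
    """T0 for the pair (S, R) = (F, G): the ideal quotient (F : G), built generator by
    generator from (F : g) = (F intersect (g)) divided through by g."""
    result = None
    for g in G:
        part = [div(p, g, *GENS)[0] for p in intersect(F, [g])]
        result = part if result is None else intersect(result, part)
    return result

R, S = [x**2, y], [x]
# Footnote 24: R = (x^2, y) is relatively prime to S = (x), since T0 equals S ...
print(residual(S, R))   # expected to generate (x)
# ... but S is not relatively prime to R, since T0 = (x, y) is a proper divisor of R.
print(residual(R, S))   # expected to generate (x, y)
```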
The proof of uniqueness is underpinned by
Theorem X. 1. If R is relatively prime to the ideals S1 , . . . , Sλ , then
R is also relatively prime to their least common multiple S.
2. If the ideals S1 , . . . , Sλ are relatively prime to R, then their least
common multiple S is also relatively prime to R.
3. If R is relatively prime to S and S “ rS1 , . . . , Sλ s is a reduced
representation of S, then R is also relatively prime to each Si .
4. If S is relatively prime to R, then each divisor Si of S is also relatively prime to R.
1. Because T ¨ R ” 0 pSq necessarily implies T ¨ R ” 0 pSi q, our
assumption implies T ” 0 pSi q, and therefore T ” 0 pSq.
2. Let C1 “ rS2 , . . . , Sλ s, C12 “ rS3 , . . . , Sλ s, . . . , C12...λ´1 “ Sλ . From
T¨S ” 0 pRq it then follows that T¨C1 ¨S1 ” 0 pRq, and so by our assumption
T ¨ C1 ” 0 pRq. From this it follows furthermore that T ¨ C12 ¨ S2 ” 0 pRq,
and so by our assumption T ¨ C12 ” 0 pRq. Proceeding in this way, it follows
finally that T ¨ C12...λ´1 “ T ¨ Sλ ” 0 pRq, and so T ” 0 pRq.
3. Let Ci denote the complement of Si ; it then follows from T0 ¨ R ”
0 pSi q that rT0 , Ci s ¨ R ” 0 pSq, since Ci ¨ R ” 0 pCi q.
Should T0 now be a proper divisor of Si , then because of the reduced
representation S “ rSi , Ci s, it is also true that rT0 , Ci s is a proper divisor
of S, contradicting our assumption.
4. It follows from T0 ¨ Si ” 0 pRq, where T0 is a proper divisor of R,
that T0 ¨ S ” 0 pRq; this contradicts our assumption.
Definition V shows in particular that every ideal is relatively prime to
the trivial ideal O consisting of all elements of Σ;26 the following theorems
however apply only for ideals other than O.
25
This so defined T0 is the same as Lasker’s “residual module” and Dedekind’s “quotient” of two modules in his expansion into modules rather than ideals. Lasker, loc. cit.
p49, Dedekind (Zahlentheorie), p504.
26
O plays the role of the trivial ideal only with respect to divisibility and least common
multiples, not with respect to the formation of products. For example, O “ pxq for the
ring of all polynomials in x with integer coefficients and without a constant term; O “ p2q
for the ring of all even numbers.
From Theorem X, which has just been proved, it immediately follows
that:
Lemma V. Every representation of an ideal as the least common multiple of mutually relatively prime ideals other than O is reduced.
Let M “ rR1 , R2 , . . . , Rσ s “ rRi , Li s be one such representation. By
Theorem X, part 2, Li is then also relatively prime to Ri ; Ri can therefore
not appear in Li , and so the representation is a shortest one. This is because
if Li ” 0 pRi q, where Li is relatively prime to Ri , it would also follow that
O ” 0 pRi q, as O ¨ Li ” 0 pRi q, and it therefore holds that Ri “ O, which is
by assumption impossible. Now suppose Ri can be replaced with the proper
divisor R˚i . Then Ri is reducible by Lemma II, and Ri “ rR˚i , pRi , Li qs. Replace pRi , Li q here with a proper divisor pRi , Li q˚ if necessary, and likewise
replace R˚i with a proper divisor if necessary so that a reduced representation for Ri is formed. Then Theorem X, part 3, shows that Li is also
relatively prime to pRi , Li q˚ , and this is different from O because of the
shortest representation. Because however pRi , Li q˚ appears in pRi , Li q and
pRi , Li q appears in Li , a contradiction therefore arises.
From Theorem X a further theorem arises, concerning the connection
with the associated prime ideals:
Theorem XI. If R is relatively prime to S and S is different from
O, then no associated prime ideal of R27 is divisible by an associated prime
ideal of S. Conversely, should no such divisibility occur, then R is relatively
prime to S, and of course S is different from O.
Let
R “ rQ1 , . . . , Qα s,
S “ rQ˚1 , . . . , Q˚β s
be reduced representations of R and S through maximal primary ideals,
and let P1 , . . . , Pα , P˚1 , . . . , P˚β be the associated prime ideals. We prove
the statement in the following form: if a P is divisible by a P˚ , then R
cannot be relatively prime to S, and vice versa.
27
The definition of associated prime ideals of R is given in the conclusion of §5.
Therefore let Pµ ” 0 pP˚ν q and as a result also Qµ ” 0 pP˚ν q, from
which by definition of P˚ν it follows that Qσµν ” 0 pQ˚ν q, and therefore also
Rσν ” 0 pQ˚ν q. Now let Rτ be the lowest power of R divisible by Q˚ν . If
τ “ 1, then R is divisible by Q˚ν ; because, however, S is different from O
by assumption, R is therefore not relatively prime to Q˚ν . If τ ě 2, then
Rτ ´1 ¨R ” 0 pQ˚ν q, Rτ ´1 ı 0 pQ˚ν q, and so R is not relatively prime to Q˚ν ,28
and hence in both cases R is also not relatively prime to S by Theorem X,
part 3.
If conversely R is not relatively prime to S, then by Theorem X, part
1, R is also not relatively prime to at least one Q˚ν . It therefore holds that
T0 ¨ R ” 0 pQ˚ν q, T0 ı 0 pQ˚ν q, and from this, because Q˚ν is primary, it
follows that Rτ ” 0 pQ˚ν q, and so also Qτ1 . . . Qτα ” 0 pQ˚ν q. It then follows,
however, that Pτ1 ̺1 . . . Pτα̺α ” 0 pP˚ν q for the associated prime ideals, and
by the properties of the prime ideals, P˚ν is therefore contained in at least
one P, which completes the proof of Theorem XI.
The existence29 and uniqueness of the decomposition into relatively prime
irreducible ideals now arises from Theorems X and XI as follows:
Let M “ rQ1 , . . . , Qα s be a reduced (or at least shortest) representation
of M through maximal primary ideals, and let P1 , . . . , Pα be the associated
prime ideals. We collect the P together in groups such that no ideal in a
group is divisible by an ideal in a different group, and each individual group
cannot be split into two subgroups that both have this property. In order to
construct such a grouping, it must be noted that by definition the group G
containing each ideal P must also contain all its divisors and multiples (that
is to say, divisible by P) occurring in P1 . . . Pα . For example, let Ppi1 q be all
pi q
pi ,i q
the multiples of P, Pj11 all the divisors of Ppi1 q , Pj11 2 all the multiples of
pi q
pi ...i q
pi ...i q
pi ...i q
pi ...i
q
λ´1
λ
and
is all the multiples of Pj11...jλ´1
Pj11 and so on; so in general Pj11...jλ´1
λ
. Because it deals with only finitely
Pj11...jλλ is all the divisors of Pj11...jλ´1
many ideals P, this algorithm must terminate in finitely many steps; that
is, no ideal different from all preceding ones is left over as a result. The thus
28
R0 is not defined, because Σ does not need to contain a unit; therefore the case τ “ 1
must be considered separately. τ “ 0 is also impossible when Σ does contain a unit, due
to the assumption regarding S.
29
The existence of the decomposition can also be proved directly, in exact analogy with
the proof of the existence of the decomposition into finitely many irreducible ideals given
in §2.
obtained system of ideals P now in fact forms a group G with the desired
properties.
By definition G contains, in addition to each P^(i1 ...iλ´1)_(j1 ...jλ´1), all multiples
P^(i1 ...iλ)_(j1 ...jλ´1), and in addition to each P^(i1 ...iλ)_(j1 ...jλ´1), all divisors
P^(i1 ...iλ)_(j1 ...jλ). Among these, however, are also contained all divisors of the
P^(i1 ...iλ´1)_(j1 ...jλ´1), while the multiples of the P^(i1 ...iλ)_(j1 ...jλ´1) are
themselves again P^(i1 ...iλ)_(j1 ...jλ´1). The ideals not contained in G can therefore
neither be divisors nor multiples of those ideals contained in G. G however also satisfies
the irreducibility condition. Because if a division into two subgroups G^(1) and G^(2)
exists, and G^(1) contains for instance P^(i1 ...iλ)_(j1 ...jλ) (respectively
P^(i1 ...iλ)_(j1 ...jλ´1)), it then also contains all preceding ideals, because these are
alternating multiples and divisors (respectively divisors and multiples). So it also
contains P, and thus the whole group G. Should we continue accordingly with the ideals not
contained in G, then a division of all P into groups G1 , . . . , Gσ which possess the
desired properties is obtained. Such a grouping is unique; because if G1', . . . , Gτ' is
a second grouping and P^(i1 ...iλ)_(j1 ...jλ) (respectively P^(i1 ...iλ)_(j1 ...jλ´1)) is
an element of Gi', then by the above reasoning Gi' contains the whole group G and is
therefore identically equal to G because of the irreducibility property.
Now denote by Piµ (where µ runs from 1 to λi ) the ideals P collected
in a group Gi . Then, because the P are all different from each other, the
associated primary ideals Qiµ of a shortest representation are uniquely determined. Should we set
Ri “ rQi1 , . . . , Qiλi s, then M “ rR1 , . . . , Rσ s;
we show that, as a result, a decomposition of M into relatively prime irreducible ideals is achieved. First of all, it must be noted that Theorem XI
also remains applicable when Ri “ rQi1 , . . . , Qiλi s is only a shortest representation, because the associated prime ideals are uniquely defined, as seen
from the remark on Theorem IX. Because now no associated prime ideal Piµ
of Ri is divisible by an associated prime ideal Pjν of Rj and vice versa, by
Theorem XI Ri and Rj are mutually relatively prime, and each individual
Ri is relatively prime irreducible by Theorem XI due to the irreducibility
property of the group Gi , because with reducibility, ideals completely different from O come into question. Furthermore, by Lemma V it is a reduced
representation also if we had originally only started out with a shortest representation through the Q. Conversely, every decomposition into relatively
prime irreducible ideals leads to the given grouping of the Piµ by Theorem
XI.
Now let M “ rR̄1 , . . . , R̄τ s be a second representation of M through
relatively prime irreducible ideals, which is reduced by Lemma V. Because
then, as the decomposition of the R̄ into maximal primary ideals shows, the
associated prime ideals are the same, the groupings of these prime ideals,
the uniqueness of which was proven above, are therefore also the same. It
therefore follows that τ “ σ; and the notation can be chosen so that Ri and
R̄i belong to the same group. Therefore let
M “ rRi , Li s “ rR̄i , L̄i s
be the representation through ideal and complement. Then, because R̄i is
associated with the same group as Ri , by Theorem XI Li is also relatively
prime to R̄i and L̄i relatively prime to Ri . Because
Ri ¨ Li ” 0 pR̄i q,
R̄i ¨ L̄i ” 0 pRi q
it therefore follows that
Ri ” 0 pR̄i q;
R̄i ” 0 pRi q;
Ri “ R̄i .
Hence we have proved
Theorem XII. Every ideal can be uniquely expressed as the least common multiple of finitely many mutually relatively prime and relatively prime
irreducible ideals.
§7. Uniqueness of the isolated ideals.
Definition VI. If the shortest representation M “ rR, Ls is reduced
with respect to L, then R is called an isolated ideal if no associated prime
ideal of R appears in an associated prime ideal of L, or in other words, if L
is relatively prime to R.
Subsequently the representation M “ rR, Ls satisfies the requirements
of the representation M “ rRi , Li s in Lemma V, and with Lemma V we
have proved
Lemma VI. If R is an isolated ideal of the shortest and reduced (with
respect to L) representation M “ rR, Ls, then the representation is also reduced with respect to R.
Because we therefore always have a reduced representation when using
isolated ideals, the associated prime ideals occurring in the decompositions
of R and L into irreducible ideals complement each other to give the uniquely
determined associated prime ideals occurring in the corresponding decomposition of M.
Therefore no prime ideal of R belonging to the decomposition into irreducible ideals appears in the remaining prime ideals belonging to the corresponding decomposition of M. If conversely this condition is satisfied,
and if R features in at least one representation M “ rR, Ls reduced with
respect to R, and so also in a reduced representation M “ rR, L˚ s, then R
is isolated by Definition VI. From this the following definition arises, which
is independent of the particular complement L:
Definition VIa. R is called an isolated ideal if the prime ideals of R
belonging to the decomposition into irreducible ideals do not appear in the
remaining prime ideals belonging to the corresponding decomposition of M,
and if R appears in at least one representation M “ rR, Ls reduced with
respect to R.30
Now let
M “ rR, Ls “ rR̄, L̄s
be two representations of M through isolated ideals R and R̄ and complements L and L̄ such that the associated prime ideals of R and R̄ are the
same. Should L and L̄ be replaced with divisors L˚ and L̄˚ such that the
representations are reduced, then the associated prime ideals of L˚ and L̄˚
are also the same; by Theorem XI L˚ is therefore relatively prime to R̄ and
30
Should ordinary associated prime ideals (conclusion of §5) be introduced, then it
would be added as a special requirement that those of L all be different from those of R.
Following Definition VIa the representation therefore needs no longer be assumed to be
reduced with respect to the complement.
L̄˚ relatively prime to R. Because
R ¨ L˚ ” 0 pR̄q;
R̄ ¨ L̄˚ ” 0 pRq
it therefore follows that
R ” 0 pR̄q;
R̄ ” 0 pRq;
R “ R̄;
isolated ideals are therefore uniquely determined by the associated prime
ideals. This in particular results in a strengthening of Theorems VII and IX
regarding the decomposition into irreducible and maximal primary ideals,
where due to the remark on Theorem IX only shortest representations need
to be assumed. In summary:
Theorem XIII. For each shortest representation of an ideal as the least
common multiple of irreducible ideals (respectively maximal primary ideals), the isolated irreducible ideals (respectively maximal primary ideals) are
uniquely determined; the non-uniqueness applies only to the non-isolated
irreducible ideals (respectively maximal primary ideals).31 In general, the
isolated ideals are uniquely determined by the associated prime ideals.
If the ideals Bi (respectively Dj ) in one such shortest representation
through irreducible ideals (respectively maximal primary ideals) are nonisolated, then by definition the complements Ai (respectively Lj ) are divisible by Pi (respectively Pj ). It therefore follows that
Ai^̺i ” 0 pBi q (respectively Lj^σj ” 0 pDj q).
It follows conversely from the fulfilment of these relations that Pi (respectively Pj ) appears in at least one associated prime ideal of the complement,
and so by the remark on Theorem IX also in an associated prime ideal of the
divisor L˚j of Lj that leads to a representation that is reduced with respect
to L˚j ; the Bi (respectively Dj ) are therefore non-isolated. In particular, irreducible ideals Bi for which the associated prime ideal appears more than
once in the decomposition of M are always non-isolated. Non-isolated primary ideals are therefore also characterised by the fact that a power of each
31
This theorem is already given without proof by Macaulay for polynomial ideals in the
case of the decomposition into maximal primary ideals; his definition of the isolated and
non-isolated (imbedded) primary ideals can be viewed as the irrational version of that
given below.
complement is divisible by them; isolated primary ideals are characterised by
the fact that this cannot be satisfied.
§8. Unique representation of an ideal as the product of coprime irreducible ideals.
Should the ring Σ contain a unit, that is, an element ε such that ε¨a “ a
for all elements in Σ,32 then the coprime ideals can be defined by
Definition VII. Two ideals R and S are called coprime if their greatest common divisor is the trivial ideal O “ pεq consisting of all elements of
Σ. An ideal is called coprime irreducible if it cannot be expressed as the least
common multiple of pairwise coprime ideals.
Note that two coprime ideals are always mutually relatively prime. By
definition there are two elements r ” 0 pRq and s ” 0 pSq such that ε “
r ` s. It follows from T ¨ R ” 0 pSq however that T ¨ r ” 0 pSq, and so
T ¨ ε “ T ” 0 pSq; similarly T̄ ¨ S ” 0 pRq gives T̄ ” 0 pRq.33 It therefore
follows from Lemma V that each representation through pairwise coprime
ideals is reduced.
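Coprimality of two ideals can be tested by asking whether the unit lies in their greatest common divisor, that is, whether the reduced Groebner basis of the combined generators is just {1}. The sketch below (an illustration, not part of the text) checks footnote 33's pair (x), (y), which is mutually relatively prime but not coprime, against the coprime pair (x), (x − 1).

```python
from sympy import symbols, groebner

x, y = symbols('x y')

def coprime(F, G):
    """The ideals are coprime exactly when their greatest common divisor (F, G) is the
    whole ring, i.e. when the reduced Groebner basis of F + G collapses to {1}."""
    return list(groebner(F + G, x, y).exprs) == [1]

print(coprime([x], [x - 1]))   # expected: True, since 1 = x - (x - 1)
print(coprime([x], [y]))       # expected: False (footnote 33)
```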
The following theorem, analogous to Theorem X, lays the foundation for
the proof of uniqueness:
Theorem XIV. If R is coprime to each of the ideals S1 , . . . , Sλ , then
R is also coprime to S “ rS1 , . . . , Sλ s. Conversely, it also follows from the
coprimality of R and S that R is coprime to each Sj . If R “ rR1 , . . . , Rµ s
and each Ri is coprime to each Sj , then R and S are coprime; the converse
again holds here.
If R is coprime to each Sj , then there exist elements sj such that
sj ” 0 pSj q;   sj ” ε pRq.
32
Σ clearly cannot contain more than one unit because of the commutativity of multiplication, since for some two units ε1 and ε2 it holds that ε1 ε2 “ ε2 “ ε1 .
33
The converse does not hold, however; for example, the ideals R “ pxq and S “ pyq
are mutually relatively prime, but not coprime.
Hence
s1 ¨ s2 ¨ . . . ¨ sλ ” 0 pSq;
s1 ¨ s2 ¨ . . . ¨ sλ ” ε pRq,
pR, Sq “ pεq.
Because, however, pR, Sq is divisible by each pR, Sj q, the converse also
holds. Repeated application of this conclusion results in the second part of
the statement. Namely, if Ri is coprime to S1 , . . . , Sλ for fixed i, then Ri
is coprime to S. If this holds for every i, then because the relationship of
coprimality is symmetric, S is coprime to R. Conversely, the coprimality of
S to Ri follows from the coprimality of R and S, and from this follows the
coprimality of Ri to Sj .
The proof of the existence and uniqueness34 of the decomposition into
coprime irreducible ideals comes from a unique grouping, like the corresponding proof for relatively prime ideals. Because, however, the relatively
prime irreducible ideals R1 , . . . , Rσ of M are uniquely defined by Theorem
XII, referring back to the associated prime ideals is unnecessary here.
We collect together the uniquely defined relatively prime irreducible ideals R1 , . . . , Rσ of M in groups in such a way that each ideal of a group
is coprime to every ideal of a group different from it, while each individual
group cannot be split into two subgroups such that each ideal of a subgroup is
coprime to every ideal of the other subgroup. One such grouping is given as
follows: by definition, for every ideal R contained in each individual group
G, all ideals not coprime to R must also be contained in G. For instance,
let these be denoted by Ri1 ; let Ri1 i2 be not coprime to these; in general, let
Ri1 ...iλ be not coprime to Ri1 ...iλ´1 . Because we are only dealing with finitely
many ideals in total, this procedure must terminate in finitely many steps,
that is, there comes a point after which no ideals different from all those preceding are yielded; the thus obtained system of ideals contructs a group G
with the desired properties, because all ideals not contained in G are by construction coprime to all those contained in G. Suppose there further exists
a splitting into two subgroups Gp1q and Gp2q ; and without loss of generality
let Ri1 ...iλ be an element of Gp1q . Because the relationship of coprimality
is symmetric, Gp1q must then also contain Ri1 ...iλ´1 , . . . , Ri1 , and so also R
and consequently the whole group G, which proves irreducibility. If we then
34
The existence of the decomposition can again be proved in direct analogy with §2; the
proof of uniqueness can also be conducted directly (cf. that mentioned in the introduction
about Schmeidler and Noether-Schmeidler). The proof given here also gives an insight
into the structure of the coprime irreducible ideals.
proceed accordingly with the groups not contained in G, then we obtain a
grouping G1 , . . . , Gτ of all R. This grouping is unique; let G1', . . . , Gτ' be a
second grouping and Ri1 ...iλ an element of Gi'. Then Gi' also contains R and
hence G1 , and because of the irreducibility requirement cannot contain any
elements different from G1 ; thus Gi' is the same as G1 .
Now let Ti be the least common multiple of the ideals R incorporated
into a group Gi . We show that M “ rT1 , . . . , Tτ s is a representation of
M through coprime irreducible ideals. First of all, the decomposition of the
T into the R shows that rT1 , . . . , Tτ s really does give a represention of M.
Furthermore, by Theorem XIV the T are pairwise coprime, and each T is
coprime irreducible. Hence the existence of such a representation is proven.
For the proof of uniqueness, let M “ rT̄1 , . . . , T̄τ̄ s be a second such
representation. If we decompose the T̄ into their relatively prime irreducible
ideals R, then the R appearing in different T̄ are coprime to each other by
Theorem XIV, and so are also mutually relatively prime; they are therefore
the same as the uniquely defined relatively prime irreducible ideals R of M.
By Theorem XIV, the T̄i further generate a grouping Gi' of the R with the
given properties. Because, however, this grouping is unique, and every T̄i
is uniquely defined by the group Gi' “ Gi , it follows that T̄i “ Ti , proving
uniqueness.
Furthermore, for pairwise coprime ideals the least common multiple is
the same as the product.
Because by Theorem XIV the complement Li is also coprime to Ti for
M “ rT1 . . . Tτ s, there therefore exist elements
ti ” 0 pTi q;
li ” 0 pLi q;
ε “ ti ` li .
It therefore follows from f ” 0 pTi q, f ” 0 pLi q that, because
f “ f ε “ f ti ` f li , it also holds that f ” 0 pLi ¨ Ti q.
Because conversely Li ¨ Ti is divisible by rLi , Ti s, it follows that rLi , Ti s “
Li ¨ Ti , and by extension of the procedure over the Li , we conclude that
M “ rT1 , . . . , Tτ s “ T1 ¨ T2 ¨ . . . ¨ Tτ .
We have therefore proved
Theorem XV. Every ideal can be uniquely expressed as the product of
finitely many pairwise coprime and coprime irreducible ideals.
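The step from least common multiples to products used in Theorem XV can be observed numerically: for a coprime pair the two agree, while for a non-coprime pair they need not. The sketch below is an illustration with sympy (the helpers are ours); it compares [I, J] with I·J for the coprime pair (x), (x − 1) and for the non-coprime pair (x), (x, y).

```python
from sympy import symbols, groebner, expand

x, y, t = symbols('x y t')

def intersect(F, G):
    """[F, G] via elimination of t (lex order, t first)."""
    gb = groebner([t*f for f in F] + [(1 - t)*g for g in G], t, x, y, order='lex')
    return [p for p in gb.exprs if not p.has(t)]

def product(F, G):
    """F * G: generated by the products of a generator of F with a generator of G."""
    return [expand(f*g) for f in F for g in G]

def same_ideal(F, G):
    """Check equality by reducing the generators of each ideal modulo the other."""
    gF, gG = groebner(F, x, y), groebner(G, x, y)
    return all(gG.contains(f) for f in F) and all(gF.contains(g) for g in G)

# Coprime pair: least common multiple and product agree, both being (x^2 - x).
print(same_ideal(intersect([x], [x - 1]), product([x], [x - 1])))   # expected: True
# Non-coprime pair: [(x), (x, y)] = (x), whereas (x)*(x, y) = (x^2, x*y).
print(same_ideal(intersect([x], [x, y]), product([x], [x, y])))     # expected: False
```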
§9. Development of the study of modules. Equality of the number of components in decompositions
into irreducible modules.
We now show that the content of the first three sections, which relates
to irreducible ideals, not primary and prime ideals, still holds under less
restrictive conditions. These sections in particular do not use the law of
commutativity of multiplication and concern only the property of ideals
being modules, and so remain upheld for modules over non-commutative
rings, which are now to be defined. The definition of these modules shall be
based upon a double domain pΣ, T q with the following properties:
Σ is an abstractly defined non-commutative ring, that is, Σ is a system
of elements a, b, c, . . . , for which two operations are defined: ring addition
(#) and ring multiplication (×), which satisfy the laws set out in §1, with
the exception of law 4. regarding the commutativity of ring multiplication.
T is a system of elements α, β, γ, . . . for which, in conjunction with Σ,
two operations are also defined; addition, which for every two elements α, β
uniquely generates a third α ` β, and multiplication of an element α of T
with an element c of Σ, which uniquely generates an element c ¨ α of T .35
The following laws apply to these operations:
1. The law of associativity of addition: pα ` βq ` γ “ α ` pβ ` γq.
2. The law of commutativity of addition: α ` β “ β ` α.
3. The law of unrestricted and unique subtraction: there exists one and
only one element ξ in T which satisfies the equation α ` ξ “ β (written
ξ “ β ´ α).
4. The law of associativity of multiplication: a ¨ pb ¨ γq “ pa × bq ¨ γ.
5. The law of distributivity: pa#bq ¨ γ “ a ¨ γ ` b ¨ γ; c ¨ pα ` βq “ c ¨ α ` c ¨ β.
The existence of the zero element follows from these conditions, as is
well-known, and also the validity of the law of distributivity for subtraction
and multiplication:
pa ´ bq ¨ γ “ a ¨ γ ´ b ¨ γ;
c ¨ pα ´ βq “ c ¨ α ´ c ¨ β;
35
We are therefore dealing with “right” multiplication, a “right” domain T , and thus
“right” modules and ideals. Were we to have based T on a left multiplication α ¨ c, then
a corresponding theory of the left modules and ideals would follow; M would contain in
addition to α also α ¨ c. The law of associativity would have the form pγ ¨ bq ¨ a “ γ ¨ pb × aq here.
where (´) denotes subtraction in Σ. If Σ contains a unit ε, then ε ¨ α “ α
holds for all elements α of T .
Let a module M over pΣ, T q be understood to be a system of elements
of T which satisfies the following two conditions:
1. For each element α of M , c ¨ α is also an element of M , where c is an
arbitrary element of Σ.
2. For each pair of elements α and β of M , the difference α ´ β is also an
element of M ; therefore for each element α of M , nα is also an element of
M for every integer n.36
By this definition, T itself constitutes a module in pΣ, T q. If in particular
T and the operations defined for it coincide with the ring Σ and the operations applicable to it, then the module M becomes a (right) ideal M in Σ.
If Σ is taken to be commutative, then the usual concept of an ideal arises,
which therefore comes about as a special case of the concept of a module.37
All definitions in §1 remain upheld for modules: so α ” 0 pM q and
N ” 0 pM q mean that α and each element of N respectively are in M ;
in other words, α and N respectively are divisible by M . M is a proper
divisor of N if M contains elements not in N ; it follows from N ” 0 pM q,
M ” 0 pN q that M “ N . The definitions of the greatest common divisor and
the least common multiple remain upheld word for word. In particular, if
M contains a finite number of elements α1 . . . α̺ such that M “ pα1 . . . α̺q,
that is α “ c1 α1 ` ¨ ¨ ¨ ` c̺ α̺ ` n1 α1 ` . . . n̺ α̺ for every α ” 0 pM q, where
the ci are elements of Σ and the ni are integers, then M is called a finite
module, and α1 . . . α̺ a module basis.
In the following we now use, analogously to §1, only domains pΣ, T q that
satisfy the finiteness condition: every module in pΣ, T q is finite, and so has
a module basis.
36
These integers are again to be considered as abbreviatory symbols, not as ring elements.
37
The simplest example of a module is the module consisting of integer linear forms;
here Σ consists of all integers and T consists of all integer linear forms. A somewhat
more general module arises if algebraic integers are used instead of integers in Σ and T ,
or for instance all even numbers. If we consider the complex of all coefficients as one
element each time instead of the linear forms, then the operations in Σ and T are in
fact different. Ideals in non-commutative rings of polynomials form the subject matter
of the joint work of Noether-Schmeidler. The lectures of Hurwitz on the number theory
of quaternions (Berlin, Springer 1919) and the works of Du Pasquier cited there relate to
ideals in further special non-commutative rings.
Theorem I of the Finite Chain then also holds for modules for this domain pΣ, T q, as the proof there shows, and thus all requirements for §§2 and
3 are satisfied. It remains to directly carry over Definition I and Lemma I
regarding shortest and reduced representations, and likewise Theorem II regarding the ability to represent each module as the least common multiple of
finitely many irreducible modules, whereby Lemma II shows that each such
shortest representation is immediately reduced. Furthermore, Theorem III
also remains upheld, which conveys the reducibility of a module through the
properties of its complement; and from that follows Lemma III, and finally
Theorem IV, which states the equality of the number of components in two
different shortest representations of a module as the least common multiple
of irreducible modules. Theorem IV then yields Lemma IV as the converse
of Lemma I on reduced representations.
Remark. The same reasoning shows that all of these theorems and definitions remain upheld if we understand all modules to be two-sided (that
is, if α is an element, then c ¨ α and α ¨ c are also elements, and if α and β
are elements, then α ´ β is also an element) for two-sided domains T , that
is, domains that are both left domains and right domains.
While all these theorems are based only on the notions of divisibility and
the least common multiple, the further theorems of uniqueness are based crucially on the concept of product, and therefore do not have a direct translation. Therefore the definitions of primary and prime ideals do not translate
to modules, because the product of two elements of T is not defined. Although it is feasible to formally construct a translation to non-commutative
rings, it loses its meaning, because here the existence of the associated prime
ideals cannot be proved,38 and the proof that an irreducible ideal is primary
also breaks down. However, these two cases lay the foundations for the
38
For example, from
pλ1 1 ” 0 pQq,
pλ2 2 ” 0 pQq
it does not follow that
pp1 ´ p2 qλ1 `λ2 ” 0 pQq
and likewise from
pa ¨ bqλ ” 0 pQq
it does not follow that
aλ ¨ bλ ” 0 pQq.
As a result, P cannot be proven to be an ideal, nor does it have the properties of prime
ideals. For two-sided ideals, P can be proven to be an ideal, but still not a prime ideal.
uniqueness theorems – in contrast, if the non-commutative ring contains a
unit, coprime and coprime irreducible ideals can be defined, and the reasoning used in the proof of Theorem II shows that every ideal can be represented
as the least common multiple of finitely many pairwise coprime and coprime
irreducible ideals.39
Finally, we mention another criterion sufficient for the finiteness condition to be satisfied in pΣ, T q: if Σ contains a unit and satisfies the finiteness
condition, and if T is itself a finite module in pΣ, T q, then every module in
pΣ, T q is finite.
It follows from the existence of the unit in Σ namely that for
M “ pf1 , . . . , f̺ q,
f “ b̄1 f1 ` ¨ ¨ ¨ ` b̺̄ f̺ ` n1 f1 ` ¨ ¨ ¨ ` n̺ f̺ ,
where the b̄i are elements of Σ and the ni are integers, there is also a representation of the form f “ b1 f1 ` ¨ ¨ ¨ ` b̺ f̺ , where the bi are elements of Σ.
This is because fi “ εfi implies ni fi “ ni εfi , and since ni ε “ pε ` ¨ ¨ ¨ ` εq
belongs to Σ, it then follows that bi “ b̄i ` ni ε is an element of Σ. The
requirement regarding T now states that every element α of T has a representation
α “ ā1 τ1 ` ¨ ¨ ¨ ` āk τk ` n1 τ1 ` ¨ ¨ ¨ ` nk τk ,
and hence
α “ a1 τ1 ` ¨ ¨ ¨ ` ak τk ,
where the second representation arises from the first because ετi “ τi , as
above.
If the α now run through the elements of a module M in pΣ, T q, then the
coefficients ak of τk run through the elements of an ideal Mk in Σ; by the
above, for each ak ” 0 pMk q, we also have ak “ b1 akp1q ` ¨ ¨ ¨ ` b̺ akp̺q . If we let
αpiq denote an element of M for which the coefficient of τk is equal to akpiq ,
then α ´ b1 αp1q ´ . . . ´ b̺ αp̺q is an element belonging to M which depends
only on τ1 , . . . , τk´1 . This procedure can be repeated on the collection of
these elements no longer containing τk , which constitute a module M1 , so
that after finitely many repetitions the proposition is proved.
39
For special “completely reduced” ideals, uniqueness theorems can also be established
here; cf. the joint work of Noether-Schmeidler.
39
§10. Special case of the polynomial ring.
1. The ring Σ that we take as our starting point consists of all polynomials in x1 , . . . , xn with arbitrary complex coefficients, for which the finiteness
condition is satisfied due to Hilbert’s Module Basis Theorem (Math. Ann.,
vol. 36). This section concerns the connection of our theorems with the
known theorems of elimination theory and module theory.
This connection is established through the following special case of a
famous theorem of Hilbert’s:40
If f vanishes for every (finite) system of values of x1 , . . . , xn that is a
root of all polynomials of a prime ideal P (we call such a system a root of
P), then f is divisible by P. In other words, a prime ideal P consists of all
polynomials that vanish at these roots.41
If a product f ¨ g vanishes for all roots of P, then at least one factor vanishes; the roots form an irreducible algebraic figure. Conversely, should we
begin with this definition of the irreducible figure, then it follows that the collection of polynomials vanishing on an irreducible figure forms a prime ideal;
prime ideals and irreducible figures therefore correspond with each other bijectively. Furthermore, because Q ” 0 pPq and P̺ ” 0 pQq, the roots of
a primary ideal are the same as those of its associated prime ideal.42 The
representation of an ideal as the least common multiple of maximal primary
ideals therefore yields a partition of all roots of the ideal into irreducible
figures; and as Lasker has shown, the converse also holds. The proof of the
40
Über die vollen Invariantensysteme. Math. Ann. 42 (1893), §3, p313.
41
This special case can also be proven directly in the case of homogeneous forms, as
Lasker [Math. Ann. 60 (1905), p607] has shown, and then conversely this again implies
the Hilbert Theorem (in the homogeneous and the inhomogeneous case), which we can
state as follows: if an ideal R vanishes at all roots of M, then a power of R is divisible by
M. This theorem, and likewise the special case, only holds however if the codomain of the
x is algebraically closed, and therefore cannot follow from our theorems alone, but must
use the existence of roots; for instance, in the case that an ideal for which the only root is
x1 “ 0, . . . , xn “ 0 contains all products of powers of the x of a particular dimension. The
remaining proof can be simplified somewhat by using our theorems via Lasker. Lasker
must in particular also make use of the Hilbert Theorem for the proof of the decomposition
of an ideal into maximal primary ideals.
42
Macaulay (cf. Introduction) uses this property of a primary ideal having an irreducible
figure in the definition, while Lasker only incorporates the concept of the manifold of a
figure in the definition, which is otherwise abstractly defined. The primary ideals which
vanish only for x1 “ 0, . . . , xn “ 0 take a special place in Lasker.
40
uniqueness of the associated prime ideals therefore corresponds here with the
Fundamental Theorem of Elimination Theory regarding the unique decomposability of an algebraic figure into irreducible figures; it can serve as the
equivalent of this theorem of elimination theory for special polynomial rings
where no unique representation of the polynomial as the product of irreducible polynomials of the ring exists, and consequently also no elimination
theory.
The irreducible figures corresponding to the isolated primary ideals are
exactly those occurring in the “minimal resolvent”;43 since the roots of each
non-isolated maximal primary ideal are at the same time roots of at least one
isolated one, namely one for which the associated prime ideal is divisible by
that of the non-isolated ideal. The uniqueness of the isolated primary ideals
therefore yields new invariant multiplicities in the exponents. Additionally,
the uniqueness theorems for the decomposition of the primary ideals into
irreducible ideals can be viewed as an addendum to elimination theory, in
the sense of multiplicity.
Following these remarks, the different decompositions can be interpreted
in their relation to the algebraic figures. The pairwise coprime ideals correspond to figures that have no roots in common; for the mutually relatively
prime ideals, likewise no irreducible figure of one ideal has roots in common
with that of another; the maximal primary ideals vanish only in irreducible
figures which are all different from each other; for the decomposition into
irreducible ideals, the same irreducible figures can also appear repeatedly.
It should be noted that the ring of all homogeneous forms can also be
used in place of the general polynomial ring, since it is easy to convince
ourselves that the general theorems also remain upheld for the operations
valid there44 – the addition is only defined for forms of the same dimension.
A simple example of the four different decompositions – for which the
formulae below follow – is given, following the above remarks, by one straight
line and two further straight lines skew to it and intersecting each other, one
43
Cf. for example J. König, Einleitung in die allgemeine Theorie der algebraischen
Größen (Leipzig, Teubner, 1903), p235.
44
The following example shows that here inhomogeneous decompositions can also exist
as well as homogeneous ones in the case of non-uniqueness, however:
px3 , xy, y 3 q “ rpx3 , yq; py 3 , xqs “ rpxy, x3 , y 3 , x ` y 2 q; pxy, x3 , y 3 , y ` x2 qs.
41
of which contains a point of higher multiplicity other than the point of intersection. The decomposition into coprime-irreducible ideals corresponds to
the decomposition into the straight line and the figure skew to it; this figure
splits into the two straight lines it is composed of for the decomposition into
relatively prime irreducible ideals; the decomposition into maximal primary
ideals corresponds to a detachment of the point of higher multiplicity, while
the decomposition into irreducible ideals requires the removal of this point.
Should we take this point as the starting point, and the straight line
passing through it as the y-axis, the straight line intersecting this parallel
to the x-axis, and the straight line skew to it parallel to the z-axis, then one
such configuration is represented by the following irreducible ideals:45
B1 “ px ´ 1, yq;
B2 “ py ´ 1, zq;
B3 “ px, zq;
B4 “ px3 , y, zq;
B5 “ px2 , y 2 , zq.
The associated prime ideals are:
P1 “ B1 ;
P2 “ B2 ;
P3 “ B3 ;
P4 “ P5 “ px, y, zq.
The maximal primary ideals are:
Q1 “ B1 ;
Q2 “ B2 ;
Q3 “ B3 ;
Q4 “ rB4 , B5 s “ px3 , y 2 , x2 y, zq.
The relatively prime irreducible ideals are:
R1 “ Q1 ;
R2 “ Q2 ;
R3 “ rQ3 , Q4 s “ px3 , x2 y, xy 2 , zq.
The coprime irreducible ideals are:
S1 “ R1 ;
S2 “ rR2 , R3 s “ ppy ´ 1qx3 , py ´ 1qx2 y, py ´ 1qxy 2 , zq.
This produces the total ideal:
M “ rS1 , S2 s
“ S1 ¨ S2
“ ppx ´ 1qpy ´ 1qx3 , py ´ 1qx2 y, py ´ 1qxy 2 , px ´ 1qz, ypy ´ 1qx3 , yzq,
45
The first three of these are irreducible since they are prime ideals; B4 is irreducible
because it only has the divisors px2 , y, zq and px, y, zq; B5 is irreducible because every
divisor contains the polynomial xy.
42
which gives
1 “ ´py ´ 1qx3 ` py ´ 1qpx3 ´ 1q ` y,
py ´ 1qpx3 ´ 1q ` y ” 0 pS1 q;
´py ´ 1qx3 ” 0 pS2 q.
Here the ideals B1 , B2 , B3 are isolated, and so also uniquely determined in the decompositions into irreducible and maximal primary ideals. The ideals B4 and B5 , and respectively Q4 , are non-isolated; they
are not uniquely determined, but rather can be replaced for example with
D4 “ px3 , y ` λx2 , zq, D5 “ px2 ` µxy, y 2 , zq; similarly, Q4 can be replaced
with
Q̄4 “ px3 , x2 y, y 2 ` λxy, zq.
2. Similarly to the general (and the integer) polynomial ring, every
finite integral domain of polynomials also satisfies the finiteness condition46
– as Hilbert’s Module Basis Theorem shows – where the coefficients can be
assigned an arbitrary field. We shall give another example for the ring of
all even polynomials, as the simplest ring where, because x2 ¨ y 2 “ pxyq2 , no
unique factorisation of the polynomial into irreducible polynomials of the
ring exists. It uses the same configuration as in the above example, which
is now given through the irreducible ideals
B1 “ px2 ´ 1, xy, y 2 , yzq;
B2 “ py 2 ´ 1, xz, yz, z 2 q;
B3 “ px2 , xy, xz, yz, z 2 q;
B4 “ px4 , xy, y 2 , xz, yz, z 2 q;
B5 “ px2 , y 2 , xz, yz, z 2 q.
The associated prime ideals are:
P1 “ B1 ;
P2 “ B2 ;
P3 “ B3 ;
P4 “ P5 “ px2 , xy, y 2 , xz, yz, z 2 q.
It follows that P1 is a prime ideal because every polynomial of the ring
has the following form:
f ” φpz 2 q ` xzψpz 2 q pP1 q.
46
Conversely, if the finiteness condition is satisfied for a polynomial ring, and if each
polynomial has at least one representation where the factors of lower degree are in x, then
the ring is a finite integral domain.
43
Therefore let
f1 ” φ1 pz 2 q ` xzψ1 pz 2 q;
f2 ” φ2 pz 2 q ` xzψ2 pz 2 q,
and thus, since f1 ¨ f2 ” 0 pP1 q, we have the existence of the following
equations:
φ1 φ2 ` z 2 ψ1 ψ2 “ 0;
φ2 ψ1 ` φ1 ψ2 “ 0,
and so f1 ” 0 pP1 q or f2 ” 0 pP1 q.
We can show that P2 is a prime ideal in precisely the same way; P3 is
a prime ideal because every polynomial of the ring mod P3 is congruent to
a polynomial in y 2 ; P4 consists of all polynomials of the ring. It therefore
also follows that B1 , B2 and B3 are irreducible, as they are prime ideals;
B4 and B5 each have only the sole proper divisor P4 , and so are necessarily
irreducible.
From the irreducible ideals arise the maximal primary ideals:
Q1 “ B1 ; Q2 “ B2 ; Q3 “ B3 ; Q4 “ rB4 , B5 s “ px4 , x3 y, y 2 , xz, yz, z 2 q;
the relatively prime irreducible ideals:
R1 “ Q1 ;
R2 “ Q2 ;
R3 “ rQ3 , Q4 s “ px4 , x3 y, x2 y 2 , xy 3 , xz, yz, z 2 q,
the coprime irreducible ideals:
S1 “ R1
S2 “ rR2 , R3 s
“ ppy 2 ´ 1qx4 , py 2 ´ 1qx3 y, py 2 ´ 1qx2 y 2 , py 2 ´ 1qxy 3 , xz, yz, z 2 q.
As in the first example, it follows that
rB4 , B5 s “ rD4 , D5 s;
where D4 “ px4 , xy ` λx2 , . . . q and D5 “ px2 ` µxy, . . . q for λ ¨ µ ‰ 1; and
rQ3 , Q4 s “ rQ3 , Q̄4 s;
where Q̄4 “ px4 , x3 y, y 2 ` λxy, . . . q.
The remaining irreducible ideals (and maximal primary ideals respectively) B1 , B2 , B3 are uniquely determined as isolated ideals.
44
§11. Examples from number theory and the theory
of differential expressions.
1. Let the ring Σ consist of all even integers. Σ can then be bijectively
assigned to all of the integers, since each number 2a in Σ can be allocated
the number a. From this it immediately follows that every ideal in Σ is a
principal ideal p2aq, where in the basis representation 2c “ n ¨ 2a of each
element 2c of the ideal, the odd numbers n only amount to abbreviations
for finite sums.
The prime ideals of the ring are given by P0 “ O “ p2q and P “ p2pq,
where p is an odd prime number; therefore every prime ideal is divisible by
P0 , but by no other prime ideal. The primary ideals are given by Q̺0 “
p2 ¨ 2̺0 q and Q̺ “ p2p̺ q; they are at the same time irreducible ideals, and by
what has been said about prime ideals, each two corresponding to different
odd prime numbers are mutually relatively prime, but no Q is relatively
prime to any Q̺0 ,47 and so the Q̺0 are the only non-isolated primary ideals.
The unique decomposition of a into prime powers corresponds to the unique
representation of the ideal p2aq through maximal primary (and at the same
time irreducible) ideals:
p2aq “ rp2 ¨ 2̺0 q, p2p1 q̺1 , . . . , p2pα q̺α s;
therefore, contrary to the examples from the polynomial ring, the nonisolated maximal primary ideals are also uniquely determined. As for ̺0 “ 0,
it behaves also as a representation through mutually relatively prime ideals,
while for ̺0 ą 0 the ideal is relatively prime irreducible.
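As a small illustration of this decomposition, the following Python sketch (our own addition, not part of the translation; it assumes sympy's factorint is available and the helper name is invented) lists the generators of the maximal primary components of the ideal p2aq in the ring of all even numbers.

```python
from sympy import factorint

def primary_components(a):
    """For the ring of all even integers, list the generators 2*2^r0, 2*p^r
    of the maximal primary (and irreducible) components of the ideal (2a)."""
    factors = factorint(a)          # a = 2^r0 * p1^r1 * ... * pk^rk
    r0 = factors.pop(2, 0)
    components = [2 * 2 ** r0] if r0 > 0 else []
    components += [2 * p ** e for p, e in sorted(factors.items())]
    return components

# a = 60 = 2^2 * 3 * 5, so (120) = [(8), (6), (10)] as ideals of even numbers
print(primary_components(60))   # [8, 6, 10]
```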
While the four different decompositions therefore coincide in the ring
of all integers, this is only the case here for the two decompositions into
maximal primary ideals and irreducible ideals on the one hand, and for the
decompositions into coprime irreducible ideals (every ideal is coprime irreducible, because the ring has no units) and relatively prime irreducible ideals
for ̺0 ą 0 on the other hand, while for ̺0 “ 0 the coprime irreducible and
relatively prime irreducible decompositions are different from each other.
At the same time, an example presents itself here. A prime ideal can be
divisible by another without having to be identical to it; more generally, the
47
In fact, it always follows from 2b ¨ 2p̺11 ” 0 p2p̺22 q for odd p1 ‰ p2 that 2b ” 0 p2p̺22 q;
however, 2b ¨ 2p̺ ” 0 p2 ¨ 2̺0 q only implies that 2b ” 2 ¨ 2̺0 ´1 p2 ¨ 2̺0 q.
45
factorisation does not follow from divisibility. The last one – as a consequence of the fact that the ring contains no units – is also the reason that
no unique factorisation of the numbers of the ring into irreducible numbers
of the ring exists, although each ideal is a principal ideal; the introduction of
the least common multiple therefore proves to be necessary here. It should
also be noted that the relationships remain exactly the same if instead of
all even numbers, we use all numbers divisible by a fixed prime number or
prime power.
However, irreducible and primary ideals are different as well if Σ consists
of all numbers divisible by a composite number g “ p1σ1 . . . pσν ν . Every ideal
is again a principal ideal pg ¨ aq, and the prime ideals are again given by
P0 “ O “ pgq, P “ pg ¨ pq, where p is a prime number different from the
prime numbers p appearing in g. In contrast, the irreducible ideals are given
by Bλi “ pg¨pλi i q, B̺ “ pg¨p̺ q; the primary ideals are given by Q̺ “ B̺ and
by the Qλ1 ...λν “ pg ¨ pλ1 1 . . . pλν ν q different from the irreducible ideals, where
the Bλi and Qλ1 ...λν all have the same associated prime ideal P0 “ pgq.
Uniqueness of the decomposition into irreducible ideals also holds here, and
consequently uniqueness of the decomposition into maximal primary ideals
too, and so the non-isolated ideals are again uniquely determined here too.
2. One example of a non-commutative ring is presented by the ideal theory in non-commutative polynomial rings discussed in the Noether-Schmeidler
paper. It concerns in particular “completely reducible” ideals, that is, ideals for which the components of the decomposition are pairwise coprime and
have no proper divisors; the components are therefore a fortiori irreducible.
The equality of the number of components in two different decompositions
therefore follows from §9, in addition to the isomorphism proved there. Thus
a consequence of the decomposition of systems of partial or ordinary linear
differential expressions arising as a special case of the paper is obtained,
which even appears not to have been remarked upon in the well-known case
of an ordinary linear differential expression.
Meanwhile, the system T of all cosets of a fixed ideal M together with
the non-commutative polynomial ring Σ gives a double domain pΣ, T q, where
T has the module property with respect to Σ, since the difference of two
cosets is again a coset, and likewise the product of a coset with an arbitrary
polynomial, whereas the product of two cosets does not exist (loc. cit.
§3). The systems of cosets denoted there as “subgroups” form examples of
46
modules in double domains pΣ, T q where the ring Σ is non-commutative.
§12. Example from elementary divisor theory.
This section deals with a concept of elementary divisor theory contingent on the general developments thereof, which is however itself presumed
to be known.
Let Σ be the ring of all integer matrices with n2 elements, for which addition and multiplication are defined in the conventional sense for matrices.
Σ is then a non-commutative ring; the ideals are thus in general one-sided,
and two-sided ideals only arise as a special case.48
We first show that every ideal is a principal ideal. In order to do this,
we allocate (for right ideals) each matrix A “ paik q a module
A “ pa11 ξ1 ` ¨ ¨ ¨ ` a1n ξn , . . . , an1 ξ1 ` ¨ ¨ ¨ ` ann ξn q
consisting of integer linear forms. Conversely, each matrix which gives a
basis of A corresponds to this module, so in addition to A, U A also corresponds to A, where U is unimodular. More generally, a module B which is
a multiple of A corresponds to the product P A. A single linear form from
A is given by a P which contains only one non-zero row.
Now let A1 , A2 , . . . , Aν , . . . be all the elements of an ideal M, with A1 ,
A2 , . . . , Aν , . . . the modules assigned to them, A the greatest common
divisor of these, and U A the most general matrix assigned to this module A.
To every individual linear form from A then corresponds a matrix P1 Ai1 `
¨ ¨ ¨ ` Pσ Aiσ by definition of the greatest common divisor, where as above
the P contain only one non-zero row. From this it follows that the matrix
A corresponding to a basis of A can also have such a representation, now
with general P , and thus is an element of M. Because furthermore every
module Ai is divisible by A, every matrix Ai is therefore divisible by A; A,
and in general U A, forms a basis of M. If we were to deal with left ideals,
then we would have to consider the columns of each matrix as the basis of a
module accordingly; each ideal is a principal ideal for which A being a basis
48
The ideal theory of these rings forms the subject of the papers of Du Pasquier: Zahlentheorie der Tettarionen, Dissertation Zürich, Vierteljahrsschr. d. Naturf. Ges. Zürich,
51 (1906). Zur Theorie der Tettarionenideale, ibid., 52 (1907). The content of the two
papers is the proof that each ideal is a principal ideal.
47
implies that AV is also a basis, where V is understood to be an arbitrary
unimodular matrix.
In the following we now use two-sided ideals in the context of elementary divisor theory, which in particular implies that if A belongs to the
ideal, then so does P AQ. Thus, by that shown above, the most general
basis of one such ideal is given by U AV , where U and V are unimodular.
The basis elements therefore run through a class of equivalent matrices,49
and a one-to-one relationship exists between the ideal and the class. Consequently, such a relationship also exists between the ideal and the elementary
divisor system pa1 |a2 | . . . |an q of the class, where the ai are known to be nonnegative integers, each of which divides the following one. The matrix of
the class occurring in the normal form induced by the elementary divisors
can therefore be understood as a special basis of the ideal; the divisibility
of the elementary divisors follows from that of the ideals, respectively the
classes, and vice versa.
Now, however, Du Pasquier has shown loc. cit. §11 that for each twosided ideal the rank is n and all elementary divisors are the same. In order
to be able to consider the case of general elementary divisors, we must
therefore start not from the ideals, but directly from the two-sided classes
(which shall be denoted by capital letters such as A, B, C, . . . ). A class
A “ U AV is therefore divisible by another class B here if A “ P BQ.
In general, the following statement holds: the least common multiple
(greatest common divisor) of two classes is obtained through the construction
of the least common multiple (greatest common divisor) of the corresponding
systems of elementary divisors.50
Let pa1 |a2 | . . . |an q, pb1 |b2 | . . . |bn q, pc1 |c2 | . . . |cn q be the systems of elementary divisors of A, B, C˚ respectively, where ci “ rai , bi s, and let
C “ rA, Bs. Then C˚ is divisible by A and B, and therefore by C, and
conversely the elementary divisors of C are divisible by those of C˚ , which
implies that C is divisible by C˚ , and thus C “ C˚ . The proof for the greatest
49
The right and left classes respectively corresponding to the basis elements of the onesided ideals.
50
In the paper Zur Theorie der Moduln, Math. Ann. 52 (1899), p1, E. Steinitz defines
the least common multiple (greatest common divisor) of classes through least common multiples and greatest common divisors of the systems of elementary divisors. Independently
of this, the least common multiple of classes is found as the “congruence composition” in
H. Brandt, Komposition der binären quadratischen Formen relativ einer Grundform, J. f.
M. 150 (1919), p1.
48
common divisors proceeds likewise.
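The componentwise construction is easy to state concretely; the following Python sketch (our own illustration, with invented example systems) forms the least common multiple and greatest common divisor of two elementary divisor systems.

```python
from math import gcd

def lcm(a, b):
    # convention: the elementary divisor 0 absorbs everything, so lcm(x, 0) = 0
    return 0 if a == 0 or b == 0 else a * b // gcd(a, b)

def system_lcm(A, B):
    """Componentwise l.c.m. of two elementary divisor systems (a1|a2|...|an)."""
    return tuple(lcm(a, b) for a, b in zip(A, B))

def system_gcd(A, B):
    """Componentwise g.c.d. of two elementary divisor systems."""
    return tuple(gcd(a, b) for a, b in zip(A, B))

# two invented systems of elementary divisors for 3x3 integer matrix classes
A = (1, 2, 12)   # 1 | 2 | 12
B = (2, 2, 18)   # 2 | 2 | 18
print(system_lcm(A, B))   # (2, 2, 36): the system of the least common multiple
print(system_gcd(A, B))   # (1, 2, 6):  the system of the greatest common divisor
```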
The unique representation of the elementary divisors ai as the least common multiple of prime powers therefore corresponds to a representation of A
as the least common multiple of classes Q, the elementary divisors of which
are given by the powers of one prime number, or in symbols:
Q „ ppr1 |pr2 | . . . |pr̺ |0| . . . |0q;
r1 ď r2 ď ¨ ¨ ¨ ď r̺ .
If in particular the rank ̺ is equal to n, then we have a decomposition into
coprime and coprime irreducible classes here, which, although the ring is
non-commutative, is unique.
The classes Q can further be decomposed into classes which correspond
to the system of elementary divisors:
B1 „ ppr1 | . . . |pr1 q; B2 „ p1|pr2 | . . . |pr2 q; . . . ; B̺ „ p1| . . . |1|pr̺ | . . . |pr̺ q;
B̺`1 „ p1| . . . |1|0| . . . |0q,
where the number 1 appears pν ´ 1q times in each Bν . If r1 “ r2 “ . . . “
rµ “ 0 here, then B1 , . . . , Bµ are equal to the trivial class and are therefore
left out of the decomposition; the same holds for B̺`1 if ̺ “ n. If furthermore rν “ rν`1 “ . . . “ rν`λ , then Bν`1 , . . . , Bν`λ are proper divisors
of Bν , and are therefore likewise to be excluded. Denote those remaining by Bi1 , . . . , Bik , which now provide a shortest representation, so that
Q “ rBi1 , . . . , Bik s is the unique decomposition of Q into irreducible classes.
Namely, let Bν „ p1| . . . |1|prν | . . . |prν q be representable as the least common multiple of C „ p1| . . . |1|ps1 | . . . |psλ q and D „ p1| . . . |1|pt1 | . . . |ptµ q:
then the number 1 must appear in the first pν ´ 1q positions of the system
of elementary divisors of C and D; in the νth position, the exponent s1 or
t1 must be equal to rν ; let this be s1 without loss of generality. Because
however rν “ s1 ď s2 ď sλ ď rν , it follows that C “ Bν , and so Bν is
irreducible.51 The same applies for B̺`1 , where p is replaced with 0. Each
51
In contrast, the B are reducible in one-sided classes; here no unique relationship exists
between elementary divisors and classes anymore, and therefore there is also no unique
decomposition into irreducible one-sided classes. This is shown by the following example
given to me by H. Brandt regarding the decomposition into right classes (where the classes
are represented by a basis matrix, and respectively by the corresponding module):
B “ rC1 , C2 s “ rD1 , D2 s; B „ pp, 0; 0, pq; C1 „ p1, 0; 0, pq;
49
of these irreducible classes gives, as the construction of the least common
multiple shows, a particular exponent, and the position where this exponent first appears in the system of elementary divisors of Q, respectively A,
while B̺`1 indicates the rank. Because these numbers are uniquely determined by the system of elementary divisors of Q, and respectively A, and
because the relation between elementary divisors and classes is bijective, the
decomposition of Q, and likewise that of an arbitrary class, into irreducible
classes is therefore unique. In summary, we can say the following: Every
two-sided class A consisting of integer matrices with a bounded number of
elements can be uniquely expressed as the least common multiple of finitely
many irreducible two-sided classes. Each irreducible class in this represents
a fixed prime divisor of the system of elementary divisors of A, an associated
exponent, and the position where this exponent first appears. The irreducible
class corresponding to the divisor 0 indicates the rank of A.
Erlangen, October 1920.
(Received on 16/10/1920.)
C2 „ pp, 0; 0, 1q; D1 „ p1, 0; 0, pq; D2 „ pp, 0; p ´ 1, 1q.
In fact, the modules pξ, pηq, ppξ, ηq, ppξ, pp ´ 1qξ ` ηq each have only the module pξ, ηq as
a proper divisor, and so are irreducible and furthermore are different from each other.
50
Translator’s notes.
Pairwise coprime (p2) - This is defined in the usual way.
Relatively prime (p2) - Although today we consider this to mean the same
as pairwise coprimality, Noether distinguishes the two.
Residual module (p25) - At the time this paper was written, the idea of a
quotient was relatively new, and so different names were in use. Lasker’s
residual module is simply another name for the same concept.
Double domain (p36) - In modern terms this is a non-commutative ring and
a module on it. Noether defines a module on a double domain to be what
in modern terms is a submodule of the module in the double domain.
Integer linear forms (p37) - Also known as integral linear combinations.
Domains (p38) - This concept does not appear to have a direct modern
equivalent. It seems to be similar to the idea of a ring, but more ambiguous
in definition. As T is clearly not itself a ring, this name has been kept.
Irreducible algebraic figure (p40) - This concept has a long history, most
notably used by Weierstrass. It is in many ways similar to the modern
notion of an algebraic variety, however there is not enough evidence to show
that they are indeed the same, so this term has been avoided.
Acknowledgements.
Special thanks go to John Rawnsley, Colin McLarty, John Stillwell, Steve
Russ, Derek Holt, Daniel Lewis and Anselm Bründlmayer for their invaluable advice, and in particular to Jeremy Gray for providing this opportunity
and for his support throughout.
51
CONJUGACY GROWTH OF COMMUTATORS
arXiv:1802.09507v1 [] 26 Feb 2018
PETER S. PARK
Abstract. For the free group Fr on r > 1 generators (respectively, the free product G1 ∗ G2 of
two nontrivial finite groups G1 and G2 ), we obtain the asymptotic for the number of conjugacy
classes of commutators in Fr (respectively, G1 ∗ G2 ) with a given word length in a fixed set of free
generators (respectively, the set of generators given by the nontrivial elements of G1 and G2 ). A
geometric interpretation of our result is that for any connected CW-complex X with fundamental
group isomorphic to Fr or G1 ∗ G2 , we obtain the asymptotic number of free homotopy classes
of loops γ : S 1 → X with word length k such that there exists a torus Y with one boundary
component and a continuous map f : Y → X satisfying f (∂Y ) = Im γ. Our result is proven by
using the classification of commutators in free groups and in free products by Wicks, and builds
on the work of Rivin, who asymptotically counted the conjugacy classes of commutator-subgroup
elements in Fr with a given word length.
1. Introduction
Let G be a finitely generated group with a finite symmetric set of generators S. Any element
g ∈ G can then be written as a word in the letters of S, and one can define the length of g by
inf { k ∈ Z≥0 : ∃ c1 , . . . , ck ∈ S such that g = c1 · · · ck }.
Consider the closed ball Bk (G, S) ⊂ G of radius k in the word metric, defined as the subset
consisting of elements with length ≤ k. One can then ask natural questions about the growth of
G: how large is |Bk (G, S)| as k → ∞, and more generally, what connections can be made between
the properties of G and this notion of its growth rate? For example, one of the pioneering results
on this question is that of Gromov [11], who classified the groups G with polynomial growth.
Since the middle of the 20th century, the growth of groups has been widely studied in various
contexts largely arising from geometric motivations, such as characterizing the volume growth of
Riemannian manifolds and Lie groups. In fact, in addition to applying knowledge on the growth
of groups to describe growth in such geometric settings, one can also pass information in the
other direction, i.e., use information on volume growth in geometric settings to yield consequences
regarding the growth of the relevant groups. For instance, Milnor [19] used inequalities relating
the volume and curvature of Riemannian manifolds to prove that the fundamental group of any
compact, negatively-curved Riemannian manifold has at least exponential growth.
A group that arises especially commonly in such geometric contexts is the free group Fr on r > 1
generators, which also has exponential growth; more precisely, after fixing a symmetric generating
set S ··= {x1 , . . . , xr , x1^{−1} , . . . , xr^{−1} }, it is easy to see that
|Bk (G, S)| = 1 + Σ_{i=1}^{k} |∂Bi (G, S)| = 1 + Σ_{i=1}^{k} 2r(2r − 1)^{i−1} = 1 + r((2r − 1)^k − 1)/(r − 1),
where ∂Bk (G, S) denotes the subset of length-k elements.
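This count is easy to confirm by brute force for small parameters; the following Python sketch (ours, purely illustrative, with an invented letter encoding) enumerates reduced words directly and compares the result with the closed form.

```python
from itertools import product

def ball_size(r, k):
    """Count reduced words of length <= k in F_r by brute force."""
    letters = [(i, s) for i in range(r) for s in (1, -1)]  # (generator index, sign)
    total = 1  # the empty word
    for length in range(1, k + 1):
        for word in product(letters, repeat=length):
            # reduced: no letter is immediately followed by its inverse
            if all(not (a[0] == b[0] and a[1] == -b[1]) for a, b in zip(word, word[1:])):
                total += 1
    return total

r, k = 2, 5
closed_form = 1 + r * ((2 * r - 1) ** k - 1) // (r - 1)
assert ball_size(r, k) == closed_form == 485
```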
Date: February 27, 2018.
1.1. Motivation and main results. In certain contexts, it is more natural to consider the growth
rate of the conjugacy classes of G. For a given conjugacy class C of G, define the length of C by
inf_{g∈C} length(g),
and define ∂Bkconj (G, S) as the set of conjugacy classes of G with length k. In the case of Fr ,
the minimal-length elements of a conjugacy class are precisely its cyclically reduced elements, all
of which are cyclic conjugates of each other. The conjugacy growth of Fr can be described as
∂Bkconj (G, S) ∼ (2r − 1)k /k, which agrees with the intuition of identifying the cyclic conjugates
among the 2r(2r − 1)k−1 words of length k; for the full explicit formula, see [15, Proposition 17.8].
One context for which conjugacy growth may be a more natural quantity to study than the growth
rate in terms of elements is when characterizing the frequency with which a conjugacy-invariant
property occurs in G. An example of such a property is membership in the commutator subgroup
[G, G]. On this front, Rivin [21] computed the number ck of length-k cyclically reduced words in
Fr that are in the commutator subgroup (i.e., have trivial abelianization) to be the constant term
in the expression
(2√(2r − 1))^k · Tk( (1/(2√(2r − 1))) Σ_{i=1}^{r} ( xi + 1/xi ) ),
where Tk denotes the kth Chebyshev polynomial of the first kind. This quantity can asymptotically
be described as ck ∼ Cr (2r − 1)k /kr/2 for some positive constant Cr depending only on r. Furthermore, from the number of cyclically reduced words with trivial abelianization, one can derive
the growth of conjugacy classes with trivial abelianization by using Möbius inversion, due to the
following relationships:
ck = Σ_{d|k} pd ,
where pd denotes the number of primitive (i.e., not a proper power of any subword) length-d words
with trivial abelianization, and
|∂Bk^conj (G, S) ∩ [G, G]| = Σ_{d|k} pd /d ,
which together imply by Möbius inversion that
(1.1)    |∂Bk^conj (G, S) ∩ [G, G]| = Σ_{d|k} (1/d) Σ_{e|d} µ(d/e) ce = Σ_{e|k} (ce /e) Σ_{d′|(k/e)} µ(d′)/d′ = Σ_{e|k} (ce /e) · φ(k/e)/(k/e) = Σ_{e|k} ce φ(k/e)/k .
In the above, φ denotes the Euler totient function. For details on this derivation, the reader is
directed to [15, Chapter 17] .
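For small r and k, relation (1.1) can be checked directly; the sketch below (our own brute-force illustration in Python, with invented helper names) computes the counts ce by enumeration and compares the right-hand side of (1.1) with a direct count of conjugacy classes.

```python
from itertools import product
from math import gcd

def phi(n):
    """Euler totient, by brute force."""
    return sum(1 for i in range(1, n + 1) if gcd(i, n) == 1)

def trivial_ab_cyc_reduced(r, n):
    """Cyclically reduced words of length n in F_r with trivial abelianization."""
    letters = [(i, s) for i in range(r) for s in (1, -1)]
    found = []
    for w in product(letters, repeat=n):
        adjacent = list(zip(w, w[1:])) + ([(w[-1], w[0])] if n > 0 else [])
        if any(a[0] == b[0] and a[1] == -b[1] for a, b in adjacent):
            continue  # a cancellation occurs, so w is not cyclically reduced
        exponent_sum = [0] * r
        for i, s in w:
            exponent_sum[i] += s
        if all(e == 0 for e in exponent_sum):
            found.append(w)
    return found

def count_classes(words):
    """Number of cyclic-rotation classes among the given words."""
    return len({min(w[i:] + w[:i] for i in range(len(w))) for w in words})

r, k = 2, 6
c = {e: len(trivial_ab_cyc_reduced(r, e)) for e in range(1, k + 1) if k % e == 0}
rhs = sum(c[e] * phi(k // e) for e in c) // k      # right-hand side of (1.1)
assert count_classes(trivial_ab_cyc_reduced(r, k)) == rhs
print(rhs)
```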
In this paper, we answer the analogous question for commutators rather than for commutatorsubgroup elements. This new inquiry is structurally different in that it aims to solve a Diophantine equation over a group G (whether, for a given W ∈ G, there exist X, Y ∈ G such that
XY X −1 Y −1 = W ), rather than a subgroup-membership problem (whether W is in [G, G]). In
particular, the set of commutators is not multiplicatively closed, so we cannot use primitive words
as a bridge between counting cyclically reduced words and counting conjugacy classes as above.
Instead, we use a theorem of Wicks [28], which states that an element of Fr is a commutator if and
only if it is a cyclically reduced conjugate of a commutator satisfying the following definition.
Definition 1.1. A Wicks commutator of Fr is a cyclically reduced word W ∈ Fr of the form
ABCA−1 B −1 C −1 . Equivalently, it is a word of the form ABCA−1 B −1 C −1 such that the subwords
A, B, and C are reduced; there are no cancellations between the subwords A, B, C, A−1 , B −1 , and
C −1 ; and the first and last letters are not inverses.
Using this classification, we count the number of conjugacy classes of commutators in Fr with
length k by counting the number of Wicks commutators with length k.
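Definition 1.1 can be tested mechanically; the following Python sketch (ours, encoding letters as (generator index, sign) pairs) checks whether a given word is a Wicks commutator by trying all splittings of its first half into A, B, C.

```python
def inverse(word):
    """Formal inverse of a word given as a tuple of (generator index, sign) letters."""
    return tuple((g, -s) for g, s in reversed(word))

def is_reduced(word):
    return all(not (a[0] == b[0] and a[1] == -b[1]) for a, b in zip(word, word[1:]))

def is_wicks_commutator(w):
    """Definition 1.1: w is cyclically reduced and equals A B C A^-1 B^-1 C^-1 letter for letter."""
    n = len(w)
    if n % 2 != 0 or not is_reduced(w):
        return False
    if n > 0 and w[0][0] == w[-1][0] and w[0][1] == -w[-1][1]:
        return False  # first and last letters are inverses, so w is not cyclically reduced
    half = n // 2
    for i in range(half + 1):          # try every split of the first half into A, B, C
        for j in range(i, half + 1):
            A, B, C = w[:i], w[i:j], w[j:half]
            if w[half:] == inverse(A) + inverse(B) + inverse(C):
                return True
    return False

x1, x2 = (0, 1), (1, 1)                                   # the generators x1, x2 of F_2
print(is_wicks_commutator((x1, x2, (0, -1), (1, -1))))    # True:  x1 x2 x1^-1 x2^-1
print(is_wicks_commutator((x1, x2, (0, -1), (1, 1))))     # False: x1 x2 x1^-1 x2 is not a commutator
```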
Theorem 1.2. Let k ≥ 0 be even. The number of distinct conjugacy classes of commutators in Fr
with length k is given by
( (2r − 2)^2 (2r − 1)^{k/2 − 1} / (96r) ) ( k^2 + Or (k) ) ,
where the implied constant depends only on r and is effectively computable.
Note that the number of conjugacy classes of commutators in Fr is roughly proportional to the
square root of the number (1.1) of all conjugacy classes with trivial abelianization.
We also employ a similar argument, using Wicks’ characterization of commutators in free products, to answer the analogous question for the free product G1 ∗ G2 of two nontrivial finite groups.
We consider the set of generators S ··= (G1 \ {1}) ∪ (G2 \ {1}) of G1 ∗ G2 . Then, a theorem of
Wicks [28] analogous to the previous one implies that an element of G1 ∗ G2 is a commutator if and
only if it is a cyclic conjugate of a commutator satisfying the following definition.
Definition 1.3. A Wicks commutator of G1 ∗ G2 is a word W ∈ G1 ∗ G2 that is fully cyclically
reduced, which is to say that the adjacent letters (i.e., nonidentity elements of G1 and G2 in the
word) are in different factors of the free product, as are the first and last letters; and in one of the
following forms:
(1) α ∈ G1 that is a commutator of G1 ,
(2) α ∈ G2 that is a commutator of G2 ,
(3) Aα1 Aα2^{−1} for α1 , α2 ∈ G2 that are conjugates,
(4) α1 Aα2^{−1} A for α1 , α2 ∈ G1 that are conjugates,
(5) ABA−1 B −1 ,
(6) Aα1 Bα2 A−1 α3 B −1 α4 for α1 , α2 , α3 , α4 ∈ G2 satisfying α4 α3 α2 α1 = 1,
(7) α1 Aα2 Bα3 A−1 α4 B −1 for α1 , α2 , α3 , α4 ∈ G1 satisfying α4 α3 α2 α1 = 1,
(8) Aα1 Bβ1 Cα2 A−1 β2 B −1 α3 C −1 β3 for α1 , α2 , α3 , β1 , β2 , β3 ∈ G2 satisfying α3 α2 α1 = 1 and
β3 β2 β1 = 1, with A, B, and C nontrivial,
(9) β1 Aα1 Bβ2 Cα2 A−1 β3 B −1 α3 C −1 for α1 , α2 , α3 , β1 , β2 , β3 ∈ G1 satisfying α3 α2 α1 = 1 and
β3 β2 β1 = 1, with A, B, and C nontrivial.
A fully cyclically reduced element W ∈ G1 ∗ G2 with length k > 1 alternates between k/2 letters
in G1 \ {1} and k/2 letters in G2 \ {1}, where k/2 is necessarily an integer. Thus, if k > 1 is odd,
then there are no fully cyclically reduced elements of G1 ∗ G2 with length k. Furthermore, for W
to be a Wicks commutator not of the form (1) or (2) in Definition 1.3, it is necessary not only that
k is even, but also that k/2 is an even integer. Thus, all but possibly finitely many length-1 Wicks
commutators of G1 ∗ G2 have length divisible by 4. Thus, for any k divisible by 4, we obtain the
number of length-k conjugacy classes in G1 ∗ G2 comprised of commutators.
Theorem 1.4. Let k ≥ 0 be a multiple of 4. The number of distinct conjugacy classes of commutators in G1 ∗ G2 with length k is given by
(1/192) ( (|G1 | − 1)(|G2 | − 2)^2 + (|G1 | − 2)^2 (|G2 | − 1) ) k^2 (|G1 | − 1)^{k/4 − 1} (|G2 | − 1)^{k/4 − 1}
+ O_{|G1 |,|G2 |} ( k (|G1 | − 1)^{k/4} (|G2 | − 1)^{k/4} ) ,
where the implied constant only depends on |G1 | and |G2 |, and is effectively computable.
Suppose that 4 | k > 0. Then, the set of cyclically reduced elements of G1 ∗ G2 (whose letters of
odd position are elements of G1 , without loss of generality) with length k and trivial abelianization
bijectively maps to the Cartesian product of two sets: the set of closed paths of length k/2 on the
complete graph K|G1 | with fixed basepoint P1 , and the set of closed paths of length k/2 on the
complete graph K|G2 | with fixed basepoint P2 . For a graph E, the number of closed paths on E
with length n is given by Σ_λ λ^n , where λ ranges over the eigenvalues (counted with multiplicity)
of the adjacency matrix of E. For m ≥ 2, the adjacency matrix of Km is the m × m matrix with
diagonal entries 0 and all other entries 1; its eigenvalues are m − 1 (with multiplicity 1) and −1
(with multiplicity m − 1). Consequently, for even integers n > 0, the number of closed paths on
Km with length n is given by (m − 1)n + (m − 1)(−1)n = m(m − 1)n−1 . By symmetry between
the m vertices, the number of such closed paths with a given basepoint is (m − 1)n−1 . Thus, the
number of cyclically reduced words with length k and trivial abelianization in G1 ∗ G2 is given by
(|G1 | − 1)^{k/2 − 1} (|G2 | − 1)^{k/2 − 1} .
Applying Möbius inversion as done in (1.1), we see that the number of conjugacy classes of commutators in G1 ∗ G2 with length k is roughly comparable to the square root of the number of all
length-k conjugacy classes with trivial abelianization.
1.2. Geometric applications. Counting conjugacy classes of commutators has a topological application. Let X be a connected CW-complex with fundamental group π1 (X), and let C be a
conjugacy class of π1 (X) with trivial abelianization, corresponding to the free homotopy class of a
homologically trivial loop γ : S 1 → X. Then, the commutator length of C, defined as the minimum
number of commutators whose product is equal to an element of C, is also the minimum genus of
an orientable surface (with one boundary component) that continuously maps to X so that the
boundary of the surface maps to γ [3, Section 2.1]. Thus, a conjugacy class of π1 (X) is comprised of
commutators if and only if its corresponding free homotopy class γ : S 1 → X satisfies the following
topological property:
(⋆)
There exists a torus Y with one boundary component
and a continuous map f : Y → X satisfying f (∂Y ) = Im γ.
Consequently, we obtain the following.
Corollary 1.5. Let X be a connected CW-complex.
(1) Suppose X has fundamental group Fr with a symmetric set of free generators S. Then, the
number of free homotopy classes of loops γ : S 1 → X with length k (in the generators of S)
satisfying Property (⋆) is given by
( (2r − 2)^2 (2r − 1)^{k/2 − 1} / (96r) ) ( k^2 + Or (k) ) ,
where the implied constant depends only on r and is effectively computable.
(2) Suppose X has fundamental group G1 ∗ G2 with the set of generators
S ··= (G1 \ {1}) ∪ (G2 \ {1}).
Then, the number of free homotopy classes of loops γ : S 1 → X with length k (in the
generators of S) satisfying Property (⋆) is given by
(1/192) ( (|G1 | − 1)(|G2 | − 2)^2 + (|G1 | − 2)^2 (|G2 | − 1) ) k^2 (|G1 | − 1)^{k/4 − 1} (|G2 | − 1)^{k/4 − 1}
+ O_{|G1 |,|G2 |} ( k (|G1 | − 1)^{k/4} (|G2 | − 1)^{k/4} ) ,
where the implied constant depends only on |G1 | and |G2 |, and is effectively computable.
For a connected CW-complex with a natural geometric definition of length for loops, one expects
the word length of a free homotopy class of loops to correlate with the geometric length of the free
homotopy class, taken to be the infimum of the geometric lengths
of its loops, in some way. A trivial example of this phenomenon is in the case of a wedge ⋁_{j=1}^{r} S^1 of r unit circles, for which
there is perfect correlation; a free homotopy class in this space has word length k if and only if the
minimal geometric length of a loop in the class is k.
A more interesting example of the connection between word length and geometric length arises
from hyperbolic geometry. Let X ≅ Γ\H be a hyperbolic orbifold, where Γ ⊂ PSL2 (R) denotes
a Fuchsian group. Then, every free homotopy class of loops in X that does not wrap around
a cusp has a unique geodesic representative, so the geometric length of the class is realized as
the geometric length of this unique closed geodesic [2, Theorem 1.6.6]. In particular, we have a
canonical correspondence between hyperbolic free homotopy classes of loops (i.e., ones that do not
wrap around a cusp) and closed geodesics. Let us specialize to the case that X is a pair of pants
(surface obtained by removing three disjoint open disks from a sphere). We have that X ≅ Γ\H for
Γ ≅ F2 , so fix a symmetric set of free generators. Then, Chas, Li, and Maskit [4] have conjectured
from computational evidence that the geometric length and the word length of a free homotopy
class of loops are correlated in the following fundamental way.
Conjecture 1.6 (Chas–Li–Maskit). For an arbitrary hyperbolic metric ρ on the pair of pants that
makes its boundary geodesic, let Dk denote the set of free homotopy classes with word length k in
a fixed choice of free generators for the fundamental group. Then, there exist constants µ and σ
(depending on ρ) such that for any a < b, the proportion of C ∈ Dk such that the geometric length
h(C) of the unique geodesic representative of C satisfies
(h(C) − µk)/√k ∈ [a, b]
converges to
(1/(σ√(2π))) ∫_a^b e^{−x^2/(2σ^2)} dx
as k → ∞. In other words, as k → ∞, the distribution of geometric lengths h(C) for C ∈ Dk
approaches the Gaussian distribution with mean µk and standard deviation σ√k.
It is natural to expect that even when restricting to commutators, the resulting distribution would
still exhibit a fundamental correlation between geometric length and word length. If this were
proven true, then one could use Corollary 1.5 to indirectly count closed geodesics in X with geometric length ≤ L satisfying Property (⋆).
For a general hyperbolic orbifold Γ\H, we have the aforementioned natural correspondence between the closed geodesics of Γ\H and the hyperbolic conjugacy classes of Γ. In the case that Γ is
finitely generated, we claim that the number of non-hyperbolic conjugacy classes of commutators
with word length k is small. Indeed, Γ only has finitely many elliptic conjugacy classes. Moreover,
Γ only has finitely many primitive parabolic conjugacy classes, corresponding to equivalence classes
of cusps. Denoting the primitive parabolic conjugacy classes by {Ci }1≤i≤n , we can conjugate each
Ci by elements in Γ to assume without loss of generality that the representative gi is fully cyclically
reduced. Then, since every parabolic conjugacy class is a power of a primitive one, we see that the
number of parabolic classes with word length k is at most n.
Suppose further that Γ is isomorphic to Fr (respectively, to G1 ∗G2 for two nontrivial finite groups
G1 and G2 ) and fix a set of generators. We see that the number of non-hyperbolic conjugacy
classes of commutators with length k is bounded by a constant, and thus clearly dominated by
the error term of ≪ k(2r − 1)k/2 (respectively, ≪ k(|G1 | − 1)k/4 (|G2 | − 1)k/4 ). In fact, it is
known [24] that a commutator of a free group cannot be a proper power, which shows that the
number of non-hyperbolic conjugacy classes of commutators is finite in this case. For the latter case
of Γ ≅ G1 ∗ G2 , the work of Comerford, Edmunds, and Rosenberger [5] gives the possible forms
that a proper-power commutator can take in a general free product. After discarding the O(1)
number of non-hyperbolic conjugacy classes of commutators with word length k, one obtains from
Corollary 1.5 the asymptotic number of closed geodesics γ with word length k satisfying Property
(⋆), with the error term unchanged. It is natural to expect for Γ\H a correlation between the
word length and the geometric length similar to that conjectured for the pair of pants. If such a
correlation were proven true, then Corollary 1.5 would yield information on the number of closed
geodesics in Γ\H with length ≤ L satisfying Property (⋆).
For an example of such a hyperbolic orbifold Γ\H whose fundamental group Γ is isomorphic to
G1 ∗ G2 , we introduce the Hecke group H(λ) for a real number λ > 0, defined by the subgroup of
PSL2 (R) generated by S ··= ( 0, −1; 1, 0 ) and Tλ ··= ( 1, λ; 0, 1 ).
Denote λq ··= 2 cos(π/q) for integers q ≥ 3. Hecke [12] proved that H(λ) is Fuchsian if and only if
λ = λq for some integer q ≥ 3 or λ ≥ 2. Specializing to the former case, H(λq ) has the presentation
⟨S, Rλ : S^2 = Rλ^q = I⟩ ≅ Z/2Z ∗ Z/qZ,
where the generator Rλ can be taken to be STλ [13]. Thus, our desired example H(λq )\H has
fundamental group isomorphic to Z/2Z ∗ Z/qZ. In particular, note that H(λ3 ) = PSL2 (Z) (with
the standard generators S and T1 ), which has a presentation as the free product Z/2Z ∗ Z/3Z.
1.3. Connections to number theory. Counting closed geodesics on hyperbolic orbifolds Γ\H
by geometric length is a well-studied problem that often has analogies (in both the asymptotic
of the resulting counting function and the techniques used to prove it) to counting problems in
number theory. To illustrate, the primitive hyperbolic conjugacy classes of Γ correspond precisely
to the primitive closed geodesics of Γ\H. These are called prime geodesics because when ordered
by trace, they satisfy equidistribution theorems analogous to those of prime numbers, such as the
prime number theorem (generally credited to Selberg [25], while its analogue for surfaces of varying
negative curvature was proven by Margulis [16]) and Chebotarev’s density theorem (proven by
Sarnak [22]). Specifically, this analogue of the prime number theorem is called the prime geodesic
theorem, which states that the number of prime geodesics γ ∈ Γ\H with norm ≤ N is asymptotically
given by ∼ N/ log N , where the norm n(γ) denotes the real number ρ > 1 such that the Jordan
normal form of the PSL2 (R)-matrix corresponding to γ is the diagonal matrix ( ±ρ, 0; 0, ±ρ^{−1} ),
which is related to the geometric length ℓ(γ) by the relation (see [14, p. 384]) given by
(1.2)    n(γ) = ( cosh(ℓ(γ)/2) + √( cosh(ℓ(γ)/2)^2 − 1 ) )^2 .
Furthermore, Sarnak’s analogue of Chebotarev’s density theorem implies that for every surjective
homomorphism ϕ : Γ → G onto a finite abelian group G, the number of prime geodesics of Γ\H
with norm ≤ N in the inverse image of a given g ∈ G is asymptotically given by ∼ N/(|G| log N ).
In particular, if [Γ, Γ] is of finite index in Γ, then letting ϕ be the quotient map Γ → Γ/[Γ, Γ],
one sees that the number of prime geodesics of Γ\H with norm ≤ N and trivial abelianization is
asymptotically given by
∼ N / ( |Γ : [Γ, Γ]| log N ) .
The result of Corollary 1.5 can be seen as the answer to an analogous problem of counting closed
geodesics of Γ\H that correspond to conjugacy classes of commutators rather than arbitrary conjugacy classes with trivial abelianization, the main difference being that our formula counts closed
geodesics ordered by word length.
We mention one more number-theoretic setting which involves counting conjugacy classes of
commutators of a Fuchsian group, specialized to the modular group Γ = PSL2 (Z) = H(λ3 ). A
commutator of Γ is precisely a coset {C, −C} for a commutator C = ABA−1 B −1 of SL2 (Z), where
A, B ∈ SL2 (Z); this choice is unique because −I has nontrivial abelianization, which implies that
−C does, as well. Thus, a conjugacy class of commutators in Γ with such a representative {C, −C}
is the union of the SL2 (Z)-conjugacy class of the SL2 (Z)-commutator C and the SL2 (Z)-conjugacy
class of −C.
For the free group F2 on generators X and Y , the map Hom(F2 , SL2 (C)) → C3 defined by
ρ 7→ (Tr ρ(X), Tr ρ(Y ), Tr ρ(XY )) induces an isomorphism between the moduli space of SL2 -valued
representations of F2 and the affine space C3 , by the work of Vogt [27] and Fricke [8]. Labeling
this parametrization of C3 as (x, y, z), we note that Tr ρ(XY X −1 Y −1 ) is given by the polynomial
M (x, y, z) ··= x2 + y 2 + z 2 − xyz − 2; in other words, we have the trace identity [9, p. 337]
(1.3)
Tr(A)2 + Tr(B)2 + Tr(AB)2 − Tr(A) Tr(B) Tr(AB) − 2 = Tr(ABA−1 B −1 )
for all A, B ∈ SL2 (C). In particular, note that every conjugacy class of commutators in SL2 (Z)
(say, with representative ABA−1 B −1 of SL2 (Z) such that A, B ∈ SL2 (Z)) with a given trace T
gives rise to an integral solution
(Tr(A), Tr(B), Tr(AB))
of the Diophantine equation M (x, y, z) = T . The integral solutions to the Markoff equation
M (x, y, z) = −2 play an important role in the theory of Diophantine approximation through their
relation to extremal numbers via the Lagrange–Markoff spectrum [17, 18], and have been studied
recently in [1] by applying strong approximation. Moreover, integral solutions to the Markoff-type
equation M (x, y, z) = T for varying T are also of independent number-theoretic interest and have
been studied recently in [10].
The conjugacy classes of PSL2 (Z) with trace T correspond to the SL2 (Z)-orbits of binary quadratic forms with discriminant T 2 − 4. This allows one to use Gauss’ reduction theory of indefinite
binary quadratic forms, which yields a full set of representatives for the SL2 (Z)-orbits of binary
quadratic forms of any positive discriminant, to obtain the conjugacy classes of PSL2 (Z) with trace
T . Furthermore, there is a straightforward condition determining whether each such conjugacy
class is in the commutator subgroup [PSL2 (Z), PSL2 (Z)], since [PSL2 (Z), PSL2 (Z)] is precisely
{ ( a, b; c, d ), −( a, b; c, d ) : (1 − c^2)(bd + 3(c − 1)d + c + 3) + c(a + d − 3) ≡ 0 (mod 12)
or (1 − c^2)(bd + 3(c + 1)d − c + 3) + c(a + d + 3) ≡ 0 (mod 12) },
a congruence subgroup of index 6; this follows from the fact that [SL2 (Z), SL2 (Z)] = ker χ for the
surjective homomorphism χ : SL2 (Z) → Z/12Z defined by
( a, b; c, d ) ↦ (1 − c^2)(bd + 3(c − 1)d + c + 3) + c(a + d − 3),
as shown in [6, Proof of Theorem 3.8]. However, there is no known number-theoretic condition determining whether a conjugacy class of PSL2 (Z) with trivial abelianization is in fact a commutator;
the only known method to do so is to decompose an element of the conjugacy class into a word in
the generators S and T1 , then combinatorially checking whether there exists a cyclic conjugate of
this word in the form given by Definition 1.3. For an implementation of this algorithm in magma,
see [20, Appendix].
Since every commutator of SL2 (Z) with trace T gives rise to an integral point on MT , the
natural follow-up question is whether the converse is true. A local version of this holds, in that
for every power pn of a prime p ≥ 5, there exists a commutator ABA−1 B −1 satisfying (1.3)
mod pn [10, Lemma 6.2]. The global version of the converse, however, does not hold in general,
nor do we know of any number-theoretic condition determining whether an integral point on MT
comes from a commutator. Then, a question of Sarnak [23] asks:
What is the asymptotic number of integral points on MT that arise from SL2 (Z)-commutators by
the trace identity (1.3) as |T | → ∞?
On this front, one can consider SL2 (Z)-commutators as elements of PSL2 (Z) ≅ Z/2Z ∗ Z/3Z, for
which Theorem 1.4 gives the asymptotic number of conjugacy classes of commutators with a given
word length in the generators. Any correlation between the word length of a hyperbolic conjugacy
class γ of PSL2 (Z) and the geometric length of the corresponding geodesic in Γ\H would also yield
correlation between the word length of γ and its trace t(γ), defined to be the absolute value of the
trace of any representative matrix. This is due to the relation between t(γ) and ℓ(γ) given by
t(γ) = 2 cosh( ℓ(γ)/2 ),
which is equivalent to the relation (1.2). Thus, a proof of such a correlation would allow Theorem 1.4
to yield information on the number of integral points on MT and M−T for T > 2 (the condition
for hyperbolicity, where we have without loss of generality taken the representative trace T to be
positive) arising from commutators by (1.3).
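The two length relations above are consistent with each other and with the matrix picture; the short numerical check below (our own Python sketch, with an arbitrary sample value) verifies them for a diagonal hyperbolic representative diag(ρ, ρ^{−1}).

```python
import math

def norm_from_length(length):
    """Relation (1.2): norm of a hyperbolic class in terms of its geodesic length."""
    c = math.cosh(length / 2)
    return (c + math.sqrt(c * c - 1)) ** 2

def trace_from_length(length):
    """t(gamma) = 2 cosh(l(gamma)/2)."""
    return 2 * math.cosh(length / 2)

rho = 3.7                      # a representative diag(rho, 1/rho), rho > 1
length = 2 * math.log(rho)     # then cosh(length/2) = (rho + 1/rho)/2
assert abs(norm_from_length(length) - rho ** 2) < 1e-9
assert abs(trace_from_length(length) - (rho + 1 / rho)) < 1e-9
```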
1.4. Strategy of proof. The main idea behind the proof of Theorem 1.2 given in Section 2
(respectively, the proof of Theorem 1.4 given in Section 3) is to count the Wicks commutators of Fr
(respectively, those of G1 ∗ G2 ) and show that each such Wicks commutator is on average cyclically
conjugate to precisely six Wicks commutators (including itself) while keeping the overall error term
under control. Thus, we obtain the asymptotic number of conjugacy classes of commutators with
length k by dividing the asymptotic number of Wicks commutators of length k by 6. Finally, in
Section 4, we conclude the paper with a discussion of future directions for generalization.
2. Proof of Theorem 1.2
2.1. Counting the Wicks commutators of Fr . Since the cyclically reduced representative of a conjugacy class of Fr is unique up to cyclic permutation, it suffices to count equivalence classes (with
respect to cyclic permutation) of Wicks commutators of length k = 2X. Let RX denote the set of
reduced words of length X, of which there are 2r(2r − 1)X−1 . For each such word W , the number
of ways to decompose W into A, B, and C (i.e., W = ABC without cancellation) is given by the
number of ordered partitions p of X into three (not necessarily nontrivial) parts.
Define (W, p) to be a viable pair of length n if W is a length-n reduced word, p = (n1 , n2 , n3 ) is
a partition of n into three parts n1 , n2 , n3 > 0, and the word W ′ ··= ABCA−1 B −1 C −1 (obtained
by decomposing W as specified by p, in the method described above) is a Wicks commutator. It is
natural to first count the viable pairs (W, p) of length X. To do so, we will need the following.
Lemma 2.1. In the set of all reduced words of length n beginning with a given letter, the proportion
of words ending in any one letter is given by
( 1/(2r − 1)^{n−1} ) · ( (2r − 1)^{n−1} + O(r) )/(2r) = 1/(2r) + O( 1/(2r − 1)^{n−1} ) .
Proof. The number of cyclically reduced words with length n is (2r − 1)n + (r − 1)(−1)n + r
(see [15, Proposition 17.2]). Another way of stating this is that the number an of reduced words of
length n whose initial and final letters are inverses (i.e., are not cyclically reduced) is
an = 2r(2r − 1)n−1 − (2r − 1)n − (r − 1)(−1)n − r = (2r − 1)n−1 − (r − 1)(−1)n − r
Similarly, let bn denote the number of reduced words whose initial and final letters are equal. Note
that {an }n∈Z>0 and {bn }n∈Z>0 are solutions to the recurrence
cn = (2r − 1)cn−2 + (2r − 2)(2r(2r − 1)n−3 − cn−2 ) = cn−2 + 2r(2r − 2)(2r − 1)n−3 .
Indeed, for n ≥ 3, a reduced word whose initial and final letters are inverses is precisely of the
form sW s−1 , and there are 2r − 1 possible choices for the letter s if W also has the aforementioned
property (initial and final letters are inverses) and 2r − 2 possible choices if W does not; an
analogous discussion holds for the property that the initial and final letters are equal. It follows
that the difference {bn −an }n∈Z>0 is a solution to the associated homogeneous recurrence dn = dn−2 ,
and thus a linear combination of the functions 1 and n. In light of the initial data a1 = a2 = 0 and
b1 = b2 = 2r, we see that bn − an = 2r, which yields
bn = (2r − 1)n−1 − (r − 1)(−1)n + r.
For s1 , s2 ∈ S, let Rn^{s1 ,s2} ⊂ Rn denote the subset of words that begin with s1 and end with s2 ,
which gives us a decomposition of Rn into the disjoint union Rn = ⋃_{s1 ,s2 ∈S} Rn^{s1 ,s2} . For any s ∈ S,
|Rn^{s,s^{−1}}| = an /(2r) = ( (2r − 1)^{n−1} − (r − 1)(−1)^n − r )/(2r)
and
|Rn^{s,s}| = bn /(2r) = ( (2r − 1)^{n−1} − (r − 1)(−1)^n + r )/(2r) .
It follows that for t ∈ S \ {s, s−1 },
|Rn^{s,t}| = ( (2r − 1)^{n−1} − |Rn^{s,s^{−1}}| − |Rn^{s,s}| )/(2r − 2) = (1/(2r − 2)) ( (2r − 1)^{n−1} − ( (2r − 1)^{n−1} − (r − 1)(−1)^n )/r ) = ( (2r − 1)^{n−1} + O(1) )/(2r),
which completes the proof of our claim.
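As a quick sanity check of the counts an and bn used above, one can enumerate reduced words directly for small r and n; the Python sketch below is our own illustration, using the same (generator index, sign) letter encoding as the earlier sketches.

```python
from itertools import product

def count_endings(r, n):
    """Brute-force a_n (first and last letters inverse) and b_n (first and last equal)."""
    letters = [(i, s) for i in range(r) for s in (1, -1)]
    a = b = 0
    for w in product(letters, repeat=n):
        if any(x[0] == y[0] and x[1] == -y[1] for x, y in zip(w, w[1:])):
            continue  # not reduced
        if w[0][0] == w[-1][0] and w[0][1] == -w[-1][1]:
            a += 1
        elif w[0] == w[-1]:
            b += 1
    return a, b

r, n = 2, 5
a, b = count_endings(r, n)
assert a == (2 * r - 1) ** (n - 1) - (r - 1) * (-1) ** n - r
assert b == (2 * r - 1) ** (n - 1) - (r - 1) * (-1) ** n + r
```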
Now, fix a partition p = (n1 , n2 , n3 ) of X into three parts n1 , n2 , n3 > 0, and consider RX with
the uniform probability measure placed on its elements. Within the decomposition W = ABC in
accordance with p, let the first letter and last letter of A respectively be a0 and a1 , and define b0 , b1 ,
c0 , and c1 similarly. We will compute the probability that (W, p) is viable for a random W ∈ RX ,
i.e., the probability that b1 ≠ a0^{−1} , c0 ≠ a0 , c1 ≠ a1 , and c1 ≠ b0^{−1} .
Suppose that a0 = s. Then, by our work above, the set of possible candidates for a1 b0 is the
2r(2r − 1)-element set S = {wz : w, z ∈ S, w ≠ z^{−1} }, each of which has probability
( 1/(2r − 1) ) ( 1/(2r) + O( 1/(2r − 1)^{n1 −1} ) ) = 1/(2r(2r − 1)) + O( 1/(2r − 1)^{n1} ) .
We fix a choice of a1 b0 in S, and all probabilities from now on are conditional on this event. The set
of possible candidates for b1 c0 is also S, each of which has probability 1/(2r(2r − 1)) + O( 1/(2r − 1)^{n2} ).
Let S ′ ⊂ S be the subset of possible candidates for b1 c0 that satisfy the conditions a0 ≠ b1^{−1} and
a0 ≠ c0 . The cardinality of S ′ can be computed as follows: there are 2r − 1 choices for b1 satisfying
s ≠ b1^{−1} , and conditional on this, there are 2r − 2 choices for c0 satisfying s ≠ c0 and b1^{−1} ≠ c0 , for a
total of (2r − 1)(2r − 2) elements of S ′ . Fix a choice of b1 c0 in S ′ , and all probabilities from now on
are conditional on this event. Since a1 ≠ b0−1 , the conditions c1 ≠ a1 and c1 ≠ b0−1 leave precisely
2r − 2 (out of 2r) possible values for c1 , so the probability that c1 satisfies these conditions is
\[
(2r-2)\left(\frac{1}{2r} + O\!\left(\frac{1}{(2r-1)^{n_3-1}}\right)\right).
\]
Overall, we have that the probability that (W, p) is viable is
\begin{align*}
&2r(2r-1)\left(\frac{1}{2r(2r-1)} + O\!\left(\frac{1}{(2r-1)^{n_1}}\right)\right)
\cdot (2r-1)(2r-2)\left(\frac{1}{2r(2r-1)} + O\!\left(\frac{1}{(2r-1)^{n_2}}\right)\right)\\
&\qquad\cdot (2r-2)\left(\frac{1}{2r} + O\!\left(\frac{1}{(2r-1)^{n_3-1}}\right)\right)\\
&= \left(\frac{2r-2}{2r}\right)^2\left(1+O\!\left(\frac{1}{(2r-1)^{n_1-2}}\right)\right)\left(1+O\!\left(\frac{1}{(2r-1)^{n_2-2}}\right)\right)\left(1+O\!\left(\frac{1}{(2r-1)^{n_3-2}}\right)\right)\\
&= \left(\frac{2r-2}{2r}\right)^2\left(1+O\!\left(\frac{1}{(2r-1)^{n_1-2}} + \frac{1}{(2r-1)^{n_2-2}} + \frac{1}{(2r-1)^{n_3-2}}\right)\right).
\end{align*}
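This asymptotic admits a quick empirical check. The sketch below (ours, not the author's; the sampler is a straightforward rejection sampler) draws random reduced words W of length X, splits them according to a fixed positive partition p, and tests the four non-cancellation conditions; the observed frequency should approach ((2r − 2)/(2r))^2 once the parts n1, n2, n3 are moderately large.

```python
import random

def random_reduced_word(r, n):
    """Uniformly random reduced word of length n in the free group of rank r."""
    letters = list(range(1, r + 1)) + [-i for i in range(1, r + 1)]
    w = [random.choice(letters)]
    while len(w) < n:
        s = random.choice(letters)
        if s != -w[-1]:
            w.append(s)
    return w

def is_viable(w, n1, n2, n3):
    A, B, C = w[:n1], w[n1:n1 + n2], w[n1 + n2:]
    a0, a1 = A[0], A[-1]
    b0, b1 = B[0], B[-1]
    c0, c1 = C[0], C[-1]
    # the four conditions making A B C A^-1 B^-1 C^-1 cyclically reduced
    return b1 != -a0 and c0 != a0 and c1 != a1 and c1 != -b0

def estimate(r, n1, n2, n3, trials=200_000):
    X = n1 + n2 + n3
    hits = sum(is_viable(random_reduced_word(r, X), n1, n2, n3) for _ in range(trials))
    print(hits / trials, ((2*r - 2) / (2*r))**2)

estimate(2, 10, 10, 10)   # both numbers should be close to 0.25 for r = 2
```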
Thus, the number of viable pairs (W, p) of length X is given by
\begin{align*}
&\sum_{\substack{0<n_1,n_2,n_3\\ n_1+n_2+n_3=X}} 2r(2r-1)^{X-1}\left(\frac{2r-2}{2r}\right)^2\left(1+O\!\left(\frac{1}{(2r-1)^{n_1-2}} + \frac{1}{(2r-1)^{n_2-2}} + \frac{1}{(2r-1)^{n_3-2}}\right)\right)\\
&= \frac{(2r-2)^2(2r-1)^{X-1}}{2r}\left(\frac{(X-2)(X-1)}{2} + \sum_{\substack{0<n_1,n_2,n_3\\ n_1+n_2+n_3=X}} O\!\left(\frac{1}{(2r-1)^{n_1-2}} + \frac{1}{(2r-1)^{n_2-2}} + \frac{1}{(2r-1)^{n_3-2}}\right)\right)\\
&= \frac{(2r-2)^2(2r-1)^{X-1}}{2r}\left(\frac{(X-2)(X-1)}{2} + 3\cdot O\!\left(\sum_{n_1=1}^{X-2}\frac{X-1-n_1}{(2r-1)^{n_1-2}}\right)\right)\\
&= \frac{(2r-2)^2(2r-1)^{X-1}}{2r}\left(\frac{(X-2)(X-1)}{2} + O\!\left(\frac{(2r-1)^2\bigl((2r-1)^{2-X} + X(2r-2) - 4r + 3\bigr)}{(2r-2)^2}\right)\right)\\
&= \frac{(2r-2)^2(2r-1)^{X-1}}{4r}\left(X^2 + O\!\left(\frac{r^2}{(2r-1)^X} + rX\right)\right).
\end{align*}
There could a priori be a Wicks commutator ABCA−1 B −1 C −1 arising from distinct viable pairs,
say (W, p1 ) and (W, p2 ) for p1 = (n1 , n2 , n3 ), and p2 = (m1 , m2 , m3 ). We show that the number of
such commutators is small.
Let W = ABC be its decomposition with respect to p1 , and W = A′ B ′ C ′ its decomposition
with respect to p2 . Consider the function f : {1, . . . , X} → {1, . . . , X}2 that maps i to (j, k) in
the following way: the ith letter of A−1 B −1 C −1 = A′−1 B ′−1 C ′−1 , when viewed in terms of the
decomposition given by p1 , is the inverse of the jth letter of W , and when viewed in terms of the
decomposition given by p2 , is the inverse of the kth letter of W . For example, the first letter of
A−1 B −1 C −1 is defined to be the inverse of the n1 th letter when the decomposition is in terms of
p1 , and the inverse of the m1 th letter of W when in terms of p2 , so f (1) = (n1 , m1 ). We consider
two cases: when the two entries of f (i) are distinct for all i, and otherwise. In the first case, the
following algorithm allows us to reduce the degrees of freedom for the letters of W by at least half.
(1) Let i = 1. For f (i) = (ji , ki ), do the following:
• If neither the ji th nor the ki th position has an indeterminate variable assigned to it,
then assign a new indeterminate variable simultaneously to the ji th and ki th positions.
This increases the number of indeterminate variables by one.
• If just one of the ji th and the ki th positions has an indeterminate variable assigned to
it, but the other does not, then assign this indeterminate variable to the other position as well.
• If both the ji th and the ki th positions have the same indeterminate variable assigned
to them, then make no changes.
• If the ji th and the ki th positions have distinct indeterminate variables assigned to
them, then set these indeterminate variables equal to each other. This decreases the
number of indeterminate variables by one.
(2) Incrementing i by one each time, repeat this procedure for all 1 ≤ i ≤ X.
By our hypothesis that the two entries of f (i) are distinct for all i, the number of indeterminate
variables, which precisely represents the number of degrees of freedom for the word W such that
(W, p1 ) and (W, p2 ) give rise to the same commutator, is ≤ X/2. It follows that there are only
O((2r − 1)^{X/2}) such words for each pair p1 , p2 .
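The merging procedure above is essentially a union–find computation on the positions 1, …, X. The sketch below is our own illustration, with a hypothetical input: it counts the number of indeterminate variables, i.e. the degrees of freedom, produced by a given list of pairs (j_i, k_i); when every pair has distinct entries, each class contains at least two positions, so the result is at most X/2.

```python
def degrees_of_freedom(pairs, X):
    """Number of indeterminate variables after identifying the positions in each pair."""
    parent = list(range(X + 1))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path compression
            x = parent[x]
        return x
    for j, k in pairs:
        parent[find(j)] = find(k)           # the two positions carry the same variable
    return len({find(i) for i in range(1, X + 1)})

# toy example: position i is forced to agree with position i + X/2
X = 10
pairs = [(i, i + X // 2) for i in range(1, X // 2 + 1)]
print(degrees_of_freedom(pairs, X))   # 5, i.e. X/2
```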
Now, consider the next case that there exists an i such that the two entries of f (i) are equal.
Then, we consider the following three cases for W , p1 , and p2 .
Case 1. Suppose the smallest i such that the two entries of f (i) are equal satisfies that this
entry is a position in A. Then, n1 = m1 must be this entry and i must equal 1, since otherwise
we can continue to decrement i so that the two entries of f (i) are incremented and remain equal,
a contradiction. Next, the subwords B −1 C −1 and B ′−1 C ′−1 must be equal. Without loss of
generality, suppose that n2 > m2 . Decompose B = B ′ D so that our condition B −1 C −1 = B ′−1 C ′−1
is precisely CB ′ D = DCB ′ . Since CB ′ and D commute, we have that they are both powers of a
common subword V . We can bound the number of (W, p) satisfying this case by counting, for each
i = X − n1 and for each divisor d | i (denoting the length of V ), the number of ways to divide the
right subword of length i into two subwords B and C, and the number of degrees of freedom. Thus,
the number of double-countable Wicks commutators in this case can be bounded from above by
\begin{align*}
\sum_{i=1}^{X-1}\sum_{d\mid i}(i-d+1)(2r-1)^{X-i+d}
&= (X-1)(2r-1)^X + \sum_{i=1}^{X-1}\sum_{\substack{d\mid i\\ d\neq i}}(i-d+1)(2r-1)^{X-i+d}\\
&\le X(2r-1)^X + (2r-1)^X\sum_{i=2}^{X-1}\sum_{1\le d\le i/2}(i-d+1)(2r-1)^{-i+d}\\
&= X(2r-1)^X + (2r-1)^X\sum_{i=2}^{X-1}(2r-1)^{-i}\sum_{1\le d\le i/2}(i-d+1)(2r-1)^{d}\\
&\le X(2r-1)^X + (2r-1)^X\sum_{i=2}^{X-1}(2r-1)^{-i+1}\cdot\frac{i(2r-2)\bigl((2r-1)^{i/2}-2\bigr)+2(2r-1)\bigl((2r-1)^{i/2}-1\bigr)}{2(2r-2)^2}\\
&\ll X(2r-1)^X,
\end{align*}
which is dominated by our error term.
Case 2. Suppose the smallest i such that the two entries of f (i) are equal satisfies that this entry
is a position in C. An argument symmetric to the one above can be given to show that the above
expression is also an upper bound for the number of (W, p) satisfying this case.
Case 3. Suppose the smallest i such that the two entries of f (i) are equal satisfies that this
entry is a position in B. Without loss of generality, suppose n1 > m1 . Then, f (m1 + 1) =
(n1 − m1 , m1 + m2 ), and by an argument similar to that in Case (1), we have that the simultaneous
entry of the aforementioned f (i) must be n1 + n2 , with n1 − m1 = n3 − m3 so that f (m1 +
1) = (n1 − m1 , n1 + n2 + (n1 − m1 )). Thus, divide W into DEF GH so that |D| + |E| = n1 ,
|G| + |H| = n3 , and |E| = |G|. Then, (W, p1 ) gives rise to the commutator W E −1 D −1 F −1 H −1 G−1 ,
while (W, p2 ) gives rise to the commutator W D −1 G−1 F −1 E −1 H −1 . Since these are equal, it follows
that DE = GD and GH = HE. Note that if a word Z satisfies the equality ZE = GZ without
cancellation, then Z is uniquely determined, since one can inductively identify the letters of Z from
left to right (or right to left). It follows that D = H. But this contradicts the assumption that
DEF GHD −1 G−1 F −1 E −1 H −1 is cyclically reduced.
Note that pairs (W, p) such that ni = 0 for some i ∈ {1, 2, 3} are counted in the above cases.
This justifies our assumption of n1 , n2 , n3 > 0 in our earlier counting of the main term by passing to
the task of counting viable pairs of length X, which a priori only accounts for Wicks commutators
ABCA−1 B −1 C −1 such that |A|, |B|, |C| > 0. We have thus shown the following.
Lemma 2.2. The total number of Wicks commutators with length 2X is given by
\[
\frac{(2r-2)^2(2r-1)^{X-1}}{4r}\left(X^2 + O_r(X)\right).
\]
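For very small r and X, Lemma 2.2 can be compared against a direct enumeration. The sketch below is ours (it reuses the reduced_words helper from the earlier sketch): it lists all cyclically reduced words of length 2X of the shape ABCA^{-1}B^{-1}C^{-1} with |A|, |B|, |C| > 0; the totals agree with the main term only up to the lower-order term O_r(X), which is still sizeable at such tiny parameters.

```python
def is_cyclically_reduced(w):
    return all(w[i] != -w[i + 1] for i in range(len(w) - 1)) and w[0] != -w[-1]

def wicks_commutators(r, X):
    """Distinct non-degenerate Wicks commutators A B C A^-1 B^-1 C^-1 of length 2X."""
    found = set()
    for W in reduced_words(r, X):
        for n1 in range(1, X - 1):
            for n2 in range(1, X - n1):
                A, B, C = W[:n1], W[n1:n1 + n2], W[n1 + n2:]
                word = A + B + C + [-x for x in reversed(A)] \
                         + [-x for x in reversed(B)] + [-x for x in reversed(C)]
                if is_cyclically_reduced(word):
                    found.add(tuple(word))
    return len(found)

r, X = 2, 6
main_term = (2*r - 2)**2 * (2*r - 1)**(X - 1) * X**2 / (4*r)
print(wicks_commutators(r, X), main_term)
```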
2.2. A conjugacy class of commutators contains six Wicks commutators on average.
We need to count the number of conjugacy classes containing at least one Wicks commutator.
Consider the conjugacy class C of the Wicks commutator W ′ = ABCA−1 B −1 C −1 arising from
(W, p), where p = (n1 , n2 , n3 ) with n1 , n2 , n3 > 0; recall that the Wicks commutators not satisfying
this latter hypothesis are negligible. Note that the minimum-length elements in a conjugacy class
are precisely the cyclically reduced words, and that two cyclically reduced words are conjugate if and
only if they are cyclically conjugate. The Wicks commutators BCA−1 B −1 C −1 A, CA−1 B −1 C −1 AB,
A−1 B −1 C −1 ABC, B −1 C −1 ABCA−1 , and C −1 ABCA−1 B −1 are conjugates of W ′ . We show that
the number of other Wicks commutators in C is on average negligible.
For an arbitrary 1 ≤ ℓ ≤ n3 /2 denoting the number of letters of the conjugation, let C = DEF
be a decomposition without cancellation such that |D| = |F | = ℓ. Label the letters of W ′ by
A = a1 · · · an1 , B = b1 · · · bn2 , D = d1 · · · dℓ , E = e1 · · · en3 −2ℓ , and F = f1 · · · fℓ . Consider the cyclic
conjugate W ′′ ··= D −1 ABDEF A−1 B −1 F −1 E −1 of W ′ . We wish to show that on average, W ′′ is
not a Wicks commutator. Suppose the contrary, that there exists a partition p′ = (m1 , m2 , m3 ) of
X into three parts such that
W ′′ = D −1 ABDEF A−1 B −1 F −1 E −1 = w1 w2 w3 w1−1 w2−1 w3−1
for subwords w1 , w2 , and w3 of lengths m1 , m2 , and m3 .
Label the letters of A from left to right as a1 , . . . , an1 , and label the letters of B, C, D, E, and F
similarly. We have that w1 , w2 , and w3 are subwords comprised of the letters
(2.1)   dℓ−1 , . . . , d1−1 , a1 , . . . , an1 , b1 , . . . , bn2 , d1 , . . . , dℓ , e1 , . . . , en3 −2ℓ .
Then, note that the second half of W ′′ can be considered in two forms:
F A−1 B −1 F −1 E −1 = w1−1 w2−1 w3−1 .
Equivalently, this equality can be written as
(2.2)   EF BAF −1 = w3 w2 w1 ,
where we reiterate that we substitute the letters of (2.1), in the correct order, for w3 , w2 , and w1 .
Consider the function g mapping the ordered set of symbols of the left-hand side,
A ··= {e1 , . . . , en3 −2ℓ , f1 , . . . , fℓ , b1 , . . . , bn2 , a1 , . . . , an1 , fℓ−1 , . . . , f1−1 }
to the ordered set B of symbols of the right-hand side of (2.2), which are just the symbols of (2.1)
reordered appropriately. Specifically, g maps the ith leftmost letter of the left-hand side of (2.2) to
the ith leftmost letter of the right-hand side.
First, suppose g has no fixed points (i such that g(i) = i). Then, use an algorithm similar to the
previous one to conclude that there are ≤ X/2 degrees of freedom for ABDEF , so W must be one
of only O((2r − 1)X/2 ) choices (for each choice of ℓ and p′ ).
Now, suppose that there exists an i such that g(i) = i. Such fixed points i must be letters of A,
B, or E. We first consider the case that all the fixed points are letters of only one of A, B, and E.
In this case, we consider the following subcases for W , p, and p′ :
Case 1. Suppose that the fixed points are letters of E. Then, all of the fixed points must be in
one of w2 and w3 ; they cannot be in w1 , since this would mean that w1 contains e1 , but e1 is then
necessarily located at different positions in the left-hand side and right-hand side of (2.2). Suppose
that the fixed points of E are in w3 . Then, in order for the letters of E to match, we require that
w3 = E. This means that g(f1 ) is the first letter of w2 , which is adjacent to the last letter of w1 .
But the last letter of w1 is g(f1−1 ), which shows that we have adjacent letters that are inverses.
This contradicts the fact that W ′ is cyclically reduced.
Next, suppose all the fixed points are in w2 . Then, we must have that m3 = n3 − 2ℓ − (m2 + m3 )
so that the first letter of w2 is at the same position in both the left-hand and right-hand side.
Thus, m2 + 2m3 = n3 − 2ℓ, which means there are ≤ (n3 − 2ℓ)/2 choices for p′ parametrized by
m3 ≤ (n3 − 2ℓ)/2 . For each such choice of p′ , there are m2 = n3 − 2ℓ − 2m3 fixed letters, from
em3 +1 to en3 −2ℓ−m3 , and ≤ (X − (n3 − 2ℓ − 2m3 ))/2 degrees of freedom for the non-fixed letters. Counting across all choices
of values for the letters, p, p′ , and ℓ, we have that the number of additional Wicks commutators
arising from this case is bounded from above by
\[
\sum_{n_1=0}^{X}\sum_{n_2=0}^{X-n_1}\sum_{\ell=1}^{\lfloor (X-n_1-n_2)/2\rfloor}\sum_{m_3=0}^{\lfloor (n_3-2\ell)/2\rfloor}(2r-1)^{\frac{X+n_3-2\ell-2m_3}{2}} \ll (2r-1)^X,
\]
which is dominated by our error term.
Case 2. Suppose that the fixed points are letters of B. Then, all of the fixed points must be
in one of w1 , w2 , and w3 . First, suppose they are in w1 . Then, note that g(fℓ ) = an1 , but we
also have g(an1 ) is next to g(fℓ−1 ), which leads to the contradiction that a letter cannot equal its
inverse. Second, suppose the fixed letters are in w3 . Then, g(d1 ) = a1 , but a1 is adjacent to d1−1 , a
contradiction.
Thus, the fixed points of B must be in w2 . We consider three subcases: n2 > m2 , n2 < m2 , and
n2 = m2 . If n2 > m2 , then in order for the letters of B to match, we require that the leftmost
fixed letter of B is b(n2 −m2 )/2+1 . But then b(n2 −m2 )/2 is both equal to f1−1 (since g(f1−1 ) = b(n2 −m2 )/2 ) and
en3 −2ℓ (since g(b(n2 −m2 )/2 ) = en3 −2ℓ ) as letters of G, which contradicts the fact that f1 and en3 −2ℓ are
adjacent. If n2 < m2 , then g(fℓ ) = an1 , but also g(an1 ) is the letter in D −1 A that is left of the
letter g(fℓ−1 ), giving us the contradiction that the letter in G in the position fℓ is adjacent to the
letter in the position fℓ−1 . This implies that n2 = m2 , from which we can use an argument similar
to that in Case 1 of the previous casework (showing that W ′ on average can be only decomposed
as a commutator in one way) to conclude that A is a power of D −1 and E, a power of D. It follows
that our original (W, p) is one of the pairs falling under Case 1 of the previous casework, which are
negligible.
Case 3. Suppose that the fixed points are letters of A. Then, all of the fixed points must be in
one of w1 and w2 ; they cannot be in w3 , since then there must be more than ℓ letters right of A.
Suppose the fixed letters of A are in w1 . Then, we must have g(bn2 ) = d1−1 , which contradicts the
fact that bn2 is adjacent to d1 . Therefore, the fixed points are necessarily in w2 . This requires that
m1 − ℓ = n1 + ℓ − (m1 + m2 ) in order for the letters of A to be in matching positions. Thus, we
have m2 = n1 + 2ℓ − 2m1 . Note then that p′ is parametrized by m1 ≤ (n1 + 2ℓ)/2. For each choice
of p′ , we have n1 fixed letters (and (X + n1 )/2 ≤ (X + n1 − m1 + ℓ)/2 overall degrees of freedom)
if m1 ≤ ℓ, and n1 − (m1 − ℓ) fixed letters (and (X + n1 − m1 + ℓ)/2 overall degrees of freedom) if
m1 > ℓ. Thus, counting across all choices of values for the letters, p, p′ , and ℓ, we have that the
number of additional Wicks commutators arising from this case is bounded from above by
\[
\sum_{n_1=0}^{X}\sum_{n_2=0}^{X-n_1}\sum_{\ell=1}^{\lfloor (X-n_1-n_2)/2\rfloor}\sum_{m_1=0}^{\lfloor (n_1+2\ell)/2\rfloor}(2r-1)^{\frac{X+n_1+\ell-m_1}{2}} \ll (2r-1)^X,
\]
which is dominated by our error term.
Next, consider the case where the fixed points are in two of A, B, and E. It is necessary that
the fixed letters inside these two subwords must respectively be in two distinct subwords among
w1 , w2 , and w3 . However, we have shown above that the subwords w1 and w3 cannot contain fixed
points, a contradiction. Finally, the fixed letters cannot be in all of A, B, and E. Indeed, if this
were true, then in order for the letters of A and E to match, we require m1 = n1 + 2ℓ and m3 = n3 .
But then the letters of B cannot possibly match, a contradiction.
If ℓ ≥ n3 /2, then we can think of our commutator as a cyclic conjugate of C −1 ABCA−1 B −1 such
that the letters are moved from left to right. A symmetric argument like above gives us the same
conclusion for this case. We have thus shown that the number of conjugacy classes of commutators
with length 2X is given by
\[
\frac{1}{6}\cdot\frac{(2r-2)^2(2r-1)^{X-1}}{4r}\left(X^2 + O_r(X)\right) = \frac{(2r-2)^2(2r-1)^{X-1}}{24r}\left(X^2 + O_r(X)\right),
\]
as needed.
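The six-fold averaging can also be observed numerically. Continuing the sketch after Lemma 2.2 (again only our own illustration, reusing its helpers), conjugacy classes of cyclically reduced words are exactly cyclic-rotation classes, so the ratio of Wicks commutators to such classes should tend to 6 as X grows; for small X it is only roughly 6.

```python
def conjugacy_classes_of_commutators(r, X):
    """Cyclic-rotation classes met by the Wicks commutators counted above."""
    classes = set()
    for W in reduced_words(r, X):
        for n1 in range(1, X - 1):
            for n2 in range(1, X - n1):
                A, B, C = W[:n1], W[n1:n1 + n2], W[n1 + n2:]
                word = A + B + C + [-x for x in reversed(A)] \
                         + [-x for x in reversed(B)] + [-x for x in reversed(C)]
                if is_cyclically_reduced(word):
                    # conjugate cyclically reduced words are exactly cyclic rotations
                    classes.add(min(tuple(word[i:] + word[:i]) for i in range(len(word))))
    return len(classes)

r, X = 2, 6
print(wicks_commutators(r, X) / conjugacy_classes_of_commutators(r, X))   # roughly 6
```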
3. Proof of Theorem 1.4
3.1. Counting the Wicks commutators of G1 ∗ G2 . In addition to his theorem classifying
commutators of free groups, Wicks [28] also proved the following analogous theorem characterizing
all commutators of a free product of arbitrary groups.
Theorem 3.1 (Wicks). A word in ∗i∈I Gi is a commutator if and only if it is a conjugate of one
of the following fully cyclically reduced products:
(a) a word comprised of a single letter that is a commutator in its factor Gi ,
(b) Xα1 Xα2−1 , where X is nontrivial and α1 , α2 belong to the same factor Gi as conjugate
elements,
(c) Xα1 Y α2 X −1 α3 Y −1 α4 , where X and Y are both nontrivial, α1 , α2 , α3 , α4 belong to the same
factor Gi , and α4 α3 α2 α1 is trivial,
(d) XY ZX −1 Y −1 Z −1 ,
(e) XY α1 ZX −1 α2 Y −1 Z −1 α3 , where Y and at least one of X and Z is nontrivial, α1 , α2 , α3
belong to the same factor Gi , and α3 α2 α1 is trivial,
(f ) Xα1 Y β1 Zα2 X −1 β2 Y −1 α3 Z −1 β3 , where α1 , α2 , α3 belong to the same factor Gi and β1 , β2 , β3 ,
to Gj , α3 α2 α1 = β3 β2 β1 = 1, and either α1 , α2 , α3 , β1 , β2 , β3 are not all in the same factor
or X, Y, Z are all nontrivial.
Note that in the above, the Greek letters are assumed to be nontrivial. This convention is used
later in our proof, where when a Greek letter α is said to satisfy α ∈ Gi , we mean that α is a
nontrivial element of Gi .
Equipped with a complete classification of commutators in an arbitrary free product, we proceed
with our proof of Theorem 1.4. Let G1 ∗ G2 be the free product of two nontrivial finite groups G1
and G2 . The letters of a cyclically reduced word of G must alternate between equal numbers of
elements of G1 \ {1} and elements of G2 \ {1}, and thus must be of even length. Let C be a fully
reduced commutator of G1 ∗ G2 . If C is of the form (d) listed in Theorem 3.1 with none of X, Y,
and Z trivial, then the last letter of X is in different factors compared to the first letter of Y and
the last letter of Z, which must also be in different factors, a contradiction. If C is of the form (e),
then α1 , α2 , and α3 must be in the same free-product factor, but this would imply that the last
letter of Y is in different free factors compared to the first letter of Z and the first letter of X; this
contradicts the similar implication that the first letter of Z and the first letter of X are in different
factors. Thus, C must be of the form (a), (b), (c), (f ), or XY X −1 Y −1 (i.e., of the form (d) with
|Z| = 0). However, if C is of the form (f ), then Gi and Gj must be the same free-product factor,
since otherwise α1 would be adjacent to letters of distinct free-product factors, a contradiction.
By Wicks’ theorem for free products, we need to count cyclic conjugacy classes of Wicks commutators of G1 ∗ G2 . It follows from the discussion above that every commutator of G1 ∗ G2 is conjugate to
one of the fully cyclically reduced forms listed under Definition 1.3. These general forms, labeled
from (1) to (9), have without loss of generality been taken to have the letters of odd position be in
G1 and those of even position in G2 . Throughout this proof, we will regularly use the terminology
Wicks commutators of the form (i), where 1 ≤ i ≤ 9, to refer to the corresponding general form
listed under Definition 1.3.
Consider Wicks commutators of G1 ∗ G2 with length k, where k is a multiple of 4. Let X = k/4,
so that the left-half subword of W contains X letters of the G1 factor and X letters of the G2 factor,
which are placed in alternating order. The commutators of the forms (1) and (2) are O(1) in number
and all have length 1. The commutators of the forms (3) and (4) are O((|G1 | − 1)^X (|G2 | − 1)^X)
in number, since there are ≤ X degrees of freedom among the letters of each free-product factor.
The commutators of the forms (5), (6), and (7) are O(X(|G1 | − 1)^X (|G2 | − 1)^X) in number, since
there are ≤ X degrees of freedom for the X letters of each free-product factor and O(X) possible
pairs of values for |A| and |B|, by an argument similar to that used in Section 2.
The number of Wicks commutators of the form (8) is
\[
\frac{(X-5)(X-4)}{2}\,(|G_2|-2)^2\,(|G_1|-1)^X\,(|G_2|-1)^{X-1}.
\]
Indeed, we have (|G2 | − 1)(|G2 | − 2) distinct choices of the triple (α1 , α2 , α3 ), since there are |G2 | − 1
choices for α1 , |G2 | − 2 choices for α2 ≠ α1−1 , and α3 is uniquely determined from the previous
choices. Likewise, there are (|G2 | − 1)(|G2 | − 2) distinct choices of the triple (β1 , β2 , β3 ). Finally,
there are (X − 5)(X − 4)/2 partitions of X − 3 into three nontrivial parts (n1 , n2 , n3 ) such that
|A| = 2n1 + 1, |B| = 2n2 + 1, and |C| = 2n3 + 1; X degrees of freedom for choosing the G1 -letters
of A, B, and C; and X − 3 degrees of freedom for choosing the G2 -letters of A, B, and C. By an
analogous argument, the number of Wicks commutators of the form (9) is
\[
\frac{(X-5)(X-4)}{2}\,(|G_1|-2)^2\,(|G_1|-1)^{X-1}\,(|G_2|-1)^X.
\]
We define a generic Wicks commutator of G1 ∗ G2 to be one of the form (8) or (9). These comprise
the main term of the total number of Wicks commutators of length k = 4X, since we have seen
above that the non-generic Wicks commutators, those of the forms (1) − (7), comprise a negligible
subset. Overall, we have shown the following.
Lemma 3.2. The total number of Wicks commutators with length 4X is given by
\[
\frac{1}{2}\left((|G_1|-1)(|G_2|-2)^2 + (|G_1|-2)^2(|G_2|-1)\right) X^2\, (|G_1|-1)^{X-1}(|G_2|-1)^{X-1}
+ O\!\left(X(|G_1|-1)^X(|G_2|-1)^X\right).
\]
3.2. A conjugacy class of commutators contains six Wicks commutators on average. We
need to count the number of conjugacy classes containing at least one generic Wicks commutator. As
before, let C be the conjugacy class of the Wicks commutator W ··= Aα1 Bβ1 Cα2 A−1 β2 B −1 α3 C −1 β3 ,
with (n1 , n2 , n3 ) a partition of X − 3 and |A| = 2n1 + 1, |B| = 2n2 + 1, and |C| = 2n3 +
1. We wish to show that on average, C does not contain generic Wicks commutators other
than the six obvious ones: W , Bβ1 Cα2 A−1 β2 B −1 α3 C −1 β3 Aα1 , Cα2 A−1 β2 B −1 α3 C −1 β3 Aα1 Bβ1 ,
A−1 β2 B −1 α3 C −1 β3 Aα1 Bβ1 Cα2 , B −1 α3 C −1 β3 Aα1 Bβ1 Cα2 A−1 β2 , and C −1 β3 Aα1 Bβ1 Cα2 A−1 β2 B −1 α3 .
We wish to show that any other cyclic conjugate W ′ of W is not on average a Wicks commutator.
Suppose the contrary. One of the ways this can happen is if there is such a W ′ that is of the
form (8), i.e., there exists a partition p′ = (m1 , m2 , m3 ) of X − 3 into three parts such that
W ′ = w1 α′1 w2 β1′ w3 α′2 w1−1 β2′ w2−1 α′3 w3−1 β3′
for subwords w1 , w2 , and w3 of lengths 2m1 +1, 2m2 +1, and 2m3 +1 given by p′ , and α′1 , α′2 , α′3 , β1′ , β2′ , β3′ ∈
G2 . We will see that the argument showing that these exceptions are negligible also shows that the
exceptions such that W ′ is of the other Wicks-commutator forms are also negligible.
Suppose the number of letters of the conjugation is 2ℓ, where ℓ ≤ (n3 + 1)/2 is arbitrary. For
the desired uniformity of our presented argument, we assume that ℓ > 0, although the exceptions
in the ℓ = 0 case can be bounded similarly. We decompose C = DEF without cancellation so that
|D| = |F | = 2ℓ − 1. Label the letters of W by A = a1 ā1 a2 ā2 · · · ān1 an1 +1 , B = b1 b̄1 b2 b̄2 · · · b̄n2 bn2 +1 ,
D = d1 d¯1 · · · d¯ℓ−1 dℓ , E = ē1 e2 ē2 · · · en3 −2ℓ+2 ēn3 −2ℓ+2 , and F = f1 f¯1 · · · f¯ℓ−1 fℓ ; note that the barred
letters denote the G2 -letters and the non-barred letters, the G1 -letters. Consider the cyclic conjugate W ′ ··= D −1 β3 Aα1 Bβ1 DEF α2 A−1 β2 B −1 α3 F −1 E −1 of W . We wish to show that on average,
W ′ is not a Wicks commutator.
The subwords w1 , w2 , and w3 are comprised of the letters
(3.1)   dℓ−1 , . . . , d1−1 , β3 , a1 , . . . , an1 +1 , α1 , b1 , . . . , bn2 +1 , β1 , d1 , . . . , dℓ , ē1 , . . . , ēn3 −2ℓ+2 ,
except three of these letters instead correspond to α′1 , α′2 , and α′3 and are thus omitted. Then, note
that the second half of W ′ can be considered in two forms:
F α2 A−1 β2 B −1 α3 F −1 E −1 = w1−1 β2′ w2−1 α′3 w3−1 β3′ .
Equivalently, this equality can be written as
(3.2)   EF α3−1 Bβ2−1 Aα2−1 F −1 = β3′−1 w3 α′3−1 w2 β2′−1 w1 ,
where we reiterate that we substitute the appropriate letters of (3.1), in the correct order, for
w3 , w2 , and w1 .
Consider the function g mapping the ordered set of symbols of the left-hand side of (3.2),
A ··= {ē1 , . . . , ēn3 −2ℓ+2 , f1 , . . . , fℓ , α3−1 , b1 , . . . , bn2 +1 , β2−1 , a1 , . . . , an1 +1 , α2−1 , fℓ−1 , . . . , f1−1 }
to the ordered set B of symbols of the right-hand side of (3.2), which when reordered are comprised of
the letters of (3.1) except we replace the (2m1 +2)th, (2m1 +2m2 +4)th, and (2m1 +2m2 +2m3 +6)th
leftmost letters in (3.1) with β2′−1 , α′3−1 , and β3′−1 ; note that the (2m1 + 2m2 + 2m3 + 6)th letter is
always ēn3 −2ℓ+2 . Specifically, g maps the ith leftmost letter of the left-hand side of (3.2) to the ith
leftmost letter of the right-hand side.
First, suppose g has no fixed points (i such that g(i) = i). Then, use an algorithm similar to the
one used in Section 2 to conclude that there are ≤ X/2 degrees of freedom
for the G1 -letters of
ABDEF , and likewise for the G2 -letters. So, W must be one of only O (|G1 | − 1)X/2 (|G2 | − 1)X/2
choices (for each choice of ℓ and p′ ).
Now, suppose that there exists an i such that g(i) = i. Such fixed points i must be letters of A,
B, or E. We first consider the case that all the fixed points are letters of only one of A, B, and E.
In this case, we consider the following subcases for W , p, and p′ :
Case 1. Suppose that the fixed points are letters of E. Then, all of the fixed points must be
in one of w2 and w3 ; they cannot be in w1 since this would mean that w1 contains e1 , but e1 is
necessarily located at different positions in the left-hand side and right-hand side of (3.2). If all the
fixed points are in w2 , then we require that
1 + (2m3 + 1) + 1 = (2n3 − 4ℓ + 3) − ((2m2 + 1) + 1 + (2m3 + 1) + 1) ,
in order for the first letter of w2 to be at the same position in both the left-hand and right-hand
side. Hence, we have m2 + 2m3 = n3 − 2ℓ − 2, which means there are ≤ (n3 − 2ℓ − 2)/2 choices for
p′ parametrized by m3 ≤ (n3 − 2ℓ − 2)/2 . For each such choice of p′ , there are
2n3 − 4ℓ + 3 − 2 ((2m3 + 1) + 1 + 1) = 2n3 − 4ℓ − 4m3 − 3
fixed letters excluding the first 2m3 + 3 and the last 2m3 + 3 letters of E, with n3 − 2ℓ − 2m3 − 1
of them in G1 and n3 − 2ℓ − 2m3 − 2 in G2 . There are ≤ (X − (n3 − 2ℓ − 2m3 − 1)) /2 degrees
of freedom for the non-fixed letters in G1 and ≤ (X − (n3 − 2ℓ − 2m3 − 2)) /2 degrees of freedom
for the non-fixed letters in G2 . Thus, counting across all choices of values for the letters, p, p′ , and ℓ,
we have that the number of additional Wicks commutators arising from this case is bounded from
above by
\[
\sum_{n_1=0}^{X-3}\sum_{n_2=0}^{X-3-n_1}\sum_{\ell=1}^{\lfloor (X-2-n_1-n_2)/2\rfloor}\sum_{m_3=0}^{\lfloor (n_3-2\ell-2)/2\rfloor}
(|G_1|-1)^{\frac{X+n_3-2\ell-2m_3-1}{2}}\,(|G_2|-1)^{\frac{X+n_3-2\ell-2m_3-2}{2}}
\ll (|G_1|-1)^X(|G_2|-1)^X,
\]
which is dominated by our error term.
Next, suppose the fixed letters are in w3 . Then, it is necessary that g(ē1 ) = β3′−1 and w3 is given
by E with the first and last letters omitted. Thus, we have that m3 = n3 − 2ℓ, so the number of
possible choices for p′ is at most the number of partitions of X − 3 − n3 + 2ℓ into two nontrivial
parts, which is X − 3 − n3 + 2ℓ. For each choice of p′ , there are n3 − 2ℓ + 1 fixed letters in G1 and
n3 − 2ℓ fixed letters in G2 . Additionally, there are ≤ (X − (n3 − 2ℓ + 1)) /2 degrees of freedom for
the non-fixed letters in G1 and ≤ (X − (n3 − 2ℓ)) /2 degrees of freedom for the non-fixed letters in
G2 . Counting across all choices of values for the letters, p, p′ , and ℓ, we have that the number of
additional Wicks commutators arising from this case is bounded from above by
\[
\sum_{n_1=0}^{X-3}\sum_{n_2=0}^{X-3-n_1}\sum_{\ell=1}^{\lfloor (X-2-n_1-n_2)/2\rfloor}
(X-3-n_3+2\ell)\,(|G_1|-1)^{\frac{X-n_3+2\ell-1}{2}}\,(|G_2|-1)^{\frac{X-n_3+2\ell}{2}}
\ll (|G_1|-1)^X(|G_2|-1)^X,
\]
which is dominated by our error term.
Case 2. Suppose that the fixed points are letters of B. Then, all of the fixed points must be in
one of w1 , w2 , and w3 . First, suppose they are in w1 . This requires that w1 = D −1 β3 Aα1 Bβ1 V ,
where V is the left subword of DE having length 2n1 + 2ℓ + 1 (the length of Aα2−1 F −1 ). All letters
of B are thus included in w1 . We have
2m1 + 1 = (2ℓ − 1) + 1 + (2n1 + 1) + 1 + (2n2 + 1) + 1 + (2n1 + 2ℓ + 1) = 4n1 + 2n2 + 4ℓ + 5,
so m1 = 2n1 + n2 + 2ℓ + 2. Thus, the number of possible choices for p′ is at most the number of
partitions of X − 3 − 2n1 − n2 − 2ℓ − 2 into two nontrivial parts, which is X − 5 − 2n1 − n2 − 2ℓ ≤
X − 3 − n1 − n2 − 2ℓ (the latter is guaranteed to be nonnegative for any choice of p). The fixed
letters are precisely the letters of B, so for each choice of p′ , we have n2 + 1 fixed G1 -letters, n2
fixed G2 -letters, ≤ (X − n2 − 1)/2 degrees of freedom for the non-fixed G1 -letters, and ≤ (X − n2 )/2
degrees of freedom for the non-fixed G2 -letters. Counting across all choices of values for the letters,
p, p′ , and ℓ, we have that the number of additional Wicks commutators arising from this case is
bounded from above by
\[
\sum_{n_1=0}^{X-3}\sum_{n_2=0}^{X-3-n_1}\sum_{\ell=1}^{\lfloor (X-2-n_1-n_2)/2\rfloor}
(X-3-n_1-n_2-2\ell)\,(|G_1|-1)^{\frac{X+n_2+1}{2}}\,(|G_2|-1)^{\frac{X+n_2}{2}}
\ll (|G_1|-1)^X(|G_2|-1)^X,
\]
which is dominated by our error term.
Next, suppose that the fixed points are in w2 . Then, we require that the difference between the
lengths of EF α3−1 (length 2n3 − 2ℓ + 3) and β2−1 Aα2−1 F −1 (length 2n1 + 2ℓ + 2) is the same as that
between β3′−1 w3 α′3−1 (length 2m3 + 3) and β2′−1 w1 (length 2m1 + 2). Furthermore, the number of
(fixed) letters of B in w2 is 2n2 + 1 if
j ··= (2n3 − 2ℓ + 3 − (2m3 + 3))/2 = (2n1 + 2ℓ + 2 − (2m1 + 2))/2
is negative and 2(n2 − j) + 1 if j ≥ 0. First, suppose that j ≥ 0. In this case, p′ is determined by
the choice of j ≤ n2 /2, for which there are n2 − 2j + 1 fixed G1 -letters and n2 − 2j fixed G2 -letters
of B. There are ≤ (X − (n2 − 2j + 1)) /2 degrees of freedom for the non-fixed letters of G1 and
≤ (X − n2 + 2j)/2 degrees of freedom for the non-fixed letters of G2 , so overall, we can count
across all choices of values for the letters, p, j, and ℓ to get that the number of additional Wicks
commutators arising from this case is bounded from above by
\[
\sum_{n_1=0}^{X-3}\sum_{n_3=0}^{X-3-n_1}\sum_{\ell=1}^{\lfloor n_3/2\rfloor}\sum_{j=0}^{\lfloor (X-3-n_1-n_3)/2\rfloor}
(|G_1|-1)^{\frac{X+n_2-2j+1}{2}}\,(|G_2|-1)^{\frac{X+n_2-2j}{2}}
\ll (|G_1|-1)^X(|G_2|-1)^X,
\]
which is dominated by our error term.
Now, suppose that j < 0. In this case, −j = m1 −n1 −ℓ = m3 −n3 +ℓ is a positive integer satisfying
(n1 +ℓ−j)+(n3 −ℓ−j) = m1 +m3 ≤ X−3, which gives the condition −j ≤ (X−3−n1 −n3 )/2 = n2 /2.
Note that p′ is determined by the choice of −j, for which there are n2 + 1 fixed G1 -letters and n2
fixed G2 -letters, all of which are in B. The non-fixed letters in G1 have ≤ (X − n2 − 1)/2 degrees
of freedom and those in G2 have ≤ (X − n2 )/2 degrees of freedom. Overall, we can count across
all choices of values for the letters, p, −j, and ℓ to get that the number of additional Wicks
commutators arising from this case is bounded from above by
\[
\sum_{n_1=0}^{X-3}\sum_{n_2=0}^{X-3-n_1}\sum_{\ell=1}^{\lfloor (X-2-n_1-n_2)/2\rfloor}\sum_{-j=1}^{\lfloor n_2/2\rfloor}
(|G_1|-1)^{\frac{X+n_2-2j+1}{2}}\,(|G_2|-1)^{\frac{X+n_2-2j}{2}}
\ll (|G_1|-1)^X(|G_2|-1)^X,
\]
which is dominated by our error term.
Finally, suppose that the fixed points are in w3 . This requires that w3 is given by V Bβ1 DE with
the last letter omitted, where V is the right subword of D −1 β3 Aα1 having length (2n3 − 4ℓ + 3) +
(2ℓ − 1) = 2n3 − 2ℓ + 2, one less than that of EF α3−1 . All letters of B are thus included in w3 . We
have that
2m3 + 1 = (2n3 − 2ℓ + 2) + (2n2 + 1) + 1 + (2ℓ − 1) + (2n3 − 4ℓ + 2) = 2n2 + 4n3 − 4ℓ + 5,
so m3 = n2 + 2n3 − 2ℓ + 2. Thus, the number of possible choices for p′ is at most the number of
partitions of X − 5 − n2 − 2n3 + 2ℓ into two nontrivial parts, which is X − 5 − n2 − 2n3 + 2ℓ ≤
X − 3 − n2 + 2ℓ (the latter is guaranteed to be nonnegative for any choice of p). The fixed letters
are precisely the letters of B, so there are n2 + 1 fixed letters of G1 and n2 fixed letters of G2 , with
the non-fixed G1 -letters having ≤ (X − n2 − 1)/2 degrees of freedom and the non-fixed G2 -letters
having ≤ (X − n2 )/2 degrees of freedom. Counting across all choices of values for the letters, p, p′ ,
and ℓ, we have that the number of additional Wicks commutators arising from this case is bounded
from above by
\[
\sum_{n_1=0}^{X-3}\sum_{n_2=0}^{X-3-n_1}\sum_{\ell=1}^{\lfloor (X-2-n_1-n_2)/2\rfloor}
(X-3-n_2+2\ell)\,(|G_1|-1)^{\frac{X+n_2+1}{2}}\,(|G_2|-1)^{\frac{X+n_2}{2}}
\ll (|G_1|-1)^X(|G_2|-1)^X,
\]
which is dominated by our error term.
Case 3. Suppose that the fixed points are letters of A. Then, all of the fixed points must be in
one of w1 and w2 ; they cannot be in w3 , since then there would be more than 2ℓ letters right of A.
First, suppose they are in w1 . This requires that w1 = D −1 β3 AV , where V is the left subword
of α1 Bβ1 DE having length 2ℓ, the length of α2−1 F −1 . The fixed letters are then precisely the
letters of A, so there are n1 + 1 fixed letters of G1 and n1 fixed letters of G2 . The non-fixed G1 -letters have ≤ (X − n1 − 1)/2 degrees of freedom and the non-fixed G2 -letters have ≤ (X − n1 )/2
degrees of freedom. The number of possible choices for p′ is at most the number of partitions of
X − 3 − m1 = X − 3 − (2ℓ + 2n1 + 1 + 2ℓ) = X − 4 − 2ℓ − n1 into two nontrivial parts, which
is ≤ X − 3 − 2ℓ − n1 (the latter is guaranteed to be nonnegative for any choice of p). Counting
across all choices of values for the letters, p, p′ , and ℓ, we have that the number of additional Wicks
commutators arising from this case is bounded from above by
\[
\sum_{n_1=0}^{X-3}\sum_{n_2=0}^{X-3-n_1}\sum_{\ell=1}^{\lfloor (X-2-n_1-n_2)/2\rfloor}
(X-3-2\ell-n_1)\,(|G_1|-1)^{\frac{X+n_1+1}{2}}\,(|G_2|-1)^{\frac{X+n_1}{2}}
\ll (|G_1|-1)^X(|G_2|-1)^X,
\]
which is dominated by our error term.
Finally, suppose the fixed points are in w2 . Since the length difference 2m1 − 2ℓ + 2 between the
lengths of D −1 and w1 must be equal to the length difference 2n1 + 2ℓ − 2m1 − 2m2 − 2 between
the lengths of D −1 β3 A and w1 α′1 w2 , we require that m2 = n1 + 2ℓ − 2m1 − 2. Note then that p′
is parametrized by m1 ≤ (n1 + 2ℓ − 2)/2. For a given choice of p′ , if m1 ≤ ℓ, then we have n1 + 1
fixed letters of G1 and n1 fixed letters of G2 , with ≤ (X − n1 − 1)/2 degrees of freedom for the
non-fixed G1 -letters and ≤ (X − n1 )/2 degrees of freedom for the non-fixed G2 -letters. This gives
≤ (X + n1 + 1)/2 ≤ (X + n1 − m1 + ℓ + 1)/2 overall degrees of freedom for the G1 -letters, as well
as ≤ (X + n1 − m1 + ℓ)/2 ones for the G2 -letters. If m1 > ℓ, then we have 2n1 − 2(m1 − ℓ) + 1 fixed
letters of G1 and 2n1 −2(m1 −ℓ) fixed letters of G2 , so analogously we have ≤ (X +n1 −m1 +ℓ+1)/2
overall degrees of freedom for the G1 -letters, as well as ≤ (X + n1 − m1 + ℓ)/2 ones for the
G2 -letters. Thus, counting across all choices of values for the letters, p, p′ , and ℓ, we have that the
number of additional Wicks commutators arising from this case is bounded from above by
\[
\sum_{n_1=0}^{X-3}\sum_{n_2=0}^{X-3-n_1}\sum_{\ell=1}^{\lfloor (X-2-n_1-n_2)/2\rfloor}\sum_{m_1=0}^{\lfloor (n_1+2\ell-2)/2\rfloor}
(|G_1|-1)^{\frac{X+n_1-m_1+\ell+1}{2}}\,(|G_2|-1)^{\frac{X+n_1-m_1+\ell}{2}}
\ll (|G_1|-1)^X(|G_2|-1)^X,
\]
which is dominated by our error term.
Next, we suppose that the fixed letters of g are in two of the three subwords A, B, and E.
Consider the following subcases:
Case 1. Suppose the fixed letters of g are in A and B. It is necessary that the fixed letters
of A and those of B are in wi and wj , respectively, such that i < j; otherwise, the fixed letters
of A would come before the fixed letters of B, a contradiction. First, suppose that the fixed
letters of B are in w2 , which implies that the fixed letters of A are in w1 . Then, we require
that w1 = D −1 β3 AV , where V is the left subword of α1 Bβ1 DE having length 2ℓ. This gives
2m1 + 1 = 2n1 + 4ℓ + 1, i.e, m1 = n1 + 2ℓ. Furthermore, since we have fixed letters of B, we require
that V does not include all of B, i.e., 2n2 + 2 > 2ℓ, or equivalently, n2 ≥ ℓ. Next, for the fixed
letters of B to match in position, we require that w2 ends at the letter b2n2 −2ℓ+1 , which gives us
2m2 + 1 = (2n2 − 2ℓ + 1) − (2ℓ + 1) + 1 = 2n2 − 4ℓ + 1, i.e., m2 = n2 − 2ℓ ≥ 0. Then, m3 is
automatically determined, and in particular, w3 is the subword of bn2 −ℓ+2 · · · bn2 +1 β1 DE omitting
the rightmost letter. For this p′ corresponding to p, we have 2n1 + 1 fixed letters of A (n1 + 1
letters of G1 and n1 letters of G2 ) and 2n2 − 4ℓ + 1 fixed letters of B (n2 − 2ℓ + 1 letters of G1 and
n2 − 2ℓ letters of G2 ). Next, we bound the degrees of freedom of the non-fixed letters. Note that
g maps the letters of EF α3−1 b1 · · · bℓ−1 b̄ℓ−1 to those of β3′−1 bn2 −ℓ+2 · · · bn2 +1 β1 DE in order, but g
also maps fℓ−1 , . . . , f̄2−1 f1−1 to b1 , . . . , b̄ℓ−1 , bℓ and bn2 −ℓ+2 , . . . , bn2 +1 , to dℓ−1 , . . . , d1−1 . Thus, arguing
inductively by translation, we see that choosing the letters of F determines the letters of E,
and thus also determines those of D, thereby determining all non-fixed letters (while not caring
about the constant number of αi and βi letters). There are ℓ G1 -letters and ℓ − 1 G2 -letters in F .
Counting across all choices of values for the letters, p, and ℓ, we have that the number of additional
Wicks commutators arising from this case is bounded from above by
\[
\sum_{n_1=0}^{X-3}\sum_{n_2=0}^{X-3-n_1}\sum_{\ell=1}^{\lfloor (X-2-n_1-n_2)/2\rfloor}
(|G_1|-1)^{(n_1+1)+(n_2-2\ell+1)+\ell}\,(|G_2|-1)^{n_1+(n_2-2\ell)+(\ell-1)}
\ll X(|G_1|-1)^X(|G_2|-1)^X,
\]
which is dominated by our error term.
Next, suppose that the fixed letters of B are in w3 . Then, we require that w3 = V Bβ1 DE, where
V is the right subword of D −1 β3 Aα1 having length 2n3 −2ℓ+2, the length of EF . Furthermore, since
we have fixed letters of A, we require that V does not include all of A, i.e., 2n3 − 2ℓ + 2 < 2n1 + 2, or
in other words, n3 < n1 +ℓ. Next, for the fixed letters of A to match in position, we require that they
are in w2 , and specifically that w2 = an3 −ℓ+1 · · · an1 −n3 +ℓ . This gives us 2m2 +1 = 2n1 −4n3 +4ℓ−1,
and thus m2 = n1 − 2n3 + 2ℓ − 1 > 0. It follows that w1 = D −1 β3 a1 · · · an3 −ℓ . In particular, m1
is automatically determined, and for this p′ corresponding to p, we have n2 + 1 fixed G1 -letters
and n2 fixed G2 -letters of B, as well as n1 − 2n3 + 2ℓ fixed G1 -letters and n1 − 2n3 + 2ℓ − 1 fixed
G2 -letters of A. Next, we bound the degrees of freedom of the non-fixed letters. Note that g maps
the letters of an1 −n3 +ℓ+1 · · · an1 α2−1 F −1 to those of D −1 β3 a1 · · · an3 −ℓ in order. However, g maps
d1 , . . . , dℓ , ē1 , . . . , en3 −2ℓ+1 to a1 , ā1 , . . . , an3 −ℓ+1 , ān3 −ℓ+1 . Also, g maps an1 −n3 +ℓ+1 , · · · , an1 +1 , to
e2 , . . . , ēn3 −2ℓ+2 , f1 , . . . , fℓ . Thus, arguing inductively by translation, we see that choosing the
letters of F determines the letters of E, and thus also determines those of D, thereby determining
all non-fixed letters (while not caring about the constant number of αi and βi letters). Counting
across all choices of values for the letters, p, and ℓ, we have that the number of additional Wicks
commutators arising from this case is bounded from above by
\[
\sum_{n_1=0}^{X-3}\sum_{n_2=0}^{X-3-n_1}\sum_{\ell=1}^{\lfloor (X-2-n_1-n_2)/2\rfloor}
(|G_1|-1)^{(n_1-2n_3+2\ell)+(n_2+1)+\ell}\,(|G_2|-1)^{(n_1-2n_3+2\ell-1)+n_2+(\ell-1)}
\ll X(|G_1|-1)^X(|G_2|-1)^X,
\]
which is dominated by our error term.
Case 2. Suppose the fixed letters of g are in B and E. Similarly to before, it is necessary that
the fixed letters of B and those of E are in wi and wj , respectively, such that i < j. First, suppose
that the fixed letters of B are in w1 . Then, we require that w1 = D −1 β3 Aα1 BV , where V is the
−1 . Then, w must start
left subword of β1 DE having length 2n1 + 2ℓ + 2, the length of β3−1 Aα−1
2
2 F
with en1 +3 , which means in order to have the letters of E match, we must have 2m3 + 3 = 2n1 + 3,
i.e., m3 = n1 . Since (2n1 + 2) + 1 + (2m2 + 1) + 1 + (2m3 + 1) + 1 = 2n3 − 4ℓ + 3 (counting the
letters of E in two ways), we have m2 = n3 − 2n1 − 2ℓ − 2 > 0. There are n3 − 2n1 − 2ℓ − 1
fixed G1 -letters and n3 − 2n1 − 2ℓ − 2 fixed G2 -letters of E in w2 . Also, there are n2 + 1 fixed
G1 -letters and n2 fixed G2 -letters of B. Next, we bound the degrees of freedom of the non-fixed
letters. Note that g maps the letters of Aα2−1 F −1 to those of Dē1 · · · ēn1 +1 en1 +2 in order. Likewise,
by observing the left end of w1 , we see that g maps the letters of en3 −n1 −2ℓ+2 · · · ēn3 −2ℓ+2 F to those
of D −1 β3 A in order. However, we also have that g maps the letters of ē1 · · · ēn1 +1 en1 +2 to those of
β3′−1 en3 −n1 −2ℓ+2 · · · ēn3 −2ℓ+1 en3 −2ℓ+2 in order, which overall gives us the ordered equality (by g) of
letters
\[
A\alpha_2^{-1}F^{-1}\,\bar e_{n_3-2\ell+2}F
= D\bar e_1\cdots e_{n_1+2}\,\bar e_{n_3-2\ell+2}F
= D\beta_3'^{-1}e_{n_3-n_1-2\ell+2}\cdots e_{n_3-2\ell+2}\,\bar e_{n_3-2\ell+2}F
= D\beta_3'^{-1}D^{-1}\beta_3 A.
\]
Thus, arguing inductively by translation, we see that choosing the letters of F determines the
letters of A, and thus also determines those of D, thereby determining all non-fixed letters (while
not caring about the constant number of exceptional letters). Counting across all choices of values
for the letters, p, and ℓ, we have that the number of additional Wicks commutators arising from
this case is bounded from above by
\[
\sum_{n_1=0}^{X-3}\sum_{n_2=0}^{X-3-n_1}\sum_{\ell=1}^{\lfloor (X-2-n_1-n_2)/2\rfloor}
(|G_1|-1)^{(n_2+1)+(n_3-2n_1-2\ell-1)+\ell}\,(|G_2|-1)^{n_2+(n_3-2n_1-2\ell-2)+(\ell-1)}
\ll X(|G_1|-1)^X(|G_2|-1)^X,
\]
which is dominated by our error term.
Next, suppose that the fixed letters of B are in w2 , which implies the fixed letters of E are
in w3 . Then, we require that w3 = e2 · · · en3 −2ℓ+2 . Furthermore, in order for the letters of B to
match in position, we must have that w2 = V α1 Bβ1 D, where V is the right subword of D −1 β3 A
having length 2ℓ − 1. We thus have n3 − 2ℓ + 1 fixed G1 -letters and n3 − 2ℓ fixed G2 -letters in
E, as well as n2 + 1 fixed G1 -letters and n2 fixed G2 -letters in B. Now, we bound the degrees of
freedom of the non-fixed letters. First, suppose that n1 > ℓ. Note then that g maps the letters
of F to those of an1 −ℓ+2 · · · an1 +1 in order, and likewise maps the letters of Aα2−1 F −1 to those of
Dβ2′−1 D −1 β3 a1 · · · an1 −ℓ+1 in order. Thus, we have the ordered equality (by g) of letters
\[
a_1\cdots a_{n_1-\ell+1}\bar a_{n_1-\ell+1}\,F\,\alpha_2^{-1}F^{-1}
= a_1\cdots a_{n_1-\ell+1}\bar a_{n_1-\ell+1}\,(a_{n_1-\ell+2}\cdots a_{n_1+1})\,\alpha_2^{-1}F^{-1}
= D\beta_2'^{-1}D^{-1}\beta_3\, a_1\cdots a_{n_1-\ell-1}.
\]
Thus, arguing inductively by translation, we see that choosing the letters of F determines
the letters of a1 · · · an1 −ℓ−1 , and thus also determines those of the rest of A and of D, thereby
determining all non-fixed letters (while not caring about the constant number of exceptional letters).
In the other case of n1 ≤ ℓ, the notation above for a1 · · · an1 −ℓ−1 becomes inviable, but nevertheless
we can use a similar argument as above to conclude that choosing the letters of F determines all
the non-fixed letters. Counting across all choices of values for the letters, p, and ℓ, we have that
the number of additional Wicks commutators arising from this case is bounded from above by
\[
\sum_{n_1=0}^{X-3}\sum_{n_2=0}^{X-3-n_1}\sum_{\ell=1}^{\lfloor (X-2-n_1-n_2)/2\rfloor}
(|G_1|-1)^{(n_2+1)+(n_3-2\ell+1)+\ell}\,(|G_2|-1)^{n_2+(n_3-2\ell)+(\ell-1)}
\ll X(|G_1|-1)^X(|G_2|-1)^X,
\]
which is dominated by our error term.
Case 3. Finally, suppose the fixed letters of g are in A and E. Similarly to before, it is necessary
that the fixed letters of A and those of E are in wi and wj , respectively, such that i < j. First, suppose that the fixed letters of E are in w3 . Then, we require that w3 = e2 · · · en3 −2ℓ+2 . Furthermore,
in order for the letters of A to match in position, we need that the fixed letters of A are contained
in w1 , and in particular, that w1 = D −1 β3 Aα1 V , where V is the left subword of Bβ1 DE having
length 2ℓ − 1. We thus have n3 − 2ℓ + 1 fixed G1 -letters and n3 − 2ℓ fixed G2 -letters in E, as well as
n1 + 1 fixed G1 -letters and n1 fixed G2 -letters in A. Now, we bound the degrees of freedom of the
non-fixed letters. First, suppose that n2 > ℓ. Note then that g maps the letters of F −1 to those
of b1 · · · bℓ in order, and likewise maps the letters of F α3−1 B to those of bℓ+1 · · · bn2 +1 β1 Dβ2′−1 D −1 .
Thus, we have the ordered equality (by g) of letters
\[
F\alpha_3^{-1}F^{-1}\,\bar b_\ell b_{\ell+1}\cdots b_{n_2+1}
= F\alpha_3'^{-1}(b_1\cdots b_\ell)\,\bar b_\ell b_{\ell+1}\cdots b_{n_2+1}
= b_{\ell+1}\cdots b_{n_2+1}\,\beta_1 D\beta_2'^{-1}D^{-1}.
\]
Arguing inductively by translation, we see that choosing the letters of F determines the letters
of B and of D, thereby determining all non-fixed letters. In the other case of n2 ≤ ℓ, the notation above
for b1 · · · bℓ becomes inviable, but nevertheless we can use a similar argument as above to conclude
that F determines all the non-fixed letters. Counting across all choices of values for the letters, p,
and ℓ, we have that the number of additional Wicks commutators arising from this case is bounded
from above by
\[
\sum_{n_1=0}^{X-3}\sum_{n_2=0}^{X-3-n_1}\sum_{\ell=1}^{\lfloor (X-2-n_1-n_2)/2\rfloor}
(|G_1|-1)^{(n_1+1)+(n_3-2\ell+1)+\ell}\,(|G_2|-1)^{n_1+(n_3-2\ell)+(\ell-1)}
\ll X(|G_1|-1)^X(|G_2|-1)^X,
\]
which is dominated by our error term.
Next, suppose that the fixed letters of E are in w2 . Then, the fixed letters of A are contained in
w1 , which requires that w1 = D −1 β3 Aα1 V , where V is the left subword of Bβ1 DE having length
2ℓ − 1. But then w2 must start on a letter not in E, which makes it impossible for the letters of E
to match in position.
Finally, if there are fixed letters in A, B, and E, then it is necessary that ℓ = 0 and p = p′ ,
which does not need to be considered.
If ℓ > (n3 +1)/2, then we can think of our commutator as a cyclic conjugate of C −1 ABCA−1 B −1
such that the letters are moved from left to right. A symmetric argument like above gives us the
conclusion that W ′ = D −1 β3 Aα1 Bβ1 DEF α2 A−1 β2 B −1 α3 F −1 E −1 is on average not a generic
Wicks commutator of the form w1 α′1 w2 β1′ w3 α′2 w1−1 β2′ w2−1 α′3 w3−1 β3′ . Note that this entire argument
can then be repeated mutatis mutandis to show that W ′ is on average also not a Wicks commutator
of the other generic form (9) or of the other possible forms (3) − (7), since the only difference from
the previous case is a constant number of exceptional letters and possibly setting one or more
of the subwords w1 , w2 , and w3 to be trivial, which overall can only affect error bounds by at
worst a multiplicative constant. Likewise, a similar argument shows that W ′ is also not a Wicks
commutator for the case of ℓ = 0, again since the only difference from the previous case is a constant
number of exceptional letters. Finally, a similar argument shows that even if W is taken to be of
the generic form (9) rather than (8), W is on average only decomposable as a Wicks commutator
in one prescribed way (i.e., with respect to a unique partition of X − 3), and any cyclic conjugate
of W is on average not a Wicks commutator.
We have thus shown that the number of conjugacy classes of commutators with length 4X is
given by
\begin{align*}
&\frac{1}{6}\left(\frac{1}{2}\left((|G_1|-1)(|G_2|-2)^2+(|G_1|-2)^2(|G_2|-1)\right)X^2(|G_1|-1)^{X-1}(|G_2|-1)^{X-1}
+ O\!\left(X(|G_1|-1)^X(|G_2|-1)^X\right)\right)\\
&= \frac{1}{12}\left((|G_1|-1)(|G_2|-2)^2+(|G_1|-2)^2(|G_2|-1)\right)X^2(|G_1|-1)^{X-1}(|G_2|-1)^{X-1}
+ O\!\left(X(|G_1|-1)^X(|G_2|-1)^X\right),
\end{align*}
as needed.
4. Concluding Remarks
There are a number of directions in which Theorems 1.2 and 1.4 can be generalized. First,
one can ask: how many conjugacy classes of commutators with word length k are in an arbitrary
finitely-generated free product G = G1 ∗ · · · ∗ Gn , with each Gi nontrivial and having symmetric
generating set Si = {g1^{(i)} , . . . , gmi^{(i)} , (g1^{(i)})−1 , . . . , (gmi^{(i)})−1 }? While one can define the word length in
this context to be with respect to an arbitrary generating set S, a natural notion of length to use
in this setting would be with respect to the symmetric generating set
\[
S \coloneqq \bigcup_{i=1}^{n} S_i.
\]
We note that in the case that Gi is finite for all 1 ≤ i ≤ n, we can take Si = Gi \ {1}, for which S
is consistent with our choice of the set S of generators for G1 ∗ G2 in the statement of Theorem 1.4.
Counting conjugacy classes of commutators by word length for groups in this more general form
would have analogous geometric consequences as those discussed in Corollary 1.5. For example,
Hecke Fuchsian groups H(λ) for λ ≥ 2 have the presentation [13]
⟨S, Rλ : S^2 = I⟩ ≅ Z/2Z ∗ Z,
and a desire for geometric corollaries akin to those discussed in Section 1 motivates a result analogous to Theorem 1.2 and Theorem 1.4 in the case of the free product of a nontrivial finite group
and an infinite cyclic group.
To describe a second potential direction for generalization, let the n-commutators of a given
group be defined by the elements with trivial abelianization and commutator length n. A second
direction for generalizing Theorems 1.2 and 1.4 is to, for any n, count the number of conjugacy
classes of n-commutators with word length k in a group G taken to be either the free group Fr
or a finitely generated free product. This is natural to ask, given that Culler [7] has classified the
possible forms of n-commutators for a free group and Vdovina [26] has done this for an arbitrary free
product. In fact, Culler has also classified the possible forms that a product of n square elements
can take for a free group, so an analogous question can be asked for the number of conjugacy
classes comprised of products of n square elements. If one obtained the asymptotic number of n-commutators with length k (respectively, of n-square-element-products with length k), this would
also give a corollary analogous to Corollary 1.5. Specifically, given a connected CW-complex X
with fundamental group G, one would obtain the asymptotic number of free homotopy classes of
loops γ : S 1 → X with length k (in the generators of S) such that there exists a genus-n orientable
surface Y with one boundary component (respectively, a connected sum Y of n real projective
planes, with one boundary component) and a continuous map f : Y → X satisfying f (∂Y ) = Im γ,
but also that this statement does not hold when replacing n with any m ≤ n. We expect our
combinatorial method to also work in this generalized setting, as long as one has, for the given
group, an explicit list of the possible Wicks forms of n-commutators (or of products of n square
elements).
Acknowledgments
This work began at Princeton University as part of the author’s senior thesis advised by Peter
Sarnak, whom the author would like to thank for providing the number-theoretic motivation for
this problem, invaluable discussions/references, and constant encouragement.
The generalization of Theorem 1.4 to an arbitrary free product of two nontrivial finite groups
was done at Harvard University and was supported by the National Science Foundation Graduate
Research Fellowship Program (grant number DGE1745303). The author would like to thank Bena
Tshishiku for reading over the manuscript and providing many valuable suggestions. He would also
like to thank Keith Conrad, Noam Elkies, Curt McMullen, Hector Pasten, Xiaoheng Wang, and
Boyu Zhang for very helpful discussions.
References
[1] J. Bourgain, A. Gamburd, and P. Sarnak, Markoff triples and strong approximation, C. R. Math. Acad. Sci.
Paris 354 (2016), no. 2, 131–135. MR3456887
[2] P. Buser, Geometry and spectra of compact Riemann surfaces, Modern Birkhäuser Classics, Birkhäuser Boston,
Inc., Boston, MA, 2010. Reprint of the 1992 edition. MR2742784
[3] D. Calegari, Stable commutator length is rational in free groups, J. Amer. Math. Soc. 22 (2009), no. 4, 941–961.
MR2525776
[4] M. Chas, K. Li, and B. Maskit, Experiments suggesting that the distribution of the hyperbolic length of closed
geodesics sampling by word length is Gaussian, Exp. Math. 22 (2013), no. 4, 367–371. MR3171098
[5] L. P. Comerford Jr., C. C. Edmunds, and G. Rosenberger, Commutators as powers in free products of groups,
Proc. Amer. Math. Soc. 122 (1994), no. 1, 47–52. MR1221722
[6] K. Conrad, SL2 (Z), 2017. Expository notes, http://www.math.uconn.edu/~ kconrad/blurbs/grouptheory/SL(2,Z).pdf.
[7] M. Culler, Using surfaces to solve equations in free groups, Topology 20 (1981), no. 2, 133–145. MR605653
[8] R. Fricke, Über die theorie der automorphen modulgrupper, Nachr. Akad. Wiss. Göttingen (1896), 91–101.
[9] R. Fricke and F. Klein, Vorlesungen über die theorie der automorphen functionen, Vol. 1, Teubner, Liepzig, 1897.
[10] A. Ghosh and P. Sarnak, Integral points on Markoff type cubic surfaces, arXiv e-prints (2017), available at
https://arxiv.org/abs/1706.06712.
[11] M. Gromov, Groups of polynomial growth and expanding maps, Inst. Hautes Études Sci. Publ. Math. 53 (1981),
53–73. MR623534
[12] E. Hecke, Über die bestimmung dirichletscher reihen durch ihre funktionalgleichung, Math. Ann. 112 (1936),
664–699.
[13] R. C. Lyndon and J. L. Ullman, Pairs of real 2-by-2 matrices that generate free products, Michigan Math. J. 15
(1968), 161–166. MR0228593
[14] C. Maclachlan and A. W. Reid, The arithmetic of hyperbolic 3-manifolds, Graduate Texts in Mathematics,
vol. 219, Springer-Verlag, New York, 2003. MR1937957
[15] A. Mann, How groups grow, London Mathematical Society Lecture Note Series, vol. 395, Cambridge University
Press, Cambridge, 2012. MR2894945
[16] G. A. Margulis, Certain applications of ergodic theory to the investigation of manifolds of negative curvature,
Funkcional. Anal. i Priložen. 3 (1969), no. 4, 89–90. MR0257933
[17] A. Markoff, Sur les formes quadratiques binaires indéfinies, Math. Ann. 15 (1879), 381–409.
[18] A. Markoff, Sur les formes quadratiques binaires indéfinies, Math. Ann. 17 (1880), 379–399.
[19] J. Milnor, A note on curvature and fundamental group, J. Differential Geometry 2 (1968), 1–7. MR0232311
[20] P. S. Park, Conjugacy growth of commutators, 2017. Thesis (A.B.)–Princeton University.
[21] I. Rivin, Growth in free groups (and other stories)—twelve years later, Illinois J. Math. 54 (2010), no. 1, 327–370.
MR2776999
[22] P. Sarnak, Prime geodesic theorems, ProQuest LLC, Ann Arbor, MI, 1980. Thesis (Ph.D.)–Stanford University.
MR2630950
[23] P. Sarnak, 2017. Private communication.
[24] M. P. Schützenberger, Sur l’equation a2+n = b2+m c2+p dans un groupe libre, C. R. Acad. Sci. Paris 248 (1959),
2435–2436.
[25] A. Selberg, Harmonic analysis and discontinuous groups in weakly symmetric Riemannian spaces with applications to Dirichlet series, J. Indian Math. Soc. (N.S.) 20 (1956), 47–87. MR0088511
[26] A. Vdovina, Products of commutators in free products, Internat. J. Algebra Comput. 7 (1997), no. 4, 471–485.
MR1459623
[27] R. M. Vogt, Sur les invariants fondamentaux des équations différentielles linéaires du second ordre, Ann. Sci.
École Norm. Sup. (3) 6 (1889), 3–71.
[28] M. J. Wicks, Commutators in free products, J. London Math. Soc. 37 (1962), 433–444. MR0142610
Department of Mathematics, Harvard University, Cambridge, MA 02138
E-mail address: [email protected]
arXiv:1803.01239v1 [] 3 Mar 2018
Krull dimension and regularity of binomial edge
ideals of block graphs
Carla Mascia, Giancarlo Rinaldo
University of Trento
∗
March 6, 2018
Abstract
We give a lower bound for the Castelnuovo-Mumford regularity of
binomial edge ideals of block graphs by computing the two distinguished
extremal Betti numbers of a new family of block graphs, called flower
graphs. Moreover we present a linear time algorithm to compute the
Krull dimension of binomial edge ideals of block graphs.
Introduction
In 2010, binomial edge ideals were introduced in [3] and appeared independently also in [13]. Let S = K[x1 , . . . , xn , y1 , . . . , yn ] be the polynomial ring in
2n variables with coefficients in a field K. Let G be a graph on vertex set [n]
and edges E(G). The ideal JG of S generated by the binomials fij = xi yj − xj yi
such that i < j and {i, j} ∈ E(G), is called the binomial edge ideal of G. Any
ideal generated by a set of 2-minors of a 2 × n-matrix of indeterminates may be
viewed as the binomial edge ideal of a graph.
For a set T ⊂ [n], let G[n]\T be the induced subgraph of G with vertex set
[n] \ T and G1 , . . . , Gc(T ) the connected components of G[n]\T . T is a cutset
of G if c(T \ {i}) < c(T ) for each i ∈ T , and we denote by C(G) the set of all
cutsets for G. In [3] and [13] the authors gave a nice description of the primary
decomposition of JG in terms of prime ideals induced by the set C(G) (see (2)).
Thanks to this result the following formula for the Krull dimension is obtained
\[
\dim(S/J_G) = \max_{T\in C(G)}\{\,n + c(T) - |T|\,\}. \tag{1}
\]
The second author in [14] described an algorithm to compute the primary decomposition (2), and hence the Krull dimension. Unfortunately, this algorithm
is exponential in time and space.
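To make the exponential baseline concrete, the following Python sketch (our own illustration, unrelated to the CoCoA implementation mentioned later; all names and the example graph are ours) evaluates formula (1) by enumerating all candidate cutsets of a small graph given as an edge list.

```python
from itertools import combinations

def components(vertices, edges):
    """Number of connected components of the induced subgraph on `vertices`."""
    vertices = set(vertices)
    adj = {v: set() for v in vertices}
    for a, b in edges:
        if a in vertices and b in vertices:
            adj[a].add(b)
            adj[b].add(a)
    seen, comps = set(), 0
    for v in vertices:
        if v not in seen:
            comps += 1
            stack = [v]
            while stack:
                u = stack.pop()
                if u not in seen:
                    seen.add(u)
                    stack.extend(adj[u] - seen)
    return comps

def is_cutset(T, V, edges):
    """T is a cutset if c(T \\ {i}) < c(T) for every i in T."""
    c = components(V - T, edges)
    return all(components(V - (T - {i}), edges) < c for i in T)

def krull_dimension(n, edges):
    """Brute-force evaluation of formula (1); exponential in n."""
    V = set(range(1, n + 1))
    best = n + components(V, edges)          # T = empty set always qualifies
    for k in range(1, n + 1):
        for T in map(set, combinations(V, k)):
            if is_cutset(T, V, edges):
                best = max(best, n + components(V - T, edges) - len(T))
    return best

# path on 4 vertices, a block graph: prints 5 = n + 1
print(krull_dimension(4, [(1, 2), (2, 3), (3, 4)]))
```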
A block graph, also known as clique tree, is a graph whose blocks are cliques.
In general, computing the depth of a ring is a difficult task. In [4] the authors
prove that when G is a block graph, depth S/JG = n + c and, equivalently,
proj dim S/JG = n − c where c is the number of connected components of G.
∗ Email
addresses: [email protected], [email protected]
Given any block graph, such a nice formula for the Krull dimension does not
exist, and we believe that is hard to obtain something that is better than (1).
Nevertheless, block graphs are the simplest non-trivial class for which we
can obtain a good algorithm to compute the Krull dimension. For example, by
an easy combinatorial argument, the cutsets of G are exactly the
sets of cutpoints of G. In Proposition 2.5 we present an algorithm that in linear
time and space computes the Krull dimension. The idea is to find a minimal
prime ideal of minimal height since it induces the Krull dimension of S/JG . We
have implemented the algorithm using CoCoA (see [2]), in the case G is a tree.
Another fundamental invariant that has been studied in depth is the Castelnuovo-Mumford regularity of binomial edge ideals. Lower and upper bounds for the regularity are due to Matsuda and Murai [12] and to Kiani and Saeedi Madani [9]. The case of the so-called proper interval graphs has been studied by Ene and Zarojanu [5]. Furthermore, Kiani and Saeedi Madani characterized all graphs whose binomial edge ideals have regularity 2 and regularity 3, see [10] and [11].
It is still an open problem to determine the regularity of the binomial edge ideal of a block graph in terms of the combinatorics of the graph. Recently, Herzog and the second author [7] computed one of the distinguished extremal Betti numbers of the binomial edge ideal of a block graph and classified all block graphs admitting precisely one extremal Betti number, thereby giving a natural lower bound for the regularity of any block graph. Jayanthan et al., in a yet unpublished paper (a revised version of [8]), obtained a related result for trees, a subclass of block graphs.
Inspired by these results we define a new class of graphs, namely the flower
graphs (see Definition 3.1 and Fig. 1), for which we compute the superextremal
Betti numbers (see Theorem 3.4) and the regularity (see Corollary 3.5). As a
consequence we obtain new lower bounds in Theorem 3.6 and Corollary 3.7 for
the regularity of any block graph.
1 On the height of minimal prime ideals of JG and decomposability of block graphs
We start this section by recalling the formula for the primary decomposition of a binomial edge ideal JG . Define
P_T(G) \;=\; \Bigl(\, \bigcup_{i \in T} \{x_i, y_i\},\; J_{\widetilde{G}_1}, \ldots, J_{\widetilde{G}_{c(T)}} \Bigr) \subseteq S
where G̃i , for i = 1, . . . , c(T ), denotes the complete graph on V (Gi ). PT (G) is
a prime ideal of height n − c(T ) + |T |, where |T | denotes the cardinality of T .
It holds
J_G \;=\; \bigcap_{T \in C(G)} P_T(G). \qquad (2)
We denote by M(G) the minimal prime ideals of JG , by Minh(G) ⊆ M(G) the
minimal prime ideals PT (G) of minimum height and by Maxh(G) ⊆ M(G) the
minimal prime ideals PT (G) of maximum height.
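Continuing the three-vertex path example given after (1), the decomposition (2) and the notation just introduced read as follows (the binomial edge ideal of an isolated vertex is the zero ideal, and K3 denotes the complete graph on {1, 2, 3}):

P_\emptyset(G) = J_{K_3}, \quad \operatorname{height} P_\emptyset(G) = 3 - 1 + 0 = 2; \qquad P_{\{2\}}(G) = (x_2, y_2), \quad \operatorname{height} P_{\{2\}}(G) = 3 - 2 + 1 = 2;
J_G = J_{K_3} \cap (x_2, y_2), \qquad M(G) = \operatorname{Minh}(G) = \operatorname{Maxh}(G) = \{ P_\emptyset(G),\, P_{\{2\}}(G) \}.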
A subset C of V (G) is called a clique of G if for all i, j ∈ C, with i ≠ j,
one has {i, j} ∈ E(G). A maximal clique is a clique that cannot be extended
by including one more adjacent vertex. A vertex v of a graph G is called a free
vertex if it belongs to only one maximal clique of G.
A connected subgraph of G that has no cutpoint and is maximal with respect
to this property is a block. G is called a block graph if all its blocks are complete
graphs. One can see that a graph G is a block graph if and only if it is a chordal
graph in which every two maximal cliques have at most one vertex in common.
Let G be a block graph. An endblock is a block having exactly one cutpoint.
Definition 1.1 Let G be a graph and v ∈ V (G). The clique degree of v, denoted
by cdeg(v), is the number of maximal cliques which v belongs to.
Definition 1.2 A graph G is decomposable if there exists a decomposition
G = G1 ∪ G2
(3)
with V (G1 ) ∩ V (G2 ) = {v} such that v is a free vertex of G1 and G2 . By a
recursive decomposition (3) applied to each G1 and G2 , after a finite number of
steps we obtain
G = G1 ∪ · · · ∪ Gr
(4)
where G1 , . . . , Gr are indecomposable and for 1 ≤ i < j ≤ r either V (Gi ) ∩
V (Gj ) = ∅ or V (Gi ) ∩ V (Gj ) = {v} and v is a free vertex of Gi and Gj . The
decomposition (4) is unique up to ordering and we say that G is decomposable
into indecomposable graphs G1 , . . . , Gr .
Lemma 1.3 Let G be a graph decomposable into G1 and G2 , with V (G1 ) ∩
V (G2 ) = {v}. Let PT (G) ∈ M(G). If v ∈ T , then height(PT \{v} (G)) =
height(PT (G)).
Proof. Let H1 , . . . , Hc denote the connected components induced by T , so that height(PT (G)) = n − c + |T |. Suppose v ∈ T . Then v induces exactly two connected components, say H1 and H2 . The connected components induced by T \ {v} are H1 ∪ H2 ∪ {v}, H3 , . . . , Hc , hence height(PT \{v} (G)) = n − (c − 1) + (|T | − 1) = height(PT (G)).
Proposition 1.4 Let G be a block graph. The following are equivalent:
(i) G is indecomposable;
(ii) if v ∈ V (G), then cdeg(v) ≠ 2;
(iii) Maxh(G) = {P∅ (G)}.
Proof.
(i) ⇔ (ii) It is trivial.
(ii) ⇒ (iii) We may suppose G connected. Since height(P∅ (G)) = n − 1, we want to prove that height(PT (G)) < n − 1 for any T ≠ ∅. Let T ∈ C(G) with height(PT (G)) ≥ n − 1, that is, c(T ) − |T | ≤ 1. If T = {v}, then c(T ) ≤ 2, or equivalently cdeg(v) ≤ 2; but v is a cutpoint, hence not a free vertex, and therefore cdeg(v) = 2, which contradicts the hypothesis. Let T = {v1 , . . . , vr }, with r ≥ 2, be such that height(PT (G)) ≥ n − 1, and suppose T is minimal with respect to this property. In a block graph, T1 = T \ {vr } is also a cutset. By definition, c(T1 ) < c(T ) and |T1 | = |T | − 1, hence c(T1 ) − |T1 | < 2 and height(PT1 (G)) ≥ n − 1, contradicting the minimality of T .
(iii) ⇒ (ii) Suppose, by contradiction, that there exists a vertex v ∈ V (G) such that cdeg(v) = 2. Let T = {v}; then height(PT (G)) = height(P∅ (G)) = n − 1, so also PT (G) ∈ Maxh(G), contradicting the hypothesis.
We observe that for a generic graph G it is not true that G being indecomposable implies cdeg(v) ≠ 2 for every v ∈ V (G): it suffices to consider G = C4 , all of whose vertices have clique degree equal to 2, although G is indecomposable. For a generic graph, indecomposability of G is also not equivalent to P∅ (G) being the prime ideal of maximum height in the primary decomposition of JG . In fact, consider G = C4 with V (G) = {1, . . . , 4} and E(G) = {{i, i + 1} | i = 1, . . . , 3} ∪ {{1, 4}}. The subset T = {1, 3} is a cutset for G and height(PT (G)) = 4, whereas height(P∅ (G)) = 3.
2 Krull dimension of binomial edge ideals of block graphs
If G is any graph with n vertices, it is well known that the Krull dimension of S/JG is given by dim(S/JG ) = max_{T ∈ C(G)} {n + c(T ) − |T |}, so that, in general, computing it requires investigating all possible cutsets of G. For some classes of graphs there is an immediate way to compute the Krull dimension. For example, if G is a complete graph, or a graph obtained by gluing complete graphs along free vertices in such a way that every vertex v ∈ V (G) is either a free vertex or has cdeg(v) = 2, then dim(S/JG ) = n + 1. For a generic block graph G we give, in Proposition 2.5, an algorithm that computes the Krull dimension of S/JG in linear time.
From now on, we consider only connected block graphs, since the Krull dimension of a graph with r connected components, G1 , . . . , Gr , is given by the
sum of the Krull dimensions of Si /JGi , with i = 1, . . . , r and Si = K[xj , yj ]j∈V (Gi ) .
Before showing the aforementioned algorithm, we need some auxiliary results.
Lemma 2.1 Let G be a block graph and v ∈ V (G). Let PT (G) ∈ Minh(G). If
v belongs to exactly two endblocks, then PT ∪{v} (G) ∈ Minh(G); if v belongs
to at least three endblocks, then v ∈ T .
Proof. Let PT (G) ∈ Minh(G) and let v belong to r endblocks, B1 , . . . , Br ,
with r ≥ 2, and let G1 , . . . , Gc be the connected components of G[n]\T , then
height(PT (G)) = n − c + |T |. Suppose that v ∉ T . Without loss of generality,
we can suppose v ∈ G1 . The connected components induced by T ∪ {v} are
B1′ , . . . , Br′ , G′1 , G2 , . . . , Gc , where Bi′ = Bi \ {v} for i = 1, . . . , r and G′1 =
G1 \ {B1 , . . . , Br }. If r ≥ 3, or r = 2 and G′1 ≠ ∅, the number of connected
components induced by T ∪ {v} is at least r + c − 1 and hence it is greater
than or equal to c + 2. Thus, height(PT ∪{v} (G)) ≤ n − (c + 2) + (|T | + 1) <
height(PT (G)), which is in contradiction with the minimality of PT (G). If
r = 2 and G′1 = ∅, then height(PT ∪{v} (G)) = height(PT (G)), and then also
PT ∪{v} (G) ∈ Minh(G).
Lemma 2.2 Let G be a block graph. There exists at least one minimal prime
ideal PT (G) ∈ Minh(G) such that T does not contain any vertices v with
cdeg(v) = 2.
Proof. Let T ∈ C(G) be such that PT (G) ∈ Minh(G). Let {v1 , . . . , vr } ⊆ T be
all the vertices in T with clique degree equal to 2. Let T ′ = T \ {v1 , . . . , vr }, we
prove that height(PT ′ (G)) = height(PT (G)), and then also PT ′ (G) ∈ Minh(G).
We use induction on r. If r = 0, there is nothing to prove. Otherwise, let T1 = T \ {vr }. By the induction hypothesis we have height(PT ′ ) = height(PT1 ), and by Lemma 1.3 we get height(PT1 ) = height(PT ), so we are done.
Remark 2.3. By Lemma 2.1 and Lemma 2.2, it follows that there exists at least
one minimal prime ideal PT (G) associated to JG of minimum height such that
T contains all the vertices that belong to at least two endblocks and if v ∈ T ,
then v belongs to at least three blocks.
Remark 2.4. If G is a block graph, and v ∈ V (G) is a vertex with cdeg(v) =
r ≥ 2, then v is a cutpoint and the number of connected components of G[n]\{v}
is exactly r.
In the following we show a linear time algorithm to obtain the Krull dimension of a block graph.
Proposition 2.5 Let G be a block graph. Then the following algorithm computes the Krull dimension of S/JG :
• Input: A block graph G with n vertices
• Output: The Krull dimension d of S/JG
1. d := n + 1;
2. LG := {G};
3. LI := {};
4. for every graph H ∈ LG
    4a. decompose H into its indecomposable subgraphs I = {G1 , . . . , Gr };
    4b. remove from I the subgraphs which are blocks;
    4c. LI := LI ∪ I;
    4d. for every graph I ∈ LI
        4e. S := {v ∈ I | v belongs to at least 2 endblocks};
        4f. for every v ∈ S
            4g. d := d + cdeg(v) − 2;
            4h. remove from I the vertices of the endblocks that contain v;
        4i. if I is not a block, then LG := LG ∪ I;
5. return d.
Proof. The aim of the algorithm is to compute the Krull dimension through the formula n + c(T ) − |T |, where T ⊆ [n] is a cutset such that PT (G) ∈ Minh(G) and T is of the form described in Remark 2.3. The algorithm works as follows. Initially T = ∅, so d = n + 1. We denote by LG the list of graphs still to be considered. Given a graph H in LG, no vertex of H with clique degree equal to 2 belongs to T , so we decompose H into its indecomposable subgraphs and collect them in LI, discarding blocks, since all the vertices of a block are free vertices and therefore do not belong to T . For every subgraph I in LI, any vertex v ∈ I that belongs to at least 2 endblocks and to at least 3 blocks must be in T , and hence we update the Krull dimension. In particular, at this step the contribution of v to the Krull dimension equals the number of connected components that it induces minus the cardinality of the corresponding cutset element, which is 1. By Remark 2.4, the number of connected components of I_{V (I)\{v}} is equal to cdeg(v); one more is subtracted because one connected component has already been accounted for in the initial value d = n + 1. Then the vertices of the endblocks that contain v are removed from I and the remaining subgraphs are added to LG. The algorithm continues on all the subgraphs in LG until LG is empty.
We have implemented the algorithm proposed in Proposition 2.5 using CoCoA [2], version 4.7, in order to compute effectively the Krull dimension of S/JG whenever G is a tree. We underline that the time complexity is linear in the number of vertices of G; in fact, a single visit of G suffices to compute the Krull dimension of S/JG .
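For small graphs the output of such a procedure can also be checked against formula (1) directly. The following Python sketch (the function names are ours, and the enumeration of all vertex subsets is exponential, so it is meant only as a sanity check and is not the linear-time algorithm of Proposition 2.5) evaluates (1) straight from the definition of a cutset.

from itertools import combinations

def count_components(vertices, edges):
    # Number of connected components of the subgraph induced on `vertices`.
    vertices = set(vertices)
    adj = {v: set() for v in vertices}
    for u, w in edges:
        if u in vertices and w in vertices:
            adj[u].add(w)
            adj[w].add(u)
    seen, components = set(), 0
    for v in vertices:
        if v not in seen:
            components += 1
            stack = [v]
            while stack:
                x = stack.pop()
                if x not in seen:
                    seen.add(x)
                    stack.extend(adj[x] - seen)
    return components

def krull_dimension_bruteforce(n, edges):
    # Evaluates formula (1): max over all cutsets T of n + c(T) - |T|.
    V = set(range(1, n + 1))
    best = 0
    for r in range(n + 1):
        for T in combinations(sorted(V), r):
            T = set(T)
            c_T = count_components(V - T, edges)
            # cutset condition: removing any i from T strictly decreases c(T)
            if all(count_components(V - (T - {i}), edges) < c_T for i in T):
                best = max(best, n + c_T - len(T))
    return best

# Example: the path 1-2-3; the cutsets are {} and {2}, giving dimension 4.
print(krull_dimension_bruteforce(3, [(1, 2), (2, 3)]))  # prints 4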
3 Regularity bounds for binomial edge ideals of block graphs
The main result of this section is the lower bound for the Castelnuovo-Mumford regularity of binomial edge ideals of block graphs (Theorem 3.6). To reach this result, we compute the regularity and the superextremal Betti numbers of special block graphs, called flower graphs (Figure 1).
Let M be a finitely generated graded S-module. A Betti number βi,i+j (M ) ≠ 0 is called extremal if βk,k+ℓ = 0 for all pairs (k, ℓ) ≠ (i, j) with k ≥ i, ℓ ≥ j. Let q = reg(M ) and p = proj dim(M ); then there exist unique numbers i and j such that βi,i+q (M ) and βp,p+j (M ) are extremal Betti numbers. We call them the distinguished extremal Betti numbers of M . Let k be the maximal integer j such that βi,j ≠ 0 for some i. It is clear that βi,k (M ) is an extremal Betti number for all i with βi,k ≠ 0, and that there is at least one such i. These Betti
numbers are distinguished by the fact that they are positioned on the diagonal {(i, k − i) | i = 0, . . . , k} in the Betti diagram, and that all Betti numbers on the lower right side of the diagonal are zero. The Betti numbers βi,k , for i = 0, . . . , k,
are called superextremal, regardless of whether they are zero or not. We refer
the reader to [6, Chapter 11] for further details.
Definition 3.1 A flower graph Fh,k (v) is a connected block graph constructed
by joining h copies of the cycle graph C3 and k copies of the bipartite graph
K1,3 with a common vertex v, where v is one of the free vertices of C3 and of
K1,3 .
We observe that any flower graph Fh,k (v) has 2h+3k+1 vertices and 3(h+k)
edges. The clique degree of v is given by h + k, and the number of inner vertices
is i(Fh,k (v)) = k + 1 and all of them are cutpoints for Fh,k (v). When it is
unnecessary to make explicit the parameters h and k, we refer to Fh,k (v) as
F (v).
Figure 1: A flower graph Fh,k (v)
Remark 3.2. Let G be a flower graph F (v). By [15, Corollary 1.5], JG = JG′ ∩ Qv , where G′ is the graph obtained from G by connecting all the vertices adjacent to v, and Qv = \bigcap_{T \in C(G),\, v \in T} P_T(G). We observe that in this case Qv = (xv , yv ) + JG′′ , where G′′ is obtained from G by removing v, and then we can write
JG = JG′ ∩ ((xv , yv ) + JG′′ )
Let G be a graph. We denote by i(G) the number of inner vertices of G
and by f (G) the number of free vertices of G. Before stating the distinguished
extremal Betti numbers of the binomial edge ideal of a flower graph, we need
the following remark.
Remark 3.3. Let G be a disconnected block graph with G1 , . . . , Gr its connected components. If all the Gj have precisely one extremal Betti number,
βnj −1,nj +i(Gj ) (Sj /JGj ), for any j = 1, . . . , r, with Sj = K[xi , yi ]i∈V (Gj ) and
nj = |V (Gj )|, then S/JG has precisely one extremal Betti number and it is
given by
\beta_{n-r,\,n+i(G)}(S/J_G) \;=\; \prod_{j=1}^{r} \beta_{n_j-1,\,n_j+i(G_j)}(S_j/J_{G_j}).
Theorem 3.4 Let G be a flower graph F (v), with cdeg(v) ≥ 3. The following are extremal Betti numbers of S/JG :
(a) βn−1,n+i(G) (S/JG ) = f (G) − 1;
(b) βn−cdeg(v)+1,n+i(G) (S/JG ) = 1.
In particular, they are the only nonzero superextremal Betti numbers.
Proof. Statement (a) is proved in [7]. As regards (b), we focus on the cutpoint v of G. Thanks to the decomposition quoted in Remark 3.2, we consider the following exact sequence
0 −→ S/JG −→ S/JG′ ⊕ S/((xv , yv ) + JG′′ ) −→ S/((xv , yv ) + JH )    (5)
where G′ is the graph obtained from G by connecting all the vertices adjacent to
v, G′′ is obtained from G by removing v and H is obtained from G′ by removing
v. We observe that G′ and H are block graphs satisfying [7, Theorem 2.4 (b)],
with i(G′ ) = i(H) = i(G) − 1, and then reg(S/JG′ ) = reg(S/((xv , yv ) + JH )) =
i(G). The graph G′′ has cdeg(v) connected components G1 , . . . , Gcdeg(v) : all of
them are K2 or paths P2 of length 2. The latter are decomposable into two K2
and it holds reg(S ′ /JP2 ) = 2 = i(P2 ) + 1, with S ′ = K[xi , yi ]i∈V (P2 ) . Then, by
[7, Theorem 2.4 (b)] and since the ring S/((xv , yv ) + JG′′ ) is the tensor product
of Sj /JGj , with j = 1, . . . , cdeg(v) and Sj = K[xi , yi ]i∈V (Gj ) , we have
\operatorname{reg} \frac{S}{(x_v , y_v ) + J_{G''}} \;=\; \sum_{j=1}^{\operatorname{cdeg}(v)} \operatorname{reg} \frac{S_j}{J_{G_j}} \;=\; \sum_{j=1}^{\operatorname{cdeg}(v)} \bigl( i(G_j) + 1 \bigr) \;=\; i(G) - 1 + \operatorname{cdeg}(v).
We get the following bound on the regularity of S/JG :
\operatorname{reg}(S/J_G) \;\le\; \max\Bigl\{ \operatorname{reg} \frac{S}{J_{G'}},\; \operatorname{reg} \frac{S}{(x_v , y_v ) + J_{G''}},\; \operatorname{reg} \frac{S}{(x_v , y_v ) + J_{H}} + 1 \Bigr\} \;=\; \max\{\, i(G),\; i(G) - 1 + \operatorname{cdeg}(v),\; i(G) + 1 \,\} \;=\; i(G) - 1 + \operatorname{cdeg}(v).
By [4, Theorem 1.1], the depth of S/JG for any block graph G over [n] is equal
to n+c, where c is the number of connected components of G. Then, we know the
depth of all quotient rings involved in (5), and by the Auslander-Buchsbaum formula
we get proj dim S/JG = proj dim S/JG′ = n − 1, proj dim S/((xv , yv ) + JH ) = n,
and proj dim S/((xv , yv ) + JG′′ ) = n − cdeg(v) + 1.
Let j > i(G). Then
T_{m,m+j}(S/J_{G'}) \;=\; T_{m,m+j}\bigl(S/((x_v , y_v ) + J_H)\bigr) \;=\; 0 \quad \text{for any } m,
and
T_{m,m+j}\bigl(S/((x_v , y_v ) + J_{G''})\bigr) \;=\; 0 \quad \text{for any } m > n - \operatorname{cdeg}(v) + 1,
where T^S_{m,m+j}(M ) stands for Tor^S_{m,m+j}(M, K) for any S-module M , and S is omitted if it is clear from the context. Of course, all the above Tm,m+j (−) are zero when j > i(G) − 1 + cdeg(v).
Therefore, for m = n − cdeg(v) + 1 and j = i(G) − 1 + cdeg(v) we obtain
the following long exact sequence
· · · → Tm+1,m+1+(j−1) (S/((xv , yv ) + JH )) → Tm,m+j (S/JG ) →
Tm,m+j (S/JG′ ) ⊕ Tm,m+j (S/((xv , yv ) + JG′′ )) → Tm,m+j (S/((xv , yv ) + JH )) → · · ·
In view of the above, all the functors on the left of Tm,m+j (S/JG ) in the long
exact sequence are zero, and Tm,m+j (S/JG′ ) = Tm,m+j (S/((xv , yv ) + JH )) = 0
too. It follows
T_{m,m+j}(S/J_G) \;\cong\; T_{m,m+j}\bigl(S/((x_v , y_v ) + J_{G''})\bigr).
It means that βn−cdeg(v)+1,n+i(G) (S/JG ) = βn−cdeg(v)+1,n+i(G) (S/((xv , yv ) + JG′′ )). We observe that
T^S_{m,m+j}\bigl(S/((x_v , y_v ) + J_{G''})\bigr) \;\cong\; T^{S'}_{m-2,\,m-2+j}(S'/J_{G''}),
where S′ = S/(xv , yv ). Since every connected component G1 , . . . , Gcdeg(v) of G′′ is either a K2 or a path of length 2, each of them has a unique extremal Betti number βnj −1,nj +i(Gj ) (Sj /JGj ), which is equal to 1, where nj = |V (Gj )|. Therefore, by Remark 3.3, we have
\beta_{m-2,\,m-2+j}(S'/J_{G''}) \;=\; \prod_{j=1}^{\operatorname{cdeg}(v)} \beta_{n_j-1,\,n_j+i(G_j)}(S_j/J_{G_j}) \;=\; 1.
Observe that for m = n − cdeg(v) + 1 and j = i(G) − 1 + cdeg(v) we get that m + j = n + i(G) is the maximal integer such that βi,m+j (S/JG ) ≠ 0 for some i. We want to prove that βi,n+i(G) ≠ 0 only for i = n − cdeg(v) + 1 and i = n − 1. Let i be an integer such that βi,n+i(G) ≠ 0. Since proj dim(S/JG ) = n − 1 and reg(S/JG ) ≤ i(G) + cdeg(v) − 1, we only have to examine n − cdeg(v) + 1 ≤ i ≤ n − 1.
Consider the following long exact sequence
\cdots \to T_{i+1,\,n+i(G)}\Bigl(\frac{S}{(x_v , y_v ) + J_H}\Bigr) \to T_{i,\,n+i(G)}\Bigl(\frac{S}{J_G}\Bigr) \to T_{i,\,n+i(G)}\Bigl(\frac{S}{J_{G'}}\Bigr) \oplus T_{i,\,n+i(G)}\Bigl(\frac{S}{(x_v , y_v ) + J_{G''}}\Bigr) \to T_{i,\,n+i(G)}\Bigl(\frac{S}{(x_v , y_v ) + J_H}\Bigr) \to \cdots
If n − cdeg(v) + 1 < i < n − 1, since i > proj dim(S/((xv , yv ) + JG′′ )) and
n + i(G) − i > reg(S/JG′ ), reg(S/((xv , yv ) + JH )), it holds
Tori,n+i(G) (M ) = 0, for M ∈ {S/JG′ , S/((xv , yv ) + JG′′ ), S/((xv , yv ) + JH )},
and then we can conclude that also Tori,n+i(G) (S/JG ) = 0.
An immediate consequence of the proof of Theorem 3.4 is the regularity of any flower graph F (v) with cdeg(v) ≥ 3, which depends only on the clique degree of v and on the number of inner vertices. We observe that if cdeg(v) = 2, then the flower graph is decomposable into G1 ∪ G2 , where each Gi , i = 1, 2, is a K2 or a K1,3 , and a simple computation shows that also in this case reg(S/JF (v) ) = i(F (v)) + cdeg(v) − 1.
Corollary 3.5 Let F (v) be a flower graph, then
reg(S/JF (v) ) = i(F (v)) + cdeg(v) − 1.
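As a concrete instance of Corollary 3.5 (the values of h and k are chosen only for illustration): for the flower graph F2,1 (v) one has i(F2,1 (v)) = k + 1 = 2 and cdeg(v) = h + k = 3, hence

\operatorname{reg}\bigl(S/J_{F_{2,1}(v)}\bigr) \;=\; i\bigl(F_{2,1}(v)\bigr) + \operatorname{cdeg}(v) - 1 \;=\; 2 + 3 - 1 \;=\; 4.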
If F (v) is an induced subgraph of a block graph G, we denote by cdegF (v)
the clique degree of v in F (v). Note that if F (v) is the maximal flower induced subgraph of G and all the blocks of G containing v are C3 or K1,3 , then
cdegF (v) = cdeg(v), otherwise cdegF (v) < cdeg(v).
Theorem 3.6 Let G be an indecomposable block graph and let F (v) be an
induced subgraph of G. Then
reg(S/JG ) ≥ i(G) + cdegF (v) − 1.
Proof. We use induction on the number of blocks of G that are not in F (v). If G = F (v), the statement follows from Corollary 3.5. Suppose now that G properly contains F (v) as an induced subgraph. Since G is connected, there exist an endblock B of G and a subgraph G′ of G such that G = G′ ∪ B, G′ contains F (v) as an induced subgraph, V (G′ ) ∩ V (B) = {w}, and all the blocks containing w are endblocks, except for one, namely the one contained in G′ . The quantity cdegF (v) does not change, and since G is assumed indecomposable, cdeg(w) ≥ 3. If cdeg(w) = 3, then G′ is decomposable into G1 ∪ G2 , and reg(S/JG′ ) = reg(S/JG1 ) + reg(S/JG2 ). We may suppose that G1 contains F (v); then i(G1 ) = i(G) − 1, while cdegF (v) stays the same, and G2 is a block, so reg(S/JG2 ) = 1. By induction we may assume that reg(S/JG1 ) ≥ i(G) + cdegF (v) − 2. Therefore, reg(S/JG′ ) = reg(S/JG1 ) + reg(S/JG2 ) ≥ i(G) + cdegF (v) − 1.
If cdeg(w) > 3, then i(G′ ) = i(G) and cdegF (v) is still the same. Then, by induction on the number of blocks of G, we may assume reg(S/JG′ ) ≥ i(G) + cdegF (v) − 1. By [12, Corollary 2.2] of Matsuda and Murai, one has that
reg(S/JG ) ≥ reg(S/JG′ ),
and then reg(S/JG ) ≥ i(G) + cdegF (v) − 1, as desired.
Putting together [7, Theorem 2.4] and the above theorem, we finally obtain a bound on the regularity of the binomial edge ideal of any block graph.
Corollary 3.7 Let G be a block graph.
(i) If G does not contain a flower graph as induced subgraph, then reg(S/JG ) =
i(G) + 1.
(ii) If G contains r flower graphs F1 (v1 ), . . . , Fr (vr ) as induced subgraphs, then reg(S/JG ) ≥ i(G) + max_{i=1,...,r} {cdegFi (vi )} − 1.
References
[1] F. Chaudhry, A. Dokuyucu, R. Irfan, On the binomial edge ideals of block
graphs, An. Şt. Univ. Ovidius Constanta, Vol. 24, 2016, 149-158.
[2] CoCoATeam, CoCoA: a system for doing Computations in Commutative
Algebra, Available at http://cocoa.dima.unige.it
[3] J. Herzog, T. Hibi, F. Hreinsdottir, T. Kahle, J. Rauh, Binomial edge
ideals and conditional independence statements, Adv. in Appl. Math., Vol.
45, 2010, pp. 317–333.
[4] V. Ene, J. Herzog, T. Hibi, Cohen-Macaulay binomial edge ideals, Nagoya
Math. J., Vol. 204, 2011, pp. 57–68.
[5] V. Ene, A. Zarojanu. On the regularity of binomial edge ideals, Math.
Nachrichten Vol. 288, 2015, pp. 19–24.
[6] J. Herzog, T. Hibi, Monomial Ideals, Grad. Texts in Math. 260, Springer,
London, 2010.
[7] J. Herzog, G. Rinaldo, On the extremal Betti numbers of binomial edge ideals of block graphs, arXiv:1802.06020, 2018, 1–9.
[8] A. V. Jayanthan, N. Narayanan, and B. V. Raghavendra Rao, Regularity of
binomial edge ideals of certain block graphs, arXiv:1601.01086, 2017, 1–9.
[9] D. Kiani, S. Saeedi Madani, The Castelnuovo-Mumford regularity of binomial edge ideals, Journal of Combinatorial Theory, Series A, Vol. 139, 2016,
pp. 80–86.
[10] S. Saeedi Madani, D. Kiani, Binomial edge ideals of graphs, Electron. J.
Combin. Vol. 19, 2012, Paper #P44.
[11] S. Saeedi Madani, D. Kiani, Binomial edge ideals of regularity 3, arXiv:1706.09002, 2017.
[12] K. Matsuda, S. Murai, Regularity bounds for binomial edge ideals, Journal
of Commutative Algebra. Vol. 5, 2013, pp. 141–149.
[13] M. Ohtani, Graphs and ideals generated by some 2-minors, Comm. Algebra,
Vol. 39, 2011, pp. 905–917.
[14] G. Rinaldo, Cohen-Macaulay binomial edge ideals of small deviation, Bull.
Math. Soc. Sci. Math. Roumanie, Tome 56(104) No. 4, 2013, 497–503.
[15] G. Rinaldo, Cohen-Macaulay binomial edge ideals of cactus graphs,
arXiv:1704.07106 , 2017, 1–17.
An Order Optimal Policy for Exploiting Idle
Spectrum in Cognitive Radio Networks
Jan Oksanen, Student Member, IEEE and Visa Koivunen, Fellow, IEEE
Abstract—In this paper a spectrum sensing policy employing
recency-based exploration is proposed for cognitive radio networks. We formulate the problem of finding a spectrum sensing
policy for multi-band dynamic spectrum access as a stochastic
restless multi-armed bandit problem with stationary unknown
reward distributions. In cognitive radio networks the multi-armed bandit problem arises when deciding where in the radio
spectrum to look for idle frequencies that could be efficiently
exploited for data transmission. We consider two models for
the dynamics of the frequency bands: 1) the independent model
where the state of the band evolves randomly independently from
the past and 2) the Gilbert-Elliot model, where the states evolve
according to a 2-state Markov chain. It is shown that in these
conditions the proposed sensing policy attains asymptotically
logarithmic weak regret. The policy proposed in this paper is an
index policy, in which the index of a frequency band is comprised
of a sample mean term and a recency-based exploration bonus
term. The sample mean promotes spectrum exploitation whereas
the exploration bonus encourages further exploration for
idle bands providing high data rates. The proposed recency
based approach readily allows constructing the exploration bonus
such that it will grow the time interval between consecutive
sensing time instants of a suboptimal band exponentially, which
then leads to logarithmically increasing weak regret. Simulation
results confirming logarithmic weak regret are presented and
it is found that the proposed policy often provides improved performance at low complexity over other state-of-the-art policies
in the literature.
Index Terms—Cognitive radio, opportunistic spectrum access
(OSA), restless multi-armed bandit (RMAB), online learning,
multi-band spectrum sensing
I. INTRODUCTION
When looking at the radio spectrum allocation tables one
might come to the discouraging conclusion that there are
very few radio resources available for new wireless systems
and services. However, a very different conclusion is reached
when one actually measures the true utilization of the radio
spectrum at a particular location and time. In fact it has been
demonstrated by measurement campaigns (e.g. [1]) that many
parts of the spectrum are heavily underutilized. Cognitive
radio (CR) is a technology that holds promise for more
efficient use of such underutilized radio spectrum. A cognitive
radio network consists of secondary users (SUs) that sense
the spectrum for idle frequencies that they could use for
transmission in an agile manner. When the SUs sense that
a part of the spectrum allocated to the primary users (PUs) is idle, the SUs may use those frequencies for data transmission.
The authors are with the Department of Signal Processing and Acoustics, Aalto University, FI-00076 AALTO, Finland (e-mail: jan.oksanen@aalto.fi, visa.koivunen@aalto.fi).
When the PUs become active again the SUs need to be able to
detect the primary signal and vacate the band and search for
idle frequencies elsewhere in the spectrum. For an up-to-date
and extensive review of different spectrum sensing techniques
as well as various exploration and exploitation schemes for
CR, see, for example [2], [3].
Depending on the location and the spectral allocation of the
PUs, some frequency bands are idle more often than others.
Some bands may have higher bandwidths and experience less
interference, and thus potentially support higher data rates. Naturally, it is desirable that the CRs focus on sensing those bands
that are persistently available and have a large bandwidth, i.e.,
bands that are expected to provide high data rates. However,
since the expected data rates are in practice unknown, the
CR needs to learn them. This learning problem resembles a
variation of the multi-armed bandit (MAB) [4] problem where
the objective is to learn and identify the subbands that provide
the highest data rates while not wasting too much time on
sensing frequencies with low data rates. In machine learning
this is also referred to as the exploration-exploitation tradeoff
problem [5].
In the classical MAB problem a player is faced with a number of slot machines (one-armed bandits), each of which produces an unknown expected reward. The player's goal is to collect as much reward as possible over time, i.e., to identify the machine with the highest expected reward as fast as possible. The dynamic rule governing the selection of the machine to be played is called a policy. In cognitive radio the analogous counterpart
of a slot machine is a frequency band suitable for wireless
transmissions whereas a reward corresponds to an achieved
data rate when the secondary user accesses an idle band. A
good recent review of the classical bandit problem and its
variations is given in [4].
The general description of the MAB problem given above
contains a rich blend of variations with different assumptions
and solutions. The MAB problems in the literature may be
broadly categorized into problems with independent rewards
and problems with Markovian rewards. These two categories
may be further subdivided into rested and restless bandit
problems and furthermore into problems with known statistics
and problems with unknown statistics. In the independent
MAB the rewards of each machine are generated by a time
independent random process whereas in the Markovian MAB
the rewards are generated by a Markov chain. The difference
between the rested MAB and the restless MAB (RMAB) is that
in the rested MAB the state of the machine can change only
when the machine is played, while in the restless MAB the
state of the machine keeps evolving regardless of whether it is
played or not. The restlessness of the MAB has implications
only in the Markov case but not in the independent reward
case, since independent rewards naturally do not depend on
the past rewards nor the players actions.
The optimal strategy for the Markovian rested MAB with
known statistics is the so called Gittins index policy [6]. The
MAB with Markovian restless rewards with known statistics
is in general PSPACE-hard [7], but for a particular relaxed
version of the problem the optimal solution has been provided
by Whittle [8]. The optimal policy for the rested and restless
MAB problem with time independent rewards is obviously to
always play the machine with the highest expected reward. For
the restless Markovian MAB problem with unknown statistics
the optimal policy is generally not known.
In this paper we are interested in the RMAB problem with
unknown statistics - a problem that stems from multi-channel
dynamic spectrum access in cognitive radio. In particular
we consider the case where the rewards are independent in
time and the case where they evolve according to a 2-state
Markov chain (i.e. the Gilbert-Elliot model). The 2-state model
captures the fact that the spectrum is either idle or occupied,
hence it is particularly suitable for the CR problem. In the
RMAB problem the states of the non-activated machines may
change, similarly as the state of the spectrum band may change
regardless of whether it is sensed or not. Hence the RMAB
is a suitable model for the dynamic spectrum access problem.
Depending for example on the lengths of the time slots of
the SU and the PU either the independent reward model or
the Markovian reward model may be more appropriate. For
example, if the operational time slot of the secondary user
is much smaller than the time slot of the primary user the
Markovian model may be more suitable. This is because in
that case the PUs consecutive actions may be correlated from
the SUs point of view. On the other hand if the SU has a much
larger time slot than the primary user the independent reward
model can be more appropriate. In this paper we cover both
independent and Markovian reward cases.
In the Markov case, since computing the optimal policy is
in general PSPACE-hard [7], a weaker notion of optimality has
been used in the literature called the best single arm policy [9],
[10], [11]. The best single arm policy is defined as the policy
that produces the highest cumulative reward by always playing
only one arm, which is the arm with the highest stationary
mean reward. Note that in the i.i.d. MAB the best single arm
policy is also the optimal policy. In the rest of this paper the
term optimal policy always refers to the best single arm policy.
The success of a policy is measured by its expected weak regret, which is the difference between the expected total payoff using that policy and the total payoff expected when the best single arm policy is used. In [12] it was shown that when the rewards are independent, the weak regret of any policy is asymptotically lower bounded by a function that grows logarithmically in time. Consequently, policies achieving logarithmic weak regret are called order optimal.
This paper proposes a sensing policy that is asymptotically
order optimal when the rewards are independent. Furthermore, it is shown that asymptotically logarithmic expected
weak regret is achieved when the rewards are restless and
follow a Markov chain with two states (i.e. the Gilbert-Elliot
model). The proposed policy is an index policy consisting
of a sample mean term and an exploration bonus term. The
sample mean promotes exploitation whereas the exploration
bonus encourages for exploration. The exploration bonus in
this paper is based on recency, i.e., it promotes exploring such
bands that have not been sensed for a long time. The higher
the exploration bonus of a particular band, the more likely the
band will be explored in the near future. The exploration bonus
is designed such that asymptotically the time difference of two
consecutive sensing time instants on a suboptimal band grows
exponentially, which also provides an effortless intuition for
logarithmic weak regret. In this paper we in fact show that the
proposed policies achieve asymptotic logarithmic weak regret
and demonstrate by simulations that they can often outperform
other state-of-the-art policies or obtain equal performance with
reduced complexity. Asymptotic results are typically achieved
with finite sample size. However, the point at which the
asymptotic result begins to hold need to be determined by
simulations. Another advantage of the recency based policies
proposed in this paper is that the tradeoff between exploration
and exploitation becomes asymptotically deterministic. This
allows for simplifying the proposed policy in a practical implementation. For instance, in centralized cooperative spectrum
sensing, where a fusion center (FC) maintains and runs the
sensing policy on behalf of the (possibly unintelligent) SUs,
one would like to minimize the amount of control information
transmitted between the FC and the SUs. After a sufficient
number of sensings the exploration and exploitation time
instances have practically become deterministic. Then the FC
does not need to instruct the SUs at every time instant which
band to sense. Instead, the FC needs to communicate to the
SUs only at those time instants when the sensed band changes.
This could significantly reduce the amount of control traffic
the FC needs to transmit. However, we leave these kinds of
developments and quantitative results for future studies.
The proposed recency-based policy may find applications
also outside the spectrum sensing context. These possible
areas of application are (but not limited to) adaptive clinical
trials, webpage content experiments, internet advertising, game
playing and learning online the shortest path in a graph with
stochastic edge weights (see e.g. [13] and references therein).
A. Contributions and structure of the paper
Some preliminary ideas and results of this paper were
published by the authors in [14], where a special case of
the sensing policy proposed in this paper was developed. The
contributions of this paper are the following:
• We generalise the idea of recency-based exploration in the RMAB formulation of multi-band spectrum sensing and show how to find order optimal sensing policies for different reward (data rate) distribution families. In particular we find order optimal policies for bounded i.i.d. rewards and for rewards generated by the Gilbert-Elliot model.
• It is shown that the performance of the proposed sensing policy can be enhanced when the type of the reward distribution is known, e.g., when the rewards are i.i.d. Bernoulli, by simply modifying a constant in the exploration bonus.
• A nontrivial analysis of the expected weak regret of the proposed policy for i.i.d. and Markovian rewards is provided and the weak regret is shown to be asymptotically logarithmic.
• We present extensive computer simulations demonstrating logarithmic weak regret of the proposed policy and demonstrate that the policy often outperforms other state-of-the-art policies or achieves equal performance at significantly lower computational cost.
The rest of the paper is organized as follows. In Section II
we give an overview of the related work in the area of MAB
problems and sensing policies in dynamic spectrum access.
In Section III we express mathematically using the RMAB
formulation the problem of finding a spectrum sensing policy.
In Section IV we propose the spectrum sensing policy based on the RMAB formulation for both i.i.d. and Markovian data rates (rewards). In Section V we show how to optimize the
exploration bonus for particular reward distribution classes.
Section VI illustrates the performance of the policy and
verifies the analytical results using simulation examples. The
paper is concluded in Section VII.
II. RELATED WORK
Since the seminal paper by Lai and Robbins [12], much
of the work on the stochastic MAB problems has focused
on index policies with low expected weak regret and low
computational complexity. Many of these policies are built on
a principle called “optimism in the face of uncertainty”. This
principle states that an agent (learner) should stay optimistic
about actions whose exact expected response is uncertain. Policies based on this principle may be categorized in two groups:
optimistic initial value policies [5], [15] and exploration bonus
policies [12], [14], [15], [16], [17], [18].
In optimistic initial value policies the value of an action
is initialized with a high bias in order to guarantee sufficient
amount of exploration in the beginning of the learning process.
In [15] it was shown that setting the initial value sufficiently
high guarantees convergence to an ε-optimal policy. However,
selecting high enough initial values that lead also to a good
finite time performance is not a trivial task, which makes these
policies impractical for the purposes of this paper.
Policies based on exploration bonuses assign a bonus for
the actions based on, for example confidence, frequency or
recency. Confidence based policies [16], [12], [17] evoke
optimism through the use of an optimistic upper confidence
bound on the expected reward estimate of the actions which effectively makes insufficiently explored actions more attractive
for the agent. Frequency based policies assign bonuses to the
actions based on the number of times they have been taken. In
this regard most confidence based policies, such as the UCB by
[16], may also be seen to be frequency based since the value of
the confidence bound is inversely proportional to the number
of times the action has been taken. Recency-based policies,
such as the one proposed in this paper, promote exploration in
proportion to the time that has passed since the action was last
tried. As a consequence, actions that have not been taken for
a long time will be chosen more likely in the near future. The
rate at which exploration is promoted is gradually decreased in
time in order to guarantee convergence to the optimal action.
To the best of our knowledge, this paper is the first to develop
and analyze recency-based exploration in the bandit setting.
For the classical stochastic MAB problem with independent
rewards it was shown in [12] (and later generalized in [19])
that asymptotically order optimal policies have an expected weak regret that grows logarithmically in time. In [20] this lower bound was further
generalized for the case where the rewards are at rest and
evolve according to an irreducible and aperiodic Markov chain.
However, for general restless Markovian rewards (apart from
the special case of i.i.d. rewards, such as Bernoulli) theoretical
lower bounds on the weak regret have not been reported in the
literature.
In [12] a class of confidence bound based policies that
achieve asymptotical logarithmic weak regret was presented
for the MAB problem. However, these policies require storing
the entire history of the observed rewards, which makes
their implementation impractical. The recency based policies
proposed in this paper require storing only a sample mean term
and an exploration bonus term for each of the frequency bands.
In [21], a class of policies based on sample means and upper
confidence bounds was proposed. These policies were simpler
compared to those in [12]. However, the policies in [21]
are distribution dependent and deriving the upper confidence
bounds in a closed form is often tedious. Among the most
celebrated bandit papers is [16], where a computationally
simple upper confidence bound (UCB) policy was proposed
and shown to be uniformly order optimal when the rewards
are independent and have a bounded support. This policy
was further developed in [22] (see Theorem 2.2 therein) by
improving a constant in the UCB policy. The recency based
policy proposed in this paper has similar desirable properties
as the UCB policy in terms of its simple implementation.
Additionally, the recency based policy has an intuitive explanation for its asymptotically logarithmic weak regret. In
the UCB policy exploration bonus is based on confidence,
whereas in this paper the policies are based on recency. In
addition to this fundamental difference, we have observed in
our simulations that the policy proposed in this paper often
achieves lower weak regret than the UCB. Recently the KL-UCB policy was proposed in [23]. It is an asymptotically
optimal policy (i.e. it achieves the lower bound of [12]) for
bounded i.i.d. rewards whose distributions are known except
for their parameterization. The KL-UCB was analytically
shown to have uniformly lower expected weak regret than the
UCB when the rewards are Bernoulli. Therefore, we compare
the policy proposed in this paper to the KL-UCB policy instead
of the UCB. However, the KL-UCB is computationally more
expensive than the policy proposed in this paper (as well as
the UCB) as it requires solving a constrained optimization
problem using dichotomic search or Newton's iterations. For example, the average time complexity of binary search is logarithmic in the size of the search grid that essentially sets the accuracy of the optimal upper confidence bound. In the proposed sensing policy no such search steps
are required. Interestingly, although being much simpler, our
simulations with independent rewards show that the proposed
recency based policy performs equally well or close to the
KL-UCB in many scenarios.
CR spectrum sensing policies stemming from the RMAB
formulation have been proposed for example in [9], [10],
[11], [24], [25], [26]. Among them [9] and [10] are the most
relevant ones for this work since they also model the sensing
problem as an RMAB problem with unknown statistics and use
the weak regret as the performance measure. In [9] a policy
achieving uniformly logarithmic weak regret in the RMAB
problem with unknown Markovian rewards was proposed
for centralized and decentralized CR networks. The policy
proposed in [9] operates in regenerative cycles that cleverly
allow for using a UCB-type index policy. In this paper we also employ regenerative cycles when learning the stationary expectations of the Gilbert-Elliot channel in Section IV-B. The
policy in [9], however, discards all observations made outside
the regenerative cycles making it an inefficient learner in some
cases. In this paper however all collected observations are
used for learning. In [10] a policy based on deterministic
sequencing of exploration and exploitation (DSEE) epochs
was proposed for the RMAB problem and shown to achieve
uniform logarithmic weak regret in centralized and distributed
channel sensing. The principle used in the policy proposed
in this paper, which is to grow the periods of exploitation
exponentially in time, echoes similar ideas as the policy in
[10]. However, the policy proposed in this paper is an index
policy whereas the DSEE policy operates explicitly in epochs
of exploration and exploitation by maintaining a dynamic
list of frequency bands to be either explored or exploited.
The policy proposed in this paper has a simple index form.
Moreover, our simulation results demonstrate that also a better
performance is often obtained by using the proposed policy.
The simulations also indicate that the deterministic nature of
the DSEE exploration epochs occasionally results in sudden
increases in the weak regret, which do not occur in the policy
proposed in this paper. In [24] it was shown that under the
assumption that the channels are identical and independent
2-state Markovian channels whose signs of correlation are
known, the RMAB problem can be solved with a simple myopic sensing policy. However, in practice the channels would
rarely obey the same statistics (e.g. radios are in different
locations, scattering environments and experience different
SINR and are mobile). Furthermore, the sign of correlation
of the channels is usually not known a priori. In our paper
the statistics of the underlying rewards are not assumed to be
known. In [25] the problem of finding optimal sensing policy
was cast as a partially observable Markov decision process
(POMDP) with unknown channels’ state transition probabilities. The proposed algorithm works by estimating the transition
probabilities during exploration phases and then mapping the
obtained estimates into so-called policy zones for which the
optimal policies have been precomputed. However, apart from
a few special cases, it is not always possible to precompute
the optimal policies for a general multi band spectrum sensing
scenario. We mention [11] and [26] here as possible interesting
directions for future studies where the recency-based policies
proposed in this paper could be applied assuming either side
information or different optimization goals. In [11] the authors
formulate the learning problem of joint sensing and access
as an RMAB problem with side observations and assume that the SUs' sensing performance (detection probability and false
alarm probability) is known. The paper then proposes using an
UCB-type policy for solving the problem. Another interesting
application of RMAB formulation in CR was given in [26],
where the authors propose a PAC learning algorithm that
determines the amount of exploration needed in RMAB in
order to balance between the energy consumption of sensing
plus probing and access.
III. PROBLEM FORMULATION
A. System model
At each time instant the CR network senses (and possibly accesses) one frequency band and observes an achieved data rate (reward) with an unknown mean. In this paper the data rates are assumed to be either i.i.d. in time or to evolve as a stationary 2-state Markov process. The rates are assumed to be bounded, which can be achieved by normalizing the true data rates (bits/s) with the highest Shannon capacity among the bands. Also, it is assumed
that the SUs have a way of estimating and feeding back the
achieved data rates to a central node, e.g. a fusion center that
runs and maintains the sensing policy for the whole network
or for a small subnetwork. All bands are assumed to evolve
independently from each other.
B. Objective
The objective of this paper is to develop a simple sensing
policy for the CR that achieves an order optimal tradeoff between exploration and exploitation. Quantitatively, the success of a policy can be measured by its expected weak regret. The weak regret of a policy is defined as the difference between the total payoff achieved by the policy and the total payoff achievable by the optimal single arm policy. Mathematically this can be expressed as
(1)
where the weak regret up to a given time instant is the sum, over the frequency bands, of the optimality gap of each band multiplied by the expected number of times that band has been sensed up to that time under the policy. In order to simplify the notation, the policy superscript will be dropped for the rest of the paper.
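In the usual bandit notation (the symbols below are chosen here only for illustration, since the surrounding text does not fix them), the decomposition in (1) takes the standard form

R^{\pi}(n) \;=\; \sum_{i} \Delta_i \, \mathbb{E}\bigl[T_i^{\pi}(n)\bigr], \qquad \Delta_i = \mu^{*} - \mu_i ,

where n is the time index, \mu_i is the mean reward of band i, \mu^{*} is the largest mean, \Delta_i is the optimality gap of band i, and T_i^{\pi}(n) is the number of times band i has been sensed up to time n under policy \pi.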
C. Discussion on the System Model
In practice the notion of the single best frequency band
may be ambiguous. Since the SUs may be scattered in space
they experience different channel fading and consequently
obtain different data rates in different locations. For the same
reason, the probabilities of detection and false alarm may
be different at different locations. Taking these factors into
account the optimal sensing policy becomes a function of
the access policy (access policy tells who will get access to
the possibly idle band) and the employed sensing scheme.
Such joint optimization of sensing and access in CR has
been considered for example in [11], [27], [28]. Also, in
practice the rewards from different bands are not necessarily
independent since high power primary transmissions (such as
TV-transmission) may cause out-of-band interference to the
neighboring bands.
In many potential CR settings, the data rates (rewards) are
non-stationary. For example the obtained throughputs depend
on the amount of traffic in the primary network that may vary
between peak and off-peak hours. Also the time-frequency-location varying nature of the wireless channels and user mobility will in practice cause the secondary users' data rates to
be non-stationary. In these situations a sensing policy assuming
stationarity has to be occasionally restarted afresh. Alternatively, in exploration bonus based policies the exploration
bonus could be tuned so that after a fixed period exploration
becomes more attractive again. In this paper, however, we
concentrate on the stationary problem alone.
IV. THE PROPOSED POLICY
Fig. 2. The exploration bonus of a suboptimal channel as a function of time
in a two frequency band scenario for
. In this
example the difference between the expected rewards of the optimal band and
the suboptimal band is
. The time interval between two
consecutive sensing time instants (the zeroes) of the suboptimal band tends
to grow exponentially in time. In other words it means that the lengths of the
exploitation epochs grow exponentially. Since the time intervals between two
consecutive sensings of a suboptimal channel grow exponentially it means
that the total number of sensings must grow logarithmically in time.
A. Policy for i.i.d. rewards
In this section we propose a spectrum sensing policy for CR when the data rates (rewards) are bounded and i.i.d. in time. The proposed sensing policy is an index policy that contains an exploitation promoting sample mean term and an exploration promoting bonus term. Exploration bonuses are awarded to the bands according to recency, such that bands that have not been sensed for a long time get a higher bonus and consequently are more likely to be sensed in the near future. The proposed sensing policy is detailed in Fig. 1 and equation (2).
Initialization: Sense each band once.
Loop: Sense the band with the highest index, where the index is computed according to (2).
Fig. 1. The proposed sensing policy for i.i.d. rewards.
The index of a band at a given time instant is given as
(2)
where the first term is the sample mean of the rewards observed from the band and the second term is the exploration bonus, which depends on the last time instant when the band was sensed. The sample mean is computed from the rewards collected at the past sensing time instants of the band up to the current time. The exploration bonus is a concave, strictly increasing and unbounded function of the time elapsed since the band was last sensed, and it equals zero at the moment the band is sensed.
Fig. 2 shows the exploration bonus of a suboptimal band as a function of time when using it in the proposed sensing policy. It can be seen that the zeroes of the exploration bonus indicate the sensing time instants of the subband and that the time instants when the subband is sensed tend to grow exponentially as sensing focuses on the band giving the highest rewards.
The effect of the choice of the exploration bonus on the weak regret becomes now intuitive. Employing a bonus that increases fast from 0 means that asymptotically all bands (including the suboptimal ones) will be sensed more often compared to a choice of bonus that increases slowly. A fast growing bonus will lead to aggressive exploration whereas a slow growing bonus leads to aggressive exploitation. With aggressive exploration the policy's asymptotic weak regret can be reached fast but its value will be high, whereas with aggressive exploitation the convergence to the asymptotic regret will be slow but the regret itself will be small. This trade-off is dealt with in more detail in Section V.
The asymptotic weak regret of the policy in Fig. 1 is summarized in Theorem 1.
Theorem 1. The asymptotic weak regret of the policy in Fig. 1 when the rewards are i.i.d. is logarithmic, i.e., (3)
Proof: See Appendix A.
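To make the structure of the index concrete, the following Python sketch simulates the policy of Fig. 1 for i.i.d. Bernoulli rewards. The bonus function used here, f(x) = sqrt(log(1 + x)), is only an assumption chosen because it is concave, strictly increasing, unbounded and zero at x = 0; it is not claimed to be the exact bonus analysed in the paper, and the band means are illustrative.

import math
import random

def recency_bonus(elapsed):
    # Assumed bonus: concave, strictly increasing, unbounded, and 0 at elapsed = 0.
    return math.sqrt(math.log1p(elapsed))

def run_policy(mean_rates, horizon, rng=random):
    # Simulate the index policy of Fig. 1 with i.i.d. Bernoulli(mean_rates[i]) rewards.
    n_bands = len(mean_rates)
    sums = [0.0] * n_bands     # cumulative reward per band
    counts = [0] * n_bands     # number of times each band has been sensed
    last = [0] * n_bands       # last time instant each band was sensed
    total_reward = 0.0
    for t in range(1, horizon + 1):
        if t <= n_bands:
            band = t - 1       # initialization: sense each band once
        else:
            band = max(range(n_bands),
                       key=lambda i: sums[i] / counts[i] + recency_bonus(t - last[i]))
        reward = 1.0 if rng.random() < mean_rates[band] else 0.0
        sums[band] += reward
        counts[band] += 1
        last[band] = t
        total_reward += reward
    return total_reward, counts

random.seed(0)
print(run_policy([0.2, 0.5, 0.8], 10000))  # sensing concentrates on the band with mean 0.8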
B. Markovian rewards
In practice the assumption that the state of a frequency
band evolves independently in time may not be always valid.
Following the line of [11], [24], [29], [30] we model the
evolution of the state of the spectrum band with a 2-state
Markov chain (see Fig. 3), also known as the Gilbert-Elliot
model. We propose a small modification for the sensing policy
in Fig. 1 so that provably asymptotically logarithmic weak
regret will be attained. The condition for the logarithmic weak
regret is that the Markov chain of the Gilbert-Elliot model
is ergodic (irreducible, aperiodic and such that all states are
positive recurrent). We employ the Gilbert-Elliot model due
to its correspondence with the dynamic spectrum access scenario.
In a real CR network the SUs would be equipped with
signal detectors which perform binary hypothesis testing on
the availability of a particular frequency band. Assuming
that the detectors are well designed the secondary users may
then access the band only when the state of the spectrum
is detected to be idle. The two states in the Gilbert-Elliot
model correspond to these two possible outputs from the
detector: state 1 indicating that the band is occupied and state
0 indicating that the band is idle. Note, however, that the actual
observed rewards (data rates) from these two states can be any
two values between 0 and 1.
Initialization: Sense each band for one regenerative cycle.
Loop: After each full regenerative cycle, sense the band with the highest index, where the index is computed according to (2).
Fig. 4. The proposed sensing policy for rewards evolving according to a 2-state Markov chain.
Fig. 3. Markov chain used to model the temporal dependency of the state
of a band. In this paper state 1 denotes that the band is occupied and state 0
that the band is idle.
Our policy construction for Markovian rewards is inspired
by [9] and [11] by making use of the regenerative property
of Markov chains. In particular, we constrain the periods of
exploration and exploitation of each frequency band to be an
integer multiple of a full regenerative cycle. Starting from a given state, a regenerative cycle of a Markov chain is the sequence of states observed before the chain returns back to that state. See Fig.
5 for an illustration of regenerative cycles. When the chain
is irreducible and aperiodic the lengths of the regenerative
cycles of a given state are i.i.d. (see e.g. [31],[32]). This
”trick” of breaking the observed states of a Markov chain into
regenerative cycles is often used in order to employ the theory
of independent random variables, for example, when proving
the strong law of large numbers for Markov chains [32]. The
proposed policy for Markovian rewards is shown in Fig. 4.
Fig. 5 illustrates the regenerative cycles during the first 12
sensing time instants in a 2 band scenario with the proposed
sensing policy.
The main difference to the policy in Fig. 1 is that each
time a band is selected for sensing it will be sensed until at
least one full regenerative cycle is observed. This also helps in
making the analysis of the weak regret in appendix B simpler
and intuitive.
Since the Bernoulli distribution is a special case of a two-state Markov chain, the proposed sensing policy in Fig. 4 can
naturally also achieve asymptotically logarithmic weak regret
with Bernoulli distributed rewards. However, in practice, due
to the fact that the policy in Fig. 4 forces the CR to sense
the same band for at least one full regenerative cycle, it often
tends to perform slightly worse with i.i.d. rewards than the policy in Fig. 1.
Fig. 5. Illustration of the first 12 sensing time instants using the proposed sensing policy. The number of bands is 2 and they are assumed to follow a 2-state Markov chain. The horizontal axis indicates time and the vertical axis indicates the states of the two bands. The dashed line indicates the state evolution of the band. The green solid line shows the states sensed by the SU. The lengths of the regenerative cycles of band 1, and similarly those of band 2, are indicated in the figure. At time instants 1-2 the SU senses one regenerative cycle at band 1. During time instants 3-8 the SU senses 3 regenerative cycles at band 2, and at time instants 9-12 the SU senses band 1. The indices of the bands are recalculated after each observed full regenerative cycle. Consequently the decision to keep sensing the same band or to switch to another is also made after each observed regenerative cycle. In this example these index update and decision time instants correspond to 2, 4, 5, 8 and 12.
The following theorem summarizes the asymptotic weak
regret of the policy in Fig. 4.
Theorem 2. The asymptotic weak regret of the policy in Fig. 4, when the rewards evolve according to a 2-state Markov process, is logarithmic, i.e., as stated in (4).
Proof: See Appendix B.
C. Intuition on asymptotic logarithmic weak regret
Next we provide the intuition why the proposed policies
in Fig. 1 and Fig. 4 attain asymptotically logarithmic weak
regret. Detailed proofs can be found in appendices A and B.
As was seen in (1), the expected weak regret depends on the number of times that a suboptimal frequency band is sensed. In order to show that the weak regret is logarithmic, one needs to show that the expected number of sensings of any suboptimal band is upper bounded logarithmically. Our analysis is based on investigating the interval between two consecutive sensing time instants of any suboptimal band (the instants of the zeroes seen in Fig. 2) after a sufficient number of samples from each band has been accumulated. It is shown that, for a large enough number of sensings, the time difference between the first and a later sensing time instant of a suboptimal band grows exponentially in time, and that this fact consequently leads to logarithmically growing weak regret.
Since the rewards are bounded and since, by definition, the exploration bonus is increasing and unbounded, at any given time for any band there always exists a future time instant when the band will be sensed again. In other words, the policy never completely stops exploring, and consequently each band will asymptotically be sensed infinitely often. After a sufficient amount of exploration, by the strong law of large numbers the indices of a suboptimal band and of the optimal band may be approximated by their sample-mean terms plus the exploration bonus evaluated at the corresponding sensing time instants; the approximation for the optimal band becomes an inequality because the exploration bonus is always greater than or equal to zero. This is to say that the sample mean terms behave asymptotically like constants and that, for sufficiently large times, the instants of exploration and exploitation are practically controlled by the exploration bonus. Hence, asymptotically a suboptimal band will not be sensed sooner than when its exploration bonus compensates the gap in the sample means. Since the exploration bonus is strictly increasing, concave and unbounded, its inverse function is strictly increasing, convex and unbounded. Consequently, each successive gap between sensing time instants of the suboptimal band is at least as long as the previous one, with equality only in a limiting case, and iterating this argument over the subsequent sensing time instants shows that the gaps grow without bound. This implies that asymptotically (5) holds.
The above intuition is based on the strong law of large
numbers which states that there exists a finite time instant
when approximating the sample mean terms with the true
mean values is good enough and after which the number of
suboptimal sensings will increase only logarithmically. This
is illustrated graphically in Fig. 6, where after a certain time instant the number of sensings of a suboptimal band grows only logarithmically.
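As a quick numerical illustration of this point (with made-up numbers, not quantities from the paper), exponentially spaced suboptimal sensing instants translate into a suboptimal sensing count that grows only logarithmically in time:

import math

# Suboptimal sensing instants with geometrically growing gaps (illustrative values only).
ratio = 1.5
instants = [1.0]
while instants[-1] < 1e6:
    instants.append(instants[-1] * ratio)

for n in (1e2, 1e4, 1e6):
    sensings = sum(1 for t in instants if t <= n)
    predicted = math.log(n) / math.log(ratio)
    print(f"n = {n:>9.0f}: suboptimal sensings = {sensings:3d}, log(n)/log(ratio) = {predicted:5.1f}")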
While much of the recent work in this area has concentrated
on finding finite time upper bounds on the expected weak
regret, the bounds derived in this paper are asymptotic by
nature. Most of the recent derivations of finite time weak regret
bounds in the literature owe themselves to the framework first
given in [16] by using Chernoff/Hoeffding type concentration
inequalities. The policies proposed in this paper are not amenable to this proof technique, since the exploration bonuses here are based on recency, not explicitly on confidence.
Fig. 6. Example of the asymptotic behavior of the number of suboptimal sensings for one sample path. The x-axis indicates the suboptimal sensing time instants and the y-axis indicates the ordinal of the sensing (the number of suboptimal sensings). The length of each horizontal line indicates the sensing time instant of the corresponding sensing. For the first sensings the number of suboptimal sensings might in the worst case grow linearly with time, but with probability one, for a large enough time instant, the growth slows down to logarithmic.
However, in Appendices A and B, we show that for every run
of the proposed policy with probability 1, there exists a finite
time instant after which the weak regret grows logarithmically.
In practice, this means that whenever the policy is set running,
the weak regret converges with probability 1 to a logarithmic
rate in a finite time.
V. CONSTRUCTING THE EXPLORATION BONUS
In this section we present how to find the exploration bonus. The goal is to find a feasible exploration bonus that brings the leading constant of the asymptotic weak regret in the i.i.d. case as close as possible to the asymptotic lower bound given in [12]. Here, by the term "feasible" we mean an exploration bonus within the proposed class of recency-based policies that is also simple to compute. Deriving a policy that would match the lower bound of Lai and Robbins in [12] is beyond the scope of this paper, since to that end the proposed policy would need nontrivial estimates of the Kullback-Leibler divergences between the reward distributions. Here we only consider the case of (general) independent rewards and Bernoulli rewards, since the lower bound of [12] applies to those cases. Interestingly, the lower bound for the weak regret in the general restless Markov case is not known; deriving it is not attempted in this paper.
A. Independent rewards: general case
In [12], Lai and Robbins showed that for any consistent policy the following lower bound holds for the number of sensings of a suboptimal band: (6), where the divergence appearing in (6) is the Kullback-Leibler divergence of the reward distribution of the suboptimal band from the reward distribution of the optimal band, and the bounded quantity is the total number of suboptimal sensings by the policy. We would like the asymptotic leading constant of the weak regret of our policy in (5) to be as close as possible (from above) to the constant in the lower bound (6). Hence, we are looking for an exploration bonus that brings (7) as close as possible to equality. Since the Kullback-Leibler divergence is a convex function in the pair of its arguments [33], one can conclude that it is best approximated from below by a convex function. Consequently, since the inverse of the exploration bonus needs to be convex, the exploration bonus itself should be concave. This is also supported by intuition and by keeping in mind Fig. 2: if the difference in the mean reward between the optimal band and the suboptimal band is small (close to 0), one can afford to increase the exploration bonus fast in the beginning, since both the optimal and suboptimal bands need to be explored many times in order to find out which one of them has a larger expected reward. On the other hand, if the difference in the mean reward is big (close to 1), one can still increase the exploration bonus fast in the beginning but should gradually slow it down in order to inhibit excessive exploration.
For deriving an exploration bonus for bounded i.i.d. rewards we observe the following result, which follows from Pinsker's inequality:
Theorem 3. Let two continuous random variables with integrable probability density functions on the unit interval be given and denote the difference of their expected values. Then the lower bound (8) holds, where the left-hand side of (8) denotes the Kullback-Leibler divergence between the two distributions.
Proof: The chain of inequalities (9) holds, where the intermediate quantity is the total variation distance between the two densities. The second inequality is due to the fact that we are integrating over the unit interval, and the last step in (9) is due to Pinsker's inequality (see e.g. [34]). Equation (8) then follows from (9). With practically the same arguments (and by replacing integrals with sums), the above result can be derived for bounded discrete random variables.
Using the bound of Theorem 3, we can find the exploration bonus for the case when the rewards are assumed to be independent and bounded in [0,1]. Keeping in mind the asymptotic behavior of the indices at the sensing time instants, and requiring that the corresponding relation be satisfied with equality, it is possible to find the exploration bonus to be (10).
Using (10) as the exploration bonus in the proposed policy (listed in Fig. 1), the leading constant of the asymptotically logarithmic weak regret will approach the corresponding constant according to (5).
For illustration purposes we simulate the proposed policy with the exploration bonus given in (10) in a two frequency band scenario. The rewards (data rates) from band 1 are i.i.d. uniform on one subinterval of [0,1] and the rewards from band 2 are i.i.d. uniform on another, with the expected reward of band 2 being the larger one, so that band 1 is the suboptimal band. Fig. 7 plots the rate of change of the number of sensings of band 1 with respect to n. As expected by (5), the rate of change asymptotically converges to the predicted value.
Fig. 7. The simulated rate of change of the number of sensings of band 1 with respect to n (shown on a logarithmic scale). The curve has been averaged over 1000 Monte Carlo simulations. The simulated rate of change approaches the theoretical value.
B. Bernoulli rewards
Next we construct the exploration bonus for the case when the rewards are known to be independent Bernoulli distributed. By assuming the data rates to be Bernoulli variables (in addition to being independent), it is possible to obtain a tighter version of Pinsker's inequality and hence use greedier exploitation than what would be achieved by the exploration bonus in (10).
Theorem 4. Assume two Bernoulli random variables taking values in the unit interval, with given success probabilities (probabilities of the value 1), and denote the difference of their expected values. Then the lower bound (11) holds.
Proof: The chain of inequalities (12) holds, where the intermediate quantity is the total variation distance between the two distributions and the last inequality is due to Pinsker's inequality. Hence solving for the divergence yields equation (11).
Note that the theorem above holds for any Bernoulli process (and not only when the rewards are either 0 or 1). Employing Theorem 4 we obtain the tailored exploration bonus for Bernoulli distributed rewards as (13).
Theorem 3 provides a lower bound for the KL-divergence of all independent rewards bounded in [0,1], whereas Theorem 4 provides a tighter lower bound when the rewards are also known to be Bernoulli. Hence, when there is a priori information about the type of the reward distributions (in this case, Bernoulli) it is possible to use an exploration bonus that favors more aggressive exploitation. In other words, when the reward distribution is known apart from its expectation, it is possible to more carefully balance the trade-off between the convergence to the optimal frequency band and the achieved asymptotic regret.
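The Bernoulli case of the bounds behind Theorems 3 and 4 can be sanity-checked numerically. The following standalone Python snippet (not code from the paper) compares the exact Kullback-Leibler divergence between two Bernoulli distributions with the Pinsker-type lower bound 2(p − q)^2; the test pairs are arbitrary except for the middle one, which mimics the nearly equal means of Fig. 10.

import math

def kl_bernoulli(p, q):
    """Exact KL divergence D(Ber(p) || Ber(q))."""
    return p * math.log(p / q) + (1 - p) * math.log((1 - p) / (1 - q))

for p, q in [(0.2, 0.3), (0.4967, 0.50541), (0.1, 0.8)]:
    pinsker = 2.0 * (p - q) ** 2   # total variation distance for Bernoulli is |p - q|
    print(f"p={p}, q={q}: KL = {kl_bernoulli(p, q):.5f} >= 2(p-q)^2 = {pinsker:.5f}")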
9
VI. SIMULATION EXAMPLES
In this section we illustrate the performance of the proposed
sensing policies in various scenarios with independent and
Markovian data rates (rewards) by simulations. Here the
performance measure is the number of suboptimal sensings, i.e., the number of times when a band other than the one with the highest stationary expectation was sensed. The performance of the policies is compared against three other state-of-the-art policies: KL-UCB [17], DSEE [10] and RCA [9]. In order to guarantee that the DSEE and RCA policies achieve finite time logarithmic weak regret, one needs to define appropriate parameter values for them (one parameter for the DSEE and one for the RCA). To this end one would need to know certain non-trivial upper bounds for the parameters of the underlying Markov processes. This information may not be available in practice. In addition, it has been empirically shown in [10], [9] that these theoretical parameter values, although sufficient, are not necessary, and that often better performance is achieved with lower parameter values. However, setting the parameters to low fixed values might improve performance in some scenarios but might lead to significant performance degradation in others. According to [10], letting the policy parameter of the DSEE grow slowly with time eliminates the need for any a priori system knowledge with an arbitrarily small sacrifice in the asymptotic weak regret. In the simulations we have set the DSEE policy parameter with the goal of obtaining the best possible overall performance in different scenarios. In our experiments, this choice provided the most stable outcome with good finite-time performance. In all the scenarios considered here, the resulting weak regret was practically the same as with the parameter value used in the simulations of [10]. Similarly, according to [9], letting the parameter of the RCA policy grow slowly in time does not sacrifice the asymptotic regret much. In the simulations of RCA we have used parameter values taken from the simulations of [9]. All the simulations span the same fixed number of sensing time instants and the presented curves are averages of 10000 independent runs. The curves have been normalized by ln(n) in order to illustrate convergence to a logarithmic rate.
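A minimal evaluation harness in this spirit is sketched below. It is illustrative only: the epsilon-greedy stand-in policy, the Bernoulli band means, the horizon and the number of runs are assumptions and are not the policies or settings compared in the paper; the point is merely the bookkeeping of suboptimal sensings normalized by ln(n), as in the figures.

import math
import random

def count_suboptimal(policy, means, horizon, seed):
    """Run `policy` once on Bernoulli bands with the given means; count suboptimal sensings."""
    rng = random.Random(seed)
    best = max(range(len(means)), key=means.__getitem__)
    suboptimal = 0
    for t in range(1, horizon + 1):
        arm = policy.select(t)
        reward = 1.0 if rng.random() < means[arm] else 0.0
        policy.update(arm, reward)
        if arm != best:
            suboptimal += 1
    return suboptimal

def average_normalized_regret(policy_factory, means, horizon, runs=100):
    total = 0.0
    for seed in range(runs):
        total += count_suboptimal(policy_factory(), means, horizon, seed)
    return total / runs / math.log(horizon)

# Example usage with a trivial epsilon-greedy stand-in policy (not one of the compared policies).
class EpsGreedy:
    def __init__(self, n=5, eps=0.05):
        self.mean = [0.0] * n
        self.count = [0] * n
        self.eps = eps
    def select(self, t):
        if random.random() < self.eps or 0 in self.count:
            return random.randrange(len(self.mean))
        return max(range(len(self.mean)), key=self.mean.__getitem__)
    def update(self, arm, reward):
        self.count[arm] += 1
        self.mean[arm] += (reward - self.mean[arm]) / self.count[arm]

print(average_normalized_regret(lambda: EpsGreedy(), [0.1, 0.3, 0.5, 0.7, 0.9], horizon=5000, runs=20))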
A. Independent rewards
First we simulate the performance of the proposed sensing
policy when the rewards are independent. We consider the case
when the rewards are Bernoulli distributed and the case when
the rewards can obtain any values between [0,1].
Fig. 8 shows the expected number of suboptimal sensings in a 5 band scenario. The availability of each band is assumed to be Bernoulli distributed such that the reward is 1 when the band is sensed idle and accessed, and 0 when the band is sensed occupied. It is assumed that trying to access an occupied band would cause a collision between the SU and the PU and produce no throughput. The average data rates of the bands are then given by the corresponding idle probabilities. In the implementation of the KL-UCB we have assumed a Bernoulli distribution, hence it represents here an asymptotically optimal policy. For the proposed policy we show the results using the exploration term in (13) for Bernoulli rewards. It can be seen that in this case the proposed sensing policy performs close to the KL-UCB policy while being computationally much simpler. The DSEE policy has a significant drop in its performance after around 10000 sensing time instants. This seems to be due to the deterministic nature of the exploration epochs, which in this scenario often occur around that time. In this scenario the RCA policy takes the longest time before converging to a logarithmic rate.
Fig. 8. Mean number of suboptimal sensings with Bernoulli rewards (vertical axis: number of suboptimal plays divided by ln(n)). It can be seen that in this case the proposed sensing policy gives almost as good performance as the KL-UCB policy, which has been shown to be asymptotically optimal. However, the proposed policy is much simpler than the KL-UCB.
Fig. 9 shows the average number of suboptimal sensings in a 2-band scenario where the reward distributions of the bands are shown in Fig. 10. In this case, the KL-UCB policy for Bernoulli rewards is no longer asymptotically optimal. However, according to the authors of [17], it should still achieve good performance with general [0,1] bounded rewards. For the DSEE and the RCA we have again used the same parameter values as before. In order to simulate the RCA policy in this scenario we have assumed that the SU is capable of distinguishing between the 101 possible rewards (states) shown on the horizontal axis of Fig. 10. For the proposed policy we have used both the exploration bonus given in (10), optimized for general i.i.d. [0,1] bounded rewards, and the exploration bonus optimized for Bernoulli
rewards. We also observed that the proposed policy optimized for Bernoulli rewards achieves excellent performance even when the reward distributions are not Bernoulli. In this scenario the proposed policy assuming Bernoulli rewards and the DSEE policy achieve the lowest number of suboptimal sensings.
Fig. 9. Average number of suboptimal sensings with i.i.d. rewards whose PMFs are shown in Fig. 10 (vertical axis: number of suboptimal plays divided by ln(n)). The KL-UCB policy is the one optimized for Bernoulli rewards. For the proposed sensing policy we have used the exploration bonuses given in (10) and (13). It can be seen that the exploration bonus optimized for Bernoulli rewards still gives excellent performance even though the actual rewards are not Bernoulli distributed.
Fig. 10. The data rate (reward) PMFs of the two bands simulated in Fig. 9. The expected rewards are approximately 0.4967 and 0.5054. This corresponds to a difficult scenario where the two bands have almost the same expected rewards.
Fig. 11. The simulated mean number of suboptimal sensings in a scenario with a very slowly varying spectrum with 10 bands and Markovian rewards. The transition probabilities are small, making the typical state evolutions of the bands ...,0,0,0,0,...,0,1,1,1,1,.... Due to the slowly varying states of the spectrum this scenario would be highly attractive for CR. The KL-UCB policy is the one optimized for Bernoulli rewards. In this scenario the proposed policy clearly achieves the lowest number of sensings on suboptimal bands.
B. Markovian rewards
Next we present the simulation results with Markovian
rewards of the proposed sensing policy listed in Fig. 4. Fig.
11 shows the expected number of suboptimal sensings when
there are 10 bands whose availability for secondary use evolves
according to a 2-state Markov chain (i.e. the Gilbert-Elliot
model). Also in this scenario when the band is in state 1 the
band is occupied by the primary user and when the band
is in state 0 the band is idle. The reward from sensing a
band that is idle is 1 whereas sensing an occupied band
produces 0 reward. The transition probabilities of the bands are [0.01, 0.01, 0.02, 0.02, 0.03, 0.03, 0.04, 0.04, 0.05, 0.05] and [0.08, 0.07, 0.08, 0.07, 0.08, 0.07, 0.02, 0.01, 0.02, 0.01]. The corresponding stationary expected rewards are [0.83, 0.11, 0.80, 0.30, 0.67, 0.20, 0.71, 0.22, 0.13, 0.27]. This scenario corresponds to a case where the state of the spectrum evolves slowly between occupied and unoccupied, i.e., the spectrum is either persistently idle or persistently occupied. Such a scenario would be very attractive for opportunistic spectrum use. In the proposed policy we have used the corresponding recency-based exploration bonus. Fig. 11 shows that the proposed policy achieves the uniformly lowest number of suboptimal sensings compared to the other three policies.
Fig. 12 shows the average number of suboptimal sensings for the proposed sensing policy in a scenario where the state of the spectrum is highly dynamic. In this scenario the state transition probabilities are close to 1, causing the bands to alternate constantly between the idle and occupied states, which makes this scenario less attractive for practical cognitive radio deployment. In the DSEE and the RCA we have again set the parameters as before, and in the proposed policy we have used the corresponding recency-based exploration bonus. In this scenario the proposed policy is on par with the RCA, while the DSEE has the lowest number of suboptimal sensings.
VII. CONCLUSIONS
In this paper we have proposed asymptotically order optimal
sensing policies for cognitive radio that carry out recencybased exploration through the use of carefully developed
exploration bonuses. We have proposed policies for the cases
in which the state of the spectrum evolves independently from
the past and when the state evolves as a 2-state Markov
process. The proposed policies are built upon the idea of recency-based exploration bonuses that force each band to be sensed infinitely many times while ultimately pushing the exploration instants of suboptimal bands exponentially far apart from each other. We have proved using analytical tools that the proposed policies attain asymptotically logarithmic weak regret when the bounded rewards are independent and when they are Markovian. Furthermore, we have shown that when there is information about the type of the secondary user throughput distributions, it is possible to construct policies with better performance. Our simulation results have shown that the proposed policies typically provide performance gains over the state-of-the-art policies. The simulation results have also indicated that the expected weak regret is uniformly logarithmic.
Fig. 12. The simulated mean number of suboptimal sensings in a highly dynamic spectrum with 5 bands and Markovian rewards. The transition probabilities are close to 1, making the typical state evolutions of the bands ...,0,1,0,1,0,.... Since the spectrum is highly dynamic this scenario would be less attractive for cognitive radio deployment. The KL-UCB policy is the one optimized for Bernoulli rewards. In this scenario the proposed policy and the RCA perform equally well while the DSEE has the best performance.
APPENDIX A
PROOF OF THEOREM 1
In this section, we give a formal proof that the proposed policy in Fig. 1 attains asymptotically logarithmic weak regret when the rewards are independent. The proof is based on the fact that the adverse event considered below happens only a finite number of times, so that henceforth asymptotically the suboptimal band will be sensed only when its exploration bonus has become large enough (see Fig. 2). Asymptotically, this happens at an exponentially decreasing rate with probability 1. In order to keep the notation simple the derivation is given for the case of two bands, however, without loss of generality. The result will generalize to multiple frequency bands by comparing each suboptimal band against the optimal band separately. Since the optimal band is asymptotically sensed exponentially more often than any of the suboptimal bands, the asymptotic weak regret will be logarithmic.
Here we use the following notation for the suboptimal band: its sequence of sampling instants up to a given time and the associated reward sample mean; the corresponding quantities for the optimal band are marked analogously. Furthermore, the gap denotes the difference between the true mean of the optimal band and the true mean of the suboptimal band.
Next we show that for each sample path (run of the sensing policy) there exists with probability 1 a time instant when the suboptimal band is sensed and after which the adverse event does not take place any more. In other words, one can show that for any positive margin and large enough time the sample mean of the optimal band will be larger than the sample mean of the suboptimal band by at least that margin. After this point the explorations of the suboptimal band will be almost surely dictated by the exploration bonus. To this end we use the following lemma by Kolmogorov (see e.g. [35] p. 27):
Lemma 1 (Kolmogorov's strong law). Let a sequence of independent random variables with given means and variances be given. If the series of the variances divided by the squares of their indices converges, then (14) holds, where a.e. stands for almost everywhere.
Proof: See [36] p. 590.
Note that Lemma 1 implies that, for any positive threshold, (15) holds, where i.o. stands for infinitely often. Now, for any sensing ordinals, consider the corresponding sampling instants of the optimal band and of the suboptimal band. Note that since the rewards are bounded and the exploration function is increasing and unbounded, there always exists a time instant when the index of any given band will be the largest (and hence the band will be sensed). Consequently both bands will be sensed infinitely many times. Take the event that the sample mean of the optimal band fails to exceed that of the suboptimal band by the required margin.
Now we can notice that in order for the relevant difference to be negative at least one of its two terms has to be negative. Hence, we get a union bound on the probability of this event; the last inequality is due to an elementary bound valid for any (real) random variable. Since the rewards have finite variances, we notice using Lemma 1 that the centered sample means converge to zero almost surely. Hence we conclude that (16) holds. In other words, for any positive margin, with probability 1 there exists a time instant when the sample average of the optimal band will be at least that much larger than the sample average of the suboptimal band. Consequently, for any two consecutive sensing time instants of a suboptimal band occurring after this point, the following will hold: (17). Since, by (17), the difference between the sensing time instants of the suboptimal band will asymptotically grow exponentially, and since a time difference between two consecutive sensing time instants that increases exponentially forces the number of sensings to grow only logarithmically, the expected number of suboptimal sensings is as in (18).
APPENDIX B
PROOF OF THEOREM 2
In order to prove that the policy in Fig. 4 has logarithmic weak regret with Markovian rewards one needs to show, similarly to the independent rewards case, that with probability 1 there exists a time instant after which the sample mean of the rewards of the optimal band is always greater than that of the suboptimal band. In the proof for independent rewards in Appendix A the past sensing time instants did not play a role in showing the convergence of the reward sample means to the true expected reward from a band. This was because the rewards were assumed to be independent from the past. With Markovian rewards, however, the sensing policy, i.e., how the sensing time instants are selected, plays a role in whether the reward sample means converge to the true stationary mean or not. In order to simplify the notation we drop the indexing of the bands and focus on only one of the bands, showing that its reward sample average converges to its true stationary mean almost surely. The rest of the proof follows essentially the same path as the proof for the i.i.d. case in Appendix A.
The exploration bonus of a band that is not sensed grows unboundedly, so that the index of that band will also grow unboundedly. As a consequence, for any band there always exists a future time instant when its index will be the largest one and when it will be sensed again. Since every time a band is selected for sensing it is sensed for at least one full regenerative cycle, the number of regenerative cycles spent on sensing each band approaches infinity as the policy runs infinitely long. With this in mind we may use the strong law of large numbers for Markov chains to prove the convergence of the reward sample mean to the true stationary expected reward.
Let the reward (data rate) that the CR obtains when it senses and accesses the band in the idle state, and the reward obtained when the band is sensed and accessed in the occupied state, be given. The expected reward from the band is then (19), where the weights are the stationary probabilities of state 0 and state 1.
Denote the total number of sensings of the band at a given time instant, and assume that this time instant is at the end of one of the regenerative cycles. Denote the length of the j-th 0-cycle (regenerative cycle starting and ending in state 0) accordingly. Notice that these cycle lengths are i.i.d. with mean equal to the reciprocal of the stationary probability of state 0 (see e.g. [32] Theorem 1.41). It can be shown (see e.g. [37], [31] or [32]) that the sample average of the lengths of the 0-cycles observed up to the given time converges to this mean. This result is a consequence of the independence of the lengths of the regenerative cycles and the strong law of large numbers. Then (20) will also hold.
Similarly for the 1-cycles (regenerative cycles starting and ending in state 1) we have (21), where the relevant quantity is the sample average of the lengths of the 1-cycles observed until the given time.
On the other hand, the reward sample average of all the sensings up to the given time is the sum of the rewards collected from the band divided by the number of sensings. It is assumed that the last observed reward whenever the SU decides to hop to another band is not counted in the sample average (although naturally the reward is collected). The sample average can be further expressed in terms of the per-state visit counts during the observed cycles,
where these counts are the numbers of visits to each state during the observed cycles up to the given time. Denote the total number of sensings during 0-cycles and the total number of sensings during 1-cycles accordingly; their sum is the total number of sensings. Since the channel must be either in state 1 or in state 0, the visit counts within each type of cycle add up to the corresponding totals. Using these we may further rewrite the sample average. Next we notice how the visit counts relate to the observed cycle lengths; substituting these relations and using (20) and (21), we obtain a limit which is equivalent to (22).
Now again denote the sample average of the optimal band and that of a suboptimal band as in Appendix A. Using the same steps as in Appendix A, (15)-(16), for i.i.d. rewards, together with the result of (22), we get that (23) holds. This means that with probability 1 there exists a finite time instant when the sample average of the optimal band is larger, by at least a fixed margin, than that of a suboptimal band and will stay larger from there on. Denoting the time instants of the starts of the regenerative cycles at a suboptimal band occurring after this point, the following will hold: (24)
Since the Markov chains are assumed to be recurrent, all observed regenerative cycles are of finite length with probability 1. This guarantees that the use of regenerative cycles will not make the policy get stuck sensing a suboptimal band forever. Since, by (24), the difference between the starts of consecutive new regenerative cycles at any suboptimal band will asymptotically grow exponentially, and since a time difference between two consecutive regenerative cycles (which are always finite) that increases exponentially forces the number of sensings to grow only logarithmically, the expected number of sensings of any suboptimal band is as in (25).
ACKNOWLEDGMENT
Prof. Santosh Venkatesh, University of Pennsylvania, is acknowledged for useful discussions. The authors wish to thank the anonymous reviewers for their constructive comments that have improved the quality of the paper.
REFERENCES
[1] D. Cabric, S. M. Mishra, and R. W. Brodersen, “Implementation Issues
in Spectrum Sensing for Cognitive Radios,” in Proc. of the ASILOMAR
conf., vol. 1, Nov. 2004, pp. 772–776.
[2] J. Lundén, V. Koivunen, and H. V. Poor, Spectrum exploration and
exploitation, in Principles of Cognitive Radio, Chapter 5., E. Biglieri,
A. Goldsmith, L. Greenstein, N. Mandayam, and H. V. Poor, Eds.
Cambridge University Press, 2012.
[3] E. Axell, G. Leus, E. Larsson, and H. V. Poor, “Spectrum Sensing for
Cognitive Radio : State-of-the-Art and Recent Advances,” IEEE Signal
Process. Mag., vol. 29, no. 3, pp. 101–116, May 2012.
[4] S. Bubeck and N. Cesa-Bianchi, “Regret Analysis of Stochastic and
Nonstochastic Multi-armed Bandit Problems,” Foundations and Trends
in Machine Learning, vol. 5, no. 1, pp. 1–122, 2012.
[5] R. S. Sutton and A. G. Barto, Reinforcement Learning: An Introduction.
Cambridge University Press, 1998, vol. 1.
[6] J. C. Gittins, “Bandit Processes and Dynamic Allocation Indices,” J. R.
Stat. Soc. Series B Stat. Methodol., vol. 41, no. 2, pp. 148–177, 1979.
[7] C. H. Papadimitriou and J. N. Tsitsiklis, “The Complexity of Optimal
Queuing Network Control,” Math. Oper. Res., vol. 24, no. 2, pp. 293–
305, 1999.
[8] P. Whittle, “Restless Bandits: Activity Allocation in a Changing World,”
J. Appl. Probab., vol. 25A, pp. 287–298, 1988.
[9] C. Tekin and M. Liu, “Online Learning of Rested and Restless Bandits,”
IEEE Trans. Inf. Theory, vol. 58, no. 8, pp. 5588–5611, August 2012.
[10] H. Liu, K. Liu, and Q. Zhao, “Learning in a Changing World: Restless Multi-Armed Bandit with Unknown Dynamics,” IEEE Trans. Inf.
Theory, vol. 59, no. 3, pp. 1902–1916, March 2013.
[11] Z. Zhang, H. Jiang, P. Tan, and J. Slevinsky, “Channel Exploration
and Exploitation with Imperfect Spectrum Sensing in Cognitive Radio
Networks,” IEEE J. Sel. Areas Commun., vol. 31, no. 3, pp. 429–441,
2013.
[12] T. L. Lai and H. Robbins, “Asymptotically Efficient Adaptive Allocation
Rules,” Adv. in Appl. Math., vol. 6, no. 1, pp. 4–22, 1985.
[13] R. Kleinberg, “Nearly Tight Bounds for the Continuum-Armed Bandit
Problem,” in Adv. Neural Inf. Process. Syst., 2004, pp. 697–704.
[14] J. Oksanen, V. Koivunen, and H. V. Poor, “A Sensing Policy Based
on Confidence Bounds and a Restless Multi-Armed Bandit Model,” in
Proc. of the ASILOMAR conf., 2012, pp. 318–323.
[15] E. Even-Dar and Y. Mansour, “Convergence of Optimistic and Incremental Q-Learning,” in NIPS, 2001, pp. 1499–1506.
[16] P. Auer, N. Cesa-Bianchi, and P. Fischer, “Finite-time Analysis of the
Multiarmed Bandit Problem,” Machine Learning, vol. 47, pp. 235–256,
2002.
[17] O. Cappé, A. Garivier, O. Maillard, R. Munos, and G. Stoltz, “Kullback-Leibler Upper Confidence Bounds for Optimal Sequential Allocation,”
Ann. Stat., vol. 41, no. 3, pp. 1516–1541, June 2013.
[18] R. S. Sutton, “Integrated Architectures for Learning, Planning, and
Reacting Based on Approximating Dynamic Programming,” in Proc.
of the ICML, 1990, pp. 216–224.
[19] A. N. Burnetas and M. N. Katehakis, “Optimal Adaptive Policies for
Sequential Allocation Problems,” Adv. in Appl. Math., vol. 17, no. 2, pp.
122–142, Jun. 1996.
[20] V. Anantharam, P. Varaiya, and J. Walrand, “Asymptotically Efficient
Allocation Rules for the Multiarmed Bandit Problem with Multiple
Plays-Part II: Markovian Rewards,” IEEE Trans. Autom. Control, vol. 32,
no. 11, pp. 977–982, 1987.
[21] R. Agrawal, “Sample Mean Based Index Policies with O(log n) Regret
for the Multi-Armed Bandit Problem,” Adv. Appl. Probab., vol. 27, no. 4,
pp. 1054–1078, 1995.
[22] S. Bubeck, “Bandits Games and Clustering Foundations,” Ph.D. dissertation, Université Lille 1, 2010.
[23] A. Garivier and O. Cappé, “The KL-UCB Algorithm for Bounded
Stochastic Bandits and Beyond,” in Proc. of the COLT, 2011, pp. 359–
376.
[24] K. Liu, Q. Zhao, and B. Krishnamachari, “Dynamic Multichannel
Access With Imperfect Channel State Detection,” IEEE Trans. Signal
Process., vol. 58, no. 5, pp. 2795–2808, May 2010.
[25] S. Filippi, O. Cappe, and A. Garivier, “Optimally Sensing a Single
Channel Without Prior Information: The Tiling Algorithm and Regret
Bounds,” IEEE J. Sel. Topics Signal Process., vol. 5, no. 1, pp. 68–76,
2011.
[26] T. V. Nguyen, H. Shin, T. Quek, and M. Win, “Sensing and Probing
Cardinalities for Active Cognitive Radios,” IEEE Trans. Signal Process.,
vol. 60, no. 4, pp. 1833–1848, 2012.
[27] J. Oksanen, J. Lundén, and V. Koivunen, “Design of Spectrum Sensing
Policy for Multi-user Multi-band Cognitive Radio Network,” in Proc. of
the CISS, March 2012, pp. 1–6.
[28] Y. Li, S. Jayaweera, M. Bkassiny, and K. Avery, “Optimal Myopic
Sensing and Dynamic Spectrum Access in Cognitive Radio Networks
with Low-Complexity Implementations,” IEEE Trans. Wireless Commun., vol. 11, no. 7, pp. 2412–2423, July 2012.
[29] H. Su and X. Zhang, “Opportunistic MAC Protocols for Cognitive Radio
Based Wireless Networks,” in Proc. of the CISS, 2007, pp. 363–368.
[30] S. Geirhofer, L. Tong, and B. Sadler, “Cognitive Radios for Dynamic
Spectrum Access - Dynamic Spectrum Access in the Time Domain:
Modeling and Exploiting White Space,” IEEE Commun. Mag., vol. 45,
no. 5, pp. 66–72, 2007.
[31] D. W. Stroock, An Introduction to Markov Processes. Springer, 2005,
vol. 230, 184 pages.
[32] J. T. Chang, “Stochastic Processes,” Lecture notes, Yale
University. [Online]. Available: http://www.stat.yale.edu/ jtc5/251/
stochastic-processes.pdf
[33] T. M. Cover and J. A. Thomas, Elements of Information Theory. John
Wiley & Sons, 1991.
[34] N. Cesa-Bianchi, and G. Lugosi, Prediction, Learning, and Games.
Cambridge University Press, 2006, 394 pages.
[35] R. J. Serfling, Approximation Theorems of Mathematical Statistics.
Wiley, 1980, vol. 162, 380 pages.
[36] S. S. Venkatesh, The Theory of Probability: Explorations and Applications. Cambridge University Press, 2012, 805 pages.
[37] J. R. Norris, Markov Chains. Cambridge university press, 1998, no.
2008, 237 pages.
Jan Oksanen (S'10) received his M.Sc. (with distinction) in 2008 in communications engineering from the Helsinki University of Technology (currently known as Aalto University), Finland. He is currently finalizing his D.Sc. (Tech.) at the Department of Signal Processing and Acoustics, Aalto University. From Nov. 2011 to Nov. 2012 he was a visiting student research collaborator at Princeton University, NJ, USA. His research interests include spectrum sensing, cognitive radio and machine learning.
Visa Koivunen (IEEE Fellow) received his D.Sc.
(EE) degree with honors from the University of
Oulu, Dept. of Electrical Engineering. He received
the primus doctor (best graduate) award among
the doctoral graduates in years 1989-1994. He is
a member of Eta Kappa Nu. From 1992 to 1995
he was a visiting researcher at the University of
Pennsylvania, Philadelphia, USA. From 1997 to 1999 he was a faculty member at Tampere University of Technology. Since 1999 he has been a full Professor of Signal Processing at Aalto University (formerly known as Helsinki University of Technology), Finland. He received the Academy Professor position (distinguished professor nominated by the Academy of Finland). He is one of the Principal Investigators in the SMARAD Center of Excellence in Research nominated by the Academy of Finland. From 2003 to 2006 he was also an adjunct full professor at the University of Pennsylvania, Philadelphia, USA. During his sabbatical year 2007 he was a Visiting Fellow at Princeton
University, NJ, USA. He has also been a part-time Visiting Fellow at Nokia
Research Center (2006-2012). He spent a sabbatical at Princeton University
for the full academic year 2013-2014.
Dr. Koivunen’s research interests include statistical, communications, sensor
array and multichannel signal processing. He has published about 350
papers in international scientific conferences and journals. He co-authored
the papers receiving the best paper awards at IEEE PIMRC 2005, EUSIPCO 2006, EUCAP (European Conference on Antennas and Propagation) 2006 and COCORA 2012. He has been awarded the IEEE Signal Processing Society best paper award for the year 2007 (with J. Eriksson). He served as an associate editor for IEEE Signal Processing Letters, IEEE Transactions on Signal Processing, Signal Processing and the Journal of Wireless Communication and Networking. He is a co-editor of the IEEE JSTSP special issue on Smart Grids and a member of the editorial board of the IEEE Signal Processing Magazine. He has been a member of the IEEE Signal Processing Society technical committees SPCOM-TC and SAMTC. He was the general chair of the IEEE SPAWC 2007 conference in Helsinki, Finland, in June 2007. He is the Technical Program Chair for the IEEE SPAWC 2015 as well as the Array Processing track chair for the 2014 Asilomar conference.
Degenerations of the generic square matrix.
Polar map and determinantal structure
Rainelly Cunha1
Zaqueu Ramos2
Aron Simis3
arXiv:1610.07681v4 [] 18 Oct 2017
Contents
1 Preliminaries
  1.1 Review of ideal invariants
  1.2 Homaloidal polynomials
2 Degeneration by cloning
  2.1 Polar behavior
  2.2 The ideal of the submaximal minors
  2.3 The dual variety
3 Degeneration by zeros
  3.1 Polar behavior
  3.2 The ideal of the submaximal minors
  3.3 The dual variety
Abstract
One studies certain degenerations of the generic square matrix over a field k along
with its main related structures, such as the determinant of the matrix, the ideal generated by its partial derivatives, the polar map defined by these derivatives, the Hessian
matrix and the ideal of the submaximal minors of the matrix. The main tool comes
from commutative algebra, with emphasis on ideal theory and syzygy theory. The
structure of the polar map is completely identified and the main properties of the ideal
of submaximal minors are determined. Cases where the degenerated determinant has
non-vanishing Hessian determinant show that the former is a factor of the latter with
the (Segre) expected multiplicity, a result treated by Landsberg-Manivel-Ressayre by
geometric means. Another byproduct is an affirmative answer to a question of F. Russo
concerning the codimension in the polar image of the dual variety to a hypersurface.
Introduction
As implicit in the title, we aim at the study of certain degenerations of the generic square
matrix over a field k along with its main related structures. The degenerations one has
in mind will be carried by quite simple homomorphisms of the ground polynomial ring
generated by the entries of the matrix over k, mapping any entry to another entry or to zero. Since the resulting degenerated matrices often “forget” their generic origins, the study of the related structures becomes a hard step.
AMS Mathematics Subject Classification (2010 Revision). Primary 13C40, 13D02, 14E05, 14M12; Secondary 13H10, 14M05.
1 Under a CAPES Doctoral scholarship. The paper contains parts of this author’s ongoing PhD thesis.
2 Partially supported by a CNPq post-doctoral fellowship (151229/2014-7).
3 Partially supported by a CNPq grant (302298/2014-2) and by a CAPES-PVNS Fellowship (5742201241/2016).
Concretely, let M denote an m × m matrix which is a degeneration of the m × m generic
matrix and let R denote the polynomial ring over k generated by its entries. The related
structures will mean primevally the determinant f ∈ R of M, the corresponding Jacobian
ideal J ⊂ R, the Hessian matrix H(f ) of f , the polar map of f defined by its partial
derivatives, and the ideal I ⊂ R of submaximal minors. The approach throughout takes
mainly the commutative algebra side of the structures, hence ideal theory and homological
aspects play a dominant role.
The usual geometric way is to look at J as defining the base scheme of the polar map
without further details about its scheme nature. Here, a major point is to first understand
the ideal theoretic features of J such as its codimension and some of its associated prime
ideals, as well as the impact of its linear syzygies. In a second step one focuses on the intertwining between J and the ideal I ⊂ R of the submaximal minors of the matrix. By default, we will have J ⊂ I, typically of the same codimension. Of great interest is to find out when I is a prime ideal and to study its potential nature as the radical of J or of its unmixed part. As far as we know, for the question of primality there are two basic techniques. One is based on verifying beforehand the normality of R/I, the other on a lucky application of the results on s-generic matrices, as developed in [5] and [6]. We draw on both, depending on the available players. A beautiful question, still open in general to our knowledge, is to determine when the singular locus of the determinantal variety defined by I is set-theoretically defined by the immediately lower minors, just as happens in the generic case.
Another major focus is the structure of the polar map and its image (freely called polar
image) in terms of its birational potentiality. Since the polar map is a rational map of
projective space to itself, the image can be defined in terms of the original coordinates
(variables of R) and we may at times make this abuse. Its homogeneous coordinate ring is
known as the special fiber (or special fiber cone) of the ideal J and plays a central role in
the theory of reductions of ideals. Its Krull dimension is also known as the analytic spread
of the ideal J. In this regard, we face two basic questions: first, to compute the analytic
spread of J (equivalently, in characteristic zero, the rank of the Hessian matrix H(f ) of f );
second, to decide when J is a minimal reduction of the ideal I of submaximal minors.
Complete answers to the above questions are largely dependent upon the sort of degenerations of the generic square matrix one is looking at. Throughout this work all matrices
will have as entries either variables in a polynomial ring over a field or zeros, viewed as
degenerations of the generic square matrix. The prevailing tone of this study is to understand the effect of such degenerations on the properties of the underlying ideal theoretic
structures. By and large, the typical degeneration one has in mind consists in replacing
some of the entries of the generic matrix by way of applying a homomorphism of the ambient polynomial ring to itself. There will be noted differences regarding the behavior of a
few properties and numerical invariants, such as codimension, primality, Gorensteiness and
Cohen–Macaulayness. In particular, degenerating variables to zero is a delicate matter and
may depend on strategically located zeros.
In this work we focus on two basic instances of these situations.
The first, called informally “entry cloning” is dealt with in Section 2. Here we show
that f is homaloidal. For this, we first prove that the Jacobian ideal J has maximal linear
rank and that the determinant of H(f ) does not vanish, both requiring quite some tour de
force. By using the birationality criterion in Theorem 1.1, it follows that the polar map is
a Cremona map – i.e., f is a homaloidal polynomial.
We then move on to the ideal I of the submaximal minors. It will be a Gorenstein ideal
of codimension 4, a fairly immediate consequence of specialization. Showing in addition
that it is a prime ideal required a result of Eisenbud drawing upon the 2-generic property of
the generic matrix – we believe that R/I is actually a normal ring. It turns out that I is the
minimal primary component of J and the latter defines a double structure on the variety
V (I) with a unique embedded component, the latter being a linear subspace of codimension
4m − 5. An additional result is that the rational map defined by the submaximal minors
is birational onto its image. We give the explicit form of the image, through its defining
equation, a determinantal expression of degree m − 1. From the purely algebraic side, this
reflects on showing that the ideal J is not a reduction of its minimal component I.
The last topic of the section is the structure of the dual variety V (f )∗ of V (f ). Here
we show that V (f )∗ is an arithmetically Cohen–Macaulay variety that has very nearly the
structure of a ladder determinantal ideal of 2-minors and has dimension 2m−2. We close the
section by proving that f divides its Hessian determinant and has the expected multiplicity
m(m − 2) − 1 thereof in the sense of B. Segre.
The second alternative is dealt with in Section 3. We replace generic entries by zeros
in a strategic position to be explained
in the text. For any given 1 ≤ r ≤ m − 2, the degenerated matrix will acquire \binom{r+1}{2} zeros. We prove that the ideal J still has maximal linear rank. This time around, the Hessian determinant vanishes and the image of the polar map is shown to have dimension m^2 − r(r + 1) − 1. Moreover, its homogeneous coordinate
ring is a ladder determinantal Gorenstein ring.
Moving over again to the dual variety V (f )∗ we find that it is a ladder determinantal
variety of dimension 2m − 2 defined by 2-minors. Thus, it has codimension (m − 1)^2 − r(r + 1)
in the polar image. This result answers a question (oral communication) of F. Russo as
to whether there are natural examples where the codimension of the dual variety in the
polar image is larger than 1, assuming that the Hessian determinant of f vanishes – here
the gap is actually arbitrarily large and in addition it is representative of a well structured
class of determinantal hypersurfaces. We note that V (f )∗ is in particular an arithmetically
Cohen–Macaulay variety. It is arithmetically Gorenstein if r = m − 2.
In the sequel, as in the previous section, our drive is the nature of the ideal I of submaximal minors. Once again, we have geometry and algebra. The main geometric result is
that these minors define a birational map onto its image and the latter is a cone over the
polar variety of f with vertex cut by \binom{r+1}{2} coordinate hyperplanes. The algebraic results are deeper in the sense that one digs into other virtually hidden determinantal ideals coming from submatrices of the degenerate matrix. These ideals come naturally while trying
to uncover the nature of the relationship between the three ideals J, I, J : I. One of the
difficulties is that I is no longer prime for all values of r. We conjecture that the bound \binom{r+1}{2} ≤ m − 3 is the exact obstruction to the primeness of I (one direction is proved here).
The second conjectured statement is that the ring R/J is Cohen–Macaulay if and only if
r = m − 2 (the “only if” part is proved here).
For a more precise discussion we refer to the statements of the various theorems. As
a guide, the main results are contained in Theorem 2.3, Theorem 2.4, Theorem 2.5, Theorem 2.6, Theorem 3.1, Theorem 3.7 and Theorem 3.16.
Unless otherwise stated, we assume throughout that the ground field has characteristic
zero.
1 Preliminaries
The aim of this section is to review some notions and tools from ideal theory and its role
in birational maps, including homaloidal ones.
1.1 Review of ideal invariants
Let (R, m) denote a Noetherian local ring and its maximal ideal (respectively, a standard graded ring over a field and its irrelevant ideal). For an ideal I ⊂ m (respectively, a homogeneous ideal I ⊂ m), the special fiber of I is the ring R(I)/mR(I), where R(I) denotes the Rees algebra of I. Note that this is an algebra over the residue field of R. The (Krull) dimension of this algebra is called the analytic spread of I and is denoted ℓ(I).
Quite generally, given ideals J ⊂ I in a ring R, J is said to be a reduction of I if there
exists an integer n ≥ 0 such that I n+1 = JI n . An ideal shares the same radical with all
its reductions. Therefore, they share the same set of minimal primes and have the same
codimension. A reduction J of I is called minimal if no ideal strictly contained in J is a
reduction of I. The reduction number of I with respect to a reduction J is the minimum
integer n such that JI n = I n+1 . It is denoted by redJ (I). The (absolute) reduction number
of I is defined as red(I) = min{redJ (I) | J ⊂ I is a minimal reduction of I}. If R/m is
infinite, then every minimal reduction of I is minimally generated by exactly ℓ(I) elements. In particular, in this case, every reduction of I contains a reduction generated by ℓ(I) elements.
The following invariants are related in the case of (R, m):
ht(I) ≤ ℓ(I) ≤ min{µ(I), dim(R)},
where µ(I) stands for the minimal number of generators of I. If the rightmost inequality turns out to be an equality, one says that I has maximal analytic spread. By and large, the ideals considered in this work will have dim R ≤ µ(I), hence being of maximal analytic spread means in this case that ℓ(I) = dim R.
Suppose now that R is a standard graded over a field k and I is minimally generated
by n + 1 forms of same degree s. In this case, I is more precisely given by means of a free
graded presentation
R(−(s + 1))^ℓ ⊕ ⊕_{j≥2} R(−(s + j)) --ϕ--> R(−s)^{n+1} −→ I −→ 0
for suitable shifts. Of much interest in this work is the value of ℓ. The image of R(−(s + 1))^ℓ by ϕ is the linear part of ϕ – often denoted ϕ1. It is easy to see that the rank of ϕ1 does not depend on the particular minimal system of generators of I. Thus, we call it the linear rank of I. One says that I has maximal linear rank provided its linear rank is n (= rank(ϕ)). Clearly, the latter condition is trivially satisfied if ϕ = ϕ1, in which case I is said to have linear presentation (or is linearly presented).
Note that ϕ is a graded matrix whose columns generate the (first) syzygy module of I
(corresponding to the given choice of generators) and a syzygy of I is an element of this
module – that is, a linear relation, with coefficients in R, on the chosen generators. In this
context, ϕ1 can be taken as the submatrix of ϕ whose entries are linear forms of the standard
graded ring R. Thus, the linear rank is the rank of the matrix of the linear syzygies.
Recall the notion of the initial ideal of a polynomial ideal over a field. For this one has
to introduce a monomial order in the polynomial ring. Given such a monomial order, if
f ∈ R we denote by in(f ) the initial term of f and by in(I) the ideal generated by the
initial terms of the elements of I – this ideal is called the initial ideal of I.
There are many excellent sources for the general theory of monomial ideals and Gröbner
bases; we refer to the recent book [8].
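As a toy illustration of these notions (the ideal and the choice of computer algebra system are arbitrary and not taken from the paper), the following sympy snippet computes a lex Gröbner basis of a small ideal and the leading terms of its elements, which generate the initial ideal:

from sympy import symbols, groebner, LT

x, y, z = symbols('x y z')
gens = [x*y - z**2, x**2 - y*z]          # an arbitrary toy ideal, not from the paper
G = groebner(gens, x, y, z, order='lex')

print("Groebner basis (lex):", G.exprs)
# The leading terms of the basis elements generate the initial ideal in(I).
print("initial terms:", [LT(g, x, y, z) for g in G.exprs])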
1.2 Homaloidal polynomials
Let k be an arbitrary field. For the purpose of the full geometric picture we may assume k to be algebraically closed. We denote by P^n = P^n_k the n-th projective space, where n ≥ 1.
A rational map F : P^n ⇢ P^m is defined by m + 1 forms f = {f0 , . . . , fm } ⊂ R := k[x] = k[x0 , . . . , xn ] of the same degree d ≥ 1, not all null. We often write F = (f0 : · · · : fm ) to underscore the projective setup. Any rational map can, without loss of generality, be brought to satisfy the condition that gcd{f0 , . . . , fm } = 1 (in the geometric terminology, F has no fixed part). The common degree d of the fj is the degree of F and the ideal IF = (f0 , . . . , fm ) is called the base ideal of F.
The image of F is the projective subvariety W ⊂ P^m whose homogeneous coordinate ring is the k-subalgebra k[f ] ⊂ R after degree renormalization. Write S := k[f ] ≅ k[y]/I(W ), where I(W ) ⊂ k[y] = k[y0 , . . . , ym ] is the homogeneous defining ideal of the image in the embedding W ⊂ P^m.
We say that F is birational onto its image if there is a rational map G : P^m ⇢ P^n, say, G = (g0 : · · · : gn ), with the residue classes of the gi 's modulo I(W ) not all vanishing, satisfying the relation
(g0 (f ) : · · · : gn (f )) = (x0 : · · · : xn ).   (1)
(See [4, Definition 2.10 and Corollary 2.12].) When m = n and F is a birational map of P^n, we say that F is a Cremona map. An important class of Cremona maps of P^n comes from the so-called polar maps, that is, rational maps whose coordinates are the partial derivatives of a homogeneous polynomial f in the ring R = k[x0 , . . . , xn ]. More precisely:
Let f ∈ k[x] = k[x0 , . . . , xn ] be a homogeneous polynomial of degree d ≥ 2. The ideal
J = Jf = (∂f /∂x0 , . . . , ∂f /∂xn ) ⊂ k[x]
is the Jacobian (or gradient) ideal of f . The rational map Pf := (∂f /∂x0 : · · · : ∂f /∂xn ) is called the polar map defined by f . If Pf is birational one says that f is homaloidal.
We note that the image of this map is the projective subvariety on the target whose
homogeneous coordinate ring is given by the k-subalgebra k[∂f /∂x0 , . . . , ∂f /∂xn ] ⊂ k[x]
up to degree normalization. The image of Pf is called the polar variety of f .
The following birationality criterion will be largely used in this work:
Theorem 1.1. ([4, Theorem 3.2]) Let F : P^n ⇢ P^m be a rational map, given by m + 1 forms f = {f0 , . . . , fm } of a fixed degree. If dim(k[f ]) = n + 1 and the linear rank of the base ideal IF is m (maximal possible) then F is birational onto its image.
It is a classical result in characteristic zero that the Krull dimension of the k-algebra
k[f ] coincides with the rank of the Jacobian matrix of f = {f0 , . . . , fm }. Assuming that the
ground field has characteristic zero, the above criterion says that if the Hessian determinant
h(f ) does not vanish and the linear rank of the gradient ideal of f is maximal, then f is
homaloidal.
There are many sources for the basic material in these preliminaries; we refer to [4].
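As a concrete (and classical) instance of this criterion, one can check with a computer algebra system that for the determinant of the generic 3 × 3 matrix the partial derivatives are the cofactors and the Hessian determinant does not vanish. The sympy sketch below is only an illustration under these assumptions; a dedicated system such as Macaulay2 or Singular would be the more natural tool for the ideal-theoretic computations in this paper.

import sympy as sp

m = 3
X = sp.Matrix(m, m, lambda i, j: sp.Symbol(f"x{i + 1}{j + 1}"))
variables = list(X)              # the 9 entries, read row by row
f = X.det()

# Each partial derivative of the determinant is the corresponding signed cofactor.
assert all(sp.simplify(sp.diff(f, X[i, j]) - X.cofactor(i, j)) == 0
           for i in range(m) for j in range(m))

# Non-vanishing of the Hessian determinant: a nonzero value at a single point
# already certifies that the Hessian determinant is not the zero polynomial.
H = sp.hessian(f, variables)
point = dict(zip(variables, [2, 3, 5, 7, 11, 13, 17, 19, 23]))   # arbitrary point with det != 0
print("Hessian determinant at the test point:", H.subs(point).det())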
2 Degeneration by cloning
Quite generally, let (ai,j )1≤i,j≤m denote an m × m matrix where ai,j is either a variable on a ground polynomial ring R = k[x] over a field k or ai,j = 0. One of the simplest specializations consists in going modulo a binomial of the shape ai,j − ai′,j′ , where ai,j ≠ ai′,j′ and ai′,j′ ≠ 0. The idea is to replace a certain nonzero entry ai′,j′ (a variable) by a different entry ai,j , keeping ai,j as it was – somewhat like cloning a variable and keeping the mold.
It seems natural to expect that the new cloning place should matter as far as the finer
properties of the ideals are concerned.
The main object of this section is the behavior of the generic square matrix under this
sort of cloning degeneration. We will use the following notation for the generic square
matrix:
G :=
  | x1,1      x1,2      . . .  x1,m−1      x1,m    |
  | x2,1      x2,2      . . .  x2,m−1      x2,m    |
  |  ..        ..               ..           ..    |                (2)
  | xm−1,1    xm−1,2    . . .  xm−1,m−1    xm−1,m  |
  | xm,1      xm,2      . . .  xm,m−1      xm,m    |
where the entries are independent variables over a field k.
Now, we distinguish essentially two sorts of cloning: the one that replaces an entry xi′,j′ by another entry xi,j such that i ≠ i′ and j ≠ j′, and the one in which this replacement has either i = i′ or j = j′.
In the situation of the second kind of cloning, by an obvious elementary operation and
renaming of variables (which is possible since the original matrix is generic), one can assume
that the matrix is the result of replacing a variable by zero on a generic matrix. Such a
procedure is recurrent, letting several entries being replaced by zeros. The resulting matrix
along with its main properties will be studied in Section 3.
Therefore, this section will deal exclusively with the first kind of cloning – which, for
emphasis, could be refereed to as diagonal cloning. Up to elementary row/column operations
and renaming of variables, we assume once for all that the diagonally cloned matrix has the
6
shape
x1,1
x2,1
..
.
x1,2
x2,2
..
.
...
...
x1,m−1
x2,m−1
..
.
x1,m
x2,m
..
.
GC :=
,
...
xm−1,1 xm−1,2 . . . xm−1,m−1 xm−1,m
xm,1
xm,2 . . . xm,m−1 xm−1,m−1
(3)
where the entry xm−1,m−1 has been cloned as the (m, m)-entry of the m × m generic matrix.
The terminology may help us remind of the close interchange between properties associated
to one or the other copy of the same variable in its place as an entry of the matrix. The
question as to whether there is a similar theory for repeatedly many diagonal cloning steps
has not been taken up in this work, but it looks challenging.
Throughout Ir (M ) denotes the ideal generated by the r-minors of a matrix M .
The following notion has been largely dealt with in [6].
An m × n matrix M of linear forms (m ≤ n) over a ground field is said to be s-generic
for some integer 1 ≤ s ≤ m if even after arbitrary invertible row and column operations,
any s of its entries are linearly independent over the field. It was proved in [6] that the
m × n generic matrix over a field is m-generic; in particular, this matrix is s-generic for any
1 ≤ s ≤ m.
Most specializations of the generic matrix fail to be s-generic for s ≥ 2 due to their very
format. However many classical matrices are 1-generic.
One of the important consequences of s-genericity is the primeness of the ideal of rminors for certain values of r. With an appropriate adaptation of the original notation, the
part of the result we need reads as follows:
Proposition 2.1. ([6, Theorem 2.1]) One is given integers 1 ≤ w ≤ v. Let G denote the
w × v generic matrix over a ground field. Let M 0 denote a w × v matrix of linear forms
in the entries of G and let further M denote a w × v matrix of linear forms in the entries
of M 0 . Let there be given an integer k ≥ 1 such that M 0 is a (w − k)-generic matrix and
such that the vector space spanned by the entries of M has codimension at most k − 1 in
the vector space spanned by the entries of M 0 . Then the ideal Ik+1 (M ) is is prime.
The following result originally appeared in [7] in a different context. It has independently
been obtained in [12, Proposition 5.3.1] in the presently stated form.
Proposition 2.2. Let M denote a square matrix over R = k[x0 , . . . , xn ] such that every
entry is either 0 or xi for some i = 1, . . . , n. Then, for each i = 0, . . . , n, the partial
derivative of f = det M with respect to xi is the sum of the (signed) cofactors of the entry
xi , in all its slots as an entry of M .
2.1
Polar behavior
Throughout we set f := det(GC) and let J = Jf ∈ R denote the gradient ideal of f , i.e.,
the ideal generated by the partial derivatives of f with respect to the variables of R, the
polynomial ring in the entries of GC over the ground field k.
Theorem 2.3. Consider the diagonally cloned matrix as in (3). One has:
7
(i) f is irreducible.
(ii) The Hessian determinant h(f ) does not vanish.
(iii) The linear rank of the gradient ideal of f is m2 − 2 (maximum possible).
(iv) f is homaloidal.
Proof. (i) We induct on m, the initial step of the induction being subsumed in the general
step.
Expanding f according to Laplace rule along the first row yields
f = x1,1 ∆1,1 + g,
where ∆1,1 is the determinant of the (m − 1) × (m − 1) cloned generic matrix obtained from
GC by omitting the first row and the first column. Note that both ∆1,1 and g belong to the
subring k [x1,2 , . . . , . . . , xm,m−1 ]. Thus, in order to show that f is irreducible it suffices to
prove that it is a primitive polynomial (of degree 1) in k [x1,2 , . . . , xm,m−1 ] [x1,1 ].
Now, on one hand, ∆1,1 is the determinant of a cloned matrix of the same type, hence it is
irreducible by the inductive hypothesis. Therefore, it is enough to see that ∆1,1 is not factor
of g. For this, one verifies their initial terms in the revlex monomial order, noting that they
are slightly modified from the generic case: in(∆1,1 ) = (x2,m−1 x3,m−2 · · · xm−1,2 )xm−1,m−1
and in(g) = in(f ) = (x1,m−1 x2,m−2 · · · xm−1,1 )xm−1,m−1 .
An alternative more sophisticated argument is to use that the ideal P of submaximal
minors has codimension 4, as shown independently in Theorem 2.4 (i) below. Since P =
(J, ∆m,m ), as pointed out in the proof of the latter proposition, then J has codimension
at least 3. Therefore, the ring R/(f ) is locally regular in codimension one, so it must be
normal. But f is homogeneous, hence irreducible.
(ii) Set v := {x1,1 , x2,2 , x3,3 , . . . , xm−1,m−1 } for the set of variables along the main diagonal. We argue by a specialization procedure, namely, consider the ring endomorphism
ϕ of R by mapping any variable in v to itself and by mapping any variable off v to zero.
Clearly, it suffices to show that by applying ϕ to the entries of the Hessian matrix H(f )
the resulting matrix M has a nonzero determinant.
Note that the partial derivative of f with respect to any xi,i ∈ v coincides with the
signed cofactor of xi,i , for i ≤ m − 2, while for i = m − 1 it is the sum of the respective
signed cofactors of xi,i corresponding to its two appearances.
By expanding each such cofactor according to the Leibniz rule it is clear that it has a
unique (nonzero) term whose support lies in v and, moreover, the remaining terms have
degree at least 2 in the variables off v.
Now, for xi,j ∈
/ v, without exception, the corresponding partial derivative coincides with
the signed cofactor. By a similar token, the Leibniz expansion of this cofactor has no term
whose support lies in v and has exactly one nonzero term of degree 1 in the variables off v.
By the preceding observation, applying ϕ to any second partial derivative of f will return
zero or a monomial supported on the variables in v. Thus, the entries of M are zeros or
monomials supported on the variables in v.
To see that the determinant of the specialized matrix M is nonzero, consider the Jacobian matrix of the set of partial derivatives {fv | v ∈ v} with respect to the variables in v.
8
Let M0 denote the specialization of this Jacobian matrix by ϕ, considered as a corresponding
submatrix of M. Up to permutation of rows and columns of M, we may write
M0 N
M=
,
P M1
where M1 has exactly one nonzero entry on each row and each column. Now, by the
way the second partial derivatives of f specialize via ϕ, as explained above, one must have
N = P = 0. Therefore, det(M) = det(M0 ) det(M1 ), so it remains to prove the nonvanishing
of these two subdeterminants.
Now the first block M0 is the Hessian matrix of the form
!
m−2
Y
g :=
xi,i x2m−1,m−1 .
i=1
This is the product of the generators of the k-subalgebra
k[x1,1 , . . . , xm−2,m−2 , x2m−1,m−1 ] ⊂ k[x1,1 , . . . , xm−2,m−2 , xm−1,m−1 ].
Clearly these generators are algebraically independent over k, hence the subalgebra is isomorphic to a polynomial ring itself. Then g becomes the product of the variables of a
polynomial ring over k. This is a classical homaloidal polynomial, hence we are done for
the first matrix block.
As for M1 , since it has exactly one nonzero entry on each row and each column, its
determinant does not vanish.
(iii) Let fi,j denote the xi,j -derivative of f and let ∆j,i stand for the (signed) cofactor
of the (i, j)th entry of the matrix GC.
The classical Cauchy cofactor formula
GC · adj(GC) = adj(GC) · GC = det(GC) Im
(4)
yields by expansion a set of linear relations involving the (signed) cofactors of GC:
m
X
xi,j ∆j,k = 0, for 1 ≤ i ≤ m − 1 and 1 ≤ k ≤ m − 2 (k 6= i)
(5)
j=1
m−1
X
xm,j ∆j,k + xm−1,m−1 ∆m,k = 0, for 1 ≤ k ≤ m − 2
(6)
j=1
m
X
j=1
m
X
xi,j ∆j,i =
m
X
xi+1,j ∆j,i+1 , for 1 ≤ i ≤ m − 3
(7)
j=1
xi,k ∆j,i = 0, for 1 ≤ j ≤ m − 3 and j < k ≤ j + 2.
(8)
i=1
m
X
xi,m−1 ∆m−2,i = 0.
i=1
9
(9)
m−1
X
xi,m ∆m−2,i + xm−1,m−1 ∆m−2,m = 0.
(10)
i=1
Since fi,j = ∆j,i for every (i, j) 6= (m − 1, m − 1) and the above relations do not involve
∆m−1,m−1 or ∆m,m then they give linear syzygies of the partial derivatives of f .
In addition, (4) yields the following linear relations:
m−1
X
xm−1,j ∆j,m + xm−1,m ∆m,m = 0
(11)
xi,m ∆m−1,i + xm−1,m ∆m−1,m−1 + xm−1,m−1 ∆m−1,m = 0
(12)
j=1
m−2
X
i=1
m−1
X
xi,m−1 ∆m,i + xm,m−1 ∆m,m = 0
(13)
xm,j ∆j,m−1 + xm,m−1 ∆m−1,m−1 + xm−1,m−1 ∆m,m−1 = 0
(14)
i=1
m−2
X
j=1
m
X
xm−1,j ∆j,m−1 + xm−1,m−1 ∆m−1,m−1 =
xm−2,j ∆j,m−2
(15)
j=1
j=1,j6=m−1
m−1
X
m
X
xm,j ∆j,m + xm−1,m−1 ∆m,m =
j=1
m
X
xm−2,j ∆j,m−2 .
(16)
j=1
As fm−1,m−1 = ∆m−1,m−1 + ∆m,m , adding (11) to (12), (13) to (14) and (15) to (16),
respectively, outputs three new linear syzygies of the partial derivatives of f . Thus one has
a total of (m − 2)(m − 1) + (m − 3) + 2(m − 2) + 3 = m2 − 2 linear syzygies of J.
It remains to show that these are independent.
For this we order the set of partial derivatives fi,j in accordance with the following
ordered list of the entries xi,j :
x1,1 , x1,2 , . . . , x1,m
x2,1 , x2,2 , . . . , x2,m
xm−1,1 , xm,1
xm−1,2 , xm,2
...
...
xm−2,1 , xm−2,2 . . . , xm−2,m ,
xm−1,m−1 , xm,m−1
xm−1,m
Here we traverse the entries along the matrix rows, left to right, starting with the first row
and stopping prior to the row having xm−1,m−1 as an entry; then start traversing the last
two rows along its columns top to bottom, until exhausting all variables.
We now claim that, ordering the set of partial derivatives fi,j in this way, the above sets
of linear relations can be grouped into the following block matrix of linear syzygies:
10
ϕ1
0
..
.
0
ϕ2
..
.
0
...
..
.
...
0m−1
2
0m−1
2
0m
2
0m
2
...
...
..
.
0m−1
2
0m−1
2
..
.
0m
2
0m
2
0m−1
1
01m−1
01m−1
0m
1
0m
1
0m
1
..
.
ϕm−2
...
...
...
0m
2
0m
2
..
.
0m
2
0m
2
ϕ32
022
..
.
022
022
ϕ43
..
.
022
022
.
...
...
ϕm−1
m−2
022
ϕm
m−1
...
...
...
0m
1
0m
1
0m
1
021
021
021
021
021
021
...
...
...
021
021
021
021
021
021
..
xm−1,m
2xm−1,m−1
0
xm,m−1
0
2xm−1,m−1
xm−1,m−1
xm,m−1
xm−1,m
.
Let us explain the blocks of the above matrix:
• ϕ1 is the matrix obtained from the transpose GC t of GC by omitting the first column;
• ϕ2 , . . . , ϕm−2 are each a copy of GC t (up to column permutation);
• ϕr+1
=
r
xm−1,r
xm,r
xm−1,r+1
xm−1,m−1
, r = 2, . . . , m − 2; ϕm
=
m−1
xm,r+1)
xm,m−1
xm−1,m
;
xm−1,m−1
• Each 0 under ϕ1 is an m × (m − 1) block of zeros and each 0 under ϕi is an m × m
block of zeros, for i = 2, . . . , m − 3 ;
• 0cr denotes an r × c block of zeros, for r = 1, 2 and c = 2, m − 1, m.
Next we justify why these blocks make up (linear) syzygies.
First, as already observed, the relations (5) through (16) yield linear syzygies of the
partial derivatives of f . Setting k = 1 in the relations (5) and (6) the resulting expressions
can be written, respectively, as
Pm−1
Pm
j=1 xm,j f1,j + xm−1,m−1 f1,m = 0.
j=1 xi,j f1,j = 0 for all i = 2, . . . , m − 1 and
Ordering the set of partial derivatives fi,j as explained before, the coefficients of these
relations form the first matrix above
x2,1
x3,1
...
xm−1,1
xm,1
..
..
..
..
.
.
...
.
.
ϕ1 :=
x2,m−1 x3,m−1 . . . xm−1,m−1 xm,m−1
x2,m
x3,m . . . xm−1,m xm−1,m−1
Note that ϕ1 coincides indeed with the submatrix of GC t obtained by omitting its first
column.
Getting ϕk , for k = 2, . . . ., m − 2, is similar, namely, use again relations (5) and (6)
retrieving a submatrix of GC t excluding the kth column and replacing it with an extra
column that comes from relation (7) taking i = k − 1.
Continuing, for each r = 2, . . . , m−2 the block ϕr+1
comes from the relation (8) (setting
r
m
j = r − 1) and ϕm−1 comes from the relations (9) and (10). Finally, the lower right corner
11
3 × 3 block of the matrix of linear syzygies comes from the three last relations obtained by
adding (11) to (12), (13) to (14) and (15) to (16).
This proves the claim about the large matrix above. Counting through the sizes of
the various blocks, one sees that this matrix is (m2 − 1) × (m2 − 2). Omitting its first row
obtains a block-diagonal submatrix of size (m2 −2)×(m2 −2), where each block has nonzero
determinant. Thus, the linear rank of J attains the maximum.
(iv) By (ii) the polar map of f is dominant. Since the linear rank is maximum by (iii),
one can apply Theorem 1.1 to conclude that f is homaloidal.
2.2
The ideal of the submaximal minors
In this part we study the nature of the ideal of submaximal minors (cofactors) of GC. As
previously, J denotes the gradient ideal of f = det(GC)
Theorem 2.4. Consider the matrix GC as in (3), with m ≥ 3. Let P := Im−1 (GC) denote
the ideal of (m − 1)-minors of GC. Then
(i) P is a Gorenstein prime ideal of codimension 4.
(ii) J has codimension 4 and P is the minimal primary component of J in R.
(iii) J defines a double structure on the variety defined by P , admitting one single embedded
component, the latter being a linear space of codimension 4m − 5.
(iv) Letting Di,j denote the cofactor of the (i, j)-entry of the generic matrix (yi,j )1≤i,j≤m ,
2
2
the (m − 1)-minors ∆ = {∆i,j } of GC define a birational map Pm −2 99K Pm −1 onto
a hypersurface of degree m − 1 with defining equation Dm,m − Dm−1,m−1 and inverse
e := {Di,j | (i, j) 6= (m, m)} modulo
map defined by the linear system spanned by D
Dm,m − Dm−1,m−1 .
(v) J is not a reduction of P .
Proof. (i) Let P denote the ideal of submaximal minors of the fully generic matrix (2).
The linear form xm,m − xm−1,m−1 is regular on the corresponding polynomial ambient and
also modulo P as the latter is prime and generated in degree m − 1 ≥ 2. Since P is a
Gorenstein ideal of codimenson 4 by a well-known result (“Scandinavian complex”), then
so is P .
In order to prove primality, we first consider the case m = 3 which seems to require a
direct intervention. We will show more, namely, that R/P is normal – and, hence a domain
as P is a homogeneous ideal. Since R/P is a Gorenstein ring, it suffices to show that R/P
is locally regular in codimension one. For this consider the Jacobian matrix of P :
12
x2,2 −x2,1
x2,3
0
0
x2,3
x3,2 −x3,1
x2,2
0
0
x
2,2
0
0
0
0
0
0
0
−x1,2 x1,1
0
−x2,1 −x1,3
0
x1,1
−x2,2
0
−x1,3 x1,2
0
0
0
0
−x3,1
0
x1,1
0
−x3,2
0
x1,2
0
0
x3,2 −x3,1
0
0
x2,2
x2,1 −x3,1
0
0
2x2,2 −x3,2
0
0
0
0
0
0
−x1,2 x1,1
−x1,3
0
.
0
−x1,3
−x2,2 x2,1
−x2,3
0
0
−x2,3
Direct inspection yields that the following pure powers are (up to sign) 4-minors of this
matrix: x41,3 , x42,1 , x42,2 , x42,3 , x43,1 and x43,2 . Therefore, the ideal of 4-minors of the Jacobian
matrix has codimension at least 6 = 4 + 2, thus ensuring that R/P satisfies (R1 ).
For m ≥ 4 we apply Proposition 2.1 with M 0 = G standing for an m × m generic matrix
and M = GC the cloned generic matrix as in the statement. In addition, we take k = m − 2,
so k + 1 = m − 1 is the size of the submaximal minors. Since m ≥ 4 and the vector space
codimension in the theorem is now 1, one has 1 ≤ m − 3 = k − 1 as required. Finally, the
m × m generic matrix is m-generic as explained in [6, Examples, p. 548]; in particular, it
is 2 = m − (m − 2)-generic. The theorem applies to give that the ideal P = Im−1 (GC) is
prime.
(ii) By item (i), P is a prime ideal of codimension 4. We first show that cod(J : P ) > 4,
which ensures that the radical of the unmixed part of J has no primes of codimension < 4
and coincides with P – in particular, J will turn out to have codimension 4 as stated.
For this note that P = (J, ∆m,m ), where ∆m,m denotes the cofactor of the (m, m)th
entry. From the cofactor identity we read the following relations:
m
X
xk,j ∆j,m = 0, for k = 1, . . . , m − 1;
j=1
m−1
X
xm,j ∆m,j + xm−1,m−1 ∆m,m =
j=1
m
X
x1,j ∆j,1 ;
j=1
m
X
xi,k ∆m,i = 0, for k = 1, . . . , m − 1;
i=1
Since the partial derivative fi,j of f with respect to the variable xi,j is the (signed)
cofactor ∆j,i , with the single exception of the partial derivative with respect to the variable
xm−1,m−1 , we have that the entries of the mth column and of the mth row all belong to
the ideal (J : ∆m,m ) = (J : P ). In particular, the codimension of (J : P ) is at least 5, as
needed.
In addition, since P has codimension 4 then J : P 6⊂ P . Picking an element a ∈ J : P \P
shows that PP ⊂ JP . Therefore P is the unmixed part of J.
To prove that P is actually the entire minimal primary component of J we argue as
follows. In addition, also note that P = (J, ∆m−1,m−1 ), where ∆m−1,m−1 denotes the
cofactor of the (m − 1, m − 1)th entry. From the cofactor identity we read the following
relations:
13
m
X
xk,j ∆j,m−1 + xk,m−1 ∆m−1,m−1 = 0, for k = 1, . . . , m, (k 6= m − 1);
j=1,j6=m−1
m
X
xm−1,j ∆j,m−1 + xm−1,m−1 ∆m−1,m−1 =
m
X
x1,j ∆j,1 ;
j=1
j=1,j6=m−1
m
X
xi,k ∆m−1,i + xm−1,k ∆m−1,m−1 = 0, for k = 1, . . . , m (k 6= m − 1);
i=1,i6=m−1
Then as above we have that the entries of the (m − 1)th column and the (m − 1)th row
belong to the ideal (J : ∆m−1,m−1 ) = (J : P ).
From this, the variables of the last two rows and columns of GC multiply P into J. As
is clear that P is contained in the ideal generated by these variables it follows that P 2 ⊂ J
(of course, this much could eventually be verified by inspection). Therefore, the radical of
J – i.e., the radical of the minimal primary part of J – is P .
(iii) By (ii), P is the minimal component of a primary decomposition of J. We claim
that J : P is generated by the 4m − 5 entries of GC off the upper left submatrix of size
(m − 2) × (m − 2). Let I denote the ideal generated by these entries.
As seen in the previous item, I ⊂ J : P . We now prove the reverse inclusion by writing
I = I 0 + I 00 as sum of two prime ideals, where I 0 (respectively, I 00 ) is the ideal generated
by the variables on the (m − 1)th row and on the (m − 1)th column of GC (respectively, by
the variables on the mth row and on the mth column of GC). Observe that the cofactors
∆i,j ∈ I 00 for all (i, j) 6= (m, m) and ∆i,j ∈ I 0 for all (i, j) 6= (m − 1, m − 1). Clearly, then
∆m,m ∈
/ I 00 and ∆m−1,m−1 ∈
/ I 0.
Let b ∈ J : P = J : ∆m,m , say,
X
b ∆m,m =
ai,j fi,j + afm−1,m−1
(i,j)6=(m−1,m−1)
X
=
ai,j ∆j,i + a(∆m−1,m−1 + ∆m,m )
(17)
(i,j)6=(m−1,m−1)
for certain ai,j , a ∈ R. Then
X
(b − a)∆m,m =
ai,j ∆j,i + a∆m−1,m−1 ∈ I 00 .
(i,j)6=(m−1,m−1)
Since I 00 is a prime ideal and ∆m,m ∈
/ I 00 , we have c := b − a ∈ I 00 . Substituting for a = b − c
in (17) gives
(−b + c)∆m−1,m−1 =
X
ai,j ∆j,i − c∆m,m ∈ I 0 .
(i,j)6=(m−1,m−1)
By a similar token, since ∆m−1,m−1 ∈
/ I 0 , then −b + c ∈ I 0 . Therefore
b = c − (−b + c) ∈ I 00 + I 0 = I,
14
as required.
In particular, J : P is a prime ideal which is necessarily an associated prime of prime
of R/J. As pointed out, P ⊂ J : P , hence J : P is an embedded prime of R/J. Moreover,
this also gives P 2 ⊂ J, hence J defines a double structure on the irreducible variety defined
by P .
Let Q denotes the embedded component of J with radical J : P and let Q0 denote the
intersection of the remaining embedded components of J. From J = P ∩ Q ∩ Q0 we get
J : P = (Q : P ) ∩ (Q0 : P ),
√
in particular, passing to radicals, J : P ⊂ Q0 . This shows that Q is the unique embedded component of codimension ≤ 4m − 5 and the corresponding geometric component is
supported on a linear subspace.
(iv) By Theorem 2.3 (ii), the polar map is dominant, i.e., the partial derivatives of f
generate a subalgebra of maximum dimension (= m2 − 1). Since J ⊂ P is an inclusion
in the same degree, the subalgebra generated by the submaximal minors has dimension
m2 − 1 as well. On the other hand, since P is a specialization from the generic case, it is
linearly presented. Therefore, by Theorem 1.1 the minors define a birational map onto a
hypersurface.
To get the inverse map and the defining equation of the image we proceed as follows.
Write ∆j,i for the cofactor of the (i, j)-entry of GC. For the image it suffices to show
that Dm,m − Dm−1,m−1 belongs to the kernel of the k-algebra map
ψ : k[yi,j | 1 ≤ i, j ≤ m] → k[∆] = k[∆i,j | 1 ≤ i, j ≤ m],
as it is clearly an irreducible polynomial.
Consider the following well-known matrix identity
adj(adj(GC)) = f m−2 · GC,
(18)
where adj(M ) denotes the transpose matrix of cofactors of a square matrix M . On the
right-hand side matrix we obviously see the same element as its (m − 1, m − 1)-entry as its
(m, m)-entry, namely, f m−2 xm−1,m−1 .
As to the entries of the matrix on the left-hand side, for any (k, l), the (k, l)-entry
is Dl,k (∆). Indeed, the (k, l)-entry of adj(adj(GC)) is the cofactor of the entry ∆l,k in
the matrix adj(GC). Clearly, this cofactor is the (l, k)-cofactor Dl,k of the generic matrix
(yi,j )1≤i,j≤m evaluated at ∆.
Therefore, we get (Dm,m − Dm−1,m−1 )(∆) = 0, as required.
Finally, by the same token, from (18) one deduces that the inverse map has coordinates
e
D := {Di,j | (i, j) 6= (m, m)} modulo Dm,m − Dm−1,m−1 .
(v) It follows from (iv) that the reduction number of a minimal reduction of P is m − 2.
Thus, to conclude, it suffices to prove that P m−1 6⊂ JP m−2 .
We will show that ∆m−1 ∈ P m−1 does not belong to JP m−2 .
Recall from previous passages that J is generated by the cofactors
∆l,h , with (l, h) 6= (m − 1, m − 1), (l, h) 6= (m, m)
15
and the additional form ∆m,m + ∆m−1,m−1 .
If ∆m−1 ∈ JP m−2 , we can write
X
∆m−1
m,m =
∆l,h Ql,h (∆) + (∆m,m + ∆m−1,m−1 )Q(∆)
(19)
(l,h)6=(m−1,m−1)
(l,h)6=(m,m)
where Ql,h (∆) and Q(∆) are homogeneous polynomial expressions of degree m − 2 in the
set
∆ = {∆i,j | 1 ≤ i ≤ j ≤ m}
of the cofactors (generators of P ).
Clearly, this gives a polynomial relation of degree m − 1 on the generators of P , so the
corresponding form of degree m − 1 in k[yi,j |1 ≤ i ≤ j ≤ m] is a scalar multiple of the
defining equation H := Dm,m − Dm−1,m−1 obtained in the previous item. Note that H
contains only squarefree terms. We now argue that such a relation is impossible.
Namely, observe that the sum
X
∆l,h Ql,h (∆)
(l,h)6=(m−1,m−1)
(l,h)6=(m,m)
m−2
does not contain any nonzero terms of the form α∆m−1
m,m or β∆m−1,m−1 ∆m,m . In addition,
if these two terms appear in (∆m,m + ∆m−1,m−1 )Q(∆) they must have the same scalar
coefficient, say, c ∈ k. Bring the first of these to the left-hand side of (19) to get a polynomial
m−1
relation of P having a nonzero term (1 − c)ym−1,m−1
. If c 6= 1, this is a contradiction due
to the squarefree nature of H.
On the other hand, if c = 1 then we still have a polynomial relation of P having a nonzero
m−2 . Now, if m > 3 this is again a contradiction vis-à-vis the nature of H
term ym−1,m−1 ym,m
as the nonzero terms of the latter are squarefree monomials of degree m − 1 > 3 − 1 = 2.
Finally, if m = 3 a direct checking shows that the monomial ym−1,m−1 ym,m cannot be the
support of a nonzero term in H. This concludes the statement.
2.3
The dual variety
An interesting question in general is whether f is a factor of its Hessian determinant h(f )
with multiplicity ≥ 1. If this is the case, then f is said in addition to have the expected
multiplicity (according to Segre) if its multiplicity as a factor of h(f ) is m2 −2−dim V (f )∗ −
1 = m2 − 3 − dim V (f )∗ = cod(V (f )∗ ) − 1, where V (f )∗ denotes the dual variety to the
hypersurface V (f ) (see [2]).
Theorem 2.5. Let f = det(GC). Then dim V (f )∗ = 2m − 2. In particular, the expected
multiplicity of f as a factor of h(f ) is m(m − 2) − 1.
Proof. We develop the argument in two parts:
1. dim V (f )∗ ≥ 2m − 2.
We draw on a result of Segre ([17]), as transcribed in [16, Lemma 7.2.7], to wit:
dim V (f )∗ = rank H(f )
16
(mod f ) − 2,
where H(f ) denotes the Hessian matrix of f . It will then suffice to show that H(f ) has a
submatrix of rank at least 2m modulo f . Consider the submatrix
x2,1
...
x2,m−1
x2,m
..
..
..
..
.
.
.
.
ϕ=
xm−1,1 . . . xm−1,m−1 xm−1,m
xm,1 . . . xm,m−1 xm−1,m−1
of GC obtained by omitting the first row. The maximal minors of this (m − 1) × m matrix
generate a codimension 2 ideal. On the other hand, ϕ has the property F1 for its Fitting
ideals. Indeed, the Fitting ideals of its generic predecessor are prime ideals, hence the
Fitting ideals of ϕ are specializations thereof, and as such each has the same codimension
as the respective predecessor. It follows from this that the ideal is of linear type ([9]). In
particular the m maximal minors of ϕ are algebraically independent over k, hence their
Jacobian matrix with respect to the entries of ϕ has rank m.
Let A0 denote an m × m submatrix thereof with det(A0 ) 6= 0. Now, f ∈ (x1,1 , . . . , x1,m )
while det(A0 ) ∈
/ (x1,1 , . . . , x1,m ). This means that det(A0 ) 6= 0 even modulo f .
Write the Hessian matrix H(f ) in the block form
0 A
,
H(f ) =
At B
where the first block row is the Jacobian matrix of the maximal minors of ϕ in the order
of the variables starting with {x1,1 , . . . , x1,m } and At denotes the transpose of A. Since A0
above is a submatrix of A of rank m modulo f , then h(f ) = det H(f ) has rank at least 2m
modulo f .
2. dim V (f )∗ ≤ 2m − 2.
Here we focus on the homogeneous coordinate ring of V (f )∗ , namely, the following
k-subalgebra of k[x]/(f )
k[∂f /∂x1,1 , . . . , ∂f /∂xm,m−1 ]/(f ) ' k[yi,j | 1 ≤ i ≤ j ≤ m, (i, j) 6= (m, m)]/P,
(20)
for a suitable prime ideal P , the homogeneous defining ideal of V (f )∗ . The isomorphism is
an isomorphism of graded k-algebras induced by the assignment yi,j 7→ ∂f /∂xi,j , (i, j) 6=
(m, m).
Claim 1. The homogeneous defining ideal P of V (f )∗ contains the ladder determinantal
ideal generated by the 2 × 2 minors of the following matrix
y1,1
y1,2
...
y1,m−2
y1,m
y1,m−1
y2,1
y2,2
...
y2,m−2
y2,m
y2,m−1
..
..
.
.
.
.
.
.
.
.
.
.
.
L=
.
ym−2,1 ym−2,2 . . . ym−2,m−2 ym−2,m ym−2,m−1
ym−1,1 ym−1,2 . . . ym−1,m−2 ym−1,m
ym,1
ym,2 . . . ym,m−2
To see this, we first recall that, for (i, j) 6= (m − 1, m − 1), the partial derivative ∂f /∂xi,j
coincides with the cofactor of xi,j in GC. Since ym−1,m−1 and ym,m are not entries of L,
17
the polynomial relations of the partial derivatives possibly involving the variables which are
entries of L are exactly relations of the cofactors other than the cofactor of xm−1,m−1 .
Thus, we focus on these factors, considering the following relation afforded by the cofactor identity:
adj(GC) · GC ≡ 0 (mod f ).
(21)
Further, for each pair of integers i, j such that 1 ≤ i < j ≤ m let Fij denote the 2 × m
submatrix of adj(GC) consisting of the ith and jth rows. In addition, let C stand for the
m × (m − 1) submatrix of GC consisting of its m − 1 leftmost columns. Then (46) gives the
relations
Fij C ≡ 0 (mod f ),
for all 1 ≤ i < j ≤ m. From this, since the rank of C modulo (f ) is obviously still m − 1,
the one of every Fi,j is necessarily 1. This shows that every 2 × 2 minor of adj(GC) vanishes
modulo (f ). Therefore, every such minor that does not involve either one of the cofactors
∆m−1,m−1 and ∆m,m gives a 2 × 2 minor of L vanishing on the partial derivatives. Clearly,
by construction, we obtain this way all the 2 × 2 minors of L. This proves the claim.
Now, since I2 (L) is a ladder determinantal ideal on a suitable generic matrix it is a
Cohen-Macaulay prime ideal (see [15] for primeness and [10] for Cohen–Macaulayness).
Moreover, its codimension is m(m−2)−2 = m2 −3−(2m−1) as follows from an application
to this case of the general principle in terms of maximal chains as described in [10, Theorem
4.6 and Corollary 4.7].
Note that by I2 (L) we understand the ideal generated by the 2 × 2 minors of L in the
polynomial ring A := k[I1 (L)1 ] spanned by the entries of L. Clearly, its extension to the full
polynomial ring B := k[yi,j | 1 ≤ i ≤ j ≤ m, (i, j) 6= (m, m)], is still prime of codimension
m(m − 2) − 2. Thus, P contains a prime subideal of codimension m(m − 2) − 2.
In addition, direct checking shows that the following two quadrics
g := y1,1 ym,m−1 − y1,m−1 ym,1 , h := y1,1 ym−1,m−1 − ym−1,1 y1,m−1 − ym,1 y1,m
belong to P and, furthermore:
Claim 2. {g, h} is a regular sequence modulo I2 (L)B.
It suffices to prove the assertion locally at the powers of y1,1 since I2 (L)B is prime and
y1,1 ∈
/ I2 (L)B. Now, locally at y1,1 and at the level of the ambient rings, f, g are like the
variables ym,m−1 and ym−1,m−1 , hence one has
By1,1 /(g, h) ' Ay1,1 .
Setting I = I2 (L) for lighter reading, it follows that
(By1,1 /IBy1,1 )/(g, h)(By1,1 /IBy1,1 ) ' By1,1 /(g, h, I)
' (By1,1 /(g, h))/((g, h, I)/(g, h))
' Ay1,1 /IAy1,1 .
But clearly, By1,1 /IBy1,1 ' (Ay1,1 /IAy1,1 )[ym−1,m−1 , ym,m−1 ]y1,1 , hence
dim Ay1,1 /IAy1,1 = dim By1,1 /IBy1,1 − 2.
18
This shows that
dim(By1,1 /IBy1,1 )/(g, h)(By1,1 /IBy1,1 ) = dim By1,1 /IBy1,1 − 2
and since By1,1 /IBy1,1 is Cohen–Macaulay, this proves the claim.
Summing up we have shown that P has codimension at least m(m−2)−2+2 = m(m−2).
Therefore, dim V (f )∗ = m2 − 2 − cod V (f )∗ ≤ m2 − 2 − m(m − 2) = 2m − 2, as was to be
shown.
Theorem 2.6. Let f = det(GC). Then f is a factor of its Hessian determinant h(f ) and
has the expected multiplicity as such.
Proof. Fix the notation of Theorem 2.4 (iv) and its proof in the next subsection. As seen
there, the identity
adj(adj(GC)) = det(GC)m−2 · GC
e :=
shows that the inverse map to the birational map defined by the minors ∆ has D
{Di,j | (i, j) 6= (m, m)} as its set of coordinates. By (1) one has the equality
e ◦ ∆(x) = f m−2 · (x).
D
(22)
Applying the chain rule to (22) yields
e
Θ(D)(∆)
· Θ(∆) = f m−2 · I +
1
f m−3 · (x)t · Grad(f ),
m−2
(23)
where Θ(S) denotes the Jacobian matrix of a set S of polynomials, Grad(f ) stands for the
row vector of the partial derivatives of f and I is the identity matrix of order m − 1.
Write
B
e
Θ(D)(∆) = A U , Θ(∆) =
,
V
e with
where U designates the column vector of the partial derivatives of the elements of D
respect to ym,m further evaluated at ∆, while V stands for the row vector of the partial
derivatives of ∆m,m with respect to the x-variables.
Letting E denote the elementary matrix obtained from I by adding the mth row to the
(m − 1)th row, one further has
!
H
f
e
Θ(D)(∆)
· E −1 = A0 U , E · Θ(∆) =
V
where Hf stands for the Hessian matrix of f with respect to the x-variables.
Applying these values to (23) obtains
A0 · Hf = f m−2 · I + A
(24)
1
where A = m−2
f m−3 · (x)t · Grad(f ) − U · V .
On the other hand, note that (∂Dij /∂yr,s ) (∆) is the (m−2)-minor of adj(GC) ommiting
the ith and rth rows and the jth and sth columns. Therefore, by a classical identity
19
(∂Dij /∂yr,s ) (∆) = f m−3 (∆i,j ∆r,s − ∆i,s ∆r,j ).
(25)
As a consequence, U = f m−3 · U 0 where the entries of U 0 are certain 2-minors of GC. We
now get
A0 · Hf = f m−3 (f · I + A0 )
where A0 =
1
m−2
· (x)t · Grad(f ) − U 0 · V. Thus,
det(A0 ) · det(Hf ) = f (m−3)(m
2 −1)
det(f · I + A0 )
(26)
Set n := m2 −1, [n] := {1, . . . , n} and let ∆[n]\{i1 ,...,ik } denote the principal (n−k)-minor
of A0 with rows and columns [n] \ {i1 , . . . , ik }. Also note that A0 has rank at most 2 since
it is a sum of matrices of rank 1. Using a classical formula for the determinant of a sum
where one of the summands is a diagonal matrix (see, e.g., [18, Lemma 2.3]), one has
det (f · I + A0 ) = det(A0 ) + f
∆[n]−{i}
!
X
X
+ . . . + f n−1
∆[n]\{i1 ,...,in−1 } + f n
1≤i1 <...<in−1 ≤n
i
X
= f n−2
∆[n]\{i1 ,...,in−2 } + f n−1 · trace(A0 ) + f n .
1≤i1 <...<in−2 ≤n
Setting G :=
X
∆[n]\{i1 ,...,in−2 } + f · trace(A0 ) + f 2 and substituting in (26)
1≤i1 <...<in−2 ≤n
gives
det(A0 ) · det(Hf ) = f (m−3)(m
2 −1)+m2 −3
· G = f (m
2 −1)(m−2)−2
·G
(27)
Suppose for a moment that G 6= 0. In this case, clearly det(A0 ) 6= 0 and a degree
argument shows that some positive power of f divides h(f ). Indeed, by construction,
2 −1)(m−2)−2
deg(det(A0 )) = (m2 − 1)(m(m − 3) + 2) < m((m2 − 1)(m − 2) − 2) = deg(f (m
).
Therefore, we are left with proving that G does not vanish.
Note that the vanishing of G would imply in particular that f is an integral element
over the k-subalgebra generated by the entries of A0 . This would possibly be forbidden if
the latter could be proved to be integrally closed. Due to the difficulty of this verification
we resort to a direct inspection. For this, note that
trace(A0 ) =
=
1
trace((x)t · Grad(f )) − trace(U 0 V )
m−2
X
∂∆m,m
m
f−
(aij am,m − ai,m amj )
m−2
∂xij
ij
i,j6=m
=
=
m
f − amm
m−2
X
aij
ij
∂∆m,m
−
∂xij
X
ij
i,j6=m
aim amj
∂∆m,m
∂xij
X
∂∆m,m
m
f − (m − 1)am,m ∆m,m +
aim amj
.
m−2
∂xij
ij
i,j6=m
20
Setting qij := aij amm − aim amj , one has
X
∆[n]\{i1 ,...,in−2 }
=
1≤i1 <...<in−2 ≤n
X
det
(ij),(rs)
=
X
det
(ij),(rs)
=
X
det
(ij),(rs)
∂f
mm
xij ∂x
− qij ∂∆
∂xij
ij
∂f
mm
xij ∂x
− qij ∂∆
∂xrs
rs
∂f
mm
xrs ∂x
− qrs ∂∆
∂xij
ij
∂f
mm
xrs ∂x
− qrs ∂∆
∂xrs
rs
!
∂f
∂f
xij
xrs
−qij
−qrs
xij
xrs
aim amj
arm ams
· det
∂xij
∂∆mm
∂xij
· det
Thus, we can write G = G1 + G2 , where
X
X
∂∆m,m
xij aim amj
G1 = f
aim amj
det
· det
+
xrs arm ams
∂xij
ij
(ij),(rs)
i,j6=0
and
G2 = f
m
· f − (m − 1)am,m ∆m,m
m−2
!
∂xrs
∂∆mm
∂xrs
∂f
∂xij
∂∆mm
∂xij
∂f
∂xij
∂∆mm
∂xij
∂f
∂xrs
∂∆mm
∂xrs
!
∂f
∂xrs
∂∆mm
∂xrs
!
+ f 2.
Inspecting the summands of G1 and G2 one sees that the degree of xm−1,m−1 in G1 is at
most 3, while that of xm−1,m−1 in G2 is 4. This shows that G 6= 0.
Remark 2.7. (1) The factor h(f )/f m(m−2)−1 coincides with the determinant of the 2 × 2
submatrix with rows m − 1, m and columns m − 1, m.
(2) There is a mistaken assertion in the proof of [13, Proposition 3.2 (a)] to the effect that
the dual variety to the generic determinant f is the variety of submaximal minors. This is of
course nonsense since the dual variety is the variety of the 2×2 minors. This wrong assertion
in loc. cit. actually serves no purpose in the proof since (a) follows simply from the cofactor
relation. At the other end, this nonsense reflects on the second assertion of item (c) of the
same proposition which is therefore also flawed. The right conclusion is that the multiplicity
of f as a factor of its Hessian has indeed the expected multiplicity, since the variety of the
2 × 2 minors has codimension (m − 1)2 and hence m2 − 1 − (m − 1)2 = m(m − 2) as desired.
Likewise, [13, Conjecture 3.4 (a)] should be read as affirmative without exception.
(3) We have been kindly informed by J. Landsberg that either Theorem 2.5 or Theorem 2.6, or perhaps both – which were stated as a conjecture in the first version of this
prepint posted on the arXiv – have been obtained in [11] by geometric means. For our misfortune, we were not able to trace in the mentioned work the precise statements expressing
the above contents.
3
Degeneration by zeros
Recall from the previous section the cloning degeneration where an entry is cloned along the
same row or column of the original generic matrix. As mentioned before, up to elementary
operations of rows and/or columns the resulting matrix has a zero entry. A glimpse of this
first status has been tackled in [13, Proposition 4.9 (a)].
21
This procedure can be repeated to add more zeros. Aiming at a uniform treatment of
all these cases, we will fix integers m, r with 1 ≤ r ≤ m − 2 and consider the following
degeneration of the m × m generic matrix:
x1,1
..
.
xm−r,1
xm−r+1,1
xm−r+2,1
..
.
xm−1,1
xm,1
...
x1,m−r
..
.
x1,m−r+1
..
.
x1,m−r+2
..
.
...
...
xm−r,m−r
xm−r+1,m−r
xm−r+2,m−r
..
.
xm−1,m−r
xm−r,m−r+1
xm−r+1,m−r+1
xm−r+2,m−r+1
..
.
xm−1,m−r+1
...
xm,m−r
0
...
...
...
...
...
x1,m
..
.
xm−r,m−r+2
xm−r+1,m−r+2
xm−r+2,m−r+2
..
.
0
...
...
...
...
.
..
...
x1,m−1
..
.
xm−r,m−1
xm−r+1,m−1
0
..
.
0
xm−r,m
0
0
..
.
0
0
...
0
0
(28)
Assuming m is fixed in the context, let us denote the above matrix by DG(r).
3.1
Polar behavior
Theorem 3.1. Let R = k[x] denote the polynomial ring in the nonzero entries of DG(r),
with 1 ≤ r ≤ m − 2, let f := det DG(r) and let J ⊂ R denote the gradient ideal of f . Then:
(a) f is irreducible.
(b) J has maximal linear rank.
r+1
2
(c) The homogeneous coordinate ring of the polar variety of f in Pm −( 2 )−1 is a Gorenstein ladder determinantal ring of dimension m2 − r(r + 1); in particular, the analytic
spread of J is m2 − r(r + 1).
Proof. (a) Expanding the determinant by Laplace along the first row, we can write f =
x1,1 ∆1,1 + g, where ∆1,1 is the cofactor of x1,1 . Clearly, both ∆1,1 and g belong to the
polynomial subring omitting the variable x1,1 . Thus, in order to show that f is irreducible
it suffices to prove that it is a primitive polynomial (of degree 1) in k[x1,2 , . . . , xm,m−r ][x1,1 ].
In other words, we need to check that no irreducible factor of ∆1,1 is a factor of g.
We induct on m ≥ r + 2. If m = r + 2 then ∆1,1 = x2,m x3,m−1 · · · xm−1,3 xm,2 , while the
initial term of g in the revlex monomial order is
in(g) = in(f ) = x1,m x2,m−1 · · · xm,1 .
Thus, assume that m > r + 2. By the inductive step, ∆1,1 is irreducible being the determinant of an (m − 1) × (m − 1) matrix of the same kind (same r). But deg(∆1,1 ) = deg(g) − 1.
Therefore, it suffices to show that ∆1,1 is not a factor of g. Supposing it were, we would
get that f is multiple of ∆1,1 by a linear factor – this is clearly impossible.
Once more, an alternative argument is to use that the ideal J has codimension 4, as will
be shown independently in Theorem 3.7 (b). Therefore, the ring R/(f ) is locally regular in
codimension at least one, so it must be normal. But f is homogeneous, hence irreducible.
(b) The proof is similar to the one of Theorem 2.3 (iii), but there is a numerical diversion
and, besides, the cases where r > m − r − 1 and r ≤ m − r − 1 keep slight differences.
22
Let fi,j denote the xi,j -derivative of f and let ∆j,i stand for the (signed) cofactor of xi,j
on DG(r). We first assume that r > m − r − 1. The Cauchy cofactor formula
DG(r) · adj(DG(r)) = adj(DG(r)) · DG(r) = det(DG(r)) Im
yields by expansion the following three blocks of linear relations involving the (signed)
cofactors of DG(r):
Pm
for 1 ≤ i ≤ m − r, 1 ≤ k ≤ m − r (k 6= i)
Pj=1 xi,j ∆j,k = 0
m−l
for 1 ≤ l ≤ r, 1 ≤ k ≤ m − r
j=1 xm−r+l,j ∆j,k = 0
Pm x ∆ − Pm x
j=1 i+1,j ∆j,i+1 = 0 for 1 ≤ i ≤ m − r − 1
j=1 i,j j,i
(29)
with m2 − rm − 1 such relations;
( Pm
for 1 ≤ j ≤ m − r, 1 ≤ k ≤ m − r (k 6= j)
i=1 xi,j ∆k,i = 0
Pm−l
i=1 xi,m−r+l ∆k,i = 0 for 1 ≤ l ≤ 2r − m + 1, 1 ≤ k ≤ m − r
(30)
with (m − r)(m − r − 1) + (m − r)(2r − m + 1) = r(m − r) such relations; and
( Pm−l
Pm
i=1 xi,m−r+l ∆m−r+l,i −
Pm−k
i=1 xi,m−r+k ∆m−r+l,i =
j=1 xi,1 ∆1,i
= 0 for 1 ≤ l ≤ r − 1
for 1 ≤ l ≤ r − 2, l + 1 ≤ k ≤ r − 1
0
(31)
with r(r − 1)/2 such relations.
Similarly, when r ≤ m−r−1, the classical Cauchy cofactor formula outputs by expansion
three blocks of linear relations involving the (signed) cofactors of DG. Here, the first and
third blocks are, respectively, exactly as the above ones, while the second one requires a
modification due to the inequality reversal; namely, we get
( Pm
i=1
Pm
xi,j ∆k,i = 0 for 1 ≤ j ≤ r + 1, 1 ≤ k ≤ r (k 6= j)
i=1 xi,j ∆k,i
= 0 for 1 ≤ j ≤ r, r + 1 ≤ k ≤ m − r,
(32)
with r(m − r) such relations (as before).
Since fi,j coincides with the signed cofactor ∆j,i , any of the above relations gives a linear
syzygy of the partial derivatives
of f . Thus one has a total of m2 − rm − 1 + r(m − r) +
r+1
2
r(r − 1)/2 = m − 2 − 1 linear syzygies of J.
It remains to show that these are independent.
For this, we adopt the same strategy as in the proof of Theorem 2.3 (iii), whereby we
list the partial derivatives according to the following ordering of the nonzero entries: we
traverse the first row from left to right, then the second row in the same way, and so on
until we reach the last row with no zero entry; thereafter we start from the first row having
a zero and travel along the columns, from left to right, on each column from top to bottom,
till we all nonzero entries are counted.
23
Thus, the desired ordering is depicted in the following scheme, where we once more used
arrows for easy reading:
x1,1 , x1,2 , . . . , x1,m
x2,1 , x2,2 , . . . , x2,m
xm−r+1,1 , . . . , xm,1
xm−r+1,2 , . . . , xm,2
...
xm−r,1 , xm−r,2 . . . , xm−r,m
...
xm−r+1,m−r , xm−r+2,m−r , . . . , xm,m−r
xm−r+1,m−r+1 , . . . , xm−1,m−r+1
xm−r+1,m−r+2 , . . . , xm−2,m−r+2
...
xm−r+1,m−1
xm−r+1,m−2 , xm−r+2,m−2
With this ordering the above linear relations translate into linear syzygies collected in the
following block matrix
M=
ϕ1
0
..
.
...
ϕ2
..
.
0
0m−1
r
0m−1
r
..
.
0
0m
r
0m
r
..
.
0m−1
r
0m−1
r−1
0m−1
r−2
..
.
0m
r
0m
r−1
0m
r−2
..
.
0m−1
1
0m
1
...
..
.
...
...
...
ϕm−r
0m
r
0m
r
..
...
.
...
0m
r
. . . 0m
r−1
. . . 0m
r−2
..
..
.
.
...
,
..
.
0m
1
ϕ1r
0rr
..
.
ϕ2r
..
.
..
0rr
0rr−1
0rr−2
..
.
0rr
0rr−1
0rr−2
..
.
...
...
...
..
.
ϕr
0rr−1
0rr−2
..
.
Φ1
r−1
0r−2
..
.
Φ2
..
.
0r1
0r1
...
0r1
01r−1
0r−2
1
.
(m−r)
..
.
...
Φr−1
where:
• ϕ1 is the matrix obtained from the transpose DG(r)t of DG(r) by omitting the first
column
• ϕ2 , . . . , ϕm−r are each a copy of DG(r)t (up to column permutation);
• When r > m − r − 1, setting d = 2r − m + 1, for i = 1, . . . , m − r one has that ϕir is
the r × r minor omitting the ith column of the following submatrix of DG(r):
xm−r+1,1
..
.
xm−d,1
xm−d+1,1
..
.
xm−1,1
xm,1
. . . xm−r+1,m−r xm−r+1,m−r+1 xm−r+1,m−r+2 . . . xm−r+1,r+1
..
..
..
..
..
..
.
.
.
.
.
.
. . . xm−d,m−r
xm−d,m−r+1
xm−d,m−r+2 . . . xm−d,r+1
.
. . . xm−d+1,m−r xm−d+1,m−r+1 xm−d+1,m−r+2 . . .
0
..
..
..
..
..
..
.
.
.
.
.
.
. . . xm−1,m−r
xm−1,m−r+1
0
...
0
...
xm,m−r
0
24
0
...
0
When r ≤ m − r − 1, consider the following submatrix of DG(r):
xm−r+1,1 . . . xm−r+1,r xm−r+1,r+1
..
..
..
..
.
.
.
.
.
...
xm,r
xm,r+1
m,1
Then, for i = 1, . . . , r (respectively, for i = r + 1, . . . , m − r) ϕir denotes the r × r
submatrix obtained by omitting the ith column (respectively, the last column).
• Each 0 under ϕ1 is an m × (m − 1) block of zeros and each 0 under ϕi is an m × m
block of zeros for i = 2, . . . , m − r − 1 ;
• 0cl denotes an l × c block of zeros.
• Φi is the following (r − i) × (r − i) submatrix of DG(r):
xm−r+1,m−r+i xm−r+1,m−r+i+1 . . . xm−r+1,m−1
xm−r+2,m−r+i xm−r+2,m−r+i+1 . . .
0
Φi =
.
..
..
..
..
.
.
.
.
xm−i,m−r+i
0
...
0
Next we justify why these blocks make up (linear) syzygies. As already explained, the
relations in (29), (30) and (31) yield linear syzygies of the partial derivatives
of f . Setting
Pm
k = 1 in the first two relations of (29), the latter can be written as j=1 xi,j f1,j = 0, for
P
i = 2, . . . , m − r, and m−l
j=1 xm−r+l,j f1,j = 0, for all l = 1, . . . , r. By ordering the set of
partial derivatives fi,j as explained before, the coefficients of these relations become the
entries of the submatrix ϕ1 of DG(r)t obtained by omitting its first column, as mentioned
above, namely:
x2,1
..
.
x2,m−r
x2,m−r+1
x2,m−r+2
..
.
x2,m−1
x2,m
...
xm−r,1
..
.
...
...
xm−r,m−r
. . . xm−r,m−r+1
. . . xm−r,m−r+2
..
...
.
...
...
xm−r,m−1
xm−r,m
xm−r+1,1
..
.
xm−r+2,1
..
.
xm−r+1,m−r
xm−r+1,m−r+1
xm−r+1,m−r+2
..
.
xm−r+2,m−r
xm−r+2,m−r+1
xm−r+2,m−r+2
..
.
xm−r+1,m−1
0
0
0
...
xm−1,1
..
.
...
...
xm−1,m−r
. . . xm−1,m−r+1
...
0
..
.
..
.
...
...
0
0
xm,1
..
.
xm,m−r
0
0
..
.
0
0
Getting ϕk , for k = 2, . . . , m − r, is similar, namely, we use again the first two relations
in the block (29) retrieving the submatrix of DG(r)t excluding the kth column and replacing
it with an extra column that comes from the last relation in (29) by taking i = k − 1.
25
Continuing, for each i = 1, . . . , m − r the block ϕir comes from the relations in the
blocks (30), if r > m − r − 1, or (32), if r ≤ m − r − 1, by setting k = i. Finally, for each
i = 1, . . . , r − 1, the block Φi comes from the relations in (31) by setting l = i.
This proves the claim about the large matrix above.
Counting
through the sizes of the
2 − r+1 −1). Omitting its first
various blocks, one sees that this matrix is (m2 − r+1
)×(m
2
2
row obtains a square block-diagonal submatrix where each block has nonzero determinant.
Thus, the linear rank of J attains the maximum.
r+1
2
(c) Note that the polar map can be thought as the map of Pm −( 2 )−1 to itself defined
by the partial derivatives of f . As such, the polar variety will be described in terms of
defining equations in the original x-variables.
Let L = L(m, r) denote the set of variables in DG(r) lying to the left and above the
stair-like polygonal in Figure 1 and let Im−r (L) stand for the ideal generated by the (m −
r) × (m − r) minors of DG(r) involving only the variables in L.
Figure 1: stair-like polygonal.
Since L can be extended to a fully generic matrix of size (m − 1) × (m − 1), the ring
K[L]/Im−r (L) is one of the so-called ladder determinantal rings.
Claim: The homogeneous defining ideal of the image of the polar map of f contains
the ideal Im−r (L).
Let xi,j denote a nonzero entry of DG(r). Since the nonzero entries of the matrix are
independent variables, it follows easily from the Laplace expansion along the ith row that
the xi,j -derivative fi,j of f coincides with the (signed) cofactor of xi,j , heretofore denoted
∆j,i .
Given integers 1 ≤ i1 < i2 < . . . < im−r ≤ m − 1, consider the following submatrix of
the transpose matrix of cofactors:
26
∆i1 ,1
∆i2 ,1
..
.
∆i1 ,2
∆i2 ,2
..
.
∆i1 ,3
∆i2 ,3
..
.
···
···
F =
···
∆im−r ,1 ∆im−r ,2 ∆im−r ,3 · · ·
Letting
C=
x
∆i1 ,m−im−r +(m−r−1)
∆i2 ,m−im−r +(m−r−1)
..
.
x1,im−r +2
..
.
xm−r,im−r +1
xm−r+1,im−r +1
..
.
xm−r,im−r +2
xm−r+1,im−r +2
..
.
xm−im−r +(m−r−1),im−r +1
.
∆im−r ,m−im−r +(m−r−1)
x1,im−r +1
..
.
m−im−r +(m−r−2),im−r +1
···
···
···
···
···
xm−im−r +(m−r−2),im−r +2 · · ·
0
···
x1,m−1
..
.
xm−r,m−1
xm−r+1,m−1
..
.
0
0
x1,m
..
.
xm−r,m
0
,
..
.
0
0
the cofactor identity adj(DG(r)) · DG(r) = det(DG(r))Im yields the relation
F · C = 0.
Since the columns of C are linearly independent, it follows that the rank of F is at most
m − im−r + (m − r − 1) − (m − im−r ) = (m − r) − 1. In other words, the maximal minors
of the following matrix
xi1 ,1
xi1 ,2
xi1 ,3
···
xi1 ,m−im−r +(m−r−1)
xi2 ,1
xi2 ,2
xi2 ,3
···
xi2 ,m−im−r +(m−r−1)
.
..
..
..
..
.
.
.
···
.
xim−r ,1 xim−r ,2 xim−r ,3 · · · xim−r ,m−im−r +(m−r−1)
all vanish on the partial derivatives of f , thus proving the claim.
Claim: The codimension of the ideal Im−r (L(m, r)) is at least r+1
2 .
Let us note that the codimension of this ladder ideal could be obtained by the general
principle described in [10] (see also [3]), as done in the proof of Theorem 2.5. However, in
this structured situation we prefer to give an independent argument.
For this we induct with the following inductive hypothesis: let 1 ≤ i ≤ r − 1; then for
any (m − i) × (m − i) matrix of the form DG(r − i), the ideal Im−i−(r−i) (L(m − i, r − i))
has codimension at least r−i+1
. Note that m − i − (r − i) = m − r, hence the size of the
2
inner minors does not change in the inductive step.
We descend with regard to i; thus, the induction step starts out at i = r − 1, hence
r − i = 1 and m − i = m − (r − 1) = m − r + 1 and since m − r ≥ 2 by assumption,
then 3 ≤ m − (r − 1) ≤ m − 1. Rewriting n := m − r + 1, we are in the situation of an
n × n (n ≥ 3) matrix of the form DG(1). Clearly, then the ladder ideal In−1 (L(n − 1, 1)) is
a principal ideal generated by the (n − 1) × (n − 1) minor of DG(1) of the first n − 1 rows
and columns. Therefore, its codimension is 1 as desired.
To construct a suitable inductive precedent, let Le denote the set of variables that are
e the ideal
to the left and above the stair-like polygonal in Figure 2 and denote Im−r (L)
27
e Note
generated by the (m − r) × (m − r) minors of DG(r) involving only the variables in L.
e
that L is of the form L(m − 1, r − 1) relative to a matrix of the form DG(r − 1)). Clearly,
e it too is a ladder determinantal ideal on a suitable (m − 2) × (m − 2) generic
Im−r (L)
matrix; in particular, it is a Cohen-Macaulay prime ideal (see [15] for primeness and [10]
e is at
for Cohen–Macaulayness).
By the inductive hypothesis, the codimension of Im−r (L)
r
least 2 .
Figure 2: Sub-stair-like inductive.
Note that Le is a subset of L, hence there is a natural ring surjection
S :=
e
k[L]
k[L]
e k[L]
=
[L \ L]
e
e
Im−r (L)
Im−r (L) k[L]
Im−r (L)
Since 2r + r = r+1
2 , it suffices to exhibit r elements of Im−r (L) forming a regular
e
sequence on the ring S := k[L]/Im−r (L)k[L].
Consider the matrices
x1,1
...
x1,m−r−1
x1,m−i
..
..
..
..
.
.
.
.
(33)
xm−r−1,1 . . . xm−r−1,m−r−1
xm−r−1,m−i
xm−r−1+i,1 . . . xm−r−1+i,m−r−1 xm−r−1+i,m−i
for i = 1, . . . , r. Let ∆i ∈ Im−r (L) denote the determinant of the above matrix, for i =
1, . . . , r.
The claim is that ∆ := {∆1 , . . . , ∆r } is a regular sequence on S.
Let δ denote the (m − r − 1)-minor in the upper left corner of (33). Clearly, δ is a regular
element on S as its defining ideal is a prime ideal generated in degree m − r. Therefore, it
suffices to show that the localized sequence
∆δ := {(∆1 )δ , . . . , (∆r )δ }
is a regular on Sδ . On the other hand, since S is Cohen-Macaulay, it is suffices to show that
dim Sδ /∆δ Sδ = dim Sδ − r.
28
Write X0 := {xm−r,m−1 , xm−r+1,m−2 , . . . , xm−2,m−r+1 , xm−1,m−r }. Note that, for every
i = 1, . . . , r, one has (∆i )δ = xm−r−1+i,m−i + (1/δ)Γi , with xm−r−1+i,m−i ∈ X0 and Γi ∈
k[L\X0 ]. The association xm−r−1+i,m−i 7→ −(1/δ)Γi therefore defines a ring homomorphism
k[L]δ /(∆δ ) = (k[X 0 ][L \ X 0 ])δ /(∆δ ) ' k[L \ X 0 ])δ
This entails a ring isomorphism
Sδ
k[L \ X0 ]δ
'
.
e
∆δ Sδ
(Im−r (L))k[L
\ X0 ]δ
e δ = dim Sδ − r
Thus, dim Sδ /∆δ Sδ = dim k[L]δ − r − codim (Im−r (L))
e
Therefore, codim (Im−r (L)) is at least codim (Im−r (L)) + r = r+1
2 .
In order to show that Im−r (L) is the homogeneous defining ideal
of the polar variety
r+1
it suffices to show that the latter has codimension at most 2 . Since the dimension
of the homogeneous coordinate ring of the polar variety coincides with the rank of the
Hessian matrix of f , it now suffices to show that the latter is at least dim R − r+1
=
2
r+1
2 − r(r + 1).
=
m
−
m2 − r+1
2
2
For this, we proceed along the same line of the proof of Theorem 2.3 (ii). Namely, set
X := {xi,j | i + j = r + 2, r + 3, . . . , 2m − r} and consider the set of partial derivatives of
f with respect to the variables in X. Let M denote the Jacobian matrix of these partial
derivatives with respect to the variables in X. Observe that M is a submatrix of size
(m2 − r(r + 1)) × (m2 − r(r + 1)) of the Hessian matrix. We will show that det(M ) 6= 0.
Set v := {x1,m , x2,m−1 , . . . , xm,1 } ⊂ X, the set of variables along the main anti-diagonal
of DG(r).
As already pointed out, the partial derivative of f with respect to any xi,j ∈ X coincides
with the signed cofactor of xi,j . By expanding the cofactor of an entry in the set v one sees
that there is a unique (nonzero) term whose support lies in v and the remaining terms have
degree ≥ 2 in the variables off v. Similarly, the cofactor of a variable outside v has no term
whose support lies in v and has exactly one (nonzero) term of degree 1 in the variables off
v. In fact, if i + j 6= m + 1, one finds
∆j,i = xm+1−j,m+1−i (x1,m · · · xi,m−i+1
\ · · · xm−j+1,j
\ · · · xm,1 )
+ terms of degree at least 2 off v,
where the term inside the parenthesis has support in v.
Consider the ring endomorphism ϕ of R that maps any variable in v to itself and any
variable off v to zero. By the preceding observation, applying ϕ to any second partial
derivative of f involving only the variables of X will return zero or a monomial supported
f denote the resulting specialized matrix of M . Thus, any of
on the variables in v. Let M
its entries is either zero or a monomial supported on the variables in v.
f) is nonzero. For this, consider the Jacobian matrix of the set
We will show that det(M
of partial derivatives {fv : v ∈ v} with respect to the variables in v. Let M0 denote the
f.
specialization of this Jacobian matrix by ϕ considered as a corresponding submatrix of M
f
Up to permutation of rows and columns of M , we may write
29
f=
M
M0 N0
N1 M1
,
for suitable M1 . Now, by the way the second partial derivatives of f specialize via ϕ as
f) = det(M0 ) det(M1 ), so
explained above, one must have N0 = N1 = 0. Therefore, det(M
it remains to prove the nonvanishing of these two subdeterminants. Now the first block
is the Hessian matrix of the form g being taken as the product of the entries in the main
anti-diagonal of the matrix DG(r). By a similar argument used in the proof of Theorem 2.3
(ii), one has that g is a well-known homaloidal polynomial, hence we are done for the first
matrix block. As for the second block, by construction it has exactly one nonzero entry on
each row and each column. Therefore, it has a nonzero determinant.
To conclude the assertion of this item it remains to argue that the ladder determinantal
ring in question is Gorenstein. For this we use the criterion in [3, Theorem, (b) p. 120].
By the latter, we only need to see that the inner corners of the ladder depicted in Figure 1
have indices (a, b) satisfying the equality a + b = m − 1 + (m − r) − 1 = 2m − r − 2, where
the ladder is a structure in an (m − 1) × (m − 1) matrix.
This completes the proof of this item. The supplementary assertion on the analytic
spread of J is clear since the dimension of the latter equals the dimension of the k-subalgebra
generated by the partial derivatives.
Remark 3.2. One notes that the codimension of the polar variety in its embedding coincides with the codimension of DG(r) in the fully generic matrix of the same size, viewed as
vector spaces of matrices over the ground field k.
3.2
The ideal of the submaximal minors
We will need a couple of lemmas, the first of which is a non-generic version of [1, Theorem
10.16 (b)]:
Lemma 3.3. Let M be a square matrix with entries either variables over a field k or zeros,
such that det(M ) 6= 0. Let R denote the polynomial ring over k on the nonzero entries of
M and let S ⊂ R denote the k-subalgebra generated by the submaximal minors. Then the
extension S ⊂ R is algebraic at the level of the respective fields of fractions.
The proof is the same as the one given in [1, Theorem 10.16 (b)].
The second lemma was communicated to us by Aldo Conca, as a particular case of a
more general setup:
Lemma 3.4. The submaximal minors of the generic square matrix are a Gröbner base in
the reverse lexicographic order and the initial ideal of any minor is the product of its entries
along the main anti-diagonal.
This result is the counterpart of the classical result in the case of the lexicographic order,
where the initial ideals are the products of the entries along the main diagonals. In both
versions, the chosen term order should respect the rows and columns of M .
The content of the third lemma does not seem to have been noted before:
30
Lemma 3.5. Let G denote a generic m × m matrix and let X denote the set of entries none
of which belongs to the main anti-diagonal of a submaximal minor. Then X is a regular
sequence modulo the ideal generated by the submaximal minors in the polynomial ring of the
entries of G over a field k.
Proof. As for easy visualization,
•
•
•
•
•
•
•
•
•
•
•
•
..
..
..
.
.
.
•
xm−2,2 xm−2,3
x
m−1,1 xm−1,2 xm−1,3
xm,1
xm,2
•
X is the set of bulleted entries below (for m ≥ 6):
...
•
•
•
x1,m−1 x1,m
...
•
•
x2,m−2 x2,m−1 x2,m
...
•
x3,m−3 x3,m−2 x3,m−1
•
...
x4,m−4
x4,m−3 x4,m−2
•
•
..
..
..
..
..
.
.
.
.
.
. . . xm−2,m−4
•
•
•
•
...
•
•
•
•
•
...
•
•
•
•
•
(A similar picture can be depicted for m ≤5).
Clearly, the cardinality of X is 2 m−1
= (m − 1)(m − 2). Fix an ordering of the
2
elements {a1 , . . . , a(m−1)(m−2) } of X . By Lemma 3.4 and the assumption that every ai
avoids the initial ideal of any submaximal minor, it follows that the initial ideal of the
ideal (a1 , . . . , ai , P) is (a1 , . . . , ai , in(P)). Clearly, ai+1 is not a zero divisor modulo the
latter ideal, and hence, by a well known procedure, it is neither a zero divisor modulo
(a1 , . . . , ai , P).
In the subsequent parts we will relate the gradient ideal J ⊂ R of the determinant of
the matrix DG(r) in (3.3) to the ideal Im−1 (GC) ⊂ R of its submaximal minors. As an
easy preliminary, we observe that, for any nonzero entry xi,j of DG(r), since the nonzero
entries of the matrix are independent variables, it follows easily from the Laplace expansion
along the ith row that the xi,j -derivative fi,j of f coincides with the (signed) cofactor of
xi,j . In particular, one has J ⊂ Im−1 (GC) throughout the entire subsequent discussion and
understanding the conductor J : Im−1 (GC) will be crucial.
Proposition 3.6. Let DG(r) as in (3.3) denote our basic degenerate matrix, with 1 ≤ r ≤
m − 2. For every 0 ≤ j ≤ m, consider the submatrices Mj and Nj of DG(r) consisting of
its last j columns and the its last j rows, respectively. Write I := Im−1 (GC) ⊂ R for the
ideal of (m − 1)-minors of DG(r) and J for the gradient ideal of f := det(DG(r)). Then
Ij (Nj ) · Ir−j (Mr−j ) ⊂ J : I for every 0 ≤ j ≤ r.
Proof. For a fixed 1 ≤ j ≤ r, we write the matrix DG(r) and its adjoint adj(DG(r)) in the following block form:
\[
DG(r)=\begin{pmatrix}\widetilde{N}_j\\ N_j\end{pmatrix},
\qquad
\mathrm{adj}(DG(r))=\begin{pmatrix}\Theta_{1,j}&\Theta_{2,j}\\ \Theta_{3,j}&\Theta_{4,j}\end{pmatrix};
\tag{34}
\]
where N_j consists of the last j rows of DG(r), \widetilde{N}_j of the remaining m − j rows, and Θ_{1,j}, Θ_{2,j}, Θ_{3,j}, Θ_{4,j} stand for submatrices of sizes (j + m − r) × (m − j), (j + m − r) × j, (r − j) × (m − j) and (r − j) × j, respectively. Thus, we have
\[
\mathrm{adj}(DG(r))\cdot DG(r)=\begin{pmatrix}\Theta_{1,j}\widetilde{N}_j+\Theta_{2,j}N_j\\ \Theta_{3,j}\widetilde{N}_j+\Theta_{4,j}N_j\end{pmatrix}=f\cdot I_m,
\tag{35}
\]
with I_m denoting the identity matrix of order m. Since f belongs to J, then I_1(Θ_{1,j}\widetilde{N}_j + Θ_{2,j}N_j) ⊂ J. On the other hand, the entries of Θ_{1,j} are cofactors of the entries on the upper left corner of DG(r), hence belong to J as well. Therefore I_1(Θ_{2,j}N_j) ⊂ J as well.
From this, by an easy argument, it follows that
I_1(Θ_{2,j}) I_j(N_j) ⊂ J    (36)
and, for even more reason,
I_1(Θ_{2,j}) I_j(N_j) · I_{r−j}(M_{r−j}) ⊂ J.    (37)
Similarly, writing
\[
DG(r)=\begin{pmatrix}\widetilde{M}_{r-j} & M_{r-j}\end{pmatrix},
\]
we have
\[
DG(r)\cdot \mathrm{adj}(DG(r))=\begin{pmatrix}\widetilde{M}_{r-j}\Theta_{1,j}+M_{r-j}\Theta_{3,j} & \widetilde{M}_{r-j}\Theta_{2,j}+M_{r-j}\Theta_{4,j}\end{pmatrix}=f\cdot I_m.
\]
An entirely analogous reasoning leads to the inclusion I_1(Θ_{3,j}) I_{r−j}(M_{r−j}) ⊂ J, and for even more reason
I_1(Θ_{3,j}) I_j(N_j) · I_{r−j}(M_{r−j}) ⊂ J.    (38)
Arguing now with the second block \widetilde{M}_{r−j}Θ_{2,j} + M_{r−j}Θ_{4,j}, again I_1(\widetilde{M}_{r−j}Θ_{2,j} + M_{r−j}Θ_{4,j}) ⊂ J, and hence, for each δ ∈ I_j(N_j), also I_1(δ\widetilde{M}_{r−j}Θ_{2,j} + δM_{r−j}Θ_{4,j}) ⊂ J. But, by (36), the entries of δ\widetilde{M}_{r−j}Θ_{2,j} belong to J. Thus, the entries of δM_{r−j}Θ_{4,j} belong to J and consequently
I_1(Θ_{4,j}) I_j(N_j) · I_{r−j}(M_{r−j}) ⊂ J.    (39)
It follows from (37), (38) and (39) that
(I_1(Θ_{2,j}), I_1(Θ_{3,j}), I_1(Θ_{4,j})) (I_j(N_j) · I_{r−j}(M_{r−j})) ⊂ J.    (40)
Since also I_1(Θ_{1,j}) ⊂ J, we have
(I_1(Θ_{1,j}), I_1(Θ_{2,j}), I_1(Θ_{3,j}), I_1(Θ_{4,j})) (I_j(N_j) · I_{r−j}(M_{r−j})) ⊂ J.    (41)
From this it follows that
I_j(N_j) · I_{r−j}(M_{r−j}) I ⊂ J    (42)
because
I = I_{m−1}(DG(r)) = I_1(adj(DG(r))) = (I_1(Θ_{1,j}), I_1(Θ_{2,j}), I_1(Θ_{3,j}), I_1(Θ_{4,j})).
This establishes the assertion above – we note that it contains as a special case (with j = 0
and j = r) the separate inclusions Ir (Mr ), Ir (Nr ) ⊂ J : I.
Theorem 3.7. Consider the matrix DG(r) as in (3.3), with 1 ≤ r ≤ m − 2. Let I := I_{m−1}(DG(r)) ⊂ R denote its ideal of (m − 1)-minors and J the gradient ideal of f := det(DG(r)). Then
(i) I is a Gorenstein ideal of codimension 4 and maximal analytic spread.
(ii) The (m − 1)-minors of DG(r) define a birational map P^{m^2 − \binom{r+1}{2} − 1} 99K P^{m^2 − 1} onto a cone over the polar variety of f with vertex cut by \binom{r+1}{2} coordinate hyperplanes.
(iii) The conductor J : I has codimension 2(m − r) ≥ 4; in particular, J has codimension 4.
(iv) If r ≤ m − 3 then I is contained in the unmixed part of J; in particular, if R/J is Cohen–Macaulay then r = m − 2.
(v) Let r = m − 2. Then the set of minimal primes of J is exactly the set of associated primes of I and of the ideals (I_j(N_j), I_{m−1−j}(M_{m−1−j})), for 1 ≤ j ≤ m − 2.
(vi) If, moreover, \binom{r+1}{2} ≤ m − 3, then I is a prime ideal; in particular, in this case it coincides with the unmixed part of J.
Proof. (i) The analytic spread follows from Lemma 3.3.
The remaining assertions of the item follow from Lemma 3.5, which shows that I is a
specialization of the ideal of generic submaximal minors, provided we argue that the set
{xm,m−r+1 ; xm,m−r+2 , xm−1,m−r+2 ; . . . ; xm,m , xm−1,m , . . . , xm−r+1,m }
of variables on the voided entry places of the generic m × m matrix
[the generic m × m matrix (x_{i,j}) displayed in full, with the above entries in its lower right corner voided]
is a subset of X as in the lemma. But this is immediate because of the assumption r ≤ m−2.
(ii) By Lemma 3.5, the ideal is a specialization of the ideal of submaximal minors in the
generic case; in particular, it is linearly presented. On the other hand, its analytic spread
is maximal by Lemma 3.3. Therefore, by Theorem 1.1 the minors define a birational map
onto the image. It remains to argue that the image is a cone over the polar variety, the
latter as described in Theorem 3.1 (c).
To see this, note the homogeneous inclusion T := k[J_{m−1}] ⊂ T′ := k[I_{m−1}] of k-algebras which are domains, where I is minimally generated by the generators of J and by \binom{r+1}{2} additional generators, say, f_1, . . . , f_s, where s = \binom{r+1}{2}; that is, T′ = T[f_1, . . . , f_s]. On the other hand, one has dim T = m^2 − r(r + 1) and dim T′ = m^2 − \binom{r+1}{2}. Therefore, tr.deg_{k(T)} k(T)(f_1, . . . , f_s) = dim T′ − dim T = \binom{r+1}{2} = s, where k(T) denotes the field of fractions of T. This means that f_1, . . . , f_s are algebraically independent over k(T) and, a fortiori, over T. This shows that T′ is a polynomial ring over T in \binom{r+1}{2} indeterminates. Geometrically, the image of the map defined by the (m − 1)-minors is a cone over the polar image with vertex cut by \binom{r+1}{2} independent hyperplanes.
(iii) The fact that the stated value 2(m − r) is an upper bound follows from the following observation: since the only cofactors which are not partial derivatives (up to sign) are those corresponding to the zero entries, J is contained in the ideal Q generated by the variables of the last row and the last column of the matrix. Since Q is prime and I ⊄ Q, clearly J : I ⊂ Q.
To see that 2(m − r) is a lower bound as well, we will use Proposition 3.6.
For that we need some intermediate results.
Claim 1. For every 1 ≤ j ≤ r, both Ij (Mj ) and Ij (Nj ) have codimension m − r.
By the clear symmetry, it suffices to consider I_j(M_j). Note that M_j has r − j + 1 null rows, so its ideal of j-minors coincides with the ideal of j-minors of its (m − (r − j + 1)) × j submatrix M_j′ with no null rows. Clearly, this ideal of (maximal) minors has codimension at most (m − (r − j + 1)) − j + 1 = m − r. Now, the matrix M_j′ specializes to the well-known diagonal specialization using only m − r variables; by definition, the latter is the specialization of a suitable Hankel matrix via the ring homomorphism mapping to zero the variables of the upper left and lower right corners except the last variable of the first column and the first variable of the last column. This ensures that I_j(M_j′) has codimension at least m − r.
Claim 2. For every 1 ≤ j ≤ r − 1, the respective sets of nonzero entries of Nj and
Mr−j+1 are disjoint. In particular, the codimension of (Ij (Nj ), Ir−j+1 (Mr−j+1 )) is 2(m−r).
The disjointness assertion is clear by inspection and the codimension follows from the
previous claim.
To proceed, we envisage the following chains of inclusions
I_1(N_1) ⊃ I_2(N_2) ⊃ . . . ⊃ I_{r−1}(N_{r−1}) ⊃ I_r(N_r)    (43)
and
I_1(M_1) ⊃ I_2(M_2) ⊃ . . . ⊃ I_{r−1}(M_{r−1}) ⊃ I_r(M_r).    (44)
Let P denote a prime ideal containing the conductor J : I. By Proposition 3.6 one has the inclusion I_1(N_1) · I_{r−1}(M_{r−1}) ⊂ P. Thus,
(A_1) either I_1(N_1) ⊂ P, or else
(B_1) I_1(N_1) ⊄ P but I_{r−1}(M_{r−1}) ⊂ P.
If (A1 ) is the case, then (I1 (N1 ), Ir (Mr )) ⊂ P , because Ir (Mr ) ⊂ J : I again by
Proposition 3.6 (with j = 0). By Claim 2 above, we then see that the codimension of J : I
is at least 2(m − r).
If (B1 ) takes place then we consider the inclusion I2 (N2 ) · Ir−2 (Mr−2 ) ⊂ J : I ⊂ P by
Proposition 3.6. The latter in turn gives rise to two possibilities according to which
(A2 ) either I2 (N2 ) ⊂ P, or else
(B2 ) I2 (N2 ) 6⊂ P but Ir−2 (Mr−2 ) ⊂ P.
Again, if (A2 ) is the case then (I2 (N2 ), Ir−1 (Mr−1 )) ⊂ P since Ir−1 (Mr−1 ) ⊂ P by
hypothesis. Once more, by Claim 2, the codimension of P is at least 2(m − r).
If instead (B_2) occurs then we step up to the inclusion I_3(N_3) · I_{r−3}(M_{r−3}) ⊂ P and repeat the argument. Proceeding in this way, we may eventually find an index 1 ≤ j ≤ r − 1 such that the first alternative (A_j) holds, in which case we are done, again by Claim 2.
Otherwise, we must be facing the situation where Ij (Nj ) 6⊂ P for every 1 ≤ j ≤ r − 1.
In particular, Ir−1 (Nr−1 ) 6⊂ P and I1 (M1 ) ⊂ P. Thus, (Ir (Nr ), I1 (M1 )) ⊂ P, and once
more by Claim 2, P has codimension at least 2(m − r). This concludes the proof of the
codimension of J : I.
The assertion that J has codimension 4 is then ensured as J ⊂ I and I has codimension
4 by item (i).
(iv) By (iii), if r ≤ m − 3 then J : I has codimension at least 2(m − r) ≥ 6. This implies
that I ⊂ J un and the two coincide up to radical.
The first assertion on the Cohen–Macaulayness of R/J is clear since then J is already
unmixed, hence J = I which is impossible since r ≥ 1.
(v) First, one has an inclusion J ⊂ (Ij (Nj ), Im−1−j (Mm−1−j )). Indeed, recall once more
that the partial derivatives are (signed) cofactors. Expanding each cofactor by Laplace
along j-minors, one sees that it is expressed as products of generators of Ij (Nj ) and of
Im−1−j (Mm−1−j ). Next, the ideal (Ij (Nj ), Im−1−j (Mm−1−j )) is perfect of codimension 4
– the codimension is clear by Claim 2 in the proof of (iii), while perfectness comes from
the fact that the tensor product over k of Cohen–Macaulay k-algebras of finite type is
Cohen–Macaulay.
At the other end, by (i) the ideal I is certainly perfect. Therefore, any associated prime
of either (Ij (Nj ), Im−1−j (Mm−1−j )) or I is a minimal prime of J.
Conversely, let P denote a minimal prime of J not containing I. Then J : I ⊂ P ,
hence the same argument in the proof of (iii) says that P contains some ideal of the form
(Ij (Nj ), Im−1−j (Mm−1−j )).
(vi) We will apply Proposition 2.1 in the case where M′ = G is an m × m generic matrix and M = DG(r) is the degenerated generic matrix as in the statement. In addition, we take k = m − 2, so k + 1 = m − 1 is the size of the submaximal minors. Observe that the vector space spanned by the entries of M has codimension \binom{r+1}{2} in the vector space spanned by the entries of M′. Since the m × m generic matrix is 2 = (m − (m − 2))-generic (it is m-generic as explained in [6, Examples, p. 548]), the theorem ensures that if \binom{r+1}{2} ≤ k − 1 = m − 3 then I_{m−1}(DG(r)) is prime.
Since in particular r ≤ m − 3, item (iv) says that I ⊂ J^{un}. But I is prime, hence I = J^{un}.
Remark 3.8. The statement of item (i) in Theorem 3.7 depends not only on the number of the entries forming a regular sequence on P but also on their mutual position. Thus, for example, if more than r of the entries belong to one same column or row it may happen that I has codimension strictly less than 4.
We end by filing a few natural questions/conjectures. The notation is the same as in
the last theorem.
Question 3.9. Does I = J un hold for r ≤ m − 3? (By (iv) above it would suffice to prove
that I is a radical ideal.)
Question 3.10. Is \binom{r+1}{2} ≤ m − 3 the exact obstruction for the primality of the ideal I of submaximal minors?
Conjecture 3.11. If \binom{r+1}{2} ≤ m − 3 then J has no embedded primes.
Conjecture 3.12. Let r = m − 2. Then for any 1 ≤ j ≤ m − 2 one has the following primary decomposition:
I_j(M_j) = (x_{j+1,m−j+1}, δ_j) ∩ (x_{j,m−j+2}, δ_{j−1}) ∩ . . . ∩ (x_{3,m−1}, δ_2) ∩ (x_{2,m}, x_{1,m}),
where δ_t denotes the determinant of the t × t upper submatrix of M_t. A similar result holds for I_j(N_j) upon reverting the indices of the entries and replacing δ_t by the determinant γ_t of the t × t leftmost submatrix of N_t. In particular, I_j(M_j) and I_j(N_j) are radical ideals.
Conjecture 3.13. In the notation of the previous conjecture, one has
\[
J : I \;=\; \Bigl(\bigcap_{t=1}^{r}(x_{m,1}, x_{m,2}, x_{t+1,m-t+1}, \delta_t)\Bigr)\;\cap\;\Bigl(\bigcap_{t=1}^{r-1}(x_{m-1,3}, \gamma_2, x_{t+1,m-t+1}, \delta_t)\Bigr)\;\cap\;\cdots\;\cap\;(x_{3,m-1}, \gamma_r, x_{2,m}, \delta_1).
\]
In particular, J : I is a radical ideal.
Conjecture 3.14. If r = m − 2 then both R/J and R/J : I are Cohen–Macaulay reduced
rings and, moreover, one has J = I ∩ (J : I).
3.3 The dual variety
We keep the previous notation with DG(r) denoting the m × m matrix in (3.3).
In this part we describe the structure of the dual variety V (f )∗ of V (f ) for f =
det DG(r). The result in particular answers affirmatively a question posed by F. Russo
as to whether the codimension of the dual variety of a homogeneous polynomial in its polar
variety can be arbitrarily large when its Hessian determinant vanishes. In addition it shows
that this can happen in the case of structured varieties.
As a preliminary, we file the following lemma which may have independent interest.
Lemma 3.15. Let Ga,b denote the a × b generic matrix, with a ≥ b. Letting 0 ≤ r ≤ b − 2,
consider the corresponding degeneration matrix Ψ := DG a,b (r):
[the a × b generic matrix in which the \binom{r+1}{2} entries of the lower right r × r corner lying on or below its main anti-diagonal are replaced by zeros]
Then the ideal of maximal minors of Ψ has the expected codimension a − b + 1.
Proof. The argument is pretty much the same as in the proof of Lemma 3.5, except that
the right lower corner of Ga,b whose entries will form a regular sequence is now as depicted
in blue below
X = [the corresponding set of entries in the lower right corner of G_{a,b}, highlighted in blue in the original picture].
(Note that the “worst” case is the b-minor of the last b rows of G_{a,b}, hence the top right highlighted entry above.)
To finish the proof of the lemma, one argues that the blue entries form a regular sequence
on the initial ideal of Ib (Ga,b ) since the latter is generated by the products of the entries
along the anti-diagonals of the maximal minors (in any monomial order).
Next is the main result of this part. We stress that, in contrast to the dual variety in the
case of the cloning degeneration, here the dual variety will in fact be a ladder determinantal
variety.
Back to the notation of (3.3), one has:
Theorem 3.16. Let 0 ≤ r ≤ m − 2 and f := det DG(r). Then:
(a) The dual variety V(f)^* of V(f) is a ladder determinantal variety of codimension (m − 1)^2 − \binom{r+1}{2} defined by 2-minors; in particular it is arithmetically Cohen–Macaulay and its codimension in the polar variety of V(f) is (m − 1)^2 − r(r + 1).
(b) V (f )∗ is arithmetically Gorenstein if and only if r = m − 2.
Proof. (a) The proof will proceed in parallel to the proof of Theorem 2.5, but there will
be some major changes.
We first show that once more dim V(f)^* = 2m − 2 and, for that, check the two side inequalities separately.
1. dim V (f )∗ ≥ 2m − 2.
At the outset we draw as before upon the equality of Segre ([17]):
dim V(f)^* = rank H(f) (mod f) − 2,
where H(f) denotes the Hessian matrix of f. It will then suffice to show that H(f) has a submatrix of rank at least 2m modulo f.
For this purpose, consider the submatrix ϕ of DG(r) obtained by omitting the first row:
[the (m − 1) × m matrix consisting of rows 2, . . . , m of DG(r)]
Claim: The ideal I_t(ϕ) has codimension at least m − t + 1 for every 1 ≤ t ≤ m − 1.
To see this, for every 1 ≤ t ≤ m − 1, consider the (m − 1) × t submatrix Ψ_t of ϕ consisting of its first t columns.
By Lemma 3.15, the codimension of It (Ψt ) is at least m − t – note that the lemma is
applied with a = m − 1, b = t ≤ m − 1 and r updated to r0 := r − (m − t), so indeed
b − r0 = t − r + (m − t) = m − r ≥ 2 as required.
It remains to find some t-minor of ϕ which is a nonzerodivisor on It (Ψt ). One choice is
the t-minor D of the columns 2, 3 . . . , t − 1, m and rows 1, m − t + 1, m − t + 2, . . . , m − 1.
Indeed, a direct verification shows that the variables in the support of the initial in(D) of D
in the reverse lexicographic order do not appear in the supports of the initial terms of the
maximal minors of Ψt . Therefore, in(D) is a nonzerodivisor on the initial ideal of It (Ψt ).
A standard argument then shows that It (Ψt ) has codimension at least m − t + 1.
This completes the proof of the claim.
The statement means that the ideal of maximal minors of ϕ satisfies the so-called property (F1 ). Since it is a perfect ideal of codimension 2 with ϕ as its defining Hilbert–Burch
syzygy matrix, it follows as in the proof of Theorem 2.5 that it is an ideal of linear type.
([9]). In particular the m maximal minors of ϕ are algebraically independent over k, hence
their Jacobian matrix with respect to the entries of ϕ has rank m.
From this point the argument proceeds exactly as in the proof of item 1 of Theorem 2.5.
2. dim V (f )∗ ≤ 2m − 2.
Let P ⊂ k[y] := k[yi,j | 2 ≤ i + j ≤ 2m − r] denote the homogeneous defining ideal of
the dual variety V (f )∗ in its natural embedding, i.e.,
k[∂f /∂xi,j | 2 ≤ i + j ≤ 2m − r]/(f ) ' k[y]/P.
(45)
The isomorphism is an isomorphism of graded k-algebras induced by the assignment yi,j 7→
∂f /∂xi,j .
Claim 2. The homogeneous defining ideal P of V(f)^* contains the ideal generated by the 2 × 2 minors of the ladder matrix L shown in Figure 3 (ladder ideal of the dual variety).
To see this, we first recall that by Proposition 2.2, ∂f /∂xi,j coincides with the cofactor
of xi,j in DG(r). Now, consider the following relation afforded by the cofactor identity:
adj(DG(r)) · DG(r) ≡ 0 (mod f ).
(46)
Further, for each pair of integers i, j such that 1 ≤ i < j ≤ m let Fij denote the 2 × m
submatrix of adj(DG(r)) consisting of the ith and jth rows. In addition, let C stand for
the m × (m − 1) submatrix of DG(r) consisting of its m − 1 leftmost columns. Then (46)
gives the relations
Fij C ≡ 0 (mod f ),
for all 1 ≤ i < j ≤ m. From this, since the rank of C modulo (f ) is obviously still m − 1, the
rank of every F_{i,j} is necessarily 1. This shows that every 2 × 2 minor of adj(DG(r)) vanishes
modulo (f ). Therefore, each such minor involving only cofactors that are partial derivatives
gives a 2 × 2 minor of L vanishing on the partial derivatives. Clearly, by construction, we
obtain this way all the 2 × 2 minors of L. This proves the claim.
Now, since I2 (L) is a ladder determinantal ideal on a suitable generic matrix it is a
Cohen-Macaulay prime ideal (see [15] for primeness and [10] for Cohen–Macaulayness).
Moreover, its codimension is m^2 − \binom{r+1}{2} − (2m − 1) = (m − 1)^2 − \binom{r+1}{2}, as follows from an
application to this case of the general principle in terms of maximal chains as described in
[10, Theorem 4.6 and Corollary 4.7].
(b) By the previous item, the homogeneous defining ideal of the dual variety is generated
by the 2×2 minors of the ladder matrix in Figure 3. Observe that the smallest square matrix
containing all the entries of the latter is the m × m matrix DG(r). By [3, Theorem, (b) p.
120], the ladder ideal is Gorenstein if and only if the inner corners of the ladder have indices
(i, j) satisfying the equality i+j = m+1. In the present case, the inner corners have indices
satisfying the equation
i + j = m − r + (m − 1) = m − r + 1 + (m − 2) = · · · = m − 1 + (m − r) = 2m − r − 1.
Clearly this common value equals m + 1 if and only if r = m − 2.
Remark 3.17. It may be of some interest to note that degenerating the matrix DG(m − 2) all the way to a Hankel matrix recovers the so-called sub-Hankel matrix, thoroughly studied in [2] from the homaloidal point of view and in [13] from the ideal-theoretic side. The situation is even more intriguing since in the sub-Hankel case the determinant is homaloidal.
References
[1] W. Bruns, U. Vetter, Determinantal Rings, Lecture Notes in Mathematics 1327,
Springer-Verlag, 1988.
[2] C. Ciliberto, F. Russo and A. Simis, Homaloidal hypersurfaces and hypersurfaces with
vanishing Hessian, Advances in Math., 218 (2008) 1759–1805.
[3] A. Conca, Ladder determinantal rings, J. Pure Appl. Algebra 98 (1995), 119–134.
[4] A. Doria, H. Hassanzadeh and A. Simis, A characteristic free criterion of birationality,
Advances in Math., 230 (2012), 390–413.
[5] D. Eisenbud, On the resiliency of determinantal ideals, Proceedings of the U.S.-Japan
Seminar, Kyoto 1985. In Advanced Studies in Pure Math. II, Commutative Algebra and
Combinatorics, ed. M. Nagata and H. Matsumura, North-Holland (1987) 29–38.
[6] D. Eisenbud, Linear sections of determinantal varieties, Amer. J. Mathematics, 110
(1988), 541–575.
[7] M. A. Golberg, The derivative of a determinant, The American Mathematical Monthly,
79 (1972), 1124–1126.
[8] J. Herzog and T. Hibi, Monomial ideals, Graduate Texts in Mathematics 260, Springer-Verlag, 2011.
[9] J. Herzog, A. Simis and W. Vasconcelos, Koszul homology and blowing-up rings, in
Commutative Algebra, Proceedings, Trento (S. Greco and G. Valla, Eds.). Lecture
Notes in Pure and Applied Mathematics 84, Marcel-Dekker, 1983, 79–169.
[10] J. Herzog and N. N. Trung, Gröbner bases and multiplicity of determinantal and Pfaffian ideals, Adv. in Math. 96 (1992), 1–37.
[11] J. M. Landsberg, L. Manivel and N. Ressayre, Hypersurfaces with degenerate duals and
the geometric complexity theory program, Comment. Math. Helv. 88 (2013), 469-484.
[12] M. Mostafazadehfard, Hankel and sub-Hankel determinants – a detailed
study of their polar ideals, PhD Thesis, Universidade Federal de Pernambuco
(Recife, Brazil), July 2014.
[13] M. Mostafazadehfard and A. Simis, Homaloidal determinants, J. Algebra 450 (2016),
59-101.
[14] M. Mostafazadehfard and A. Simis, Corrigendum to “Homaloidal determinants”, 450
(2016), 59-101.
[15] H. Narasimhan, The Irreducibility of Ladder Determinantal Varieties, J. Algebra 102
(1986), 162–185.
[16] F. Russo, On the Geometry of Some Special Projective Varieties, Lecture Notes of the
Unione Matematica Italiana, Springer 2015.
[17] B. Segre, Bertini forms and Hessian matrices, J. London Math. Soc. 26 (1951), 164–176.
[18] A. Simis, R. Villarreal, Combinatorics of monomial Cremona maps, Math. Comp., 81
(2012), 1857–1867.
Multi-modal Aggregation for Video Classification
Chen Chen
Alibaba Group iDST
[email protected]
Xiaowei Zhao
Alibaba Group iDST
[email protected]
Yang Liu
Alibaba Group iDST
[email protected]
ABSTRACT
In this paper, we present a solution to Large-Scale Video
Classification Challenge (LSVC2017) [1] that ranked the 1st place.
We focused on a variety of modalities that cover visual, motion
and audio. Also, we visualized the aggregation process to better
understand how each modality takes effect. Among the extracted
modalities, we found Temporal-Spatial features calculated by 3D
convolution quite promising that greatly improved the
performance. We attained the official metric mAP 0.8741 on the
testing set with the ensemble model.
1 INTRODUCTION
Video classification is a challenging task in computer vision that
has significant attention in recent years along with more and more
large-scale video datasets. Compared with image classification,
video classification needs to aggregate frame level features to
video level knowledge. More modalities can be extracted in
videos like audio, motion, ASR etc. Multi-modalities are mutual
complement to each other in most cases.
The recent competition entitled “Large-Scale Video Classification
Challenge” provides a platform to explore new approaches for
realistic setting video classification. The dataset [2] contains over
8000 hours with 500 categories which cover a range of topics like
social events, procedural events, objects, scenes, etc. The
training/validation/test set has 62000/15000/78000 untrimmed
videos respectively. The evaluation metric is mean Average
Precision (mAP) across all categories. The organizers provide
frame level features with 1fps based on VGG. They also give raw
videos for the whole dataset and participants are allowed to
extract any modality.
2 APPROACH
2.1 Video Classification Architecture
For the video classification method, the first step is to extract frame-level CNN activations as intermediate features, and then to aggregate the features through pooling layers like VLAD, Bag-of-visual-words, LSTM and GRU. In the previous YouTube-8M
competition [3], the frame level features were restricted to
officially provided ImageNet pre-trained inception v3 activation
thus the participants can only focus on aggregation methods.
However, in LSVC2017 competition, since the raw videos are
provided and the dataset scale is suitable, we put emphasis on
modality extraction and used VLAD as the aggregation layer.
Figure 1 shows our architecture for multi-modal aggregation for
video classification.
Figure 1: Overview of the video classification architecture.
2.1.1 Modality Extraction. We extract visual, audio and motion
features that are pre-trained by different public dataset. Since
VLAD aggregation layer doesn’t have the ability to model
temporal information, aside from the frame level features, we also
extracted spatial-temporal features with 3d convolutional network
and found them vital for action-related classes like high jump, baby
crawling, etc. The details of each modality are introduced in
Section 2.2.
2.1.2 Data processing. For the modality feature pre-processing,
we use PCA, whitening and quantization. The PCA dimension for
each modality is chosen according to the estimated importance to
classification in common sense, for example ImageNet pre-trained
features have 1024 dimension while audio feature has only 128
dimension. The whitening centralizes the energy and we clip the
value to [-2.5, 2.5] followed by 8-bit uniform quantization. The
purpose of quantization is to save the feature volume and the
experiments show it will not hurt the performance greatly. In
terms of sampling policy, we use random sampling in both
training and test as illustrated in Figure 2. First we divide the
video to splits with 10 minutes each so as to deal with extremely
long videos. Then, we extract frame level visual feature with 1 fps
and randomly select 50 frames. We found the pattern that in many
classes, representative scenes are not evenly distributed. For
example, “Food making” classes often start with people
introducing the recipe for a long time. Evenly splitting videos would produce misleading training data, since many scenes of “people talking”, without any hint of food, would be labeled as a particular food. Random sampling is a tradeoff between keeping key frames and computation complexity. In evaluation, we repeat the random test and average the results, which improves the mAP by about 0.1%–0.2%. For spatial-temporal features, the sampling policy is applied to features rather than frames, because each feature is influenced by several nearby frames.
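As an illustration only (not the authors' released code), the following Python/NumPy sketch shows one possible implementation of the preprocessing and random-sampling policy described above; the clipping range [-2.5, 2.5] and the 8-bit quantization follow the text, while function and variable names are our own.

import numpy as np

def preprocess(features, pca_mean, pca_components):
    # Project onto the PCA basis (whitening assumed folded into pca_components).
    reduced = (features - pca_mean) @ pca_components.T
    # Clip to [-2.5, 2.5] and quantize uniformly to 8 bits, as described in Sec. 2.1.2.
    clipped = np.clip(reduced, -2.5, 2.5)
    return np.round((clipped + 2.5) / 5.0 * 255).astype(np.uint8)

def sample_frames(frame_features, num_samples=50, split_len=600):
    # Split a long video (one feature per second) into 10-minute chunks,
    # then randomly sample up to num_samples frame features per chunk.
    chunks = [frame_features[i:i + split_len]
              for i in range(0, len(frame_features), split_len)]
    sampled = []
    for chunk in chunks:
        k = min(num_samples, len(chunk))
        idx = np.sort(np.random.choice(len(chunk), size=k, replace=False))
        sampled.append(chunk[idx])
    return np.concatenate(sampled, axis=0)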
2.1.3 Feature aggregation. We use VLAD as in [4] to aggregate multi-modality features through time. Each modality learns its own VLAD encoding; the encodings are concatenated and fed through a fully connected layer, a mixture of experts and context gating, as sketched below.
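A minimal sketch (our own, assuming PyTorch; the cluster count and dimensions are illustrative, not the exact configuration used in the submission) of a learnable VLAD-style pooling layer of the kind applied per modality before concatenation:

import torch
import torch.nn as nn
import torch.nn.functional as F

class LearnableVLAD(nn.Module):
    def __init__(self, feature_dim, num_clusters=80):
        super().__init__()
        # Soft-assignment weights and cluster centers are both learned.
        self.assign = nn.Linear(feature_dim, num_clusters)
        self.centers = nn.Parameter(torch.randn(num_clusters, feature_dim) * 0.01)

    def forward(self, x):
        # x: (batch, num_frames, feature_dim)
        a = F.softmax(self.assign(x), dim=-1)            # (B, T, K) soft assignments
        residuals = x.unsqueeze(2) - self.centers         # (B, T, K, D)
        vlad = (a.unsqueeze(-1) * residuals).sum(dim=1)   # (B, K, D), aggregated over time
        vlad = F.normalize(vlad, dim=-1)                  # intra-normalization
        return F.normalize(vlad.flatten(1), dim=-1)       # (B, K*D) descriptor

The per-modality descriptors would then be concatenated and passed to the fully connected, mixture-of-experts and context-gating head mentioned above.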
Figure 2: Frame level feature Random Sampling in training and
test with 1 FPS.
2.2 Modality Extraction
In this section, we describe all the modalities respectively. We
outline the overview of extraction in table 1.
Table 1: Multi-modal Feature Extraction Overview

  Modality    FPS   Dataset    CNN Structure
  Visual      1     ImageNet   Inception Resnet V2
  Visual      1     ImageNet   Squeeze & Excitation
  Visual      1     Places365  Resnet152
  Visual      1     Food101    InceptionV3
  I3D RGB     0.3   Kinetics   InceptionV1 3D
  I3D Flow    0.3   Kinetics   InceptionV1 3D
  Audio       0.9   AudioSet   VGG-like
2.2.1 Visual feature pre-trained on ImageNet. ImageNet is a
large-scale annotated dataset with 1000 categories and over 1.2
million images. CNN can learn meaningful representation after
training on ImageNet. LSVC2017 provided frame level features
with VGG structure. Considering VGG is not state-of-the-art
CNN structure, we download 3T raw videos and extract the
features on our own. We use Inception Resnet V2 [5] and Squeeze
& Excitation model [6] for comparison.
2.2.2 Visual feature pre-trained on Places365. Places365 is the
largest subset of the Places2 Database [7], the 2nd generation of the
Places Database by MIT CS&AI Lab. By adding the modality
with this scene dataset, we hope it helps to define a context in
frame level feature.
2.2.3 Visual feature pre-trained on Food101. In LSVC2017
dataset, about 90 classes are food related. We found food class
mAP is always lower than the whole by about 15% which means
it greatly impacts the performance. We look into the food class
and found some classes are difficult to be distinguished visually.
For example, “making tea” vs “making milk tea”, “making juice” vs “making lemonade”, “making salad” vs “making sandwich”. Among these classes, many ingredients are similar. To make matters worse, making food always involves scenes with people introducing the recipes. Bearing in mind that the clues for classifying food cooking classes are so subtle, the model may benefit from utilizing features pre-trained on a food dataset. Food101 [8] has 101 food categories
and 101000 images. It covers most of food classes in LSVC2017.
2.2.4 Audio feature pre-trained on AudioSet. Audio contains a
lot of information that helps to classify videos. We extract audio
feature by a VGG like acoustic model trained on AudioSet [9]
which consists of 632 audio event classes and over 2 million
labeled 10-second sound clips. The process is the same as that in
Youtube-8M, Google has released the extraction code in
tensorflow model release.
2.2.5 Temporal-Spatial feature pre-trained on Kinetics. Action
classification is one of the hottest topics in video classification.
Actions involve strong temporal dependent information that can
depart action classification from single-image analysis. A lot of
action dataset came up in recent years, like Kinetics [10], UCF101, HMDB-51 etc. Action dataset has trimmed videos and each
clip lasts around 10s with a single class. Carreira et al. proposed
an inflated 3D model [11] that can leverage ImageNet by inflating
2D ConvNets into 3D. Their I3D model pre-trained on Kinetics
gets state-of-the-art performance in both UCF101 and HMDB51
datasets. In untrimmed videos, features through time may be
much more complicated, so we combine the Temporal-Spatial feature
I3D and Aggregation layer VLAD and the results show
noteworthy improvement.
Figure 3: I3D RGB extraction diagram
I3D RGB feature extraction details are shown in Figure 3. For
each input video clip, we first sample frames at 25 fps following
the origin pre-train sampling policy and send frames to I3D model
every 80 frames. Due to the 3D ConvNet structure, the temporal
dimension for output feature is reduced by a factor of 8 compared
with input. We averaged the output feature through time and get
the Spatial-Temporal feature with FPS (Feature per second) at
0.3125. For I3D Flow, most of the part is the same except that we
apply TV-L1 optical flow algorithm after sampling the videos.
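To make the timing explicit, here is a rough sketch (ours; i3d_model is a placeholder for an assumed Kinetics-pretrained I3D network) of the windowed extraction: 80-frame clips at 25 fps, temporal averaging of the network output, giving one feature every 3.2 s, i.e. 0.3125 features per second.

import numpy as np

CLIP_LEN = 80   # frames per clip fed to I3D
FPS = 25        # sampling rate of the decoded video

def extract_i3d_features(frames, i3d_model):
    # frames: (num_frames, H, W, 3), already sampled at 25 fps.
    features = []
    for start in range(0, len(frames) - CLIP_LEN + 1, CLIP_LEN):
        clip = frames[start:start + CLIP_LEN]   # one 3.2-second clip
        out = i3d_model(clip)                   # assumed shape (CLIP_LEN // 8, feat_dim) after the 8x temporal reduction
        features.append(out.mean(axis=0))       # average over time -> one vector per clip
    return np.stack(features)                   # 0.3125 feature vectors per second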
In terms of realistic untrimmed videos in dataset like Youtube-8M
and LSVC2017, many classes can only be distinguished by
temporal information as illustrated in Figure 4. Each row shows 5
sample frames. The labels for the three videos are “baby
crawling”, “playing with nun chucks” and “cleaning a white
board”. All the videos are hard to infer ground truth based on
frames. The baby could be sitting on the bed. Nun chucks are hard
to notice in the second example and it seems that he is dancing.
In the last video, we are not sure whether he is cleaning the board
or writing on the board. VLAD and random sampling with frame
level features can only aggregate single-image visual feature.
Spatial-Temporal features are able to extend the learned
representative feature to more complicated continuous event.
Figure 4: Action video frame samples in LSVC2017.
3 EXPERIMENT
3.1 Visualization
In this Section, we focus on what has been learned in VLAD and
how each modality takes effect. We visualize the learned cluster
and the whole aggregation process in prediction with the best
single model including 5 modalities: I3D RGB, I3D Flow,
Inception Resnet V2, Squeeze & excitation and food.
3.1.1 VLAD cluster visualization. VLAD cluster are supposed to
learn meaningful visual concepts. In our implementation, we
noticed that greatly increasing the cluster size does not improve but rather hurts the performance. After doing some experiments, the cluster
size is set with value 40 for food, scene & audio modality and 80
for the others. We randomly picked frames in validation set and
computed VLAD cluster assignment map. We illustrate some
sample frames that maximize the assignment in some cluster in
Figure 5.
Figure 5: Representative images that have largest assignment for
some VLAD clusters, which successfully learn meaningful visual
concept. Each row for a modality.
3.1.2 Aggregation visualization. To verify the impact of the different modalities, we visualize the process of aggregation. We show the raw videos, the changing ground-truth probability and the cluster assignment histogram of each modality. The histogram color is computed as the difference between the GT probability and the one obtained when the modality data is padded with zeros. The darker the histogram color is, the larger the gap is, and thus the more contribution the modality makes. Different kinds of examples are shown in Figures 6-8.
Figure 6: Aggregation Visualization for class: Fried egg. Left five
cluster assignment histograms are computed with I3D RGB, I3D
Flow, Inception Resnet V2, Squeeze & excitation, food
respectively. The top right image is a sample frame and below it is the curve of the ground truth probability vs time. Here we give three snapshots in temporal order. Note that in the beginning,
the eggs cannot impact the probability at all. After a while, some
visual hints like pouring oil and pot that highly correlated with
“Fried egg” start to activate GT prediction. When the Fried egg
eventually forms, it has a high confidence in GT. The histogram
color shows ImageNet pre-trained feature has the most influence
in this case and food/I3D RGB modality also contribute a little bit.
The blue arrow in last status points to the rapid histogram change
once egg changes to fried form.
Figure 7: Aggregation Visualization for class: Baby Crawling. As
mentioned for Figure 4, this class is hard with only frame level
features. The histogram color proves that only I3D features take
effect.
Figure 8: Aggregation visualization for class: Marriage Proposal.
This class has the pattern that there is always a surprise at the
end. The probability curve fits well with this pattern. The value gets to its highest level when the couple hug each other, and the spatial-temporal feature successfully captures this key movement.
3.2 Experiment Results
3.2.1 Evaluation of single-modal. We evaluate every single-
modality model on validation set except food because food is not
a general feature for videos. Two ImageNet pre-trained modalities
gets the highest mAP. CNN structure of Squeeze & Excitation is
better than that of Inception Resnet V2 by nearly 3%. The Spatial-Temporal feature I3D has slightly lower performance. It makes
sense because kinetics dataset has mainly action knowledge while
LSVC2017 involves many object classes. Scene gets mAP of
0.6392 and Audio has the lowest mAP of 0.1840. Details are
listed in Table 2.
Table 3: Evaluation of multi-modal on Validation Set

  Model                           mAP      mAP(food)
  I3D                             0.7890   0.5309
  I3D + InResV2                   0.8130   0.6070
  I3D + InResV2 + Audio           0.8373   0.6557
  I3D + InResV2 + Food            0.8246   0.6710
  I3D + Senet                     0.8395   0.6652
  I3D + Senet + Food              0.8428   0.6855
  I3D + Senet + Scene             0.8379   0.6670
  I3D + Senet + InResV2           0.8449   0.6901
  I3D + Senet + InResV2 + Food    0.8485   0.7017
  25 model ensemble               0.8848   0.7478
  25 model ensemble (on Test)     0.8741   unknown
3.2.2 Evaluation of multi-modal. In Table 3, we show the multi-modality model results. I3D RGB and Flow are default
modalities. By comparing I3D with I3D + Senet and Senet in
Table 2 it is clear that spatial-temporal feature pre-trained on
action dataset and ImageNet pre-trained frame level features
complement each other well; the combination gets a relatively high
mAP of 0.8395. By adding more modalities based on I3D and
Senet, the best multi-modal single model achieves mAP of 0.8485.
Since food is a very important subset, we list the food mAP in the third column, which shows that the food modality helps the food performance by a considerable margin. Audio can improve the
mAP while scene seems to be useless in our results. Our final
submission is an ensemble of 25 models with different combinations of modalities. It achieves an mAP of 0.8741 on the test set and ranked 1st in the competition.
Table 2: Evaluation of single-modal on Validation Set

  Modality               mAP
  Inception Resnet V2    0.7551
  Squeeze & Excitation   0.7844
  Scene                  0.6392
  I3D RGB                0.7438
  I3D Flow               0.6819
  Audio                  0.1840
  Multi-modality         (see Table 3)

4 CONCLUSIONS
In summary, we have proposed a multi-modal aggregation method
for large-scale video classification. We showed that spatial-temporal features pre-trained on an action dataset improve the
performance a lot. We also visualize the aggregation process and
find that multi-modalities are mutually complementary and the
model implicitly selects the modality that best describes the videos.
REFERENCES
[1] Wu, Zuxuan and Jiang, Y.-G, and Davis, Larry S and Chang, Shih-Fu. LargeScale Video Classification Challenge. 2017.
[2] Jiang, Yu-Gang, Wu, Zuxuan, Wang, Jun and Xue, Xiangyang and Chang,
Shih-Fu. Exploiting feature and class relationships in video categorization with
regularized deep neural networks. 2017.
[3] S. Abu-El-Haija, N. Kothari, J. Lee, P. Natsev, G. Toderici, B. Varadarajan, and
S. Vijayanarasimhan. Youtube-8m: A large-scale video classification
benchmark. arXiv preprint arXiv:1609.08675, 2016.
[4] Antoine Miech, Ivan Laptev and Josef Sivic. Learnable pooling with context
gating for video classification. arXiv preprint arXiv:1706.06905, 2017.
[5] Christian Szegedy, Sergey Ioffe and Vincent Vanhoucke. Inception-v4,
Inception-ResNet and the Impact of Residual Connections on Learning. arXiv
preprint arXiv:1602.07261, 2016.
[6] Jie Hu, Li Shen, Gang Sun. Squeeze-and Excitation Networks. arXiv preprint
arXiv:1709.01507, 2017.
[7] B. Zhou, A. Lapedriza, A. Khosla, A. Oliva, and A. Torralba. Places: A 10
million Image Database for Scene Recognition. In PAMI, 2017.
[8] Bossard, Lukas and Guillaumin, Mattieu and Van Gool, Luc. Food-101 –
Mining Discriminative Components with Random Forests. In ECCV 2014.
[9] Jort F. Gemmeke and Daniel P.W. Ellis and Dylan Freedman and Aren Jansen
and Wade Lawrence and R. Channing Moore and Manoj Plakal and Marvin
Ritter. Audio Set: An ontology and human-labeled dataset for audio events. In
ICASSP 2017.
[10] Will Kay, Joao Carreira, Karen Simonyan, Brian Zhang, Chloe Hillier,
Sudheendra, Vijayanarasimhan, Fabio Viola, Tim Green, Trevor Back, Paul
Natsev, Mustafa Suleyman, Andrew Zisserman. The kinetics human action
video dataset. arXiv preprint arXiv:1705.06950, 2017.
[11] Joao Carreira and Andrew Zisserman. Quo Vadis, Action Recognition? A new
model and the kinetics dataset. In CVPR 2017.
arXiv:1104.2835v3 [] 13 Oct 2013
Combinatorial properties and characterization of
glued semigroups
J. I. Garcı́a-Garcı́a∗
M.A. Moreno-Frı́as†
A. Vigneron-Tenorio‡
Abstract
This work focuses on the combinatorial properties of glued semigroups
and provides its combinatorial characterization. Some classical results for
affine glued semigroups are generalized and some methods to obtain glued
semigroups are developed.
Keywords: Gluing of semigroups, semigroup, semigroup ideal, simplicial complex, toric ideal.
MSC-class: 20M14 (Primary), 20M05 (Secondary).
Introduction
Let S = hn1 , . . . , nl i be a finitely generated commutative semigroup with zero
element which is reduced (i.e. S ∩ (−S) = {0}) and cancellative (if m, n, n0 ∈ S
and m+n = m+n0 then n = n0 ). Under these settings if S is torsion-free then it
is isomorphic to a subsemigroup of Np which means it is an affine semigroup (see
[11]). From now on assume that all the semigroups appearing in this work are
finitely generated, commutative, reduced and cancellative, but not necessarily
torsion-free.
Let k be a field and k[X1 , . . . , Xl ] the polynomial ring in l indeterminates.
This polynomial ring is obviously an S−graded ring (by assigning the S-degree n_i to the indeterminate X_i, the S-degree of X^α = X_1^{α_1} · · · X_l^{α_l} is \sum_{i=1}^{l} α_i n_i ∈ S). It is well known that the ideal I_S generated by
{ X^α − X^β | \sum_{i=1}^{l} α_i n_i = \sum_{i=1}^{l} β_i n_i } ⊂ k[X_1, . . . , X_l]
∗ Departamento
de Matemáticas, Universidad de Cádiz, E-11510 Puerto Real (Cádiz,
Spain). E-mail: [email protected]. Partially supported by MTM2010-15595 and Junta de
Andalucı́a group FQM-366.
† Departamento de Matemáticas, Universidad de Cádiz, E-11510 Puerto Real (Cádiz,
Spain). E-mail: [email protected]. Partially supported by MTM2008-06201-C02-02
and Junta de Andalucı́a group FQM-298.
‡ Departamento de Matemáticas, Universidad de Cádiz, E-11405 Jerez de la Frontera (Cádiz, Spain). E-mail: [email protected]. Partially supported by the grant
MTM2007-64704 (with the help of FEDER Program), MTM2012-36917-C03-01 and Junta
de Andalucı́a group FQM-366.
is an S−homogeneous binomial ideal called semigroup ideal (see [6] for details).
If S is torsion-free, the ideal obtained defines a toric variety (see [12] and the
references therein). By Nakayama’s lemma, all the minimal generating sets of
IS have the same cardinality and the S−degrees of its elements can be determined.
The main goal of this work is to study the semigroups which result from
the gluing of two others. This concept was introduced by Rosales in [10]
and it is closely related with complete intersection ideals (see [13] and the
references therein). A semigroup S minimally generated by A1 t A2 (with
A1 = {n1 , . . . , nr } and A2 = {nr+1 , . . . , nl }) is the gluing of S1 = hA1 i and
S2 = hA2 i if there exists a set of generators ρ of IS of the form ρ = ρ1 ∪
0
ρ2 ∪ {X γ − X γ }, where ρ1 , ρ2 are generating sets of IS1 and IS2 respectively,
γ
γ0
X − X ∈ IS and the supports of γ and γ 0 verify supp (γ) ⊂ {1, . . . , r}
and supp (γ 0 ) ⊂ {r + 1, . . . , l}. Equivalently, S is the gluing of S1 and S2 if
0
IS = IS1 + IS2 + hX γ − X γ i. A semigroup is a glued semigroup when it is the
gluing of two others.
As seen, glued semigroups can be determined by the minimal generating
sets of IS which can be studied by using combinatorial methods from certain
simplicial complexes (see [1], [4] and [7]). In this work the simplicial complexes
used are defined as follows: for any m ∈ S, set
Cm = {X^α = X_1^{α_1} · · · X_l^{α_l} | \sum_{i=1}^{l} α_i n_i = m}    (1)
and the simplicial complex
∇m = {F ⊆ Cm | gcd(F) ≠ 1},    (2)
with gcd(F ) the greatest common divisor of the monomials in F.
Furthermore, some methods which require linear algebra and integer programming are given to obtain examples of glued semigroups.
The content of this work is organized as follows. Section 1 presents the tools
to generalize to non torsion-free semigroups a classical characterization of affine
gluing semigroups (Proposition 2). In Section 2, the non-connected simplicial
complexes ∇m associated to glued semigroups are studied. By using the vertices of the connected components of these complexes we give a combinatorial
characterization of glued semigroups as well as their glued degrees (Theorem
6). Besides, in Corollary 7 we deduce the conditions in order the ideal of a
glued semigroup to be uniquely generated. Finally, Section 3 is devoted to the
construction of glued semigroups (Corollary 10) and affine glued semigroups
(Subsection 3.1).
1 Preliminaries and generalizations about glued semigroups
A binomial of IS is called indispensable if it is an element of every system of generators of IS (up to a scalar multiple). This kind of binomial was introduced in
[9] and they have an important role in Algebraic Statistics. In [8] the authors
characterize indispensable binomials by using simplicial complexes ∇m . Note
that if IS is generated by its indispensable binomials, it is uniquely generated
up to scalar multiples.
With the above notation, the semigroup S is associated to the lattice ker S formed by the elements α = (α_1, . . . , α_l) ∈ Z^l such that \sum_{i=1}^{l} α_i n_i = 0. Given
G a system of generators of IS , the lattice ker S is generated by the elements
α − β with X α − X β ∈ G and ker S also verifies that ker S ∩ Nl = {0} if
and only if S is reduced. If M(IS ) is a minimal generating set of IS , denote
by M(IS )m ⊂ M(IS ) the set of elements whose S−degree is equal to m ∈ S
and by Betti(S) the set of the S−degrees of the elements of M(IS ). When IS
is minimally generated by rank(ker S) elements, the semigroup S is called a
complete intersection semigroup.
Let C(∇m ) be the number of connected components of ∇m . The cardinality
of M(IS )m is equal to C(∇m ) − 1 (see Remark 2.6 in [1] and Theorem 3 and
Corollary 4 in [7]) and the complexes associated to the elements in Betti(S) are
non-connected.
Construction 1. ([4, Proposition 1]). For each m ∈ Betti(S) the set M(IS)m is obtained by taking C(∇m) − 1 binomials with monomials in different connected components of ∇m, in such a way that no two different binomials have their corresponding monomials in the same pair of components, and such that at least one monomial of every connected component of ∇m appears. This lets us construct a minimal generating set of IS in a combinatorial way.
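As a purely illustrative aid (our own sketch, not from [4]): given the finite set Cm as a list of exponent vectors, the connected components of ∇m can be computed with a simple union-find over pairs of monomials having nonunit gcd (i.e. sharing a variable); Construction 1 then chooses binomials whose two monomials lie in different components, connecting all components.

from itertools import combinations

def components_of_nabla(monomials):
    # monomials: list of exponent tuples representing the elements of C_m.
    parent = list(range(len(monomials)))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    for i, j in combinations(range(len(monomials)), 2):
        # Two monomials span an edge of ∇_m when their gcd is not 1,
        # i.e. some variable occurs in both exponent vectors.
        if any(a > 0 and b > 0 for a, b in zip(monomials[i], monomials[j])):
            parent[find(i)] = find(j)

    classes = {}
    for i in range(len(monomials)):
        classes.setdefault(find(i), []).append(monomials[i])
    return list(classes.values())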
Let S be minimally1 generated by A1 t A2 with A1 = {a1 , . . . , ar } and
A2 = {b1 , . . . , bt }. From now on, identify the sets A1 and A2 with the matrices
(a1 | · · · |ar ) and (b1 | · · · |bt ). Denote by k[A1 ] and k[A2 ] the polynomial rings
k[X1 , . . . , Xr ] and k[Y1 , . . . , Yt ], respectively. A monomial is a pure monomial
if it has indeterminates only in X1 , . . . , Xr or only in Y1 , . . . , Yt , otherwise it is
a mixed monomial. If S is the gluing of S1 = hA1 i and S2 = hA2 i, then the
binomial X γX −Y γY ∈ IS is a glued binomial if M(IS1 )∪M(IS2 )∪{X γX −Y γY }
is a generating set of IS and in this case the element d = S-degree(X γX ) ∈ S is
called a glued degree.
It is clear that if S is a glued semigroup, the lattice ker S has a basis of the
form
{L1 , L2 , (γX , −γY )} ⊂ Zr+t ,
(3)
where the supports of the elements in L1 are in {1, . . . , r}, the supports of the
elements in L2 are in {r + 1, . . . , r + t}, ker Si = hLi i (i = 1, 2) by considering
only the coordinates in {1, . . . , r} or {r+1, . . . , r+t} of Li , and (γX , γY ) ∈ Nr+t .
Moreover, since S is reduced, one has that hL1 i ∩ Nr+t = hL2 i ∩ Nr+t = {0}.
Denote by {ρ1i }i the elements in L1 and by {ρ2i }i the elements in L2 .
The following Proposition generalizes [10, Theorem 1.4] to non-torsion free
semigroups.
Proposition 2. The semigroup S is the gluing of S1 and S2 if and only if there
exists d ∈ (S1 ∩ S2 ) \ {0} such that G(S1 ) ∩ G(S2 ) = dZ, where G(S1 ), G(S2 )
and dZ are the associated commutative groups of S1 , S2 and {d}, respectively.
1 We consider a minimal generator set of S, in the other case S is trivially the gluing of the
semigroup generated by one of its non minimal generators and the semigroup generated by
the others.
3
Proof. Assume that S is the gluing of S1 and S2 . In this case, ker S is generated
by the set (3). Since (γX , −γY ) ∈ ker S, the element d is equal to A1 γX =
A2 γY ∈ S and d ∈ S1 ∩ S2 ⊂ G(S1 ) ∩ G(S2 ). Let d0 be in G(S1 ) ∩ G(S2 ),
then there exists (δ1 , δ2 ) ∈ Zr × Zt such that d0 = A1 δ1 = A2 δ2 . Therefore
(δ1 , −δ2 ) ∈ ker S because (A1 |A2 )(δ1 , −δ2 ) = 0 and so there exist λ, λρi 1 , λρi 2 ∈ Z
satisfying
P ρ1
(δ1 , 0) =
i λi ρ1i + λ(γX , 0)
(0, δ2 )
−
=
P
i
λρi 2 ρ2i + λ(0, γY ),
P
and d0 = A1 δ1 = i λρi 1 (A1 |0)ρ1i + λA1 γX = λd. We conclude that G(S1 ) ∩
G(S2 ) = dZ with d ∈ S1 ∩ S2 .
Conversely, suppose there exists d ∈ (S1 ∩S2 )\{0} such that G(S1 )∩G(S2 ) =
dZ. We see IS = IS1 +IS2 +hX γX −Y γY i. Trivially, IS1 +IS2 +hX γX −Y γY i ⊂ IS .
Let X α Y β −X γ Y δ be a binomial in IS . Its S−degree is A1 α+A2 β = A1 γ +A2 δ.
Using A1 (α − γ) = A2 (β − δ) ∈ G(S1 ) ∩ G(S2 ) = dZ, there exists λ ∈ Z such
that A1 α = A1 γ + λd and A2 δ = A2 β + λd. We have the following cases:
• If λ = 0,
X αY β − X γ Y δ = X αY β − X γ Y β + X γ Y β − X γ Y δ =
= Y β (X α − X γ ) + X γ (Y β − Y δ ) ∈ IS1 + IS2 .
• If λ > 0,
X αY β − X γ Y δ =
= X α Y β −X γ X λγX Y β +X γ X λγX Y β −X γ X λγY Y β +X γ X λγY Y β −X γ Y δ =
= Y β (X α − X γ X λγX ) + X γ Y β (X λγX − Y λγY ) + X γ (Y λγY Y β − Y δ ).
Using that
X
λγX
−Y
λγY
= (X
γX
−Y
γY
)
λ−1
X
!
X
(λ−1−i)γX
Y
iγY
,
i=0
the binomial X α Y β − X γ Y δ belongs to IS1 + IS2 + hX γX − Y γY i.
• The case λ < 0 is solved similarly.
We conclude that IS = IS1 + IS2 + hX γX − Y γY i.
From above proof it is deduced that given the partition of the system of
generators of S the glued degree is unique.
2
Glued semigroups and combinatorics
Glued semigroups by means of non-connected simplicial complexes are characterized. For any m ∈ S, redefine Cm from (1), as
Cm = {X α Y β = X1α1 · · · Xrαr Y1β1 · · · Ytβt |
r
X
i=1
4
αi ai +
t
X
i=1
βi bi = m}
and consider the sets of vertices and the simplicial complexes
A1
Cm
= {X1α1 · · · Xrαr |
r
X
A1
1
αi ai = m}, ∇A
m = {F ⊆ Cm | gcd(F ) 6= 1},
i=1
A2
Cm
= {Y1β1 · · · Ytβt |
t
X
A2
2
βi bi = m}, ∇A
m = {F ⊆ Cm | gcd(F ) 6= 1},
i=1
where A1 = {a1 , . . . , ar } and A2 = {b1 , . . . , bt } as in Section 1. Trivially, the
A2
1
relations between ∇A
m , ∇m and ∇m are
A1
A2
A2
1
∇A
m = {F ∈ ∇m |F ⊂ Cm }, ∇m = {F ∈ ∇m |F ⊂ Cm }.
(4)
The following result shows an important property of the simplicial complexes
associated to glued semigroups.
Lemma 3. Let S be the gluing of S1 and S2 , and m ∈ Betti(S). Then all the
connected components of ∇m have at least a pure monomial. In addition, all
mixed monomials of ∇m are in the same connected component.
Proof. Suppose that there exists C, a connected component of ∇m only with
mixed monomials. By Construction 1 in all generating sets of IS there is at
least a binomial with a mixed monomial, but this does not occur in M(IS1 ) ∪
M(IS2 ) ∪ {X γX − Y γY } with X γX − Y γY a glued binomial.
Since S is a glued semigroup, ker S has a system of generators as (3). Let
X α Y β , X γ Y δ ∈ Cm be two monomials such that gcd(X α Y β , X γ Y δ ) = 1. In
this case, (α, β) − (γ, δ) ∈ ker S and there exist λ, λρi 1 , λρi 2 ∈ Z satisfying:
P ρ1
(α − γ, 0) =
i λi ρ1i + λ(γX , 0)
(0, β − δ)
=
P
i
λρi 2 ρ2i − λ(0, γY )
• If λ = 0, α − γ ∈ ker S1 and β − δ ∈ ker S2 , then A1 α = A1 γ, A2 β = A2 δ
and X α Y δ ∈ Cm .
P
• If λ > 0, (α, 0) = i λρi 1 ρ1i + λ(γX , 0) + (γ, 0) and
X ρ
A1 α =
λi 1 (A1 |0)ρ1i + λA1 γX + A1 γ = λd + A1 γ,
i
then X λγX X γ Y β ∈ Cm .
• The case λ < 0 is solved likewise.
In any case, X α Y β and X γ Y δ are in the same connected component of ∇m .
We now describe the simplicial complexes that correspond to the S−degrees
which are multiples of the glued degree.
Lemma 4. Let S be the gluing of S1 and S2 , d ∈ S the glued degree and
d0 ∈ S \ {d}. Then CdA0 1 6= ∅ =
6 CdA0 2 if and only if d0 ∈ (dN) \ {0}. Furthermore,
the simplicial complex ∇d0 has at least one connected component with elements
in CdA0 1 and CdA0 2 .
5
Pr
Pt
Proof. If there exist X α , Y β ∈ Cd0 , then d0 = i=1 αi ai = i=1 βi bi ∈ S1 ∩S2 ⊂
G(S1 ) ∩ G(S2 ) = dZ. Hence, d0 ∈ dN.
Conversely, let d0 = jd with j ∈ N and j > 1 and take X γX − Y γY ∈
IS be a glued binomial. It is easy to see that X jγX , Y jγY ∈ Cd0 and thus
{X jγX , X (j−1)γX Y γY } and {X (j−1)γX Y γY , Y jγY } belong to ∇d0 .
The following Lemma is a combinatorial version of [5, Lemma 9] and it is a
necessary condition of Theorem 6.
Lemma 5. Let S be the gluing of S1 and S2 , and d ∈ S the glued degree. Then
the elements of Cd are pure monomials and d ∈ Betti(S).
Proof. The order S defined by m0 S m if m − m0 ∈ S is a partial order on S.
Assume there exists a mixed monomial T ∈ Cd . By Lemma 3, there exists
a pure monomial Y b in Cd such that {T, Y b } ∈ ∇d (the proof is analogous
if we consider X a with {T, X a } ∈ ∇d ). Now take T1 = gcd(T, Y b )−1 T and
Y b1 = gcd(T, Y b )−1 Y b . Both monomials are in Cd0 , where d0 is equal to d
minus the S−degree of gcd(T, Y b ). By Lemma 4, if CdA0 1 6= ∅ then d0 ∈ dN, but
since d0 ≺S d this is not possible. So, if T1 is a mixed monomial and CdA0 1 = ∅,
then CdA0 2 6= ∅. If there exists a pure monomial in CdA0 2 connected to a mixed
monomial in Cd0 , we perform the same process obtaining T2 , Y b2 ∈ Cd00 with T2
a mixed monomial and d00 ≺S d0 . This process can be repeated if there exist a
pure monomial and a mixed monomial in the same connected component. By
degree reasons this cannot be performing indefinitely, an element d(i) ∈ Betti(S)
verifying that ∇d(i) is not connected having a connected component with only
mixed monomials is found. This contradicts Lemma 3.
After examining the structure of the simplicial complexes associated to glued
semigroups, we enunciate a combinatorial characterization by means of the nonconnected simplicial complexes ∇m .
Theorem 6. The semigroup S is the gluing of S1 and S2 if and only if the
following conditions are fulfilled:
1. For all d0 ∈ Betti(S), any connected component of ∇d0 has at least a pure
monomial.
2. There exists a unique d ∈ Betti(S) such that CdA1 6= ∅ 6= CdA2 and the
elements in Cd are pure monomials.
3. For all d0 ∈ Betti(S) \ {d} with CdA0 1 6= ∅ =
6 CdA0 2 , d0 ∈ dN.
Besides, the above d ∈ Betti(S) is the glued degree.
Proof. If S is the gluing of S1 and S2 , the result is obtained from Lemmas 3, 4
and 5.
Conversely, by hypothesis 1 and 3, given d0 ∈ Betti(S)\{d} the set M(IS1 )d0
is constructed from CdA0 1 and M(IS2 )d0 from CdA0 2 as in Construction 1. Analogously, if d ∈ Betti(S), the set M(IS )d is obtained from the union of M(IS1 )d ,
M(IS2 )d and the binomial X γX −Y γY with X γX ∈ CdA1 and Y γY ∈ CdA2 . Finally
G
M(IS1 )m t M(IS2 )m t {X γX − Y γY }
m∈Betti(S)
is a generating set of IS and S is the gluing of S1 and S2 .
6
From Theorem 6 we obtain an equivalent property to Theorem 12 in [5] by
using the language of monomials and binomials.
Corollary 7. Let S be the gluing of S1 and S2 , and X γX − Y γY ∈ IS a glued binomial with S−degree d. The ideal IS is minimally generated by its indispensable
binomials if and only if the following conditions are fulfilled:
• The ideals IS1 and IS2 are minimally generated by their indispensable binomials.
• The element X γX − Y γY is an indispensable binomial of IS .
• For all d0 ∈ Betti(S), the elements of Cd0 are pure monomials.
Proof. Suppose that IS is generated by its indispensable binomials. By [8,
Corollary 6], for all m ∈ Betti(S) the simplicial complex ∇m has only two
vertices. By Construction 1, ∇d = {{X γX }, {Y γY }} and by Theorem 6 for all
A2
1
d0 ∈ Betti(S) \ {d} the simplicial ∇d0 is equal to ∇A
d0 or ∇d0 . In any case,
γX
γY
X −Y
∈ IS is an indispensable binomial, and IS1 , IS2 are generated by
their indispensable binomials.
Conversely, suppose that IS is not generated by its indispensable binomials.
Then, there exists d0 ∈ Betti(S) \ {d} such that ∇d0 has more than two vertices
in at least two different connected components. By hypothesis, there are not
mixed monomials in ∇d0 and thus:
A2
1
• If ∇d0 is equal to ∇A
d0 (or ∇d0 ), then IS1 (or IS2 ) is not generated by its
indispensable binomials.
• Otherwise, CdA0 1 6= ∅ =
6 CdA0 2 and by Lemma 4, d0 = jd with j ∈ N, therefore
(j−1)γX γY
X
Y
∈ Cd0 which contradicts the hypothesis.
We conclude IS is generated by its indispensable binomials.
The following example taken from [13] illustrates the above results.
Example 8. Let S ⊂ N2 be the semigroup generated by the set
{(13, 0), (5, 8), (2, 11), (0, 13), (4, 4), (6, 6), (7, 7), (9, 9)} .
In this case, Betti(S) is
{(15, 15), (14, 14), (12, 12), (18, 18), (10, 55), (15, 24), (13, 52), (13, 13)}.
Using the appropriated notation for the indeterminates in the polynomial
ring k[x1 , . . . , x4 , y1 , . . . , y4 ] (x1 , x2 , x3 and x4 for the first four generators of
S and y1 , y2 , y3 , y4 for the others), the simplicial complexes associated to the
elements in Betti(S) are those that appear in Table 1. From this table and by
using Theorem 6, the semigroup S is the gluing of h(13, 0), (5, 8), (2, 11), (0, 13)i
and h(4, 4), (6, 6), (7, 7), (9, 9)i and the glued degree is (13, 13). From Corollary
7, the ideal IS is not generated by its indispensable binomials (IS has only four
indispensable binomials).
7
C(15,15) = {y12 y3 , y2 y4 }
C(14,14) = {y12 y2 , y32 }
C(12,12) = {y13 , y22 }
C(10,55) = {x21 x4 , x53 }
∇(15,15)
∇(14,14)
∇(12,12)
∇(10,55)
Ys
Ys
Ys
Xs
C(18,18) = {y12 y2 , y23 ,
y1 y32 , y42 }
∇(18,18)
C(15,24) = {x1 x2 x3 , x32 ,
x3 y1 y4 , x3 y2 y3 }
∇(15,24)
C(13,52) = {x2 x43 , x1 x44 ,
x34 y1 y4 , x34 y2 y3 }
∇(13,52)
C(13,13) = {x1 x4 , y1 y4 ,
y2 y3 }
∇(13,13)
Ys
Xs
Ys
Xs
Ys
Ys
Table 1: Non-connected simplicial complexes associated to Betti(S).
3
Generating glued semigroups
In this section, an algorithm to obtain examples of glued semigroups is given.
Consider A1 = {a_1, ..., a_r} and A2 = {b_1, ..., b_t}, two minimal generating sets of the semigroups T1 and T2, and let L_j = {ρ_{ji}}_i be a basis of ker T_j for j = 1, 2. Assume that I_{T1} and I_{T2} are nontrivial proper ideals of their corresponding polynomial rings. Let γX and γY be two nonzero elements in N^r and N^t respectively², and consider the integer matrix
$$A = \begin{pmatrix} L_1 & 0 \\ 0 & L_2 \\ \gamma_X & -\gamma_Y \end{pmatrix}. \qquad (5)$$
² Note that γX ∉ ker T1 and γY ∉ ker T2 because these semigroups are reduced.
Let S be a semigroup such that ker S is the lattice generated by the rows of
the matrix A. This semigroup can be computed by using the Smith Normal
Form (see [11, Chapter 3]). Denote by B1, B2 two sets of cardinality r and t, respectively, satisfying S = ⟨B1, B2⟩ and such that ker(⟨B1, B2⟩) is generated by the rows of A.
The following proposition shows that the semigroup S satisfies one of the necessary conditions to be a glued semigroup.
Proposition 9. The semigroup S verifies G(⟨B1⟩) ∩ G(⟨B2⟩) = (B1 γX)ℤ = (B2 γY)ℤ with d = B1 γX ∈ ⟨B1⟩ ∩ ⟨B2⟩.
Proof. Use that ker S has a basis as (3) and proceed as in the proof of the
necessary condition of Proposition 2.
Since B1 ∪ B2 may not be a minimal generating set, this condition does not ensure that S is a glued semigroup. For instance, taking the numerical semigroups T1 = ⟨3, 5⟩, T2 = ⟨2, 7⟩ and (γX, γY) = (1, 0, 2, 0), the matrix obtained
from formula (5) is
$$\begin{pmatrix} 5 & -3 & 0 & 0 \\ 0 & 0 & 7 & -2 \\ 1 & 0 & -2 & 0 \end{pmatrix}$$
and B1 ∪ B2 = {12, 20, 6, 21} is not a minimal generating set.
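The relations encoded by this matrix are easy to verify numerically; the following plain-Python sketch (an illustrative check with our own variable names, not part of the program ecuaciones of [3]) assembles A as in (5) and confirms that its rows annihilate the candidate generators (12, 20, 6, 21).

# Bases of ker T1 and ker T2 for T1 = <3, 5> and T2 = <2, 7>.
L1 = [(5, -3)]          # 5*3 - 3*5 = 0
L2 = [(7, -2)]          # 7*2 - 2*7 = 0
gamma_X, gamma_Y = (1, 0), (2, 0)

# Matrix A of formula (5): block-diagonal (L1, L2) plus the gluing row (gamma_X, -gamma_Y).
A = ([row + (0,) * len(gamma_Y) for row in L1]
     + [(0,) * len(gamma_X) + row for row in L2]
     + [gamma_X + tuple(-g for g in gamma_Y)])

generators = (12, 20, 6, 21)   # B1 = {12, 20}, B2 = {6, 21}

# Every row of A must be a relation among the generators (it lies in ker S).
for row in A:
    assert sum(r * g for r, g in zip(row, generators)) == 0

# B1 ∪ B2 is not a minimal generating set: 12 = 6 + 6 is redundant.
print("12 = 6 + 6:", 12 == 6 + 6)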
The following result solves this issue.
Corollary 10. The semigroup S is a glued semigroup if
$$\sum_{i=1}^{r} \gamma_{Xi} > 1 \quad\text{and}\quad \sum_{i=1}^{t} \gamma_{Yi} > 1. \qquad (6)$$
Proof. Suppose that the set of generators B1 ∪ B2 of S is not minimal; then one of its elements is a natural combination of the others. Assume that this element is the first one of B1 ∪ B2. Then there exist λ_2, ..., λ_{r+t} ∈ N such that B1(1, −λ_2, ..., −λ_r) = B2(λ_{r+1}, ..., λ_{r+t}) ∈ G(⟨B1⟩) ∩ G(⟨B2⟩). By Proposition 9, there exists λ ∈ ℤ satisfying B1(1, −λ_2, ..., −λ_r) = B2(λ_{r+1}, ..., λ_{r+t}) = B1(λγX). Since B2(λ_{r+1}, ..., λ_{r+t}) ∈ S, we have λ ≥ 0 and thus
$$\nu = (1 - \lambda\gamma_{X1}, \underbrace{-\lambda_2 - \lambda\gamma_{X2}, \ldots, -\lambda_r - \lambda\gamma_{Xr}}_{\leq 0}) \in \ker(\langle B_1\rangle) = \ker T_1,$$
with the following cases:
• If λγ_{X1} = 0, then T1 is not minimally generated, which is not possible by hypothesis.
• If λγ_{X1} > 1, then 0 > ν ∈ ker T1, which is not possible because T1 is a reduced semigroup.
• If λγ_{X1} = 1, then λ = γ_{X1} = 1 and
$$\nu = (0, \underbrace{-\lambda_2 - \gamma_{X2}, \ldots, -\lambda_r - \gamma_{Xr}}_{\leq 0}) \in \ker T_1.$$
If λ_i + γ_{Xi} ≠ 0 for some i = 2, ..., r, then T1 is not a reduced semigroup. This implies λ_i = γ_{Xi} = 0 for all i = 2, ..., r.
We have just proved that γX = (1, 0, ..., 0). In the general case, if S is not minimally generated it is because either γX or γY is an element of the canonical basis of N^r or N^t, respectively. To avoid this situation, it is sufficient to take γX and γY satisfying Σ_{i=1}^{r} γ_{Xi} > 1 and Σ_{i=1}^{t} γ_{Yi} > 1.
From the above result we obtain a characterization of glued semigroups: S
is a glued semigroup if and only if ker S has a basis as (3) satisfying Condition
(6).
Example 11. Let T1 = ⟨(−7, 2), (11, 1), (5, 0), (0, 1)⟩ ⊂ ℤ² and T2 = ⟨3, 5, 7⟩ ⊂ N be two reduced affine semigroups. We compute their associated lattices ker T1 = ⟨(1, 2, −3, −4), (2, −1, 5, −3)⟩ and ker T2 = ⟨(−4, 1, 1), (−7, 0, 3)⟩.
If we take γX = (2, 0, 2, 0) and γY = (1, 2, 1), the matrix A is
$$A = \begin{pmatrix} 1 & 2 & -3 & -4 & 0 & 0 & 0 \\ 2 & -1 & 5 & -3 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & -4 & 1 & 1 \\ 0 & 0 & 0 & 0 & -7 & 0 & 3 \\ 2 & 0 & 2 & 0 & -1 & -2 & -1 \end{pmatrix}$$
and the semigroup S ⊂ ℤ_4 × ℤ² is generated by B1 ∪ B2, where
$$B_1 = \{(1, -5, 35), (3, 12, -55), (1, 5, -25), (0, 1, 0)\}, \qquad B_2 = \{(2, 0, 3), (2, 0, 5), (2, 0, 7)\}.$$
The semigroup S is the gluing of the semigroups ⟨B1⟩ and ⟨B2⟩, and ker S is generated by the rows of the above matrix. The ideal I_S ⊂ ℂ[x_1, ..., x_4, y_1, ..., y_3] is generated³ by
$$\{x_1x_3^8x_4 - x_2^3,\ x_1x_2^2 - x_3^3x_4^4,\ x_1^2x_3^5 - x_2x_4^3,\ x_1^3x_2x_3^2 - x_4^7,\ y_1y_3 - y_2^2,\ y_1^3y_2 - y_3^2,\ y_1^4 - y_2y_3,\ \underbrace{x_1^2x_3^2 - y_1y_2^2y_3}_{\text{glued binomial}}\},$$
so S is indeed a glued semigroup.
3.1 Generating affine glued semigroups
From Example 11 it can be deduced that the semigroup S is not necessarily torsion-free. In general, a semigroup T is affine (or, equivalently, torsion-free) if and only if the invariant factors⁴ of the matrix whose rows are a basis of ker T are all equal to one. Assume that the zero-columns of the Smith Normal Form of a matrix are located on its right side. We now give conditions for S to be torsion-free.
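This criterion can be tested directly on a small example: the invariant factors of an integer matrix are the quotients D_k/D_{k−1} of its determinantal divisors (D_k being the gcd of all k×k minors), so no full Smith Normal Form routine is needed. The following self-contained Python sketch (an illustrative helper of ours, not the program ecuaciones of [3]) computes them for the matrix A of Example 11; S is affine precisely when all invariant factors equal 1.

from itertools import combinations
from math import gcd
from functools import reduce

def det(m):
    """Integer determinant by Laplace expansion (fine for the small matrices used here)."""
    n = len(m)
    if n == 1:
        return m[0][0]
    total = 0
    for j in range(n):
        minor = [row[:j] + row[j + 1:] for row in m[1:]]
        total += (-1) ** j * m[0][j] * det(minor)
    return total

def invariant_factors(A):
    """Invariant factors d_k = D_k / D_{k-1}, with D_k the gcd of all k x k minors."""
    rows, cols = len(A), len(A[0])
    factors, previous = [], 1
    for k in range(1, min(rows, cols) + 1):
        minors = [abs(det([[A[i][j] for j in js] for i in isel]))
                  for isel in combinations(range(rows), k)
                  for js in combinations(range(cols), k)]
        d_k = reduce(gcd, minors, 0)
        if d_k == 0:          # rank reached; no further nonzero invariant factors
            break
        factors.append(d_k // previous)
        previous = d_k
    return factors

# Matrix A of Example 11 (its rows generate ker S).
A = [[1, 2, -3, -4, 0, 0, 0],
     [2, -1, 5, -3, 0, 0, 0],
     [0, 0, 0, 0, -4, 1, 1],
     [0, 0, 0, 0, -7, 0, 3],
     [2, 0, 2, 0, -1, -2, -1]]

print(invariant_factors(A))  # S is affine (torsion-free) iff all printed factors equal 1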
Take L1 and L2 the matrices whose rows form a basis of ker T1 and ker T2, respectively, and let P1, P2, Q1 and Q2 be matrices with determinant ±1 (i.e. unimodular matrices) such that D1 = P1 L1 Q1 and D2 = P2 L2 Q2 are the Smith Normal Forms of L1 and L2, respectively. If T1 and T2 are two affine semigroups, the invariant factors of L1 and L2 are equal to 1. Then
$$\begin{pmatrix} D_1 & 0 \\ 0 & D_2 \\ \gamma'_X & \gamma'_Y \end{pmatrix} = \begin{pmatrix} P_1 & 0 & 0 \\ 0 & P_2 & 0 \\ 0 & 0 & 1 \end{pmatrix} \underbrace{\begin{pmatrix} L_1 & 0 \\ 0 & L_2 \\ \gamma_X & -\gamma_Y \end{pmatrix}}_{=:A} \begin{pmatrix} Q_1 & 0 \\ 0 & Q_2 \end{pmatrix}, \qquad (7)$$
where γ'_X = γX Q1 and γ'_Y = −γY Q2. Let s1 and s2 be the numbers of zero-columns of D1 and D2 (s1, s2 > 0 because T1 and T2 are reduced, see [11, Theorem 3.14]).
Lemma 12. The semigroup S is an affine semigroup if and only if
$$\gcd\big(\{\gamma'_{Xi}\}_{i=r-s_1}^{r} \cup \{\gamma'_{Yi}\}_{i=t-s_2}^{t}\big) = 1.$$
³ See [14] to compute I_S when S has torsion.
⁴ The invariant factors of a matrix are the diagonal elements of its Smith Normal Form (see [2, Chapter 2] and [11, Chapter 2]).
Proof. With the conditions fulfilled by T1, T2 and (γX, γY), the necessary and sufficient condition for the invariant factors of A to be all equal to one is
$$\gcd\big(\{\gamma'_{Xi}\}_{i=r-s_1}^{r} \cup \{\gamma'_{Yi}\}_{i=t-s_2}^{t}\big) = 1.$$
The following corollary gives the explicit conditions that γX and γY must satisfy to construct an affine semigroup.
Corollary 13. The semigroup S is an affine glued semigroup if and only if:
1. T1 and T2 are two affine semigroups.
2. (γX, γY) ∈ N^{r+t}.
3. Σ_{i=1}^{r} γ_{Xi} > 1 and Σ_{i=1}^{t} γ_{Yi} > 1.
4. There exist f_{r−s_1}, ..., f_r, g_{t−s_2}, ..., g_t ∈ ℤ such that
$$(f_{r-s_1}, \ldots, f_r)\cdot(\gamma'_{X(r-s_1)}, \ldots, \gamma'_{Xr}) + (g_{t-s_2}, \ldots, g_t)\cdot(\gamma'_{Y(t-s_2)}, \ldots, \gamma'_{Yt}) = 1.$$
Proof. It is trivial by the given construction, Corollary 10 and Lemma 12.
Therefore, to obtain an affine glued semigroup it is enough to take two affine
semigroups and any solution (γX , γY ) of the equations of the above corollary.
Example 14. Let T1 and T2 be the semigroups of Example 11. We compute
two elements γX = (a1 , a2 , a3 , a4 ) and γY = (b1 , b2 , b3 ) in order to obtain an
affine semigroup. First of all, we perform a decomposition of the matrix as (7)
by computing the integer Smith Normal Forms of L1 and L2. This yields D1 = (I_2 | 0) (so s1 = 2) and D2 = (I_2 | 0) (so s2 = 1), and the transformed gluing row of (7) becomes
$$\gamma'_X = (a_1,\; a_1 - 2a_2 - a_3,\; -7a_1 + 11a_2 + 5a_3,\; 2a_1 + a_2 + a_4), \qquad \gamma'_Y = (-b_1,\; b_1 + 2b_2 + 3b_3,\; -3b_1 - 5b_2 - 7b_3).$$
Second, by Corollary 13, we must find a solution of the system
$$a_1 + a_2 + a_3 + a_4 > 1, \qquad b_1 + b_2 + b_3 > 1, \qquad f_1, f_2, g_1 \in \mathbb{Z},$$
$$f_1(-7a_1 + 11a_2 + 5a_3) + f_2(2a_1 + a_2 + a_4) + g_1(-3b_1 - 5b_2 - 7b_3) = 1,$$
with a1, a2, a3, a4, b1, b2, b3 ∈ N. Such a solution is computed (in less than a second) using FindInstance of Wolfram Mathematica (see [15]):
FindInstance[(−7 a1 + 11 a2 + 5 a3)*f1 + (2 a1 + a2 + a4)*f2 + (−3 b1 − 5 b2 − 7 b3)*g1 == 1 && a1 + a2 + a3 + a4 > 1 && b1 + b2 + b3 > 1 && a1 ≥ 0 && a2 ≥ 0 && a3 ≥ 0 && a4 ≥ 0 && b1 ≥ 0 && b2 ≥ 0 && b3 ≥ 0, {a1, a2, a3, a4, b1, b2, b3, f1, f2, g1}, Integers]

{{a1 → 0, a2 → 0, a3 → 3, a4 → 0, b1 → 1, b2 → 1, b3 → 0, f1 → 1, f2 → 0, g1 → 0}}
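The same conditions can also be checked without Mathematica: by Bézout's identity, integers f1, f2, g1 as in condition 4 exist exactly when the gcd of the three coefficients is 1. A minimal Python sketch (our own illustration, not part of ecuaciones) verifying the choice used below:

from math import gcd

def satisfies_corollary_13(a, b):
    """Check conditions 3 and 4 of Corollary 13 for this example (coefficients from (7))."""
    c1 = -7 * a[0] + 11 * a[1] + 5 * a[2]     # last two entries of gamma_X'
    c2 = 2 * a[0] + a[1] + a[3]
    c3 = -3 * b[0] - 5 * b[1] - 7 * b[2]      # last entry of gamma_Y'
    bezout_ok = gcd(gcd(abs(c1), abs(c2)), abs(c3)) == 1   # integer f1, f2, g1 exist
    return sum(a) > 1 and sum(b) > 1 and bezout_ok

print(satisfies_corollary_13((0, 0, 3, 0), (1, 1, 0)))   # the instance chosen below -> True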
We now take γX = (0, 0, 3, 0) and γY = (1, 1, 0), and construct the matrix
$$A = \begin{pmatrix} 1 & 2 & -3 & -4 & 0 & 0 & 0 \\ 2 & -1 & 5 & -3 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & -4 & 1 & 1 \\ 0 & 0 & 0 & 0 & -7 & 0 & 3 \\ 0 & 0 & 3 & 0 & -1 & -1 & 0 \end{pmatrix}.$$
We obtain the affine semigroup S ⊂ ℤ², minimally generated by B1 ∪ B2 with
$$B_1 = \{(2, -56), (1, 88), (0, 40), (1, 0)\}, \qquad B_2 = \{(0, 45), (0, 75), (0, 105)\};$$
it satisfies that ker S is generated by the rows of A, and it is the result of gluing the semigroups ⟨B1⟩ and ⟨B2⟩. The ideal I_S is generated by
$$\{x_1x_3^8x_4 - x_2^3,\ x_1x_2^2 - x_3^3x_4^4,\ x_1^2x_3^5 - x_2x_4^3,\ x_1^3x_2x_3^2 - x_4^7,\ y_1y_3 - y_2^2,\ y_1^3y_2 - y_3^2,\ y_1^4 - y_2y_3,\ \underbrace{x_3^3 - y_1y_2}_{\text{glued binomial}}\};$$
therefore, S is a glued semigroup.
All glued semigroups have been computed using our program ecuaciones, which is available in [3] (this program requires Wolfram Mathematica 7 or above to run).
References
[1] E. Briales, A. Campillo, C. Marijuán, P. Pisón. Minimal Systems of Generators for Ideals of Semigroups. J. Pure Appl. Algebra 124 (1998), 7–30.
[2] H. Cohen. A Course in Computational Algebraic Number Theory, Graduate Texts in Mathematics, 138, Springer-Verlag, 1996.
[3] Ecuaciones. http://www.uca.es/dpto/C101/pags-personales/alberto.vigneron/1ecuaciones.rar
[4] S. Eliahou. Courbes monomiales et algèbre de Rees symbolique. PhD Thesis, Université de Genève, 1983.
[5] P. A. Garcia-Sanchez, I. Ojeda. Uniquely presented finitely generated
commutative monoids. Pacific J. Math. 248(1) (2010), 91–105.
[6] J. Herzog. Generators and relations of abelian semigroups and semigroup
rings. Manuscripta Math. 3 (1970) 175–193.
[7] I. Ojeda, A. Vigneron-Tenorio. Simplicial complexes and minimal
free resolution of monomial algebras. J. Pure Appl. Algebra 214 (2010),
850–861.
[8] I. Ojeda, A. Vigneron-Tenorio. Indispensable binomials in semigroup
ideals. Proc. Amer. Math. Soc. 138 (2010), 4205–4216.
[9] H. Ohsugi, T. Hibi. Toric ideals arising from contingency tables. Commutative Algebra and Combinatorics. Ramanujan Mathematical Society
Lecture Notes Series, Vol. 4, Ramanujan Mathematical Society, Mysore,
India, 2007, pp. 91-115.
[10] J.C. Rosales. On presentations of subsemigroups of Nn . Semigroup Forum 55 (1997), no. 2, 152–159.
[11] J.C. Rosales, P.A. García-Sánchez. Finitely generated commutative
monoids. Nova Science Publishers, Inc., New York, 1999.
[12] B. Sturmfels. Gröbner bases and convex polytopes, volume 8 of University Lecture Series. American Mathematical Society, Providence, RI,
1996.
[13] A. Thoma. Construction of Set Theoretic Complete Intersections via
Semigroup Gluing. Beiträge Algebra Geom. 41(1) (2000), 195–198.
[14] A. Vigneron-Tenorio. Semigroup Ideals and Linear Diophantine Equations. Linear Algebra and its Applications 295 (1999), 133–144.
[15] Wolfram Mathematica. http://www.wolfram.com/mathematica/
Reverse Curriculum Generation
for Reinforcement Learning
arXiv:1707.05300v2 [] 17 Oct 2017
Carlos Florensa
UC Berkeley
[email protected]
David Held
UC Berkeley
[email protected]
Markus Wulfmeier
Oxford Robotics Institute
[email protected]
Michael Zhang
UC Berkeley
[email protected]
Pieter Abbeel
OpenAI
UC Berkeley
ICSI
[email protected]
Abstract: Many relevant tasks require an agent to reach a certain state, or to
manipulate objects into a desired configuration. For example, we might want
a robot to align and assemble a gear onto an axle or insert and turn a key in a
lock. These goal-oriented tasks present a considerable challenge for reinforcement
learning, since their natural reward function is sparse and prohibitive amounts
of exploration are required to reach the goal and receive some learning signal.
Past approaches tackle these problems by exploiting expert demonstrations or by
manually designing a task-specific reward shaping function to guide the learning
agent. Instead, we propose a method to learn these tasks without requiring any
prior knowledge other than obtaining a single state in which the task is achieved.
The robot is trained in “reverse”, gradually learning to reach the goal from a set
of start states increasingly far from the goal. Our method automatically generates a curriculum of start states that adapts to the agent’s performance, leading
to efficient training on goal-oriented tasks. We demonstrate our approach on difficult simulated navigation and fine-grained manipulation problems, not solvable
by state-of-the-art reinforcement learning methods.
Keywords: Reinforcement Learning, Robotic Manipulation, Automatic Curriculum Generation
1
Introduction
Reinforcement Learning (RL) is a powerful learning technique for training an agent to optimize a
reward function. Reinforcement learning has been demonstrated on complex tasks such as locomotion [1], Atari games [2], racing games [3], and robotic manipulation tasks [4]. However, there are
many tasks for which it is hard to design a reward function such that it is both easy to maximize
and yields the desired behavior once optimized. An ubiquitous example is a goal-oriented task; for
such tasks, the natural reward function is usually sparse, giving a binary reward only when the task
is completed [5]. This sparse reward can create difficulties for learning-based approaches [6]; on the
other hand, non-sparse reward functions for such tasks might lead to undesired behaviors [7].
For example, suppose we want a seven DOF robotic arm to learn how to align and assemble a gear
onto an axle or place a ring onto a peg, as shown in Fig. 1c. The complex and precise motion required
to align the ring at the top of the peg and then slide it to the bottom of the peg makes learning highly
impractical if a binary reward is used. On the other hand, using a reward function based on the
distance between the center of the ring and the bottom of the peg leads to learning a policy that
places the ring next to the peg, and the agent never learns that it needs to first lift the ring over the
top of the peg and carefully insert it. Shaping the reward function [8] to efficiently guide the policy
towards the desired solution often requires considerable human expert effort and experimentation
to find the correct shaping function for each task. Another source of prior knowledge is the use of
demonstrations, but it requires an expert intervention.
1st Conference on Robot Learning (CoRL 2017), Mountain View, United States.
In our work, we avoid all reward engineering or use of demonstrations by exploiting two key insights.
First, it is easier to reach the goal from states nearby the goal, or from states nearby where the agent
already knows how to reach the goal. Second, applying random actions from one such state leads
the agent to new feasible nearby states, from where it is not too much harder to reach the goal. This
can be understood as requiring a minimum degree of reversibility, which is usually satisfied in many
robotic manipulation tasks like assembly and manufacturing.
We take advantage of these insights to develop a “reverse learning” approach for solving such difficult manipulation tasks. The robot is first trained to reach the goal from start states nearby a given
goal state. Then, leveraging that knowledge, the robot is trained to solve the task from increasingly
distant start states. All start states are automatically generated by executing short random walk from
the previous start states that got some reward but still require more training. This method of learning
in reverse, or growing outwards from the goal, is inspired by dynamic programming methods like
value iteration, where the solutions to easier sub-problems are used to compute the solution to harder
problems.
In this paper, we present an efficient and principled framework for performing such “reverse learning.” Our method automatically generates a curriculum of initial positions from which to learn to
achieve the task. This curriculum constantly adapts to the learning agent by observing its performance at each step of the training process. Our method requires no prior knowledge of the task other
than providing a single state that achieves the task (i.e. is at the goal). The contributions of this paper
include:
• Formalizing a novel problem definition of finding the optimal start-state distribution at
every training step to maximize the overall learning speed.
• A novel and practical approach for sampling a start state distribution that varies over the
course of training, leading to an automatic curriculum of start state distributions.
• Empirical experiments showing that our approach solves difficult tasks like navigation or
fine-grained robotic manipulation, not solvable by state-of-the-art learning methods.
2 Related Work
Curriculum-based approaches with manually designed schedules have been explored in supervised
learning [9, 10, 11, 12] to split particularly complex tasks into smaller, easier-to-solve sub-problems.
One particular type of curriculum learning explicitly enables the learner to reject examples which it
currently considers too hard [13, 14]. This type of adaptive curriculum has mainly been applied
for supervised tasks and most practical curriculum approaches in RL rely on pre-specified task
sequences [15, 16]. Some very general frameworks have been proposed to generate increasingly
hard problems [17], although few implementations concretize the idea and only tackle preliminary
tasks [18]. A similar line of work uses intrinsic motivation based on learning progress to obtain
“developmental trajectories” that focus on increasingly harder tasks [19]. Nevertheless, their method
requires iteratively partitioning the full task space, which strongly limits the application to fine-grain
manipulation tasks like the ones presented in our work (see detailed analysis on easier tasks in [5]).
More recent work in curriculum for RL assumes baseline performances for several tasks are given,
and it uses them to gauge which tasks are the hardest (furthest behind the baseline) and require more
training [20]. However, this framework can only handle finite sets of tasks and requires each task to
be learnable on its own. On the other hand, our method trains a policy that generalizes to a set of
continuously parameterized tasks, and it is shown to perform well even under sparse rewards by not
allocating training effort to tasks that are too hard for the current performance of the agent.
Closer to our method of adaptively generating the tasks to train on, an interesting asymmetric selfplay strategy has recently been proposed [21]. Contrary to our approach, which aims to generate
and train on all tasks that are at the appropriate level of difficulty, the asymmetric component of
their method can lead to biased exploration concentrating on only a subset of the tasks that are at
the appropriate level of difficulty, as the authors and our own experiments suggests. This problem
and their time-oriented metric of hardness may lead to poor performance in continuous state-action
spaces, which are typical in robotics. Furthermore, their approach is designed as an exploration
bonus for a single target task; in contrast, we define a new problem of efficiently optimizing a policy
across a range of start states, which is considered relevant to improve generalization [22].
Our approach can be understood as sequentially composing locally stabilizing controllers by growing a tree of stabilized trajectories backwards from the goal state, similar to work done by Tedrake
et al. [23]. This can be viewed as a “funnel” which takes start states to the goal state via a series of
locally valid policies [24]. Unlike these methods, our approach does not require any dynamic model
of the system. An RL counterpart, closer to our approach, is the work by Bagnell et al. [25], where
a policy search algorithm in the spirit of traditional dynamic programming methods is proposed to
learn a non-stationary policy: they learn what should be done in the last time-step and then “back
it up” to learn the previous time-step and so on. Nevertheless, they require the stronger assumption
of having access to baseline distributions that approximate the optimal state-distribution at every
time-step.
The idea of directly influencing the start state distribution to accelerate learning in a Markov Decision Process (MDP) has already drawn attention in the past. Kakade and Langford [26] studied the
idea of exploiting the access to a ‘generative model’ [27] that allows training the policy on a fixed
‘restart distribution’ different from the one originally specified by the MDP. If properly chosen, this
is proven to improve the policy training and final performance on the original start state distribution.
Nevertheless, no practical procedure is given to choose this new distribution (only suggesting to use
a more uniform distribution over states, which is what our baseline does), and they don’t consider
adapting the start state distribution during training, as we do. Other researchers have proposed to
use expert demonstrations to improve learning of model-free RL algorithms, either by modifying the
start state distribution to be uniform among states visited by the provided trajectories [7], or biasing
the exploration towards relevant regions [28]. Our method works without any expert demonstrations,
so we do not compare against these lines of research.
3 Problem Definition
We consider the general problem of learning a policy that leads a system into a specified goal-space,
from any start state sampled from a given distribution. In this section we first briefly introduce the
general reinforcement learning framework and then we formally define our problem statement.
3.1 Preliminaries
We define a discrete-time finite-horizon Markov decision process (MDP) by a tuple M = (S, A, P, r, ρ0, T), in which S is a state set, A an action set, P : S × A × S → R+ is a transition probability distribution, r : S × A → R is a bounded reward function, ρ0 : S → R+ is a start state distribution, and T is the horizon. Our aim is to learn a stochastic policy πθ : S × A → R+ parametrized by θ that maximizes the expected return, ηρ0(πθ) = E_{s0∼ρ0} R(π, s0). We denote by R(π, s0) := E_{τ|s0}[Σ_{t=0}^{T} r(st, at)] the expected return when starting from s0 ∼ ρ0, where τ = (s0, a0, . . . , aT−1, sT) denotes a whole trajectory, with at ∼ πθ(at|st), and
st+1 ∼ P(st+1 |st , at ). Policy search methods iteratively collect trajectories on-policy (i.e. sampling
from the above distributions ρ0 , πθ , P) and use them to improve the current policy [29, 30, 31].
In our work we propose to instead use a different start-state distribution ρi at every training iteration i, chosen so as to maximize the learning speed. Learning progress is still evaluated based on the original distribution ρ0. Convergence of ρi to ρ0 is desirable but not required, as an optimal policy πi* under a start distribution ρi is also optimal under any other ρ0, as long as their supports coincide. In the case of
approximately optimal policies under ρi , bounds on the performance under ρ0 can be derived [26].
3.2 Goal-oriented tasks
We consider the general problem of reaching a certain goal space S^g ⊂ S from any start state in S^0 ⊂ S. This simple, high-level description can be translated into an MDP without further domain knowledge by using a binary reward function r(st) = 1{st ∈ S^g} and a uniform distribution over the start states ρ0 = Unif(S^0). We terminate the episode when the goal is reached. This implies
that the return R(π, s0 ) associated with every start state s0 is the probability of reaching the goal at
some time-step t ∈ {0 . . . T }.
$$R(\pi, s_0) = \mathbb{E}_{\pi(\cdot|s_t)}\, \mathbb{1}\Big\{\bigcup_{t=0}^{T} s_t \in S^g \,\Big|\, s_0\Big\} = \mathbb{P}\Big(\bigcup_{t=0}^{T} s_t \in S^g \,\Big|\, \pi, s_0\Big) \qquad (1)$$
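Since the episode terminates on success, R(π, s0) is simply the probability of reaching S^g within the horizon, which can be estimated by Monte Carlo rollouts. The sketch below is illustrative only; env.reset_to, env.step and policy.sample_action are assumed interfaces (relying on Assumption 1-style resets), not the released code.

import numpy as np

def estimate_return(env, policy, s0, horizon=500, n_rollouts=20, rng=None):
    """Monte Carlo estimate of R(pi, s0): fraction of rollouts that reach the goal set S^g."""
    rng = rng or np.random.default_rng()
    successes = 0
    for _ in range(n_rollouts):
        s = env.reset_to(s0)                   # assumed interface: reset to an arbitrary state
        for _ in range(horizon):
            a = policy.sample_action(s, rng)   # stochastic policy pi(a | s); assumed interface
            s, reached_goal = env.step(a)      # reached_goal is the indicator 1{s in S^g}
            if reached_goal:                   # episode terminates as soon as the goal is reached
                successes += 1
                break
    return successes / n_rollouts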
As advocated by Rajeswaran et al. [22], it is important to be able to train an agent to achieve the goal
from a large set of start states S 0 . An agent trained in this way would be much more robust than an
agent that is trained from just a single start state, as it could recover from undesired deviations from
the intended trajectory. Therefore, we choose the set of start states S 0 to be all the feasible points
in a wide area around the goal. On the other hand, the goal space S g for our robotics fine-grained
manipulation tasks is defined to be a small set of states around the desired configuration (e.g. key in
the key-hole, or ring at the bottom of the peg, as described in Sec. 5).
As discussed above, the sparsity of this reward function makes learning extremely difficult for RL
algorithms [6, 32, 33], and approaches like reward shaping [8] are difficult and time-consuming to
engineer for each task. In the following subsection we introduce three assumptions, and the rest of
the paper describes how we can leverage these assumptions to efficiently learn to achieve complex
goal-oriented tasks directly from sparse reward functions.
3.3 Assumptions for reverse curriculum generation
In this work we study how to exploit three assumptions that hold true in a wide range of practical
learning problems (especially if learned in simulation):
Assumption 1 We can arbitrarily reset the agent into any start state s0 ∈ S at the beginning of all
trajectories.
Assumption 2 At least one state sg is provided such that sg ∈ S g .
Assumption 3 The Markov Chain induced by taking uniformly sampled random actions has a communicating class1 including all start states S 0 and the given goal state sg .
The first assumption has been considered previously (e.g. access to a generative model in Kearns
et al. [27]) and is deemed to be a considerably weaker assumption than having access to the full
transition model of the MDP. The use of Assumption 1 to improve the learning in MDPs that require
large exploration has already been demonstrated by Kakade and Langford [26]. Nevertheless, they
do not propose a concrete procedure to choose a distribution ρ from which to sample the start states
in order to maximally improve on the objective in Eq. (1). In our case, combining Assumption 1
with Assumption 2 we are able to reset the state to sg , which is critical in our method to initialize
the start state distribution to concentrate around the goal space at the beginning of learning. For the
second assumption, note that we only assume access to one state sg in the goal region; we do not
require a description of the full region nor trajectories leading to it. Finally, Assumption 3 ensures
that the goal can be reached from any of the relevant start states, and that those start states can
also be reached from the goal; this assumption is satisfied by many robotic problems of interest,
as long as there are no major irreversibilities in the system. In the next sections we detail our
automatic curriculum generation method based on continuously adapting the start state distribution
to the current performance of the policy. We demonstrate the value of this method for challenging
robotic manipulation tasks.
4 Methodology
In a wide range of goal-oriented RL problems, reaching the goal from an overwhelming majority
of start states in S 0 requires a prohibitive amount of on-policy or undirected exploration. On the
other hand, it is usually easy for the learning agent (i.e. our current policy πi ) to reach the goal S g
from states nearby a goal state sg ∈ S g . Therefore, learning from these states will be fast because
the agent will perceive a strong signal, even under the indicator reward introduced in Section 3.2.
Once the agent knows how to reach the goal from these nearby states, it can train from even further
states and bootstrap its already acquired knowledge. This reverse expansion is inspired by classical
RL methods like Value Iteration or Policy Iteration [34], although in our case we do not assume
knowledge of the transition model and our environments have high-dimensional continuous action-state spaces. In the following subsections we propose a method that leverages the Assumptions
from the previous section and the idea of reverse expansion to automatically adapt the start state
distribution, generating a curriculum of start state distributions that can be used to tackle problems
unsolvable by standard RL methods.
1
A communicating class is a maximal set of states C such that every pair of states in C communicates with
each other. Two states communicate if there is a non-zero probability of reaching one from the other.
Algorithm 1: Policy Training
Input: π0, sg, ρ0, Nnew, Nold, Rmin, Rmax, Iter
Output: Policy πN
  starts_old ← [sg];
  starts, rews ← [sg], [1];
  for i ← 1 to Iter do
    starts ← SampleNearby(starts, Nnew);
    starts.append[sample(starts_old, Nold)];
    ρi ← Unif(starts);
    πi, rews ← train_pol(ρi, πi−1);
    starts ← select(starts, rews, Rmin, Rmax);
    starts_old.append[starts];
    evaluate(πi, ρ0);
  end

Procedure 2: SampleNearby
Input: starts, Nnew, Σ, TB, M
Output: starts_new
  while len(starts) < M do
    s0 ∼ Unif(starts);
    for t ← 1 to TB do
      at = εt, εt ∼ N(0, Σ);
      st ∼ P(st | st−1, at);
      starts.append(st);
    end
  end
  starts_new ← sample(starts, Nnew)
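A direct Python transcription of Procedure 2 could look as follows (a sketch under assumed interfaces: env.reset_to for arbitrary resets as in Assumption 1, env.step returning the next state, and env.action_dim; this is not the released implementation).

import numpy as np

def sample_nearby(env, starts, n_new, sigma, t_b=50, m=10000, rng=None):
    """Sketch of Procedure 2: grow the start list with short random-action rollouts, then subsample."""
    rng = rng or np.random.default_rng()
    starts = list(starts)
    while len(starts) < m:
        s = starts[rng.integers(len(starts))]            # s0 ~ Unif(starts)
        env.reset_to(s)                                   # assumed interface (Assumption 1)
        for _ in range(t_b):
            a = rng.multivariate_normal(np.zeros(env.action_dim), sigma)  # a_t = eps_t ~ N(0, Sigma)
            s = env.step(a)                               # assumed to return the next state
            starts.append(s)                              # every visited state is feasible
    idx = rng.choice(len(starts), size=n_new, replace=False)
    return [starts[i] for i in idx]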
4.1 Policy Optimization with modified start state distribution
Policy gradient strategies are well suited for robotic tasks with continuous and high dimensional
action-spaces [35]. Nevertheless, applying them directly on the original MDP does poorly in tasks
with sparse rewards and long horizons like our challenging manipulation tasks. If the goal is not
reached from the start states in S 0 , no reward is received, and the policy cannot improve. Therefore,
we propose to adapt the distribution ρi from where start states s0 are sampled to train policy πi .
Analogously to Held et al. [5], we postulate that in goal-oriented environments, a strong learning
signal is obtained when training on start states s0 ∼ ρi from where the agent reaches the goal
sometimes, but not always. We call these start states “good starts”. More formally, at training
iteration i, we would like to sample from ρi = Unif(Si0 ) where Si0 = {s0 : Rmin < R(πi , s0 ) <
Rmax }. The hyper-parameters Rmin and Rmax are easy to tune due to their interpretation as bounds
on the probability of success, derived from Eq. (1). Unfortunately, sampling uniformly from Si0
is intractable. Nevertheless, at least at the beginning of training, states nearby a goal state sg are more likely to be in S^0_i. Then, after some iterations of training on these start states, some will be completely mastered (i.e. their return exceeds Rmax and they are no longer in S^0_{i+1}), but others will still need more training. To find more “good starts”, we follow the same reasoning: the states nearby these remaining s0 ∈ S^0_{i+1} are likely to also be in S^0_{i+1}. In the rest of the section we describe an effective way of sampling feasible nearby states and we lay out the full algorithm.
4.2 Sampling “nearby” feasible states
For robotic manipulation tasks with complex contacts and constraints, applying noise in state-space
s' = s + ε, ε ∼ N, may yield many unfeasible start states s'. For example, even small random
perturbations of the joint angles of a seven degree-of-freedom arm generate large modifications to
the end-effector position, potentially placing it in an infeasible state that intersects with surrounding
objects. For this reason, the concept of “nearby” states might be unrelated to the Euclidean distance
ks0 − sk2 between these states. Instead, we have to understand proximity in terms of how likely it
is to reach one state from the other by taking actions in the MDP.
Therefore, we choose to generate new states s0 from a certain seed state s by applying noise in action
space. This means we exploit Assumption 1 to reset the system to state s, and from there we execute
short “Brownian motion” rollouts of horizon TB, taking actions a_{t+1} = ε_t with ε_t ∼ N(0, Σ). This method of generating “nearby” states is detailed in Procedure 2. The total number of sampled states M should
be large enough such that the Nnew desired states startsnew , obtained by subsampling, extend in all
directions around the input states starts. All states visited during the rollouts are guaranteed to be
feasible and can then be used as start states to keep training the policy.
4.3 Detailed Algorithm
Our generic algorithm is detailed in Algorithm 1. We first initialize the policy with π0 and the
“good start” states list starts with the given goal state sg . Then we perform Iter training iterations
of our RL algorithm of choice train pol. In our case we perform 5 iterations of Trust Region
Policy Optimization (TRPO) [36] but any on-policy method could be used. At every iteration, we
set the start state distribution ρi to be uniform over a list of start states obtained by sampling Nnew
start states from nearby the ones in our “good starts” list starts (see SampleNearby in previous
section), and Nold start states from our replay buffer of previous “good starts” startsold . As already
shown by Held et al. [5], the replay buffer is an important feature to avoid catastrophic forgetting.
Technically, to check which of the states s0 ∈ starts are in Si0 (i.e. the “good starts”) we should
execute some trajectories from each of those states to estimate the expected returns R(s0 , πi−1 ),
but this considerably increases the sample complexity. Instead, we use the trajectories collected by
train pol to estimate R(πi−1 , s0 ) and save it in the list rews. These are used to select the
“good” start states for the next iteration - picking the ones with Rmin ≤ R(πi−1 , s0 ) ≤ Rmax . We
found this heuristic to give a good enough estimate and not drastically decrease learning performance
of the overall algorithm.
Our method keeps expanding the region of the state-space from which the policy can reach the goal
reliably. It samples more heavily nearby the start states that need more training to be mastered, while avoiding start states that are still too far away to receive any reward under the current policy. Now, thanks to
Assumption 3, the Brownian motion used to generate further and further start states will eventually
reach all start states in S 0 , and therefore our method improves the metric ηρ0 defined in Sec. 3.1 (see
also Sec. A.2 for a practical implementation).
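For concreteness, the outer loop of Algorithm 1 can be sketched as below, reusing the sample_nearby helper from the earlier sketch; train_policy stands for a few iterations of any on-policy method such as TRPO, and all interfaces are placeholders rather than the released code.

import numpy as np

def reverse_curriculum(env, policy, goal_state, train_policy, evaluate,
                       n_new=200, n_old=100, r_min=0.1, r_max=0.9, iters=100, rng=None):
    """Sketch of Algorithm 1: grow the start-state distribution backwards from the goal."""
    rng = rng or np.random.default_rng()
    starts = [goal_state]
    starts_old = [goal_state]            # replay buffer of previously selected "good starts"
    for _ in range(iters):
        # New candidate starts: Brownian-motion rollouts around the current good starts (Procedure 2).
        starts = sample_nearby(env, starts, n_new, sigma=np.eye(env.action_dim), rng=rng)
        # Mix in some old starts to avoid catastrophic forgetting.
        starts += [starts_old[j] for j in rng.integers(len(starts_old), size=n_old)]
        # A few on-policy updates (e.g. TRPO) with episodes reset to Unif(starts);
        # `returns` holds the estimated success probability R(pi, s0) of each start state.
        policy, returns = train_policy(policy, start_states=starts)
        # Keep only the "good starts": success probability between r_min and r_max.
        starts = [s for s, r in zip(starts, returns) if r_min <= r <= r_max]
        starts_old += starts
        evaluate(policy, env)            # performance is always measured on the original rho_0
    return policy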
5 Experimental Results
We investigate the following questions in our experiments:
• Does the performance of the policy on the target start state distribution ρ0 improve by training
on distributions ρi growing from the goal?
• Does focusing the training on “good starts” speed up learning?
• Is Brownian motion a good way to generate “good starts” from previous “good starts”?
We use the below task settings to explore these questions. All are implemented in MuJoCo [37] and
the hyperparameters used in our experiments are described in Appendix A.1.
Figure 1: Task images: (a) Point-mass maze task, (b) Ant maze task, (c) Ring on Peg task, (d) Key insertion task. Source code and videos of the performance obtained by our algorithm are available here: http://bit.ly/reversecurriculum
Point-mass maze: (Fig. 1a) A point-mass agent (orange) must navigate within 30cm of the goal
position (4m, 4m) at the end of a G-shaped maze (red). The target start state distribution from
which we seek to reach the goal is uniform over all feasible (x, y) positions in the maze.
Ant maze: (Fig. 1b) A quadruped robot (orange) must navigate its Center of Mass to within 50cm
of the goal position (0m, 4m) at the end of a U-shaped maze (red). The target start state distribution
from which we seek to reach the goal is uniform over all feasible ant positions inside the maze.
Ring on Peg: (Fig. 1c) A 7 DOF robot must learn to place a “ring” (actually a square disk with a
hole in the middle) on top of a tight-fitting round peg. The task is complete when the ring is within
3 cm of the bottom of the 15 cm tall peg. The target start state distribution from which we seek to
reach the goal is uniform over all feasible joint positions for which the center of the ring is within
40 cm of the bottom of the peg.
Key insertion: (Fig. 1d) A 7 DOF robot must learn to insert a key into a key-hole. The task is
completed when the distance between three reference points at the extremities of the key and its
corresponding targets is below 3cm. In order to reach the target, the robot must first insert the key
at a specific orientation, then rotate it 90 degrees clockwise, push forward, then rotate 90 degrees
counterclockwise. The target start state distribution from which we seek to reach the goal is uniform
over all feasible joint positions such that the tip of the key is within 40 cm of key-hole.
5.1 Effect of start state distribution
In Figure 2, the Uniform Sampling (baseline) red curves show the average return of policies learned
with TRPO without modifying the start state distribution. The green and blue curves correspond
to our method and an ablation, both exploiting the idea of modifying the start state distribution at
every learning iterations. These approaches perform consistently better across the board. In the
case of the point-mass maze navigation task in Fig. 2a, we observe that Uniform Sampling has a
very high variance because some policies only learn how to perform well from one side of the goal
(see Appendix B.2 for a thorough analysis). The Ant-maze experiments in Fig. 2b also show a
considerable slow-down of the learning speed when using plain TRPO, although the effect is less
drastic as the start state distribution ρ0 is over a smaller space.
In the more complex manipulation tasks shown in Fig. 2c-2d, we see that the probability of reaching
the goal with Uniform Sampling is around 10% for the ring task and 2% for the key task. These
success probabilities correspond to reliably reaching the goal only from very nearby positions: when
the ring is already on the peg or when the key is initialized very close to the final position. None of
the learned policies trained on the original ρ0 learn to reach the goal from more distant start states.
On the other hand, our methods do succeed at reaching the goal from a wide range of far away start
states. The underlying RL training algorithm and the evaluation metric are the same. We conclude
that training on a different start state distribution ρi can improve training or even allow it at all.
Figure 2: Learning curves for goal-oriented tasks (mean and variance over 5 random seeds): (a) Point-mass Maze task, (b) Ant Maze task, (c) Ring on Peg task, (d) Key insertion task.
5.2 Effect of “good starts”
In Figure 2 we see how applying our Algorithm 1 to modify the start state distribution considerably
improves learning (Brownian on Good Starts, in green) and final performance on the original MDP.
Two elements are involved in this improvement: first, the backwards expansion from the goal, and
second, the concentration of training efforts on “good starts”. To test the relevance of this second
element, we ablate our method by running our SampleNearby Procedure 2 on all states from
which the policy was trained in the previous iteration. In other words, the select function in
Algorithm 1 is replaced by the identity, returning all starts independently of the rewards rews they
obtained during the last training iteration. The resulting algorithm performance is shown as the
Brownian from All Starts blue curve in Figure 2. As expected, this method is still better than not modifying the start state distribution, but it learns more slowly than running SampleNearby around the estimated good starts.
Now we evaluate an upper bound of the benefit provided by our idea of sampling “good starts”. As mentioned in Sec. 4.1, we would ideally like to sample start states from ρi = Unif(S^0_i), but this is intractable. Instead, we evaluate states in S^0_{i−1}, and we use Brownian motion to find nearby states, to approximate S^0_i. We can evaluate how much this approximation hinders learning by exhaustively sampling states in the lower dimensional point-mass maze task. To do so, at every iteration we sample states s0 uniformly from the state-space S, empirically estimate their return R(s0, πi), and reject the ones that are not in the set S^0_i = {s0 : Rmin < R(πi, s0) < Rmax}. This exhaustive sampling method is orders of magnitude more expensive in terms of sample complexity, so it would not be of practical use. In particular, we can only run it in the easier point-mass maze task. Its performance is shown in the brown curve of Fig. 2a, called “Oracle (rejection sampling)”; training on states sampled in such a manner further improves the learning rate and final performance. Thus we can see that our approximation of using states in S^0_{i−1} to find states in S^0_i leads to some loss in performance, at the benefit of a greatly reduced computation time.
Finally, we compare to another way of generating start states based on the asymmetric self-play
method of Sukhbaatar et al. [38]. The basic idea is to train another policy, “Alice”, that proposes start
states to the learning policy, “Bob”. As can be seen, this method performs very poorly in the point-mass maze task, and our investigation shows that “Alice” often gets stuck in a local optimum, leading
to poor start states suggestions for “Bob”. In the original paper, the method was demonstrated only
on discrete action spaces, in which a multi-modal distribution for Alice can be maintained; even in
such settings, the authors observed that Alice can easily get stuck in local optima. This problem is
exacerbated when moving to continuous action spaces defined by a unimodal Gaussian distribution.
See a detailed analysis of these failure modes in Appendix B.3.
5.3 Brownian motion to generate “nearby” states
Here we evaluate if running our Procedure 2 SampleNearby around “good starts” yields more
good starts than running SampleNearby from all previously visited states. This can clearly be
seen in Figs. 3a-3b for the robotic manipulation tasks.
Figure 3: Fraction of Good Starts generated during training for the robotic manipulation tasks: (a) Key insertion task, (b) Ring on Peg task.
6 Conclusions and Future Directions
We propose a method to automatically adapt the start state distribution on which an agent is trained,
such that the performance on the original problem is efficiently optimized. We leverage three assumptions commonly satisfied in simulated tasks to tackle hard goal-oriented problems that state of
the art RL methods cannot solve.
A limitation of the current approach is that it generates start states that grow from a single goal
uniformly outwards, until they cover the original start state distribution Unif(S 0 ). Nevertheless, if
the target set of start states S 0 is far from the goal and we have some prior knowledge, it would
be interesting to bias the generated start distributions ρi towards the desired start distribution. A
promising future line of work is to combine the present automatic curriculum based on start state
generation with goal generation [5], similar to classical results in planning [39].
It can be observed in the videos of our final policy for the manipulation tasks that the agent has
learned to exploit the contacts instead of avoiding them. Therefore, the learning based aspect of the
presented method has a huge potential to tackle problems that classical motion planning algorithms
could struggle with, such as environments with non-rigid objects or with uncertainties in the task
geometric parameters. We also leave as future work to combine our curriculum-generation approach
with domain randomization methods [40] to obtain policies that are transferable to the real world.
References
[1] J. Schulman, P. Moritz, S. Levine, M. Jordan, and P. Abbeel. High-Dimensional continuous
control using generalized advantage estimation. In International Conference on Learning Representation, 2015.
[2] V. Mnih, K. Kavukcuoglu, D. Silver, A. A. Rusu, J. Veness, M. G. Bellemare, A. Graves,
M. Riedmiller, A. K. Fidjeland, G. Ostrovski, et al. Human-level control through deep reinforcement learning. Nature, 518(7540):529–533, 2015.
[3] T. P. Lillicrap, J. J. Hunt, A. Pritzel, N. Heess, T. Erez, Y. Tassa, D. Silver, and D. Wierstra.
Continuous control with deep reinforcement learning. arXiv preprint arXiv:1509.02971, 2015.
[4] S. Levine, C. Finn, T. Darrell, and P. Abbeel. End-to-end training of deep visuomotor policies.
Journal of Machine Learning Research, 17(39):1–40, 2016.
[5] D. Held, X. Geng, C. Florensa, and P. Abbeel. Automatic goal generation for reinforcement
learning agents. arXiv preprint arXiv:1705.06366, abs/1705.06366, 2017.
[6] Y. Duan, X. Chen, R. Houthooft, J. Schulman, and P. Abbeel. Benchmarking deep reinforcement learning for continuous control. International Conference on Machine Learning, 2016.
[7] I. Popov, N. Heess, T. Lillicrap, R. Hafner, G. Barth-Maron, M. Vecerik, T. Lampe, Y. Tassa,
T. Erez, and M. Riedmiller. Data-efficient deep reinforcement learning for dexterous manipulation. arXiv preprint arXiv: 1704.03073, 2017.
[8] A. Y. Ng, D. Harada, and S. Russell. Policy invariance under reward transformations: Theory and application to reward shaping. In International Conference in Machine Learning,
volume 99, pages 278–287, 1999.
[9] Y. Bengio, J. Louradour, R. Collobert, and J. Weston. Curriculum learning. In International
Conference on Machine Learning, pages 41–48. ACM, 2009.
[10] W. Zaremba and I. Sutskever. Learning to execute. CoRR, abs/1410.4615, 2014.
[11] S. Bengio, O. Vinyals, N. Jaitly, and N. Shazeer. Scheduled sampling for sequence prediction
with recurrent neural networks. In Advances in Neural Information Processing Systems, 2015.
[12] A. Graves, M. G. Bellemare, J. Menick, R. Munos, and K. Kavukcuoglu. Automated Curriculum Learning for Neural Networks. arXiv preprint, arXiv:1704.03003, 2017.
[13] M. P. Kumar, B. Packer, and D. Koller. Self-paced learning for latent variable models. In
Advances in Neural Information Processing Systems, pages 1189–1197, 2010.
[14] L. Jiang, D. Meng, Q. Zhao, S. Shan, and A. G. Hauptmann. Self-paced curriculum learning.
In AAAI, volume 2, page 6, 2015.
[15] M. Asada, S. Noda, S. Tawaratsumida, and K. Hosoda. Purposive behavior acquisition for a
real robot by Vision-Based reinforcement learning. Machine Learning, 1996.
[16] A. Karpathy and M. Van De Panne. Curriculum learning for motor skills. In Canadian Conference on Artificial Intelligence, pages 325–330. Springer, 2012.
[17] J. Schmidhuber. PowerPlay: Training an Increasingly General Problem Solver by Continually Searching for the Simplest Still Unsolvable Problem. Frontiers in Psychology, 2013.
[18] R. K. Srivastava, B. R. Steunebrink, M. Stollenga, and J. Schmidhuber. Continually adding
self-invented problems to the repertoire: First experiments with POWERPLAY. In IEEE International Conference on Development and Learning and Epigenetic Robotics, 2012.
[19] A. Baranes and P.-Y. Oudeyer. Active learning of inverse models with intrinsically motivated
goal exploration in robots. Robotics and Autonomous Systems, 61(1), 2013.
[20] S. Sharma and B. Ravindran. Online Multi-Task Learning Using Biased Sampling. arXiv
preprint arXiv: 1702.06053, 2017.
[21] S. Sukhbaatar, I. Kostrikov, A. Szlam, and R. Fergus. Intrinsic Motivation and Automatic
Curricula via Asymmetric Self-Play. arXiv preprint, arXiv: 1703.05407, 2017.
[22] A. Rajeswaran, K. Lowrey, E. Todorov, and S. Kakade. Towards generalization and simplicity
in continuous control. arXiv preprint, arXiv:1703.02660, 2017.
[23] R. Tedrake, I. R. Manchester, M. Tobenkin, and J. W. Roberts. Lqr-trees: Feedback motion
planning via sums-of-squares verification. The International Journal of Robotics Research, 29
(8):1038–1052, 2010.
[24] R. R. Burridge, A. A. Rizzi, and D. E. Koditschek. Sequential composition of dynamically
dexterous robot behaviors. The International Journal of Robotics Research, 1999.
[25] J. A. Bagnell, S. Kakade, A. Y. Ng, and J. Schneider. Policy search by dynamic programming.
Advances in Neural Information Processing Systems, 16:79, 2003.
[26] S. Kakade and J. Langford. Approximately Optimal Approximate Reinforcement Learning.
International Conference in Machine Learning, 2002.
[27] M. Kearns, Y. Mansour, and A. Y. Ng. A Sparse Sampling Algorithm for Near-Optimal Planning in Large Markov Decision Processes. Machine Learning, 49(2/3):193–208, 2002.
[28] K. Subramanian, C. L. Isbell, Jr., and A. L. Thomaz. Exploration from demonstration for interactive reinforcement learning. In Proceedings of the International Conference on Autonomous
Agents & Multiagent Systems, 2016.
[29] D. P. Bertsekas. Dynamic programming and optimal control, volume 1. Athena Scientific, Belmont, MA, 1995.
[30] J. Peters and S. Schaal. Reinforcement learning of motor skills with policy gradients. Neural
networks, 21(4):682–697, 2008.
[31] I. Szita and A. Lörincz. Learning Tetris using the noisy cross-entropy method. Neural Computation, 18(12), 2006.
[32] I. Osband, B. Van Roy, and Z. Wen. Generalization and exploration via randomized value
functions. In International Conference on Machine Learning, 2016.
[33] S. D. Whitehead. Complexity and Cooperation in Q-Learning. Machine Learning Proceedings
1991, pages 363–367, 1991.
[34] R. S. Sutton and A. G. Barto. Reinforcement Learning: An Introduction. MIT press Cambridge,
1998.
[35] M. P. Deisenroth, G. Neumann, J. Peters, et al. A survey on policy search for robotics. Foundations and Trends in Robotics, 2(1–2):1–142, 2013.
[36] J. Schulman, S. Levine, P. Abbeel, M. Jordan, and P. Moritz. Trust region policy optimization.
In International Conference on Machine Learning, pages 1889–1897, 2015.
[37] E. Todorov, T. Erez, and Y. Tassa. Mujoco: A physics engine for model-based control. In
IEEE/RSJ International Conference on Intelligent Robots and Systems, 2012.
[38] S. Sukhbaatar, I. Kostrikov, A. Szlam, and R. Fergus. Intrinsic motivation and automatic
curricula via asymmetric self-play. arXiv preprint arXiv:1703.05407, 2017.
[39] J. Kuffner and S. LaValle. RRT-connect: An efficient approach to single-query path planning.
In IEEE International Conference on Robotics and Automation, volume 2, pages 995–1001.
IEEE, 2000.
[40] J. Tobin, R. Fong, A. Ray, J. Schneider, W. Zaremba, and P. Abbeel. Domain Randomization
for Transferring Deep Neural Networks from Simulation to the Real World. arXiv preprint,
arXiv:1703.06907, 2017.
A Experiment Implementation Details
A.1 Hyperparameters
Here we describe the hyperparameters used for our method. At each iteration, we generate new start
states, (as described in Section 4.2 and Procedure 2), which we append to the seed states until we
have a total of M = 10000 start states. We then subsample these down to Nnew = 200 new start
states. These are appended with Nold = 100 sampled old start states (as described in Section 4.3
and Procedure 1), and these states are used to initialize our agent when we train our policy. The
“Brownian motion” rollouts have a horizon of TB = 50 timesteps, and the actions taken are random,
sampled from a standard normal distribution (e.g. a 0-mean Gaussian with a covariance Σ = I).
For our method as well as the baselines, we train a (64, 64) multi-layer perceptron (MLP) Gaussian
policy with TRPO [36], implemented with rllab [6]. We use a TRPO step-size of 0.01 and a (32, 32)
MLP baseline. For all tasks, we train with a batch size of 50,000 timesteps. All experiments use a
maximum horizon of T = 500 time steps but the Ant maze experiments that use a maximum horizon
of T = 2000. The episode ends as soon as the agent reaches a goal state. We define the goal set S g
to be a ball around the goal state, in which the ball has a radius of 0.03m for the ring and key tasks,
0.3m for the point-mass maze task and 0.5m for the ant-maze task. In our definition of Si0 , we use
Rmin = 0.1 and Rmax = 0.9. We use a discount factor γ = 0.998 for the optimization, in order to
encourage the policy to reach the goal as fast as possible.
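For reference, the hyperparameters listed in this section can be collected in a single configuration (values transcribed from the text above; the key names are our own):

HYPERPARAMS = {
    "M": 10000,                 # total states kept before subsampling in SampleNearby
    "N_new": 200,               # new start states per iteration
    "N_old": 100,               # replayed old start states per iteration
    "T_B": 50,                  # horizon of the Brownian-motion rollouts
    "brownian_sigma": "identity",       # covariance of the action noise (Sigma = I)
    "policy_hidden": (64, 64),          # MLP Gaussian policy layers
    "baseline_hidden": (32, 32),        # MLP baseline layers
    "trpo_step_size": 0.01,
    "batch_size": 50000,                # timesteps per TRPO batch
    "horizon": 500,                     # 2000 for the Ant maze experiments
    "goal_radius": {"ring": 0.03, "key": 0.03, "point_mass_maze": 0.3, "ant_maze": 0.5},
    "R_min": 0.1,
    "R_max": 0.9,
    "discount": 0.998,
}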
A.2 Performance metric
The aim of our tasks is to reach a specified goal region S g from all start states s0 ∈ S 0 that are
feasible and within a certain distance of that goal region. Therefore, to evaluate the progress on
ηρ0(πi) we need to collect trajectories starting at states uniformly sampled from S^0. For the point-mass maze navigation task this is straightforward, as the designer can give a concrete description
of the feasible (x, y) space, so we can uniformly sample from it. Nevertheless, it is not trivial to
uniformly sample from all feasible start states for the robotics tasks. In particular, the state space is in
joint angles and angular velocities of the 7 DOF arm, but the physical constraints of these contact-rich environments are given by the geometries of the task. Therefore, uniformly sampling from
the angular bounds mostly yields unfeasible states, with some part of the arm or the end-effector
intersecting with other objects in the scene. In order to approximate uniformly sampling from S 0 ,
we use Assumption 2: from the provided feasible goal state sg . We simply run our SampleNearby
procedure initialized with starts = [sg ] with a very large M and long time horizons TB . This
large aggregated state data-set is saved and samples from it are used as proxy to S 0 to evaluate the
performance of our algorithm. Figures 4a and 4b show six sampled start states from the data sets
used to evaluate the ring task and the key task. These data sets are available at the project website2
for future reproducibility and benchmarking.
Given the quasi-static nature of the tasks considered, we generate only initial angle positions, and
we set all initial velocities to zero. Generating initial velocities is a fairly simple extension of our
approach that we leave for future work.
B Other methods
B.1 Distance reward shaping
Although our method is able to train policies with sparse rewards, the policy optimization steps
train pol can use any kind of reward shaping available. To some extent, we already do this by
using a discount factor γ, which motivates the policies to reach the goal as soon as possible. Similar
reward modulations could be included to take into account energy penalties or reward shaping from
prior knowledge. For example, in the robotics tasks considered in this paper, the goal is defined
in terms of a reference state, and hence it seems natural to try to use the distance to this state as an
additional penalty to guide learning. However, we have found that this modification does not actually
improve training. For the start states near to the goal, the policy can learn to reach the goal simply
from the indicator reward introduced in Section 3.2. For the states that are further away, the distance
2
Videos, data sets and code available at: bit.ly/reversecurriculum
Figure 4: Samples from the test distribution for the manipulation tasks. (a) Uniformly sampled start states for the ring task: the data set contains 39,530 states, of which 5,660 have the ring with its hole already on the peg. (b) Uniformly sampled start states for the key task: the data set contains 544,575 states, of which 120,784 have the key somewhere inside the key-hole.
to the goal is actually not a useful metric to guide the policy; hence, the distance reward actually
guides the policy updates towards a suboptimal local optimum, leading to poor performance. In
Fig. 5 we see that the ring task is not much affected by the additional reward, whereas the key task
suffers considerably if this reward is added.
Figure 5: Learning curves for the robotics manipulation tasks: (a) Ring on Peg task, (b) Key insertion task.
B.2 Failure cases of Uniform Sampling for maze navigation
In the case of the maze navigation task, we observe that applying TRPO directly on the original
MDP incurs very high variance across learning curves. We have observed that some policies
only learned how to perform well from a certain side of the goal. The reason for this is that our
learning algorithm (TRPO) is a batch on-policy method; therefore, at the beginning of learning,
uniformly sampling from the state-space might give a batch with very few trajectories that reach
the goal and hence it is more likely that they all come from one side of the goal. In this case, the
algorithm will update the policy to go in the same direction from everywhere, wrongly extrapolating
from these very few successful trajectories it received. This is less likely to happen if the trajectories
for the batch are collected with a different start state distribution that concentrates more uniformly
around the goal, as the better learning progress of the other curves show.
B.3 Failure cases of Asymmetric Self-play
In Section 5.2, we compare the performance of our method to the asymmetric self-play approach
of Sukhbaatar et al. [38]. Although such an approach learns faster than the uniform sampling baseline, it gets stuck in a local optimum and fails to learn to reach the goal from more than 40% of
start-states in the point-mass maze task.
As explained above, part of the reason that this method gets stuck in a local optimum is that “Alice”
(the policy that is proposing start-states) is represented with a unimodal Gaussian distribution, which
is a common representation for policies in continuous action spaces. Thus Alice’s policy tends to
converge to moving in a single direction. In the original paper, this problem is somewhat mitigated
by using a discrete action space, in which a multi-modal distribution for Alice can be maintained.
However, even in such a case, the authors of the original paper also observed that Alice tends to
converge to a local optimum [38].
A further difficulty for Alice is that her reward function can be sparse, which can be inherently
difficult to optimize. Alice’s reward is defined as rA = max(0, tB − tA ), where tA is the time that
it takes Alice to reach a given start state from the goal (at which point Alice executes the “stop”
action), and tB is the time that it takes Bob to return to the goal from the start state. Based on this
reward, the optimal policy for Alice is to find the nearest state for which Bob does not know how
to return to the goal; this will lead to a large value for tB with a small value for tA . In theory, this
should lead to an automatic curriculum of start-states for Bob.
However, in practice, we find that sometimes, Bob’s policy might improve faster than Alice’s. In
such a case, Bob will have learned how to return to the goal from many start states much faster than
Alice can reach those start states from the goal. In such cases, we would have that tB < tA , and
hence rA = 0. Thus, Alice’s rewards are sparse (many actions that Alice takes result in 0 reward)
and hence it will be difficult for Alice's policy to improve, leading to a locally optimal policy for Alice. For these reasons, we often observe Alice's policy getting "stuck": Alice is unable to find new start-states to propose from which Bob does not already know how to reach the goal.
We have implemented a simple environment that illustrates these issues. In this environment, we
use a synthetic “Bob” that can reach the goal from any state within a radius rB from the goal. For
states within rB , Bob can reach the goal in a time proportional to the distance between the state
and the goal; in other words, for such states s0 ∈ {s : |s − sg | < rB , s ∈ S 0 }, we have that
tB = |s0 − sg |/vB , where |s0 − sg | is the distance between state s0 and the goal sg , and vB is Bob’s
speed. For states further than rB from the goal, Bob does not know how to reach the goal, and thus
tB is the maximum possible value.
This setup is illustrated in Figure 6. The region shown in red designates the area within rB from the
goal, i.e. the set of states from which Bob knows how to reach the goal. On the first iteration, Alice
has a random policy (Figure 6a). After 10 iterations of training, Alice has converged to a policy that
reaches the location just outside of the set of states from which Bob knows how to reach the goal
(Figure 6b). From these states, Alice receives a maximum reward, because tB is very large while
tA is low. Note that we also observe the unimodal nature of Alice’s policy; Alice has converged to
a policy which proposes just one small set of states among all possible states for which she would
receive a similar reward.
Figure 6: Simple environment to illustrate asymmetric self-play [38]; panels show iterations 1, 10, and 32. The red areas indicate the states from which Bob knows how to reach the goal. The blue points are the start-states proposed by Alice at each iteration (i.e. the states at which Alice performed the stop action).
At this point we synthetically increase rB , corresponding to the situation in which Bob learns how to
reach the goal from a larger set of states. However, Alice’s policy has already converged to reaching
a small set of states which were optimal for Bob’s previous policy. From these states Alice now
receives a reward of 0, as described above: Bob can return from these states quickly to the goal, so
13
we have that tB < tA and rA = 0. Thus, Alice does not receive any reward signal and is not able to
improve her policy. Hence, Alice’s policy remains stuck at this point and she is not able to find new
states to propose to Bob (Figure 6c).
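The dynamics described above can be reproduced in a few lines. The following Python sketch is our own illustration (not the authors' released code) of the synthetic "Bob" and of Alice's reward rA = max(0, tB − tA); the function names and the maximum-time cap are assumptions of the sketch.

```python
import numpy as np

# Minimal sketch of the synthetic "Bob" used above: Bob succeeds from any
# state within radius r_B of the goal, taking time proportional to the
# distance; outside r_B his return time is capped at a maximum value.
def bob_time(start, goal, r_B, v_B=1.0, t_max=100.0):
    dist = np.linalg.norm(start - goal)
    return dist / v_B if dist < r_B else t_max

# Alice's reward r_A = max(0, t_B - t_A): she is rewarded only when she finds
# a start state she reaches quickly but Bob cannot return from quickly.
def alice_reward(t_alice, start, goal, r_B):
    return max(0.0, bob_time(start, goal, r_B) - t_alice)

goal = np.zeros(2)
proposed = np.array([1.1, 0.0])          # a start state Alice proposes
print(alice_reward(t_alice=1.2, start=proposed, goal=goal, r_B=1.0))  # > 0
# Once Bob improves (r_B grows to 2.0), the same proposal earns 0 reward,
# which is the sparse-gradient failure mode described in the text.
print(alice_reward(t_alice=1.2, start=proposed, goal=goal, r_B=2.0))
```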
In this simple case, one could attempt to perform various hacks to try to fix the situation, e.g. by
artificially increasing Alice’s variance, or by resetting Alice to a random policy. However, note that,
in a real example, Bob is learning an increasingly complex policy, and so Alice would need to learn
an equally complex policy to find a set of states that Bob cannot succeed from; hence, these simple
fixes would not suffice to overcome this problem. Fundamentally, the asymmetric nature of the self-play between Alice and Bob creates a situation in which Alice has a difficult time learning and often
gets stuck in a local optimum from which she is unable to improve.
arXiv:1802.06292v2 [stat.ML] 6 Mar 2018
Nonparametric Estimation of Low Rank
Matrix Valued Function
Fan Zhou∗
School of Mathematics
Georgia Institute of Technology
Atlanta, GA 30332-0160
e-mail: [email protected]
Abstract: Let A : [0, 1] → H_m (the space of Hermitian matrices) be a matrix valued function which is low rank with entries in the Hölder class Σ(β, L). The goal of this paper is to study statistical estimation of A based on the regression model E(Y_j | τ_j, X_j) = ⟨A(τ_j), X_j⟩, where the τ_j are i.i.d. uniformly distributed in [0, 1], the X_j are i.i.d. matrix completion sampling matrices, and the Y_j are independent bounded responses. We propose an innovative nuclear norm penalized local polynomial estimator and establish an upper bound on its pointwise risk measured by the Frobenius norm. We then extend this estimator globally and prove an upper bound on its integrated risk measured by the L2-norm. We also propose another new estimator based on bias-reducing kernels to study the case when A is not necessarily low rank, and establish an upper bound on its risk measured by the L∞-norm. We show that the obtained rates are all optimal up to some logarithmic factor in the minimax sense. Finally, we propose an adaptive estimation procedure based on Lepskii's method and a penalized data splitting technique which is computationally efficient and can be easily implemented and parallelized.
MSC 2010 subject classifications: Primary 62G05, 62G08; secondary 62H12.
Keywords and phrases: Nonparametric estimation, low rank, matrix completion, nuclear norm penalization, local polynomial estimator, model selection.
1. Introduction
Let A : [0, 1] → Hm (the space of Hermitian matrices) be a matrix valued
function. The goal of this paper is to study the problem of statistical estimation
of A based on the regression model
E(Y_j | τ_j, X_j) = ⟨A(τ_j), X_j⟩,  j = 1, ..., n,   (1.1)

where the τ_j are i.i.d. time design variables uniformly distributed in [0, 1], the X_j are i.i.d. matrix completion sampling matrices, and the Y_j are independent bounded random responses. Sometimes it will be convenient to write model (1.1) in the form

Y_j = ⟨A(τ_j), X_j⟩ + ξ_j,  j = 1, ..., n,   (1.2)
where the noise variables ξj = Yj − E(Yj |τj , Xj ) are independent and have zero
means. In particular, we are interested in the case when A is low rank and
* Supported in part by NSF Grants DMS-1509739 and CCF-1523768.
satisfies certain smoothness condition. When A(t) = A0 with some A0 ∈ Hm
and for any t ∈ [0, 1], such problem coincides with the well known matrix completion/recovery problem which has drawn a lot of attention in the statistics
community during the past few years, see [7, 5, 6, 8, 16, 13, 18, 27, 25, 9]. The
low rank assumption in matrix completion/estimation problems has profound
practical background. For instance, when [20] introduced their famous work on
matrix factorization techniques for recommender systems, they considered temporal dynamics, see [19]. Another very common example is Euclidean distance
matrix (EDM) which contains the distance information of a large set of points
like molecules which are in low dimensional spaces such as R2 or R3 . To be more
specific, given m points p_1, ..., p_m in R^d, the EDM D ∈ R^{m×m} formed by them has entries D_{ij} = ‖p_i − p_j‖₂². Clearly, this matrix has rank at most d + 1 regardless of its size m. If m ≫ d, then the recovery problem falls into the low rank
realm. Similar topics in cases when points are fixed (see [34]) or in rigid motion
(see [28]) have been studied. While points are moving in smooth trajectories,
the EDMs are naturally high dimensional low rank matrix valued functions.
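The low rank structure of an EDM is easy to verify numerically. The following Python sketch (our own illustration, with arbitrarily chosen m, d and random points) builds an EDM from points in R^3 and checks that its numerical rank stays small no matter how many points are used.

```python
import numpy as np

# Sketch: a Euclidean distance matrix built from m points in R^d has small
# rank regardless of m, which is the low rank structure exploited in the text.
rng = np.random.default_rng(0)
m, d = 200, 3
P = rng.normal(size=(m, d))                       # points p_1, ..., p_m
sq = np.sum(P**2, axis=1)
D = sq[:, None] + sq[None, :] - 2 * P @ P.T       # D_ij = ||p_i - p_j||_2^2
print(np.linalg.matrix_rank(D, tol=1e-8))         # small compared with m = 200
```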
An appealing way to address the low rank issue in matrix completion/estimation
is through nuclear norm minimization, see [26]. In section 3, we inherit this idea
and propose a local polynomial estimator (see [11]) with nuclear norm penalization:
\hat{S}^h = \arg\min_{S ∈ D} (1/(nh)) Σ_{j=1}^{n} K((τ_j − t_0)/h) ( Y_j − ⟨ Σ_{i=0}^{ℓ} S_i p_i((τ_j − t_0)/h), X_j ⟩ )² + ε‖S‖₁,   (1.3)
where D ⊂ H(ℓ+1)m is a closed subset of block diagonal matrices with Sj ∈ Hm
on its diagonal, and {pi } is a sequence of orthogonal polynomials with nonnegative weight function K. The solution to the convex optimization problem (1.3)
induces a pointwise estimator of A(t₀): \hat{S}^h(t₀) := Σ_{i=0}^{ℓ} \hat{S}_i^h p_i(0). We prove that, under mild conditions, \hat{S}^h(t₀) achieves a rate of O((mr log n / n)^{2β/(2β+1)}) on the pointwise risk (1/m²)‖\hat{S}^h(t₀) − A(t₀)‖₂² over the Hölder class Σ(β, L) with low rank parameter r, where ‖·‖₂ denotes the Frobenius norm of a matrix. In section 4, we propose a new global estimator \hat{A} based on the local results and prove that \hat{A} achieves a rate of O((mr log n / n)^{2β/(2β+1)}) on the integrated risk measured by the L2-norm, i.e. (1/m²)∫₀¹ ‖\hat{A}(t) − A(t)‖₂² dt. Then we study another naive kernel estimator Ã which can be used to estimate matrix valued functions that are not necessarily low rank. This estimator is associated with another popular approach to low rank recovery, namely singular value thresholding, see [5, 18, 9]. We prove that Ã achieves a rate of O((m log n / n)^{2β/(2β+1)}) measured by sup_{t∈[h,1−h]} (1/m²)‖Ã(t) − A(t)‖², where ‖·‖ denotes the matrix operator norm.
Note that those rates coincide with that of classical matrix recovery/estimation
setting when the smoothness parameter β → ∞.
An immediate question is whether the above rates are optimal. In section 5, we prove that all the rates we established are optimal up to some logarithmic factor in the minimax sense, which essentially verifies the effectiveness of our methodology.
As one may have noticed, there is an adaptation issue involved in (1.3). Namely, one needs to choose a proper bandwidth h and a proper polynomial degree ℓ. Both parameters are closely related to the smoothness of A, which is unknown to us in advance. In section 6, we propose a model selection procedure based on Lepskii's method ([23]) and the work of [3] and [33]. We prove that this procedure adaptively selects an estimator whose integrated risk measured by the L2-norm is the smallest among all candidates, up to a negligible term. More importantly, such a procedure is computationally efficient, feasible in high dimensional settings, and easily parallelized.
The major contribution of our paper is that we generalize the recent developments of matrix completion/estimation theory to the low rank matrix valued function setting by proposing a new optimal estimation procedure. To the best of our knowledge, such problems have not previously been thoroughly studied from a theoretical point of view.
2. Preliminaries
In this section, we introduce some important definitions, basic facts, and notations for the convenience of presentation.
2.1. Notations
For any Hermitian matrices A, B ∈ H_m, denote ⟨A, B⟩ = tr(AB), which is known as the Hilbert-Schmidt inner product. Denote ⟨A, B⟩_{L2(Π)} = E⟨A, X⟩⟨B, X⟩, where Π denotes the distribution of X. The corresponding norm ‖A‖²_{L2(Π)} is given by ‖A‖²_{L2(Π)} = E⟨A, X⟩².
We use ‖·‖₂ to denote the Hilbert-Schmidt norm (Frobenius norm or Schatten 2-norm) generated by the inner product ⟨·, ·⟩; ‖·‖ to denote the operator norm (spectral norm) of a matrix, i.e. the largest singular value; ‖·‖₁ to denote the trace norm (Schatten 1-norm or nuclear norm), i.e. the sum of singular values; and |A| to denote the nonnegative matrix with entries |A_{ij}| corresponding to A.
We denote

σ² := Eξ²,   σ_X² := n^{−1} Σ_{j=1}^{n} E X_j²,   U_X := ‖ ‖X‖ ‖_{L∞}.
2.2. Matrix Completion and statistical learning setting
The matrix completion setting refers to the case where the random sampling matrices X_j are i.i.d. uniformly distributed on the following orthonormal basis 𝒳 of H_m:

𝒳 := {E_{kj} : k, j = 1, ..., m},
where E_{kk} := e_k ⊗ e_k, k = 1, ..., m; E_{jk} := (1/√2)(e_k ⊗ e_j + e_j ⊗ e_k), 1 ≤ k < j ≤ m; E_{kj} := (i/√2)(e_k ⊗ e_j − e_j ⊗ e_k), 1 ≤ k < j ≤ m, with {e_j}_{j=1}^m being the canonical basis of C^m. The following identities are easy to check when the design matrices are under the matrix completion setting:

‖A‖²_{L2(Π)} = (1/m²)‖A‖₂²,   σ_X² ≤ 2/m,   U_X = 1.   (2.1)
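As a concrete check of (2.1), the basis 𝒳 and the identity E⟨A, X⟩² = ‖A‖₂²/m² can be verified directly. The Python sketch below is our own illustration (the helper name completion_basis and the test matrix are ours, not the paper's).

```python
import numpy as np

# Sketch: construct the orthonormal basis of Hermitian m x m matrices used in
# the matrix completion setting and check E<A, X>^2 = ||A||_2^2 / m^2 when X
# is drawn uniformly from the basis.
def completion_basis(m):
    e = np.eye(m)
    basis = [np.outer(e[k], e[k]) for k in range(m)]
    for k in range(m):
        for j in range(k + 1, m):
            basis.append((np.outer(e[k], e[j]) + np.outer(e[j], e[k])) / np.sqrt(2))
            basis.append(1j * (np.outer(e[k], e[j]) - np.outer(e[j], e[k])) / np.sqrt(2))
    return basis  # m^2 matrices, orthonormal for <A, B> = tr(AB)

m = 4
X = completion_basis(m)
rng = np.random.default_rng(0)
B = rng.normal(size=(m, m)) + 1j * rng.normal(size=(m, m))
A = B + B.conj().T                                     # an arbitrary Hermitian test matrix
lhs = np.mean([np.real(np.trace(A @ E))**2 for E in X])  # E<A, X>^2 over uniform X
rhs = np.linalg.norm(A, 'fro')**2 / m**2
print(np.isclose(lhs, rhs))                            # True, matching (2.1)
```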
The statistical learning setting refers to the bounded response case: there exists a constant a such that

max_{j=1,...,n} |Y_j| ≤ a,  a.s.   (2.2)

In this paper, we will consider model (1.1) under both the matrix completion and the statistical learning setting.
2.3. Matrix Valued Functions
Let A : [0, 1] → H_m be a matrix valued function. We take the image space to be the space of Hermitian matrices for convenience of presentation; our methods and results can be readily extended to general rectangular matrices. Now we define the rank of a matrix valued function: let rank_A(t) := rank(A(t)), ∀t ∈ [0, 1].
Definition 1. Let β and L be two positive real numbers. The Hölder class
Σ(β, L) on [0, 1] is defined as the set of ℓ = ⌊β⌋ times differentiable functions
f : [0, 1] → R with derivative f (ℓ) satisfying
|f (ℓ) (x) − f (ℓ) (x′ )| ≤ L|x − x′ |β−ℓ , ∀x, x′ ∈ [0, 1].
(2.3)
In particular, we are interested in the following assumptions on matrix valued
functions.
A1. Given a measurement matrix X and for some constant a, sup_{t∈[0,1]} ⟨A(t), X⟩ ≤ a.

A2. Given a measurement matrix X and for some constant a, the derivative matrices A^{(k)} of A satisfy sup_{t∈[0,1]} ⟨A^{(k)}(t), X⟩ ≤ a, k = 1, ..., ℓ.

A3. The ranks of A, A′, ..., A^{(ℓ)} are uniformly bounded by a constant r: sup_{t∈[0,1]} rank_{A^{(k)}}(t) ≤ r, k = 0, 1, ..., ℓ.

A4. For all i, j, a_{ij} is in the Hölder class Σ(β, L).
3. A local polynomial Lasso estimator
In this section, we study the estimation of matrix valued functions that are
low rank. The construction of our estimator is inspired by localization of nonparametric least squares and nuclear norm penalization. The intuition of the
localization technique originates from classical local polynomial estimators, see
[11]. The intuition behind nuclear norm penalization is that whereas the rank function counts the number of non-vanishing singular values, the nuclear norm sums their amplitude. The theoretical foundations of the nuclear norm heuristic for rank minimization were established by [26]. Instead of using the trivial basis {1, t, t², ..., t^ℓ} to generate an estimator, we use orthogonal polynomials, which fit our problem better. Let {p_i}_{i=0}^∞ be a sequence of orthogonal polynomials with nonnegative weight function K compactly supported on [−1, 1], so that

∫_{−1}^{1} K(u) p_i(u) p_j(u) du = δ_{ij}.
There exists an invertible linear transformation T ∈ R^{(ℓ+1)×(ℓ+1)} such that

(1, t, t²/2!, ..., t^ℓ/ℓ!)^T = T (p_0, p_1, ..., p_ℓ)^T.

Apparently, T is lower triangular. We denote R(T) = max_{1≤j≤ℓ+1} Σ_{i=1}^{ℓ+1} |T_{ij}|. Note that in some literature R(T) is denoted by ‖T‖₁ as the matrix "column norm"; since we already use ‖·‖₁ to denote the nuclear norm, R(T) is used to avoid any ambiguity. Denote

D := { Diag(S_0, S_1, ..., S_{ℓ−1}, S_ℓ) } ⊂ H_{m(ℓ+1)}

the set of block diagonal matrices with S_k ∈ H_m satisfying |S_{ij}| ≤ R(T)a. With
observations (τ_j, X_j, Y_j), j = 1, ..., n from model (1.1), define \hat{S}^h as

\hat{S}^h = \arg\min_{S ∈ D} (1/(nh)) Σ_{j=1}^{n} K((τ_j − t_0)/h) ( Y_j − ⟨ Σ_{i=0}^{ℓ} S_i p_i((τ_j − t_0)/h), X_j ⟩ )² + ε‖S‖₁.   (3.1)

\hat{S}^h naturally induces a local polynomial estimator of order ℓ around t_0:

\hat{S}^h(τ) := Σ_{i=0}^{ℓ} \hat{S}_i^h p_i((τ − t_0)/h) I{ |(τ − t_0)/h| ≤ 1 }.   (3.2)

The point estimate at t_0 is given by

\hat{S}^h(t_0) := Σ_{i=0}^{ℓ} \hat{S}_i^h p_i(0).   (3.3)
In the following theorem, we establish an upper bound on the pointwise risk of \hat{S}^h(t_0) when A(t) is in the Hölder class Σ(β, L) with ℓ = ⌊β⌋.
Theorem 3.1. Under model (1.1), let (τ_j, X_j, Y_j), j = 1, ..., n be i.i.d. copies of the random triplet (τ, X, Y) with X uniformly distributed in 𝒳, τ uniformly distributed in [0, 1], X and τ independent, and |Y| ≤ a a.s. for some constant a > 0. Let A be a matrix valued function satisfying A1, A2, A3, and A4. Denote Φ = max_{i=0,...,ℓ} ‖√K p_i‖_∞ and ℓ = ⌊β⌋. Take

\hat{h}_n = C₁ ( ℓ³(ℓ!)² Φ² R(T)² a² mr log n / (L² n) )^{1/(2β+1)},   ε = DℓaΦ √( log 2m / (n m \hat{h}_n) ),

for some numerical constants C₁ and D. Then for any \hat{h}_n ≤ t_0 ≤ 1 − \hat{h}_n, the estimator defined in (3.3) satisfies with probability at least 1 − n^{−mr},

(1/m²) ‖\hat{S}^h(t_0) − A(t_0)‖₂² ≤ C₁(a, Φ, ℓ, L) (mr log n / n)^{2β/(2β+1)},   (3.4)

where C₁(a, Φ, ℓ, L) is a constant depending on a, Φ, ℓ and L.
The proof of Theorem 3.1 can be found in section 7.1. One should notice that when β → ∞, bound (3.4) coincides with the corresponding result in classical matrix completion. In section 5, we prove that bound (3.4) is minimax optimal up to a logarithmic factor.
4. Global estimators and upper bounds on integrated risk
In this section, we propose two global estimators and study their integrated risk
measured by L2 -norm and L∞ -norm.
4.1. From localization to globalization
Firstly, we construct a global estimator based on (3.2). Take

\hat{h}_n = C₁ ( ℓ³(ℓ!)² Φ² R(T)² a² mr log n / (L² n) )^{1/(2β+1)},   M = ⌈1/\hat{h}_n⌉.

Without loss of generality, assume that M is even. Denote by \hat{S}_k^h(t) the local polynomial estimator around t_{2k−1} as in (3.2), using orthogonal polynomials with K = I_{[−1,1]}, where t_{2k−1} = (2k−1)/M, k = 1, 2, ..., M/2, and I is the indicator function. Denote

\hat{A}(t) = Σ_{k=1}^{M/2} \hat{S}_k^h(t) I_{(t_{2k−1} − \hat{h}_n, t_{2k−1} + \hat{h}_n]},   t ∈ (0, 1).   (4.1)
Note that the weight function K need not be I_{[−1,1]}; it can be replaced by any K that satisfies K ≥ K₀ > 0 on [−1, 1]. The following result characterizes the global performance of estimator (4.1) under the matrix completion setting, measured by the L2-norm.
Theorem 4.1. Assume that the conditions of Theorem 3.1 hold, and let \hat{A} be the estimator defined in (4.1). Then with probability at least 1 − n^{−(mr−1)},

(1/m²) ∫₀¹ ‖\hat{A}(t) − A(t)‖₂² dt ≤ C₂(a, Φ, ℓ, L) (mr log n / n)^{2β/(2β+1)},   (4.2)

where C₂(a, Φ, ℓ, L) is a constant depending on a, Φ, ℓ, L.
The proof of Theorem 4.1 can be found in section 7.2. Compared with the
integrated risk measured by L2 -norm of real valued functions in Hölder class,
the result in (4.2) has an excess log n term, which is introduced by the matrix
Bernstein inequality, see [30]. In section 5, we show that bound (4.2) is minimax
optimal up to a logarithmic factor.
4.2. Bias reduction through higher order kernels
If A(t) is not necessarily low rank, we propose an estimator which is easy to
implement and prove an upper bound on its risk measured by L∞ -norm. Such
estimators are related to another popular approach parallel to local polynomial
estimators for bias reduction, namely, using high order kernels to reduce bias.
They can also be applied to another important technique of low rank estimation
or approximation via singular value thresholding, see [5] and [9]. The estimator
proposed by [18] is shown to be equivalent to soft singular value thresholding of
such type of estimators.
The kernels we are interested in satisfy the following conditions:
K1. K(·) is symmetric, i.e. K(u) = K(−u).
K2. K(·) is compactly supported on [−1, 1].
K3. R_K = ∫_{−∞}^{∞} K²(u) du < ∞.
K4. K(·) is of order ℓ for some ℓ ∈ N*.
K5. K(·) is Lipschitz continuous with Lipschitz constant 0 < L_K < ∞.
Consider

Ã(t) = (m²/(nh)) Σ_{j=1}^{n} K((τ_j − t)/h) Y_j X_j.   (4.3)

Note that when K ≥ 0, (4.3) is the solution to the following optimization problem:

Ã(t) = \arg\min_{S ∈ D} (1/(nh)) Σ_{j=1}^{n} K((τ_j − t)/h) (Y_j − ⟨S, X_j⟩)².   (4.4)
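The estimator (4.3) is a plain kernel-weighted average of the rescaled observations Y_j X_j. The Python sketch below is our own illustration of that formula; the Epanechnikov kernel is only an example satisfying K1–K3 and K5, and an order-ℓ kernel as in K4 would be substituted for higher-order bias reduction.

```python
import numpy as np

# Sketch of the kernel estimator (4.3): a weighted average of Y_j * X_j with
# kernel weights K((tau_j - t)/h), scaled by m^2/(n h).
def kernel_estimator(t, taus, Xs, Ys, h,
                     K=lambda u: 0.75 * np.maximum(1 - u**2, 0.0)):
    m = Xs[0].shape[0]
    w = K((taus - t) / h)
    return (m**2 / (len(Ys) * h)) * sum(wj * yj * X for wj, yj, X in zip(w, Ys, Xs))
```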
In the following theorem we prove an upper bound on its global performance
measured by L∞ -norm over Σ(β, L) which is much harder to obtain for matrix
lasso problems.
Theorem 4.2. Under model (1.1), let (τ_j, X_j, Y_j), j = 1, ..., n be i.i.d. copies of the random triplet (τ, X, Y) with X uniformly distributed in 𝒳, τ uniformly distributed in [0, 1], X and τ independent, and |Y| ≤ a a.s. for some constant a > 0; let the kernel K satisfy K1-K5; and let A be any matrix valued function satisfying A1 and A4. Denote ℓ = ⌊β⌋. Take

h̃_n := c*(K) ( a²(ℓ!)² m log n / (2βL² n) )^{1/(2β+1)}.   (4.5)

Then with probability at least 1 − n^{−2}, the estimator defined in (4.3) satisfies

sup_{t∈[h̃_n, 1−h̃_n]} (1/m²) ‖Ã(t) − A(t)‖² ≤ C*(K) ( a²(ℓ!)² m log n / (2βL² n) )^{2β/(2β+1)},   (4.6)

where C*(K) and c*(K) are constants depending on K.
The proof of Theorem 4.2 can be found in section 7.3. When the smoothness parameter β tends to infinity, bound (4.6) coincides with similar bounds in classical matrix completion, which are of order O(m³ log n / n). When m degenerates to 1, the bound coincides with that of the real valued case, which is O((log n / n)^{2β/(2β+1)}). In section 5, we show that this bound is minimax optimal up to a logarithmic factor.
5. Lower bounds under matrix completion setting
In this section, we prove the minimax lower bound of estimators (3.3), (4.1)
and (4.3). In the realm of classical low rank matrix estimation, [25] studied
the optimality issue measured by the Frobenius norm on the classes defined
in terms of a "spikiness index" of the true matrix; [27] derived optimal rates
in noisy matrix completion on different classes of matrices for the empirical
prediction error; [18] established that the rates of the estimator they propose
under matrix completion setting are optimal up to a logarithmic factor measured
by the Frobenius norm. Based on the ideas of [18], standard methods to prove
minimax lower bounds in real valued case in [31], and some fundamental results
in coding theory, we establish the corresponding minimax lower bounds of (3.4),
(4.2) and (4.6) which essentially shows that the upper bounds we get are all
optimal up to some logarithmic factor.
For convenience of presentation, we denote by inf_{\hat{A}} the infimum over all estimators \hat{A} of A. We denote by A(r, a) the set of matrix valued functions satisfying A1, A2, A3, and A4. We denote by P(r, a) the class of distributions of the random triplet (τ, X, Y) that satisfies model (1.1) with some A ∈ A(r, a).
Theorem 5.1. Under model (1.1), let (τ_j, X_j, Y_j), j = 1, ..., n be i.i.d. copies of the random triplet (τ, X, Y) with X uniformly distributed in 𝒳, τ uniformly distributed in [0, 1], X and τ independent, and |Y| ≤ a a.s. for some constant a > 0; let A be any matrix valued function in A(r, a). Then there is an absolute constant η ∈ (0, 1) such that for all t_0 ∈ [0, 1],

inf_{\hat{A}} sup_{P^A_{τ,X,Y} ∈ P(r,a)} P_{P^A_{τ,X,Y}} { (1/m²) ‖\hat{A}(t_0) − A(t_0)‖₂² > C(β, L, a) (mr/n)^{2β/(2β+1)} } ≥ η,   (5.1)

where C(β, L, a) is a constant depending on β, L and a.
The proof of Theorem 5.1 can be found in section 7.4. Note that the lower bound (5.1) matches the upper bound (3.4) up to a logarithmic factor. As a consequence, it shows that the estimator (3.3) achieves a near optimal minimax rate of pointwise estimation. Although the result of Theorem 5.1 is stated under the bounded response condition, it can be readily extended to the case when the noise in (1.2) is Gaussian.
Theorem 5.2. Under model (1.1), let (τ_j, X_j, Y_j), j = 1, ..., n be i.i.d. copies of the random triplet (τ, X, Y) with X uniformly distributed in 𝒳, τ uniformly distributed in [0, 1], X and τ independent, and |Y| ≤ a a.s. for some constant a > 0; let A be any matrix valued function in A(r, a). Then there is an absolute constant η ∈ (0, 1) such that

inf_{\hat{A}} sup_{P^A_{τ,X,Y} ∈ P(r,a)} P_{P^A_{τ,X,Y}} { (1/m²) ∫₀¹ ‖\hat{A}(t) − A(t)‖₂² dt > C̃(β, L, a) (mr/n)^{2β/(2β+1)} } ≥ η,   (5.2)

where C̃(β, L, a) is a constant depending on L, β and a.
The proof of Theorem 5.2 can be found in section 7.5. The lower bound in (5.2) matches the upper bound obtained in (4.2) up to a logarithmic factor. Therefore, our estimator (4.1) achieves a near optimal minimax rate on the integrated risk measured by the L2-norm. The result of Theorem 5.2 can be readily extended to the case when the noise in (1.2) is Gaussian.
Now we consider the minimax lower bound on the risk measured by the L∞-norm for general matrix valued functions without any rank information. Denote

A(a) = { A(t) ∈ H_m, ∀t ∈ [0, 1] : |a_{ij}(t)| ≤ a, a_{ij} ∈ Σ(β, L) }.

We denote by P(a) the class of distributions of the random triplet (τ, X, Y) that satisfies model (1.1) with some A ∈ A(a).
Theorem 5.3. Under model (1.1), let (τ_j, X_j, Y_j), j = 1, ..., n be i.i.d. copies of the random triplet (τ, X, Y) with X uniformly distributed in 𝒳, τ uniformly distributed in [0, 1], X and τ independent, and |Y| ≤ a a.s. for some constant a > 0; let A be any matrix valued function in A(a). Then there exists an absolute constant η ∈ (0, 1) such that

inf_{\hat{A}} sup_{P^A_{τ,X,Y} ∈ P(a)} P_{P^A_{τ,X,Y}} { sup_{t∈(0,1)} (1/m²) ‖\hat{A}(t) − A(t)‖² > C̄(β, L, a) ((m ∨ log n)/n)^{2β/(2β+1)} } ≥ η,   (5.3)

where C̄(β, L, a) is a constant depending on β, L and a.
The proof of Theorem 5.3 can be found in section 7.6. Recall that in the real valued case, the minimax lower bound measured by the sup norm over the Hölder class is O((log n / n)^{2β/(2β+1)}). In our result (5.3), if the dimension m degenerates to 1, we recover the real valued case and the rate is optimal. When the dimension m is large enough that m ≫ log n, the lower bound (5.3) shows that the estimator (4.3) achieves a near minimax optimal rate up to a logarithmic factor.
6. Model selection
Despite the fact that estimators (3.3) and (4.1) achieve near optimal minimax rates in theory with a properly chosen bandwidth h and degree ℓ, these parameters depend on quantities like β and L which are unknown to us in advance. In this section, we propose an adaptive estimation procedure to choose h and ℓ. Two popular methods to address such problems have been proposed in the past few decades: one is Lepskii's method, and the other is the aggregation method. In the 1990s, many data-driven procedures for selecting the "best" estimator emerged. Among them, a series of papers stood out and shaped what is now called "Lepskii's method", which has been described in its general form and in great detail in [23]. Later, [21] proposed a bandwidth selection procedure based on pointwise adaptation of a kernel estimator that achieves the optimal minimax rate of point estimation over Hölder classes, and [22] proposed a new bandwidth selector that achieves optimal rates of convergence over Besov classes with spatially inhomogeneous smoothness. The basic idea of Lepskii's method is to choose a bandwidth from a geometric grid so that the resulting estimator is not "very different" from those indexed by smaller bandwidths on the grid. Although Lepskii's method is shown to give optimal rates in pointwise estimation over Hölder classes in [21], it has a major defect when applied to our problem: the procedure already requires a huge computational cost when real valued functions are replaced by matrix valued functions. Indeed, with Lepskii's method, in order to validate a bandwidth one needs to compare all smaller bandwidths with the target one, which leads to dramatically growing computational cost; moreover, we have an extra parameter ℓ that needs to fit with h. As a result, we turn to the aggregation method to choose a bandwidth from the geometric grid introduced by Lepskii's method, which is more computationally efficient for our problem. The idea of the aggregation method can be briefly summarized as follows: one splits the data set into two parts; the first is used to build all candidate estimators and the second is used either to aggregate the estimates into a new one (aggregation) or to select one (model selection) which is as good as the best candidate among all constructed. The model selection procedure we use was initially introduced by [3] in classical nonparametric estimation with bounded response; [33] generalized this method to the case where the noise can be unbounded but has a finite p-th moment for some p > 2. One can find a more detailed review of such penalization methods in [14].
Firstly, we introduce the geometric grid created by [21] on which we conduct our model selection procedure. Assume that the bandwidth under consideration falls into the range [h_min, h_max]. Recall that the "ideal" bandwidth \hat{h}_n is given as

\hat{h}_n = C₁ ( ℓ³ ℓ! Φ R(T) a² mr log n / (L² n) )^{1/(2β+1)},   (6.1)

so h_max and h_min can be chosen as

h_max = C₁ ( ℓ^{*3} ℓ^{*}! Φ R(T) a² mr log n / (L_*² n) )^{1/(2β^{*}+1)},   h_min = C₁ ( ℓ_*³ ℓ_*! Φ R(T) a² mr log n / (L^{*2} n) )^{1/(2β_*+1)},

where [β_*, β^*] and [L_*, L^*] are the possible ranges of β and L respectively. Obviously, β is the most important parameter among these. Note that when those ranges are not clear, a natural upper bound for h_max is 1, and a typical choice of h_min can be set to n^{−1/2}. Denote

d(h) = 1 ∨ √( 2 log(h_max/h) ),   d_n = √( 2 log(h_max/h_min) ),   α(h) = 1/√(d(h)).

Apparently, d_n = O(√(log n)). Define the grid H inductively by

H = { h_k ∈ [h_min, h_max] : h_0 = h_max, h_{k+1} = h_k/(1 + α(h_k)), k = 0, 1, 2, ... }.   (6.2)
Note that the grid H is a decreasing sequence and the sequence becomes denser
as k grows.
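A direct transcription of (6.2) makes the geometric structure concrete. The Python sketch below is our own illustration; the example values of h_min and h_max are arbitrary.

```python
import numpy as np

# Sketch of the geometric bandwidth grid (6.2): starting from h_max, each step
# divides by 1 + alpha(h_k), so the grid gets denser as the bandwidths shrink.
def bandwidth_grid(h_min, h_max):
    def alpha(h):
        d = max(1.0, np.sqrt(2.0 * np.log(h_max / h)))   # d(h) = 1 v sqrt(2 log(h_max/h))
        return 1.0 / np.sqrt(d)
    grid, h = [], h_max
    while h >= h_min:
        grid.append(h)
        h = h / (1.0 + alpha(h))
    return grid

print(len(bandwidth_grid(1e-2, 1.0)))   # a modest, roughly logarithmic-sized grid
```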
We consider possible choices of ℓ_k for each h_k. A trivial candidate set is ℓ_k ∈ L := {⌊β_*⌋, ⌊β_*⌋ + 1, ..., ⌊β^*⌋} ⊂ N*. If the size of this set is large, one can shrink it through the correspondence (6.1) between h and β: roughly, ℓ_k ≤ ( log(n/(mr log 2m)) / log h_k^{−1} − 1 )/2, and if n ≥ m^d for some d > 1, then ( (1 − 1/d) log n / log h_k^{−1} − 1 )/2 ≤ ℓ_k ≤ ( log n / log h_k^{−1} − 1 )/2, which indicates that the more data we have, the narrower the range. We denote the candidate set for ℓ by L. Then the set
H̃ = H × L := {(h, ℓ) : h ∈ H, ℓ ∈ L}

indexes a countable set of candidate estimators. Once (h_k, ℓ_k) is fixed, one can take ε_k = D(ℓ_k + 1)R(T)Φa √( log 2m / (n m h_k) ).
Now we introduce our model selection procedure based on H̃. We split the data (τ_j, X_j, Y_j), j = 1, ..., 2n, into two parts of equal size. The first part of the observations, indexed by a set I_n of n points drawn randomly without replacement from the original data set, is used for training; the remaining part is indexed by J_n. We construct a sequence of estimators \hat{A}_k, k = 1, 2, ..., based on the training data indexed by I_n through (4.1), one for each pair in H̃. Our main goal is to select an estimator \hat{A} among {\hat{A}_k} which is as good as the one with the smallest mean square error. We introduce a quantity π_k associated with each estimator \hat{A}_k which serves as a penalty term. We use the remaining part of the data, {(τ_j, X_j, Y_j) : j ∈ J_n}, to perform the selection:

k* = \arg\min_k { (1/n) Σ_{j∈J_n} (Y_j − ⟨\hat{A}_k(τ_j), X_j⟩)² + π_k }.   (6.3)
Denote \hat{A}* = \hat{A}_{k*} as the adaptive estimator. In practice, we suggest ranking all estimators \hat{A}_k according to the following rule: 1. pairs with bigger h always have smaller index; 2. if two pairs have the same h, the one with smaller ℓ has the smaller index. Our selection procedure can be summarized in Algorithm 1:
Algorithm 1: Model Selection Procedure
1. Construct the geometric grid H defined as in (6.2) and compute the candidate set H̃;
2. Split the data set (τ_j, X_j, Y_j), j = 1, ..., 2n, into two equal parts (indexed by I_n and J_n) by randomly drawing without replacement;
3. For each pair in H̃, construct an estimator \hat{A}_k defined as in (4.1) using the data indexed by I_n;
4. Use the second part of the data, indexed by J_n, to perform the selection rule defined in (6.3).
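Steps 2-4 of Algorithm 1 are straightforward to implement. The Python sketch below is our own illustration: fit_estimator stands in for constructing the global estimator (4.1) for a candidate pair (h, ℓ) and is an assumption of the sketch, as is the use of an elementwise sum for the Hilbert-Schmidt product (real symmetric matrices).

```python
import numpy as np

# Sketch of steps 2-4 of Algorithm 1 and the selection rule (6.3).
# `candidates` is a list of (h, ell) pairs, `penalties` the matching pi_k values,
# and `fit_estimator(pair, data)` returns a callable A_hat(t) -> m x m matrix.
def select_model(candidates, penalties, data, fit_estimator,
                 rng=np.random.default_rng(0)):
    taus, Xs, Ys = data                             # taus, Ys: arrays; Xs: list of matrices
    n2 = len(Ys)
    idx = rng.permutation(n2)
    train, test = idx[: n2 // 2], idx[n2 // 2:]     # random split without replacement
    train_data = (taus[train], [Xs[i] for i in train], Ys[train])

    def empirical_risk(A_hat):                      # the criterion in (6.3), without pi_k
        return np.mean([(Ys[j] - np.sum(A_hat(taus[j]) * Xs[j])) ** 2 for j in test])

    fits = [fit_estimator(pair, train_data) for pair in candidates]
    scores = [empirical_risk(A) + pen for A, pen in zip(fits, penalties)]
    k_star = int(np.argmin(scores))
    return candidates[k_star], fits[k_star]
```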
The selection procedure described in Algorithm 1 has several advantages: firstly, it chooses a global bandwidth instead of a local one; secondly, since the selection rule is based only on computations involving entries of the \hat{A}_k, no matrix computation is involved in the last step, which saves a lot of computational cost and makes the procedure easily applicable to high dimensional problems; finally, steps 3 and 4 can be easily parallelized. The following theorem shows that the integrated risk of \hat{A}* measured by the L2-norm is bounded by the smallest one among all candidates plus an extra term of order n^{−1}, which is negligible.
Theorem 6.1. Under model (1.1), let (τ_j, X_j, Y_j), j = 1, ..., 2n be i.i.d. copies of the random triplet (τ, X, Y) with X uniformly distributed in 𝒳, τ uniformly distributed in [0, 1], X and τ independent, and |Y| ≤ a a.s. for some constant a > 0; let A be a matrix valued function satisfying A1, A2, A3, and A4; let {\hat{A}_k} be the sequence of estimators constructed from H̃; and let \hat{A}* be the adaptive estimator selected through Algorithm 1. Then with probability at least 1 − n^{−mr},

(1/m²) ∫₀¹ ‖\hat{A}*(t) − A(t)‖₂² dt ≤ 3 min_k { (1/m²) ∫₀¹ ‖\hat{A}_k(t) − A(t)‖₂² dt + π_k/n } + C(a)/n,   (6.4)

where C(a) is a constant depending on a.
The proof of Theorem 6.1 can be found in section 7.7. Recalling that Card(H) = O(log n), we can take π_k = kmr; then π_k ≤ c₁ mr log n uniformly over k for some numerical constant c₁. Since, by Lepskii's method, at least one candidate in H gives the optimal bandwidth associated with the unknown smoothness parameter β, the following corollary is a direct consequence of Theorems 4.1 and 6.1 and shows that \hat{A}* is adaptive.
Corollary 6.1. Assume that the conditions of Theorem 6.1 hold with π_k = kmr, and n > mr log n. Then with probability at least 1 − n^{−(mr−1)},

(1/m²) ∫₀¹ ‖\hat{A}*(t) − A(t)‖₂² dt ≤ C(a, Φ, ℓ, L) (mr log n / n)^{2β/(2β+1)},   (6.5)

where C(a, Φ, ℓ, L) is a constant depending on a, Φ, ℓ, and L.
7. Proofs
7.1. Proof of Theorem 3.1
Proof. Firstly, we introduce a sharp oracle inequality for the "locally integrated risk" of estimator (3.2) in the following lemma.

Lemma 1. Assume that the conditions of Theorem 3.1 hold. Then there exists a numerical constant D > 0 such that for all

ε ≥ D(ℓ + 1)R(T)Φa ( √( log 2m / (nmh) ) ∨ (log 2m)Φ/(nh) ),

and for arbitrary η > 0, the estimator (3.3) satisfies with probability at least 1 − e^{−η}

(1/h) E K((τ − t_0)/h) ⟨A(τ) − \hat{S}^h(τ), X⟩²
≤ inf_{S∈D} { (1/h) E K((τ − t_0)/h) ⟨A(τ) − S(τ), X⟩² + D²(ℓ+1)²Φ²R(T)²a² (rank(S) m log 2m + η)/(nh) },   (7.1)

where S(τ) := Σ_{i=0}^{ℓ} S_i p_i((τ − t_0)/h).

The proof of Lemma 1 can be derived from Theorem 19.1 in [17]. To be more specific, one just needs to rewrite (3.1) as

\hat{S}^h = \arg\min_{S∈D} (1/n) Σ_{j=1}^{n} (Ỹ_j − ⟨S, X̃_j⟩)² + ε‖S‖₁,   (7.2)

where

X̃_j = Diag( √((1/h)K((τ_j − t_0)/h)) p_0((τ_j − t_0)/h) X_j, √((1/h)K((τ_j − t_0)/h)) p_1((τ_j − t_0)/h) X_j, ..., √((1/h)K((τ_j − t_0)/h)) p_ℓ((τ_j − t_0)/h) X_j )

and Ỹ_j = √((1/h)K((τ_j − t_0)/h)) Y_j. Then Lemma 1 is simply an application of Theorem 19.1 in [17]: when the loss function is set to the squared loss, all arguments in the proof of Lemma 1 can be reproduced from the original proof. Since this is mostly a tedious repetition, we omit it here. One should notice that in the original proof of Theorem 19.1 in [17], a matrix isometry condition needs to be satisfied, namely ‖A‖²_{L2(Π̃)} = μ₀‖A‖₂² for some constant μ₀ > 0 and any A ∈ H_m, with Π̃ being the distribution of X̃. One can easily check that this holds for (7.2). This is also the primary reason why we abandoned the classical local polynomial estimator with the trivial polynomial basis and used orthogonal polynomials instead.
Consider

(1/h) E K((τ − t_0)/h) ⟨A(τ) − Σ_{i=0}^{ℓ} \hat{S}_i^h p_i((τ − t_0)/h), X⟩²
= (1/h) E K((τ − t_0)/h) ⟨A(τ) − Σ_{i=0}^{ℓ} S_i p_i((τ − t_0)/h) + Σ_{i=0}^{ℓ} (S_i − \hat{S}_i^h) p_i((τ − t_0)/h), X⟩²
= (1/h) E K((τ − t_0)/h) ⟨A(τ) − Σ_{i=0}^{ℓ} S_i p_i((τ − t_0)/h), X⟩² + (1/h) E K((τ − t_0)/h) ⟨Σ_{i=0}^{ℓ} (S_i − \hat{S}_i^h) p_i((τ − t_0)/h), X⟩²
  + (2/h) E K((τ − t_0)/h) ⟨A(τ) − Σ_{i=0}^{ℓ} S_i p_i((τ − t_0)/h), X⟩ ⟨Σ_{i=0}^{ℓ} (S_i − \hat{S}_i^h) p_i((τ − t_0)/h), X⟩.   (7.3)
Therefore, from (7.1) and (7.3), we have for any S ∈ D,

(1/h) E K((τ − t_0)/h) ⟨Σ_{i=0}^{ℓ} (S_i − \hat{S}_i^h) p_i((τ − t_0)/h), X⟩²
≤ (2/h) E K((τ − t_0)/h) ⟨A(τ) − Σ_{i=0}^{ℓ} S_i p_i((τ − t_0)/h), X⟩ ⟨Σ_{i=0}^{ℓ} (S_i − \hat{S}_i^h) p_i((τ − t_0)/h), X⟩
  + D²(ℓ+1)²Φ²R(T)²a² (rank(S) m log 2m + η)/(nh)
≤ (c⁴/(c²−1)) (1/h) E K((τ − t_0)/h) ⟨A(τ) − Σ_{i=0}^{ℓ} S_i p_i((τ − t_0)/h), X⟩²
  + (c²/(c²−1)) { D²(ℓ+1)²Φ²R(T)²a² (rank(S) m log 2m + η)/(nh) },   (7.4)

where we used the fact that for any positive constants a and b, 2ab ≤ a²/c² + c²b² for some c > 1. Take S such that

Σ_{i=0}^{ℓ} S_i p_i((τ − t_0)/h) = A(t_0) + A′(t_0) h ((τ − t_0)/h) + ... + (A^{(ℓ)}(t_0) h^ℓ/ℓ!) ((τ − t_0)/h)^ℓ.   (7.5)
Note that this is possible since the right hand side is a matrix valued polynomial in (τ − t_0)/h of order up to ℓ, and span{p_0, p_1, ..., p_ℓ} = span{1, x, ..., x^ℓ}. Under the condition that all entries of A^{(k)}(t) are bounded by a, the entries of S_k are bounded by R(T)a; thus the corresponding S ∈ D. Obviously, rank(S_i) ≤ (ℓ + 1 − i)r. Since A ∈ Σ(β, L), we consider the ℓ-th order Taylor expansion of A at t_0 to get

A(τ) = A(t_0) + A′(t_0)(τ − t_0) + ... + Ã(τ − t_0)^ℓ/ℓ!,   (7.6)

where Ã is the matrix with Ã_{ij} = a_{ij}^{(ℓ)}(t_0 + α_{ij}(τ − t_0)) for some α_{ij} ∈ [0, 1].
Then we apply the Taylor expansion (7.6) and identity (7.5) to get

(1/h) E K((τ − t_0)/h) ⟨A(τ) − Σ_{i=0}^{ℓ} S_i p_i((τ − t_0)/h), X⟩² ≤ (1/h) E K((τ − t_0)/h) (1/m²) ‖ L U (τ − t_0)^β / ℓ! ‖₂² ≤ L²h^{2β}/(ℓ!)²,   (7.7)

where U denotes the matrix with all entries equal to 1. The first inequality is due to a_{ij} ∈ Σ(β, L), and the second is due to |τ − t_0| ≤ h. Under the condition that X is uniformly distributed in 𝒳, and by the orthogonality of {p_i}_{i=0}^{ℓ}, it is easy to check that

(1/h) E K((τ − t_0)/h) ⟨Σ_{i=0}^{ℓ} (S_i − \hat{S}_i^h) p_i((τ − t_0)/h), X⟩² = (1/m²) Σ_{i=0}^{ℓ} ‖\hat{S}_i^h − S_i‖₂².   (7.8)

Note that

‖\hat{S}^h(t_0) − S(t_0)‖₂² = ‖ Σ_{i=0}^{ℓ} (\hat{S}_i^h − S_i) p_i(0) ‖₂² ≤ (ℓ + 1) Φ² Σ_{i=0}^{ℓ} ‖\hat{S}_i^h − S_i‖₂²,   (7.9)

where the second inequality is due to the Cauchy-Schwarz inequality and the fact that the p_i are uniformly bounded on [−1, 1]. Combining (7.4), (7.7), (7.8), and (7.9), we get with probability at least 1 − e^{−η},

(1/m²) ‖\hat{S}^h(t_0) − A(t_0)‖₂² ≤ (c⁴/(c²−1)) · 2L²h^{2β}/(ℓ!)² + (c²/(c²−1)) { D²(ℓ+1)²Φ²R(T)²a² (rank(S) m log 2m + η)/(nh) }.

Optimizing the right hand side with respect to h and taking η = mr log n, we take

\hat{h}_n = C ( ℓ³(ℓ!)²Φ²R(T)²a² mr log n / (L²n) )^{1/(2β+1)},

where C is a numerical constant. This completes the proof of the theorem.
7.2. Proof of Theorem 4.1
Proof. It is easy to see that

∫₀¹ ‖\hat{A}(t) − A(t)‖₂² dt ≤ Σ_{k=1}^{M/2} ∫_{t_{2k−1}−\hat{h}_n}^{t_{2k−1}+\hat{h}_n} ‖\hat{S}_k^h(t) − A(t)‖₂² dt.   (7.10)

For each k,

(1/m²) ∫_{t_{2k−1}−\hat{h}_n}^{t_{2k−1}+\hat{h}_n} ‖\hat{S}_k^h(t) − A(t)‖₂² dt = E_{τ,X} I_{(t_{2k−1}−\hat{h}_n, t_{2k−1}+\hat{h}_n]} ⟨A(τ) − \hat{S}^h(τ), X⟩².

By (7.1), (7.7) and the arguments used to prove Theorem 3.1, we have with probability at least 1 − n^{−mr},

(1/(2\hat{h}_n m²)) ∫_{t_{2k−1}−\hat{h}_n}^{t_{2k−1}+\hat{h}_n} ‖\hat{S}_k^h(t) − A(t)‖₂² dt ≤ C₁(a, Φ, ℓ, L) (mr log n / n)^{2β/(2β+1)}.

Taking the union bound over k, from (7.10) we get with probability at least 1 − n^{−(mr−1)},

(1/m²) ∫₀¹ ‖\hat{A}(t) − A(t)‖₂² dt ≤ C₂(a, Φ, ℓ, L) (mr log n / n)^{2β/(2β+1)},

where C₂(a, Φ, ℓ, L) is a constant depending on a, Φ, ℓ, L.
7.3. Proof of Theorem 4.2
Proof. In this proof, we use C(K) to denote any constant depending on K which may vary from place to place. This simplifies the presentation while doing no harm to the soundness of the proof. Consider

sup_{t∈[h̃_n, 1−h̃_n]} ‖Ã(t) − A(t)‖ ≤ sup_{t∈[h̃_n, 1−h̃_n]} ‖Ã(t) − EÃ(t)‖ + sup_{t∈[h̃_n, 1−h̃_n]} ‖EÃ(t) − A(t)‖.   (7.11)

The first term on the right hand side is recognized as the variance and the second as the bias. Firstly, we deal with the bias term. Denote B(t_0) := EÃ(t_0) − A(t_0), t_0 ∈ [h̃_n, 1 − h̃_n]. Recall from (1.2) that E(ξ_j | τ_j, X_j) = 0; for any t_0 ∈ [h̃_n, 1 − h̃_n] we have

EÃ(t_0) = E (m²/(nh)) Σ_{j=1}^{n} K((τ_j − t_0)/h)(⟨A(τ_j), X_j⟩ + ξ_j) X_j = (m²/h) E K((τ − t_0)/h) ⟨A(τ), X⟩ X.

By applying the Taylor expansion of A(τ) as in (7.6) and the fact that K is a kernel of order ℓ, we get

EÃ(t_0) = E (m²/h) K((τ − t_0)/h) ⟨A(t_0), X⟩ X + E (m²/h) K((τ − t_0)/h) ((τ − t_0)^ℓ/ℓ!) ⟨Ã, X⟩ X,

where Ã is the same as in (7.6). It is easy to check that the first term on the right hand side is A(t_0). Therefore we rewrite B(t_0) as

B(t_0) = E (m²/h) K((τ − t_0)/h) ((τ − t_0)^ℓ/ℓ!) ⟨Ã, X⟩ X = E (m²/h) K((τ − t_0)/h) ((τ − t_0)^ℓ/ℓ!) ⟨Ã − A^{(ℓ)}(t_0), X⟩ X,

where the second equality is due to the fact that each element of A(t) is in Σ(β, L) and K is a kernel of order ℓ. Then we can bound each element of the matrix B(t_0) as

|B_{ij}(t_0)| ≤ ∫₀¹ (1/h) K((τ − t_0)/h) (|τ − t_0|^ℓ/ℓ!) |a_{ij}^{(ℓ)}(t_0 + α(τ − t_0)) − a_{ij}^{(ℓ)}(t_0)| dτ ≤ L ∫₀¹ |K(u)| (|uh|^β/ℓ!) du ≤ C(K) L h^β/ℓ!.

Thus

sup_{t∈[h̃_n, 1−h̃_n]} ‖B(t)‖ ≤ C(K) L m h^β / ℓ!.   (7.12)
On the other hand, for the variance term sup_{t∈[h̃_n,1−h̃_n]} ‖Ã(t) − EÃ(t)‖², we construct a δ-net on the interval [0, 1] with δ = 1/M, where

M = n²,   t_j = (2j − 1)/(2M),   j = 1, ..., M.

Denote S_n(t) := Ã(t) − EÃ(t); then we have

sup_{t∈[h̃_n,1−h̃_n]} ‖S_n(t)‖ ≤ sup_{t∈[0,1]} ‖S_n(t)‖ ≤ max_i ‖S_n(t_i)‖ + sup_{|t−t′|≤δ} ‖S_n(t) − S_n(t′)‖.   (7.13)

Next, we bound both terms on the right hand side. For each t_i,

S_n(t_i) = (m²/(nh)) Σ_{j=1}^{n} [ K((τ_j − t_i)/h) Y_j X_j − E K((τ_j − t_i)/h) Y_j X_j ].

The right hand side is a sum of zero mean random matrices, so we apply the matrix Bernstein inequality, see [30]. Under the assumptions of Theorem 4.2, one can easily check that with probability at least 1 − e^{−η},

‖S_n(t_i)‖ ≤ C(K) m² ( √( a²(η + log 2m)/(mnh) ) ∨ a(η + log 2m)/(nh) ).

Indeed, by setting X̄ = (m²/h) K((τ − t)/h) Y X − E (m²/h) K((τ − t)/h) Y X, it is easy to check that U_{X̄} ≲ ‖K‖_∞ a m²/h and σ²_{X̄} ≲ R_K a² m³/h. By taking the union bound over all i and setting η = 4 log n, we get with probability at least 1 − n^{−2},

max_i ‖S_n(t_i)‖² ≤ C(K) a² m³ log n / (nh).

As for the second term on the right hand side of (7.13), by the assumption that K is a Lipschitz function with Lipschitz constant L_K, we have

sup_{|t−t′|≤δ} ‖S_n(t) − S_n(t′)‖ ≤ sup_{|t−t′|≤δ} ‖Ã(t) − Ã(t′)‖ + sup_{|t−t′|≤δ} ‖E(Ã(t) − Ã(t′))‖ ≤ L_K a m³/(n²h²) + L_K a m³/(n²h²).

Thus with probability at least 1 − n^{−2},

sup_{t∈[h̃_n,1−h̃_n]} ‖S_n(t)‖² ≤ C(K) a² m³ log n/(nh).

Together with the upper bound on the bias in (7.12), we have with probability at least 1 − n^{−2},

sup_{t∈[h̃_n,1−h̃_n]} (1/m²) ‖Ã(t) − A(t)‖² ≤ C(K) ( a² m log n/(nh) + L² h^{2β}/(ℓ!)² ).

Choosing

h̃_n = C(K) ( a²(ℓ!)² m log n / (2βL²n) )^{1/(2β+1)},

we get

sup_{t∈[h̃_n,1−h̃_n]} (1/m²) ‖Ã(t) − A(t)‖² ≤ C(K) ( a²(ℓ!)² m log n / (2βL²n) )^{2β/(2β+1)}.
7.4. Proof of Theorem 5.1
Proof. Without loss of generality, we assume that both m and r are even numbers. We introduce several notations which are key to constructing the hypothesis set. For some constant γ > 0, denote

C = { Ã = (a_{ij}) ∈ C^{(m/2)×(r/2)} : a_{ij} ∈ {0, γ}, ∀ 1 ≤ i ≤ m/2, 1 ≤ j ≤ r/2 },

and consider the set of block matrices

B(C) = { ( Ã Ã ... Ã O ) ∈ C^{(m/2)×(m/2)} : Ã ∈ C },   (7.14)

where O denotes the m/2 × (m/2 − r⌊m/r⌋/2) zero matrix. Then we consider a subset of Hermitian matrices S_m ⊂ H_m,

S_m = { [ Õ, \hat{A} ; \hat{A}*, Õ ] ∈ C^{m×m} : \hat{A} ∈ B(C) }.   (7.15)

An immediate observation is that for any matrix A ∈ S_m, rank(A) ≤ r. Due to the Varshamov-Gilbert bound (see Lemma 2.9 in [31]), there exists a subset A₀ ⊂ S_m with cardinality Card(A₀) ≥ 2^{mr/32} + 1, containing the zero m × m matrix 0, such that for any two distinct elements A₁ and A₂ of A₀,

‖A₁ − A₂‖₂² ≥ (mr/16)⌊m/r⌋γ² ≥ γ² m²/32.   (7.16)

Let f_n(t) denote the function f_n(t) := L h_n^β f((t − t_0)/h_n), t ∈ [0, 1], where h_n = c₀ (mr/n)^{1/(2β+1)} with some constant c₀ > 0, and f ∈ Σ(β, 1/2) ∩ C^∞ with Supp(f) = [−1/2, 1/2]. Note that such functions f exist; for instance, one can take

f(u) = α e^{−1/(1−4u²)} I(|u| < 1/2),   (7.17)

for some sufficiently small α > 0. It is easy to check that f_n(t) ∈ Σ(β, L) on [0, 1]. We consider the following hypotheses of A at t_0:

A₀^β := { \hat{A}(t) = A f_n(t), t ∈ [0, 1] : A ∈ A₀ }.
The following claims are easy to check: firstly, any element of A₀^β together with its derivatives has rank uniformly bounded by r, and the difference of any two elements of A₀^β satisfies the same property for fixed t_0; secondly, the entries of any element of A₀^β together with its derivatives are uniformly bounded by some constant for sufficiently small γ; finally, each element A(t) ∈ A₀^β belongs to Σ(β, L). Therefore, A₀^β ⊂ A(r, a) for a suitably chosen γ.

According to (7.16), for any two distinct elements \hat{A}₁(t) and \hat{A}₂(t) of A₀^β, the difference between \hat{A}₁(t) and \hat{A}₂(t) at the point t_0 satisfies

‖\hat{A}₁(t_0) − \hat{A}₂(t_0)‖₂² ≥ (γ² L² c₀^{2β} f²(0) m²/32) (mr/n)^{2β/(2β+1)}.   (7.18)

On the other hand, we consider the joint distributions P^A_{τ,X,Y} such that τ ∼ U[0, 1], X ∼ Π₀ where Π₀ denotes the uniform distribution on 𝒳, τ and X are independent, and

P_A(Y | τ, X) = 1/2 + ⟨A(τ), X⟩/(4a) for Y = a,   P_A(Y | τ, X) = 1/2 − ⟨A(τ), X⟩/(4a) for Y = −a.

One can easily check that as long as A(τ) ∈ A₀^β, such a P^A_{τ,X,Y} belongs to the distribution class P(r, a). We denote the corresponding n-product probability measure by P_A. Then for any A(τ) ∈ A₀^β, the Kullback-Leibler divergence between P₀ and P_A is

K(P₀, P_A) = n E [ p₀(τ, X) log( p₀(τ, X)/p_A(τ, X) ) + (1 − p₀(τ, X)) log( (1 − p₀(τ, X))/(1 − p_A(τ, X)) ) ],

where p_A(τ, X) = 1/2 + ⟨A(τ), X⟩/(4a). Note that P_A(Y = a | τ, X) ∈ [1/4, 3/4] is guaranteed provided that |⟨A(t), X⟩| ≤ a. Thus by the inequality −log(1 + u) ≤ −u + u²/2, ∀u > −1, and the fact that P_A(Y = a | τ, X) ∈ [1/4, 3/4], we have

K(P₀, P_A) ≤ n E 2(p₀(τ, X) − p_A(τ, X))² ≤ (n/(8a²)) E⟨A(τ), X⟩².

Recall that A(τ) = A f_n(τ) ∈ A₀^β; since τ ∼ U[0, 1] and X ∼ Π₀, we have

K(P₀, P_A) ≤ (n/(8a²)) (1/m²) L² ‖f‖₂² h_n^{2β+1} m² γ² ≤ ( L² ‖f‖₂² c₀^{2β+1} γ² / (8a²) ) mr.   (7.19)

Therefore, given that Card(A₀) ≥ 2^{mr/32} + 1, together with (7.19), the condition

(1/(Card(A₀^β) − 1)) Σ_{A∈A₀^β} K(P₀, P_A) ≤ α log(Card(A₀^β) − 1)   (7.20)

is satisfied for any α > 0 if γ is chosen as a sufficiently small constant. In view of (7.18) and (7.20), the lower bound (5.1) follows from Theorem 2.5 in [31].
7.5. Proof of Theorem 5.2
Proof. Without loss of generality, we assume that both m and r are even numbers. Take a real number c₁ > 0 and define

M = ⌈ c₁ (n/(mr))^{1/(2β+1)} ⌉,   h_n = 1/(2M),   t_j = (2j − 1)/(2M),   j = 1, ..., M,

and

φ_j(t) = L h_n^β f((t − t_j)/h_n),   t ∈ [0, 1],

where f is defined as in (7.17). Meanwhile, we consider the set of all binary sequences of length M: Ω = {ω = (ω₁, ..., ω_M), ω_i ∈ {0, 1}} = {0, 1}^M. By the Varshamov-Gilbert bound, there exists a subset Ω₀ = {ω⁰, ..., ω^N} of Ω such that ω⁰ = (0, ..., 0) ∈ Ω₀, d(ω^j, ω^k) ≥ M/8 for all 0 ≤ j < k ≤ N, and N ≥ 2^{M/8}, where d(·, ·) denotes the Hamming distance between two binary sequences. Then we define a collection of functions based on Ω₀: E = { f_ω(t) = Σ_{j=1}^{M} ω_j φ_j(t) : ω ∈ Ω₀ }. From the Varshamov-Gilbert bound, we know that S := Card(E) = Card(Ω₀) ≥ 2^{M/8} + 1. It is also easy to check that for all f_ω, f_{ω′} ∈ E,

∫₀¹ (f_ω(t) − f_{ω′}(t))² dt = Σ_{j=1}^{M} (ω_j − ω_j′)² ∫_{Δ_j} φ_j²(t) dt = L² h_n^{2β+1} ‖f‖₂² Σ_{j=1}^{M} (ω_j − ω_j′)² ≥ L² h_n^{2β} ‖f‖₂²/16,   (7.21)

where Δ_j = [(j − 1)/M, j/M].

In what follows, we combine two fundamental results in coding theory: one is the Varshamov-Gilbert bound ([12, 32]) in its general form for a q-ary code, the other is the volume estimate of Hamming balls. Let A_q(n, d) denote the largest size of a q-ary code of block length n with minimal Hamming distance d.

Proposition 7.1. The maximal size of a q-ary code of block length n with minimal Hamming distance d = pn satisfies

A_q(n, d + 1) ≥ q^{n(1−h_q(p))},   (7.22)

where p ∈ [0, 1 − 1/q] and h_q(p) = p log_q(q − 1) − p log_q p − (1 − p) log_q(1 − p) is the q-ary entropy function.

We now have all the elements needed to construct our hypotheses set. Denote Ω₁ = {ω¹, ..., ω^N}, the subset of Ω₀ without ω⁰. We then consider a subset E₁ of E given by E₁ := { f_ω(t) = Σ_{j=1}^{M} ω_j φ_j(t) : ω ∈ Ω₁ }.
Clearly, S₁ := Card(E₁) ≥ 2^{M/8}. Then we define a new collection of matrix valued functions as

C = { Ã = (a_{ij}) ∈ C^{(m/2)×(r/2)} : a_{ij} ∈ {δ f_ω : ω ∈ Ω₁, δ ∈ C}, ∀ 1 ≤ i ≤ m/2, 1 ≤ j ≤ r/2 }.

Obviously, the collection C is an S₁-ary code of block length mr/4. Thus we can apply the result of Proposition 7.1. It is easy to check that for p = 1/4 and q ≥ 4,

1 − h_q(p) = 1 − p log_q((q − 1)/p) + (1 − p) log_q(1 − p) ≥ 1/4.   (7.23)

In our case, q = S₁ ≥ 2^{M/8} and n = mr/4. If we take p = 1/4, we know that

A_{S₁}(mr/4, mr/16) ≥ A_{S₁}(mr/4, mr/16 + 1) ≥ S₁^{mr/16}.   (7.24)

In other words, (7.24) guarantees that there exists a subset H₀ ⊂ C with Card(H₀) ≥ 2^{Mmr/128} such that for any A₁, A₂ ∈ H₀, the Hamming distance between A₁ and A₂ is at least mr/16. Now we define the building blocks of our hypotheses set

H := H₀ ∪ { O_{(m/2)×(r/2)} },

where O_{(m/2)×(r/2)} is the (m/2) × (r/2) zero matrix. Obviously, H has size Card(H) ≥ 2^{Mmr/64} + 1, and for any A₁(t), A₂(t) ∈ H, the minimum Hamming distance is still greater than mr/16. We consider the set of matrix valued functions

B(H) = { ( Ã Ã ... Ã O ) : Ã ∈ H },

where O denotes the m/2 × (m/2 − r⌊m/r⌋/2) zero matrix. Finally, our hypotheses set of matrix valued functions H_m is defined as

H_m = { [ Õ, \hat{A} ; \hat{A}*, Õ ] ∈ C^{m×m} : \hat{A} ∈ B(H) }.

By the definition of H_m and arguments similar to those in the proof of Theorem 5.1, it is easy to check that H_m ⊂ A(r, a), and also

Card(H_m) ≥ 2^{Mmr/64} + 1.   (7.25)

Now consider any two different hypotheses A_j(t), A_k(t) ∈ H_m:

∫₀¹ ‖A_j(t) − A_k(t)‖₂² dt ≥ γ² (mr/16)⌊m/r⌋ ∫₀¹ (f_ω(t) − f_{ω′}(t))² dt,   (7.26)

where ω ≠ ω′. Based on (7.21), we have

(1/m²) ∫₀¹ ‖A_j(t) − A_k(t)‖₂² dt ≥ γ² L² h_n^{2β} ‖f‖₂² / 256 ≥ c_* (mr/n)^{2β/(2β+1)},   (7.27)

where c_* is a constant depending on ‖f‖₂, L, c₁ and γ.
On the other hand, we repeat the same analysis of the Kullback-Leibler divergence K(P₀, P_A) as in the proof of Theorem 5.1. One can get

K(P₀, P_A) ≤ (n/(8a²)) E⟨A(τ), X⟩² ≤ (n γ²/(8a²)) Σ_{j=1}^{M} ∫₀¹ φ_j²(τ) dτ ≤ γ² c₁^{2β+1} L² M mr ‖f‖₂² / (8a²),   (7.28)

where A(τ) ∈ H_m. Combining (7.25) and (7.28), we know that

(1/(Card(H_m) − 1)) Σ_{A(t)∈H_m} K(P₀, P_A) ≤ α log(Card(H_m) − 1)   (7.29)

is satisfied for any α > 0 if γ is chosen as a sufficiently small constant. In view of (7.27) and (7.29), the lower bound follows from Theorem 2.5 in [31].
7.6. Proof of Theorem 5.3
Proof. Without loss of generality, assume that m is an even number. For some constant γ > 0, denote V = { v ∈ C^{m/2} : v_i ∈ {0, γ}, ∀ 1 ≤ i ≤ m/2 }. Due to the Varshamov-Gilbert bound (see Lemma 2.9 in [31]), there exists a subset V⁰ ⊂ V with cardinality Card(V⁰) ≥ 2^{m/16} + 1, containing the zero vector 0 ∈ C^{m/2}, such that for any two distinct elements v₁ and v₂ of V⁰,

‖v₁ − v₂‖₂² ≥ m γ²/16.   (7.30)

Consider the set of matrices

B(V) = { ( v v ... v ) ∈ C^{(m/2)×(m/2)} : v ∈ V⁰ }.

Clearly, B(V) is a collection of rank one matrices. Then we construct another matrix set V_m,

V_m = { [ Õ, V ; V*, Õ ] ∈ C^{m×m} : V ∈ B(V) },

where Õ is the m/2 × m/2 zero matrix. Apparently, V_m ⊂ H_m. On the other hand, we define the grid on [0, 1]:

M = ⌈ c₂ (n/(m + log n))^{1/(2β+1)} ⌉,   h_n = 1/(2M),   t_j = (2j − 1)/(2M),   j = 1, ..., M,

and

φ_j(t) = L h_n^β f((t − t_j)/h_n),   t ∈ [0, 1],

where f is defined as in (7.17) and c₂ is some constant. Denote Φ := {φ_j : j = 1, ..., M}. We consider the following set of hypotheses: A_B^β := { \hat{A}(t) = V φ_j(t) : V ∈ V_m, φ_j ∈ Φ }. One immediately gets that the size of A_B^β satisfies

Card(A_B^β) ≥ (2^{m/16} + 1) M.   (7.31)

By construction, the following claims are obvious: any element \hat{A}(t) of A_B^β has rank at most 2; the entries of \hat{A}(t) ∈ A_B^β are uniformly bounded for some sufficiently small γ; and \hat{A}_{ij}(t) ∈ Σ(β, L). Thus A_B^β ⊂ A(a).
Now we consider the distance between two distinct elements A(t) and A′(t) of A_B^β. An immediate observation is that

sup_{t∈[0,1]} ‖A(t) − A′(t)‖² ≥ (1/4) sup_{t∈[0,1]} ‖A(t) − A′(t)‖₂²,

due to the fact that ∀t ∈ (0, 1), rank(A(t) − A′(t)) ≤ 4. Then we turn to lower bounding sup_{t∈(0,1)} ‖A(t) − A′(t)‖₂². Recall that by the construction of A_B^β, for any A ≠ A′ we have A(t) = A₁ φ_j(t), A′(t) = A₂ φ_k(t), where A₁, A₂ ∈ V_m. There are three cases to consider: 1) A₁ ≠ A₂ and j = k; 2) A₁ = A₂ ≠ 0 and j ≠ k; 3) A₁ ≠ A₂ and j ≠ k.

For case 1,

sup_{t∈[0,1]} ‖A(t) − A′(t)‖₂² = ‖A₁ − A₂‖₂² ‖φ_j‖²_∞ ≥ (m²/16) γ² L² h_n^{2β} ‖f‖²_∞ ≥ c₁* m² ((m + log n)/n)^{2β/(2β+1)},

where c₁* is a constant depending on ‖f‖²_∞, β, L and γ.

For case 2,

sup_{t∈[0,1]} ‖A(t) − A′(t)‖₂² = ‖A₁‖₂² ‖φ_j − φ_k‖²_∞ ≥ (m²/16) γ² L² h_n^{2β} ‖f‖²_∞ ≥ c₂* m² ((m + log n)/n)^{2β/(2β+1)},

where c₂* is a constant depending on ‖f‖²_∞, β, L and γ.

For case 3,

sup_{t∈[0,1]} ‖A(t) − A′(t)‖₂² ≥ ‖A₁‖₂² ‖φ_j‖²_∞ ∨ ‖A₂‖₂² ‖φ_k‖²_∞ ≥ (m²/16) γ² L² h_n^{2β} ‖f‖²_∞ ≥ c₃* m² ((m + log n)/n)^{2β/(2β+1)},

where c₃* is a constant depending on ‖f‖²_∞, β, L and γ.

Therefore, by the analysis above, we conclude that for any two distinct elements A(t) and A′(t) of A_B^β,

sup_{t∈[0,1]} ‖A(t) − A′(t)‖² ≥ (1/4) sup_{t∈[0,1]} ‖A(t) − A′(t)‖₂² ≥ c* m² ((m + log n)/n)^{2β/(2β+1)},   (7.32)

where c* is a constant depending on ‖f‖²_∞, L, γ and β.
Meanwhile, we repeat the same analysis of the Kullback-Leibler divergence K(P₀, P_A) as in the proof of Theorem 5.1. One can get that for any A ∈ A_B^β, the Kullback-Leibler divergence K(P₀, P_A) between P₀ and P_A satisfies

K(P₀, P_A) ≤ (n/(8a²)) E|⟨A(τ), X⟩|² ≤ (n γ²/(8a²)) ∫₀¹ φ_j²(τ) dτ ≤ γ² c₂^{2β+1} L² (m + log n) ‖f‖₂² / (8a²).   (7.33)

Combining (7.31) and (7.33), we know that

(1/(Card(A_B^β) − 1)) Σ_{A∈A_B^β} K(P₀, P_A) ≤ α log(Card(A_B^β) − 1)   (7.34)

is satisfied for any α > 0 if γ is chosen as a sufficiently small constant. In view of (7.32) and (7.34), the lower bound follows from Theorem 2.5 in [31].
7.7. Proof of Theorem 6.1

Proof. For any \hat{A}_k, denote the difference in empirical loss between \hat{A}_k and A by

r_n(\hat{A}_k, A) := (1/n) Σ_{j=1}^{n} (Y_j − ⟨\hat{A}_k(τ_j), X_j⟩)² − (1/n) Σ_{j=1}^{n} (Y_j − ⟨A(τ_j), X_j⟩)² = −(1/n) Σ_{j=1}^{n} U_j,

where U_j = (Y_j − ⟨A(τ_j), X_j⟩)² − (Y_j − ⟨\hat{A}_k(τ_j), X_j⟩)². It is easy to check that

U_j = 2(Y_j − ⟨A(τ_j), X_j⟩)⟨\hat{A}_k(τ_j) − A(τ_j), X_j⟩ − ⟨\hat{A}_k(τ_j) − A(τ_j), X_j⟩².   (7.35)

We denote r(\hat{A}_k, A) := E⟨\hat{A}_k(τ) − A(τ), X⟩². The following concentration inequality, developed by [10] to prove Bernstein's inequality, is key to our proof.

Lemma 2. Let U_j, j = 1, ..., n be independent bounded random variables satisfying |U_j − EU_j| ≤ M, and set h = M/3 and Ū = n^{−1} Σ_{j=1}^{n} U_j. Then for all t > 0,

P{ Ū − EŪ ≥ t/(nε) + nε var(Ū)/(2(1 − c)) } ≤ e^{−t},

with 0 < εh ≤ c < 1.

Firstly, we bound the variance of U_j. Under the assumption that |Y| and |⟨A(τ), X⟩| are bounded by a constant a, one can easily check that h = 8a²/3. Given E(Y_j | τ_j, X_j) = ⟨A(τ_j), X_j⟩, we know that the covariance between the two terms on the right hand side of (7.35) is zero. Conditionally on (τ, X), the second moment of the first term satisfies

4 E σ²_{Y|τ,X} ⟨\hat{A}_k(τ_j) − A(τ_j), X_j⟩² ≤ 4a² r(\hat{A}_k, A).

To see why, one can consider the random variable Ỹ with distribution P{Ỹ = a} = P{Ỹ = −a} = 1/2. The variance of Y is always bounded by the variance
of Ỹ, which is a², under the assumption that |Y_j| and |⟨\hat{A}_k(τ_j), X_j⟩| are bounded by a constant a > 0. Similarly, we can get that the variance of the second term conditioned on (τ, X) is also bounded by 4a² E⟨\hat{A}_k(τ_j) − A(τ_j), X_j⟩². As a result, n var(Ū) ≤ 8a² r(\hat{A}_k, A). By the result of Lemma 2, we have for any \hat{A}_k, with probability at least 1 − e^{−t},

r(\hat{A}_k, A) − r_n(\hat{A}_k, A) < t/(nε) + 4a² ε r(\hat{A}_k, A)/(1 − c).

Setting t = επ_k + log(1/δ), we get with probability at least 1 − δ e^{−επ_k},

(1 − α) r(\hat{A}_k, A) < r_n(\hat{A}_k, A) + π_k/n + (4a²/((1 − c)α)) (log(1/δ))/n,

where α = 4a²ε/(1 − c) < 1. Denote

k̃* = \arg\min_k { r(\hat{A}_k, A) + π_k/n }.

By the definition of \hat{A}*, we have with probability at least 1 − δ e^{−ε\hat{π}*},

(1 − α) r(\hat{A}*, A) < r_n(\hat{A}_{k̃*}, A) + π_{k̃*}/n + (4a²/((1 − c)α)) (log(1/δ))/n,   (7.36)

where \hat{π}* is the penalty term associated with \hat{A}*.

Now we apply the result of Lemma 2 one more time with t = log(1/δ); we get with probability at least 1 − δ,

r_n(\hat{A}_{k̃*}, A) ≤ (1 + α) r(\hat{A}_{k̃*}, A) + (4a²/((1 − c)α)) (log(1/δ))/n.   (7.37)

Applying the union bound to (7.36) and (7.37), we get with probability at least 1 − δ(1 + e^{−ε\hat{π}*}),

r(\hat{A}*, A) ≤ ((1 + α)/(1 − α)) [ r(\hat{A}_{k̃*}, A) + π_{k̃*}/n ] + (4a²/((1 − c)α(1 − α))) (log(1/δ))/n.

By taking ε = 3/(32a²) and c = εh,

r(\hat{A}*, A) ≤ 3 [ r(\hat{A}_{k̃*}, A) + π_{k̃*}/n ] + (64a²/3) (log(1/δ))/n.

By taking δ = n^{−mr} and adjusting the constants, we have with probability at least 1 − n^{−mr},

(1/m²) ∫₀¹ ‖\hat{A}*(t) − A(t)‖₂² dt ≤ 3 min_k { (1/m²) ∫₀¹ ‖\hat{A}_k(t) − A(t)‖₂² dt + π_k/n } + C(a) (mr log n)/n,

where C(a) is a constant depending on a.
Acknowledgements
The author would like to thank Dr. Vladimir Koltchinskii for all the guidance,
discussion and inspiration along the course of writing this paper. The author also
would like to thank NSF for its generous support through Grants DMS-1509739
and CCF-1523768.
References
[1] Rudolf Ahlswede and Andreas Winter. Strong converse for identification
via quantum channels. IEEE Transactions on Information Theory, 48(3):
569–579, 2002.
[2] Jean-Pierre Aubin and Ivar Ekeland. Applied nonlinear analysis. Courier
Corporation, 2006.
[3] Andrew R Barron. Complexity regularization with application to artificial
neural networks. Nonparametric Functional Estimation and Related Topics,
335:561–576, 1991.
[4] Olivier Bousquet. A Bennett concentration inequality and its application
to suprema of empirical processes. Comptes Rendus Mathematique, 334(6):
495–500, 2002.
[5] Jian-Feng Cai, Emmanuel J Candès, and Zuowei Shen. A singular value
thresholding algorithm for matrix completion. SIAM Journal on Optimization, 20(4):1956–1982, 2010.
[6] Emmanuel J Candes and Yaniv Plan. Matrix completion with noise. Proceedings of the IEEE, 98(6):925–936, 2010.
[7] Emmanuel J Candès and Benjamin Recht. Exact matrix completion via
convex optimization. Foundations of Computational mathematics, 9(6):717,
2009.
[8] Emmanuel J Candès and Terence Tao. The power of convex relaxation:
Near-optimal matrix completion. IEEE Transactions on Information Theory, 56(5):2053–2080, 2010.
[9] Sourav Chatterjee. Matrix estimation by universal singular value thresholding. The Annals of Statistics, 43(1):177–214, 2015.
[10] Cecil C Craig. On the Tchebychef inequality of Bernstein. The Annals of
Mathematical Statistics, 4(2):94–102, 1933.
[11] Jianqing Fan and Irene Gijbels. Local polynomial modelling and its applications: monographs on statistics and applied probability 66, volume 66. CRC
Press, 1996.
[12] Edgar N Gilbert. A comparison of signalling alphabets. Bell Labs Technical
Journal, 31(3):504–522, 1952.
[13] David Gross. Recovering low-rank matrices from few coefficients in any
basis. IEEE Transactions on Information Theory, 57(3):1548–1566, 2011.
[14] Vladimir Koltchinskii. Local rademacher complexities and oracle inequalities in risk minimization. The Annals of Statistics, 34(6):2593–2656, 2006.
[15] Vladimir Koltchinskii. Oracle inequalities in empirical risk minimization
and sparse recovery problems. 2011.
[16] Vladimir Koltchinskii. Von Neumann entropy penalization and low-rank
matrix estimation. The Annals of Statistics, pages 2936–2973, 2011.
[17] Vladimir Koltchinskii. Sharp oracle inequalities in low rank estimation. In
Empirical Inference, pages 217–230. Springer, 2013.
[18] Vladimir Koltchinskii, Karim Lounici, and Alexandre B Tsybakov. Nuclearnorm penalization and optimal rates for noisy low-rank matrix completion.
The Annals of Statistics, 39(5):2302–2329, 2011.
[19] Yehuda Koren. Collaborative filtering with temporal dynamics. Communications of the ACM, 53(4):89–97, 2010.
[20] Yehuda Koren, Robert Bell, and Chris Volinsky. Matrix factorization techniques for recommender systems. Computer, 42(8), 2009.
[21] Oleg V Lepski and Vladimir G Spokoiny. Optimal pointwise adaptive methods in nonparametric estimation. The Annals of Statistics, pages 2512–
2546, 1997.
[22] Oleg V Lepski, Enno Mammen, and Vladimir G Spokoiny. Optimal spatial
adaptation to inhomogeneous smoothness: an approach based on kernel
estimates with variable bandwidth selectors. The Annals of Statistics, pages
929–947, 1997.
[23] Oleg V Lepskii. On a problem of adaptive estimation in gaussian white
noise. Theory of Probability & Its Applications, 35(3):454–466, 1991.
[24] Elliott H Lieb. Convex trace functions and the wigner-yanase-dyson conjecture. Advances in Mathematics, 11(3):267–288, 1973.
[25] Sahand Negahban and Martin J Wainwright. Restricted strong convexity
and weighted matrix completion: Optimal bounds with noise. Journal of
Machine Learning Research, 13(May):1665–1697, 2012.
[26] Benjamin Recht, Maryam Fazel, and Pablo A Parrilo. Guaranteed
minimum-rank solutions of linear matrix equations via nuclear norm minimization. SIAM review, 52(3):471–501, 2010.
[27] Angelika Rohde and Alexandre B Tsybakov.
Estimation of highdimensional low-rank matrices. The Annals of Statistics, 39(2):887–930,
2011.
[28] Amit Singer and Mihai Cucuringu. Uniqueness of low-rank matrix completion by rigidity theory. SIAM Journal on Matrix Analysis and Applications,
31(4):1621–1641, 2010.
[29] Michel Talagrand. New concentration inequalities in product spaces. Inventiones mathematicae, 126(3):505–563, 1996.
[30] Joel A Tropp. User-friendly tail bounds for sums of random matrices.
Foundations of computational mathematics, 12(4):389–434, 2012.
[31] Alexandre B Tsybakov. Introduction to nonparametric estimation. Revised and extended from the 2004 French original, translated by Vladimir Zaiats, 2009.
[32] Rom R Varshamov. Estimate of the number of signals in error correcting
codes. In Dokl. Akad. Nauk SSSR, volume 117, pages 739–741, 1957.
[33] Marten Wegkamp. Model selection in nonparametric regression. The Annals of Statistics, 31(1):252–273, 2003.
[34] Luwan Zhang, Grace Wahba, and Ming Yuan. Distance shrinkage and euclidean embedding via regularized kernel estimation. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 78(4):849–867, 2016.
8. Appendix: Proof of Lemma 1
The proof of Lemma 1 follows a similar approach to the one introduced in [17].
Proof. For any $S \in \mathbb{H}_m$ of rank $r$, $S = \sum_{j=1}^{r} \lambda_j (e_j \otimes e_j)$, where $\lambda_j$ are the nonzero eigenvalues of $S$ (repeated with their multiplicities) and $e_j \in \mathbb{C}^m$ are the corresponding orthonormal eigenvectors. Denote $\mathrm{sign}(S) := \sum_{j=1}^{r} \mathrm{sign}(\lambda_j)(e_j\otimes e_j)$. Let $\mathcal{P}_L$, $\mathcal{P}_{L^\perp}$ be the following orthogonal projectors in the space $(\mathbb{H}_m, \langle\cdot,\cdot\rangle)$:
$$\mathcal{P}_L(A) := A - P_{L^\perp} A P_{L^\perp}, \qquad \mathcal{P}_{L^\perp}(A) := P_{L^\perp} A P_{L^\perp}, \qquad \forall A \in \mathbb{H}_m,$$
where $P_L$ denotes the orthogonal projector onto the linear span of $\{e_1, \ldots, e_r\}$ and $P_{L^\perp}$ is its orthogonal complement. Clearly, this formulation provides a decomposition of a matrix $A$ into a "low rank part" $\mathcal{P}_L(A)$ and a "high rank part" $\mathcal{P}_{L^\perp}(A)$ if $\mathrm{rank}(S) = r$ is small. Given $b > 0$, define the following cone in the space $\mathbb{H}_m$:
$$\mathcal{K}(\mathcal{D}; L; b) := \{A \in \mathcal{D} : \|\mathcal{P}_{L^\perp} A\|_1 \le b\,\|\mathcal{P}_L(A)\|_1\},$$
which consists of matrices with a "dominant" low rank part if $S$ is of low rank.
Firstly, we can rewrite (3.1) as
$$\hat{S}^h = \mathop{\arg\min}_{S\in\mathcal{D}}\ \frac{1}{n}\sum_{j=1}^{n} \Big(\tilde{Y}_j - \big\langle S, \tilde{X}_j\big\rangle\Big)^2 + \varepsilon\|S\|_1, \tag{8.1}$$
where
$$\tilde{X}_j = \mathrm{Diag}\bigg[\sqrt{\tfrac{1}{h}K\Big(\tfrac{\tau_j - t_0}{h}\Big)}\, p_0\Big(\tfrac{\tau_j-t_0}{h}\Big) X_j,\ \sqrt{\tfrac{1}{h}K\Big(\tfrac{\tau_j - t_0}{h}\Big)}\, p_1\Big(\tfrac{\tau_j-t_0}{h}\Big) X_j,\ \ldots,\ \sqrt{\tfrac{1}{h}K\Big(\tfrac{\tau_j - t_0}{h}\Big)}\, p_\ell\Big(\tfrac{\tau_j-t_0}{h}\Big) X_j\bigg]$$
and $\tilde{Y}_j = \sqrt{\tfrac{1}{h}K\big(\tfrac{\tau_j - t_0}{h}\big)}\, Y_j$.
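Problem (8.1) is a penalized least squares problem with a nuclear norm penalty. A minimal proximal-gradient sketch for problems of this form is shown below; it assumes real symmetric matrices, pre-computed kernel-weighted designs and responses (hypothetical names `X_tilde`, `y_tilde`), and a crude step size, so it is an illustration of the optimization template rather than the estimator used in the analysis.

```python
import numpy as np

def svt(M, tau):
    # Singular value soft-thresholding (prox of tau * nuclear norm).
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def nuclear_norm_ls(X_tilde, y_tilde, eps, n_iter=500, step=None):
    """Proximal gradient for
       min_S (1/n) * sum_j (y_j - <S, X_j>)^2 + eps * ||S||_nuclear."""
    n = len(y_tilde)
    m = X_tilde[0].shape[0]
    if step is None:
        # conservative step size from a crude Lipschitz bound on the gradient
        L = (2.0 / n) * sum(np.linalg.norm(X, 'fro') ** 2 for X in X_tilde)
        step = 1.0 / L
    S = np.zeros((m, m))
    for _ in range(n_iter):
        residuals = [np.sum(S * X) - y for X, y in zip(X_tilde, y_tilde)]
        grad = (2.0 / n) * sum(r * X for r, X in zip(residuals, X_tilde))
        S = svt(S - step * grad, step * eps)
    return S
```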
Denote the loss function as
$$L\big(\tilde{Y}; \langle S(\tau), \tilde{X}\rangle\big) := \Big(\tilde{Y}_j - \big\langle S, \tilde{X}_j\big\rangle\Big)^2,$$
and the risk as
$$P L\big(\tilde{Y}; \langle S(\tau), \tilde{X}\rangle\big) := \mathbb{E}\, L\big(\tilde{Y}; \langle S(\tau), \tilde{X}\rangle\big) = \sigma^2 + \mathbb{E}\,\frac{1}{h}K\Big(\frac{\tau - t_0}{h}\Big)\big(Y - \langle S(\tau), X\rangle\big)^2.$$
Since $\hat{S}^h$ is a solution of the convex optimization problem (8.1), there exists a $\hat{V} \in \partial\|\hat{S}^h\|_1$ such that for all $S \in \mathcal{D}$ (see [2], Chap. 2),
$$\frac{2}{n}\sum_{j=1}^{n}\big(\langle \hat{S}^h, \tilde{X}_j\rangle - \tilde{Y}_j\big)\,\langle \hat{S}^h - S, \tilde{X}_j\rangle + \varepsilon\,\langle \hat{V}, \hat{S}^h - S\rangle \le 0.$$
This implies that, for all $S \in \mathcal{D}$,
$$\mathbb{E}L'(\tilde{Y}; \langle \hat{S}^h, \tilde{X}\rangle)\,\langle \hat{S}^h - S, \tilde{X}\rangle + \varepsilon\,\langle \hat{V}, \hat{S}^h - S\rangle \le \mathbb{E}L'(\tilde{Y}; \langle \hat{S}^h, \tilde{X}\rangle)\,\langle \hat{S}^h - S, \tilde{X}\rangle - \frac{2}{n}\sum_{j=1}^{n}\big(\langle \hat{S}^h, \tilde{X}_j\rangle - \tilde{Y}_j\big)\,\langle \hat{S}^h - S, \tilde{X}_j\rangle, \tag{8.2}$$
where $L'$ denotes the partial derivative of $L(y; u)$ with respect to $u$. One can easily check that for all $S \in \mathcal{D}$,
$$\mathbb{E}L'(\tilde{Y}; \langle \hat{S}^h, \tilde{X}\rangle)\,\langle \hat{S}^h - S, \tilde{X}\rangle \ge \mathbb{E}\big(L(\tilde{Y}; \langle \hat{S}^h, \tilde{X}\rangle) - L(\tilde{Y}; \langle S, \tilde{X}\rangle)\big) + \|\hat{S}^h - S\|^2_{L_2(\tilde{\Pi})}, \tag{8.3}$$
where $\tilde{\Pi}$ denotes the distribution of $\tilde{X}$. If $\mathbb{E}L(\tilde{Y}; \langle \hat{S}^h, \tilde{X}\rangle) \le \mathbb{E}L(\tilde{Y}; \langle S, \tilde{X}\rangle)$ for all $S \in \mathcal{D}$, then the oracle inequality in Lemma 1 holds trivially. So we assume that $\mathbb{E}L(\tilde{Y}; \langle \hat{S}^h, \tilde{X}\rangle) > \mathbb{E}L(\tilde{Y}; \langle S, \tilde{X}\rangle)$ for some $S \in \mathcal{D}$. Thus, inequalities (8.2) and (8.3) imply that
$$\mathbb{E}L(\tilde{Y}; \langle \hat{S}^h, \tilde{X}\rangle) + \|\hat{S}^h - S\|^2_{L_2(\tilde{\Pi})} + \varepsilon\,\langle \hat{V}, \hat{S}^h - S\rangle \le \mathbb{E}L(\tilde{Y}; \langle S, \tilde{X}\rangle) + \mathbb{E}L'(\tilde{Y}; \langle \hat{S}^h, \tilde{X}\rangle)\,\langle \hat{S}^h - S, \tilde{X}\rangle - \frac{2}{n}\sum_{j=1}^{n}\big(\langle \hat{S}^h, \tilde{X}_j\rangle - \tilde{Y}_j\big)\,\langle \hat{S}^h - S, \tilde{X}_j\rangle. \tag{8.4}$$
According to the well known representation of the subdifferential of the nuclear norm, see [15], Sec. A.4, for any $V \in \partial\|S\|_1$ we have
$$V := \mathrm{sign}(S) + \mathcal{P}_{L^\perp}(W), \qquad W \in \mathbb{H}_m,\ \|W\| \le 1.$$
By the duality between the nuclear norm and the operator norm,
$$\langle \mathcal{P}_{L^\perp}(W), \hat{S}^h - S\rangle = \langle \mathcal{P}_{L^\perp}(W), \hat{S}^h\rangle = \langle W, \mathcal{P}_{L^\perp}(\hat{S}^h)\rangle = \|\mathcal{P}_{L^\perp}(\hat{S}^h)\|_1.$$
Therefore, by the monotonicity of subdifferentials of the convex function $\|\cdot\|_1$, for any $V := \mathrm{sign}(S) + \mathcal{P}_{L^\perp}(W) \in \partial\|S\|_1$, we have
$$\langle V, \hat{S}^h - S\rangle = \langle \mathrm{sign}(S), \hat{S}^h - S\rangle + \|\mathcal{P}_{L^\perp}(\hat{S}^h)\|_1 \le \langle \hat{V}, \hat{S}^h - S\rangle. \tag{8.5}$$
We can use (8.5) to change the bound in (8.4) and get
$$\mathbb{E}L(\tilde{Y}; \langle \hat{S}^h, \tilde{X}\rangle) + \|S - \hat{S}^h\|^2_{L_2(\tilde{\Pi})} + \varepsilon\,\|\mathcal{P}_{L^\perp}(\hat{S}^h)\|_1 \le \mathbb{E}L(\tilde{Y}; \langle S, \tilde{X}\rangle) + \varepsilon\,\langle \mathrm{sign}(S), S - \hat{S}^h\rangle + \mathbb{E}L'(\tilde{Y}; \langle \hat{S}^h, \tilde{X}\rangle)\,\langle \hat{S}^h - S, \tilde{X}\rangle - \frac{2}{n}\sum_{j=1}^{n}\big(\langle \hat{S}^h, \tilde{X}_j\rangle - \tilde{Y}_j\big)\,\langle \hat{S}^h - S, \tilde{X}_j\rangle. \tag{8.6}$$
For simplicity of presentation, we use the following notation for the empirical process:
$$(P - P_n)\big(L'(\tilde{Y}; \langle \hat{S}^h, \tilde{X}\rangle)\big)\,\langle \hat{S}^h - S, \tilde{X}\rangle := \mathbb{E}L'(\tilde{Y}; \langle \hat{S}^h, \tilde{X}\rangle)\,\langle \hat{S}^h - S, \tilde{X}\rangle - \frac{2}{n}\sum_{j=1}^{n}\big(\langle \hat{S}^h, \tilde{X}_j\rangle - \tilde{Y}_j\big)\,\langle \hat{S}^h - S, \tilde{X}_j\rangle. \tag{8.7}$$
The remaining part of the proof derives an upper bound on the empirical process (8.7). Before we start with the derivation, let us present several ingredients that will be used in what follows. For a given $S \in \mathcal{D}$ and for $\delta_1, \delta_2, \delta_3, \delta_4 \ge 0$, denote
A(δ1 , δ2 ) := {A ∈ D : A − S ∈ K(D; L; b), kA − SkL2 (Π̃) ≤ δ1 , kPL⊥ Ak1 ≤ δ2 },
Ã(δ1 , δ2 , δ3 ) := {A ∈ D : kA − SkL2 (Π̃) ≤ δ1 , kPL⊥ Ak1 ≤ δ2 , kPL (A − S)k1 ≤ δ3 },
and
Ǎ(δ1 , δ4 ) := {A ∈ D : kA − SkL2 (Π̃) ≤ δ1 , kA − Sk1 ≤ δ4 },
αn (δ1 , δ2 ) := sup{|(P − Pn )(L′ (Ỹ ; hA, X̃i))hA − S, X̃i| : A ∈ A(δ1 , δ2 )},
α̃n (δ1 , δ2 , δ3 ) := sup{|(P − Pn )(L′ (Ỹ ; hA, X̃i))hA − S, X̃i| : A ∈ Ã(δ1 , δ2 , δ3 )},
α̌n (δ1 , δ4 ) := sup{|(P − Pn )(L′ (Ỹ ; hA, X̃i))hA − S, X̃i| : A ∈ Ǎ(δ1 , δ4 )}.
Given the definitions above, Lemma 3 below shows upper bounds on the three quantities $\alpha_n(\delta_1, \delta_2)$, $\tilde{\alpha}_n(\delta_1, \delta_2, \delta_3)$, $\check{\alpha}_n(\delta_1, \delta_4)$. The proof of Lemma 3 can be found in Section 8.1. Denote
$$\Xi := n^{-1}\sum_{j=1}^{n}\varepsilon_j \tilde{X}_j, \tag{8.8}$$
where $\varepsilon_j$ are i.i.d. Rademacher random variables.
Lemma 3. Suppose $0 < \delta_k^- < \delta_k^+$, $k = 1, 2, 3, 4$. Let $\eta > 0$ and
$$\bar{\eta} := \eta + \sum_{k=1}^{2}\log\Big(\Big[\log_2\Big(\frac{\delta_k^+}{\delta_k^-}\Big)\Big] + 2\Big) + \log 3,$$
$$\tilde{\eta} := \eta + \sum_{k=1}^{3}\log\Big(\Big[\log_2\Big(\frac{\delta_k^+}{\delta_k^-}\Big)\Big] + 2\Big) + \log 3,$$
$$\check{\eta} := \eta + \sum_{k\in\{1,4\}}\log\Big(\Big[\log_2\Big(\frac{\delta_k^+}{\delta_k^-}\Big)\Big] + 2\Big) + \log 3.$$
Then with probability at least $1 - e^{-\eta}$, for all $\delta_k \in [\delta_k^-, \delta_k^+]$, $k = 1, 2, 3$,
$$\alpha_n(\delta_1, \delta_2) \le \frac{C_1(\ell+1)R(T)\Phi a}{\sqrt{h}}\Big\{\mathbb{E}\|\Xi\|\big(\sqrt{\mathrm{rank}(S)}\, m\,\delta_1 + \delta_2\big) + \delta_1\sqrt{\frac{\bar{\eta}}{n}}\Big\} + \frac{2(\ell+1)R(T)\Phi a\,\bar{\eta}}{\sqrt{n h}}, \tag{8.9}$$
$$\tilde{\alpha}_n(\delta_1, \delta_2, \delta_3) \le \frac{C_2(\ell+1)R(T)\Phi a}{\sqrt{h}}\Big\{\mathbb{E}\|\Xi\|\big(\delta_2 + \delta_3\big) + \delta_1\sqrt{\frac{\tilde{\eta}}{n}}\Big\} + \frac{2(\ell+1)R(T)\Phi a\,\tilde{\eta}}{\sqrt{n h}}, \tag{8.10}$$
$$\check{\alpha}_n(\delta_1, \delta_4) \le \frac{C_3(\ell+1)R(T)\Phi a}{\sqrt{h}}\Big\{\mathbb{E}\|\Xi\|\,\delta_4 + \delta_1\sqrt{\frac{\check{\eta}}{n}}\Big\} + \frac{2(\ell+1)R(T)\Phi a\,\check{\eta}}{\sqrt{n h}}, \tag{8.11}$$
where $C_1$, $C_2$, and $C_3$ are numerical constants.
Since both Sbh and S are in D, by the definition of α̃ and α̌, we have
(P − Pn )(L′ (Ỹ ; hSbh , X̃i))hSbh − S, X̃i ≤ α̃(kSbh − SkL2 (Π̃) ; kPL⊥ Sbh k1 ; kPL (Sbh − S)k1 ),
(8.12)
and
(P − Pn )(L′ (Ỹ ; hSbh , X̃i))hSbh − S, X̃i ≤ α̌(kSbh − SkL2 (Π̃) ; kSbh − Sk1 ), (8.13)
If Sbh − S ∈ K(D; L; b), by the definition of α, we have
(P − Pn )(L′ (Ỹ ; hSbh , X̃i))hSbh − S, X̃i ≤ α(kSbh − SkL2 (Π̃) ; kPL⊥ Sbh k1 ), (8.14)
Assume for the moment that
kSbh − SkL2 (Π̃) ∈ [δ1− , δ1+ ], kPL⊥ Sbh k1 ∈ [δ2− , δ2+ ], kPL⊥ (Sbh − S)k1 ∈ [δ3− , δ3+ ].
(8.15)
By the definition of subdifferential, for any Vb ∈ ∂kSbhk1 ,
hVb , S − Sbh i ≤ kSk1 − kSbh k1 .
Then we apply (8.13) in bound (8.4) and use the upper bound on $\check{\alpha}_n(\delta_1, \delta_4)$ of Lemma 3, and get with probability at least $1 - e^{-\eta}$,
$$\begin{aligned} P(L(\tilde{Y}; \langle\hat{S}^h, \tilde{X}\rangle)) + \|\hat{S}^h - S\|^2_{L_2(\tilde{\Pi})} &\le P(L(\tilde{Y}; \langle S, \tilde{X}\rangle)) + \varepsilon\big(\|S\|_1 - \|\hat{S}^h\|_1\big) + \check{\alpha}_n\big(\|\hat{S}^h - S\|_{L_2(\tilde{\Pi})}, \|\hat{S}^h - S\|_1\big)\\ &\le P(L(\tilde{Y}; \langle S, \tilde{X}\rangle)) + \varepsilon\big(\|S\|_1 - \|\hat{S}^h\|_1\big)\\ &\quad + \frac{C_3(\ell+1)R(T)\Phi a}{\sqrt{h}}\Big\{\mathbb{E}\|\Xi\|\,\|\hat{S}^h - S\|_1 + \|\hat{S}^h - S\|_{L_2(\tilde{\Pi})}\sqrt{\frac{\check{\eta}}{n}}\Big\} + \frac{2(\ell+1)R(T)\Phi a\,\check{\eta}}{\sqrt{n h}}. \end{aligned} \tag{8.16}$$
Assume that
$$\varepsilon > \frac{C(\ell+1)R(T)\Phi a}{\sqrt{h}}\,\mathbb{E}\|\Xi\|, \tag{8.17}$$
where $C = C_1 \vee 4C_2 \vee C_3$. From (8.16),
$$P(L(\tilde{Y}; \langle\hat{S}^h, \tilde{X}\rangle)) \le P(L(\tilde{Y}; \langle S, \tilde{X}\rangle)) + 2\varepsilon\|S\|_1 + \frac{C_3(\ell+1)^2 R(T)^2\Phi^2 a^2\tilde{\eta}}{nh}. \tag{8.18}$$
We now apply the upper bound on $\tilde{\alpha}_n\big(\|\hat{S}^h - S\|_{L_2(\tilde{\Pi})}, \|\mathcal{P}_{L^\perp}(\hat{S}^h)\|_1, \|\mathcal{P}_L(\hat{S}^h - S)\|_1\big)$ to (8.6) and get with probability at least $1 - e^{-\eta}$,
$$\begin{aligned} P(L(\tilde{Y}; \langle\hat{S}^h, \tilde{X}\rangle)) + \|\hat{S}^h - S\|^2_{L_2(\tilde{\Pi})} + \varepsilon\|\mathcal{P}_{L^\perp}(\hat{S}^h)\|_1 &\le P(L(\tilde{Y}; \langle S, \tilde{X}\rangle)) + \varepsilon\|\mathcal{P}_L(\hat{S}^h - S)\|_1 + \tilde{\alpha}_n\big(\|\hat{S}^h - S\|_{L_2(\tilde{\Pi})}, \|\mathcal{P}_{L^\perp}(\hat{S}^h)\|_1, \|\mathcal{P}_L(\hat{S}^h - S)\|_1\big)\\ &\le P(L(\tilde{Y}; \langle S, \tilde{X}\rangle)) + \varepsilon\|\mathcal{P}_L(\hat{S}^h - S)\|_1\\ &\quad + \frac{C_2(\ell+1)R(T)\Phi a}{\sqrt{h}}\Big\{\mathbb{E}\|\Xi\|\big(\|\mathcal{P}_{L^\perp}(\hat{S}^h)\|_1 + \|\mathcal{P}_L(\hat{S}^h - S)\|_1\big)\Big\} + \frac{C_2(\ell+1)^2 R(T)^2\Phi^2 a^2\tilde{\eta}}{nh}, \end{aligned} \tag{8.19}$$
where the first inequality is due to the fact that
$$|\langle\mathrm{sign}(S), S - \hat{S}^h\rangle| = |\langle\mathrm{sign}(S), \mathcal{P}_L(S - \hat{S}^h)\rangle| \le \|\mathrm{sign}(S)\|\,\|\mathcal{P}_L(S - \hat{S}^h)\|_1 \le \|\mathcal{P}_L(S - \hat{S}^h)\|_1.$$
When assumption (8.17) holds, we get from (8.19)
$$P L(\tilde{Y}; \langle\hat{S}^h, \tilde{X}\rangle) + \varepsilon\|\mathcal{P}_{L^\perp}(\hat{S}^h)\|_1 \le P L(\tilde{Y}; \langle S, \tilde{X}\rangle) + \frac{5\varepsilon}{4}\|\mathcal{P}_L(\hat{S}^h - S)\|_1 + \frac{\varepsilon}{4}\|\mathcal{P}_{L^\perp}(\hat{S}^h)\|_1 + \frac{C_2(\ell+1)^2 R(T)^2\Phi^2 a^2\tilde{\eta}}{nh}. \tag{8.20}$$
If the following is satisfied:
$$\frac{C_2(\ell+1)^2 R(T)^2\Phi^2 a^2\tilde{\eta}}{nh} \ge \frac{5\varepsilon}{4}\|\mathcal{P}_L(\hat{S}^h - S)\|_1 + \frac{\varepsilon}{4}\|\mathcal{P}_{L^\perp}(\hat{S}^h)\|_1, \tag{8.21}$$
we can conclude that
$$P(L(\tilde{Y}; \langle\hat{S}^h, \tilde{X}\rangle)) \le P(L(\tilde{Y}; \langle S, \tilde{X}\rangle)) + \frac{C_2(\ell+1)^2 R(T)^2\Phi^2 a^2\tilde{\eta}}{nh}, \tag{8.22}$$
which is sufficient to meet the bound of Lemma 1. Otherwise, by the assumption
that P (L(Ỹ ; hSbh , X̃i)) > P (L(Ỹ ; hS, X̃i)), one can easily check that
kPL⊥ (Sbh − S)k1 ≤ 5kPL (Sbh − S)k1 ,
which implies that Sbh − S ∈ K(D; L; 5). This fact allows us to use the bound on
$\alpha_n(\delta_1, \delta_2)$ of Lemma 3. We get from (8.6)
$$\begin{aligned} P(L(\tilde{Y}; \langle\hat{S}^h, \tilde{X}\rangle)) + \|\hat{S}^h - S\|^2_{L_2(\tilde{\Pi})} + \varepsilon\|\mathcal{P}_{L^\perp}(\hat{S}^h)\|_1 &\le P(L(\tilde{Y}; \langle S, \tilde{X}\rangle)) + \varepsilon\,\langle\mathrm{sign}(S), S - \hat{S}^h\rangle\\ &\quad + \frac{C_1(\ell+1)R(T)\Phi a}{\sqrt{h}}\,\mathbb{E}\|\Xi\|\big(\sqrt{\mathrm{rank}(S)}\,m\,\|\hat{S}^h - S\|_{L_2(\tilde{\Pi})} + \|\mathcal{P}_{L^\perp}(\hat{S}^h)\|_1\big) + \frac{C_1(\ell+1)^2 R(T)^2\Phi^2 a^2\bar{\eta}}{nh}. \end{aligned} \tag{8.23}$$
By applying the inequality
$$\langle\mathrm{sign}(S), \hat{S}^h - S\rangle \le m\sqrt{\mathrm{rank}(S)}\,\|\hat{S}^h - S\|_{L_2(\tilde{\Pi})},$$
and the assumption (8.17), we have with probability at least $1 - e^{-\eta}$,
$$P(L(\tilde{Y}; \langle\hat{S}^h, \tilde{X}\rangle)) \le P(L(\tilde{Y}; \langle S, \tilde{X}\rangle)) + \varepsilon^2 m^2\,\mathrm{rank}(S) + \frac{C_1(\ell+1)^2 R(T)^2\Phi^2 a^2\bar{\eta}}{nh}. \tag{8.24}$$
To sum up, the bound of Lemma 1 follows from (8.18), (8.22) and (8.24), provided that conditions (8.17) and (8.15) hold.
We still need to specify $\delta_k^-, \delta_k^+$, $k = 1, 2, 3, 4$, to establish the bound of the theorem. By the definition of $\hat{S}^h$, we have
$$P_n(L(\tilde{Y}; \langle X, \hat{S}^h\rangle)) + \varepsilon\|\hat{S}^h\|_1 \le P_n(L(\tilde{Y}; \langle X, 0\rangle)) \le Q,$$
implying that $\|\hat{S}^h\|_1 \le \frac{Q}{\varepsilon}$. Next, $\|\mathcal{P}_{L^\perp}(\hat{S}^h)\|_1 \le \|\hat{S}^h\|_1 \le \frac{Q}{\varepsilon}$ and $\|\mathcal{P}_L(\hat{S}^h - S)\|_1 \le 2\|\hat{S}^h - S\|_1 \le \frac{2Q}{\varepsilon} + 2\|S\|_1$. Finally, we have $\|\hat{S}^h - S\|_{L_2(\tilde{\Pi})} \le 2a$. Thus, we can take $\delta_1^+ := 2a$, $\delta_2^+ := \frac{Q}{\varepsilon}$, $\delta_3^+ := \frac{2Q}{\varepsilon} + 2\|S\|_1$, $\delta_4^+ := \frac{Q}{\varepsilon} + \|S\|_1$. With these choices, $\delta_k^+$, $k = 1, 2, 3, 4$, are upper bounds on the corresponding norms in condition (8.15). We choose $\delta_1^- := \frac{a}{\sqrt{n}}$, $\delta_2^- := \frac{a^2}{n\varepsilon}\wedge\frac{\delta_2^+}{2}$, $\delta_3^- := \frac{a^2}{n\varepsilon}\wedge\frac{\delta_3^+}{2}$, $\delta_4^- := \frac{a^2}{n\varepsilon}\wedge\frac{\delta_4^+}{2}$.
Let $\eta^* := \eta + 3\log\big(B\log_2(\|S\|_1 \vee n \vee \varepsilon \vee a^{-1} \vee Q)\big)$. It is easy to verify that $\bar{\eta} \vee \tilde{\eta} \vee \check{\eta} \le \eta^*$ for a proper choice of the numerical constant $B$ in the definition
of η ∗ . When condition (8.15) does not hold, which means at least one of the
numbers δk− , k = 1, 2, 3, 4 we chose is not a lower bound on the corresponding
norm, we can still use the bounds
$$(P - P_n)\big(L'(\tilde{Y}; \langle\hat{S}^h, \tilde{X}\rangle)\big)\,\langle\hat{S}^h - S, \tilde{X}\rangle \le \tilde{\alpha}\big(\|\hat{S}^h - S\|_{L_2(\tilde{\Pi})}\vee\delta_1^-;\ \|\mathcal{P}_{L^\perp}(\hat{S}^h)\|_1\vee\delta_2^-;\ \|\mathcal{P}_L(\hat{S}^h - S)\|_1\vee\delta_3^-\big), \tag{8.25}$$
and
$$(P - P_n)\big(L'(\tilde{Y}; \langle\hat{S}^h, \tilde{X}\rangle)\big)\,\langle\hat{S}^h - S, \tilde{X}\rangle \le \check{\alpha}\big(\|\hat{S}^h - S\|_{L_2(\tilde{\Pi})}\vee\delta_1^-;\ \|\hat{S}^h - S\|_1\vee\delta_4^-\big), \tag{8.26}$$
instead of (8.12), (8.13). In the case when Sbh − S ∈ K(D; L; 5), we can use the
bound
(P − Pn )(L′ (Y ; hSbh , X̃i))hSbh − S, X̃i ≤ α(kSbh − SkL2 (Π̃) ∨ δ1− ; kPL⊥ Sbh k1 ∨ δ2− ),
(8.27)
instead of bound (8.14). Then one can repeat the arguments above with only minor modifications. By adjusting the constants, the result of Lemma 1 holds.
The last thing we need to specify is the size of ε which controls the nuclear
norm penalty. Recall that from condition (8.17), the essence is to control EkΞk.
Here we use a simple but powerful noncommutative matrix Bernstein inequality. The original approach was introduced by [1]. Later, the result was improved
by [30] based on the classical result of [24]. We give the following lemma which
is a direct consequence of the result proved by [30], and we omit the proof here.
Lemma 4. Under the model (1.1), let $\Xi$ be defined as in (8.8), where the $\tau_j$ are i.i.d. uniformly distributed in $[0, 1]$, the $\varepsilon_j$ are i.i.d. Rademacher random variables, and the $X_j$ are i.i.d. uniformly distributed in $\mathcal{X}$. Then for any $\eta > 0$, with probability at least $1 - e^{-\eta}$,
$$\|\Xi\| \le 4\bigg(\sqrt{\frac{\eta + \log 2m}{nm}} \ \vee\ \frac{(\eta + \log 2m)\Phi}{n\sqrt{h}}\bigg),$$
and by integrating its exponential tail bounds,
$$\mathbb{E}\|\Xi\| \le C\bigg(\sqrt{\frac{\log 2m}{nm}} \ \vee\ \frac{(\log 2m)\Phi}{n\sqrt{h}}\bigg),$$
where $C$ is a numerical constant.
Together with (8.17), we know that for some numerical constant $D > 0$,
$$\varepsilon \ge D\,\frac{\Phi a(\ell+1)R(T)}{\sqrt{h}}\bigg(\sqrt{\frac{\log 2m}{nm}} \ \vee\ \frac{(\log 2m)\Phi}{n\sqrt{h}}\bigg)$$
suffices, which completes the proof of Lemma 1.
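To get a feel for the operator-norm rate controlling $\mathbb{E}\|\Xi\|$ in Lemma 4, the following snippet estimates the spectral norm of a Rademacher-signed average of fixed matrices by Monte Carlo and compares it with a $\sqrt{\log(2m)/n}$-type rate. The matrices here are generic random symmetric designs, not the kernel-weighted $\tilde{X}_j$ of the paper, so this is only an illustrative sanity check.

```python
import numpy as np

rng = np.random.default_rng(0)
m, n, trials = 20, 500, 50

def random_design():
    # Random symmetric matrix normalized to unit Frobenius norm.
    B = rng.standard_normal((m, m))
    B = (B + B.T) / 2
    return B / np.linalg.norm(B, 'fro')

X = [random_design() for _ in range(n)]

norms = []
for _ in range(trials):
    eps = rng.choice([-1.0, 1.0], size=n)          # i.i.d. Rademacher signs
    Xi = sum(e * Xj for e, Xj in zip(eps, X)) / n
    norms.append(np.linalg.norm(Xi, 2))            # operator (spectral) norm

print("Monte Carlo estimate of E||Xi||:", np.mean(norms))
print("sqrt(log(2m)/n) rate for comparison:", np.sqrt(np.log(2 * m) / n))
```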
8.1. Proof of Lemma 3
Proof. We only prove the first bound in detail; the proofs of the remaining two bounds are similar with only minor modifications. By Talagrand's concentration inequality [29], in its Bousquet form [4], with probability at least $1 - e^{-\eta}$,
$$\alpha_n(\delta_1, \delta_2) \le 2\,\mathbb{E}\alpha_n(\delta_1, \delta_2) + \frac{24(\ell+1)^2 R(T)^2\Phi^2 a^2\eta}{nh} + \frac{12(\ell+1)R(T)\Phi a\,\delta_1\sqrt{\eta}}{\sqrt{nh}}. \tag{8.28}$$
By standard Rademacher symmetrization inequalities, see [15], Sec. 2.1, we can get
$$\mathbb{E}\alpha_n(\delta_1, \delta_2) \le 4\,\mathbb{E}\sup\Big\{\frac{1}{n}\sum_{j=1}^{n}\varepsilon_j\big(\langle A, \tilde{X}_j\rangle - \tilde{Y}_j\big)\langle A - S, \tilde{X}_j\rangle : A \in \mathcal{A}(\delta_1, \delta_2)\Big\}, \tag{8.29}$$
where $\{\varepsilon_j\}$ are i.i.d. Rademacher random variables independent of $\{(\tau_j, X_j, \tilde{Y}_j)\}$. Then we consider the function $f(u) = (u - y + v)u$, where $|y| \le \frac{2\Phi a}{\sqrt{h}}$ and $|v|, |u| \le \frac{2(\ell+1)R(T)\Phi a}{\sqrt{h}}$. Clearly, this function has Lipschitz constant $\frac{6(\ell+1)R(T)\Phi a}{\sqrt{h}}$. Thus, by the comparison inequality, see [15], Sec. 2.2, we can get
$$\mathbb{E}\sup\Big\{n^{-1}\sum_{j=1}^{n}\varepsilon_j\big(\langle A, \tilde{X}_j\rangle - \tilde{Y}_j\big)\langle A - S, \tilde{X}_j\rangle : A \in \mathcal{A}(\delta_1, \delta_2)\Big\} \le \frac{6(\ell+1)R(T)\Phi a}{\sqrt{h}}\,\mathbb{E}\sup\Big\{n^{-1}\sum_{j=1}^{n}\varepsilon_j\langle A - S, \tilde{X}_j\rangle : A \in \mathcal{A}(\delta_1, \delta_2)\Big\}. \tag{8.30}$$
As a consequence of (8.29) and (8.30), we have
$$\mathbb{E}\alpha_n(\delta_1, \delta_2) \le \frac{12(\ell+1)R(T)\Phi a}{\sqrt{h}}\,\mathbb{E}\sup\Big\{n^{-1}\sum_{j=1}^{n}\varepsilon_j\langle A - S, \tilde{X}_j\rangle : A \in \mathcal{A}(\delta_1, \delta_2)\Big\}. \tag{8.31}$$
The next step is to get an upper bound on $n^{-1}\sum_{j=1}^{n}\varepsilon_j\langle A - S, \tilde{X}_j\rangle$. Recall that $\Xi := n^{-1}\sum_{j=1}^{n}\varepsilon_j\tilde{X}_j$; then we have $n^{-1}\sum_{j=1}^{n}\varepsilon_j\langle A - S, \tilde{X}_j\rangle = \langle A - S, \Xi\rangle$. One can check that
$$\begin{aligned}|\langle A - S, \Xi\rangle| &\le |\langle\mathcal{P}_L(A - S), \mathcal{P}_L\Xi\rangle| + |\langle\mathcal{P}_{L^\perp}(A - S), \Xi\rangle|\\ &\le \|\mathcal{P}_L\Xi\|_2\,\|\mathcal{P}_L(A - S)\|_2 + \|\Xi\|\,\|\mathcal{P}_{L^\perp}A\|_1\\ &\le m\sqrt{2\,\mathrm{rank}(S)}\,\|\Xi\|\,\|A - S\|_{L_2(\tilde{\Pi})} + \|\Xi\|\,\|\mathcal{P}_{L^\perp}A\|_1.\end{aligned}$$
The second line of this inequality is due to Hölder's inequality, and the third line is due to the facts that $(A - S)\in\mathcal{K}(\mathcal{D}; L; 5)$, $\mathrm{rank}(\mathcal{P}_L(\Xi)) \le 2\,\mathrm{rank}(S)$, $\|\mathcal{P}_L\Xi\|_2 \le \sqrt{2\,\mathrm{rank}(\mathcal{P}_L(\Xi))}\,\|\Xi\|$, and $\|A - S\|^2_{L_2(\tilde{\Pi})} = \frac{1}{m^2}\|A - S\|_2^2$. Therefore,
$$\frac{12(\ell+1)R(T)\Phi a}{\sqrt{h}}\,\mathbb{E}\sup\Big\{n^{-1}\sum_{j=1}^{n}\varepsilon_j\langle A - S, \tilde{X}_j\rangle : A \in \mathcal{A}(\delta_1, \delta_2)\Big\} \le \frac{12(\ell+1)R(T)\Phi a}{\sqrt{h}}\,\mathbb{E}\|\Xi\|\big(2\sqrt{2\,\mathrm{rank}(S)}\,m\,\delta_1 + \delta_2\big). \tag{8.32}$$
It follows from (8.28), (8.31) and (8.32) that with probability at least $1 - e^{-\eta}$,
$$\alpha_n(\delta_1, \delta_2) \le \frac{12(\ell+1)R(T)\Phi a}{\sqrt{h}}\,\mathbb{E}\|\Xi\|\big(\sqrt{\mathrm{rank}(S)}\,m\,\delta_1 + \delta_2\big) + \frac{24(\ell+1)^2 R(T)^2\Phi^2 a^2\eta}{nh} + \frac{12(\ell+1)R(T)\Phi a\,\delta_1\sqrt{\eta}}{\sqrt{nh}}.$$
Now, similarly to the approach in [17], we make this bound uniform in $\delta_k \in [\delta_k^-, \delta_k^+]$. Let $\delta_{k j_k} = \delta_k^+ 2^{-j_k}$, $j_k = 0, \ldots, [\log_2(\delta_k^+/\delta_k^-)] + 1$, $k = 1, 2$. By the union bound, with probability at least $1 - e^{-\eta}/3$, for all $j_k = 0, \ldots, [\log_2(\delta_k^+/\delta_k^-)] + 1$, $k = 1, 2$, we have
$$\alpha_n(\delta_{1 j_1}, \delta_{2 j_2}) \le \frac{12(\ell+1)R(T)\Phi a}{\sqrt{h}}\,\mathbb{E}\|\Xi\|\big(\sqrt{\mathrm{rank}(S)}\,m\,\delta_{1 j_1} + \delta_{2 j_2}\big) + \frac{24(\ell+1)^2 R(T)^2\Phi^2 a^2\eta}{nh} + \frac{12(\ell+1)R(T)\Phi a\,\delta_{1 j_1}\sqrt{\eta}}{\sqrt{nh}},$$
which implies that for all $\delta_k \in [\delta_k^-, \delta_k^+]$, $k = 1, 2$,
$$\alpha_n(\delta_1, \delta_2) \le \frac{12(\ell+1)R(T)\Phi a}{\sqrt{h}}\,\mathbb{E}\|\Xi\|\big(\sqrt{\mathrm{rank}(S)}\,m\,\delta_1 + \delta_2\big) + \frac{24(\ell+1)^2 R(T)^2\Phi^2 a^2\bar{\eta}}{nh} + \frac{12(\ell+1)R(T)\Phi a\,\delta_1\sqrt{\bar{\eta}}}{\sqrt{nh}}.$$
The proofs of the second and the third bounds are similar to this one; we omit the repeated arguments.
Clustering via Crowdsourcing
arXiv:1604.01839v1 [] 7 Apr 2016
Arya Mazumdar∗
Barna Saha†
College of Information & Computer Science
University of Massachusetts Amherst
Amherst, MA, 01002
Abstract
In recent years, crowdsourcing, aka human aided computation has emerged as an effective
platform for solving problems that are considered complex for machines alone. Using humans is time-consuming and costly due to monetary compensation. Therefore, a crowd-based algorithm must judiciously use any information computed through an automated process, and adaptively ask a minimum number of questions to the crowd.
One such problem which has received significant attention is entity resolution. Formally,
we are given a graph G = (V, E) with unknown edge set E where G is a union of k (again
unknown, but typically large O(nα ), for α > 0) disjoint cliques Gi (Vi , Ei ), i = 1, . . . , k. The
goal is to retrieve the sets Vi by making a minimum number of pairwise queries V × V → {±1}
to an oracle (the crowd). When the answer to each query is correct, e.g. via resampling,
then this reduces to finding connected components in a graph. On the other hand, when
crowd answers may be incorrect, it corresponds to clustering over minimum number of
noisy inputs. Even, with perfect answers, a simple lower and upper bound of Θ(nk) on
query complexity can be shown. A major contribution of this paper is to reduce the query
complexity to linear or even sublinear in n when mild side information is provided by a
machine, and even in presence of crowd errors which are not correctable via resampling. We
develop new information theoretic lower bounds on the query complexity of clustering with
side information and errors, and our upper bounds closely match with them. Our algorithms
are naturally parallelizable, and also give near-optimal bounds on the number of adaptive
rounds required to match the query complexity.
∗
University of Massachusetts Amherst, [email protected]. This work is supported in part by an NSF
CAREER award CCF 1453121 and NSF award CCF 1526763.
†
University of Massachusetts Amherst, [email protected]. This work is partially supported by an NSF CCF
1464310 grant, a Yahoo ACE Award and a Google Faculty Research Award.
1
Introduction
Suppose we have an undirected graph G(V ≡ [n], E), [n] ≡ {1, . . . , n}, such that G is a union
of k disjoint cliques Gi (Vi , Ei ), i = 1, . . . , k, but the subsets Vi ⊂ [n], k and E are unknown to
us. We want to make minimum number of adaptive pair-wise queries from V × V to an oracle,
and recover the clusters. Suppose, in addition, we are also given a noisy weighted similarity
matrix W = {wi,j } of G, where wi,j is drawn from a probability distribution f+ if i and j belong
to the same cluster, and else from f− . However, the algorithm designer does not know either
f+ or f− . How does having this side information affect the number of queries to recover the
clusters, which in this scenario are the hidden connected components of G? To add to it, let
us also consider the case when some of the answers to the queries are erroneous. We want to
recover the clusters with minimum number of noisy inputs possibly with the help of some side
information. In the applications that motivate this problem, the oracle is the crowd.
In the last few years, crowdsourcing has emerged as an effective solution for large-scale
“micro-tasks”. Usually, the micro-tasks that are accomplished using crowdsourcing tend to
be those that computers cannot solve very effectively, but are fairly trivial for humans with
no specialized training. Consider for example six places, all named after John F. Kennedy 1 :
(ra ) John F. Kennedy International Airport, (rb ) JFK Airport, (rc ) Kennedy Airport, NY (rd ) John
F. Kennedy Memorial Airport, (re ) Kennedy Memorial Airport, WI, (rf ) John F. Kennedy Memorial
Plaza. Humans can determine using domain knowledge that the above six places correspond to
three different entities: ra , rb , and rc refer to one entity, rd and re refer to a second entity, and rf
refers to a third entity. However, for a computer, it is hard to distinguish them. This problem
known as entity resolution is a basic task in classification, data mining and database management
[26, 23, 29]. It has many alias in literature, and also known as coreference/identity/name/record
resolution, entity disambiguation/linking, duplicate detection, deduplication, record matching
etc. There are several books that just focus on this topic [17, 37]. For a comprehensive study
and applications, see [29].
Starting with the work of Marcus et al. [43], there has been a flurry of works that have aimed
at using human power for entity resolution [31, 51, 22, 52, 27, 50, 21, 30, 39]. Experimental
results using crowdsourcing platforms such as Amazon Mechanical Turk have exceeded the
machine only performance [51, 52]. In all of these works, some computer generated pair-wise
similarity matrix is used to order the questions to crowd. Using human in large scale experiments
is costly due to monetary compensation paid to them, in addition to being time consuming.
Therefore, naturally these works either implicitly or explicitly aim to minimize the number
of queries to crowd. Assuming the crowd returns answers correctly, entity resolution using
crowdsourcing corresponds exactly to the task of finding connected components of G with
minimum number of adaptive queries to V × V. Typically k is large [51, 52, 27, 50], and we may, though not necessarily, take k ≥ nα for some constant α ∈ [0, 1].
It is straightforward to obtain an upper bound of nk on the number of queries: simply ask one question per cluster for each vertex; this is achievable even when k is unknown. Except for
this observation [51, 22], no other theoretical guarantees on the query complexity were known
so far. Unfortunately, Ω(nk) is also a lower bound [22]. Bounding query complexity of basic
problems like selection and sorting have received significant attention in the theoretical computer
science community [25, 10, 4, 9]. Finding connected components is the most fundamental graph
problem, and given the matching upper and lower bounds, there seems to be a roadblock in
improving its query complexity beyond nk.
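For concreteness, here is a minimal sketch of that nk-query baseline with a perfect oracle. The function `same_cluster` is a stand-in for the crowd; names and structure are illustrative, not taken from the paper.

```python
def cluster_with_perfect_oracle(vertices, same_cluster):
    """Trivial O(nk)-query clustering with a perfect pairwise oracle.
    same_cluster(u, v) returns True iff u and v belong to the same cluster."""
    clusters = []            # each cluster is a list of vertices
    queries = 0
    for v in vertices:
        placed = False
        for c in clusters:
            queries += 1
            if same_cluster(c[0], v):   # one query per existing cluster suffices
                c.append(v)
                placed = True
                break
        if not placed:
            clusters.append([v])
    return clusters, queries
```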
In contrast, the heuristics developed in practice often perform much better, and all of them
use some computer generated similarity matrix to guide them in selecting the next question to
ask. We call this crowdsourcing using side information. So, we are given a similarity matrix
W = {wi,j }i,j∈V ×V , which is a noisy version of the original adjacency matrix of G as discussed
1
http://en.wikipedia.org/wiki/Memorials_to_John_F._Kennedy
in the beginning. Many problems such as sorting, selection, rank aggregation etc. have been
studied using noisy input where noise is drawn from a distribution [11, 12, 41]. Many probabilistic generative models, such as stochastic block model, are known for clustering [1, 35, 16, 45].
However, all of these works assume the underlying distributions are known and use that information to design algorithms. Moreover, none of them consider query complexity while dealing
with noisy input.
We show that with side information, even with unknown f+ and f− , a drastic reduction
in query complexity is possible. We propose a randomized algorithm that reduces the number of queries from $O(nk)$ to $\tilde{O}\big(\frac{k^2}{\Delta(f_+, f_-)}\big)$, where $\Delta(f_+, f_-) \equiv D(f_+\|f_-) + D(f_-\|f_+)$ and $D(p\|q)$ is the Kullback-Leibler divergence between the probability distributions $p$ and $q$, and recovers the clusters accurately with high probability. Interestingly, we show $\Omega\big(\frac{k^2}{\Delta(f_+, f_-)}\big)$ is also an information-theoretic lower bound, thus matching the query complexity upper bound within a logarithmic factor. This lower bound could be of independent interest, and may lead to other lower bounds in related communication complexity models. To obtain the clusters accurately with probability 1, we propose a Las Vegas algorithm with expected query complexity $\tilde{O}\big(n + \frac{k^2}{\Delta(f_+, f_-)}\big)$, which again matches the corresponding lower bound.
So far, we have considered the case when crowd answers are accurate. It is possible that
crowd answers contain errors, and remain erroneous even after repeating a question multiple
times. That is, resampling, repeatedly asking the same question and taking the majority vote,
does not help much. Such observation has been reported in [50, 34] where resampling only
reduced errors by ∼ 20%. Crowd workers often use the same source (e.g., Google) to answer
questions. Therefore, if the source is not authentic, many workers may give the same wrong
answer to a single question. Suppose that the error probability is p < 1/2. Under such a crowd error
model, our problem becomes that of clustering with noisy input, where this noisy input itself
is obtained via adaptively querying the crowd.
We give the first information theoretic lower bounds in this model to obtain the maximum likelihood estimator, and again provide nearly matching upper bounds with and without
side information. Side information helps us to drastically reduce the query complexity, from $\tilde{O}\big(\frac{nk}{D(p\|1-p)}\big)$ to $\tilde{O}\big(\frac{k^2}{D(p\|1-p)\,\Delta(f_+, f_-)}\big)$, where $D(p\|1-p) = (1-2p)\log\frac{1-p}{p}$. An intriguing fact about this algorithm is that it has running time $O\big(k^{\frac{\log n}{D(p\|1-p)}}\big)$, and assuming the conjectured hardness of finding a planted clique in an Erdős-Rényi random graph [36], this running time cannot be improved². However, if we are willing to pay a bit more in query complexity, then the running time can be made polynomial.
oft-studied clustering problem, correlation clustering over noisy input [44, 8]. While prior works
have considered sorting without resampling [12], these are the first results to consider crowd
errors for a natural clustering problem.
The algorithms proposed in this work are all intuitive, easily implementable, and can be
parallelized. They do not assume any knowledge of the value of k, or of the underlying distributions f+ and f− . On the other hand, our information theoretic lower bounds work even with
the complete knowledge of k, f+ , f− . While queries to crowd can be made adaptively, it is also
important to minimize the number of adaptive rounds required maintaining the query upper
bound. Low round complexity helps to obtain results faster. We show that all our algorithms
extend nicely to obtain close to optimal round complexity as well. Recently such results have
been obtained for sorting (without any side information) [10]. Our work extends nicely to two
more fundamental problems: finding connected components, and noisy clustering.
2
Note that a query complexity bound does not necessarily remove the possibility of a super-polynomial
running time.
1.1
Related Work
In a recent work [10], Braverman, Mao and Weinberg studied the round complexity of selection
and obtaining the top-k and bottom-k elements when crowd answers are all correct, or are
erroneous with probability 1/2 − λ/2, or erased with probability 1 − λ, for some λ > 0. They
do not consider any side information. There is an extensive literature of algorithms in the
TCS community where the goal is to do either selection or sorting with O(n) comparisons in
the fewest interactive rounds, aka parallel algorithms for sorting [49, 47, 5, 6, 4, 9]. However,
those works do not consider any erroneous comparisons, and of course do not incorporate side
information. Feige et al., study the depth of noisy decision tree for simple boolean functions,
and selection, sorting, ranking etc. [25], but not with any side information. Parallel algorithms
for finding connected components and clustering have similarly received a huge deal of attention
[28, 33, 18, 46]. Neither those works, nor their modern map-reduce counterparts [40, 24, 32, 2]
study query complexity, or noisy input. There is an active body of work dealing with sorting and
rank aggregation with noisy input under various models of noise generation [11, 12, 41]. However
these works aim to recover the maximum likelihood ordering without any querying. Similarly,
clustering algorithms like correlation clustering has been studied under various random and
semirandom noise models without any active querying [8, 44, 42]. Stochastic block model is
another such noisy model which has recently received a great deal of attention [1, 35, 16, 45],
but again prior to this, no work has considered the querying capability when dealing with noisy
input. In all these works, the noise model is known to the algorithm designer, since otherwise
the problems become NP-Hard [11, 8, 3].
In more applied domains, many frameworks have been developed to leverage humans for performing entity resolution [52, 31]. Wang et al. [52] describe a hybrid human-machine framework
CrowdER, that automatically detects pairs or clusters that have a high likelihood of matching
based on a similarity function, which are then verified by humans. Use of similarity function
is common across all these works to obtain querying strategies [31, 51], but hardly any provide
bounds on the query complexity. The only exceptions are [51, 22] where a simple nk bound on
the query complexity has been derived when crowd returns correct answers, and no side information is available. This is also a lower bound even for randomized algorithms [22]. Firmani
et al. [27] analyzed the algorithms of [52] and [51] under a very stringent noise model.
To deal with the possibility that the crowdsourced oracle may give wrong answers, there are
simple majority voting mechanisms or more complicated heuristic techniques [50, 21, 30, 39]
to handle such errors. No theoretical guarantees exist in any of these works. Davidson et al.,
consider a variable error model where clustering is based on a numerical value–in that case
clusters are intervals with few jumps (errors), and the queries are unary (ask for value) [22].
This error model is not relevant for pair-wise comparison queries.
1.2
Results and Techniques
Problem (Crowd-Cluster). Consider an undirected graph G(V ≡ [n], E), such that G is a union
of k disjoint cliques (clusters) Gi (Vi , Ei ), i = 1, . . . , k, where k, the subsets Vi ⊆ [n] and E
are unknown. There is an oracle O : V × V → {±1}, which takes as input a pair of vertices
u, v ∈ V × V , and returns either +1 or −1. Let O(Q), Q ⊆ V × V correspond to oracle answers
to all pairwise queries in Q. The queries in Q can be done adaptively.
The adjacency matrix of G is a block-diagonal matrix. Let us denote this matrix by A = (ai,j ).
Consider W , an n × n matrix, which is the noisy version of the matrix A. Assume that the
(u, v)th entry of the matrix W , wu,v , is a nonnegative random variable in [0, 1] drawn from a
probability density or mass function f+ for ai,j = 1, and is drawn from a probability density or
mass function f− if ai,j = 0. f+ and f− are unknown.
• Crowd-Cluster with Perfect Oracle Here O(u, v) = +1 iff u and v belong to the same
cluster and O(u, v) = −1 iff u and v belong to different clusters.
1. Without Side Information. Given V , find Q ⊆ V × V such that |Q| is minimum,
and from O(Q) it is possible to recover Vi , i = 1, 2, ..., k.
2. With Side Information. Given V and W , find Q ⊆ V × V such that |Q| is
minimum, and from O(Q) it is possible to recover Vi , i = 1, 2, ..., k.
• Crowd-Cluster with Faulty Oracle There is an error parameter p = 1/2 − λ for some λ > 0.
We denote this oracle by Op . Here if u, v belong to the same cluster then Op (u, v) = +1
with probability 1 − p and Op (u, v) = −1 with probability p. On the other hand, if u, v do
not belong to the same cluster then Op (u, v) = −1 with probability 1 − p and Op (u, v) = +1
with probability p (in information theory literature, such oracle is called binary symmetric
channel).
1. Without Side Information. Given V , find Q ⊆ V × V such that |Q| is minimum,
and from Op (Q) it is possible to recover Vi , i = 1, 2, ..., k with high probability.
2. With Side Information. Given V and W , find Q ⊆ V × V such that |Q| is minimum, and from Op (Q) it is possible to recover Vi , i = 1, 2, ..., k with high probability.
• Crowd-Cluster with Round Complexity Consider all the above problems where O (similarly Op ) can answer to n log n queries simultaneously, and the goal is to minimize the
number of adaptive rounds of queries required to recover the clusters.
1.2.1
Lower Bounds
When no side information is available, it is somewhat straight-forward to have a lower bound
on the query complexity if the oracle is perfect. Indeed, in that case the query complexity of
Crowd-Cluster is Ω(nk) where n is the total number of elements and k is the number of clusters.
To see this, note that, any algorithm can be provided with a clustering designed adversarially
in the following way. First, k elements residing in k different clusters are revealed to the
algorithm. For a vertex among the remaining n − k vertices, if the algorithm makes fewer than k − 2 queries, the adversary can still place the vertex in one of at least two remaining clusters, resulting in a query complexity of (n − k)(k − 1). This argument can be extended to randomized algorithms as well, by using Yao's min-max principle, and has been done in [22].
However [22] left open the case of proving lower bound for randomized algorithms when the
clusters are nearly balanced (ratio between the minimum and maximum cluster size is bounded).
One of the lower bound results proved in this paper resolves it.
Our main technical results for perfect oracle are for Crowd-Cluster with side information.
Our lower bound results are information theoretic, and can be summarized in the following
theorem.
Theorem 1. Any (possibly randomized) algorithm
with the knowledge of f+ , f− , and the number
k2
of clusters k, that does not perform at least Ω ∆(f+ ,f− ) queries, ∆(f+ , f− ) > 0, will be unable
to return the correct clustering with probability at least
1
10 .
(Proof in Sec. 4.1).
Corollary 1. Any (possibly randomized but Las Vegas) algorithmwith the knowledge of f+ , f− ,
k2
queries,
and the number of clusters k, that does not perform at least Ω n + min{1,∆(f
+ ,f− )}
∆(f+ , f− ) > 0, will be unable to return the correct clustering. (Proof in Sec. 4.1).
The main high-level technique is the following. Suppose, a vertex is to be assigned to a
cluster. We have some side-information and answers to queries involving this vertex at hand.
Let these constitute a random variable X that we have observed. Assuming that there are k
possible clusters to assign this vertex to, we have a k-hypothesis testing problem. By observing
X, we have to decide which of the k different distributions (corresponding to the vertex being
4
in k different clusters) it is coming from. If the distributions are very close (in the sense of total
variation distance or divergence), then we are bound to make an error in deciding.
We can compare this problem of assigning a vertex to one of the k-clusters to finding a
biased coin among k coins. In the later problem, we are asked to find out the minimum number
of coin tosses needed for correct identification. This type of idea has previously been applied
to design adversarial strategies that lead to lower bounds on average regret for the multi-arm
bandit problem (see, [7, 13]).
The problem that we have in hand, for lower bound on query-complexity, is substantially
different. It becomes a nontrivial task to identify the correct input and design the set-up so
that we can handle the problem in the framework of finding a biased coin. The key insight here
is that, given a vertex, the combined side-information pertaining to this vertex and a cluster
plays the role of tossing a particular coin (multiple times) in the coin-finding problem. However
the liberty of an algorithm designer to query freely creates the main challenge.
For faulty oracle, note that we are not allowed to ask the same question multiple times to
get the correct answer with high probability. This changes the situation quite a bit, though
in some sense this is closer to coin-tossing experiment than the previous one as we handle
binary random variables here (the answer to the queries). We first note that, for faulty-oracle,
even for probabilistic recovery a minimum size bound on cluster size is required. For example,
k−2
consider the following two different clusterings. C1 : V = ⊔i=1
Vi ⊔ {v1 , v2 } ⊔ {v3 } and C2 :
k−2
V = ⊔i=1 Vi ⊔ {v1 } ⊔ {v2 , v3 }. Now if one of these two clusterings are given two us uniformly
at random, no matter how many queries we do, we will fail to recover the correct cluster with
probability at least p. Our lower bound result works even when all the clusters are close to their
average size (which is nk ), and resolves a question from [22] for p = 0 case.
This removes the constraint on the algorithm designer on how many times a cluster can be
queried with a vertex and the algorithms can have greater flexibility. While we have to show
that enough number of queries must be made with a large number of vertices V ′ ⊂ V , either
of the conditions on minimum or maximum sizes of a cluster ensures that V ′ contains enough
vertices that do not satisfy this query requirement.
Theorem 2. Assume either of the following cases:
• the maximum size of a cluster is ≤
4n
k .
• the minimum size of a cluster is ≥
n
20k .
For a clustering
that satisfies
either of the above two conditions, any (randomized) algorithm
nk
must make Ω D(pk1−p) queries to recover the correct clusters with probability 0.9 when p > 0.
For p = 0 any (randomized) algorithm must make Ω(nk) queries to recover the correct clusters
with probability 0.9. (Proof in Sec. 6.1.1).
We believe that our lower bound techniques are of independent interest, and can spur new
lower bounds for communication complexity problems.
1.2.2
Upper Bounds
Our upper bound results are inspired by the lower bounds. For Crowd-Cluster with perfect oracle,
a straightforward algorithm achieves an nk query complexity. One of our main contributions
is a drastic reduction in the query complexity of Crowd-Cluster when side information is provided. Let $\mu_+ \equiv \int x f_+(x)\,dx$, $\mu_- \equiv \int x f_-(x)\,dx$. Our first theorem, which assumes $\mu_+ > \mu_-$, is as follows.
Theorem 3 (Perfect Oracle + Side Information). With known $\mu_+, \mu_-$, there exists a Monte Carlo algorithm for Crowd-Cluster with query complexity $O\big(\frac{k^2\log n}{(\mu_+ - \mu_-)^2}\big)$, and a Las Vegas algorithm with expected query complexity $O\big(n + \frac{k^2\log n}{(\mu_+ - \mu_-)^2}\big)$ even when $\mu_+, \mu_-$ are unknown. (Proof in Sec. 4.2.)
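A back-of-the-envelope way to see the $(\mu_+ - \mu_-)^2$ dependence: with similarity values in [0, 1], Hoeffding's inequality says that averaging roughly $2\log(2/\delta)/(\mu_+ - \mu_-)^2$ entries is enough for the empirical mean to land on the correct side of the midpoint with probability $1 - \delta$. The snippet below only evaluates that sample-size formula; it is an illustration of the scaling, not the algorithm from the paper.

```python
import math

def samples_needed(mu_plus, mu_minus, delta=1e-3):
    """Number of [0,1]-valued similarity entries to average so that, by
    Hoeffding, the empirical mean falls on the correct side of the midpoint
    (mu_plus + mu_minus) / 2 with probability at least 1 - delta."""
    gap = mu_plus - mu_minus
    return math.ceil(2.0 * math.log(2.0 / delta) / gap ** 2)

print(samples_needed(0.7, 0.5))   # about 381 entries for a gap of 0.2
```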
Many natural distributions such as N (µ+ , 1) and N (µ− , 1) have ∆(N (µ+ , 1)kN (µ− , 1)) =
(µ+ − µ− )2 . But, it is also natural to have distributions where µ+ = µ− but ∆(f+ , f− ) > 0. As
a simple example, consider two discrete distributions with mass 1/4, 1/2, 1/4 and 1/3, 1/3, 1/3
respectively at points 0, 1/2, 1. Their means are the same, but the divergence is constant. The
following theorem matches the lower bound up to a log n factor with no assumption on µ+ , µ− .
Theorem 4 (Perfect Oracle + Side Information). Let $f_+$ and $f_-$ be pmfs³ and $\min_i f_+(i)$, $\min_i f_-(i) \ge \epsilon$ for a constant $\epsilon$. There exists a Monte Carlo algorithm for Crowd-Cluster with query complexity $O\big(\frac{k^2\log n}{\Delta(f_+, f_-)}\big)$ with known $f_+$ and $f_-$, and a Las Vegas algorithm with expected query complexity $O\big(n\log n + \frac{k^2\log n}{\Delta(f_+, f_-)}\big)$ even when $k$, $f_+$ and $f_-$ are unknown. (Proof in Sec. 4.2.)
To improve from Theorem 3 to Theorem 4, we would need a more precise approach. The
minor restriction that we have on f+ and f− , namely, mini f+ (i), mini f− (i) ≥ ǫ allows
$\Delta(f_+, f_-) \le \frac{2}{\epsilon}$. Note that, by our lower bound result, it is not possible to achieve query complexity below $k^2$.
While our lower bound results assume knowledge of k, f+ and f− , our Las Vegas algorithms
do not even need to know them, and none of the algorithms know k. For Theorem 4, indeed,
either of mini f− (i) or mini f+ (i) having at least ǫ will serve our purpose.
The main idea is as follows. It is much easier to determine whether a vertex belongs to a
cluster, if that cluster has enough number of members. On the other hand, if a vertex v has the
highest membership in some cluster C with a suitable definition of membership, then v should be
queried with C first. For any vertex v and a cluster C, define the empirical "inter" distribution $p_{v,C}$ in the following way: for $i = 1, \ldots, q$, $p_{v,C}(i) = \frac{1}{|C|}\,|\{u \in C : w_{u,v} = a_i\}|$. Also compute the "intra" distribution $p_C$: for $i = 1, \ldots, q$, $p_C(i) = \frac{1}{|C|(|C|-1)}\,|\{(u, v) : u, v \in C,\ u \ne v,\ w_{u,v} = a_i\}|$. Then Membership$(v, C) = -\|p_{v,C} - p_C\|_{TV}$, where $\|p_{v,C} - p_C\|_{TV}$ denotes the total variation distance
between distributions defined in Section 3. If Membership(v, C) is highest for C, then using
Sanov’s Theorem (Theorem 9) it is highly likely that v is in C, if |C| is large enough. However
we do not know f+ or f− . Therefore, the highest membership could be misleading since we
do not know the desired size threshold that C must cross to be reliable. But yet, it is possible
to query a few clusters and determine correctly the one which contains v. The main reason
behind using total variation distance as opposed to divergence is that divergence is not a metric,
and hence does not satisfy the triangle inequality, which becomes crucial in our analysis. This
is the precise reason why we need the minimum value to be at least ǫ in Theorem 4. Under
these restrictions, a close relationship between divergence and total variation distance can be
established using Pinsker’s and Reverse Pinsker’s inequalities (see, Section 3).
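As an illustration of this membership rule, the sketch below computes the empirical inter and intra distributions over a finite set of similarity values and scores a vertex by the negative total variation distance. The variable names are ours, and `w` is assumed to be a discretized similarity matrix indexed by vertices.

```python
from collections import Counter

def empirical_dist(values, support):
    counts = Counter(values)
    total = max(len(values), 1)
    return {a: counts[a] / total for a in support}

def membership(v, cluster, w, support):
    """Membership(v, C) = -||p_{v,C} - p_C||_TV for a discretized similarity matrix w."""
    inter = empirical_dist([w[u][v] for u in cluster], support)
    intra = empirical_dist([w[u][x] for u in cluster for x in cluster if u != x], support)
    tv = 0.5 * sum(abs(inter[a] - intra[a]) for a in support)
    return -tv
```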
For faulty oracle, let us first take the case of no side information (later, we can combine it
with the previous algorithm to obtain similar results with side information). Suppose all V × V
queries have been made. If the maximum likelihood (ML) estimate on G with these $\binom{n}{2}$ query
answers is same as the true clustering of G, then Algorithm 2 finds the true clustering with
high probability. We sample a small graph G′ from G, by asking all possible queries in G′ , and
check for the heaviest weight subgraph (assuming ±1 weight on edges) in G′ . If that subgraph
crosses a desired size, it is removed from G′ . If this cluster is detected correctly, then it has
enough members; we can ask separate queries to them to determine if a vertex belongs to that
cluster. The main effort goes in showing that the computed cluster from G′ is indeed correct,
and that G′ has small size.
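To make the last step concrete, the following is a minimal sketch (not the paper's Algorithm 2) of how a vertex could be assigned to an already-recovered large cluster under a faulty oracle with error probability p < 1/2: query several of its members and take a majority vote. The oracle function and the sample size are illustrative assumptions; by a Chernoff bound, O(log n / λ²) queries suffice with high probability.

```python
import random

def assign_by_majority(v, cluster, noisy_query, trials=25):
    """Decide whether vertex v belongs to `cluster` by querying `trials`
    of its members and taking a majority vote.  `noisy_query(u, v)` is a
    hypothetical oracle returning +1 or -1 and erring with probability p < 1/2."""
    members = random.sample(cluster, min(trials, len(cluster)))
    votes = sum(noisy_query(u, v) for u in members)
    return votes > 0
```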
Theorem 5 (Faulty Oracle with No Side Information). There exists an algorithm with query complexity $O(\frac{1}{\lambda^2}nk\log n)$ for Crowd-Cluster that returns $\hat{G}$, the ML estimate of $G$ with all $\binom{n}{2}$ queries, with high probability when query answers are incorrect with probability $p = \frac{1}{2} - \lambda$. Noting that $D(p\|1-p) \le \frac{4\lambda^2}{1/2-\lambda}$, this matches the information theoretic lower bound on the query complexity within a $\log n$ factor. Moreover, the algorithm returns all the true clusters of $G$ of size at least $\frac{36}{\lambda^2}\log n$ with high probability. (Proof in Sec. 6.1.2.)
3
We can handle probability density functions as well for Theorem 4 and Theorem 6, if the quantization error is small. Our other theorems are valid for f+ and f− being both probability mass functions and density functions.
Theorem 6 (Faulty Oracle with Side Information). Let $f_+$, $f_-$ be pmfs and $\min_i f_+(i)$, $\min_i f_-(i) \ge \epsilon$ for a constant $\epsilon$. With side information and a faulty oracle with error probability $\frac{1}{2} - \lambda$, there exists an algorithm for Crowd-Cluster with query complexity $O\big(\frac{k^2\log n}{\lambda^2\,\Delta(f_+, f_-)}\big)$ when $f_+, f_-$ are known, and an algorithm with expected query complexity $O\big(n + \frac{k^2\log n}{\lambda^2\,\Delta(f_+, f_-)}\big)$ when $f_+, f_-$ are unknown, that recover $\hat{G}$, the ML estimate of $G$ with all $\binom{n}{2}$ queries, with high probability. (Proof in Sec. 6.1.3.)
A subtle part of these results is that the running time is $O\big(k^{\frac{\log n}{\lambda^2}}\big)$, which is optimal assuming the hardness of planted cliques. However, by increasing the query complexity, the running time can be reduced to polynomial.
Corollary 2 (Faulty Oracle with/without Side Information). For a faulty oracle with error probability $\frac{1}{2} - \lambda$, there exists a polynomial time algorithm with query complexity $O(\frac{1}{\lambda^2}nk^2)$ for Crowd-Cluster that recovers all clusters of size at least $O(\max\{\frac{1}{\lambda^2}\log n, k\})$. (Proof in Sec. 6.1.2.)
As it turns out the ML estimate of G with all n2 queries is equivalent to computing correlation clustering on G [11, 8, 3, 14, 15]. As a side result, we get a new algorithm for correlation
√
clustering over noisy input, where any cluster of size min (k, n) will be recovered exactly with
√
n
log n
high probability as long as k = Ω( log
λ2 ). When k ∈ [Ω( λ2 ), o( n)], our algorithm strictly
improves over [11, 8].
We hope our work will inspire new algorithmic works in the area of crowdsourcing where
both query complexity and side information are important.
1.2.3
Round Complexity
Finally, we extend all our algorithms to obtain near optimal round complexity.
Theorem 7 (Perfect Oracle with Side Information). There exists an algorithm for CrowdCluster with perfect oracle and unknown side information f+ and f− such that it√achieves a
√
round complexity within Õ(1) factor of the optimum when k = Ω( n) or k = O( ∆(f+nkf− ) ), and
otherwise within Õ( ∆(f+1kf− ) ). (Proof in Sec. 7.1).
Theorem 8 (Faulty Oracle with no Side Information). There exists an algorithm for CrowdCluster with faulty oracle with error probability 21 −λ and no side information such that it achieves
√
a round complexity
within Õ( log n) factor of the optimum that recovers Ĝ, ML estimate of G
with all n2 queries with high probability. (Proof in Sec. 7.2).
This also leads to a new parallel algorithm for correlation clustering over noisy input where
computation in every round is bounded by n log n.
2
Organization of the remaining paper
The rest of the paper is organized as follows. In Section 3, we provide the information theoretic
tools (definitions and basic results) necessary for our upper and lower bounds.
In Section 4 we provide our main upper bound results for the perfect oracle case when f+
and f− are unknown. In Section 5 we give some more insight into the working of Algorithm
1 and for the case when f+ and f− are known, provide near optimal Monte Carlo/Las Vegas
algorithms for Crowd-Cluster with side information and perfect oracle. In Section 6, we consider
the case when crowd may return erroneous answers. In this scenario we give tight lower and
upper bounds on query complexity in both the cases when we have or lack side information.
In Section 7, we show that the algorithms developed for optimizing query complexity naturally
extend to the parallel version of minimizing the round complexity.
3
Information Theory Toolbox
The lower bounds for randomized algorithms presented in this paper are all information theoretic.
We also use information theoretic tools of large-deviations in upper bounds. To put these
bounds into perspective, we will need definition of many information theoretic quantities and
some results. Most of this material can also be found in a standard information theory textbook,
such as Cover and Thomas [19].
Definition (Divergence). The Kullback-Leibler divergence, or simply divergence, between two
probability measures P and Q on a set X , is defined to be
$$D(P\|Q) = \int_{\mathcal{X}} \ln\Big(\frac{dP}{dQ}\Big)\, dP.$$
When P and Q are distributions of a continuous random variable, represented by probability
R∞
(x)
fp (x) ln ffpq (x)
densities fp (x) and fq (x) respectively, we have, D(fp kfq ) = −∞
dx. Similarly when
P and Q are discrete random variable taking values in the set X , and represented by the
probability mass functions p(x) and q(x), where x ∈ X respectively, we have D(p(x)kq(x)) =
P
p(x)
x∈X p(x) ln q(x) .
For two Bernoulli distributions with parameters p and q, where 0 ≤ p, q ≤ 1, by abusing the
notation the divergence is written as,
D(pkq) = p ln
1−p
p
+ (1 − p) ln
.
q
1−q
p
1−p
In particular, D(pk1 − p) = p ln 1−p
+ (1 − p) ln 1−p
p = (1 − 2p) ln p . Although D(P kQ) ≥ 0,
with equality when P = Q, note that in general D(P kQ) 6= D(QkP ). Define the symmetric
divergence between two distribution P and Q as,
∆(P, Q) = D(P kQ) + D(QkP ).
The following property of the divergence is going to be useful to us. Consider a set of
random variables X1 , . . . , Xm , and consider the two joint distribution of the random variables,
P m and Qm . When the random variables are independent, let Pi and Qi be the corresponding marginal distribution of the random variable Xi , i = 1, . . . , m. In other words, we have,
Qm
Q
m
P m (x1 , x2 , . . . , xm ) = m
i=1 Qi (xi ). Then we must have,
i=1 Pi (xi ) and Q (x1 , x2 , . . . , xm ) =
m
m
D(P kQ ) =
m
X
i=1
D(Pi kQi ).
(1)
A more general version, when the random variables are not independent, is given by the
chain-rule, described below for discrete random variables.
Lemma 1. Consider a set of discrete random variables X1 , . . . , Xm , and consider the two joint
distribution of the random variables, P and Q. The chain-rule for divergence states that,
D(P (x1 , . . . , xm )kQ(x1 , . . . , xm )) =
m
X
i=1
D(P (xi | x1 , . . . , xi−1 )kQ(xi | x1 , . . . , xi−1 )),
where,
D(P (x|y)kQ(x|y)) =
X
P (Y = y)D(P (x|Y = y)kQ(x|Y = y)).
y
8
Definition (Total Variation Distance). For two probability distributions P and Q defined on a
sample space X and same sigma-algebra F, the total variation distance between them is defined
to be,
kP − QkT V = sup{P (A) − Q(A) : A ∈ F}.
In words, the distance between two distributions is their largest difference over any measurable
set. For finite X total variation distance is half of the ℓ1 distance between pmfs.
The total variation distance and the divergence are related by the Pinsker’s inequality.
Lemma 2 (Pinsker's inequality). For any two probability measures P and Q,
$$\|P - Q\|_{TV}^2 \le \frac{1}{2}\, D(P\|Q).$$
It is easy to see that, there cannot be a universal ‘reverse’ Pinsker’s inequality, i.e., an
upper bound on the divergence by the total variation distance (for example, the total variation
distance is always less than 1, while the divergence can be infinity). However, under various
assumptions, such upper bounds have been proposed [48, 20]. For example we provide one such
inequality below.
Lemma 3 (Reverse Pinsker’s inequality[48]). For any two probability measures on finite alphabet X , given by probability mass functions p and q, we must have,
$$\|p - q\|_{TV}^2 \ge \frac{\min_{x\in\mathcal{X}} q(x)}{2}\, D(p\|q). \tag{2}$$
This inequality can be derived from Eq.(28) of [48].
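For intuition, the following snippet computes the divergence, the symmetric divergence and the total variation distance for two small pmfs, and checks Pinsker's inequality and the finite-alphabet reverse form numerically. The distributions are arbitrary toy examples.

```python
import math

def kl(p, q):
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def tv(p, q):
    return 0.5 * sum(abs(pi - qi) for pi, qi in zip(p, q))

p = [0.25, 0.50, 0.25]
q = [1 / 3, 1 / 3, 1 / 3]

D_pq = kl(p, q)
delta = kl(p, q) + kl(q, p)        # symmetric divergence Delta(p, q)
d_tv = tv(p, q)

print("D(p||q) =", D_pq, " Delta(p,q) =", delta, " TV =", d_tv)
print("Pinsker:         TV^2 <= D/2 ?", d_tv ** 2 <= D_pq / 2)
print("Reverse Pinsker: TV^2 >= min(q)*D/2 ?", d_tv ** 2 >= min(q) * D_pq / 2)
```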
A particular basic large-deviation inequality that we use for the upper bounds is Sanov’s
theorem.
Theorem 9 (Sanov’s theorem). Let X1 , . . . , Xn are iid random variables with a finite sample
space X and distribution P . Let P n denote their joint distribution. Let E be a set of probability
P
distributions on X . The empirical distribution P̃n gives probability P̃n (A) = n1 ni=1 1Xi ∈A to
any event A. Then,
P n ({x1 , . . . , xn } : P̃n ∈ E) ≤ (n + 1)|X | exp(−n min
D(P ∗ kP )).
∗
P ∈E
A continuous version of Sanov’s theorem is also possible but we omit here for clarity.
Hoeffding’s inequality for large deviation of sums of bounded independent random variables
is well known [38, Thm. 2].
Lemma 4 (Hoeffding). If X1 , . . . , Xn are independent random variables and ai ≤ Xi ≤ bi for
all i ∈ [n]. Then
Pr(|
n
1X
2n2 t2
).
(Xi − EXi )| ≥ t) ≤ 2 exp(− Pn
2
n i=1
i=1 (bi − ai )
This inequality can be used when the random variables are independently sampled with
replacement from a finite sample space. However due to a result in the same paper [38, Thm.
4], this inequality also holds when the random variables are sampled without replacement from
a finite population.
Lemma 5 (Hoeffding). If X1 , . . . , Xn are random variables sampled without replacement from
a finite set X ⊂ R, and a ≤ x ≤ b for all x ∈ X . Then
Pr(|
n
2nt2
1X
(Xi − EXi )| ≥ t) ≤ 2 exp(−
).
n i=1
(b − a)2
9
4
Crowd-Cluster with Perfect Oracle
In this section, we consider the clustering problem using crowdsourcing when crowd always
returns the correct answers, and there is side information.
4.1
Lower Bound
Recall that there are k clusters in the n-vertex graph. That is G(V, E) is such that, V = ⊔ki=1 Vi
and E = {(i, j) : i, j ∈ Vℓ for some ℓ}. In other words, G is a union of at most k disjoint cliques.
Every entry of the side-information matrix W is generated independently as described in the
introduction. We now prove Theorem 1.
Proof of Theorem 1. We are going to construct an input that any randomized algorithm will
be unable to correctly
j identify
k with positive probability.
1
Suppose, a = ∆(f+ ,f− ) . Consider the situation when we are already given a complete
cluster Vk with n − (k − 1)a elements, remaining (k − 1) clusters each has 1 element, and the
rest (a − 1)(k − 1) elements are evenly distributed (but yet to be assigned) to the k − 1 clusters.
This means each of the smaller clusters has size a each. Note that, we assumed the knowledge
of the number of clusters k.
The side information matrix W = (wi,j ) is provided. Each wi,j are independent random
variables.
Now assume the scenario when we use an algorithm ALG to assigns a vertex to one of the
k − 1 clusters, Vu , u = 1, . . . , k − 1. Note that, for any vertex, l, the side informations wi,j where
i 6= l and j 6= l, do not help in assigning l to a cluster (since in that case wi,j is independent of l).
Therefore, given a vertex l, ALG takes as input the random variables wi,l s where i ∈ ⊔t Vt , and
makes some queries involving l and outputs a cluster index, which is an assignment for l. Based
on the observations wi,l s, the task of algorithm ALG is thus a multi-hypothesis testing among
k − 1 hypotheses. Let Hu , u = 1, . . . k − 1 denote the k − 1 different hypotheses Hu : l ∈ Vu .
And let Pu , u = 1, . . . k − 1 denote the joint probability distributions of the random variables
wi,j s when l ∈ Vu . In short, for any event A, Pu (A) = Pr(A|Hu ). Going forward, the subscript
of probabilities or expectations will denote the appropriate conditional distribution.
For this hypothesis testing problem, let E{number of queries made by ALG} = T. Then there exists t ∈ {1, . . . , k − 1} such that Et{number of queries made by ALG} ≤ T. Note that,

Σ_{v=1}^{k−1} Pt{ a query made by ALG involves cluster Vv } ≤ Et{number of queries made by ALG} ≤ T.

Consider the set

J′ ≡ { v ∈ {1, . . . , k − 1} : Pt{ a query made by ALG involves cluster Vv } < 1/10 }.

We must have (k − 1 − |J′|) · (1/10) ≤ T, which implies |J′| ≥ k − 1 − 10T.

Note that, to output a cluster without using the side information, ALG has to either make a query to the actual cluster the element is from, or query at least k − 2 times. In any other case, ALG must use the side information (in addition to using queries) to output a cluster. Let E^u denote the event that ALG outputs cluster Vu by using the side information.
Let J″ ≡ { u ∈ {1, . . . , k − 1} : Pt(E^u) ≤ 10/(k − 1) }. Since Σ_{u=1}^{k−1} Pt(E^u) ≤ 1, we must have (k − 1 − |J″|) · 10/(k − 1) ≤ 1, or |J″| ≥ 9(k − 1)/10.
We have,

|J′ ∩ J″| ≥ (k − 1 − 10T) + 9(k − 1)/10 − (k − 1) = 9(k − 1)/10 − 10T.
Now consider two cases.
Case 1: T ≥ 9(k − 1)/100. In this case, the average number of queries made by ALG to assign one vertex to a cluster is at least 9(k − 1)/100. Since there are (k − 1)(a − 1) vertices that need to be assigned to clusters, the expected total number of queries performed by ALG is at least 9(k − 1)²(a − 1)/100.
Case 2: T < 9(k − 1)/100. In this case, J′ ∩ J″ is nonempty. Assume that we need to assign the vertex j ∈ Vℓ, for some ℓ ∈ J′ ∩ J″, to a cluster (Hℓ is the true hypothesis). We now consider the following two events.

E1 = { a query made by ALG involves cluster Vℓ },
E2 = { k − 2 or more queries were made by ALG }.

Note that, if the algorithm ALG can correctly assign j to a cluster without using the side information, then either E1 or E2 must happen. Recall that E^ℓ denotes the event that ALG outputs cluster Vℓ using the side information. Now consider the event E ≡ E^ℓ ∪ E1 ∪ E2. The probability of correct assignment is at most Pℓ(E). We have,
Pℓ(E) ≤ Pt(E) + |Pℓ(E) − Pt(E)| ≤ Pt(E) + ‖Pℓ − Pt‖TV ≤ Pt(E) + √( D(Pℓ‖Pt)/2 ),

where we first used the definition of the total variation distance and in the last step we used Pinsker's inequality (Lemma 2). Now we bound the divergence D(Pℓ‖Pt). Recall that Pℓ and Pt are the joint distributions of the independent random variables wi,j, i ∈ ∪u Vu. Now, using Lemma 1, and noting that the divergence between identical random variables is 0, we obtain

D(Pℓ‖Pt) ≤ a D(f−‖f+) + a D(f+‖f−) = a∆ ≤ 1.

This is true because the only times when wi,j differs under Pt and under Pℓ is when i ∈ Vt or i ∈ Vℓ. As a result we have Pℓ(E) ≤ Pt(E) + √(1/2).
Now, using Markov's inequality, Pt(E2) ≤ T/(k − 2) ≤ 9(k − 1)/(100(k − 2)) ≤ 9/100 + 9/(100(k − 2)). Therefore,

Pt(E) ≤ Pt(E^ℓ) + Pt(E1) + Pt(E2) ≤ 10/(k − 1) + 1/10 + 9/100 + 9/(100(k − 2)).

For large enough k, we overall have Pℓ(E) ≤ 19/100 + √(1/2) < 9/10. This means ALG fails to assign j to the correct cluster with probability at least 1/10.
Considering the above two cases, we can say that any algorithm either makes on average 9(k − 1)²(a − 1)/100 queries, or makes an error with probability at least 1/10.
Note that in this proof we have not particularly tried to optimize the constants. Corollary 1 follows by noting that to recover the clusters exactly, the query complexity has to be at least (n − k) + C(k, 2). If the number of queries issued is at most (n − k) + C(k, 2) − 1, then either there exists a vertex v in a non-singleton cluster which has not been queried with any other member of that same cluster, or there exist two clusters such that no inter-cluster edge across them has been queried.
4.2 Upper Bound
We do not know k, f+ , f− , µ+ , or µ− , and our goal, in this section, is to design an algorithm
with optimum query complexity for exact reconstruction of the clusters with probability 1. We
are provided with the side information matrix W = (wi,j ) as an input. Let θgap = µ+ − µ− .
The algorithm uses a subroutine called Membership that takes as input a vertex v and a subset of vertices C ⊆ V. At this point, the membership of a vertex v in cluster C is defined as follows: avg(v, C) = (1/|C|) Σ_{u∈C} wv,u, and we use Membership(v, C) = avg(v, C).
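As an illustration (not part of the original analysis), the following short Python sketch computes this membership score from a symmetric similarity matrix W; the function name and data layout are our own choices.

def membership(v, C, W):
    """avg(v, C): mean similarity w_{v,u} over the current members u of C."""
    members = list(C)
    return sum(W[v][u] for u in members) / len(members)

# Example usage: if W is a list of lists (or any 2D array) indexed by vertices,
# membership(3, {0, 1, 2}, W) returns the average of W[3][0], W[3][1], W[3][2].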
The pseudocode of the algorithm is given in Algorithm 1. The algorithm works as follows. Let C1, C2, ..., Cl be the current clusters in nonincreasing order of size. We find the minimum index j ∈ [1, l] such that there exists a vertex v not yet clustered with the highest average membership to Cj, that is, Membership(v, Cj) ≥ Membership(v, Cj′) for all j′ ≠ j, and j is the smallest index for which such a v exists. We first check if v ∈ Cj by querying v with any current member of Cj. If not, then we group the clusters C1, C2, .., Cj−1 into at most ⌈log n⌉ groups such that the clusters in group i have sizes in the range [|C1|/2^i, |C1|/2^{i−1}). For each group, we pick the cluster with the highest average membership with respect to v, and check by querying whether v belongs to that cluster. Even after this, if the membership of v is not resolved, then we query v with one member of each of the clusters that we have not checked previously. If v is still not clustered, then we create a new singleton cluster with v as its sole member.
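The geometric grouping used above (and again in step 11 of Algorithm 1) can be sketched as follows; this is an illustrative Python fragment under our own naming, not code from the paper.

import math

def group_by_size(clusters):
    """Group clusters, sorted by nonincreasing size, so that group i holds the
    clusters whose current size lies in [|C1| / 2^i, |C1| / 2^(i-1))."""
    clusters = sorted(clusters, key=len, reverse=True)
    if not clusters:
        return []
    top = len(clusters[0])
    groups = {}
    for C in clusters:
        i = max(1, math.ceil(math.log2(top / len(C))))  # the largest cluster falls in group 1
        groups.setdefault(i, []).append(C)
    return [groups[i] for i in sorted(groups)]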
We now give a proof of the Las Vegas part of Theorem 3 here using Algorithm 1, and defer
the more formal discussions on the Monte Carlo part to the next section.
Proof of Theorem 3, Las Vegas Algorithm. First, the algorithm never includes a vertex in a cluster without querying it with at least one member of that cluster. Therefore, the clusters constructed by our algorithm are always subsets of the original clusters. Moreover, the algorithm never creates a new cluster with a vertex v before first querying it with all the existing clusters. Hence, it is not possible that two clusters produced by our algorithm can be merged.
Let C1, C2, ..., Cl be the current non-empty clusters formed by Algorithm 1, for some l ≤ k. Note that Algorithm 1 does not know k. Without loss of generality let |C1| ≥ |C2| ≥ ... ≥ |Cl|. Suppose there exists an index i ≤ l such that |C1| ≥ |C2| ≥ · · · ≥ |Ci| ≥ M, where M = 6 log n/θgap². Of course, the algorithm knows neither i nor M. If even |C1| < M, then i = 0. Suppose j′ is the minimum index such that there exists a vertex v with highest average membership in Cj′. There are a few cases to consider, based on whether j′ ≤ i or j′ > i, and on the cluster that truly contains v.
Case 1. v truly belongs to Cj′. In that case, we make just one query between v and an existing member of Cj′, and the first query is successful.
Case 2. j′ ≤ i and v belongs to Cj, j ≠ j′, for some j ∈ {1, . . . , i}. Let avg(v, Cj) and avg(v, Cj′) be the average memberships of v to Cj and Cj′ respectively. Then we have avg(v, Cj′) ≥ avg(v, Cj), that is, Membership(v, Cj′) ≥ Membership(v, Cj). This is only possible if either avg(v, Cj′) ≥ µ− + θgap/2 or avg(v, Cj) ≤ µ+ − θgap/2. Since both Cj and Cj′ have at least M current members, using the Chernoff-Hoeffding bound (Lemma 4) followed by a union bound, this happens with probability at most 2/n³. Therefore, the expected number of queries involving v before its membership gets determined is at most 1 + (2/n³)k < 2.
Case 4. v belongs to Cj, j ≠ j′, for some j > i. In this case the algorithm may make k queries involving v before its membership gets determined.
Case 5. j′ > i, and v belongs to Cj for some j ≤ i. In this case, there exists no v with its highest membership in C1, C2, ..., Ci.
Suppose C1, C2, ..., Cj′ are contained in groups H1, H2, ..., Hs where s ≤ ⌈log n⌉. Let Cj ∈ Ht, t ∈ [1, s]. Therefore |Cj| ∈ [|C1|/2^t, |C1|/2^{t−1}]. If |Cj| ≥ 2M, then all the clusters in group Ht have size at least M. Now with probability at least 1 − 2/n², avg(v, Cj) ≥ avg(v, Cj″), that is, Membership(v, Cj) ≥ Membership(v, Cj″) for every cluster Cj″ ∈ Ht. In that case, the membership of v is determined within at most ⌈log n⌉ queries. Otherwise, with probability at most 2/n², there may be k queries to determine the membership of v.
Therefore, once a cluster has grown to size 2M, the number of queries to resolve the membership of any vertex in that cluster is at most ⌈log n⌉ with probability at least 1 − 2/n. Hence, for at most 2kM elements, the number of queries made to resolve their membership can be k. Thus the expected number of queries made by Algorithm 1 is O(n log n + Mk²) = O(n log n + k² log n/(µ+ − µ−)²).
Moreover, if we knew µ+ and µ−, we could calculate M, and thus whenever a cluster grows to size M, the remaining of its members can be included in that cluster without making any error with high probability. This leads to Theorem 3.
We can strengthen this algorithm by changing the subroutine Membership in the following way. Assume that f+, f− are discrete distributions over q points a1, a2, . . . , aq; that is, wi,j takes values in the set {a1, a2, . . . , aq} ⊂ [0, 1].
The subroutine Membership takes v ∈ V and C ⊆ V \ {v} as inputs. Compute the 'inter' distribution pv,C: for i = 1, . . . , q, pv,C(i) = (1/|C|) · |{u ∈ C : wu,v = ai}|. Also compute the 'intra' distribution pC: for i = 1, . . . , q, pC(i) = (1/(|C|(|C| − 1))) · |{(u, u′) : u, u′ ∈ C, u ≠ u′, wu,u′ = ai}|. Then define Membership(v, C) = −‖pv,C − pC‖TV. Note that, since the membership is always nonpositive, a higher membership implies that the 'inter' and 'intra' distributions are closer in terms of total variation distance. With this modification in the subroutine we can prove what is claimed in Theorem 4.
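A minimal Python sketch of this modified subroutine, assuming the similarity values are drawn from a known finite support; names and data layout are ours.

from collections import Counter

def tv_distance(p, q, support):
    return 0.5 * sum(abs(p.get(a, 0.0) - q.get(a, 0.0)) for a in support)

def membership_tv(v, C, W, support):
    """Membership(v, C) = -||p_{v,C} - p_C||_TV; larger values mean the 'inter'
    and 'intra' empirical distributions are closer."""
    C = list(C)
    inter = Counter(W[v][u] for u in C)                  # similarities between v and C
    p_inter = {a: inter[a] / len(C) for a in support}
    pairs = [(u, w) for u in C for w in C if u != w]     # ordered pairs inside C
    intra = Counter(W[u][w] for (u, w) in pairs)
    p_intra = {a: intra[a] / len(pairs) for a in support}
    return -tv_distance(p_inter, p_intra, support)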
The analysis for this case proceeds exactly as above. However, to compare memberships we use Lemma 6 below. Indeed, Lemma 6 can be used in Cases 2 and 5 in lieu of the Chernoff-Hoeffding bounds to obtain exactly the same result.
Lemma 6. Suppose C, C′ ⊆ V, C ∩ C′ = ∅, and |C| ≥ M, |C′| ≥ M = 16 log n/(ε∆(f+, f−)), where mini f+(i), mini f−(i) ≥ ε for a constant ε. Then,

Pr( Membership(v, C′) ≥ Membership(v, C) | v ∈ C ) ≤ 4/n³.
Proof. Let β = ‖f+ − f−‖TV/2. If Membership(v, C′) ≥ Membership(v, C) then we must have ‖pv,C′ − pC′‖TV ≤ ‖pv,C − pC‖TV. This means either ‖pv,C′ − pC′‖TV ≤ β/2 or ‖pv,C − pC‖TV ≥ β/2. Now, using the triangle inequality,

Pr( ‖pv,C′ − pC′‖TV ≤ β/2 ) ≤ Pr( ‖pv,C′ − f+‖TV − ‖pC′ − f+‖TV ≤ β/2 )
 ≤ Pr( ‖pv,C′ − f+‖TV ≤ β or ‖pC′ − f+‖TV ≥ β/2 )
 ≤ Pr( ‖pv,C′ − f+‖TV ≤ β ) + Pr( ‖pC′ − f+‖TV ≥ β/2 ).

Similarly,

Pr( ‖pv,C − pC‖TV ≥ β/2 ) ≤ Pr( ‖pv,C − f+‖TV + ‖pC − f+‖TV ≥ β/2 )
 ≤ Pr( ‖pv,C − f+‖TV ≥ β/4 or ‖pC − f+‖TV ≥ β/4 )
 ≤ Pr( ‖pv,C − f+‖TV ≥ β/4 ) + Pr( ‖pC − f+‖TV ≥ β/4 ).

Now, using Sanov's theorem (Theorem 9), we have,

Pr( ‖pv,C′ − f+‖TV ≤ β ) ≤ (M + 1)^q exp( −M min_{p:‖p−f+‖TV ≤ β} D(p‖f−) ).

At the optimizing p of the exponent,

D(p‖f−) ≥ 2‖p − f−‖²TV                          (from Pinsker's inequality, Lemma 2)
        ≥ 2( ‖f+ − f−‖TV − ‖p − f+‖TV )²         (from the triangle inequality)
        ≥ 2(2β − β)²                             (from the value of β)
        = ‖f+ − f−‖²TV / 2
        ≥ (ε/2) max{ D(f+‖f−), D(f−‖f+) }        (from the reverse Pinsker inequality, Lemma 3)
        ≥ ε∆(f+, f−)/4.

Again, using Sanov's theorem (Theorem 9), we have,

Pr( ‖pC′ − f+‖TV ≥ β/2 ) ≤ (M + 1)^q exp( −M min_{p:‖p−f+‖TV ≥ β/2} D(p‖f+) ).

At the optimizing p of the exponent,

D(p‖f+) ≥ 2‖p − f+‖²TV                          (from Pinsker's inequality, Lemma 2)
        ≥ β²/2                                   (from the value of β)
        = ‖f+ − f−‖²TV / 8
        ≥ (ε/8) max{ D(f+‖f−), D(f−‖f+) }        (from the reverse Pinsker inequality, Lemma 3)
        ≥ ε∆(f+, f−)/16.

Now substituting this in the exponent, using the value of M, and doing the same exercise for the other two probabilities, we get the claim of the lemma.
5 Crowd-Cluster with Perfect Oracle: Known f+, f−
In this section, we take a closer look at the results presented in Section 4. Recall that we have an undirected graph G(V ≡ [n], E), such that G is a union of k disjoint cliques Gi(Vi, Ei), i = 1, . . . , k, but the subsets Vi ⊆ [n] and E are unknown to us. The goal is to determine these clusters accurately (with probability 1) by making the minimum number of pairwise queries. As side information, we are given W, which represents the similarity values computed by some automated algorithm and therefore reflects only a noisy version of the true similarities ({0, 1}). Based on the sophistication of the automated algorithm, and the amount of information available to it, the densities f+ and f− will vary. We have provided a lower bound for this setting in Section 4.
In Algorithm 1, we do not know k, f+, f−, µ+, or µ−, and our goal was to achieve the optimum query complexity for exact reconstruction of the clusters with probability 1. In this section we provide a simpler algorithm that has knowledge of µ+, µ− or even f+, f−, and show that we can achieve optimal query complexity.
Let µ+ − µ− ≥ θgap, and select a parameter M satisfying M = 6 log n/θgap².
The simpler algorithm, referred to as Algorithm (1-a), contains two phases that are repeated as long as there are vertices that have not yet been clustered.
Querying Phase. The algorithm maintains a list of active clusters which contain at least one vertex but whose current size is strictly less than M. For every v which has not yet been assigned to any cluster, the algorithm checks by querying the oracle whether v belongs to any of the clusters in the list. If not, it opens a new cluster with v as its sole member, and adds that cluster to the list.
Estimation Phase. If the size of any cluster C in the list reaches M, then the cluster is removed from the list, and the algorithm enters an estimation phase with C. For every vertex v which has not yet been clustered, it computes the average membership score of v in C as avg(v, C) = (1/|C|) Σ_{u∈C} wu,v. If avg(v, C) ≥ µ+ − θgap/2, then it includes v in C. After this phase, C is marked final and inactive.
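The two phases can be put together in a few lines of Python; this is only a sketch under our own interface (a perfect pairwise oracle oracle(u, v) returning True/False and a similarity matrix W), not the paper's implementation.

import math

def crowd_cluster_1a(vertices, W, oracle, mu_plus, theta_gap):
    n = len(vertices)
    M = math.ceil(6 * math.log(n) / theta_gap ** 2)
    active, final = [], []               # clusters are plain lists of vertices
    unassigned = list(vertices)
    while unassigned:
        v = unassigned.pop()
        # Querying phase: one query per active cluster suffices with a perfect oracle.
        for C in active:
            if oracle(v, C[0]):
                C.append(v)
                break
        else:
            active.append([v])           # open a new singleton cluster
        # Estimation phase: a cluster that reached size M absorbs members via side information.
        for C in [C for C in active if len(C) >= M]:
            active.remove(C)
            still_unassigned = []
            for u in unassigned:
                avg = sum(W[u][x] for x in C) / len(C)
                (C if avg >= mu_plus - theta_gap / 2 else still_unassigned).append(u)
            unassigned = still_unassigned
            final.append(C)              # C is now final and inactive
    return final + active

With µ+ and µ− known (so that M can be computed), this corresponds to the Monte Carlo variant; the Las Vegas modification described later additionally verifies each estimation-phase inclusion with a query.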
Algorithm 1 Crowd-Cluster with Side Information. Input: {V, W}. (Note: O is the perfect oracle.)
1: Pick an arbitrary vertex v and create a new cluster {v}. Set V = V \ v      ⊲ Initialization
2: while V ≠ ∅ do      ⊲ Let the number of current clusters be l ≥ 1
3:   Order the existing clusters in nonincreasing size.      ⊲ Let |C1| ≥ |C2| ≥ . . . ≥ |Cl| be the ordering (w.l.o.g.)
4:   for j = 1 to l do
5:     If ∃v ∈ V such that j = arg max_{i∈[1,l]} Membership(v, Ci), then select v and Break
6:   end for
7:   Query O(v, u) where u ∈ Cj
8:   if O(v, u) == "+1" then
9:     Include v in Cj. V = V \ v
10:  else      ⊲ logarithmic search for membership in the large groups. Note s ≤ ⌈log n⌉
11:    Group C1, C2, ..., Cj−1 into s consecutive classes H1, H2, ..., Hs such that the clusters in group Hi have their current sizes in the range [|C1|/2^i, |C1|/2^{i−1})
12:    for i = 1 to s do
13:      j = arg max_{a: Ca∈Hi} Membership(v, Ca)
14:      Query O(v, u) where u ∈ Cj
15:      if O(v, u) == "+1" then
16:        Include v in Cj. V = V \ v. Break
17:      end if
18:    end for
      ⊲ exhaustive search for membership in the remaining groups
19:    if v ∈ V then
20:      for i = 1 to l + 1 do
21:        if i = l + 1 then      ⊲ v does not belong to any of the existing clusters
22:          Create a new cluster {v}. Set V = V \ v
23:        else
24:          if ∄u ∈ Ci such that (u, v) has already been queried then
25:            Query O(v, u) for some u ∈ Ci
26:            if O(v, u) == "+1" then
27:              Include v in Ci. V = V \ v. Break
28:            end if
29:          end if
30:        end if
31:      end for
32:    end if
33:  end if
34: end while
Lemma 7. The total number of queries made by Algorithm (1-a) is at most k²M.
Proof. Suppose there are k′ clusters of size at least M. The number of queries made in the querying phase to populate these clusters is at most Mk′ · k. For the remaining (k − k′) clusters, their size is at most (M − 1), and again the number of queries made to populate them is at most (M − 1)(k − k′) · k. Hence, the total number of queries made during the querying phases is at most k²M. Furthermore, no queries are made during the estimation phases, and we get the desired bound.
Lemma 8. Algorithm (1-a) retrieves the original clusters with probability at least 1 − 2/n.
Proof. Any vertex that is included in an active cluster must have got a positive answer from a query during the querying phase. Now consider the estimation phase. If u, v ∈ V belong to the same cluster C, then wu,v ∼ f+, else wu,v ∼ f−. Therefore, E[wu,v | u, v ∈ C] = µ+ and E[wu,v | u ∈ C, v ∉ C] = µ−. Then, by the Chernoff-Hoeffding bound (Lemma 4),

Pr( avg(v, C) < µ+ − θgap/2 | v ∈ C ) ≤ e^{−Mθgap²/2} ≤ 1/n³,

and similarly,

Pr( avg(v, C) > µ− + θgap/2 | v ∉ C ) ≤ e^{−Mθgap²/2} ≤ 1/n³.

Therefore, by a union bound, every vertex that is included in C during the estimation phase truly belongs to it, and every vertex that is not included in C truly does not belong to it, with probability ≥ 1 − 2/n². In other words, the probability that the cluster C is not correctly constructed is at most 2/n². Since there can be at most n clusters, the probability that there exists an incorrectly constructed cluster is at most 2/n. Note that any cluster that never enters the estimation phase is always constructed correctly, and any cluster that enters the estimation phase is fully constructed before moving to a new unclustered vertex. Therefore, if a new cluster is formed in the querying phase with v, then v cannot be included in any existing cluster, assuming the clusters grown in the estimation phases are correct. Hence, Algorithm (1-a) correctly retrieves the clusters with probability at least 1 − 2/n.
So far, Algorithm (1-a) is a Monte Carlo algorithm. In order to turn it into a Las Vegas algorithm, we make the following modification.
• In the estimation phase with cluster C, if avg(v, C) ≥ µ+ − θgap/2, then we query v with some member of C. If that returns +1 (i.e., the edge is present), then we include v; else, we query v with one member of every remaining cluster (active and inactive). If none of these queries returns +1, then a singleton cluster with v is created and included in the active list. We then proceed to the next vertex in the estimation phase.
Clearly, this modified Algorithm (1-a) retrieves all clusters correctly and it is a Las Vegas algorithm. We now analyze the expected number of queries made by the algorithm in the estimation phase.
Lemma 9. The modified Las Vegas Algorithm (1-a) makes at most n + 2 queries in expectation during the estimation phase.
Proof. In the estimation phase, only 1 query is made with v while determining its membership if indeed v belongs to cluster C when avg(v, C) ≥ µ+ − θgap/2. This happens with probability at least 1 − 2/n². With the remaining probability, at most (k − 1) ≤ n extra queries may be made with v. At the end of this, either v is included in a cluster, or a new singleton cluster is formed with v. Therefore, the expected number of queries made with v, at the end of which the membership of v is determined, is at most 1 + 2(k − 1)/n². Hence the expected number of total queries made by Algorithm (1-a) in the estimation phase is at most n + 2.
Theorem 10. With known µ+ and µ−, there exist a Monte Carlo algorithm for Crowd-Cluster with query complexity O(k² log n/(µ+ − µ−)²) and a Las Vegas algorithm with expected query complexity O(n + k² log n/(µ+ − µ−)²).
Comparing with the Lower Bound.
Example 1. The KL-divergence between two univariate normal distributions with means µ1 and µ2 and standard deviations σ1 and σ2 respectively is

D(N(µ1, σ1)‖N(µ2, σ2)) = log(σ2/σ1) + (σ1² + (µ1 − µ2)²)/(2σ2²) − 1/2.

Therefore, for f+ and f− that are unit-variance Gaussians with means µ+ and µ−, ∆(f+, f−) = (µ+ − µ−)². Algorithm (1-a) is optimal under these natural distributions within a log n factor.
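This closed form is easy to check numerically; a small Python sketch (ours, for illustration):

import math

def kl_normal(mu1, sigma1, mu2, sigma2):
    """D(N(mu1, sigma1) || N(mu2, sigma2)) in nats."""
    return (math.log(sigma2 / sigma1)
            + (sigma1 ** 2 + (mu1 - mu2) ** 2) / (2 * sigma2 ** 2)
            - 0.5)

mu_plus, mu_minus = 0.7, 0.3
delta = (kl_normal(mu_plus, 1.0, mu_minus, 1.0)
         + kl_normal(mu_minus, 1.0, mu_plus, 1.0))
print(delta, (mu_plus - mu_minus) ** 2)   # both equal 0.16 for unit-variance Gaussians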
Example 2. Let

f−(x) = 1 + ε if 0 ≤ x < 1/2, and 1 − ε if 1/2 ≤ x ≤ 1;
f+(x) = 1 − ε if 0 ≤ x < 1/2, and 1 + ε if 1/2 ≤ x ≤ 1.

That is, they are derived by perturbing the uniform distribution slightly so that f+ puts slightly higher mass when x ≥ 1/2, and f− puts slightly higher mass when x < 1/2.
Note that ∫_0^1 f−(x) dx = ∫_0^{1/2} (1 + ε) dx + ∫_{1/2}^1 (1 − ε) dx = 1. Similarly, ∫_0^1 f+(x) dx = 1; that is, they represent valid probability density functions.
We have

µ− = ∫_0^1 x f−(x) dx = (1 + ε)/8 + 3(1 − ε)/8 = (2 − ε)/4 = 1/2 − ε/4,

and

µ+ = ∫_0^1 x f+(x) dx = (1 − ε)/8 + 3(1 + ε)/8 = (2 + ε)/4 = 1/2 + ε/4.

Thereby, µ+ − µ− = ε/2. Moreover,

D(f+‖f−) = ∫_0^{1/2} (1 − ε) log((1 − ε)/(1 + ε)) dx + ∫_{1/2}^1 (1 + ε) log((1 + ε)/(1 − ε)) dx = ε log((1 + ε)/(1 − ε)) = O(ε²).

Therefore, again Algorithm (1-a) is optimal under these distributions within a log n factor.
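These calculations can be reproduced with a crude numerical integration; the following Python fragment is an illustration we added, not part of the analysis.

import math

eps = 0.1
f_minus = lambda x: 1 + eps if x < 0.5 else 1 - eps
f_plus  = lambda x: 1 - eps if x < 0.5 else 1 + eps

xs = [(i + 0.5) / 10000 for i in range(10000)]            # midpoint rule on [0, 1]
mean = lambda f: sum(x * f(x) for x in xs) / 10000
kl   = lambda f, g: sum(f(x) * math.log(f(x) / g(x)) for x in xs) / 10000

print(mean(f_plus) - mean(f_minus))                       # ~ eps/2 = 0.05
print(kl(f_plus, f_minus), eps * math.log((1 + eps) / (1 - eps)))  # both ~ 0.0201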
Improving Algorithm (1-a) to match the lower bound. We will now show a way to achieve the lower bound for Crowd-Cluster up to a logarithmic term (while matching the denominator) by modifying Algorithm (1-a). We first do this by assuming f+, f− to be discrete distributions over q points a1, a2, . . . , aq. So, wi,j takes values in the set {a1, a2, . . . , aq}.
Theorem 11. With known f+ and f− such that mini f+(i), mini f−(i) ≥ ε for a constant ε, there exist a Monte Carlo algorithm for Crowd-Cluster with query complexity O(k² log n/∆(f+, f−)) and a Las Vegas algorithm with expected query complexity O(n + k² log n/∆(f+, f−)).
Indeed, either of mini f−(i) or mini f+(i) being strictly greater than 0 will serve our purpose. We have argued before that this is not a very restrictive condition.
Proof of Theorem 11. For any vertex v and a cluster C, define the empirical distribution pv,C in the following way. For i = 1, . . . , q,

pv,C(i) = (1/|C|) · |{u ∈ C : wu,v = ai}|.
Now modify Algorithm (1-a) as follows. The querying phase of the algorithm remains exactly the same. In the estimation phase, for a cluster C and an unassigned vertex v, include v in C if D(pv,C‖f+) < D(pv,C‖f−). Everything else remains the same.
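A short Python sketch of this estimation-phase test (our own illustration; it assumes the support of f+ and f− is known and that both have full support, as in the statement of Theorem 11):

import math
from collections import Counter

def kl(p, q, support):
    """Empirical KL divergence D(p || q); requires q[a] > 0 for every a in the support."""
    return sum(p[a] * math.log(p[a] / q[a]) for a in support if p.get(a, 0.0) > 0)

def include_by_kl(v, C, W, f_plus, f_minus, support):
    """Include v in C iff D(p_{v,C} || f+) < D(p_{v,C} || f-)."""
    counts = Counter(W[v][u] for u in C)
    p_vC = {a: counts[a] / len(C) for a in support}
    return kl(p_vC, f_plus, support) < kl(p_vC, f_minus, support)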
Now, a vertex v ∈ C will erroneously not be assigned to it with probability

Pr( D(pv,C‖f+) ≥ D(pv,C‖f−) | v ∈ C ) = f+{ pv,C : D(pv,C‖f+) ≥ D(pv,C‖f−) }
 ≤ (M + 1)^q exp( −M min_{p:D(p‖f+)≥D(p‖f−)} D(p‖f+) ),

where in the last step we have used Sanov's theorem (see Theorem 9). Due to Lemma 10, we can replace the constraint of the optimization in the exponent above by an equality. Hence,

Pr( D(pv,C‖f+) ≥ D(pv,C‖f−) | v ∈ C ) ≤ (M + 1)^{q+1} exp( −M min_{p:D(p‖f+)=D(p‖f−)} D(p‖f+) ) ≤ 1/n³,

whenever

M = 8 log n / ( min_{p:D(p‖f+)=D(p‖f−)} D(p‖f+) ).

This value of M is also sufficient to have

Pr( D(pv,C‖f+) < D(pv,C‖f−) | v ∉ C ) ≤ 1/n³.
While the rest of the analysis stays the same as before, the overall query complexity of this modified algorithm is

O( k² log n / ( min_{p:D(p‖f+)=D(p‖f−)} D(p‖f+) ) ).

If the divergence were a distance, then in the denominator above we would have got D(f+‖f−)/2, which would be the same as the lower bound we have obtained. However, since that is not the case, we rely on the following chain of inequalities at the optimizing point p:

D(p‖f+) = D(p‖f−) = ( D(p‖f+) + D(p‖f−) )/2 ≥ ‖p − f+‖²TV + ‖p − f−‖²TV ≥ ( ‖p − f+‖TV + ‖p − f−‖TV )²/2 ≥ ‖f+ − f−‖²TV / 2,

where we have used Pinsker's inequality (Lemma 2), the convexity of the function x², and the triangle inequality for the total variation distance, respectively. Now, as the last step, we use the reverse Pinsker inequality (Lemma 3) to obtain

‖f+ − f−‖²TV ≥ (ε/2) max{ D(f+‖f−), D(f−‖f+) } ≥ ε∆(f+, f−)/4.

This completes the proof.
Lemma 10.  min_{p:D(p‖f+)≥D(p‖f−)} D(p‖f+) = min_{p:D(p‖f+)=D(p‖f−)} D(p‖f+).
Proof. Since the condition D(p‖f+) ≥ D(p‖f−) can be written as Σ_i p(i) ln( f−(i)/f+(i) ) ≥ 0, we need to solve the constrained optimization

min_p D(p‖f+)      (3)
such that Σ_i p(i) ln( f−(i)/f+(i) ) ≥ 0.      (4)

We claim that the inequality in (4) can be replaced by an equality without any change in the optimizing value. Suppose this is not true, and the optimizing value p̃ is such that Σ_i p̃(i) ln( f−(i)/f+(i) ) = ε > 0.
Let λ = ε/( ε + D(f+‖f−) ) ∈ (0, 1). Note that for the value p∗ = λf+ + (1 − λ)p̃ we have

Σ_i p∗(i) ln( f−(i)/f+(i) ) = −λ D(f+‖f−) + (1 − λ)ε = 0.

However, since D(p‖f+) is a strictly convex function of p, we must have

D(p∗‖f+) < λ D(f+‖f+) + (1 − λ) D(p̃‖f+) = (1 − λ) D(p̃‖f+),

which contradicts p̃ being the optimizing value.
6 Crowd-Cluster with Faulty Oracle
We now consider the case when the crowd may return erroneous answers. We do not allow resampling of the same query: by resampling, one could always obtain the correct answer for each query with high probability, after which the algorithms for the perfect oracle apply directly. Hence, the oracle can be queried with a particular tuple only once in our setting.
6.1 No Side Information

6.1.1 Lower bound for the faulty-oracle model
Suppose G(V, E) is a union of k disjoint cliques as before; we have V = ⊔_{i=1}^k Vi. We consider the following faulty-oracle model. We can query the oracle whether there exists an edge between vertices i and j. The oracle gives the correct answer with probability 1 − p and the incorrect answer with probability p. We would like to estimate the minimum number of queries one must make to the oracle so that we can recover the clusters with high probability. In this section we forbid the use of any side information that may be obtained from an automated system. The main goal of this section is to prove Theorem 2.
As argued in the introduction, there is a need for a minimum cluster size. If there is no minimum size requirement on a cluster, then the input graph can always consist of multiple clusters of very small size. Then consider the following two different clusterings: C1 : V = ⊔_{i=1}^{k−2} Vi ⊔ {v1, v2} ⊔ {v3} and C2 : V = ⊔_{i=1}^{k−2} Vi ⊔ {v1} ⊔ {v2, v3}. Now, if one of these two clusterings is given to us uniformly at random, no matter how many queries we make, we will fail to recover the correct clustering with probability at least p (recall that resampling is not allowed).
The argument above does not hold for the case when p = 0. In that case, any (randomized) algorithm has to use (in expectation) Ω(nk) queries for correct clustering (see the p = 0 case of Theorem 2 below). While for deterministic algorithms the proof of the above fact is straightforward, for randomized algorithms it was established in [22]. In [22], a clustering was called balanced if the minimum and maximum sizes of the clusters are only a constant factor away. In particular, [22] observes that, for unbalanced input, the lower bound for the p = 0 case is easier. For randomized algorithms and balanced inputs, they left the lower bound as an open problem. Theorem 2 resolves this as a special case.
Indeed, in Theorem 2, we provide lower bounds for 0 ≤ p < 1/2, assuming inputs such that either 1) the maximum size of a cluster is within a constant factor of the average size, or 2) the minimum size of a cluster is a constant fraction of the average size. Note that the average size of a cluster is n/k.
The technique to prove Theorem 2 is similar to the one we used in Theorem 1. However, we only handle binary random variables here (the answers to the queries). The significant difference is that, while designing the input, we consider a balanced clustering (with small-sized clusters we can always fool any algorithm, as exemplified above). This removes the constraint on the algorithm designer on how many times a cluster can be queried with a vertex. While Lemma 11 shows that a sufficient number of queries must be made with a large number of vertices V′ ⊂ V, either of the conditions on the minimum or maximum size of a cluster ensures that V′ contains enough vertices that do not satisfy this query requirement.
As mentioned, Lemma 11 is crucial to prove Theorem 2.
Lemma 11. Suppose there are k clusters. There exist at least 4k/5 clusters such that a vertex v from any one of these clusters will be assigned to a wrong cluster by any randomized algorithm with positive probability, unless the number of queries involving v is more than k/(10 D(p‖1 − p)) when p > 0, and k/10 when p = 0.
Proof. Let us assume that the k clusters are already formed, and we can moreover assume that all vertices except for the said vertex have already been assigned to a cluster. Note that queries that do not involve the said vertex play no role at this stage.
Now the problem reduces to a hypothesis testing problem where the ith hypothesis Hi, for i = 1, . . . , k, denotes that the true cluster is Vi. We can also add a null hypothesis H0 that stands for the vertex belonging to none of the clusters. Let Pi denote the joint probability distribution of our observations (the answers to the queries involving vertex v) when Hi is true, i = 0, 1, . . . , k. That is, for any event A, we have Pi(A) = Pr(A | Hi).
Suppose Q denotes the total number of queries made by a (possibly randomized) algorithm at this stage. Let the random variable Qi denote the number of queries involving cluster Vi, i = 1, . . . , k.
We must have Σ_{i=1}^k E0 Qi ≤ Q. Let

J1 ≡ { i ∈ {1, . . . , k} : E0 Qi ≤ 10Q/k }.

Since (k − |J1|) · 10Q/k ≤ Q, we have |J1| ≥ 9k/10.
Let Ei ≡ { the algorithm outputs cluster Vi }, and let

J2 ≡ { i ∈ {1, . . . , k} : P0(Ei) ≤ 10/k }.

Moreover, since Σ_{i=1}^k P0(Ei) ≤ 1, we must have (k − |J2|) · 10/k ≤ 1, or |J2| ≥ 9k/10. Therefore, J = J1 ∩ J2 has size

|J| ≥ 2 · 9k/10 − k = 4k/5.
Now let us assume that we are given a vertex v ∈ Vj, for some j ∈ J, to cluster. The probability of correct clustering is Pj(Ej). We must have

Pj(Ej) = P0(Ej) + Pj(Ej) − P0(Ej) ≤ 10/k + |P0(Ej) − Pj(Ej)| ≤ 10/k + ‖P0 − Pj‖TV ≤ 10/k + √( D(P0‖Pj)/2 ),

where we again used the definition of the total variation distance and in the last step we used Pinsker's inequality (Lemma 2). The task is now to bound the divergence D(P0‖Pj). Recall that P0 and Pj are the joint distributions of the independent random variables (answers to queries), each of which is identical to one of two Bernoulli random variables: Y, which is Bernoulli(p), or Z, which is Bernoulli(1 − p). Let X1, . . . , XQ denote the outputs of the queries, all independent random variables. We must have, from the chain rule (Lemma 1),

D(P0‖Pj) = Σ_{i=1}^Q D( P0(xi | x1, . . . , xi−1) ‖ Pj(xi | x1, . . . , xi−1) )
 = Σ_{i=1}^Q Σ_{(x1,...,xi−1)∈{0,1}^{i−1}} P0(x1, . . . , xi−1) D( P0(xi | x1, . . . , xi−1) ‖ Pj(xi | x1, . . . , xi−1) ).

Note that, for the random variable Xi, the term D( P0(xi | x1, . . . , xi−1) ‖ Pj(xi | x1, . . . , xi−1) ) contributes D(p‖1 − p) only when the query involves the cluster Vj; otherwise the term contributes 0. Hence,

D(P0‖Pj) = Σ_{i=1}^Q Σ_{(x1,...,xi−1): the ith query involves Vj} P0(x1, . . . , xi−1) D(p‖1 − p)
 = D(p‖1 − p) Σ_{i=1}^Q P0( the ith query involves Vj ) = D(p‖1 − p) E0 Qj ≤ (10Q/k) D(p‖1 − p).

Now, plugging this in,

Pj(Ej) ≤ 10/k + √( (1/2)(10Q/k) D(p‖1 − p) ) ≤ 10/k + √(1/2),

if Q ≤ k/(10 D(p‖1 − p)). On the other hand, when p = 0, Pj(Ej) < 1 when E0 Qj < 1; since E0 Qj ≤ 10Q/k, this requires Q ≥ k/10 whenever p = 0.
Now we are ready to prove Theorem 2.
Proof of Theorem 2. We will show the claim by considering any input, with a restriction on either the maximum or the minimum cluster size. We consider the following two cases for the proof.
Case 1: the maximum size of a cluster is ≤ 4n/k.
Suppose the total number of queries is T. That means the number of vertices involved in the queries is ≤ 2T. Note that there are k clusters and n elements.
Let U be the set of vertices that are involved in fewer than 16T/n queries. Clearly,

(n − |U|) · 16T/n ≤ 2T, or |U| ≥ 7n/8.

Now we know from Lemma 11 that there exist 4k/5 clusters such that a vertex v from any one of these clusters will be assigned to a wrong cluster by any randomized algorithm with positive probability unless the expected number of queries involving this vertex is more than k/(10 D(p‖1 − p)) for p > 0, and k/10 when p = 0.
We claim that U must have an intersection with at least one of these 4k/5 clusters. If not, then more than 7n/8 vertices must belong to fewer than k − 4k/5 = k/5 clusters, so the maximum size of a cluster would be greater than (7n/8) · (5/k) > 4n/k, which is prohibited according to our assumption.
Consider the case p > 0. Each vertex in the intersection of U and the 4k/5 clusters will be assigned to an incorrect cluster with positive probability if 16T/n ≤ k/(10 D(p‖1 − p)). Therefore we must have

T ≥ nk/(160 D(p‖1 − p)).

Similarly, when p = 0 we must have T ≥ nk/160.
Case 2: the minimum size of a cluster is ≥ n/(20k).
Let U′ be the set of clusters that are involved in at most 16T/k queries. That means (k − |U′|) · 16T/k ≤ 2T. This implies |U′| ≥ 7k/8.
Now we know from Lemma 11 that there exist 4k/5 clusters (say U∗) such that a vertex v from any one of these clusters will be assigned to a wrong cluster by any randomized algorithm with positive probability unless the expected number of queries involving this vertex is more than k/(10 D(p‖1 − p)) for p > 0, and k/10 for p = 0.
Quite clearly, |U∗ ∩ U′| ≥ 7k/8 + 4k/5 − k = 27k/40.
Consider a cluster Vi such that i ∈ U∗ ∩ U′, which is always possible because the intersection is nonempty. Vi is involved in at most 16T/k queries. Let the minimum size of any cluster be t. Then at least half of the vertices of Vi must each be involved in at most 32T/(kt) queries. Now each of these vertices must be involved in at least k/(10 D(p‖1 − p)) queries (see Lemma 11) to avoid being assigned to a wrong cluster with positive probability (for the case of p = 0 this number would be k/10).
This means

32T/(kt) ≥ k/(10 D(p‖1 − p)), or T = Ω( nk/D(p‖1 − p) ),

for p > 0, since t ≥ n/(20k). Similarly, when p = 0 we need T = Ω(nk).

6.1.2 Upper Bound
Now we provide an algorithm to retrieve the clustering with the help of the faulty oracle when no side information is present. The algorithm is summarized in Algorithm 2 and works as follows. It maintains an active list of clusters A, and a sample graph G′ = (V′, E′) which is an induced subgraph of G. Initially, both are empty. The algorithm always maintains the invariant that any cluster in A has at least c log n members, where c = 6/λ² and p = 1/2 − λ. Note that the algorithm knows λ. Furthermore, all V′(G′) × V′(G′) queries have been made. Now, when a vertex v is considered by the algorithm (step 3), we first check if v can be included in any of the clusters in A. This is done by picking c log n distinct members from each cluster and querying v with them. If the majority of these queries return +1, then v is included in that cluster, and we proceed to the next vertex. Otherwise, if v cannot be included in any of the clusters in A, then we add it to V′(G′) and query v against all other vertices in G′. Once G′ has been modified, we extract the heaviest weight subgraph from G′, where the weight of an edge (u, v) ∈ E(G′) is defined as ωu,v = +1 if the query answer for that edge is +1, and −1 otherwise. If that subgraph contains at least c log n members then we include it as a cluster in A. At that time, we also check whether any other vertex u in G′ can join this newly formed cluster, by checking whether the majority of the (already) queried edges from u to this new cluster answered +1. At the end, all the clusters in A, and the maximum likelihood clustering of G′, are returned.
Before showing the correctness of Algorithm 2, we elaborate on finding the maximum likelihood estimate for the clusters in G.
Finding the Maximum Likelihood Clustering of G with a faulty oracle. We have an undirected graph G(V ≡ [n], E), such that G is a union of k disjoint cliques Gi(Vi, Ei), i = 1, . . . , k. The subsets Vi ⊆ [n] are unknown to us. The adjacency matrix of G is a block-diagonal matrix; let us denote this matrix by A = (ai,j).
Now suppose each edge of G is erased independently with probability p, and at the same time each non-edge is replaced with an edge with probability p. Let the resulting adjacency matrix of the modified graph be Z = (zi,j). The aim is to recover A from Z.
The maximum likelihood recovery is given by the following:

max_{Sℓ, ℓ=1,...: V=⊔ℓ Sℓ} Π_ℓ Π_{i,j∈Sℓ, i≠j} P+(zi,j) Π_{r,t: r≠t} Π_{i∈Sr, j∈St} P−(zi,j)
 = max_{Sℓ, ℓ=1,...: V=⊔ℓ Sℓ} Π_ℓ Π_{i,j∈Sℓ, i≠j} [ P+(zi,j)/P−(zi,j) ] · Π_{i,j∈V, i≠j} P−(zi,j),

where P+(1) = 1 − p, P+(0) = p, P−(1) = p, P−(0) = 1 − p. Hence, the ML recovery asks for

max_{Sℓ, ℓ=1,...: V=⊔ℓ Sℓ} Σ_ℓ Σ_{i,j∈Sℓ, i≠j} ln[ P+(zi,j)/P−(zi,j) ].

Note that

ln[ P+(0)/P−(0) ] = −ln[ P+(1)/P−(1) ] = ln[ p/(1 − p) ].

Hence the ML estimation is

max_{Sℓ, ℓ=1,...: V=⊔ℓ Sℓ} Σ_ℓ Σ_{i,j∈Sℓ, i≠j} ωi,j,      (5)

where ωi,j = 2zi,j − 1 for i ≠ j, i.e., ωi,j = 1 when zi,j = 1 and ωi,j = −1 when zi,j = 0, i ≠ j. Further, ωi,i = zi,i = 0, i = 1, . . . , n.
Note that (5) is equivalent to finding a correlation clustering in G with the objective of maximizing the consistency with the edge labels; that is, we want to maximize the total number of positive intra-cluster edges plus the total number of negative inter-cluster edges [8, 44, 42]. This can be seen as follows.

max_{Sℓ, ℓ=1,...: V=⊔ℓ Sℓ} Σ_ℓ Σ_{i,j∈Sℓ, i≠j} ωi,j
 ≡ max_{Sℓ, ℓ=1,...: V=⊔ℓ Sℓ} Σ_ℓ [ |{(i, j) : i, j ∈ Sℓ, i≠j, ωi,j = +1}| − |{(i, j) : i, j ∈ Sℓ, i≠j, ωi,j = −1}| ] + |{(i, j) : i, j ∈ V, i≠j, ωi,j = −1}|
 = max_{Sℓ, ℓ=1,...: V=⊔ℓ Sℓ} Σ_ℓ |{(i, j) : i, j ∈ Sℓ, i≠j, ωi,j = +1}| + Σ_{r,t: r≠t} |{(i, j) : i ∈ Sr, j ∈ St, ωi,j = −1}|.

Here "≡" indicates that the two maximizations have the same optimizers, since the added term |{(i, j) : i, j ∈ V, i≠j, ωi,j = −1}| does not depend on the clustering.
Therefore (5) is the same as correlation clustering; however, viewing it as obtaining clusters with maximum intra-cluster weight helps us obtain the desired running time of our algorithm. Also note that we have a random instance of correlation clustering here, and not a worst-case instance.
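The equivalence is easy to sanity-check on small instances; the following Python fragment (ours) evaluates both objectives. They differ only by the constant number of −1 pairs, so both are maximized by the same partition.

from itertools import combinations

def intra_weight(partition, omega):
    """Objective (5): total +/-1 weight of the pairs inside clusters."""
    return sum(omega[i][j] for S in partition for i, j in combinations(sorted(S), 2))

def agreements(partition, omega):
    """Correlation-clustering view: +1 intra-cluster pairs plus -1 inter-cluster pairs."""
    label = {v: idx for idx, S in enumerate(partition) for v in S}
    n = len(label)
    good = 0
    for i, j in combinations(range(n), 2):
        same = (label[i] == label[j])
        if (same and omega[i][j] == +1) or (not same and omega[i][j] == -1):
            good += 1
    return good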
We are now ready to prove the correctness of Algorithm 2.
Algorithm 2 Crowd-Cluster with Error & No Side Information. Input: {V}
1: V′ = ∅, E′ = ∅, G′ = (V′, E′)
2: A = ∅
3: while ∃v ∈ V yet to be clustered do
4:   for each cluster C ∈ A do      ⊲ Set c = 6/λ² where λ ≡ 1/2 − p
5:     Select u1, u2, .., ul, where l = c log n, distinct members from C and obtain Op(ui, v), i = 1, 2, .., l. If the majority of these queries return +1, then include v in C. Break.
6:   end for
7:   if v is not included in any cluster in A then
8:     Add v to V′. For every u ∈ V′ \ v, obtain Op(v, u). Add an edge (v, u) to E′(G′) with weight ωu,v = +1 if Op(u, v) == +1, else with ωu,v = −1
9:     Find the heaviest weight subgraph S in G′. If |S| ≥ c log n, then add S to the list of clusters in A, and remove the vertices of S and their incident edges from V′, E′.
10:    while ∃z ∈ V′ with Σ_{u∈S} ωz,u > 0 do
11:      Include z in S and remove z and all edges incident on it from V′, E′
12:    end while
13:  end if
14: end while
15: return all the clusters formed in A and the ML estimate from G′
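Step 5 of Algorithm 2, the majority vote against a sampled set of cluster members, can be sketched as follows; noisy_oracle(u, v) is an assumed interface that returns +1 or −1 and errs with probability p = 1/2 − λ.

import math
import random

def majority_member(v, cluster, noisy_oracle, lam, n):
    """Query v against l = c log n distinct members (c = 6 / lambda^2) and accept
    membership if the majority of the noisy +/-1 answers are +1."""
    c = 6 / lam ** 2
    l = min(len(cluster), math.ceil(c * math.log(n)))
    sample = random.sample(list(cluster), l)
    votes = sum(noisy_oracle(v, u) for u in sample)
    return votes > 0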
Correctness of Algorithm 2. To establish the correctness of Algorithm 2, we show the following. Suppose all C(n, 2) queries on V × V have been made. If the maximum likelihood (ML) estimate of G with these C(n, 2) answers is the same as the true clustering of G, then Algorithm 2 finds the true clustering with high probability. There are a few steps to prove the correctness.
The first step is to show that any set S that is retrieved in step 9 of Algorithm 2 from G′, and added to A, is a subcluster of G (Lemma 12). This establishes that all clusters in A at any time are subclusters of some original cluster in G. Next, we show that vertices that are added to a cluster in A are added correctly, and no two clusters in A can be merged (Lemma 13). Therefore, the clusters obtained from A are the true clusters. Finally, the remaining clusters can be retrieved from G′ by computing an ML estimate on G′ in step 15, leading to Theorem 12.
Lemma 12. Let c′ = 6c = 36/λ², where λ = 1/2 − p. Algorithm 2 in step 9 returns a subcluster of G of size at least c log n with high probability if G′ contains a subcluster of G of size at least c′ log n. Moreover, Algorithm 2 in step 9 does not return any set of vertices of size at least c log n if G′ does not contain a subcluster of G of size at least c log n.
Proof. Let V′ = ⊔i Vi′, i ∈ [1, k], where Vi′ ∩ Vj′ = ∅ for i ≠ j and Vi′ ⊆ Vi(G). Suppose without loss of generality |V1′| ≥ |V2′| ≥ .... ≥ |Vk′|. The lemma is proved via a series of claims.
Claim 1. Let |V1′| ≥ c′ log n. Then in step 9, a set S ⊆ Vi for some i ∈ [1, k] will be returned with size at least c log n with high probability.
For an i with |Vi′| ≥ c′ log n, we have

E[ Σ_{s,t∈Vi′, s<t} ωs,t ] = C(|Vi′|, 2) · ((1 − p) − p) = (1 − 2p) C(|Vi′|, 2).

Since the ωs,t are independent binary random variables, using Hoeffding's inequality (Lemma 4),

Pr( Σ_{s,t∈Vi′, s<t} ωs,t ≤ E Σ_{s,t∈Vi′, s<t} ωs,t − u ) ≤ e^{ −2u² / C(|Vi′|, 2) }.

Hence,

Pr( Σ_{s,t∈Vi′, s<t} ωs,t > (1 − δ) E Σ_{s,t∈Vi′, s<t} ωs,t ) ≥ 1 − e^{ −δ²(1−2p)² C(|Vi′|, 2)/2 }.

Therefore, with high probability, Σ_{s,t∈Vi′, s<t} ωs,t > (1 − δ)(1 − 2p) C(|Vi′|, 2) ≥ (c′²/3)(1 − 2p) log² n, for an appropriately chosen δ (say δ = 1/3).
So, Algorithm 2 in step (9) must return a set S such that |S| ≥ c′ √(2(1 − 2p)/3) log n = c″ log n (define c″ = c′ √(2(1 − 2p)/3)) with high probability, since otherwise

Σ_{i,j∈S, i<j} ωi,j ≤ C(c″ log n, 2) < (c′²/3)(1 − 2p) log² n ≤ (1 − δ)(1 − 2p) C(|Vi′|, 2).
Now let S ⊄ Vi for any i. Then S must have an intersection with at least 2 clusters. Let Vi ∩ S = Ci and let j∗ = arg min_{i: Ci≠∅} |Ci|. We claim that

Σ_{i,j∈S, i<j} ωi,j < Σ_{i,j∈S\Cj∗, i<j} ωi,j      (6)

with high probability. Condition (6) is equivalent to

Σ_{i,j∈Cj∗, i<j} ωi,j + Σ_{i∈Cj∗, j∈S\Cj∗} ωi,j < 0.

However, this is true because:
1. E[ Σ_{i,j∈Cj∗, i<j} ωi,j ] = (1 − 2p) C(|Cj∗|, 2) and E[ Σ_{i∈Cj∗, j∈S\Cj∗} ωi,j ] = −(1 − 2p)|Cj∗| · |S \ Cj∗|.
2. As long as |Cj∗| ≥ √(2 log n), we have, from Hoeffding's inequality (Lemma 4),

Pr( Σ_{i,j∈Cj∗, i<j} ωi,j ≥ (1 + λ)(1 − 2p) C(|Cj∗|, 2) ) ≤ e^{ −λ²(1−2p)² C(|Cj∗|, 2)/2 } = o_n(1),

while at the same time,

Pr( Σ_{i∈Cj∗, j∈S\Cj∗} ωi,j ≥ −(1 − λ)(1 − 2p)|Cj∗| · |S \ Cj∗| ) ≤ e^{ −λ²(1−2p)² |Cj∗|·|S\Cj∗| /2 } = o_n(1).

In this case, of course, with high probability Σ_{i,j∈Cj∗, i<j} ωi,j + Σ_{i∈Cj∗, j∈S\Cj∗} ωi,j < 0.
3. When |Cj∗| < √(2 log n), we have

Σ_{i,j∈Cj∗, i<j} ωi,j ≤ C(|Cj∗|, 2) ≤ 2 log² n,

while at the same time,

Pr( Σ_{i∈Cj∗, j∈S\Cj∗} ωi,j ≥ −(1 − λ)(1 − 2p)|Cj∗| · |S \ Cj∗| ) ≤ e^{ −λ²(1−2p)² |Cj∗|·|S\Cj∗| /2 } = o_n(1).

Hence, even in this case, with high probability,

Σ_{i,j∈Cj∗, i<j} ωi,j + Σ_{i∈Cj∗, j∈S\Cj∗} ωi,j < 0.

Hence (6) is true with high probability. But then Algorithm 2 in step 9 would not return S, but would return S \ Cj∗. Hence we have run into a contradiction. This means S ⊆ Vi for some Vi.
We know |S| ≥ c′ √(2(1 − 2p)/3) log n, while |V1′| ≥ c′ log n. In fact, with high probability, |S| ≥ ((1 − δ)/2) c′ log n. Since all the vertices in S belong to the same cluster in G, this holds again by an application of Hoeffding's inequality: otherwise, the probability that the weight of S is at least as high as the weight of V1′ is at most 1/n².
Claim 2. If |V1′| < c log n, then in step 9 of Algorithm 2, no subset of size > c log n will be returned.
If Algorithm 2 in step 9 returns a set S with |S| > c log n, then S must have an intersection with at least 2 clusters in G. Now, following the same argument as in Claim 1 to establish Eq. (6), we arrive at a contradiction, and S cannot be returned.
This establishes the lemma.
Lemma 13. The collection A contains all the true clusters of G of size ≥ c′ log n at the end of
Algorithm 2 with high probability.
Proof. From Lemma 12, any cluster that is computed in step 9 and added to A is a subset of some original cluster in G, and has size at least c log n with high probability. Moreover, whenever G′ contains a subcluster of G of size at least c′ log n, it is retrieved by our algorithm and added to A.
A vertex v is added to a cluster in A either in step 5 or in step 11. Suppose v has been added to some cluster C ∈ A. Then in both cases, |C| ≥ c log n at the time v is added, and there exist l = c log n distinct members of C, say u1, u2, .., ul, such that the majority of the queries of v with these vertices returned +1. By the standard Chernoff-Hoeffding bound (Lemma 4), Pr(v ∉ C) ≤ exp(−c log n · (1 − 2p)²/(12(1 − p))) = exp(−c log n · 2λ²/(3(1 + 2λ))) ≤ exp(−c log n · λ²/3), where the last inequality follows since λ < 1/2. On the other hand, if there exists a cluster C ∈ A such that v ∈ C, and v has already been considered by the algorithm, then either in step 5 or step 11, v will be added to C. This again follows from the Chernoff-Hoeffding bound, as Pr(v not included in C | v ∈ C) ≤ exp(−c log n · (1 − 2p)²/(8(1 − p))) = exp(−c log n · λ²/(1 + 2λ)) ≤ exp(−c log n · λ²/2).
Therefore, if we set c = 6/λ², then for all v, if v is included in a cluster in A, the assignment is correct with probability at least 1 − 2/n. Also, the assignment happens as soon as such a cluster is formed in A.
Furthermore, two clusters in A cannot be merged. Suppose, if possible, there are two clusters C1 and C2, both of which are proper subsets of some original cluster in G. Without loss of generality, let C2 be added later to A. Consider the first vertex v ∈ C2 that is considered by our Algorithm 2 in step 3. If C1 is already in A at that time, then with high probability v will be added to C1 in step 5. Therefore, C1 must have been added to A after v has been considered by our algorithm and added to G′. Now, at the time C1 is added to A in step 9, v ∈ V′, and again v will be added to C1 with high probability in step 11, thereby giving a contradiction.
This completes the proof of the lemma.
All this leads us to the following theorem.
Theorem 12. If the ML estimate on G with all possible C(n, 2) queries returns the true clustering, then Algorithm 2 returns the true clusters with high probability. Moreover, Algorithm 2 returns all the true clusters of G of size at least c′ log n with high probability.
Proof. From Lemma 12 and Lemma 13, A contains all the true clusters of G of size at least c′ log n with high probability. Any vertex that is not included in the clusters in A at the end of Algorithm 2 is in G′, and G′ contains all possible pairwise queries among these vertices. Clearly, then, the ML estimate of G′ is the true ML estimate of G restricted to these clusters.
Query Complexity of Algorithm 2.
Lemma 14. Let p = 1/2 − λ. The query complexity of Algorithm 2 is 36 nk log n/λ².
Proof. Let there be k′ clusters in A when v is considered in step 3 of Algorithm 2. Then v is queried with at most ck′ log n current members, c log n each from these k′ clusters. If the membership of v does not get determined, then v is queried with all the vertices in G′. We have seen in the correctness proof (Lemma 12) that if G′ contains at least c′ log n vertices from any original cluster, then the ML estimate on G′ retrieves those vertices as a cluster in step 9 with high probability. Hence, when v is queried with all vertices in G′, |V′| ≤ (k − k′)c′ log n. Thus the total number of queries made to determine the membership of v is at most c′k log n, where c′ = 6c = 36/λ² when the error probability is p = 1/2 − λ. This gives the query complexity of Algorithm 2 considering all the vertices.
This matches the lower bound computed in Section 6.1.1 within a log n factor, since D(p‖1 − p) = (1 − 2p) ln((1 − p)/p) = 2λ ln((1/2 + λ)/(1/2 − λ)) = 2λ ln(1 + 2λ/(1/2 − λ)) ≤ 4λ²/(1/2 − λ) = O(λ²).
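The divergence bound used here is easy to verify numerically; a short Python check (ours):

import math

def d_bernoulli(p, q):
    """KL divergence D(Bern(p) || Bern(q)) in nats."""
    return p * math.log(p / q) + (1 - p) * math.log((1 - p) / (1 - q))

for lam in (0.05, 0.1, 0.2):
    p = 0.5 - lam
    print(lam, d_bernoulli(p, 1 - p), 4 * lam ** 2 / (0.5 - lam))
    # D(p || 1-p) stays below 4*lambda^2 / (1/2 - lambda) in every case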
Now combining all these we get the statement of Theorem 5.
Theorem (5). Faulty Oracle with No Side Information. There exists an algorithm with query complexity O((1/λ²) nk log n) for Crowd-Cluster that returns Ĝ, the ML estimate of G with all C(n, 2) queries, with high probability, when query answers are incorrect with probability p = 1/2 − λ. Noting that D(p‖1 − p) ≤ 4λ²/(1/2 − λ), this matches the information-theoretic lower bound on the query complexity within a log n factor. Moreover, the algorithm returns all the true clusters of G of size at least (36/λ²) log n with high probability.
Running Time of Algorithm 2 and Further Discussions. In step 9 of Algorithm 2, we need to find a large cluster, of size at least O((1/λ²) log n), of the original input G from G′. By Lemma 12, if we can extract the heaviest weight subgraph in G′, where edges are labelled ±1, and that subgraph meets the required size bound, then with high probability it is a subset of an original cluster. This subset can of course be computed in O(n^{(1/λ²) log n}) time. Since the size of G′ is bounded by O((k/λ²) log n), the running time is O([(k/λ²) log n]^{(1/λ²) log n}). While query complexity is independent of running time, it is unlikely that this running time can be improved to polynomial. This follows from the planted clique conjecture.
Conjecture 1 (Planted Clique Hardness). Given an Erdős-Rényi random graph G(n, p) with p = 1/2, the planted clique conjecture states that if we plant in G(n, p) a clique of size t, where t = [O(log n), o(√n)], then there exists no polynomial time algorithm to recover the largest clique in this planted model.
Given such a graph with a planted clique of size t = Θ(log n), we can construct a new graph H by randomly deleting each edge with probability 1/3. Then in H, there is one cluster of size t where the edge error probability is 1/3, and the remaining clusters are singletons with inter-cluster edge error probability (1 − 1/2 − 1/6) = 1/3. So, if we could detect the heaviest weight subgraph in polynomial time in Algorithm 2, there would be a polynomial time algorithm for the planted clique problem.
Polynomial time algorithm. We can reduce the running time from quasi-polynomial to polynomial by paying more in query complexity. Suppose we accept a subgraph extracted from G′ as valid, and add it to A, iff its size is Ω(k). Then, since G′ can contain at most k² vertices, such a subgraph can be obtained in polynomial time following the algorithm for correlation clustering with noisy input [44], where all the clusters of size at least O(√n) are recovered on an n-vertex graph. Since our ML estimate is correlation clustering, we can employ [44]. For k ≥ (1/λ²) log n, the entire analysis remains valid, and we get a query complexity of Õ(nk²) as opposed to O(nk/λ²). If k < (1/λ²) log n, then clusters of size less than (1/λ²) log n are anyway not recoverable. Note that any cluster of size less than k is not recovered in this process, and this bound only makes sense when k < √n. When k ≥ √n, we can however recover all clusters of size at least O(√n).
Corollary (2). There exists a polynomial time algorithm with query complexity O((1/λ²) nk²) for Crowd-Cluster when query answers are incorrect with probability 1/2 − λ, which recovers all clusters of size at least O(max{(1/λ²) log n, k}) in G.
This also leads to an improved algorithm for correlation clustering over a noisy graph. Previously, the works of [44, 8] could only recover clusters of size at least O(√n). However, now if k ∈ [Ω(log n/λ²), o(√n)], using this algorithm, we can recover all clusters of size at least k.
6.1.3 With Side Information
The algorithm for Crowd-Cluster with side information, when the crowd may return erroneous answers, is a direct combination of Algorithm 1 and Algorithm 2. We assume the side information is less accurate than querying, because otherwise querying is not useful; in other words, ∆(f+, f−) < ∆(p, 1 − p).
We therefore use only the queried answers to extract the heaviest subgraph from G′, and add that to the list A. For the clusters in the list A, we follow the strategy of Algorithm 1 to recover the underlying clusters. The pseudocode is given in Algorithm 3. The correctness of the algorithm follows directly from the analyses of Algorithm 1 and Algorithm 2.
We now analyze the query complexity. Consider a vertex v which needs to be included in a cluster. Let there be (r − 1) other vertices from the same cluster as v that have been considered by the algorithm prior to v.
1. Case 1. r ∈ [1, c log n]: the number of queries is at most kc log n. In this case v is added to G′ according to Algorithm 2.
2. Case 2. r ∈ (c log n, 2M]: the number of queries can be k · c log n. In this case, the cluster that v belongs to is in A, but has not grown to size 2M. Recall M = O(log n/∆(f+, f−)). According to Algorithm 1, v may need to be queried with each cluster in A, and according to Algorithm 2, there can be at most c log n queries for each cluster in A.
3. Case 3. r ∈ (2M, |C|]: the number of queries is at most c log n · log n. In this case, according to Algorithm 1, v may need to be queried with at most ⌈log n⌉ clusters in A, and according to Algorithm 2, there can be at most c log n queries for each chosen cluster in A.
Hence, the total number of queries per cluster is at most O(kc²(log n)² + (2M − c log n)kc log n + (|C| − 2M)c(log n)²). So, over all the clusters, the query complexity is O(nc(log n)² + k²Mc log n). Note that if we had instead insisted on a Monte Carlo algorithm with known f+ and f−, then the query complexity would have been O(k²Mc log n). Recall that ∆(p, 1 − p) = O(λ²).
Theorem (6). Let f+ and f− be pmfs with mini f+(i), mini f−(i) ≥ ε for a constant ε. With side information and a faulty oracle with error probability 1/2 − λ, there exist an algorithm for Crowd-Cluster with query complexity O(k² log n/(λ²∆(f+, f−))) with known f+ and f−, and an algorithm with expected query complexity O(n + k² log n/(λ²∆(f+, f−))) even when f+ and f− are unknown, that recover Ĝ, the ML estimate of G with all C(n, 2) queries, with high probability.

7 Round Complexity
So far we have discussed developing algorithms for Crowd-Cluster where queries are asked adaptively, one by one. To use the crowd workers in the most efficient way, it is also important to incorporate as much parallelism as possible without affecting the query complexity by much. To formalize this, we allow at most Θ(n log n) queries to be asked simultaneously in a round, and the goal is then to minimize the number of rounds needed to recover the clusters. We show that the algorithms developed for optimizing query complexity naturally extend to this parallel setting of minimizing the round complexity.
7.1 Crowd-Cluster with Perfect Oracle
When the crowd gives correct answers and there is no side information, it is easy to get a round complexity of k, which is optimal within a log n factor since Ω(nk) is a lower bound on the query complexity in this case. One can just pick a vertex v, and then for every other vertex issue a query involving v. This grows the cluster containing v completely. Thus in every round, one new cluster gets formed fully, resulting in a round complexity of k.
We now explain the main steps of our algorithm when side information W is available.
1. Sample √(n log n) vertices, and ask all possible C(√(n log n), 2) queries involving them.
2. Suppose C1, C2, ..., Cl are the clusters formed so far. Arrange these clusters in nondecreasing order of their current sizes. For every vertex v not yet clustered, choose the cluster Cj with j = arg max_{i∈[1,l]} Membership(v, Ci), and select at most ⌈log n⌉ clusters using steps (11) and (13) of Algorithm 1. Issue all of these at most n log n queries simultaneously, and based on the results, grow the clusters C1, C2, ..., Cl.
3. Among the vertices that have not been put into any cluster, pick √(n log n) vertices uniformly at random, and ask all possible C(√(n log n), 2) queries involving them. Create clusters C′1, C′2, ..., C′_{l′} based on the query results.
4. Merge the clusters C′1, C′2, ..., C′_{l′} with C1, C2, ..., Cl by issuing a total of ll′ queries in ⌈ll′/(n log n)⌉ rounds (a simple batching sketch follows this list). Go to step 2.
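As a small illustration of the batching in step 4 (our own sketch, not from the paper), adaptively generated pairwise queries can be packed into rounds of at most n log n parallel queries:

import math

def batch_into_rounds(pending_queries, n):
    """Split a list of (u, v) query pairs into rounds of at most n*log(n) parallel queries."""
    cap = max(1, math.floor(n * math.log(n)))
    return [pending_queries[i:i + cap] for i in range(0, len(pending_queries), cap)]

# Merging l' freshly formed clusters with l existing ones needs l * l' queries,
# i.e. ceil(l * l' / (n log n)) rounds, exactly as counted in step 4 above.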
Analysis. First, the algorithm computes the clusters correctly. Every vertex that is included in a cluster is included based on a query result. Moreover, no clusters formed in step 2 can be merged. So all the clusters returned are correct.
We now analyze the number of rounds required to compute the clusters.
Algorithm 3 Crowd-Cluster with Error & Side Information. Input: {V, W }
1:
2:
3:
4:
5:
6:
7:
8:
9:
10:
11:
12:
13:
14:
15:
16:
17:
18:
19:
20:
21:
22:
23:
24:
25:
26:
27:
28:
29:
30:
31:
32:
33:
34:
35:
36:
37:
38:
39:
40:
41:
42:
V ′ = ∅, E ′ = ∅, G′ = (V ′ , E ′ ), A = ∅
while V 6= ∅ do
If A is empty, then pick an arbitrary vertex v and Go to Step 23
⊲ Let the number of current clusters in A be l ≥ 1
Order the existing clusters in A in nonincreasing size of current membership.
⊲ Let |C1 | ≥ |C2 | ≥ . . . ≥ |Cl | be the ordering (w.l.o.g).
for j = 1 to l do
If ∃v ∈ V such that j = maxi∈[1,l] Membership(v, Ci ), then select v and Break;
end for
Select u1 , u2 , .., ul ∈ Cj , where l = c log n, distinct members from Cj and obtain Op (ui , v),
i = 1, 2, .., l. checked(v, j) = true
if the majority of these queries return +1 then
Include v in Cj . V = V \ v
else
⊲ logarithmic search for membership in the large groups. Note s ≤ ⌈log k⌉
Group C1 , C2 , ..., Cj−1 into s consecutive classes H1 , H2 , ..., Hs such that the clusters
1 | |C1 |
, 2i )
in group Hi have their current sizes in the range [ 2|Ci−1
for i = 1 to s do
j = maxa:Ca ∈Hi Membership(v, Ca )
Select u1 , u2 , .., ul ∈ Cj , where l = c log n, distinct members from Cj and obtain
Op (ui , v), i = 1, 2, .., l. checked(v, j) = true.
if the majority of these queries return +1 then
Include v in Cj . V = V \ v. Break.
end if
end for
⊲ exhaustive search for membership in the remaining groups in A
if v ∈ V then
for i = 1 to l + 1 do
if i = l + 1 then
⊲ v does not belong to any of the existing clusters
Add v to V ′ . Set V = V \ v
For every u ∈ V ′ \ v, obtain Op (v, u). Add an edge (v, u) to E ′ (G′ ) with
weight ω(u, v) = +1 if Op (v, u) == +1, else with ω(u, v) = −1
Find the heaviest weight subgraph S in G′ . If |S| ≥ c log n, then add S to
the list of clusters in A, and remove the incident vertices and edges on S from V ′ , E ′ .
P
while ∃z ∈ V ′ with u∈S ω(z, u) > 0 do
Include z in S and remove z and all edges incident on it from V ′ , E ′ .
end while
Break;
else
if checked(v, i) 6= true then
Select u1 , u2 , .., ul ∈ Cj , where l = c log n, distinct members from Cj
and Op (ui , v), i = 1, 2, .., l. checked(v, i) = true.
if the majority of these queries return +1 then
Include v in Cj . V = V \ v. Break.
end if
end if
end if
end for
end if
end if
end while
return all the clusters formed in A and the ML estimates from G′
In one iteration of the algorithm (steps 1 to 4), steps 1 to 3 each require one round and issue at most n log n queries. Step 4 requires at most min(k², k√(n log n))/(n log n) rounds and issues at most min(k², k√(n log n)) queries. This is because l′ ≤ √(n log n).
In step 2, if |Ci| ≥ 2M (recall M = O(log n/∆(f+‖f−))), for i ∈ [1, l], then, at the end of that step, Ci will be fully grown with high probability from the analysis of Algorithm 1. This happens since with high probability any vertex that belongs to Ci has been queried with some u already in Ci. However, since we do not know M, we cannot identify whether Ci has grown fully.
Consider the case when steps 1 and 3 have picked 6kM random vertices. Consider all those clusters that have size at least n/(2k). Note that by the Markov inequality, at least n/2 vertices are contained in clusters of size at least n/(2k).
If we choose these 6kM vertices with replacement, then in expectation the number of members chosen from each cluster of size n/(2k) is 3M, and with high probability above 2M. The same concentration bound holds even though here sampling is done without replacement (Lemma 5).
Therefore, after 6kM vertices have been chosen, and step 2 has been performed, at least n/2 vertices get clustered and removed.
The number of iterations required to get 6kM random vertices is ⌈6kM/√(n log n)⌉. If k² ≥ n, then the number of rounds required in each iteration is 2 + ⌈k/√(n log n)⌉. So the total number of rounds required to get 6kM vertices is O(k²M/(n log n)). And, finally, to get all the vertices clustered, the number of rounds required will be O(k²M/n), whereas the optimum round complexity could be O(k²M/(n log n)).
If k² < n, then the number of rounds in each iteration is at most 3. Hence the total number of iterations is at most 3 + 6kM/√(n log n). If kM ≤ √(n log n), then the number of rounds required is O(1). Else, we have kM > √(n log n) and k < √n. While our algorithm requires O(kM/√(n log n)) rounds, we know the optimum round complexity is at least O(k²M/(n log n)). Overall, the gap may be at most O(√(n log n)/k) = O(M) = O(log n/∆(f+‖f−)).
This leads to Theorem 7.

Theorem (7). Perfect Oracle with Side Information. There exists an algorithm for Crowd-Cluster with perfect oracle and unknown side information f+ and f− such that it achieves a round complexity within an Õ(1) factor of the optimum when k = Ω(√n) or k = O(√(n/∆(f+‖f−))), and otherwise within an Õ(∆(f+‖f−)) factor.
7.2 Crowd-Cluster with Faulty Oracle

We now move to the case of Crowd-Cluster with a faulty oracle. We obtain an algorithm with close to optimal round complexity when no side information is provided. By combining this algorithm with the one in the previous section, one can easily obtain an algorithm for Crowd-Cluster with faulty oracle and side information; this is left as an exercise to the reader.
We now give the algorithm for the case when the crowd may return an erroneous answer with probability p = 1/2 − λ (known), and there is no side information.
1. Sample √(n log n) vertices uniformly at random, and ask all possible (√(n log n) choose 2) queries involving them to form a subgraph G′′ = (V′′, E′′).

2. Extract the highest weighted subgraph S from G′′ after setting a weight of +1 for every positive answer and −1 for every negative answer, as in Algorithm 2. If |S| ≥ c log n, where c is set as in Algorithm 2, then for every vertex not yet clustered issue c log n queries to distinct vertices in S simultaneously in at most c rounds. Grow S by including any vertex for which the majority of those queries returned +1. Repeat step 2 as long as the extracted subgraph has size at least c log n; otherwise move to step 3, as long as not all vertices have been clustered or included in G′′.

3. Among the vertices that have not been clustered yet, pick r vertices Sr uniformly at random, and ask all possible (r choose 2) + r|V′′| queries among Sr and across Sr and V′′, where r is chosen such that the total number of queries is at most n log n. Goto step 2.
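A minimal sketch of the majority-vote growth used in step 2 is given below. It is illustrative only: `noisy_query` stands in for the crowd oracle, vertices are assumed to be integer identifiers, and representatives are drawn from the current cluster rather than issued as one simultaneous batch.

```python
import math
import random

def grow_by_majority(S, unclustered, noisy_query, c, n):
    """Grow cluster S: each unclustered vertex queries c*log(n) distinct
    members of S and joins iff the majority of answers is +1."""
    budget = max(1, int(c * math.log(max(n, 2))))
    grown = set(S)
    for v in list(unclustered):
        reps = random.sample(sorted(grown), min(budget, len(grown)))
        votes = sum(1 if noisy_query(v, u) == +1 else -1 for u in reps)
        if votes > 0:                 # majority of answers says "same cluster"
            grown.add(v)
            unclustered.discard(v)
    return grown
```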
Analysis. By the analysis (Lemma 12) of Algorithm 2, the extracted subgraph S will have size ≥ c log n iff G′′ contains a subcluster of the original G of size O(c log n). Moreover, by Lemma 13, once S is detected, S will be fully grown at the end of that step, that is, within the next c rounds.
Now, by the same analysis as in the previous Section 7.1, once we choose 4kc log n vertices, and thus query 16k²c² log n edges in ⌈16k²c²/n⌉ rounds, then with high probability each cluster with size at least n/(2k) will have c log n representatives in G′′ and will be fully grown. We are then left with at most n/2 vertices and can apply the argument recursively. Thus the round complexity is O(⌈16k²c²/n⌉ log n + kc), where the second term comes from using at most c rounds for growing cluster S in step 2.
If kc ≥ √n/(4√(log n)), then we pick n² edges in at most n/log n rounds, and the optimum algorithm has round complexity at least Θ(kc/log n) = Θ(√n/(4√(log n)·log n)). So we are within a √(log n) factor of the optimum.
If kc ≤ √n/(4√(log n)), but kc ≥ √(log n), then the round complexity of our algorithm is O(kc√(log n) + log n) = O(kc√(log n)), again within a √(log n)·log n factor of the optimum.
If kc ≤ √(log n), then in the first round, all the clusters that have size at least c√n will have enough representatives, and will be fully grown at the end of step 2. After that each cluster will have at most c√n vertices. Hence, a total of at most kc√n ≤ √(n log n) vertices will remain to be clustered. Thus the total number of rounds required will be O(kc), within a log n factor of the optimum.
Recalling that c = O(1/λ²) = O(1/∆(p‖(1−p))), we get Theorem 8.

Theorem (8). Faulty Oracle with no Side Information. There exists an algorithm for Crowd-Cluster with a faulty oracle with error probability 1/2 − λ and no side information such that it achieves a round complexity within an Õ(√(log n)) factor of the optimum that recovers Ĝ, the ML estimate of G with all (n choose 2) queries, with high probability.
This also gives a new parallel algorithm for correlation clustering over noisy input where in
each round n log n work is allowed.
A parameterized complexity view on non-preemptively scheduling
interval-constrained jobs: few machines, small looseness, and small slack
arXiv:1508.01657v2 [] 24 Mar 2016
René van Bevern · Rolf Niedermeier · Ondřej Suchý
Submitted: August 7, 2015
Accepted: March 23, 2016
Abstract We study the problem of non-preemptively scheduling n jobs, each job j with a release time t j , a deadline d j ,
and a processing time p j , on m parallel identical machines.
Cieliebak et al (2004) considered the two constraints |d j −
t j | ≤ λ p j and |d j − t j | ≤ p j + σ and showed the problem to
be NP-hard for any λ > 1 and for any σ ≥ 2. We complement their results by parameterized complexity studies: we
show that, for any λ > 1, the problem remains weakly NP-hard even for m = 2 and strongly W[1]-hard parameterized
by m. We present a pseudo-polynomial-time algorithm for
constant m and λ and a fixed-parameter tractability result for
the parameter m combined with σ .
Keywords release times and deadlines · machine minimization · sequencing within intervals · shiftable intervals ·
fixed-parameter tractability · NP-hard problem
René van Bevern is supported by grant 16-31-60007 mol_a_dk of the Russian Foundation for Basic Research (RFBR). Ondřej Suchý is supported by grant 14-13017P of the Czech Science Foundation.

René van Bevern, Novosibirsk State University, Novosibirsk, Russian Federation. E-mail: [email protected]
Rolf Niedermeier, Institut für Softwaretechnik und Theoretische Informatik, TU Berlin, Germany. E-mail: [email protected]
Ondřej Suchý, Faculty of Information Technology, Czech Technical University in Prague, Prague, Czech Republic. E-mail: [email protected]

1 Introduction

Non-preemptively scheduling jobs with release times and deadlines on a minimum number of machines is a well-studied problem both in offline and online variants (Chen et al 2016; Chuzhoy et al 2004; Cieliebak et al 2004; Malucelli and Nicoloso 2007; Saha 2013). In its decision version, the problem is formally defined as follows:
INTERVAL-CONSTRAINED SCHEDULING
Input: A set J := {1, . . . , n} of jobs, a number m ∈ N of machines, each job j with a release time t_j ∈ N, a deadline d_j ∈ N, and a processing time p_j ∈ N.
Question: Is there a schedule that schedules all jobs onto m parallel identical machines such that
1. each job j is executed non-preemptively for p_j time units,
2. each machine executes at most one job at a time, and
3. each job j starts no earlier than t_j and is finished by d_j?

For a job j ∈ J, we call the half-open interval [t_j, d_j) its time window. A job may only be executed during its time window. The length of the time window is d_j − t_j.
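To make conditions 1–3 concrete, here is a small Python sketch (not part of the paper; the Job layout and function names are our own) that checks whether a candidate assignment of machines and start times is a feasible schedule.

```python
from collections import namedtuple

Job = namedtuple("Job", ["release", "deadline", "proc"])  # t_j, d_j, p_j

def is_feasible(jobs, schedule, m):
    """schedule[j] = (machine, start) for every job index j."""
    busy = {i: [] for i in range(m)}              # execution intervals per machine
    for j, job in enumerate(jobs):
        machine, start = schedule[j]
        end = start + job.proc                    # non-preemptive execution
        if not (0 <= machine < m):
            return False
        if start < job.release or end > job.deadline:
            return False                          # must fit into [t_j, d_j)
        busy[machine].append((start, end))
    for intervals in busy.values():               # one job at a time per machine
        intervals.sort()
        for (s1, e1), (s2, e2) in zip(intervals, intervals[1:]):
            if s2 < e1:
                return False
    return True

# Example: two machines, three unit jobs, all with time window [0, 2).
jobs = [Job(0, 2, 1), Job(0, 2, 1), Job(0, 2, 1)]
print(is_feasible(jobs, {0: (0, 0), 1: (1, 0), 2: (0, 1)}, m=2))  # True
```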
We study INTERVAL-CONSTRAINED SCHEDULING with two additional constraints introduced by Cieliebak et al (2004). These constraints relate the time window lengths of jobs to their processing times:

Looseness If all jobs j ∈ J satisfy |d_j − t_j| ≤ λ p_j for some number λ ∈ R, then the instance has looseness λ. By λ-LOOSE INTERVAL-CONSTRAINED SCHEDULING we denote the problem restricted to instances of looseness λ.

Slack If all jobs j ∈ J satisfy |d_j − t_j| ≤ p_j + σ for some number σ ∈ R, then the instance has slack σ. By σ-SLACK INTERVAL-CONSTRAINED SCHEDULING we denote the problem restricted to instances of slack σ.
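The looseness and slack of a concrete instance are straightforward to compute; the snippet below is illustrative only and assumes jobs are given as (release, deadline, processing time) triples with positive processing times.

```python
def looseness_and_slack(jobs):
    """jobs: list of (release, deadline, proc) triples.
    Returns the smallest lambda and sigma such that every job j satisfies
    d_j - t_j <= lambda * p_j and d_j - t_j <= p_j + sigma."""
    lam = max((d - t) / p for (t, d, p) in jobs)
    sig = max((d - t) - p for (t, d, p) in jobs)
    return lam, sig

# One job with window length 3 and processing time 2, one with window length 2:
print(looseness_and_slack([(0, 3, 2), (1, 3, 2)]))  # (1.5, 1)
```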
Both constraints on I NTERVAL -C ONSTRAINED S CHEDUL ING are very natural: clients may accept some small deviation
of at most σ from the desired start times of their jobs. Moreover, it is conceivable that clients allow for a larger deviation
for jobs that take long to process anyway, leading to the case
of bounded looseness λ .
Cieliebak et al (2004) showed that, even for constant λ >
1 and constant σ ≥ 2, the problems λ -L OOSE I NTERVAL C ONSTRAINED S CHEDULING and σ -S LACK I NTERVAL C ONSTRAINED S CHEDULING are strongly NP-hard.
Instead of giving up on finding optimal solutions and
resorting to approximation algorithms (Chuzhoy et al 2004;
Cieliebak et al 2004), we conduct a more fine-grained complexity analysis of these problems employing the framework of parameterized complexity theory (Cygan et al 2015;
Downey and Fellows 2013; Flum and Grohe 2006; Niedermeier 2006), which so far received comparatively little attention in the field of scheduling with seemingly only a handful
of publications (van Bevern et al 2015a,b; Bodlaender and
Fellows 1995; Cieliebak et al 2004; Fellows and McCartin
2003; Halldórsson and Karlsson 2006; Hermelin et al 2015;
Mnich and Wiese 2015). In particular, we investigate the
effect of the parameter m of available machines on the parameterized complexity of interval-constrained scheduling
without preemption.
Related work I NTERVAL -C ONSTRAINED S CHEDULING is
a classical scheduling problem and strongly NP-hard already
on one machine (Garey and Johnson 1979, problem SS1).
Besides the task of scheduling all jobs on a minimum number
of machines, the literature contains a wide body of work
concerning the maximization of the number of scheduled
jobs on a bounded number of machines (Kolen et al 2007).
For the objective of minimizing the number of machines, Chuzhoy et al (2004) developed a factor-O(√(log n/ log log n))-approximation algorithm. Malucelli and Nicoloso (2007) formalized machine minimization and other objectives in terms
of optimization problems in shiftable interval graphs. Online
algorithms for minimizing the number of machines have been
studied as well and we refer to recent work by Chen et al
(2016) for an overview.
Our work refines the following results of Cieliebak et al
(2004), who considered I NTERVAL -C ONSTRAINED S CHED ULING with bounds on the looseness and the slack. They
showed that I NTERVAL -C ONSTRAINED S CHEDULING is
strongly NP-hard for any looseness λ > 1 and any slack σ ≥
2. Besides giving approximation algorithms for various special cases, they give a polynomial-time algorithm for σ = 1
and a fixed-parameter tractability result for the combined
parameter σ and h, where h is the maximum number of time
windows overlapping in any point in time.
Our contributions We analyze the parameterized complexity
of I NTERVAL -C ONSTRAINED S CHEDULING with respect
to three parameters: the number m of machines, the looseness λ , and the slack σ . More specifically, we refine known
results of Cieliebak et al (2004) using tools of parameterized
complexity analysis. An overview is given in Table 1.1.
In Section 3, we show that, for any λ > 1, λ -L OOSE
I NTERVAL -C ONSTRAINED S CHEDULING remains weakly
NP-hard even on m = 2 machines and that it is strongly W[1]hard when parameterized by the number m of machines. In
Section 4, we give a pseudo-polynomial-time algorithm for λ L OOSE I NTERVAL -C ONSTRAINED S CHEDULING for each
fixed λ and m. Finally, in Section 5, we give a fixed-parameter algorithm for σ -S LACK I NTERVAL -C ONSTRAINED
S CHEDULING when parameterized by m and σ . This is in
contrast to our result from Section 3 that the parameter combination m and λ presumably does not give fixed-parameter
tractability results for λ -L OOSE I NTERVAL -C ONSTRAINED
S CHEDULING.
2 Preliminaries
Basic notation We assume that 0 ∈ N. For two vectors u = (u_1, . . . , u_k) and v = (v_1, . . . , v_k), we write u ≤ v if u_i ≤ v_i for all i ∈ {1, . . . , k}. Moreover, we write u ⪇ v if u ≤ v and u ≠ v, that is, u and v differ in at least one component. Finally, 1_k is the k-dimensional vector consisting of k 1-entries.
Computational complexity We assume familiarity with the
basic concepts of NP-hardness and polynomial-time manyone reductions (Garey and Johnson 1979). We say that a
problem is (strongly) C-hard for some complexity class C if it
is C-hard even if all integers in the input instance are bounded
from above by a polynomial in the input size. Otherwise, we
call it weakly C-hard.
In the following, we introduce the basic concepts of parameterized complexity theory, which are in more detail
discussed in corresponding text books (Cygan et al 2015;
Downey and Fellows 2013; Flum and Grohe 2006; Niedermeier 2006).
Fixed-parameter algorithms The idea in fixed-parameter algorithmics is to accept exponential running times, which are
seemingly inevitable in solving NP-hard problems, but to
restrict them to one aspect of the problem, the parameter.
Thus, formally, an instance of a parameterized problem Π is a pair (x, k) consisting of the input x and the
parameter k. A parameterized problem Π is fixed-parameter tractable (FPT) with respect to a parameter k if there
is an algorithm solving any instance of Π with size n in
f (k) · poly(n) time for some computable function f . Such
an algorithm is called a fixed-parameter algorithm. It is potentially efficient for small values of k, in contrast to an
algorithm that is merely running in polynomial time for each
fixed k (thus allowing the degree of the polynomial to depend on k). FPT is the complexity class of fixed-parameter
tractable parameterized problems.
Table 1.1 Overview of results on INTERVAL-CONSTRAINED SCHEDULING for various parameter combinations. The parameterized complexity with respect to the combined parameter λ + σ remains open.
- looseness λ alone: NP-hard for any λ > 1 (Cieliebak et al 2004)
- slack σ alone: NP-hard for any σ ≥ 2 (Cieliebak et al 2004)
- number m of machines alone: NP-hard for m = 1 (Garey and Johnson 1979)
- λ combined with σ: open (?)
- λ combined with m: W[1]-hard for parameter m for any λ > 1 (Theorem 3.1); weakly NP-hard for m = 2 and any λ > 1 (Theorem 3.1); pseudo-polynomial time for fixed m and λ (Theorem 4.1)
- σ combined with m: fixed-parameter tractable for parameter σ + m (Theorem 5.1)
We refer to the sum of parameters k1 +k2 as the combined
parameter k1 and k2 .
Parameterized intractability To show that a problem is presumably not fixed-parameter tractable, there is a parameterized analog of NP-hardness theory. The parameterized analog
of NP is the complexity class W[1] ⊇ FPT, where it is conjectured that FPT 6= W[1]. A parameterized problem Π with
parameter k is called W[1]-hard if Π being fixed-parameter
tractable implies W[1] = FPT. W[1]-hardness can be shown
using a parameterized reduction from a known W[1]-hard
problem: a parameterized reduction from a parameterized
problem Π1 to a parameterized problem Π2 is an algorithm
mapping an instance I with parameter k to an instance I′ with parameter k′ in time f(k) · poly(|I|) such that k′ ≤ g(k) and I is a yes-instance for Π1 if and only if I′ is a yes-instance for Π2, where f and g are arbitrary computable functions.
3 A strengthened hardness result
In this section, we strengthen a hardness result of Cieliebak
et al (2004), who showed that λ -L OOSE I NTERVAL -C ON STRAINED S CHEDULING is NP-hard for any λ > 1. This
section proves the following theorem:
Theorem 3.1 Let λ : N → R be such that λ(n) ≥ 1 + n^(−c) for some integer c ≥ 1 and all n ≥ 2. Then λ(n)-LOOSE INTERVAL-CONSTRAINED SCHEDULING of n jobs on m machines is
(i) weakly NP-hard for m = 2, and
(ii) strongly W[1]-hard for parameter m.
Note that Theorem 3.1, in particular, holds for any constant
function λ (n) > 1.
We remark that Theorem 3.1 cannot be proved using the NP-hardness reduction given by Cieliebak et al (2004), which reduces 3-SAT instances with k clauses to INTERVAL-CONSTRAINED SCHEDULING instances with m = 3k machines. Since 3-SAT is trivially fixed-parameter tractable for the parameter number k of clauses, the reduction of Cieliebak et al (2004) cannot yield Theorem 3.1.
Instead, to prove Theorem 3.1, we give a parameterized
polynomial-time many-one reduction from B IN PACKING
with m bins and n items to λ (mn)-L OOSE I NTERVAL -C ON STRAINED S CHEDULING with m machines and mn jobs.
BIN PACKING
Input: A bin volume V ∈ N, a list a_1, . . . , a_n ∈ N of items, and a number m ≤ n of bins.
Question: Is there a partition S_1 ⊎ · · · ⊎ S_m = {1, . . . , n} such that Σ_{i∈S_k} a_i ≤ V for all 1 ≤ k ≤ m?
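For illustration (not from the paper), a candidate partition can be checked against the BIN PACKING constraints as follows; the list-of-lists encoding of the partition is our own convention.

```python
def is_valid_packing(items, volume, bins):
    """bins: list of lists of item indices, intended to partition range(len(items))."""
    indices = [i for b in bins for i in b]
    if sorted(indices) != list(range(len(items))):
        return False                                   # not a partition S_1 ⊎ ... ⊎ S_m
    return all(sum(items[i] for i in b) <= volume for b in bins)

# The example instance used later in Fig. 3.1: items 1, 2, 2, 3 and volume 3.
print(is_valid_packing([1, 2, 2, 3], 3, [[0, 2], [1], [3]]))  # True
```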
Since B IN PACKING is weakly NP-hard for m = 2 bins and
W[1]-hard parameterized by m even if all input numbers are
polynomial in n (Jansen et al 2013), Theorem 3.1 will follow.
Our reduction, intuitively, works as follows: for each
of the n items ai in a B IN PACKING instance with m bins
of volume V , we create a set Ji := { ji1 , . . . , jim } of m jobs
that have to be scheduled on m mutually distinct machines.
Each machine represents one of the m bins in the B IN PACK ING instance. Scheduling job ji1 on a machine k corresponds
to putting item ai into bin k and will take B + ai time of
machine k, where B is some large integer chosen by the
reduction. If ji1 is not scheduled on machine k, then a job
in Ji \ { ji1 } has to be scheduled on machine k, which will
take only B time of machine k. Finally, we choose the latest
deadline of any job as nB +V . Thus, since all jobs have to
be finished by time nB +V and since there are n items, for
each machine k, the items ai for which ji1 is scheduled on
machine k must sum up to at most V in a feasible schedule.
This corresponds to satisfying the capacity constraint of V of
each bin.
Formally, the reduction works as follows and is illustrated
in Figure 3.1.
Construction 3.2 Given a B IN PACKING instance I with
n ≥ 2 items a1 , . . . , an and m ≤ n bins, and λ : N → R such
that λ (n) ≥ 1 + n−c for some integer c ≥ 1 and all n ≥ 2,
we construct an I NTERVAL -C ONSTRAINED S CHEDULING
Fig. 3.1 Reduction from BIN PACKING with four items a_1 = 1, a_2 = a_3 = 2, a_4 = 3, bin volume V = 3, and m = 3 bins to 3/2-LOOSE INTERVAL-CONSTRAINED SCHEDULING. That is, Construction 3.2 applies with c = 1, A = 8, and B = 3 · 4 · 8 = 96. The top diagram shows (not to scale) the jobs created by Construction 3.2. Herein, the processing time of each job is drawn as a rectangle of corresponding length in an interval being the job's time window. The bottom diagram shows a feasible schedule for three machines M_1, M_2, and M_3 that corresponds to putting items a_1 and a_3 into the first bin, item a_2 into the second bin, and a_4 into the third bin.
instance with m machines and mn jobs as follows. First, let

A := Σ_{i=1}^{n} a_i  and  B := (mn)^c · A ≥ 2A.

If V > A, then I is a yes-instance of BIN PACKING and we return a trivial yes-instance of INTERVAL-CONSTRAINED SCHEDULING.
Otherwise, we have V ≤ A and construct an instance of INTERVAL-CONSTRAINED SCHEDULING as follows: for each i ∈ {1, . . . , n}, we introduce a set J_i := {j_i^1, . . . , j_i^m} of jobs. For each job j ∈ J_i, we choose the release time

t_j := (i − 1)B,

the processing time

p_j := B + a_i if j = j_i^1,  and  p_j := B if j ≠ j_i^1,     (3.1)

and the deadline

d_j := iB + A if i < n,  and  d_j := iB + V if i = n.

This concludes the construction.

Remark 3.3 Construction 3.2 outputs an INTERVAL-CONSTRAINED SCHEDULING instance with agreeable deadlines, that is, the deadlines of the jobs have the same relative order as their release times. Thus, in the offline scenario, all hardness results of Theorem 3.1 will also hold for instances with agreeable deadlines.
In contrast, agreeable deadlines make the problem significantly easier in the online scenario: Chen et al (2016) showed an online algorithm with constant competitive ratio for INTERVAL-CONSTRAINED SCHEDULING with agreeable deadlines, whereas there is a lower bound of n on the competitive ratio for general instances (Saha 2013).

In the remainder of this section, we show that Construction 3.2 is correct and satisfies all structural properties that allow us to derive Theorem 3.1.
First, we show that Construction 3.2 indeed creates an INTERVAL-CONSTRAINED SCHEDULING instance with small looseness.

Lemma 3.4 Given a BIN PACKING instance with n ≥ 2 items and m bins, Construction 3.2 outputs an INTERVAL-CONSTRAINED SCHEDULING instance with
(i) at most m machines and mn jobs and
(ii) looseness λ(mn).

Proof It is obvious that the output instance has at most mn jobs and m machines and, thus, (i) holds.
Towards (ii), observe that mn ≥ n ≥ 2, and hence, for each i ∈ {1, . . . , n} and each job j ∈ J_i, (3.1) yields

|d_j − t_j| / p_j ≤ ((iB + A) − (i − 1)B) / B = (B + A)/B = 1 + A/B = 1 + A/((mn)^c · A) = 1 + (mn)^(−c) ≤ λ(mn). □
We now show that Construction 3.2 runs in polynomial time
and that, if the input B IN PACKING instance has polynomially
bounded integers, then so has the output I NTERVAL -C ON STRAINED S CHEDULING instance.
Lemma 3.5 Let I be a B IN PACKING instance with n ≥
2 items a1 , . . . , an and let amax := max1≤i≤n ai . Construction 3.2 applied to I
(i) runs in time polynomial in |I| and
(ii) outputs an I NTERVAL -C ONSTRAINED S CHEDULING
instance whose release times and deadlines are bounded
by a polynomial in n + amax .
Proof We first show (ii), thereafter we show (i).
(ii) It is sufficient to show that the numbers A and B
in Construction 3.2 are bounded polynomially in n + amax
since all release times and deadlines are computed as sums
and products of three numbers not larger than A, B, or n.
Clearly, A = ∑ni=1 ai ≤ n · max1≤i≤n ai , which is polynomially
bounded in n + amax . Since mn ≤ n2 , also B = (mn)c · A is
polynomially bounded in n + amax .
(i) The sum A = Σ_{i=1}^{n} a_i is clearly computable in time
polynomial in the input length. It follows that also B = (mn)c ·
A is computable in polynomial time.
t
u
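To make the reduction concrete, the following Python sketch (ours, not the authors') generates the jobs of Construction 3.2; the (release, deadline, processing time) tuple layout is an assumption of the sketch, and the numbers follow (3.1).

```python
def construction_3_2(items, volume, m, c=1):
    """Sketch of Construction 3.2: map a BIN PACKING instance (items,
    bin volume, m bins) to m*n jobs as (release, deadline, processing time)."""
    n = len(items)
    A = sum(items)
    if volume > A:
        return []                      # V > A: a trivial yes-instance suffices
    B = (m * n) ** c * A               # B := (mn)^c * A >= 2A
    jobs = []
    for i, a_i in enumerate(items, start=1):
        release = (i - 1) * B
        deadline = i * B + (A if i < n else volume)
        for r in range(m):
            proc = B + a_i if r == 0 else B   # job j_i^1 is longer by a_i
            jobs.append((release, deadline, proc))
    return jobs

# The instance of Fig. 3.1: items (1, 2, 2, 3), bin volume 3, three bins.
print(len(construction_3_2((1, 2, 2, 3), volume=3, m=3)))   # 12 jobs
```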
It is easy to verify that this is indeed a feasible schedule.
(⇐) Assume that I 0 is a yes-instance for I NTERVAL -C ON STRAINED S CHEDULING. Then, there is a feasible schedule
for I 0 . We define a partition S1 ] · · · ] Sm = {1, . . . , n} for I
as follows. For each k ∈ {1, . . . , m}, let
Sk := {i ∈ {1, . . . , n} | ji1 is scheduled on machine k}. (3.2)
Since, for each i ∈ {1, . . . , n}, the job ji1 is scheduled on
exactly one machine, this is indeed a partition. We show
that ∑i∈Sk ai ≤ V for each k ∈ {1, . . . , m}. Assume, towards a
contradiction, that there is a k such that
∑ ai > V.
(3.3)
i∈Sk
By (3.1), for each i ∈ {1, . . . , n}, the jobs in Ji have the same
release time, each has processing time at least B, and the
length of the time window of each job is at most B + A ≤
B + B/2 < 2B. Thus, in any feasible schedule, the execution
times of the m jobs in Ji mutually intersect. Hence, the jobs
in Ji are scheduled on m mutually distinct machines. By the
pigeonhole principle, for each i ∈ {1, . . . , n}, exactly one
job ji∗ ∈ Ji is scheduled on machine k. We finish the proof by
showing that,
∀i ∈ {1, . . . , n}, job ji∗ is not finished before
time iB + ∑ a j .
(3.4)
j∈Sk , j≤i
This claim together with (3.3) then yields that job jn∗ is not
finished before
nB + ∑ a j = nB + ∑ a j > nB +V,
It remains to prove that Construction 3.2 maps yes-instances
j∈Sk , j≤n
j∈Sk
of B IN PACKING to yes-instances of I NTERVAL -C ONSTRAINED
which contradicts the schedule being feasible, since jobs in Jn
S CHEDULING, and no-instances to no-instances.
have deadline nB +V by (3.1). It remains to prove (3.4). We
Lemma 3.6 Given a B IN PACKING instance I with m bins
proceed by induction.
and the items a1 , . . . , an , Construction 3.2 outputs an I N The earliest possible execution time of j1∗ is, by (3.1),
0
TERVAL -C ONSTRAINED S CHEDULING instance I that is a
time 0. The processing time of j1∗ is B if j1∗ 6= j11 , and B + a1
yes-instance if and only if I is.
otherwise. By (3.2), 1 ∈ Sk if and only if j11 is scheduled
on machine k, that is, if and only if j1∗ = j11 . Thus, job j1∗ is
Proof (⇒) Assume that I is a yes-instance for B IN PACKING.
not finished before B + ∑ j∈Sk , j≤i a j and (3.4) holds for i = 1.
Then, there is a partition S1 ] · · · ] Sm = {1, . . . , n} such that
Now, assume that (3.4) holds for i−1. We prove it for i. Since
∑i∈Sk ai ≤ V for each k ∈ {1, . . . , m}. We construct a feasible
∗
ji−1
is not finished before (i − 1)B + ∑ j∈Sk , j≤i−1 a j , this is
schedule for I 0 as follows. For each i ∈ {1, . . . , n} and k such
the
earliest
possible execution time of ji∗ . The processing
that i ∈ Sk , we schedule ji1 on machine k in the interval
∗
time of ji is B if ji∗ 6= ji1 and B + ai otherwise. By (3.2),
"
!
i ∈ Sk if and only if ji∗ = ji1 . Thus, job ji∗ is not finished
(i − 1)B + ∑ a j , iB + ∑ a j + ai
before iB + ∑ j∈Sk , j≤i a j and (3.4) holds.
t
u
j∈Sk , j<i
j∈Sk , j<i
and each of the m − 1 jobs Ji \ { ji1 } on a distinct machine ` ∈
{1, . . . , m} \ {k} in the interval
"
!
(i − 1)B + ∑ a j
j∈S` , j<i
,
iB + ∑ a j .
j∈S` , j<i
We are now ready to finish the proof of Theorem 3.1.
Proof (of Theorem 3.1) By Lemmas 3.4 to 3.6, Construction 3.2 is a polynomial-time many-one reduction from B IN
PACKING with n ≥ 2 items and m bins to λ (mn)-L OOSE I N TERVAL -C ONSTRAINED S CHEDULING , where λ : N → R
such that λ (n) ≥ 1 + n−c for some integer c ≥ 1 and all n ≥ 2.
We now show the points (i) and (ii) of Theorem 3.1.
(i) follows since B IN PACKING is weakly NP-hard for m =
2 (Jansen et al 2013) and since, by Lemma 3.4(i), Construction 3.2 outputs instances of λ (mn)-L OOSE I NTERVAL -C ON STRAINED S CHEDULING with m machines.
(ii) follows since B IN PACKING is W[1]-hard parameterized by m even if the sizes of the n items are bounded by a
polynomial in n (Jansen et al 2013). In this case, Construction 3.2 generates λ (mn)-L OOSE I NTERVAL -C ONSTRAINED
S CHEDULING instances for which all numbers are bounded
polynomially in the number of jobs by Lemma 3.5(ii). Moreover, Construction 3.2 maps the m bins of the B IN PACKING
instance to the m machines of the output I NTERVAL -C ON STRAINED S CHEDULING instance.
t
u
Concluding this section, it is interesting to note that Theorem 3.1 also shows W[1]-hardness of λ -L OOSE I NTERVAL C ONSTRAINED S CHEDULING with respect to the height
parameter considered by Cieliebak et al (2004):
Definition 3.7 (Height) For an I NTERVAL -C ONSTRAINED
S CHEDULING instance and any time t ∈ N, let
St := { j ∈ J | t ∈ [t j , d j )}
denote the set of jobs whose time window contains time t.
The height of an instance is
h := max_{t∈N} |S_t|.
Proposition 3.8 Let λ : N → R be such that λ (n) ≥ 1 + n−c
for some integer c ≥ 1 and all n ≥ 2.
Then λ (n)-L OOSE I NTERVAL -C ONSTRAINED S CHED ULING of n jobs on m machines is W[1]-hard parameterized
by the height h.
Proof Proposition 3.8 follows in the same way as Theorem 3.1; one additionally has to prove that Construction 3.2
outputs I NTERVAL -C ONSTRAINED S CHEDULING instances
of height at most 2m. To this end, observe that, by (3.1), for
each i ∈ {1, . . . , n}, there are m jobs released at time (i − 1)B
whose deadline is no later than iB + A < (i + 1)B since
A ≤ B/2. These are all jobs created by Construction 3.2. Thus,
St contains only the m jobs released at time bt/Bc · B and the
m jobs released at time bt/B − 1c · B, which are 2m jobs in
total.
t
u
Remark 3.9 Proposition 3.8 complements findings of Cieliebak et al (2004), who provide a fixed-parameter tractability result for I NTERVAL -C ONSTRAINED S CHEDULING parameterized by h + σ : our result shows that their algorithm
presumably cannot be improved towards a fixed-parameter
tractability result for I NTERVAL -C ONSTRAINED S CHEDUL ING parameterized by h alone.
4 An algorithm for bounded looseness
In the previous section, we have seen that λ-LOOSE INTERVAL-CONSTRAINED SCHEDULING for any λ > 1 is strongly W[1]-hard parameterized by m and weakly NP-hard for m = 2. We complement this result by the following theorem, which yields a pseudo-polynomial-time algorithm for each constant m and λ.

Theorem 4.1 λ-LOOSE INTERVAL-CONSTRAINED SCHEDULING is solvable in ℓ^(O(λm)) · n + O(n log n) time, where ℓ := max_{j∈J} |d_j − t_j|.
The crucial observation for the proof of Theorem 4.1 is the
following lemma. It gives a logarithmic upper bound on the
height h of yes-instances (as defined in Definition 3.7). To
prove Theorem 4.1, we will thereafter present an algorithm
that has a running time that is single-exponential in h.
Lemma 4.2 Let I be a yes-instance of λ-LOOSE INTERVAL-CONSTRAINED SCHEDULING with m machines and ℓ := max_{j∈J} |d_j − t_j|. Then, I has height at most

2m · (log ℓ / (log λ − log(λ − 1)) + 1).
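Before turning to the proof, a quick numerical check of this bound (an illustrative sketch, not part of the paper; it assumes λ > 1 and ℓ ≥ 1):

```python
import math

def height_bound(m, lam, ell):
    """Upper bound on the height of a yes-instance from Lemma 4.2."""
    return 2 * m * (math.log(ell) / (math.log(lam) - math.log(lam - 1)) + 1)

# Two machines, looseness 1.5, time windows of length at most 100.
print(height_bound(m=2, lam=1.5, ell=100))   # about 20.8
```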
Proof Recall from Definition 3.7 that the height of an I NTER VAL -C ONSTRAINED S CHEDULING instance is maxt∈N |St |.
We will show that, in any feasible schedule for I and at
any time t, there are at most N jobs in St that are active on
the first machine at some time t 0 ≥ t, where
N≤
log `
+ 1.
log λ − log(λ − 1)
(4.1)
By symmetry, there are at most N jobs in St that are active
on the first machine at some time t 0 ≤ t. Since there are
m machines, the total number of jobs in St at any time t, and
therefore the height, is at most 2mN.
It remains to show (4.1). To this end, fix an arbitrary
time t and an arbitrary feasible schedule for I. Then, for
any d ≥ 0, let J(t +d) ⊆ St be the set of jobs that are active on
the first machine at some time t 0 ≥ t but finished by time t + d.
We show by induction on d that
(
0
if d = 0,
|J(t + d)| ≤
(4.2)
log d
− log(1−1/λ
+
1
if
d ≥ 1.
)
If d = 0, then |J(t + 0)| = 0 and (4.2) holds. Now, consider
the case d ≥ 1. If no job in J(t + d) is active at time t + d − 1,
then J(t + d) = J(t + d − 1) and (4.2) holds by the induction
hypothesis. Now, assume that there is a job j ∈ J(t + d) that
is active at time t + d − 1. Then, d j ≥ t + d and, since j ∈ St ,
t j ≤ t. Hence,
pj ≥
|d j − t j | |t + d − t|
d
≥
= .
λ
λ
λ
It follows that
|J(t + d)| ≤ 1 + |J(t + d − dd/λ e)|.
(4.3)
Thus, if d − dd/λ e = 0, then |J(t + d)| ≤ 1 + |J(t)| ≤ 1 and
(4.2) holds. If d − dd/λ e > 0, then, by the induction hypothesis, the right-hand side of (4.3) is
log(d − dd/λ e)
+1
log(1 − 1/λ )
log(d(1 − 1/λ ))
+1
≤ 1−
log(1 − 1/λ )
log d + log(1 − 1/λ )
= 1−
+1
log(1 − 1/λ )
log d
=−
+ 1,
log(1 − 1/λ )
≤ 1−
and (4.2) holds. Finally, since ` = max1≤ j≤n |d j − t j |, no job
in St is active at time t + `. Hence, we can now prove (4.1)
using (4.2) by means of
log `
N ≤ |J(t + `)| ≤ −
+1
log(1 − 1/λ )
log `
=−
+1
log λ λ−1
log `
=−
+1
log(λ − 1) − log λ
log `
=
+ 1.
log λ − log(λ − 1)
t
u
Proof It is well-known that (1 − 1/λ )λ < 1/e for any λ ≥ 1.
Hence, λ logb (1 − 1/λ ) = logb (1 − 1/λ )λ < logb 1/e ≤ −1,
that is, −λ logb (1 − 1/λ ) ≥ 1. Thus,
1
≤ λ.
− logb (1 − 1/λ )
(a) If t ≥ 1 and S ⊆ St−1 , then set T [t, S, b] := T [t − 1, S0 , b0 ],
where
(b) Otherwise, set T [t, S, b] := 1 if and only if at least one of
the following two cases applies:
i) there is a machine i ∈ {1, . . . , m} such that bi > −`
and T [t, S, b0 ] = 1, where b0 := (b01 , . . . , b0m ) with
(
bi − 1 if i0 = i,
0
bi0 :=
bi0
if i0 6= i,
or
ii) there is a job j ∈ S and a machine i ∈ {1, . . . , m}
such that bi > 0, t + bi ≤ d j , t + bi − p j ≥ t j , and
T [t, S \ { j}, b0 ] = 1, where b0 := (b01 , . . . , b0m ) with
(
bi − p j if i0 = i,
0
bi0 :=
bi0
if i0 6= i.
Note that, since j ∈ St , one has t j ≥ t − ` by definition of `. Hence, b0i ≥ −` is within the allowed
range {−`, . . . , `}.
Finally,
1
1
=
− logb (1 − 1/λ ) − logb ( λ −1 )
λ
1
=
.
− logb (λ − 1) + logb λ
To compute T , first, set T [0, 0,
/ b] := 1 for every vector b ∈
{−`, . . . , `}m . Now we compute the other entries of T by
increasing t, for each t by increasing b, and for each b by S
with increasing cardinality. Herein, we distinguish two cases.
b0i := min{bi + 1, `} for each i ∈ {1, . . . , m}.
Proposition 4.3 For any λ ≥ 1 and any b ∈ (1, e], it holds
that
1
≤ λ.
logb λ − logb (λ − 1)
and
Algorithm 4.4 We solve I NTERVAL -C ONSTRAINED S CHED ULING using dynamic programming. First, for an I NTERVAL C ONSTRAINED S CHEDULING instance, let ` := max j∈J |d j −
t j |, let St be as defined in Definition 3.7, and let St< ⊆ J be
the set of jobs j with d j ≤ t, that is, that have to be finished
by time t.
We compute a table T that we will show to have the
following semantics. For a time t ∈ N, a subset S ⊆ St of jobs
and a vector b = (b1 , . . . , bm ) ∈ {−`, . . . , `}m ,
1 if all jobs in S ∪ St< can be scheduled so
that machine i is idle from time t + bi
T [t, S, b] =
for each i ∈ {1, . . . , m},
0 otherwise.
S0 := S ∪ (St−1 ∩ St< ) and b0 := (b01 , . . . , b0m ) with
The following proposition gives some intuition on how the
bound behaves for various λ .
1
≤1
−λ logb (1 − 1/λ )
7
Finally, we answer yes if and only if T [tmax , Stmax , 1m · `] = 1,
where tmax := max j∈J t j .
t
u
t
u
Towards our proof of Theorem 4.1, Lemma 4.2 provides a
logarithmic upper bound on the height h of yes-instances of
I NTERVAL -C ONSTRAINED S CHEDULING. Our second step
towards the proof of Theorem 4.1 is the following algorithm,
which runs in time that is single-exponential in h. We first
present the algorithm and, thereafter, prove its correctness
and running time.
Lemma 4.5 Algorithm 4.4 correctly decides I NTERVAL -C ON STRAINED S CHEDULING .
Proof We prove the following two claims: For any time 0 ≤
t ≤ tmax , any set S ⊆ St , and any vector b = (b1 , . . . , bm ) ∈
{−`, . . . , `}m ,
if T [t, S, b] = 1, then all jobs in S ∪St< can be scheduled so that machine i is idle from time t + bi for
each i ∈ {1, . . . , m},
(4.4)
and
if all jobs in S ∪ St< can be scheduled so that
machine i is idle from time t + bi for each i ∈
{1, . . . , m}, then T [t, S, b] = 1.
(4.5)
From (4.4) and (4.5), the correctness of the algorithm easily
follows: observe that, in any feasible schedule, all machines
are idle from time tmax + ` and all jobs J ⊆ Stmax ∪ St<max are
scheduled. Hence, there is a feasible schedule if and only if
T [tmax , Stmax , 1m · `] = 1. It remains to prove (4.4) and (4.5).
First, we prove (4.4) by induction. For T [0, 0,
/ b] = 1,
(4.4) holds since there are no jobs to schedule. We now
prove (4.4) for T [t, S, b] under the assumption that it is true
for all T [t 0 , S0 , b0 ] with t 0 < t or t 0 = t and b0 b.
If T [t, S, b] is set to 1 in Algorithm 4.4(a), then, for S0
and b0 as defined in Algorithm 4.4(a), T [t − 1, S0 , b0 ] = 1.
<
By the induction hypothesis, all jobs in S0 ∪ St−1
can be
scheduled so that machine i is idle from time t − 1 + b0i ≤
<
t +bi . Moreover, S∪St< = S0 ∪St−1
since S0 = S∪(St−1 ∩St< ).
Hence, (4.4) follows.
If T [t, S, b] is set to 1 in Algorithm 4.4(bi), then one has
T [t, S, b0 ] = 1 for b0 as defined in Algorithm 4.4(bi). By the
induction hypothesis, all jobs in S ∪ St< can be scheduled so
that machine i0 is idle from time t + b0i0 ≤ t + bi0 , and (4.4)
follows.
If T [t, S, b] is set to 1 in Algorithm 4.4(bii), then T [t, S \
{ j}, b0 ] = 1 for j and b0 as defined in Algorithm 4.4(bii).
By the induction hypothesis, all jobs in (S \ { j}) ∪ St< can
be scheduled so that machine i0 is idle from time t + b0i0 . It
remains to schedule job j on machine i in the interval [t +
b0i ,t + bi ), which is of length exactly p j by the definition of b0 .
Then, machine i is idle from time t +bi and any machine i0 6= i
is idle from time t + b0i0 = t + bi0 , and (4.4) follows.
It remains to prove (4.5). We use induction. Claim (4.5)
clearly holds for t = 0, S = 0,
/ and any b ∈ {−`, . . . , `}m by
the way Algorithm 4.4 initializes T . We now show (4.5)
provided that it is true for t 0 < t or t 0 = t and b0 b.
<
If S ⊆ St−1 , then S ∪ St< = S0 ∪ St−1
for S0 as defined in
<
Algorithm 4.4(a). Moreover, since no job in S0 ∪ St−1
can be
active from time t − 1 + ` by definition of `, each machine i
is idle from time t − 1 + min{bi + 1, `} = t − 1 + b0i , for b0 =
(b01 , . . . , b0m ) as defined in Algorithm 4.4(a). Hence, T [t −
1, S0 , b0 ] = 1 by the induction hypothesis, Algorithm 4.4(a)
applies, sets T [t, S, b] := T [t − 1, S0 , b0 ] = 1, and (4.5) holds.
If some machine i is idle from time t + bi − 1, then, by
the induction hypothesis, T [t, S, b0 ] = 1 in Algorithm 4.4(bi),
the algorithm sets T [t, S, b] := 1, and (4.5) holds.
In the remaining case, every machine i is busy at time t +
bi − 1 and K := S \ St−1 6= 0.
/ Thus, there is a machine i
executing a job from K. For each job j0 ∈ K, we have t j0 ≥ t.
Since machine i is idle from time t + bi and executes j0 , one
has bi > 0. Let j be the last job scheduled on machine i. Then,
since machine i is busy at time t + bi − 1, we have d j ≥ t +
bi > t and j ∈
/ St< . Hence, j ∈ St . Since machine i is idle from
time t + bi , we also have t + bi − p j ≥ t j . Now, if we remove j
from the schedule, then machine i is idle from time t +bi − p j
and each machine i0 6= i is idle from time t + b0i0 = t + bi0 .
Thus, by the induction hypothesis, T [t, S \ { j}, b0 ] = 1 in
Algorithm 4.4(bii), the algorithm sets T [t, S, b] := 1, and
(4.5) holds.
t
u
Lemma 4.6 Algorithm 4.4 can be implemented to run in
O(2^h · (2ℓ + 1)^m · (h²m + hm²) · nℓ + n log n) time, where ℓ :=
max j∈J |d j − t j | and h is the height of the input instance.
Proof Concerning the running time of Algorithm 4.4, we first
bound tmax . If tmax > n`, then there is a time t ∈ {0, . . . ,tmax }
such that St = 0/ (cf. Definition 3.7). Then, we can split the
instance into one instance with the jobs St< and into one
instance with the jobs J \ St< . We answer “yes” if and only if
both of them are yes-instances. Henceforth, we assume that
tmax ≤ n`.
In a preprocessing step, we compute the sets St and St−1 ∩
<
St , which can be done in O(n log n + hn + tmax ) time by
sorting the input jobs by deadlines and scanning over the
input time windows once: if no time window starts or ends at
time t, then St is simply stored as a pointer to the St 0 for the
last time t 0 where a time window starts or ends.
Now, the table T of Algorithm 4.4 has at most (tmax +
1) · 2h · (2` + 1)m ≤ (n` + 1) · 2h · (2` + 1)m entries. A table
entry T [t, S, b] can be accessed in O(m + h) time using a
carefully initialized trie data structure (van Bevern 2014)
since |S| ≤ h and since b is a vector of length m.
To compute an entry T [t, S, b], we first check, for each
job j ∈ S, whether j ∈ St−1 . If this is the case for each j, then
Algorithm 4.4(a) applies. We can prepare b0 in O(m) time
and S0 in O(h) time using the set St−1 ∩ St< computed in the
preprocessing step. Then, we access the entry T [t − 1, S0 , b0 ]
in O(h + m) time. Hence, (a) takes O(h + m) time.
If Algorithm 4.4(a) does not apply, then we check whether
Algorithm 4.4(bi) applies. To this end, for each i ∈ {1, . . . , m},
we prepare b0 in O(m) time and access T [t, S, b0 ] in O(h + m)
time. Hence, it takes O(m2 + hm) time to check (bi).
To check whether Algorithm 4.4(bii) applies, we try
each j ∈ S and each i ∈ {1, . . . , m} and, for each, prepare b0
in O(m) time and check T [t, S \ { j}, b0 ] in O(h + m) time.
Thus (bii) can be checked in O(h2 m + hm2 ) time.
t
u
With the logarithmic upper bound on the height h of yesinstances of I NTERVAL -C ONSTRAINED S CHEDULING given
by Lemma 4.2 and using Algorithm 4.4, which, by Lemma 4.6,
runs in time that is single-exponential in h for a fixed number m of machines, we can now prove Theorem 4.1.
Proof (of Theorem 4.1) We use the following algorithm. Let

h := 2m · (log ℓ / (log λ − log(λ − 1)) + 1).

If, for any time t ∈ N, we have |St| > h, then we are facing a no-instance by Lemma 4.2 and immediately answer "no". This can be checked in O(n log n) time: one uses the interval graph coloring problem to check whether we can schedule the time windows of all jobs (as intervals) onto h machines.
Otherwise, we conclude that our input instance has height at most h. We now apply Algorithm 4.4, which, by Lemma 4.6, runs in O(2^h · (2ℓ + 1)^m · (h²m + hm²) · nℓ + n log n) time. Since, by Proposition 4.3, h ∈ O(λm log ℓ), this running time is ℓ^(O(λm)) · n + O(n log n). □
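The filter used in this proof only needs the height of the instance, which a simple sweep over the window endpoints yields in O(n log n) time; the sketch below (ours, assuming λ > 1 and jobs given as (release, deadline, proc) triples) combines this with the Lemma 4.2 bound.

```python
import math

def height(jobs):
    """Maximum number of time windows [t_j, d_j) overlapping any time point."""
    events = []
    for t, d, _ in jobs:
        events.append((t, +1))
        events.append((d, -1))
    events.sort()                      # ties: window ends before a new one starts
    best = cur = 0
    for _, delta in events:
        cur += delta
        best = max(best, cur)
    return best

def passes_height_filter(jobs, m, lam):
    ell = max(d - t for t, d, _ in jobs)
    bound = 2 * m * (math.log(ell) / (math.log(lam) - math.log(lam - 1)) + 1)
    return height(jobs) <= bound       # otherwise: a no-instance by Lemma 4.2
```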
A natural question is whether Theorem 4.1 can be generalized
to λ = ∞, that is, to I NTERVAL -C ONSTRAINED S CHED ULING without looseness constraint. This question can be
easily answered negatively using a known reduction from
3-PARTITION to I NTERVAL -C ONSTRAINED S CHEDULING
given by Garey and Johnson (1979):
Proposition 4.7 If there is an `O(m) · poly(n)-time algorithm
for I NTERVAL -C ONSTRAINED S CHEDULING, where ` :=
max j∈J |d j − t j |, then P = NP.
Proof Garey and Johnson (1979, Theorem 4.5) showed that
I NTERVAL -C ONSTRAINED S CHEDULING is NP-hard even
on m = 1 machine. In their reduction, ` ∈ poly(n). A supposed `O(m) · poly(n)-time algorithm would solve such instances in polynomial time.
t
u
5 An algorithm for bounded slack
So far, we considered I NTERVAL -C ONSTRAINED S CHED ULING with bounded looseness λ . Cieliebak et al (2004)
additionally considered I NTERVAL -C ONSTRAINED S CHED ULING for any constant slack σ .
Recall that Cieliebak et al (2004) showed that λ -L OOSE
I NTERVAL -C ONSTRAINED S CHEDULING is NP-hard for
any constant λ > 1 and that Theorem 3.1 shows that having a small number m of machines does not make the problem significantly easier.
Similarly, Cieliebak et al (2004) showed that σ -S LACK
I NTERVAL -C ONSTRAINED S CHEDULING is NP-hard already for σ = 2. Now we contrast this result by showing
that σ -S LACK I NTERVAL -C ONSTRAINED S CHEDULING is
fixed-parameter tractable for parameter m + σ . More specifically, we show the following:
Theorem 5.1 σ-SLACK INTERVAL-CONSTRAINED SCHEDULING is solvable in time O((σ + 1)^((2σ+1)m) · n · σm · log(σm) + n log n).
Similarly as in the proof of Theorem 4.1, we first give an
upper bound on the height of yes-instances of I NTERVAL C ONSTRAINED S CHEDULING as defined in Definition 3.7.
To this end, we first show that each job j ∈ St has to occupy
some of the (bounded) machine resources around time t.
Lemma 5.2 At any time t in any feasible schedule for σ S LACK I NTERVAL -C ONSTRAINED S CHEDULING, each job
j ∈ St is active at some time in the interval [t − σ ,t + σ ].
Proof If the time window of j is entirely contained in [t −
σ ,t + σ ], then, obviously, j is active at some time during the
interval [t − σ ,t + σ ].
Now, assume that the time window of j is not contained
in [t − σ ,t + σ ]. Then, since j ∈ St , its time window contains t by Definition 3.7 and, therefore, one of t − σ or t + σ .
Assume, for the sake of contradiction, that there is a schedule such that j is not active during [t − σ ,t + σ ]. Then j is
inactive for at least σ + 1 time units in its time window—
a contradiction.
t
u
Now that we know that each job in St has to occupy machine
resources around time t, we can bound the size of St in the
amount of resources available around that time.
Lemma 5.3 Any yes-instance of σ -S LACK I NTERVAL -C ON STRAINED S CHEDULING has height at most (2σ + 1)m.
Proof Fix any feasible schedule for an arbitrary yes-instance
of σ -S LACK I NTERVAL -C ONSTRAINED S CHEDULING and
any time t. By Lemma 5.2, each job in St is active at some
time in the interval [t −σ ,t +σ ]. This interval has length 2σ +
1. Thus, on m machines, there is at most (2σ + 1)m available
processing time in this time interval. Consequently, there can
be at most (2σ + 1)m jobs with time intervals in St .
t
u
We finally arrive at the algorithm to prove Theorem 5.1.
Proof (of Theorem 5.1) Let h := (2σ + 1)m. In the same
way as for Theorem 4.1, in O(n log n) time we discover that
we face a no-instance due to Lemma 5.3 or, otherwise, that
our input instance has height at most h. In the latter case,
we apply the O(n · (σ + 1)^h · h log h)-time algorithm due to
Cieliebak et al (2004).
t
u
6 Conclusion
Despite the fact that there are comparatively few studies on
the parameterized complexity of scheduling problems, the
field of scheduling indeed offers many natural parameterizations and fruitful challenges for future research. Notably,
Marx (2011) saw one reason for the lack of results on “parameterized scheduling” in the fact that most scheduling
problems remain NP-hard even for a constant number of machines (a very obvious and natural parameter indeed), hence
destroying hope for fixed-parameter tractability results with
respect to this parameter. In scheduling interval-constrained
jobs with small looseness and small slack, we also have been
confronted with this fact, facing (weak) NP-hardness even
for two machines.
The natural way out of this misery, however, is to consider
parameter combinations, for instance combining the parameter number of machines with a second one. In our study, these
were combinations with looseness and with slack (see also
Table 1.1). In a more general perspective, this consideration
makes scheduling problems a prime candidate for offering
a rich set of research challenges in terms of a multivariate
complexity analysis (Fellows et al 2013; Niedermeier 2010).
Herein, for obtaining positive algorithmic results, research
has to go beyond canonical problem parameters, since basic
scheduling problems remain NP-hard even if canonical parameters are simultaneously bounded by small constants, as
demonstrated by Kononov et al (2012).1
Natural parameters to be studied in future research on I N TERVAL -C ONSTRAINED S CHEDULING are the combination
of slack and looseness—the open field in our Table 1.1—and
the maximum and minimum processing times, which were
found to play an important role in the online version of the
problem (Saha 2013).
Finally, we point out that our fixed-parameter algorithms for Interval-Constrained Scheduling are easy to implement and may be practically applicable if the looseness, slack, and number of machines are small (about three or four
each). Moreover, our algorithms are based on upper bounds
on the height of an instance in terms of its number of machines, its looseness, and slack. Obviously, this can also be
exploited to give lower bounds on the number of required machines based on the structure of the input instance, namely, on
its height, looseness, and slack. These lower bounds may be
of independent interest in exact branch and bound or approximation algorithms for the machine minimization problem.
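As a small illustration of this remark (ours, not the authors' code), the height computed by the sweep in the earlier sketch immediately yields a lower bound on the number of machines needed:

import math

def machines_lower_bound(height, sigma):
    """height: maximum number of overlapping time windows; sigma: maximum slack."""
    return math.ceil(height / (2 * sigma + 1))

print(machines_lower_bound(height=7, sigma=1))   # at least 3 machines are required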
References
van Bevern R (2014) Towards optimal and expressive kernelization for
d-Hitting Set. Algorithmica 70(1):129–147
van Bevern R, Chen J, Hüffner F, Kratsch S, Talmon N, Woeginger GJ
(2015a) Approximability and parameterized complexity of multicover by c-intervals. Information Processing Letters 115(10):744–
749
van Bevern R, Mnich M, Niedermeier R, Weller M (2015b) Interval
scheduling and colorful independent sets. Journal of Scheduling
18(5):449–469
Bodlaender HL, Fellows MR (1995) W[2]-hardness of precedence
constrained k-processor scheduling. Operations Research Letters
18(2):93–97
Chen L, Megow N, Schewior K (2016) An O(log m)-competitive algorithm for online machine minimization. In: Proceedings of the
27th Annual ACM-SIAM Symposium on Discrete Algorithms
(SODA’16), SIAM, pp 155–163
1 The results of Kononov et al (2012) were obtained in context of a
multivariate complexity analysis framework described by Sevastianov
(2005), which is independent of the framework of parameterized complexity theory considered in our work: it allows for systematic classification of problems as polynomial-time solvable or NP-hard given concrete
constraints on a set of instance parameters. It is plausible that this framework is applicable to classify problems as FPT or W[1]-hard as well.
Chuzhoy J, Guha S, Khanna S, Naor J (2004) Machine minimization
for scheduling jobs with interval constraints. In: Proceedings of
the 45th Annual Symposium on Foundations of Computer Science
(FOCS’04), pp 81–90
Cieliebak M, Erlebach T, Hennecke F, Weber B, Widmayer P (2004)
Scheduling with release times and deadlines on a minimum number
of machines. In: Exploring New Frontiers of Theoretical Informatics, IFIP International Federation for Information Processing, vol
155, Springer, pp 209–222
Cygan M, Fomin FV, Kowalik L, Lokshtanov D, Marx D, Pilipczuk M,
Pilipczuk M, Saurabh S (2015) Parameterized Algorithms. Springer
Downey RG, Fellows MR (2013) Fundamentals of Parameterized Complexity. Springer
Fellows MR, McCartin C (2003) On the parametric complexity of
schedules to minimize tardy tasks. Theoretical Computer Science
298(2):317–324
Fellows MR, Jansen BMP, Rosamond FA (2013) Towards fully multivariate algorithmics: Parameter ecology and the deconstruction
of computational complexity. European Journal of Combinatorics
34(3):541–566
Flum J, Grohe M (2006) Parameterized Complexity Theory. Springer
Garey MR, Johnson DS (1979) Computers and Intractability: A Guide
to the Theory of NP-Completeness. Freeman
Halldórsson MM, Karlsson RK (2006) Strip graphs: Recognition and
scheduling. In: Proceedings of the 32nd International Workshop on
Graph-Theoretic Concepts in Computer Science (WG’06), Springer,
LNCS, vol 4271, pp 137–146
Hermelin D, Kubitza JM, Shabtay D, Talmon N, Woeginger G (2015)
Scheduling two competing agents when one agent has significantly
fewer jobs. In: Proceedings of the 10th International Symposium
on Parameterized and Exact Computation (IPEC’15), Leibniz International Proceedings in Informatics (LIPIcs), vol 43, Schloss
Dagstuhl–Leibniz-Zentrum für Informatik, pp 55–65
Jansen K, Kratsch S, Marx D, Schlotter I (2013) Bin packing with fixed
number of bins revisited. Journal of Computer and System Sciences
79(1):39–49
Kolen AWJ, Lenstra JK, Papadimitriou CH, Spieksma FCR (2007)
Interval scheduling: A survey. Naval Research Logistics 54(5):530–
543
Kononov A, Sevastyanov S, Sviridenko M (2012) A complete 4-parametric complexity classification of short shop scheduling problems. Journal of Scheduling 15(4):427–446
Malucelli F, Nicoloso S (2007) Shiftable intervals. Annals of Operations
Research 150(1):137–157
Marx D (2011) Fixed-parameter tractable scheduling problems. In:
Packing and Scheduling Algorithms for Information and Communication Services (Dagstuhl Seminar 11091)
Mnich M, Wiese A (2015) Scheduling and fixed-parameter tractability.
Mathematical Programming 154(1-2):533–562
Niedermeier R (2006) Invitation to Fixed-Parameter Algorithms. Oxford
University Press
Niedermeier R (2010) Reflections on multivariate algorithmics and
problem parameterization. In: Proceedings of the 27th International Symposium on Theoretical Aspects of Computer Science
(STACS’10), Schloss Dagstuhl–Leibniz-Zentrum für Informatik,
Leibniz International Proceedings in Informatics (LIPIcs), vol 5,
pp 17–32
Saha B (2013) Renting a cloud. In: Annual Conference on Foundations of Software Technology and Theoretical Computer Science
(FSTTCS) 2013, Schloss Dagstuhl–Leibniz-Zentrum für Informatik, Leibniz International Proceedings in Informatics (LIPIcs),
vol 24, pp 437–448
Sevastianov SV (2005) An introduction to multi-parameter complexity
analysis of discrete problems. European Journal of Operational
Research 165(2):387–397
High Five: Improving Gesture Recognition by
Embracing Uncertainty
arXiv:1710.09441v1 [] 25 Oct 2017
Diman Zad Tootaghaj† , Adrian Sampson‡ , Todd Mytkowicz∗ , Kathryn S McKinley∗∗
† The Pennsylvania State University, ‡ Cornell University, ∗ Microsoft Research, ∗∗ Google Research
{dxz149}@cse.psu.edu, {asampson}@cs.cornell.edu, {toddm}@microsoft.com, {mckinley}@cs.utexas.edu
HMM training; and (2) classification generates one observation
sequence and produces only one deterministic gesture, rather
than reasoning explicitly about the uncertainty introduced by
gesture error.
Abstract—Sensors on mobile devices—accelerometers, gyroscopes, pressure meters, and GPS—invite new applications in
gesture recognition, gaming, and fitness tracking. However, programming them remains challenging because human gestures
captured by sensors are noisy. This paper illustrates that noisy
gestures degrade training and classification accuracy for gesture recognition in state-of-the-art deterministic Hidden Markov
Models (HMM). We introduce a new statistical quantization
approach that mitigates these problems by (1) during training,
producing gesture-specific codebooks, HMMs, and error models
for gesture sequences; and (2) during classification, exploiting the
error model to explore multiple feasible HMM state sequences.
We implement classification in Uncertain<T>, a probabilistic
programming system that encapsulates HMMs and error models
and then automates sampling and inference in the runtime.
Uncertain<T> developers directly express a choice of application-specific trade-off between recall and precision at gesture recognition time, rather than at training time. We demonstrate benefits in
configurability, precision, recall, and recognition on two data sets
with 25 gestures from 28 people and 4200 total gestures. Incorporating gesture error more accurately in modeling improves the
average recognition rate of 20 gestures from 34% in prior work to
62%. Incorporating the error model during classification further
improves the average gesture recognition rate to 71%. As far as
we are aware, no prior work shows how to generate an HMM
error model during training and use it to improve classification
rates.
We measure gesture noise in accelerometer data and find
it is a gesture-specific Gaussian mixture model: the error
distributions along the x, y, and z axes all vary. In contrast,
when the phone is still, accelerometer error is extremely small,
Gaussian, and uniform in all dimensions. Gesture-specific
error matches our intuition about humans. Making an “M”
is harder and more subject to error than making an “O”
because users make three changes in direction versus a smooth
movement. Even when making the same gesture, humans hold
and move devices with different orientations and rotations.
Since gesture observation is a sequence of error readings,
differences in gesture sizes and speed can compound gesture
error.
State-of-the-art HMM systems [6], [9], [11] assume errors
are small, uniform, and not gesture specific. They compute
one radius for all gestures and all x, y, and z accelerometer
data. They map all gestures to a single spherical set of
codewords centered at (0, 0, 0) that they use to train the HMM.
Classification compounds this problem because HMMs use
deterministic quantization. Even though several nearby states
may be very likely, traditional HMM classifiers only explore
one.
I. INTRODUCTION
Modern mobile devices host a diverse and expanding array
of sensors: accelerometers, gyroscopes, pressure meters, thermometers, ambient light sensors, and more. These sensors
invite new experiences in fitness, health, translating sign
language, games, and accessibility for people with disabilities [1]–[3]. Despite all these new input methods, user input
on smartphones is still mostly limited to touching the screen
and keypad, a 2-D detection problem. This paper identifies and
addresses algorithmic and practical impediments to deploying 3-D gesture recognition on smartphones. We extend the
commonly used Hidden Markov Model (HMM) approach [4]–
[7]. Determining a 3-D path through space is harder than 2-D
gesture recognition [8] because human gestures as captured by
sensors are uncertain and noisy—much noisier than the sensors
themselves. Humans hold the device at different angles, get
tired, and change their gestures’ pattern. Prior state-of-the-art
gesture-recognition algorithms using HMMs [6], [9], [10] are
limited because (1) they assume all gesture error is uniform
and project all observations to one spherical codebook for
To solve these problems, we present a holistic statistical
quantization approach that (a) computes and reasons about
noise in gesture training data; (b) produces per-gesture HMMs
and their error models; (c) modifies classification to use the
error model to choose the most likely gesture; and (d) uses the
Uncertain<T> probabilistic programming system [12], [13] to
simplify the implementation and expose the classifier’s tradeoff between precision and recall.
During training, we measure error in accelerometer data
sequences across gestures and use the mean and variance to
improve HMM modeling and classification. In training, we
fit per-gesture data to codewords on an ellipse and generate
gesture-specific HMM codebooks. We show that ellipse-based codebooks improve accuracy over prior sphere-based
approaches [6], [9]. We target personal mobile devices where
users both specify and train gestures. With per-gesture HMM
models, users train one gesture at a time. Instead of performing
classification by deterministically mapping the 3-D acceleration data to the closest codeword, we sample from the error
model produced during training to explore a range of potential
gestures.
We implement classification as a library in the
Uncertain<T> programming language. The library provides
trained HMM models and their error models. A gesture
is an Uncertain type. Values of Uncertain types represent
probability distributions by returning samples of the base
type from the error distribution. The runtime lazily performs
statistical tests to evaluate computations on these values.
When the application queries an Uncertain value, such as
with an if statement on the gesture, the runtime performs
the specified statistical hypothesis test by sampling values
from the HMM computation.
[Fig. 1 (graphic): non-probabilistic approaches are dynamic time warping [1, 19, 22, 30, 39] (easy to implement, low accuracy on user-independent gestures) and neural networks [28] (need a large training data set); probabilistic HMM approaches are deterministic programming and quantization [13, 23, 24, 34] and High Five (probabilistic programming and quantization).]
Fig. 1: Design space for gesture recognition algorithms.
II. OVERVIEW OF EXISTING APPROACHES
Recognizing human gestures is key to more natural human-computer interaction [6], [15]–[17]. Sensor choices include
data gloves [1], cameras [18], touch detection for 2-D painting
gestures [8], and our focus, 3-D motion tracking accelerometer
and gyroscope sensors [19]. Figure 1 presents the design space
for common gesture recognition approaches. Non-Probabilistic
approaches include dynamic time warping [11], [16], [17],
[20], [21] and neural networks [22]. A common probabilistic
approach is Hidden Markov Models (HMMs) [6], [9], [10],
[23] that use non-linear algorithms to find similar time-varying
sequences.
Hidden Markov Models: HMMs for gesture recognition
give the best recognition rates for both user-dependent and
user-independent gestures [24]. Our HMM implementation for
gesture recognition differs from the prior literature as follows.
First, instead of using a deterministic codebook for discretization, we use the statistical information about each gesture
during HMM training and generate a different codebook for
each gesture. Second, we exploit a probabilistic programming
framework in our implementation and use uncertain data types
to make more accurate estimations of the probability of each
gesture. Third, unlike prior work that deterministically maps
raw data to one static codebook, we use a stochastic mapping
of raw scattered data based on the gesture’s error model and
the distance from the data to each gesture’s trained codebook.
Kmeans quantization: Since continuous HMMs for gesture
recognition are impractical due to the high complexity of tracking
huge observation states, a variety of quantization techniques
transform sensor data into discrete values. The most-common
is kmeans clustering [6], [25]. Although kmeans works well
for large isotropic data sets, it is very sensitive to outliers
and therefore noisy sensor data degrades its effectiveness. For
example, a single noisy outlier results in a singleton cluster.
Furthermore, because humans must train gesture recognizers,
the training gesture data sets are necessarily small, whereas
kmeans is best suited for large data sets.
Dynamic time warping: Dynamic time warping applies
dynamic programming to match time-varying sequences where
gesture samples are represented as feature vectors [16], [17],
[24]. The algorithm constructs a distance matrix between
each gesture template T = {t1 , t2 , ...} and a gesture sample
We evaluate statistical quantization on two data sets: (1) five
gestures trained by 20 people (10 women and 10 men) on a
Windows Phone that we collect, and (2) 20 gestures trained
by 8 people from Costante et al. [14]. Compared to traditional
deterministic spherical quantizers [6], statistical quantization
substantially improves recall, precision, and recognition rate
on both data sets. Improvements result from better modeling and using error in classification. Deterministic elliptical
quantization improves average gesture recognition rates on 20
gestures to 62%, compared to the 34% for traditional deterministic spherical quantization. Statistical elliptical quantization
further improves gesture recognition rates to 71%.
We illustrate the power of our framework to trade off
precision and recall because it exploits the error model during
classification. Prior work chooses one tradeoff during training.
Different configurations significantly improve both precision
and recall. This capability makes statistical quantization suitable both for applications where false positives are undesirable
or even dangerous, and for other applications that prioritize
making a decision over perfect recognition.
Our most significant contribution is showing how to derive
and use gesture error models to improve HMM classification
accuracy and configurability. Our Uncertain<T> approach is a case study in how a programming-language abstraction for error inspires improvements in machine-learning systems.
HMM inference algorithms, such as Baum–Welch, exemplify
software that ignores valuable statistical information because
it can be difficult to track and use. They infer a sequence of
hidden states assuming perfect sensor observations of gestures.
Our approach invites further work on enhancing inference in
other machine learning domains, such as speech recognition
and computational biology, that operate on noisy data. We
plan to make the source code available upon publication.
The Uncertain<T> compiler and runtime are already open
source [13].
[Fig. 3: Location errors accumulate from various small angle errors. Axes: location error (m) over time (s); one curve per angle error from 0.1 to 1 degree.]
In an ergodic HMM, each hidden state can be reached from every other hidden state in one transition. A left-to-right HMM model does not have any backward transitions from the current state. We consider both ergodic and left-to-right HMM models.
Fig. 2: HMM example with state and output probabilities.
S = {s1 , s2 , ...}. The algorithm next calculates a matching
cost DTW(T, S) between each gesture template and the
sample gesture. The sample gesture is classified as the gesture
template with the minimum matching cost. This approach
is easy to implement, but its accuracy for user-independent
gestures is low [11], [26]. Furthermore, it is deterministic
and does not capture the stochastic and noisy behavior of
accelerometer and gyroscope’s data.
Neural networks: Neural networks classify large data sets
effectively [22]. During training, the algorithm adjusts the
weight values and biases to improve classification. While neural networks work well for large data sets, their applicability
is limited for small amounts of training data [5]. Asking end
users to perform hundreds of gestures to train a neural network
model is impractical.
IV. LIMITATIONS OF EXISTING GESTURE RECOGNITION APPROACHES
In theory, all machine learning algorithms tolerate noise in
their training data. Common approaches include using lots of
training data, adding features, and building better models, e.g.,
adding more interior nodes to an HMM. In practice, we show
that understanding and measuring error inspires improvements
in modeling and classification.
A. 3-D Path Tracking is a Hard Problem [27]
A gesture is a meaningful 3-D movement of a mobile
device. When the phone moves, sensors gather an observation
consisting of a sequence of 3-D accelerometer data. We use
accelerometer observations to train 3-D gesture recognition
models, but our approach should apply to other sensors.
III. HIDDEN MARKOV MODEL BACKGROUND
HMMs are used extensively in pattern recognition algorithms for speech, hand gestures, and computational biology.
HMMs are Bayesian networks with the following two properties: 1) the state of the system at time t (St), which produces the observed process (Yt) as a random or deterministic function of the state, is hidden from the observer; and 2) the current state of the system St given the
previous state St−1 is independent of all prior states Sτ for
τ < t − 1 [4]. The goal is to find the most likely sequence
of hidden states. Generally speaking, an HMM is a time
sequence of an observation sequence X = {X1 , X2 , ..., Xn },
derived from a quantized codebook V = {v1 , v2 , ..., v|V | },
that is Xk ∈ V, k = 1, 2, ..., n. In addition, hidden states
Y = {Y1 , Y2 , ..., Yn } are derived from the states in the system
S = s1 , s2 , ..., s|S| , that is Yk ∈ S, k = 1, 2, ..., n. The
state transition matrix A = {aij } i, j = 1...|S| models the
probability of transitioning from state si to sj . Furthermore,
B = {bjk } j = 1...|S|, k = 1...|V | models the probability
that the hidden state sj generates the observed output vk .
Figure 2 shows an example of an HMM model with two
hidden states, three observed states, and the corresponding
state transition and output probabilities. This HMM is ergodic.
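As an illustration of how such a model is evaluated, the following minimal sketch (ours, not the authors' code) computes P(X | θ) for a discrete HMM θ = (A, B, π) with the standard forward recursion; the toy numbers are arbitrary.

import numpy as np

def forward_likelihood(A, B, pi, X):
    """A: |S|x|S| state transitions, B: |S|x|V| output probabilities,
    pi: initial state probabilities, X: sequence of observed codeword indices."""
    alpha = pi * B[:, X[0]]
    for x in X[1:]:
        alpha = (alpha @ A) * B[:, x]
    return alpha.sum()           # P(X | theta)

A = np.array([[0.7, 0.3], [0.4, 0.6]])
B = np.array([[0.5, 0.4, 0.1], [0.1, 0.3, 0.6]])
pi = np.array([0.6, 0.4])
print(forward_likelihood(A, B, pi, [0, 1, 2, 1]))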
Since users hold devices at different angles and move at
different velocities even when making the same gesture, the
3-D accelerometer data includes a gravity component on all
axes, which varies per user and gesture. Figure 6 shows how
the phone angle generates different acceleration data projected
on the X, Y, and Z axes due to gravity. One approach we tried
was to eliminate gravity variation by using gyroscope data and
tracking a 3-D path. This approach does not work because it
actually amplifies error. We show this derivation to motivate
the difficulty of path tracking and how gesture errors make it
worse.
A 3-D path tracking approach models total acceleration and
projects it to a position as follows. Given
am = af − R(α, β, γ) g ẑ    (1)
where am is the measured data from the accelerometer; af is
the actual acceleration applied to the phone’s frame by the
user; R(α, β, γ) = Rz (α)Ry (β)Rx (γ) is the rotation matrix
between the actual force applied to the phone and the frame
of the sensor; and ẑ is the unit vector along the z direction [28],
[Fig. 4: Error distribution of accelerometer data for “O”.]
[Fig. 5: Error distribution of accelerometer data for “N”.]
[Fig. 6: The phone angle changes accelerometer readings (spherical coordinates r, θ, Φ with z = r·cos(θ)).]
[29]. Rotating the sensor frame acceleration to the actual force
frame gives the inertial acceleration:
ainertial = R(α, β, γ)^{−1} af = R(α, β, γ)^{−1} am + g ẑ    (2)
Integration of the inertial acceleration produces a velocity, and integrating the acceleration twice produces a phone position:
V(t) = ∫ ainertial dt    (3)
R(t) = ∫∫ ainertial dt dt    (4)
[Fig. 7: Error in sensor and codeword estimation (shaded circles) is mitigated by Uncertain<T>.]
Mapping the data to codewords can exploit this information. Prior approaches fall down on both fronts: they do not learn separate models for each gesture or accommodate gesture noise in their codeword maps [4]–[7].
A rotation matrix is obtained by multiplying each of the yaw,
roll, and pitch rotation matrices. Adding gyroscope data, and
assuming the phone is still at t = 0 (which means we know
the initial angle with respect to gravity), the accumulated
rotational velocity determines the 3-D angles with respect to
gravity at any time [28]–[30]. Projecting the accelerator data
in this manner may seem appealing, but it is impractical for
two reasons. (1) It results in dimensionless gestures, which
means the classifier cannot differentiate a vertical circle from a
horizontal circle. Users would find this confusing. (2) It amplifies noise making machine learning harder. Figure 3 shows the
accumulated error over time for different values of angle error.
Even small errors result in huge drift in tracking the location,
making gesture tracking almost impossible. Consequently, we
need to use a different approach to handling gesture errors.
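The effect of integrating small angle errors can be illustrated with a back-of-the-envelope sketch (ours): a constant angle error ε misattributes roughly g·sin(ε) of gravity to motion, and double integration turns this into a position drift that grows quadratically with time.

import math

g = 9.81                                      # m/s^2
for eps_deg in (0.1, 0.5, 1.0):               # constant angle error in degrees
    drift = 0.5 * g * math.sin(math.radians(eps_deg)) * 3600 ** 2
    print(f"angle error {eps_deg} deg -> roughly {drift:.1e} m drift after one hour")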
C. Noise in Classification
Noise affects how the system maps a sequence of continuous observations to discrete codewords. A deterministic
quantization algorithm does not deal with this source of
error. Figure 7 illustrates why deterministic quantization is
insufficient. The black points are codewords and the white
point is a sensed position in 2D space. The distances dA
and dB are similar, but the B codeword is slightly closer, so
deterministic quantization would choose it. In reality, however,
the sensed value is uncertain and so too are estimates of
the codewords themselves. The gray discs show a confidence
interval on the true position. For some values in the interval,
dA < dB and thus the correct quantization is A.
B. Noise is Gesture Specific
We collect the error distribution, mean, and variance of
the x, y, and z accelerometer readings for each gesture at
each position in sequence. Figures 4 and 5 plot the resulting
distributions for two examples from the “O” and “N” gestures.
The error distributions tend to be Gaussian mixture models
since the accelerometer measures x, y, and z coordinates.
Because the error is large and differs per gesture, it suggests
different models for each gesture should be more accurate.
The error distributions are not uniform.
Our statistical quantization approach explores multiple
codewords for the white point by assuming that points close
to the sensed value are likely to be the true position. In
other words, the probability that a given point is the true position decreases with its distance from the sensed value. We therefore choose codeword A with a probability inversely proportional to dA and B with a probability inversely proportional to dB.
V. HIGH FIVE: GESTURE TRAINING AND CLASSIFICATION
To accurately model and recognize gestures, we propose two
techniques: deterministic elliptical quantization and statistical
elliptical quantization.
Deterministic Elliptical Quantization During training, we
gather the statistical data on errors (distribution of error, mean,
and variance) for each position in gesture sequences, create
codewords, and train HMMs for each gesture. We map all the
observation sequences for each gesture to a unique codebook.
We construct an elliptical contour for our codebook based on
mean and variance of the observations. Figure 8 shows the
spherical equal-spaced codebook generated for all gestures
based on prior work [6] and our per-gesture ellipses for
three example gestures. In the figure, the acceleration data is
expressed in terms of g ' 9.8. If we hold the phone along the
Z axis, the scattered data has a bias of (0, 0, −1) which shows
the gravity component. If the user holds the phone upside-down, the scattered data has a bias of (0, 0, 1) and our statistically
generated codewords embrace the gravity component in each
case. Per-gesture ellipses better fit the data than a single sphere
for all gestures. We use 18 equally spaced points on the
elliptical contour to ease comparison with related work, which
uses 18 points on a spherical contour [6]. 18 observation states
strikes a balance between learning complexity and accuracy,
both of which are a function of the number of states. This
method is similar to multi-dimensional data scaling [31], but
as we showed in the previous section, standard projection is
a poor choice for this data. We construct elliptical models for
each gesture as follows.
(x − µx)² / σx² + (y − µy)² / σy² + (z − µz)² / σz² = 1    (5)
Fig. 8: Elliptical and spherical quantization for 3-D accelerometer data for “x-dir”, “y-dir”, and “N” gestures.
Gesture Training After measuring the noise and training
the codebooks for each gesture, we build an HMM model
for each gesture. The gesture recognition application takes as
input the 3-D accelerometer sequence for each gesture and
updates the HMM probabilities using the forward-backward
algorithm [5], [6], [15], [32]. We use Baum–Welch filters to
find the unknown parameters of each gesture’s HMM model
(i.e., aij and bjk ) [33]. Assuming P (Xt |Xt−1 ) is independent
of time t and assuming the probabilities of initial states is
πi = P (X1 = i), the probability of a certain observation at
time t for state j is given by
bj (yt ) = P (Yt = yt |Xt = j)
(6)
Baum–Welch filters use a set of Expectation Maximization (EM) steps. Assuming a random initial condition θ =
(A, B, π) for the HMM, Baum–Welch finds the local maximization state transition probabilities, output probabilities, and
state probabilities. That is, HMM parameters θ∗ which will
maximize the observation probabilities as follows.
(5)
The values µx , µy and µz are the expected value of raw
acceleration data for each gesture. We construct a different
codebook for each gesture. This process maps the accelerometer data to one of the 18 data points as shown in Figure 8.
The mapped data constructs the observed information in the
Hidden Markov Model.
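A minimal sketch (ours, not the authors' implementation) of this per-gesture elliptical quantization: 18 codewords are placed on the ellipsoid of Eq. (5), centred at the gesture's mean with per-axis radii given by its standard deviations, and each reading is mapped to the nearest codeword. The particular angular placement of the 18 points is our assumption.

import numpy as np

def elliptical_codebook(mu, sigma, n_azimuth=6, n_elevation=3):
    """mu, sigma: length-3 arrays (per-gesture mean and standard deviation of x, y, z)."""
    codewords = []
    for phi in np.linspace(-np.pi / 3, np.pi / 3, n_elevation):        # elevation angles
        for theta in np.linspace(0, 2 * np.pi, n_azimuth, endpoint=False):
            direction = np.array([np.cos(phi) * np.cos(theta),
                                  np.cos(phi) * np.sin(theta),
                                  np.sin(phi)])                        # unit vector
            codewords.append(mu + sigma * direction)                   # on the Eq. (5) surface
    return np.array(codewords)                                         # 18 codewords

def quantize(reading, codebook):
    """Deterministic quantization: index of the closest codeword."""
    return int(np.argmin(np.linalg.norm(codebook - reading, axis=1)))

codebook = elliptical_codebook(mu=np.array([0.0, 0.0, -1.0]),          # gravity bias kept
                               sigma=np.array([0.6, 0.4, 0.3]))
print(quantize(np.array([0.2, -0.1, -0.8]), codebook))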
θ∗ = argmaxθ P (Y |θ)
(7)
The algorithm iteratively updates A, B, and π to produce
a new HMM with a higher probability of generating the
observed sequence. It repeats the update procedure until it
finds a local maximum. In our deployment, we store one final
HMM for each gesture as a binary file on the phone.
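For reference, an unscaled single-sequence sketch (ours) of one Baum–Welch re-estimation step for a discrete HMM; a production implementation would use scaled forward and backward passes and pool several training sequences per gesture.

import numpy as np

def baum_welch_step(A, B, pi, X):
    """One EM update of theta = (A, B, pi) on one observation sequence X of codeword indices."""
    X = np.asarray(X)
    S, V = B.shape
    T = len(X)
    # E-step: forward and backward passes
    alpha = np.zeros((T, S))
    alpha[0] = pi * B[:, X[0]]
    for t in range(1, T):
        alpha[t] = (alpha[t - 1] @ A) * B[:, X[t]]
    beta = np.zeros((T, S))
    beta[-1] = 1.0
    for t in range(T - 2, -1, -1):
        beta[t] = A @ (B[:, X[t + 1]] * beta[t + 1])
    likelihood = alpha[-1].sum()                          # P(X | theta)
    gamma = alpha * beta / likelihood                     # P(state at t | X)
    xi = np.zeros((T - 1, S, S))                          # P(state i at t, state j at t+1 | X)
    for t in range(T - 1):
        xi[t] = (alpha[t][:, None] * A) * (B[:, X[t + 1]] * beta[t + 1]) / likelihood
    # M-step: re-estimate the parameters
    new_pi = gamma[0]
    new_A = xi.sum(axis=0) / gamma[:-1].sum(axis=0)[:, None]
    new_B = np.vstack([gamma[X == v].sum(axis=0) for v in range(V)]).T / gamma.sum(axis=0)[:, None]
    return new_A, new_B, new_pi, likelihood

def train_gesture_hmm(A, B, pi, X, iterations=50):
    """Iterate the update until a local maximum is (approximately) reached."""
    for _ in range(iterations):
        A, B, pi, _ = baum_welch_step(A, B, pi, X)
    return A, B, pi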
Our quantization approach differs from the prior work [6]
in two ways. First, since we use the statistics of each gesture,
there is no need to remove the gravity bias, because the center
of mass for all gesture data of a specific gesture includes the
gravity component. The second difference is that we chose a
different contour for each gesture in our data set. As Figure 8
shows, the elliptical contour for an x-dir gesture is completely
different from the contour for y-dir or N. In the spherical
contour, most of the data points from the accelerometer map
to a single codeword, eliminating a lot of information that is
useful for classification. Our approach reduces the quantization
error for different gestures since it is much less likely to map
each gesture to another gesture’s codebook and generate the
same sequence.
Statistical Gaussian Mixture Model (GMM) Quantization The key to our Statistical Elliptical Quantization approach is representing each gesture reading as a random variable that incorporates its sensor noise. For classification, our
statistical quantization approach uses the Gaussian distribution
mixture models based on the error model we observe for each
gesture during training. For example, Figures 4 and 5 show
that the probability distribution of distance of data mapped
to codei follows a Gaussian mixture model distribution with
three peaks, one each for the X, Y,
or Z coordinate. The probability of mapping a data point to
each codeword for a bivariate Gaussian noise distribution is
computed as follows:
P(codei) = [ (2π σ²_{i,x} σ²_{i,y} σ²_{i,z})^{−1/2} · exp( Σ_{k∈{x,y,z}} −(d_{i,k} − µ_{i,k})² / (2σ²_{i,k}) ) ]
           / [ Σ_{j=1}^{N} (2π σ²_{j,x} σ²_{j,y} σ²_{j,z})^{−1/2} · exp( Σ_{k∈{x,y,z}} −(d_{j,k} − µ_{j,k})² / (2σ²_{j,k}) ) ]    (8)

A mixture of three Gaussian distribution models maps to individual Gaussian models as follows:

/* Quantization code for a mixture of three Gaussian distributions. */
var acc = ReadAccelerometer();
var d = ||acc − codeword_i||;
if d < (µ1 + µ2)/2:
    d = N(µ1, σ1);
else if (µ1 + µ2)/2 < d < (µ2 + µ3)/2:
    d = N(µ2, σ2);
else:
    d = N(µ3, σ3)

This mapping produces a probability distribution over codewords for each reading. Sampling from this distribution creates multiple sequences of observation for the HMM, which then determines the most likely gesture from the entire distribution.

Algorithm 1: High Five: GMM Quantization
Data: Raw accelerometer data, HMM models for Gi ∈ G, inference threshold (thr)
Result: whether (gesture == Gi).Pr ≥ thr or (gesture != Gi).Pr ≥ thr
1  Most probable gesture = {};
2  for Gi ∈ G do
3      (gesture == Gi).Pr = 0;
4      while (gesture == Gi).Pr ≤ thr or (gesture != Gi).Pr ≤ thr do
5          X = map the accelerometer data sequence to Gi's quantization codebook using random quantization;
6          find P(X|Gi) from the Baum–Welch-trained HMM model;
7          P(Gi|X) = P(Gi) · P(X|Gi) / P(X);
8          add P(Gi|X) to Gi's distribution model;
9      if (gesture == Gi).Pr ≥ thr and (gesture == Gi).Pr ≥ MaxProb then
10         MaxProb = (gesture == Gi).Pr;
11         Most probable gesture = Gi;
12 return Most probable gesture;
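A compact sketch (ours) of the core of this statistical GMM quantization: for each reading, Eq. (8) turns the per-axis distances to every codeword and that codeword's trained error model into a probability, and a codeword is sampled instead of being chosen deterministically. All names and toy data below are illustrative.

import numpy as np

def codeword_probabilities(d, mu, sigma):
    """d, mu, sigma: arrays of shape (N_codewords, 3) for the x, y, z axes; implements Eq. (8)."""
    log_w = (-0.5 * np.log(2 * np.pi * np.prod(sigma ** 2, axis=1))
             - ((d - mu) ** 2 / (2 * sigma ** 2)).sum(axis=1))
    w = np.exp(log_w - log_w.max())                     # numerically safe normalisation
    return w / w.sum()

def sample_observation_sequence(distances, mu, sigma, rng):
    """One stochastic codeword sequence for a whole gesture trace."""
    return [int(rng.choice(len(mu), p=codeword_probabilities(d, mu, sigma)))
            for d in distances]

rng = np.random.default_rng(0)
mu = np.zeros((18, 3))
sigma = np.full((18, 3), 0.2)
distances = rng.normal(0.1, 0.2, size=(25, 18, 3))      # 25 readings, 18 codewords
print(sample_observation_sequence(distances, mu, sigma, rng))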
We next classify the generated sequence to find P(Gi|X), sample
until the probabilities converge, and then pick the most likely
sequence. When the algorithm completes, we have computed
the most likely HMM path for each Gi . We only consider Gi
with a probability above a threshold thr as potential gestures
and thus may not return a gesture. For those Gi above the
threshold, we return the one with the highest probability as
the most likely gesture. We explore thr values of 0.5 and
1/N and find that 0.5 works best.
Statistical Random Quantization For comparison, we also
implement a random quantizer to exploit error if training did
not or was not able to produce a gesture-specific error model.
This quantizer maps an observation randomly to codewords
depending on their distance to the data point. For example,
given four codewords, it randomly maps the gesture data
with probability inversely proportional to its distance to each codeword:
P(codei) = (1/di) / Σ_{j=1}^{N} (1/dj)    (9)
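A minimal sketch (ours) of the random quantizer: following Eq. (9), each codeword is weighted by the inverse of its distance to the reading and one codeword is sampled accordingly.

import numpy as np

def random_quantize(reading, codebook, rng):
    d = np.maximum(np.linalg.norm(codebook - reading, axis=1), 1e-9)
    p = (1.0 / d) / (1.0 / d).sum()                    # Eq. (9)
    return int(rng.choice(len(codebook), p=p))

rng = np.random.default_rng(1)
codebook = rng.normal(size=(18, 3))                    # placeholder codewords
print(random_quantize(np.array([0.1, -0.2, 0.05]), codebook, rng))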
Algorithm 2 uses the Random quantizer which implements
a Bayesian classification scheme that returns the probability
of each gesture given the observed sequence [34]. Given a set
of observation sequence X = {X1 , X2 , ..., Xn }, it computes
the probability of each gesture Gk as follows.
Gesture Classification The GMM quantization and
Random quantization algorithms appear in Algorithms 1
and 2, respectively. We implement these classifiers in the
Uncertain<T> programming language (described below), exploiting its first-class support for probability distributions.
Algorithm 1 shows our novel statistical GMM quantization.
Each step of the algorithm maps user data to a sequence
of observation states from the generated codebook during
training for each of N gestures in G = {G1 , G2 , ...GN }.
We treat the mapping independently for each data point in
Gi . (We also explored computing correlated mapping where
mapping the current 3-D data to one of the quantization
codewords depends on the previous 3-D mapping, which
further improves accuracy, but for brevity omit it). At each
step, we sample nearby codewords in Gi and weigh them by
their probability based on the GMM error model observed
during training to create a sequence of observation states.
P(Gk | X) = P(Gk) · P(X | Gk) / P(X)    (10)
The values P (X|Gk ) and P (Gk ) are produced by the Baum–
Welch training model for each individual gesture.
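The classification rule of Eq. (10) can be sketched as follows (ours, not the authors' code); the likelihood argument stands for any routine that evaluates P(X | θ), such as the forward-pass sketch in Section III.

def classify(X, models, priors, likelihood):
    """models: {gesture: (A, B, pi)}, priors: {gesture: P(G)},
    likelihood: function (A, B, pi, X) -> P(X | theta)."""
    posterior = {g: priors[g] * likelihood(*models[g], X) for g in models}
    total = sum(posterior.values())                    # P(X), the normalising constant
    posterior = {g: p / total for g, p in posterior.items()}
    best = max(posterior, key=posterior.get)           # most probable gesture
    return best, posterior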
VI. DISCUSSION
Our HMM model tracks a single HMM path during classification but builds many possible input observations from
a single trace of accelerometer data. In contrast, prior work
shows it is possible to use an HMM which tracks the k top
paths during classification [35]. It is interesting future work to
explore any experimental differences in such a formulation.
We use the raw accelerometer data as a feature given to
Algorithm 2: High Five: Random Quantization
Data: Raw accelerometer data, HMM models for Gi ∈ G
Result: The most probable gesture
1  MaxProb = 0;
2  Most probable gesture = {};
3  for Gi ∈ G do
4      map the accelerometer data to the quantization codebook for each gesture using deterministic quantization;
5      find P(X|Gi) from the Baum–Welch-trained HMM model;
6      P(Gi|X) = P(Gi) · P(X|Gi) / P(X);
7      if P(Gi|X) ≥ MaxProb then
8          MaxProb = P(Gi|X);
9          Most probable gesture = Gi;
10 return Most probable gesture;

/* Classification Code for Statistical Quantization. */
var acc = ReadAccelerometer();
Uncertain<int> gestures =
    // distribution over observations
    from obs in new StatisticalQuantizer(acc)
    // returns most likely gesture
    let gesture = Bayes.Classify(acc, obs)
    select gesture;
// T-test: more likely than not that
// this is the gesture labeled 0
if ((gestures == 0).Pr(0.5))
    Console.WriteLine("gesture=N");
Fig. 9: Statistical Quantization for a single gesture.
free the Uncertain<T> runtime from exactly representing a
distribution and let it rely on lazy evaluation and sampling to
ultimately determine the result of any query.
our HMM training and classification. However, there exists
prior work, especially in computer vision, that finds rotation-invariant or scale-invariant features [36], [37]. We did not use
rotation-invariant features because we want the capability to
define more gestures, e.g., “N” in the x-y and in the z-y plane
are distinct. However, more sophisticated features can further
improve our classification accuracy.
B. Statistical Quantization with Uncertain<T>
To implement statistical quantization, we express each gesture as a random variable over integer labels. Our implementation of StatisticalQuantizer(acc) (Figure 9) first
reads from the accelerometer and passes this observation to the
RandomQuantizer(acc) constructor which knows how
to sample from observations by randomly mapping analog
accelerometer data to discrete codewords to return a distribution over observations. The LINQ syntax lets the developer call existing code designed to operate on type T (i.e.,
Bayes.Classify which operates on concrete observations)
and further lets her describe how to lift such computation to
operate over distributions. In gesture recognition, the resulting
type of gestures is then an Uncertain<int> or a
distribution over gesture labels.
VII. IMPLEMENTATION
To help developers create models for problems in big
data, cryptography, and artificial intelligence, which benefit
from probabilistic reasoning, researchers have recently proposed a variety of probabilistic programming languages [12],
[38]–[42]. We use the Uncertain<T> programming language
to implement random quantization. We choose it because
Uncertain<T> is sufficiently expressive and automates inference, and thus significantly simplifies our implementation [12], [42], [43]. The remainder of this section gives background
on Uncertain<T>, the programming model that inspired and
supports our technique, and describes our implementation.
The Uncertain<T> runtime does not execute the lifted computations until the program queries a distribution’s expected
value or uses it in a conditional test. For example, when the
developer writes if ((gestures == 0).Pr(0.5)) the
Uncertain<T> runtime executes a hypothesis test to evaluate
whether there is enough evidence to statistically ascertain
whether it is more likely than not that the random variable
gestures is equal to the gesture with label 0. The runtime
samples from the leaves of the program and propagates
concrete values through any user-defined computation until
enough evidence is ascertained to determine the outcome
of a conditional. The Uncertain<T> runtime implements
many inference algorithms under the hood (rejection sampling,
Markov Chain Monte Carlo, etc.). For this domain, we found
no reason to prefer one over the other and so use rejection
sampling for all experiments.
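A rough Python analogue (ours) of what such a conditional query does: sample repeatedly from the gesture distribution and compare the observed proportion with the threshold. The actual Uncertain<T> runtime uses proper sequential hypothesis tests rather than a fixed sample budget.

import random

def more_likely_than_not(sample_gesture, target, threshold=0.5, n_samples=200):
    """sample_gesture: a function returning one sampled gesture label per call."""
    hits = sum(sample_gesture() == target for _ in range(n_samples))
    return hits / n_samples > threshold

print(more_likely_than_not(lambda: random.choice([0, 0, 1]), target=0))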
A. The Uncertain<T> Programming Model
Uncertain<T> is a generic type and an associated runtime
in which developers (i) express how uncertainty flows through
their computations and (ii) how to act on any resulting uncertain computations. To accomplish (i) a developer annotates
some type T as being uncertain and then defines what it
means to sample from the distribution over T through a
simple set of APIs. Consumers of this type compute on the
base type as usual or use LINQ primitives [44] to build
derived computations. The Uncertain<T> runtime turns these
derived computations into a distribution over those computations when the program queries it. Querying a distribution
for its expected value or executing a hypothesis test for
a conditional triggers a statistical test. Both of these queries
performed each gesture 20 times for a total of 3200
samples.
Prior studies [6], [11] have smaller data sets and very distinct
gesture patterns. In contrast, our data sets include gestures with
very similar patterns. For example, W and N in the WP data
set differ by about one stroke, and G9 and G11 differ by a
90◦ rotation, making them hard to differentiate.
These data sets represent a realistic amount of training for
individuals, because users, even paid ones, are unlikely to
perform the same gesture well 100s of times for training.
Training must be short and recognition must be effective
quickly to deliver a good user experience. To create sufficient
training data, we train each classifier with data from all the
users (20 for WP and 8 for SW). To assess the accuracy of
the gesture recognition algorithms, we randomly split the data
sets into 75% training data and 25% test data and repeat this
procedure 10 times.
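For completeness, a sketch (ours) of this evaluation protocol; train and recognition_rate stand for the per-gesture HMM training and classification steps described above.

import random

def evaluate(samples, train, recognition_rate, repeats=10, train_frac=0.75):
    rates = []
    for _ in range(repeats):
        shuffled = random.sample(samples, len(samples))
        cut = int(train_frac * len(shuffled))
        models = train(shuffled[:cut])                 # fit per-gesture HMMs
        rates.append(recognition_rate(models, shuffled[cut:]))
    return sum(rates) / repeats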
Fig. 10: Five gestures (N, O, W, x-dir, y-dir) in Windows Phone (WP) data set.
The High Five gesture recognition application We implement a Windows Phone gesture recognition application, called
High Five. Users train the system online by first specifying
a new gesture and then train the system by performing the
gesture at least 10 times. Users can perform any gesture they
want and then specify a corresponding action triggered by
the gesture (e.g., call Mom on M, send an Email on E). The
application has two modes: signaled, in which users open the
gesture recognition application first before making a gesture,
and dead start, which captures all device movements, and thus
is more likely than signaled recognition to observe actions
that are not gestures. We implement the system in Microsoft’s
Visual Studio 2015 for C# and the Windows Phone software
development kit (SDK) sensor API. We use the Uncertain<T>
libraries and runtime for our statistical quantizer by adding an
HMM API to Uncertain<T> that returns samples from HMM
distributions.
Fig. 11: Twenty gestures in Smart-Watch (SW) data set [14].
VIII. METHODOLOGY
This section describes the data sets, implementation details,
and algorithms that we use to evaluate statistical quantization.
We evaluate our algorithms and collect one data set on a
smart-phone Nokia Lumia 920 (Windows Phone 8). We use
the Windows SDK 8 API to read the 3-D accelerometer.
Gesture recognition algorithms We evaluate the following
gesture recognition algorithms.
Windows Phone (WP) Data Set We collect data from 10
men and 10 women performing 10 times each of the
5 gestures shown in Figure 10 on our Windows Phone
platform, for a total of 1000 gesture samples.
Deterministic Spherical Quantizer Wijee is the prior state-of-the-art [6]. It uses a traditional left-to-right HMM
with kmeans quantization and one spherical model for all
gestures [6]. We follow their work by limiting transitions
to four possible next codewords, such that Si is the only
possible next state from Sj where i ≤ j ≤ i + 3. (They
find that left-to-right and ergodic HMMs produce the
same results.) We extend their algorithm to train gesturespecific models using a unique codebook for each gesture.
Since the scattered data is different for each gesture, using
per-gesture codebooks offers substantial improvements
over a single codebook for all gestures.
Smart Watch (SW) Data Set We also use a publicly available data set consisting of the 20 gestures shown in
Figure 11 trained by eight people [14]. Each person
Deterministic Elliptical Quantizer This algorithm uses a
left-to-right HMM, elliptical quantization, and a unique
codebook for each gesture.
Data sets, training, and testing We collect our own data set
on the Windows Phone and use the publicly available Smart
Watch data set from Costante et al. [14], which together
make a total of 4200 gesture samples of 25 distinct gestures
performed by 28 people.
when the conditional threshold is 1/N or lower, the recognition rate is higher than deterministic elliptical quantization.
Statistical GMM delivers a similar threshold for precision.
Statistical GMM quantization offers a distinct and smooth
trade-off between precision and recall. Applications thus have
a range of choices for the conditional threshold from which
to choose that they can tailor to their requirements, or even
let users configure. For instance, when the user is on a bus,
she could specify higher precision to avoid false positives,
since she does not want the phone to call her boss with an
unusual movement of the bus. Prior work requires the training
algorithm specify this tradeoff, instead of the end developers
and users.
Statistical GMM Quantizer This algorithm uses a left-to-right HMM, statistical Gaussian mixture model (GMM)
elliptical quantization based on observed error, and a
unique codebook for each gesture. The runtime generates
multiple observation sequences by mapping the data sequences to multiple codeword sequences for each gesture
using a gaussian mixture model. With statistical quantization, the developer chooses a threshold that controls false
positives and negatives, which we explore below.
Statistical Random Quantizer This algorithm uses a left-to-right per-gesture elliptical HMM, statistical random
quantization, and a unique codebook for each gesture.
Figure 13 shows the recognition rate for each gesture in
the WP data set for all the classifiers. The deterministic elliptical quantizer improves substantially over the deterministic
spherical quantizer. Statistical GMM and random quantization
deliver an additional boost in the recognition rate. Both GMM
and random produce similar results within the standard deviation plotted in the last columns. On average, both statistical
GMM and random quantization deliver a recognition rate of
85 and 88%, respectively, almost a factor of two improvement
over deterministic spherical quantization.
IX. EVALUATION
This section compares the precision, recall, and recognition
rate of the gesture recognition algorithms. We show that statistical quantization is highly configurable and offers substantial
improvements in accuracy, recall, and recognition over other
algorithms. The other recognizers are all much less configurable and achieve lower maximum accuracy, recall, and/or
recognition in their best configurations. These experiments illustrate that a key contribution of statistical quantization is that
it has the power to offer both (1) highly accurate recognition
in the signaled scenario, and (2) significant reductions in false
positives in the dead-start scenario, thus matching a wide range
of application needs.
Recognition rates for dead start and as a function of
gestures This experiment explores the ability of the gesture
recognition algorithm to differentiate between no gesture and
a gesture since users do not signal they will perform a gesture.
For instance, a gesture M must both wake up the phone and
call your mom. In this scenario, controlling false positives
when you carry your phone in your pocket or purse is more
important than recall—you do not want to call your mom
unintentionally.
We explore the sensitivity of gesture classification accuracy
as a function of the number of gestures, using 2 to 20 SW
gestures. For all the algorithms, accuracy improves with fewer
gestures to differentiate. Statistical random quantization is
however substantially more accurate than the others for all
numbers of gestures. We further show that our approach is
relatively insensitive to the number of users in the training
data. Finally, we show how to easily incorporate personalization based on other factors, such as performing a subset of the
gestures, and the result further improves accuracy.
Accuracy as a function of the number of gestures The
more gestures the harder it is to differentiate them. To explore
this sensitivity, we vary the number of gestures from 2 to
20 and compare the four approaches. Figure 14 shows the
recognition rate for the deterministic spherical, deterministic
elliptical, statistical random, and statistical GMM quantizers as
a function of the number of gestures in the High Five application. All classifiers are more accurate with fewer gestures compared to more gestures. Increases in the number of gestures
degrades the recognition rate of the deterministic spherical
faster than compared to the other classifiers. Both deterministic
spherical and elliptical classification have high variance. The
statistical quantizers always achieves the best recognition rates,
but GMM has a lot less variance than random, as expected
since it models the actual error. For instance, GMM achieves
a 71% recognition rate for 20 gestures, whereas deterministic
spherical quantizer has a recognition rate of 33.8%. Statistical
GMM quantization has a 98% recognition rate for 2 gestures.
Recognition rates for signaled gestures In this first experiment, users open the High Five application and then perform
the gesture, signalling their intent. Figure 12 shows precision
(dashed lines) and recall (solid lines) for each of the 5 gestures
in distinct colors for the WP data set as a function of the
conditional threshold. Precision is the probability that a gesture
is correctly recognized and is an indication of false positives
while recall shows the probability of recognizing a performed
gesture and shows false negatives. The deterministic elliptical
quantizer in Figure 12(b) uses the domain specific knowledge
of each gesture during training and thus has higher precision
and recall compared to deterministic spherical quantization in
Figure 12(a).
User-dependent and user-independent gestures To explore the sensitivity of recognition to the training data, we
vary the number of users in the training data from 2 to 8. We
Statistical GMM quantization in Figure 12(c) offers further
improvements in precision and recall. Although the recall
curve goes down as a function of the conditional threshold,
[Fig. 12 (graphic): precision (dashed) and recall (solid) versus conditional threshold for the gestures N, O, W, x-dir, and y-dir; panels (a) deterministic spherical quantizer, (b) deterministic elliptical quantizer, (c) statistical GMM quantizer.]
Fig. 12: Precision and recall curves for gesture recognition algorithms.
[Fig. 13 (graphic): per-gesture recognition rates (N, O, W, x-dir, y-dir, average, std) for the deterministic spherical (Wijee), deterministic elliptical, statistical random, and statistical GMM quantizers.]
Fig. 13: Gesture recognition rates for WP data set.
[Fig. 14 (graphic): recognition rate with confidence intervals versus number of gestures (2 to 20) for the four quantizers.]
Fig. 14: Gesture recognition rate as a function of the number of gestures in the SW data set.
[Fig. 15 (graphic): classification accuracy before and after personalization for gestures G1 to G10.]
Fig. 15: Classification accuracy of deterministic elliptical quantizer with personalization using Uncertain<T>.

TABLE I: Classification accuracy of deterministic elliptical quantizer gestures 1, 3, 5, 7, 9 and 11 from the SW data set as a function of the number of users training the HMM.

Users     |  2     |  4     |  6     |  8
Accuracy  |  82.71 |  85.09 |  84.15 |  82.08
compare with Costante et al. [14] and Liu et al. [11] which
both perform this same experiment. We use six gestures from
the Costante et al. SW data set: gestures 1, 3, 5, 7, 9, and 11.
Costante et al. find that more users produces better accuracy,
whereas Liu et al. find more personalized training (fewer
users) works better. Table I presents accuracy for deterministic
elliptical quantization as a function of users. In contrast, our
recognition algorithm is not sensitive to the number of users
and has high accuracy for both user-dependent (fewer users)
and user-independent (more users) training.
personalization to the deterministic elliptical quantization.
Figure 15 shows how the distribution of gestures performed
by a specific user improves gesture recognition accuracy from
10 to 20% for each of the 10 gestures. Personalization could
also be combined with statistical GMM.
Balancing false positives and false negatives This experiment shows in more detail how the statistical quantization
balances false positives with false negatives. In contrast, the
deterministic elliptical quantizer always returns a classification
with either a high probability (near 1) or low probability (near
zero). Figure 16 shows a case study of classification of 10
gestures from the SW data set. The figure shows the recognition rate of a gesture whose recognition rate for deterministic
elliptical quantizer is 0.90 and for statistical random quantizer
and is 0.87. However, for the statistical random quantizer the
balance between false positives and false negatives occurs
at a higher threshold (near 0.5), which means that changing
the conditional threshold of the classifier can decrease false
negatives. However in the deterministic elliptical quantizer, the
balance between false positives and false negatives happens at
a lower conditional threshold, which means that the probability
Frequency-based personalization This section shows how
our system easily incorporates additional sources of domain-specific information to improve accuracy. Suppose the gesture
recognition application trains with 20 gestures from 8 people.
In deployment, the gesture recognition application detects that
the user makes 10 gestures with equal probability, but very
rarely makes the other 10 gestures. We prototype this scenario,
by expressing the user-specific distribution of the 20 gestures
as a probability distribution in the Uncertain<T> programming framework in the classification code. At classification
time, the runtime combines this distribution over the gestures
with the HMM to improve accuracy. This configuration adds
recognition finds the most likely sequence of hidden states
given a distribution over observations rather than a single observation. We express this new approach using Uncertain<T>,
a probabilistic programming system that automates inference
over probabilistic models. We demonstrate how Uncertain<T>
helps developers balance false positives with false negatives
at gesture recognition time, instead of at gesture training time.
Our new gesture recognition approach improves recall and
precision over prior work on 25 gestures from 28 people.
Fig. 16: Balancing false positives and false negatives with the
statistical random and GMM quantizer on 10 SW gestures.
public static bool Pr(this Uncertain<bool> source, double
Prob = 0.5, double Alpha = 0.1);
REFERENCES
[1] R. H. Liang and M. Ouhyoung. A real-time continuous gesture
recognition system for sign language. In IEEE Automatic Face and
Gesture Recognition. IEEE, 1998.
Fig. 17: The default values for .Pr() inference calls.
[2] T. Starner, J. Weaver, and A. Pentland. Real-time american sign language
recognition using desk and wearable computer based video. IEEE
Transactions on Pattern Analysis and Machine Intelligence, 1998.
[3] K. Hinckley. Synchronous gestures for multiple persons and computers.
In Proceedings of the 16th annual ACM symposium on User interface
software and technology, pages 149–158. ACM, 2003.
Fig. 18: Time elapsed (ms) for classification as a function of the number of gestures, for the deterministic elliptical quantizer and the statistical random quantizer with α = 0.1 and α = 0.2.
[4] Z. Ghahramani. An introduction to hidden Markov models and Bayesian
networks. International Journal of Pattern Recognition and Artificial
Intelligence, 2001.
[5] H-K Lee and J. H. Kim. An hmm-based threshold model approach for
gesture recognition. Pattern Analysis and Machine Intelligence, IEEE
Transactions on, 1999.
of false negatives is always higher in this classifier.
Cost of statistical random quantization While the statistical random quantizer gives us the flexibility of higher precision or recall,
quantizer gives us the flexibility of higher precision or recall,
it incurs more recognition time. We show that this overhead is
low in absolute terms, but high compared to a classifier that
does not explore multiple options. Figure 18 graphs how much
time it takes to recognize different number of gestures with
the deterministic elliptical quantization and statistical random
quantization techniques. On average the statistical random
quantizer is 16 times slower at recognizing 2–20 different
gestures, taking 23 ms to recognize 20 gestures and 6.5 ms
for two gestures. The statistical random quantizer uses .Pr()
calls to invoke statistical hypothesis tests, and thus samples the
computation many times. Figure 17 shows the default value
for the .Pr() function. If we change the value of α for the
statistical test from 0.1 to 0.2, the time overhead reduces from
28 ms to 23 ms. If the system needs to be faster, statistical
quantization trials are independent and could be performed in
parallel. This additional overhead is very unlikely to degrade
the user experience because in absolute terms, it is still much
less than the 100 ms delay that is perceptible to humans [45].
[6] T. Schlomer, B. Poppinga, N. Henze, and S. Boll. Gesture recognition
with a Wii controller. In ACM Conference on Tangible and Embedded
Interaction, 2008.
[7] X. Zhang, X. Chen, W. Wang, J. Yang, V. Lantz, and K. Wang. Hand
gesture recognition and virtual game control based on 3d accelerometer
and emg sensors. In ACM conference on Intelligent User Interfaces,
2009.
[8] J. O. Wobbrock, A. D. Wilson, and Y. Li. Gestures without libraries,
toolkits or training: A $1 recognizer for user interface prototypes. In
ACM User Interface Software and Technology. ACM, 2007.
[9] J. Mäntyjärvi, J. Kela, P. Korpipää, and S. Kallio. Enabling fast
and effortless customisation in accelerometer based gesture interaction.
In Proceedings of the 3rd international conference on Mobile and
ubiquitous multimedia. ACM, 2004.
[10] F. G. Hofmann, P. Heyer, and G. Hommel. Velocity profile based
recognition of dynamic gestures with discrete hidden Markov models. In
Gesture and Sign Language in Human-Computer Interaction. Springer,
1998.
[11] J. Liu, L. Zhong, J. Wickramasuriya, and V. Vasudevan. uwave:
Accelerometer-based personalized gesture recognition and its applications. Pervasive and Mobile Computing, 2009.
[12] J. Bornholt, T. Mytkowicz, and K. S. McKinley. Uncertain<T>: A first-order type for uncertain data. ASPLOS, 2014.
X. CONCLUSION
[13] T. Mytkowicz, J. Bornholt, A. Sampson, D. Z. Tootaghaj, and K. S. McKinley. Uncertain<T> Open Source Project. https://github.com/klipto/Uncertainty/.
The promise of novel applications for sensing humans and machine learning is only realizable if, as a community, we help developers to use these tools correctly. This paper demonstrates that human gestures are very noisy and degrade the accuracy of machine learning models for gesture recognition. To help developers deal more accurately with gesture noise, we introduce probabilistic quantization, wherein gesture recognition finds the most likely sequence of hidden states given a distribution over observations rather than a single observation. We express this new approach using Uncertain<T>, a probabilistic programming system that automates inference over probabilistic models. We demonstrate how Uncertain<T> helps developers balance false positives with false negatives at gesture recognition time, instead of at gesture training time. Our new gesture recognition approach improves recall and precision over prior work on 25 gestures from 28 people.
[14] G. Costante, L. Porzi, O. Lanz, P. Valigi, and E. Ricci. Personalizing a
smartwatch-based gesture interface with transfer learning. In Signal
Processing Conference (EUSIPCO), 2014 Proceedings of the 22nd
European. IEEE, 2014.
[15] S. Mitra and T. Acharya. Gesture recognition: A survey. Systems, Man,
and Cybernetics, Part C: Applications and Reviews, IEEE Transactions
on, 2007.
[16] A. Akl and S. Valaee. Accelerometer-based gesture recognition via
dynamic-time warping, affinity propagation, & compressive sensing.
In Acoustics Speech and Signal Processing (ICASSP), 2010 IEEE
International Conference on. IEEE, 2010.
[38] J. C Mitchell, A. Ramanathan, A. Scedrov, and V. Teague. A probabilistic polynomial-time process calculus for the analysis of cryptographic
protocols. Theoretical Computer Science, 2006.
[39] P. Xie, J. H. Li, X. Ou, P. Liu, and R. Levy. Using Bayesian networks for
cyber security analysis. In Dependable Systems and Networks (DSN),
2010 IEEE/IFIP International Conference on. IEEE, 2010.
[17] G. Niezen and G. P. Hancke. Gesture recognition as ubiquitous input
for mobile phones. In International Workshop on Devices that Alter
Perception (DAP 2008), conjunction with Ubicomp. Citeseer, 2008.
[40] L. Ngo and P. Haddawy. Answering queries from context-sensitive
probabilistic knowledge bases. Theoretical Computer Science, 1997.
[18] Z. Zhang. Microsoft kinect sensor and its effect. MultiMedia, IEEE,
2012.
[41] O. Kiselyov and C. Shan. Embedded probabilistic programming. In
Domain-Specific Languages. Springer, 2009.
[19] P. O. Kristensson, T. Nicholson, and A. Quigley. Continuous recognition
of one-handed and two-handed gestures using 3d full-body motion
tracking sensors. In Proceedings of the 2012 ACM international
conference on Intelligent User Interfaces. ACM, 2012.
[42] A. Sampson, P. Panchekha, T. Mytkowicz, K. S. McKinley, D. Grossman, and L. Ceze. Expressing and verifying probabilistic assertions. In
ACM Conference on Programming Language Design and Implementation (PLDI). ACM, 2014.
[20] D. Wilson and A. Wilson. Gesture recognition using the xwand.
Technical Report CMURI-TR-04-57, CMU Robotics Institute, 2004.
[43] C. Nandi et al. Debugging probabilistic programs. In Proceedings of
the 1st ACM SIGPLAN International Workshop on Machine Learning
and Programming Languages. ACM, 2017.
[21] D. Mace, W. Gao, and A. Coskun. Accelerometer-based hand gesture
recognition using feature weighted naïve Bayesian classifiers and dynamic time warping. In ACM Conference on Intelligent User Interfaces
(companion). ACM, 2013.
[44] E. Meijer, B. Beckman, and G. Bierman. Linq: Reconciling object,
relations and xml in the .net framework. In Proceedings of the 2006 ACM
SIGMOD International Conference on Management of Data, SIGMOD
’06, pages 706–706, New York, NY, USA, 2006. ACM.
[22] K. Murakami and H. Taguchi. Gesture recognition using recurrent neural
networks. In Proceedings of the SIGCHI conference on Human factors
in computing systems. ACM, 1991.
[45] Stack Overflow. Can a human eye perceive a 10 milliseconds latency in image load time? http://stackoverflow.com/q/7882713/39182/, accessed April 2016.
[23] VM Mantyla. Discrete hidden Markov models with application to
isolated user-dependent hand gesture recognition. VTT publications,
2001.
[24] A. H. Ali, A. Atia, and M. Sami. A comparative study of user dependent
and independent accelerometer-based gesture recognition algorithms. In
Distributed, Ambient, and Pervasive Interactions. Springer, 2014.
[25] J. A. Hartigan and M. A. Wong. Algorithm as 136: A k-means clustering
algorithm. Applied statistics, 1979.
[26] P. Paudyal, A. Banerjee, and S. KS Gupta. Sceptre: a pervasive,
non-invasive, and programmable gesture recognition technology. In
Proceedings of the 21st International Conference on Intelligent User
Interfaces, pages 282–293. ACM, 2016.
[27] O. J. Woodman. An introduction to inertial navigation. University of
Cambridge, Computer Laboratory, Tech. Rep. UCAMCL-TR-696, 14:15,
2007.
[28] A. B. C. Chatfield. Fundamentals of high accuracy inertial navigation,
volume 174. Aiaa, 1997.
[29] CH Robotics Project. http://www.chrobotics.com/library/.
[30] E. M. Foxlin. Motion tracking system, 2008. US Patent 7,395,181.
[31] J. B. Kruskal and M. Wish. Multidimensional scaling, volume 11. Sage,
1978.
[32] L. E. Baum and T. Petrie. Statistical inference for probabilistic functions
of finite state Markov chains. The annals of mathematical statistics,
1966.
[33] L. E. Baum, T. Petrie, G. Soules, and N. Weiss. A maximization
technique occurring in the statistical analysis of probabilistic functions
of Markov chains. The annals of mathematical statistics, 1970.
[34] S. Russell and P. Norvig. Artificial intelligence: A modern approach.
Prentice Hall, 1995.
[35] N. Seshadri and C. Sundberg. List viterbi decoding algorithms with
applications. IEEE Transactions on Communications, 42(234):313–323,
1994.
[36] D. G. Lowe. Object recognition from local scale-invariant features. In
Computer vision, 1999. The proceedings of the seventh IEEE international conference on, volume 2. IEEE, 1999.
[37] D. G. Lowe. Distinctive image features from scale-invariant keypoints.
International journal of computer vision, 60, 2004.
| 1 |
An Anthropic Argument against the Future Existence of
Superintelligent Artificial Intelligence
Toby Pereira
8th May 2017
Abstract
This paper uses anthropic reasoning to argue for a reduced likelihood that superintelligent AI will
come into existence in the future. To make this argument, a new principle is introduced: the Super-Strong Self-Sampling Assumption (SSSSA), building on the Self-Sampling Assumption (SSA) and
the Strong Self-Sampling Assumption (SSSA). SSA uses as its sample the relevant observers,
whereas SSSA goes further by using observer-moments. SSSSA goes further still and weights each
sample proportionally, according to the size of a mind in cognitive terms. SSSSA is required for
human observer-samples to be typical, given how greatly non-human animals outnumber humans.
Given SSSSA, the assumption that humans experience typical observer-samples relies on a future
where superintelligent AI does not dominate, which in turn reduces the likelihood of it being created
at all.
1. Introduction to Anthropic Reasoning
The fact that we exist on Earth, a life-permitting planet, might superficially seem like a stroke of
luck. However, it couldn’t really be any other way. All conscious observers must exist in a place
compatible with their existence. So if there is life at all in the universe, the fact that it will be
experiencing such a planet is inevitable. Similarly, we can only exist at a time in the universe’s and
the Earth’s history when conditions are right for life. So if life can only exist for a small slice of
time, either on Earth or in the universe as a whole, we should not be surprised or feel lucky that that
time is now.
But this is not just about absolutes – places and times where life can and cannot exist. Places and
times can exist on a scale from very life-friendly to very life-hostile. As a hypothetical (and
unrealistic) example, we could find out that there are several galaxies where life exists. But among
these, some could be more conducive to life than others and have many more planets with life on
them. To keep things simple, imagine that there are two types of galaxy, and that these two types are
equally common. One type is relatively life-friendly and averages 1000 planets with life per galaxy.
The other is more life-hostile and averages one planet with life per galaxy. For this example, we
will also assume that the make-up of the planets with life in each type of galaxy is roughly similar,
with the same probability of different types of life evolving and the same average number of living
organisms etc.
Assuming that we didn’t already know which type of galaxy the Milky Way was, we would reason that there is approximately a 99.9% chance that it is the more life-friendly type: of the roughly 1001 life-bearing planets per pair of galaxies, about 1000 sit in the friendlier type, giving a probability of 1000/1001 ≈ 0.999. If most conscious observers are in a certain type of galaxy, then you, as a conscious observer, should reason that you are more likely to be in this type of galaxy, given no other information. All other things being equal, you should expect yourself to be in the more typical situation.
This goes further than expecting your environment to be typical. It is also about expecting yourself
to be a typical observer. For example, imagine that there are two variants of a particular gene, which
cause people to see colours very slightly differently, and because the differences are so small, no-one knows which variant they have unless they go through a rigorous sight test, or indeed a genetic
test.
If 95% of people had one gene variant and 5% the other, then with no other information about your
own situation, you should reason that you are more likely to have the more common variant. And,
as advertised, this is part of you, rather than your environment.
The flip side of this is that if you didn’t know which was the more common variant, but you did
know which variant you had, you would reason that yours was probably the more common variant
(with 95% probability). This would then allow you to make predictions about which variants other
people had (you would expect most people to have the same variant as you). This is an important
point because it shows that you can make predictions about the rest of the world from your own
case rather than just the other way round. And this is the basis of anthropic reasoning.
2. SSA, SSSA, SSSSA and Boltzmann Brains
This brings us to the Self-Sampling Assumption (SSA), defined by Nick Bostrom as follows:
(SSA) One should reason as if one were a random sample from the set of all observers
in one’s reference class. (2002, p. 57)
The reference class is the class of entities that you should consider yourself to be a sample from.
When considering the probability of having a particular gene variant in the example above, the
statistics relate to humans, and you are a human yourself, so a sensible reference class to use would
be that of all humans. But you might also happen to know that eye colour affects the probability, so
you could narrow down the reference class further and include only people with the same eye
colour as you.
When evaluating your position as a conscious observer more generally, a more general reference
class would be required. For example, if there are many other races of advanced intelligent life in
the galaxy (life that has developed speech and writing, say), then we could make predictions about
them on the basis that we would expect human life to be fairly typical in most respects among this
intelligent life.
Depending on what question you ask, the reference class you use could potentially stretch across the
whole universe, backwards and forwards in time, and even into any other universes that might exist.
And given that we would expect a random sample to be fairly typical in most respects, we can use
our own case to make predictions about what or who else is out there, across the whole universe and
beyond. For example, if there are other universes out there, causally unconnected to our own, and
some of these have intelligent life, we would expect the intelligent life in this universe to be fairly
typical, given no information to the contrary. This would enable us to begin to make predictions
about the life in these other universes.
Anthropic reasoning of this sort is sometimes used to evaluate theories in physics. If we find
ourselves faced with a theory of reality that leads to us being very atypical conscious observers in
the universe, it is arguably grounds to be suspicious of that theory. For example, something that
often worries physicists is the idea of Boltzmann Brains, named after the physicist and philosopher
Ludwig Boltzmann. According to the second law of thermodynamics, entropy (roughly a measure
of disorder) only increases, or at least doesn’t decrease. But this is a statistical law, and there can be
local fluctuations where entropy can decrease by random chance, causing a more ordered state.
Most of these fluctuations will be very small and insignificant, and the newly found order will
quickly return to disorder.
But wait long enough and it is statistically probable that there will be a large local decrease in
entropy, and a complex object will randomly fluctuate into existence. And if you wait a really long
time, a fully-formed human brain complete with its own thoughts and memories will fluctuate into
existence. Such brains are known as Boltzmann Brains.
If a theory of physics leads to the conclusion that Boltzmann Brains significantly outnumber
human-like brains that have evolved normally on a planet such as Earth, then a consequence of this
physical theory is that human beings would be highly atypical observers. But by SSA, we should
reason that we probably are fairly typical observers, so this seems to suggest that Boltzmann Brains
do not in fact significantly outnumber normal human-like brains. Therefore, this would give us
grounds to doubt the theory of physics.
However, this should not lead us to start worrying that we are in fact Boltzmann Brains but without
knowing it. The vast majority of Boltzmann Brains would not be experiencing anything like a coherent
universe (given that their make-up is effectively random), whereas we are. If we were Boltzmann
Brains, we would be highly atypical ones, which should give us strong grounds to doubt that we are.
It might seem that these Boltzmann Brains could never pose a serious problem. They would clearly
be very few and far between, whereas there are billions of humans on planet Earth. There is also the
possibility of human-like intelligent beings existing on many other planets throughout the universe.
So a few of these Boltzmann Brains fluctuating in and out of existence should have no bearing on
whether we, as normal humans on planet Earth, are typical observers.
However, according to some theories, entropy will continually increase, until eventually the
universe reaches thermal equilibrium, or “heat death”. At this point, all the matter in the universe
will be uniformly spread out, with no galaxies, planets, or indeed life. Except, that is, for random
fluctuations. And if the universe simply exists eternally in this state of heat death, then however rare
these Boltzmann Brain fluctuations are, eventually they will come to outnumber “normal” brains,
and infinitely so.
This is a reason why some physicists reason that the universe cannot be like this, and that there
must be some other outcome, such as the universe ending in a big crunch after a finite time, or a
static end-point where no fluctuations can come about. See Carroll (2017) for a recent discussion of
Boltzmann Brains in physics.
A possible defence against the threat of Boltzmann Brains is that they could not survive in the
vacuum of outer space for more than a few seconds. This means that even if they outnumber human
brains when looking at absolute numbers, they may not do so when it comes to “observer-moments”. This brings us to Bostrom’s modification of SSA, the Strong Self-Sampling Assumption
(SSSA):
(SSSA) One should reason as if one’s present observer-moment were a random sample
from the set of all observer-moments in its reference class. (2002, p. 162)
This is arguably a more sophisticated version of SSA. If a conscious mind exists only for a fraction
of a second, then it is responsible for far less experience than one that exists for 80 years, and it
makes sense that this should be taken into account when determining how to define an observer-sample.
But of course a Boltzmann Brain does not need to survive for more than the tiniest fraction of a
second if there are enough of them, as would be the case if the heat-death state of the universe is
eternal. Their observer-moments would still infinitely outnumber those of normal brains. But this is
an aside, as this purported defence against the threat of Boltzmann Brains was really just a way to
introduce SSSA!
So far, so good. However, it would appear that we are actually not typical conscious observers, and
that our observer-moments are not typical observer-moments by any stretch of the imagination, and
we don’t need to speculate into the depths of space and time to see this.
We are just one of millions of species of animal on planet Earth, with myriad individual animals
among these. It seems reasonable to suggest that the individuals of many of these species have some
degree of consciousness themselves. Humans would, therefore, only comprise a tiny proportion of
the total number of conscious beings on planet Earth, even if we limited ourselves to vertebrates or
even just mammals. And being far more cognitively advanced than the animals of other species, we
are not typical conscious observers, but incredibly privileged ones. So does this present a problem
for our anthropic reasoning? How can we expect the logic to hold that we should be typical
conscious observers throughout time and space, when we are not typical conscious observers even
on our own planet right now?
There is a possible solution to this. While we may comprise only a tiny fraction of conscious beings
on planet Earth, our superior intelligence and cognitive power that seemingly makes us so atypical
means that we arguably have a greater amount of conscious experience than other animals. It could
be said that our minds take up a larger area of “consciousness-space”, suitably defined, than the
minds of other animals.
So instead of picking a conscious observer-moment at random, if we took a random point in
“consciousness-space-time” and used that as our observer-sample, we would be far more likely to
find ourselves in a human mind.
Bostrom seems to have hit upon the same idea, but without fully following it through. He points out
that if a being has a sped-up brain (like a computer running at a faster clock speed), then it will
experience more within a given time than another being. Bostrom suggests that such a being would
have more observer-moments per second. However, arguably its main implication is relegated to a
footnote.
One can ponder whether one should not also assign a higher sampling density to certain
types of observer-moments, for example those that have a greater degree of clarity,
intensity, or focus. (2002, p. 165)
Indeed, a mind with a faster clock speed is just one way of having more conscious experience per
unit time. And this brings us to a further extension of SSA and SSSA:
Super-Strong Self-Sampling Assumption (SSSSA) – One should reason as if one’s present observer-moment were a random sample from the set of all observer-moments in its reference class, with the probabilities weighted proportionally according to observer-moments’ size in consciousness-space-time.
This adds another layer of sophistication onto SSSA by taking into account all of the conscious
experience in the reference class together, rather than by separating it out into discrete units of
vastly different sizes (such as a mouse mind and a human mind) and giving all these units the same
probability weighting. It makes sense to say that human consciousness is more typical than mouse
consciousness if there is more of it in total, regardless of how many discrete units there are. As an
analogy, water is more typical on the surface of the Earth than land, even though there are more
countries than oceans.
Using SSSSA also means that there isn’t such a discontinuity at the transition between having no
consciousness and having a tiny bit of consciousness. Using SSA or SSSA would mean that as soon
as a being develops any consciousness, its observer-samples abruptly go from having zero
probability weighting to having a full weighting, even if the amount of consciousness is negligible.
Using SSSSA, there is a smooth transition where probability weighting is proportional to amount of
consciousness.
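To make the weighting concrete, here is a small Python sketch of how an observer-sample could be drawn under SSSA versus SSSSA. The population sizes and the per-moment measure phi are invented toy numbers purely for illustration; they are not estimates from this paper or from Integrated Information Theory.

import random

# Hypothetical inventory of observer-moments: (label, count, phi), where phi
# stands in for the size of each moment in consciousness-space.
populations = [
    ("mouse-like moments", 10_000_000, 0.01),
    ("human moments",          50_000, 1.00),
    ("AI++ moments",               10, 1000.0),
]

def weights(assumption):
    # SSA/SSSA weight every observer-moment equally; SSSSA weights by phi.
    return [count * (phi if assumption == "SSSSA" else 1.0)
            for _, count, phi in populations]

for assumption in ("SSSA", "SSSSA"):
    w = weights(assumption)
    total = sum(w)
    summary = {name: round(wi / total, 4)
               for (name, _, _), wi in zip(populations, w)}
    print(assumption, summary)

# A single observer-sample drawn under SSSSA:
print(random.choices([name for name, _, _ in populations],
                     weights=weights("SSSSA"), k=1)[0])

With these toy numbers, SSSA makes a mouse-like sample overwhelmingly probable, whereas SSSSA shifts a substantial share of the probability onto the larger minds, which is exactly the shift the definition is meant to capture.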
In this situation, where we are wondering why we are ourselves as opposed to any other type of
conscious being, it makes sense to consider the widest possible reference class: that of all conscious
observers across all space and time.
Of course, it is still an open question as to whether human consciousness does take up enough of
consciousness-space-time to make up for the lack of individual humans when compared to other
conscious animals, but it could arguably be tested if we had a workable measure of the size of a
being’s consciousness.
The Integrated Information Theory of consciousness has a measure of consciousness, called Φ (the
Greek letter ‘phi’). See e.g. Tononi (2012); Oizumi, Albantakis & Tononi (2014). In his book Phi: A
Voyage from the Brain to the Soul, Tononi (ibid.) introduces the idea of a qualiascope, which is a
device that enables you to see objects in terms of their conscious properties rather than their
physical properties. Viewed through a qualiascope, the size of an object is proportional to the
amount of consciousness it has (its measure of Φ) rather than its physical size. This measure, or
something along these lines, could be what we’re looking for when determining a definition of size
in consciousness-space.
SSSSA also allows us to predict that human-level intelligence evolves fairly frequently on planets
where animal-like life evolves. As discussed, when considering whether we are typical conscious
observers, we must consider all life in the universe. If human-level intelligence is very rare on
planets where animal-like life has evolved, then picking a random point in consciousness-space-time is very unlikely to find a human-level intelligent mind, putting us in an improbably privileged
position. This is why we would expect human-level intelligence to be relatively common.
3. Arguments against, and a Defence of, SSA, SSSA and SSSSA
SSA, SSSA and SSSSA seem to rely on the idea that our identity is somehow picked at random and
that we could have been someone else. And one could rebut this by simply saying that something
can only ever be itself, and wondering why it is this way is just meaningless. For example, you
wouldn’t look at a sheep and say that it could have been a cow. Similarly, it shouldn’t matter how
many of some other type of conscious being there are, because they don’t have any relevant causal
effect on you and your own identity.
A full discussion of the question “Why am I me?” and of personal identity generally could get very
philosophical and is beyond the scope of this paper, but briefly, the claim is not that your identity
was picked at random in this way; rather, reasoning as if you are a random observer-sample gives
us a good methodology that allows us to assess probabilities in a reasonable manner.
I have already given examples where it seems correct to use these principles, such as the case with
the gene variants that are possessed by 5% and 95% of the population respectively. The alternative
is to say that you are who you are and that it makes no sense to put a probability on it. This seems to
be a very unsatisfactory way to reason in this case.
Similarly, your confidence that you are not a Boltzmann Brain relies on anthropic reasoning of this
sort. If we dismiss this reasoning, then it doesn’t matter that a randomly formed brain in the vacuum
of outer space is very unlikely to be experiencing such a coherent illusion of an ordered world and
universe. It is simply a brute fact whether you are the mind of a Boltzmann Brain or the mind of a
brain of a normal human being that has evolved on planet Earth, and it makes no sense to assign a
probability to it. But clearly you live your life as if you are a normal human being, and probably do
not consider the possibility that your reality will crumble away in a matter of moments, and you
also probably think that this is rational as much as it is habitual.
Time-based examples that use SSA, SSSA or SSSSA are arguably even more counterintuitive. Even
if one accepts that human-like minds should take up a reasonable proportion of consciousness-space
at this particular time, to extend this to all time, particularly the future, could seem absurd, because
the future hasn’t happened yet and doesn’t exist, so could have no bearing on what we should be
experiencing now. There are various competing physical theories of time, some of which are in
opposition to this claim, but such a discussion is beyond the scope of this paper. However, a defence
of these time-based examples is possible without an appeal to physics. John Leslie gives the
following example:
Imagine an experiment planned as follows. At some point in time, three humans would
each be given an emerald. Several centuries afterwards, when a completely different set
of humans was alive, five thousand humans would each be given an emerald. Imagine
next that you have yourself been given an emerald in the experiment. You have no
knowledge, however, of whether your century is the earlier century in which just three
people were to be in this situation, or in the later century in which five thousand were to
be in it. Do you say to yourself that if yours were the earlier century then the five
thousand people wouldn’t be alive yet, and that therefore you’d have no chance of being
among them? On this basis, do you conclude that you might just as well bet that you
lived in the earlier century?
Suppose you in fact betted that you lived there. If every emerald-getter in the
experiment betted in this way, there would be five thousand losers and only three
winners. The sensible bet, therefore, is that yours is instead the later century of the two.
(1996, p. 20, italics in original)
Bostrom also defends the idea of making predictions about observers causally unconnected to you,
based on your own experiences:
To see why this “dependence on remote regions” is not a problem, it suffices to note that
the probabilities our theory delivers are not physical chances but subjective credences.
Those distant observers have zilch effect on the physical chances of events that take
place on Earth. Rather, what holds is that under certain special circumstances, your
beliefs about the distant observers could come to rationally affect your beliefs about
a nearby coin toss, say. (2002, p. 120, italics in original)
This paper will continue on the assumption that SSSSA is a sound principle.
4. Superintelligent Artificial Intelligence and the Argument against its
Future Existence
By SSSSA, we would not expect to find ourselves in an improbably privileged position by being far
more intelligent than the mind of an average point in consciousness-space-time, but the flip side of
this is that we would also not expect ourselves to be in an improbably impoverished position, by
being far less intelligent than the mind of an average point in consciousness-space-time. As well as
calling into question the abundance of superintelligent alien life forms in the universe, this brings us
neatly to superintelligent artificial intelligence (AI).
At some point in the future, we might develop a superintelligent AI that dwarfs our own
intelligence. David Chalmers (2010) defines AI++ as AI that dwarfs the intelligence of the most
intelligent human by at least as much as this human’s intelligence dwarfs that of a mouse, and it
seems sensible to use the same terminology here. A rigorously defined scale of intelligence would
be required for this, but this should be achievable. From now on in this paper, AI++ and
superintelligent AI can be considered interchangeable terms. If AI++ is conscious, we would expect
the mind of an AI++ to take up a much greater area in consciousness-space than a human mind. It
also seems likely that if one AI++ is created, then many will be created. And if many are created,
then we might also expect them to dominate total consciousness-space-time.
But by SSSSA, we should expect our human minds to be the minds of fairly typical points in
consciousness-space-time. This would mean that we should not expect AI++s to dominate
consciousness-space-time. Given that we should not expect them to dominate, this also arguably
reduces the likelihood of any AI++s being created at all. This is the central argument of the paper,
and we can put it more formally:
Premise 1: If we create a single example of AI++, then many will be created, either by us or other
AI++s.
Premise 2: If many AI++s are created, then they will dominate consciousness-space-time.
Premise 3: By SSSSA, AI++s probably do not dominate consciousness-space-time.
Conclusion: We will probably not create AI++.
It could be that instead of many individual AI++s being created, just one is created, and any further
advances simply contribute to the further enhancement of this one AI++. While this means that
premise 1 would not be correct as it stood, this example of a single super AI++ (AI+++, perhaps?) is
still likely to dominate consciousness-space-time just as much as if there are many lesser AI++s. As
it does not affect the conclusion, I will continue with the original premise.
Accepting this argument also has the knock-on effect of decreasing the likelihood that we will
create even human-level AI. Creating human-level AI is a precursor to creating AI++, and assuming
that AI++ is not created, we don’t know where along the line we will hit our limits. If we are
told that a road is less than 100 miles long, then this also decreases the probability that it is at least
10 miles long, compared to before we had been given this information. The proportion of roads that
are at least 10 miles long but less than 100 miles long is less than the proportion of roads that are at
least 10 miles long without this upper limit. Similarly, the probability of creating human-level AI
given that we are unlikely to create AI++ is less than the probability that we will create human-level
AI without this extra information. Of course, this is relative and it gives us no absolute probability
figure of us creating human-level AI in the future, and it could still be argued to be over 50%.
5. Attacks on Premise 1 – Simulations
If a single AI++ is created, then the likelihood of AI++s dominating consciousness-space-time does
seem to be high, but it might not be inevitable.
Looking at premise 1, the idea that many AI++s will be created and that they will dominate the
world, perhaps even wiping us out in the process, is just one possibility. It might be that relatively
few are created and that they can co-exist with us. (Although, even if there are a relatively small
number of individual AI++s created, one might still expect them to outlast the relatively fragile
human race and indeed animal life generally, so still end up dominating total consciousness-space-time.)
The extra computing power available in the future might not mostly be used for individual
integrated intelligent systems, but could be dispersed more widely, such as by being used to run
simulations. So this would mean that while AI++s might end up existing, the small number of
individual superintelligent systems could mean that lower intelligences such as us still take up a
reasonable proportion of total consciousness-space-time.
It is a fairly popular idea that we ourselves could be part of a simulation (e.g. Bostrom, 2003),
perhaps being run by a superintelligent AI. The idea is that once a computer is powerful enough, it
can simulate a universe, including the life in it. And in this simulation there would potentially be
further simulations and so on. If this is the case, then almost all universes would be simulated
universes, meaning that we would almost certainly be living in a virtual universe ourselves. This
could also mean a proliferation of human-level intelligence, which, along with the fact that these
complex simulations would use computing power at the expense of AI++s, could counteract the
potential consciousness-space-time dominance of AI++s.
However, a simulation can never fully represent a universe as complex as the one that it resides in,
at least not in real time. To do so would require the same resources that the universe contains, but
within a much smaller subset of that universe, which is not possible. The caveat about real time is
important because a computer could simulate something more complex than itself, as long as we’re
not worried about it running much more slowly than real time. But we are. For any given amount of
time, a computer running a simulation (or any number of simulations) cannot perform any more
operations than it can when not running a simulation. There is no free lunch. This is also why
computers emulated by your home PC are always far less powerful than the PC itself. A modern-day PC can emulate a computer from the 1980s, for example, but you’ll never see it running an
emulation of itself or a more powerful machine.
As a more extreme case, imagine if our universe was simulated on a computer inside our own
universe. To be an accurate simulation, that universe would have to also contain a simulation of
itself, and so on, leading to an infinite regress. It would therefore take an infinite amount of
computing power to manage it. But to say that a computer cannot simulate the entire universe in
real time is a vast understatement. It can’t simulate the room it’s sitting in in real time, because the room contains the computer running the simulation, so this would still lead to the same infinite regress.
If we want to run a simulation of a universe, this can be a less complex universe than our own, or
one that runs slowly. Either way, it means that for any finite amount of time we leave it running for,
more will happen in “real life” than in the simulation, giving us reason to doubt that we are in a
simulation ourselves.
Of course, this doesn’t mean that we cannot be living in a simulated reality. But each simulation,
and simulation of a simulation etc., is likely to have fewer and fewer conscious minds as the
computing resources diminish the further along the line you go.
It is actually still possible to have a simulation that has more conscious beings than the reality it is
represented in, by concentrating more of its power on the simulated beings themselves and less on
the rest of the detail of the simulated universe. But it is likely that achieving this would give these
beings a rather impoverished existence without much of a detailed outside world, and there is no
reason to suggest that we are in such a simulation.
Even if some future computing power is dedicated to running simulations, given the likely
limitations of such simulations, it is unlikely that any conscious beings in these simulations will
prevent AI++s from dominating consciousness-space-time. By SSSSA, something is likely to
prevent them, but we need to look beyond simulations.
I’m not going to try to consider every possibility of why we might create only a relatively small
number of AI++s on the assumption that we develop the technology to be able to create them at all.
It might be that there are other uses for future computing resources that sufficiently take away from
AI++s. I discussed simulations specifically because it is an idea that has gained some currency. But
the point is that even if one can envisage further scenarios to challenge premise 1, it still seems to
be a fairly plausible premise at present. So the fact that SSSSA suggests that there probably won’t
be a proliferation of AI++s means that this should overall lower our credence that we will create a
single AI++, unless we can find fault with premise 2.
6. Attacks on Premise 2 – Chinese Room
There is an argument that AI++s wouldn’t actually be conscious, so their existence would have no
effect on whether we are typical conscious observers or not. This is an attack on premise 2 above:
that the creation of a large number of AI++s would cause them to dominate consciousness-space-time. John Searle’s famous Chinese Room argument (e.g. 1980) is an attempt to demonstrate that no
digital computer, however powerful, could be conscious.
Searle argues that everything that digital computers do is blind symbol manipulation, and that no
amount of this can lead to real understanding, as goes on in our brains. Searle compares a computer
to a person sitting inside a room who is given questions in Chinese, which are being passed through
a hole. The person in the room understands no Chinese, but he has instructions, written in English,
explaining what to do with the Chinese writing. Following the instructions leads him to produce the
correct output, which is also in Chinese, although he still has no understanding of it. He passes the
written output back out through the hole. So it is possible for someone outside the room to have a
conversation with the Chinese Room, in Chinese, despite there being no-one in the room who
understands Chinese. Searle says that there is no understanding of Chinese in the Chinese Room
and, analogously, there is no understanding in a digital computer.
This argument is not generally accepted and I won’t go into it in great detail here, although I discuss
it in more detail in my book (Pereira, 2014). But in simple terms, individual neurons blindly fire
without having any idea of the overall picture, and that does not seem to put a stop on human
consciousness, and there is no obvious reason why blind symbol manipulation in a computer is any
less likely to bring about consciousness than blind neuronal firings. The Systems Reply to the
Chinese Room argument says that it’s the system as a whole that is conscious rather than the
individual units. And the person in the Chinese Room is just one such unit, just as the neurons are
individual units in our brains. So the fact that the person in the Chinese Room has no conscious
understanding of Chinese is irrelevant to whether there is any conscious understanding of Chinese
at all, in the same way that the lack of consciousness in individual neurons in a normal human brain
is irrelevant to whether there is any consciousness in this brain.
On top of this, Searle does accept that the human brain is a machine of sorts, but just not of the
same type as a digital computer, so a conscious superintelligent AI could still be created by using
the same or a similar method to our biology. So accepting Searle’s Chinese Room argument does
not mean that a conscious AI++ will not be created.
While potential challenges to premises 1 and 2 could come from many angles, their mere
plausibility, along with acceptance of SSSSA, is enough to at least decrease our credence that we
will create superintelligent AI.
7. Concluding Remarks
Nothing in this paper demonstrates that we will categorically not create superintelligent AI, but
taking the central argument seriously should lead us to ask questions about the likely future of AI
development. The argument itself also does not provide us with any particular obstacle to creating
superintelligent AI; it merely suggests that there might be such an obstacle. It could be that we end
up destroying ourselves, that it is much harder to create than some people imagine, or something
else entirely. But to be clear, the implication is not merely that humans on Earth will probably not
create superintelligent AI, but that when we consider human-level intelligent beings in the universe
as a whole, and in any other universes that might exist, we should not expect enough of these races
to develop superintelligent AI so that it dominates consciousness-space-time. The point is that we
should expect human-like consciousness to take up a reasonable proportion of consciousness-space-time when considered as a whole, not just on Earth. Superintelligent AI may well be a rare
development among intelligent life across the whole universe, and indeed beyond.
References
Bostrom, N. 2002. Anthropic Bias: Observation Selection Effects in Science
and Philosophy. New York: Routledge.
Bostrom, N. 2003. Are you living in a computer simulation? Philosophical Quarterly, 53 (211), pp.
243-255.
Carroll, S. M. 2017. Why Boltzmann brains are bad. arXiv:1702.00850 [hep-th]
Chalmers, D. J. 2010. The singularity: a philosophical analysis. Journal of Consciousness Studies,
17, pp. 7-65.
Leslie, J. 1996. The End of the World: The Science and Ethics of Human Extinction. London:
Routledge.
Oizumi, M., Albantakis, L., & Tononi, G. 2014. From the phenomenology to the mechanisms of
consciousness: integrated information theory 3.0. PLoS Computational Biology, 10 (5),
e1003588.
Pereira, T. 2014. Stuff and Consciousness: Connecting Matter and Mind. Rayne, UK: Toby Pereira.
Searle, J. R. 1980. Minds, brains, and programs. Behavioral and Brain Sciences, 3 (3), pp. 417-457.
Tononi, G. 2012. Phi: A Voyage from the Brain to the Soul. New York: Pantheon Books.
| 2 |
DEEP WORD EMBEDDINGS FOR VISUAL SPEECH RECOGNITION
Themos Stafylakis and Georgios Tzimiropoulos
Computer Vision Laboratory, University of Nottingham, UK
arXiv:1710.11201v1 [] 30 Oct 2017
ABSTRACT
In this paper we present a deep learning architecture for extracting word embeddings for visual speech recognition. The
embeddings summarize the information of the mouth region
that is relevant to the problem of word recognition, while
suppressing other types of variability such as speaker, pose
and illumination. The system is comprised of a spatiotemporal convolutional layer, a Residual Network and bidirectional LSTMs and is trained on the Lipreading in-the-wild
database. We first show that the proposed architecture goes
beyond state-of-the-art on closed-set word identification, by
attaining 11.92% error rate on a vocabulary of 500 words. We
then examine the capacity of the embeddings in modelling
words unseen during training. We deploy Probabilistic Linear Discriminant Analysis (PLDA) to model the embeddings
and perform low-shot learning experiments on words unseen
during training. The experiments demonstrate that word-level
visual speech recognition is feasible even in cases where the
target words are not included in the training set.
Index Terms— Visual Speech Recognition, Lipreading,
Word Embeddings, Deep Learning, Low-shot Learning
1. INTRODUCTION
Automatic speech recognition (ASR) is witnessing a renaissance, which can largely be attributed to the advent of deep
learning architectures. Methods such as Connectionist Temporal Classification (CTC) and attentional encoder-decoder
facilitate ASR training by eliminating the need for frame-level
senone labelling, [1] [2] while novel approaches deploying
words as recognition units are challenging the conventional
wisdom of using senones as recognition units, [3] [4] [5] [6].
In parallel, architectures and learning algorithms initially proposed for audio-based ASR are combined with powerful computer vision models and are finding their way to lipreading
and audiovisual ASR, [7] [8] [9] [10] [11] [12] [13].
Motivated by this recent direction in acoustic LVCSR of
considering words as recognition units, we examine the capacity of deep architectures for lipreading in extracting word
embeddings. Yet, we do not merely address word identification with large amounts of training instances per target word;
we are also interested in assessing the generalizability of these
embeddings to words unseen during training. This property is
crucial, since collecting several hundreds of training instances
for all the words in the dictionary is impossible.
To this end, we train and test our architecture on the
LipReading in-the-Wild database (LRW, [14]), which combines several desired properties, such as a relatively high number of target words (500), a high number of training examples
per word (between 800 and 1000), high speaker and pose
variability, non-laboratory recording conditions (excerpts
from BBC-TV) and target words that are part of segments
of continuous speech of fixed 1.16s duration. We examine
two settings; standard closed-set word identification using
the full set of training instances per target word, and low-shot
learning where the training and test words come from disjoint
sets. For the latter setting, a PLDA model is used on the embedding domain that enables us to estimate class (i.e. word)
conditional densities and evaluate likelihood ratios. Our proposed architecture is an improvement of the one we recently
introduced in [15] which obtains state-of-the-art results on
LRW even without the use of word boundaries.
The rest of the paper is organized as follows. In Sect. 2
we provide a detailed description of the architecture, together
with information about the training strategy and the use of
word boundaries. In Sect. 3.1 we show results on word identification obtained when the model is trained on all available instances, while in Sect. 3.2 we present results on two low-shot learning experiments. Finally, conclusions and directions
for future work are given in Sect. 4.
2. PROPOSED NETWORK ARCHITECTURE
In this section we describe the network we propose, together
with details regarding the preprocessing, the training strategy
and loss function.
2.1. Detailed description of the network
The proposed architecture is depicted in Fig. 1 and it is an
extension of the one we introduced in [15]. The main differences are (a) the use of a smaller ResNet (18 rather than 34 layers), which reduces the number of parameters from ∼24M to ∼17M, (b) the use of a pooling layer for aggregating information across time steps, extracting a single embedding per video, (c) the use of dropouts and batch normalization at the back-end, and (d) the use of word boundaries, which we pass to the backend as an additional feature.
2.1.1. ResNet with spatiotemporal front-end
The frames are passed through a Residual CNN (ResNet),
which is an 18-layer convolutional network with skip connections and outputs a single 256-dimensional feature vector per time step, i.e. a
T × 256 tensor (T = 29 in LRW). There are two differences
from the ImageNet 18-layer ResNet architecture, [16]; (a) the
first 2D convolutional layer has been replaced with a 3D (i.e.
spatiotemporal) convolutional layer with kernel size 5 × 7 × 7
(time×width×height) and the same holds for the first batch
normalization and max pooling layers (without reducing the
time resolution), and (b) the final average pooling layer (introduced for object recognition and detection) has been replaced
with a fully connected layer, which is more adequate for face
images that are centred. The model is trained from scratch,
since pretrained models cannot be deployed due to the spatiotemporal front-end.
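As an illustration of this front-end, the following PyTorch-style sketch wires a 5×7×7 spatiotemporal stem in front of the residual stages of a standard 18-layer ResNet and replaces the final average pooling with a fully connected layer producing 256 features per time step. The kernel size and output dimensionality follow the text; the strides, padding, 112×112 input resolution and the reuse of torchvision's resnet18 as the 2D trunk are assumptions of this sketch rather than details taken from the authors' code.

import torch
import torch.nn as nn
from torchvision.models import resnet18

class SpatiotemporalFrontend(nn.Module):
    def __init__(self, out_dim=256):
        super().__init__()
        # 3D (spatiotemporal) stem: conv, batch norm and max pooling keep the
        # time resolution while downsampling the spatial dimensions.
        self.stem = nn.Sequential(
            nn.Conv3d(1, 64, kernel_size=(5, 7, 7),
                      stride=(1, 2, 2), padding=(2, 3, 3), bias=False),
            nn.BatchNorm3d(64),
            nn.ReLU(inplace=True),
            nn.MaxPool3d(kernel_size=(1, 3, 3), stride=(1, 2, 2), padding=(0, 1, 1)),
        )
        trunk = resnet18()            # untrained; only the residual stages are reused
        self.trunk = nn.Sequential(trunk.layer1, trunk.layer2,
                                   trunk.layer3, trunk.layer4)
        # Fully connected layer instead of global average pooling; 512*4*4
        # assumes 112x112 input crops.
        self.fc = nn.Linear(512 * 4 * 4, out_dim)

    def forward(self, x):             # x: (B, 1, T, H, W) grey-scale mouth ROIs
        x = self.stem(x)              # (B, 64, T, H', W')
        b, c, t, h, w = x.shape
        x = x.transpose(1, 2).reshape(b * t, c, h, w)   # fold time into the batch
        x = self.trunk(x)             # (B*T, 512, 4, 4)
        x = self.fc(x.flatten(1))     # (B*T, 256)
        return x.view(b, t, -1)       # (B, T, 256)

feats = SpatiotemporalFrontend()(torch.randn(2, 1, 29, 112, 112))
print(feats.shape)                    # torch.Size([2, 29, 256])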
2.1.2. Backend, embedding layer and word boundaries
The backend is composed of a two-layer BiLSTM followed by an average pooling layer that aggregates temporal information, enabling us to extract a single 512-dimensional representation vector (i.e. embedding) for each video. The two-layer BiLSTM differs from the usual stack of two BiLSTMs; we obtained significantly better results by concatenating the two directional outputs only at the output of the second LSTMs. The backend receives as input the collection of 256-dimensional features extracted by the ResNet (CNN features), concatenated with a binary variable indicating whether the frame lies inside or outside the word boundaries. We choose to pass
the word boundaries as a feature because (a) dropping out potentially useful information carried in the out-of-boundaries
frames is not in the spirit of deep learning, and (b) the gating
mechanism of LSTMs is powerful enough to make use of it
in an optimal way.
Dropouts with p = 0.4 are applied to the inputs of each
LSTM (yet not to the recurrent layer, see [17]), with the mask
being fixed across features of the same sequence. Finally,
batch normalization is applied to the embedding layer, together with a dropout layer with p = 0.2, [18].
Fig. 1. The block-diagram of the proposed network.
2.2. Preprocessing, loss and optimizer
The preprocessing and data augmentation are identical to [15]. Moreover, as in [15], we start training the network using a simpler convolutional backend, which we subsequently replace with the LSTM backend once the ResNet is properly initialized¹. Contrary to [15], we use the Adam optimizer [19], with an initial learning rate of 3 × 10⁻³ and a final one of 10⁻⁵; we halve the learning rate when no progress is attained for 3 consecutive epochs on the validation set. The algorithm typically converges after 50-60 epochs. We train the network using the cross-entropy criterion with a softmax layer over the training words. This criterion serves both tasks we examine, i.e. closed-set word identification and low-shot learning, while its generalizability to unseen classes is in general equally good compared to other pairwise losses (e.g. contrastive loss), [3] [20] [21] [22].
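The training schedule described above can be approximated in PyTorch as follows; the ReduceLROnPlateau scheduler stands in for the halve-on-stagnation rule, and the placeholder model and validation loss are illustrative only.

import torch

model = torch.nn.Linear(10, 2)   # placeholder for the frontend+backend above
optimizer = torch.optim.Adam(model.parameters(), lr=3e-3)
scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(
    optimizer, mode="min", factor=0.5, patience=3, min_lr=1e-5)
criterion = torch.nn.CrossEntropyLoss()   # softmax cross-entropy over words

for epoch in range(60):                   # typically converges in 50-60 epochs
    # ... one pass over the training set with `criterion` would go here ...
    val_loss = 1.0 / (epoch + 1)          # stand-in for the validation loss
    scheduler.step(val_loss)              # halves the lr after 3 stagnant epochs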
¹ Code and pretrained models in Torch7 are available at https://github.com/tstafylakis/Lipreading-ResNet

3. EXPERIMENTAL RESULTS
We demonstrate the effectiveness of the proposed architectures with respect to two experimental settings. The first is the standard closed-set identification task, in which the network is trained with all available instances per target word (between 800 and 1000, [14]). The second setting aims to address the problem of word recognition on words unseen during training. The few training instances of the target words are merely used for estimating word-conditional densities on the embedding domain, via PLDA. To this end, the network is trained using a subset of 350 words of LRW and the test pairs are drawn from the remaining 150 words.
3.1. Closed-set word identification
For our first experiments in word identification, the reduced
word set (consisting of the 350 words out of 500) will be used
for both training and testing. These networks will also be used
for low-shot learning on the remaining 150 words. Our proposed network is retrained and tested on the full 500-word
set.
3.1.1. Baseline and state-of-the-art
We compare our architecture with two approaches which, to our knowledge, are the two best-performing approaches on LRW. The first is proposed in [7], and it deploys an encoder-decoder with a temporal attention mechanism. Although the architecture is designed to address the problem of sentence-level recognition, it has also been tested on the
LRW database, after fine-tuning on the LRW training set. The
whole set of experimental results can be found in [7] and
the results on LRW are repeated here in Table 1 (denoted by
Watch-Attend-Spell). The second architecture is introduced
by our team in [15] and its differences with the proposed one
have been discussed above. The experimental results on LRW
are given in Table 1 (denoted by ResNet-LSTM). Both experiments use the full set of words during training and evaluation
(i.e. 500 words).
System                    Top-1 (%)   Top-5 (%)
Watch-Attend-Spell [7]    23.80       –
ResNet-LSTM [15]          17.03       3.72
Table 1. Baseline and state-of-the-art results on the full set
(500 words).
3.1.2. Experiments on the reduced set of words
For the first experiment we use the proposed architecture
without dropouts or batch normalization at the backend. The
results are given in Table 2 (denoted by N1) and the network
attains 13.13% error rate on the reduced set. For the second
experiment (denoted by N2) we add dropouts to the backend
but again we do not apply batch normalization to the embedding layer. The error rate drops to 12.67%, showing the gains
by applying dropouts at the backend. The next configuration
uses both dropouts and batch normalization at the backend,
and a single BiLSTM layer (denoted by N3). The network
attains 12.59% error rate on the reduced set, showing that
good results can be obtained even with a single BiLSTM layer.
In the next configuration we experimented with extracting the
embedding from the last output of the BiLSTM (as proposed
in [23]), rather than with average pooling across all time
steps. The network attains 12.15% error rate and it is denoted
by N4. The next configuration is the proposed architecture
without the use of word boundaries. The network (denoted
by N5) attains 15.23%, showing that the network yields good
results even without specifying the boundaries of the target
words. Finally, the proposed architecture (denoted by N6)
attains 11.29% error rate on the reduced set, which is clearly
better than the other configurations examined. Moreover, by
comparing N6 with N2 we notice the strength of batch normalization at the embedding layer. We should also mention
that we experimented with the typical stacking approach of
BiLSTM. In this case, the outputs of the first BiLSTM are
concatenated and used as input to the second BiLSTM. The
network failed to attain good results (error rates above 20%),
despite our efforts to tune parameters such as learning rate
and dropout probabilities.
Net   #L   WB   DO   BN   EM   Top-1 (%)   Top-5 (%)
N1    2    X              A    13.13       2.26
N2    2    X    X         A    12.67       2.10
N3    1    X    X    X    A    12.59       2.05
N4    2    X    X    X    L    12.15       1.89
N5    2         X    X    A    15.23       2.87
N6    2    X    X    X    A    11.29       1.74
Table 2. Results on the reduced set (350 words) for various network configurations. Abbreviations: #L: number of
BLSTM layers, WB: use of word boundaries, DO: use of
dropouts at the backend, BN: use of batch normalization at the
embedding, EM: embedding extracted using average pooling
(A) or from last time step (L).
3.1.3. Experiments on the full set of words
The networks N5 and N6 are retrained from scratch and
scored on the full set, and their performance is given in Table
3. Compared to the current state-of-the-art we observe an
absolute improvement equal to 5.11% using about 2/3 of the
parameters. The LSTM is indeed capable of learning how to
use the word boundaries, without having to drop out out-of-boundaries frames or to apply frame masking. Even without
word boundaries though, our new architecture yields 1.36%
absolute improvement over [15]. Finally, our architecture
halves the error rates attained by the baseline (attentional
encoder-decoder, [7]).
Net   #L   WB   DO   BN   EM   Top-1 (%)   Top-5 (%)
N5    2         X    X    A    15.67       3.04
N6    2    X    X    X    A    11.92       1.94
Table 3. Results on the full set (500 words) for various network configurations. Abbreviations same as in Table 2.
3.2. Low-shot learning experiments
In this set of experiments we assess the capacity of the embeddings in generalizing to words unseen during training. To
this end, we assume few instances for each of the 150 unseen
words. These words are not included in the training set of the
architecture (which is composed of 350 words) and they are
merely deployed to estimate shallow word-dependent models
on the embedding space. We design two experiments, namely
closed-set identification and word matching.
3.2.1. PLDA modeling of embeddings
We model the embeddings using PLDA, [24]. We train a
PLDA model with expectation-maximization on the set of
350 words, drawn from the test set of LRW (50 instances per
word, i.e. 17500 training instances). PLDA is chosen due to
its probabilistic nature, which enables us to form likelihood
ratios, which are extensively used in biometric tasks such as speaker and face verification, [25] [26] [27]. Its parameters are defined by P = (µ, V, Σ), where µ is the mean value, V is a matrix that models the word subspace, and Σ is a full symmetric positive definite matrix modelling the within-class variability.
The PLDA generative model is the following:

    xi = µ + V yci + εi ,    (1)

where xi is an embedding in Rdx belonging to class ci , yci ∼ N (0, I) is a random vector in Rdy shared by all instances of the same class, εi ∼ N (0, Σ) is a random vector in Rdx , and dx ≥ dy . In the following experiments we use dx = 512 and dy = 200.
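As an illustration of Eq. (1), the following sketch draws synthetic embeddings from the PLDA generative model with numpy; the toy dimensions and the identity choice of Σ are assumptions for the example only.

```python
import numpy as np

def sample_plda(mu, V, Sigma, n_classes=5, per_class=50, seed=0):
    """Draw synthetic embeddings from x_i = mu + V y_{c_i} + eps_i,
    with y_c ~ N(0, I) shared within a class and eps_i ~ N(0, Sigma)."""
    rng = np.random.default_rng(seed)
    d_x, d_y = V.shape
    L = np.linalg.cholesky(Sigma)                  # for eps ~ N(0, Sigma)
    X, labels = [], []
    for c in range(n_classes):
        y_c = rng.standard_normal(d_y)             # one latent vector per class
        for _ in range(per_class):
            X.append(mu + V @ y_c + L @ rng.standard_normal(d_x))
            labels.append(c)
    return np.array(X), np.array(labels)

# Toy dimensions only; the paper uses d_x = 512 and d_y = 200.
X, labels = sample_plda(np.zeros(8), np.random.default_rng(1).standard_normal((8, 3)), np.eye(8))
print(X.shape, labels.shape)                       # (250, 8) (250,)
```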
3.2.2. Closed-set identification on unseen words
We are interested in examining the performance of the embeddings in closed-set identification on the unseen set of
words. To this end, the embeddings of the 150 unseen words
are extracted. The overall number of available embeddings
per word is 50 from the LRW validation set and another
50 from the test set. The validation set serves to estimate
class-conditional density functions based on the PLDA parameters, i.e. p(·|c, P) = p(·|{xi }ci =c , P), where c is the class (i.e. word) label and {xi }ci =c is a set of instances of class c from the validation set.
A class-conditional density for each word c given P is estimated using a variable number of instances per word, Nc
(from 1 to 16) drawn from the validation set of LRW. Subsequently, the models are evaluated on test embeddings (50 per
word, from LRW test set) and the estimated word is derived
using maximum likelihood. The Top-1 error rates for several
numbers of training instances per word, Nc , are given in Table
4 (denoted by ID-W350). For comparison, we include results
where the embeddings are extracted from the network trained
with the full set of 500 words (denoted by ID-W500), i.e. the
one used in Sect. 3.1.3.
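The class-conditional densities used above follow from standard linear-Gaussian algebra for the model in Eq. (1): the posterior of y given the Nc enrollment embeddings of a word is Gaussian, and integrating it out gives a Gaussian predictive density for a test embedding. The sketch below (not the authors' code; function names are ours) scores a test embedding against each enrolled word and picks the maximum-likelihood one.

```python
import numpy as np

def log_gauss(x, mean, cov):
    """Log density of a multivariate normal N(mean, cov) at x."""
    d = x - mean
    _, logdet = np.linalg.slogdet(cov)
    return -0.5 * (len(x) * np.log(2 * np.pi) + logdet + d @ np.linalg.solve(cov, d))

def class_predictive(mu, V, Sigma, enroll):
    """Given Nc enrollment embeddings of one word (rows of `enroll`), return the mean
    and covariance of the Gaussian predictive density for a new embedding of that word,
    obtained by integrating out the latent y of the PLDA model."""
    Nc = enroll.shape[0]
    Si = np.linalg.inv(Sigma)
    A = np.eye(V.shape[1]) + Nc * V.T @ Si @ V            # posterior precision of y
    m = np.linalg.solve(A, V.T @ Si @ (enroll - mu).sum(axis=0))
    return mu + V @ m, Sigma + V @ np.linalg.inv(A) @ V.T

def identify(x, mu, V, Sigma, enroll_sets):
    """Closed-set identification: the word whose predictive density of x is largest."""
    scores = [log_gauss(x, *class_predictive(mu, V, Sigma, E)) for E in enroll_sets]
    return int(np.argmax(scores))
```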
3.2.3. Word matching on unseen words
For the final experiment, we evaluate log likelihood ratios
(LLRs) between the hypotheses (a) that the word instance x belongs to the same word class c as a collection of word instances {xi }ci =c , and (b) that x and {xi }ci =c belong to different
classes. Contrary to closed-set identification, we assume that
each instance may belong to an unknown set of classes. Moreover, since we are scoring pairs of word models and instances,
more than one model per word can be created. We use again
the validation set of LRW to create these models and the test
set to create test instances. We measure the performance in
terms of Equal Error Rate (EER), defined as the error rate attained when the LLRs are thresholded so that the Missed Detection and False Alarm rates are equal. The results using a variable number of training instances per model, Nc , are given in Table 4 (denoted by EER-W350). For comparison, results with embeddings extracted from the same network trained on the full set of words are also given (denoted
by EER-W500).
Nc             1      2      4      8      16
ID-W350 (%)    43.8   32.5   23.7   19.2   17.3
ID-W500 (%)    34.3   23.0   16.6   13.1   11.9
EER-W350 (%)   6.11   4.22   3.31   3.16   3.03
EER-W500 (%)   4.52   3.01   2.49   2.28   2.21

Table 4. Top-1 identification error and equal error rates on the unseen set of 150 words using PLDA for various training embeddings per word. W350 indicates that the network is trained on the reduced set while W500 on the full set.
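For reference, a small sketch of how an equal error rate can be computed from target and non-target LLR scores; this is a generic threshold sweep, not necessarily the exact procedure used for Table 4.

```python
import numpy as np

def equal_error_rate(target_llrs, nontarget_llrs):
    """Sweep thresholds over all observed scores and return the error rate at the
    point where the Missed Detection rate (targets below the threshold) and the
    False Alarm rate (non-targets at or above it) are closest to equal."""
    target_llrs = np.asarray(target_llrs, dtype=float)
    nontarget_llrs = np.asarray(nontarget_llrs, dtype=float)
    best = (np.inf, None)
    for t in np.sort(np.concatenate([target_llrs, nontarget_llrs])):
        miss = np.mean(target_llrs < t)
        fa = np.mean(nontarget_llrs >= t)
        if abs(miss - fa) < best[0]:
            best = (abs(miss - fa), (miss + fa) / 2)
    return best[1]

# Well-separated toy scores give EER = 0.
print(equal_error_rate([2.0, 1.5, 3.0], [-1.0, 0.1, -0.5]))   # 0.0
```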
Overall, the experiments on low-shot learning demonstrate
the generalizability of the proposed architecture. Even with a
modest number of training words (i.e. 350), the architecture
succeeds in learning how to break down word instances into
their “visemic” content and in extracting embeddings with
good word discriminative properties.
4. CONCLUSION
In this paper, we proposed a deep learning architecture for
lipreading that is capable of attaining performance beyond
the state of the art on the challenging LRW database. The architecture combines spatiotemporal convolution, ResNets, and LSTMs, together with an average pooling layer from which word embeddings are extracted. We explored several configurations
of the LSTM-based back-end and we proposed an efficient
method of using the word boundaries. We also attempted to
address the problem of low-shot learning. To this end, we retrained the network on a subset of words (350 out of 500) and
tested it on the remaining 150 words, using PLDA modelling.
The experiments on low-shot learning show that good results
can be attained even for words unseen during training.
For future work, we will train and test our architecture on
the LipReading Sentences in-the-wild database ([7]) and we
will experiment with word embeddings for large vocabulary
visual speech recognition, using words as recognition units.
5. ACKNOWLEDGEMENTS
This work has been funded by the European Commission program Horizon 2020, under grant agreement no. 706668 (Talking Heads).
6. REFERENCES
[1] R. Prabhavalkar, K. Rao, T. N. Sainath, B. Li, L. Johnson, and N. Jaitly, "A Comparison of Sequence-to-Sequence Models for Speech Recognition," in Interspeech, 2017.
[2] T. Hori, S. Watanabe, Y. Zhang, and W. Chan, "Advances in Joint CTC-Attention Based End-to-End Speech Recognition with a Deep CNN Encoder and RNN-LM," in Interspeech, 2017.
[3] H. Kamper, W. Wang, and K. Livescu, "Deep convolutional acoustic word embeddings using word-pair side information," in ICASSP, 2016.
[4] K. Audhkhasi, B. Ramabhadran, G. Saon, M. Picheny, and D. Nahamoo, "Direct Acoustics-to-Word Models for English Conversational Speech Recognition," in Interspeech, 2017.
[5] H. Soltau, H. Liao, and H. Sak, "Neural speech recognizer: Acoustic-to-word LSTM model for large vocabulary speech recognition," in Interspeech, 2017.
[6] S. Bengio and G. Heigold, "Word embeddings for speech recognition," in Interspeech, 2014.
[7] J. S. Chung, A. Senior, O. Vinyals, and A. Zisserman, "Lip reading sentences in the wild," in CVPR, 2017.
[8] J. Chung and A. Zisserman, "Lip reading in profile," in BMVC, 2017.
[9] Y. M. Assael, B. Shillingford, S. Whiteson, and N. de Freitas, "Lipnet: Sentence-level lipreading," arXiv preprint arXiv:1611.01599, 2016.
[10] K. Thangthai and R. Harvey, "Improving Computer Lipreading via DNN Sequence Discriminative Training Techniques," in Interspeech, 2017.
[11] M. Wand and J. Schmidhuber, "Improving Speaker-Independent Lipreading with Domain-Adversarial Training," in Interspeech, 2017.
[12] A. Koumparoulis, G. Potamianos, Y. Mroueh, and S. J. Rennie, "Exploring ROI size in deep learning based lipreading," in AVSP, 2017.
[13] S. Petridis, Z. Li, and M. Pantic, "End-to-end visual speech recognition with LSTMs," in ICASSP, 2017.
[14] J. Chung and A. Zisserman, "Lip reading in the wild," in ACCV, 2016.
[15] T. Stafylakis and G. Tzimiropoulos, "Combining Residual Networks with LSTMs for Lipreading," in Interspeech, 2017.
[16] K. He, X. Zhang, S. Ren, and J. Sun, "Identity mappings in deep residual networks," in ECCV, 2016.
[17] G. Cheng, V. Peddinti, D. Povey, V. Manohar, S. Khudanpur, and Y. Yan, "An exploration of dropout with LSTMs," in Interspeech, 2017.
[18] S. Ioffe and C. Szegedy, "Batch normalization: Accelerating deep network training by reducing internal covariate shift," in ICML, 2015.
[19] D. P. Kingma and J. Ba, "Adam: A Method for Stochastic Optimization," in ICLR, 2014.
[20] Y. Taigman, M. Yang, M. Ranzato, and L. Wolf, "Deepface: Closing the gap to human-level performance in face verification," in CVPR, 2014.
[21] O. M. Parkhi, A. Vedaldi, and A. Zisserman, "Deep face recognition," in BMVC, 2015.
[22] D. Snyder, D. Garcia-Romero, D. Povey, and S. Khudanpur, "Deep Neural Network Embeddings for Text-Independent Speaker Verification," in Interspeech, 2017.
[23] M. Wand, J. Koutník, and J. Schmidhuber, "Lipreading with long short-term memory," in ICASSP, 2016.
[24] S. Ioffe, "Probabilistic Linear Discriminant Analysis," in ECCV, 2006.
[25] P. Kenny, T. Stafylakis, P. Ouellet, M. Alam, and P. Dumouchel, "PLDA for speaker verification with utterances of arbitrary duration," in ICASSP, 2013.
[26] S. Prince and J. Elder, "Probabilistic linear discriminant analysis for inferences about identity," in ICCV, 2007.
[27] D. Chen, X. Cao, L. Wang, F. Wen, and J. Sun, "Bayesian face revisited: A joint formulation," in ECCV, 2012.
Online Dominating Set ∗
Joan Boyar 1, Stephan J. Eidenbenz 2, Lene M. Favrholdt 1, Michal Kotrbčík 1, Kim S. Larsen 1

1 University of Southern Denmark, Odense, Denmark
  {joan,lenem,kotrbcik,kslarsen}@imada.sdu.dk
2 Los Alamos National Laboratory, Los Alamos, NM, USA
  [email protected]

arXiv:1604.05172v1 [] 18 Apr 2016
Abstract
This paper is devoted to the online dominating set problem and its variants on trees, bipartite, bounded-degree, planar, and general graphs, distinguishing between connected and
not necessarily connected graphs. We believe this paper represents the first systematic study
of the effect of two limitations of online algorithms: making irrevocable decisions while not
knowing the future, and being incremental, i.e., having to maintain solutions to all prefixes
of the input. This is quantified through competitive analyses of online algorithms against two
optimal algorithms, both knowing the entire input, but only one having to be incremental. We
also consider the competitive ratio of the weaker of the two optimal algorithms against the
other. In most cases, we obtain tight bounds on the competitive ratios. Our results show that
requiring the graphs to be presented in a connected fashion allows the online algorithms to
obtain provably better solutions. Furthermore, we get detailed information regarding the significance of the necessary requirement that online algorithms be incremental. In some cases,
having to be incremental fully accounts for the online algorithm’s disadvantage.
1 Introduction
We consider online versions of a number of NP-complete graph problems, dominating set (DS),
and variants hereof. Given an undirected graph G = (V, E) with vertex set V and edge set E, a
set D ⊆ V is a dominating set for G if for all vertices u ∈ V , either u ∈ D (containment) or there
exists an edge {u, v} ∈ E, where v ∈ D (dominance). The objective is to find a dominating set of
minimum cardinality.
In the variant connected dominating set (CDS), we add the requirement that D be connected (if G
is not connected, D should be connected for each connected component of G). In the variant total
dominating set (TDS), every vertex must be dominated by another, corresponding to the definition
above with the “containment” option removed. We also consider independent dominating set
∗ The first, third, fourth, and fifth authors were supported in part by the Danish Council for Independent Research and the Villum Foundation.
(IDS), where we add the requirement that D be independent, i.e., if {u, v} ∈ E, then {u, v} 6⊆
D. In both this introduction and the preliminaries section, when we refer to dominating set, the
statements are relevant to all the variants unless explicitly specified otherwise.
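To fix the definitions computationally, the following Python sketch checks the defining property of each variant for a candidate set D, given a graph as an adjacency list; the connectivity check is simplified to the case of a connected graph G.

```python
from collections import deque

def is_dominating(adj, D):
    """DS: every vertex is in D or has a neighbour in D."""
    D = set(D)
    return all(u in D or any(v in D for v in adj[u]) for u in adj)

def is_total_dominating(adj, D):
    """TDS: every vertex has a neighbour in D (membership alone does not count)."""
    D = set(D)
    return all(any(v in D for v in adj[u]) for u in adj)

def is_independent(adj, D):
    """No edge of the graph joins two vertices of D."""
    D = set(D)
    return all(v not in D for u in D for v in adj[u])

def induces_connected(adj, D):
    """The subgraph induced by D is connected (here G itself is assumed connected)."""
    D = set(D)
    if not D:
        return True
    start = next(iter(D))
    seen, queue = {start}, deque([start])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v in D and v not in seen:
                seen.add(v)
                queue.append(v)
    return seen == D

# A path 0-1-2-3: {1, 2} is a connected and total dominating set, but not independent.
adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
print(is_dominating(adj, {1, 2}), induces_connected(adj, {1, 2}),
      is_total_dominating(adj, {1, 2}), is_independent(adj, {1, 2}))  # True True True False
```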
The study of dominating set and its variants dates back at least to seminal books by König [18],
Berge [3], and Ore [20]. The concept of domination readily lends itself to modelling many conceivable practical problems. Indeed, at the onset of the field, Berge [3] mentions a possible application of keeping all points in a network under surveillance by a set of radar stations, and Liu [19]
notes that the vertices in a dominating set can be thought of as transmitting stations that can transmit messages to all stations in the network. Several monographs are devoted to domination [13],
total domination [14], and connected domination [11], and we refer the reader to these for further
details.
We consider online [5] versions of these problems. More specifically, we consider the vertex-arrival model where the vertices of the graph arrive one at a time and with each vertex, the edges
connecting it to previous vertices are also given. The online algorithm must maintain a dominating
set, i.e., after each vertex has arrived, D must be a dominating set for the subgraph given so far.
In particular, this means that the first vertex must always be included in the solution, except for
the case of total dominating set. Since the graph consisting of a single vertex does not have a total
dominating set at all, we allow an online algorithm for TDS to not include isolated vertices in the
solution, unlike the other variants of DS. If the online algorithm decides to include a vertex in the
set D, this decision is irrevocable. Note, however, that not just a new vertex but also vertices given
previously may be added to D at any time. An online algorithm must make this decision without
any knowledge about possible future vertices.
Defining the nature of the irrevocable decisions is a modelling issue, and one could alternatively
have made the decision that also the act of not including the new vertex in D should be irrevocable,
i.e., not allowing algorithms to include already given vertices in D at a later time. The main reason
for our choice of model is that it is much better suited for applications such as routing in wireless
networks for which domination is intensively studied; see for instance [9] and the citations thereof.
Indeed, when domination models a (costly) establishment of some service, there is no reason why
not establishing a service at a given time should have any inherent costs or consequences, such as
preventing one from doing so later. Furthermore, the stricter variant of irrevocability results in a
problem for which it becomes next to impossible for an online algorithm to obtain a non-trivial
result in comparison with an optimal offline algorithm. Consider, for example, an instance where
the adversary starts by giving a vertex followed by a number of neighbors of that vertex. If the
algorithm ever rejects one of these neighbors, the remaining part of the sequence will consist of
neighbors of the rejected vertex and the neighbors must all be selected. This shows that, using
this model of irrevocability, online algorithms for DS or TDS would have to select at least n − 1
vertices, while the optimal offline algorithm selects at most two. For CDS it is even worse, since
rejecting any vertex could result in a nonconnected dominating set. A similar observation is made
in [17] for this model; their focus is on a different model, where the vertices are known in advance,
and all edges incident to a particular vertex are presented when that vertex arrives.
An online algorithm can be seen as having two characteristics: it maintains a feasible solution
at any time, and it has no knowledge about future requests. We also define a larger class of
algorithms: An incremental algorithm is an algorithm that maintains a feasible solution at any
time. It may or may not know the whole input from the beginning.
We analyze the quality of online algorithms for the dominating set problems using competitive
analysis [21, 15]. Thus, we consider the size of the dominating set an online algorithm computes
up against the result obtained by an optimal offline algorithm, O PT.
2
As something a little unusual in competitive analysis, we are working with two different optimal
algorithms. This is with the aim of investigating whether it is predominantly the requirement to
maintain feasible solutions or the lack of knowledge of the future which makes the problem hard.
Thus, we define O PT INC to be an optimal incremental algorithm and O PT OFF to be an optimal offline
algorithm, i.e., it is given the entire input, and then produces a dominating set for the whole graph.
The reason for this distinction is that in order to properly measure the impact of the knowledge of
the future, it is necessary that it is the sole difference between the algorithm and O PT. Therefore,
O PT has to solve the same problem and hence the restriction on O PT INC . While such an attention
to comparing algorithms to an appropriate O PT already exists in the literature, to the best of our
knowledge the focus also on the comparison of different optimum algorithms is a novel aspect of
our work. Previous results requiring the optimal offline algorithm to solve the same problem as
the online algorithm include (1) [6] which considers fair algorithms that have to accept a request
whenever possible, and thus require O PT to be fair as well, (2) [7] which studies k-bounded-space
algorithms for bin packing that have at any time at most k open bins and requires O PT to also
adhere to this restriction, and (3) [4] which analyzes the performance of online algorithms for a
variant of bin packing against a restricted offline optimum algorithm that knows the future, but has
to process the requests in the same order as the algorithm under consideration.
Given an input sequence I and an algorithm A LG, we let A LG(I) denote the size of the dominating
set computed by A LG on I, and we define A LG to be c-competitive if there exists a constant α
such that for all input sequences I, A LG(I) ≤ c O PT(I) + α, where O PT may be O PT INC or
O PT OFF , depending on the context. The (asymptotic) competitive ratio of A LG is the infimum over
all such c and we denote this CRINC (A LG) and CROFF (A LG), respectively. In some results, we use
the strict competitive ratio, i.e., the inequality above holds without an additive constant. For these
results, when the strict result is linear in n, we write the asymptotic competitive ratio in Table 2
without any additive constant.
We consider the four dominating set problem variants on various graph types, including trees,
bipartite, bounded-degree (letting ∆ denote the maximum degree), and to some extent planar
graphs. In all cases, we also consider the online variant where the adversary is restricted to giving
the vertices in such a manner that the graph given at any point in time is connected. In this case,
the graph is called always-connected. One motivation is that graphs in applications such as routing
in networks are most often connected. The connectivity assumption allows us to obtain provably
better bounds on the performance of online algorithms, at least compared to O PT OFF , and these
bounds are of course more meaningful for the relevant applications.
The results for online algorithms are summarized in Tables 1 and 2. The results for O PT INC against
O PT OFF are identical to the results of Table 2, except that for DS on trees, CROFF (O PT INC ) = 2 and
for DS on always-connected planar graphs, CROFF (O PT INC ) = dn/2e. The results are discussed
in the conclusion.
Graph class                        DS             CDS            TDS           IDS
Trees                              2              1              1             1
Bipartite                          [n/4, n/2]     [n/4, n/2]     [n/4, n/2]    1
Always-connected bipartite         n/4            n/4            n/4           1
Bounded degree                     [∆/2; ∆ + 1]   [∆/2; ∆ − 1]   [∆/2; ∆]      1
Always-connected bounded degree    [∆/2; ∆ + 1]   [∆/2; ∆ − 1]   [∆/2; ∆]      1

Table 1: Bounds on the competitive ratio of any online algorithm with respect to O PT INC .
Graph class                        DS             CDS             TDS           IDS
Trees                              [2; 3]         1               2             n
Bipartite                          n              n               n/2           n
Always-connected bipartite         n/2            n/2             n/2           n
Bounded degree                     [∆; ∆ + 1]     ∆ + 1           [∆ − 1; ∆]    ∆
Always-connected bounded degree    [∆/2; ∆ + 1]   [∆ − 2; ∆ − 1]  [∆ − 1; ∆]    [∆ − 1; ∆]
Planar                             n              n               n/2           n
Always-connected planar            n/2            n               n/2           n

Table 2: Bounds on the competitive ratio of any online algorithm with respect to O PT OFF .
2 Preliminaries
Since we are studying online problems, the order in which vertices are given is important. We
assume throughout the paper that the indices of the vertices of G, v1 , . . . , vn , indicate the order
in which they are given to the online algorithm, and we use A LG(G) to denote the size of the
dominating set computed by A LG using this ordering. When no confusion can occur, we implicitly
assume that the dominating set being constructed by an online algorithm A LG is denoted by D.
We use the phrase select a vertex to mean that the vertex in question is added to the dominating
set in question. We use Gi to denote the subgraph of G induced by {v1 , . . . , vi }. We let Di denote
the dominating set constructed by A LG after processing the first i vertices of the input. When
no confusion can occur, we sometimes implicitly identify a dominating set D and the subgraph
it induces. For example, we may say that D has k components or is connected, meaning that the
subgraph of G induced by D has k components or is connected, respectively.
Online algorithms must compute a solution for all prefixes of the input seen by the algorithm.
Given the irrevocable decisions, this can of course affect the possible final sizes of a dominating
set. When we want to emphasize that a bound is derived under this restriction, we use the word
incremental to indicate this, i.e., if we discuss the size of an incremental dominating set D of G,
this means that D1 ⊆ D2 ⊆ · · · ⊆ Dn = D and that Di is a dominating set of Gi for each i. Note
in particular that any incremental algorithm, including O PT INC , for DS, CDS, or IDS must select
the first vertex.
Throughout the text, we use standard graph-theoretic notation. In particular, the path on n vertices
is denoted Pn . A star with n vertices is the complete bipartite graph K1,n−1 . A leaf is a vertex of
degree 1, and an internal vertex is a vertex of degree at least 2. We use c(G) to denote the number
of components of a graph G. The size of a minimum dominating set of a graph G is denoted
by γ(G). We use indices to indicate variants, using γC (G), γT (G), and γI (G) for connected, total,
and independent dominating set, respectively. This is an alternative notation for the size computed
by O PT OFF . We also use these indices on O PT INC to indicate which variant is under consideration.
We use ∆ to denote the maximum degree of the graph under consideration. Similarly, we always
let n denote the number of vertices in the graph.
In many of the proofs of lower bounds on the competitive ratio, when the path, Pn , is considered,
either as the entire input or as a subgraph of the input, we assume that it is given in the standard
order, the order where the first vertex given is a leaf, and each subsequent vertex is a neighbor of
the vertex given in the previous step. When the path is a subgraph of the input graph, we often
extend this standard order of the path to an adversarial order of the input graph – a fixed ordering
of the vertices that yields an input attaining the bound.
In some online settings, we are interested in connected graphs, where the vertices are given in
an order such that the subgraph induced at any point in time is connected. In this case, we use
the term always-connected, indicating that we are considering a connected graph G, and all the
partial graphs Gi are connected. We implicitly assume that trees are always-connected and we
drop the adjective. Since all the classes we consider are hereditary (that is, any induced subgraph
also belongs to the class), no further restriction of partial inputs Gi is necessary. In particular,
these conventions imply that for trees, the vertex arriving at any step (except the first) is connected
to exactly one of the vertices given previously, and since we consider unrooted trees, we can think
of that vertex as the parent of the new vertex.
3 The Cost of Being Online
In this section we focus on the comparison of algorithms bound to the same irrevocable decisions. We do so by comparing any online algorithm with O PT INC and O PT OFF , investigating the
role played by the (absence of) knowledge of the future. We start by using the size of a given
dominating set to bound the sizes of some connected or incremental equivalents.
Theorem 1. Let G be always-connected, let S be a dominating set of G, and let R be an incremental dominating set of G. Then the following hold:
1. There is a connected dominating set S 0 of G such that |S 0 | ≤ |S| + 2(c(S) − 1).
2. There is an incremental connected dominating set R0 of G such that |R0 | ≤ |R| + c(R) − 1.
3. If G is a tree, there is an incremental dominating set R00 of G such that |R00 | ≤ |S| + c(S).
Moreover, all three bounds are tight for infinitely many graphs.
Proof. Let S be any dominating set of G. We argue that by selecting additionally at most 2(c(S)−
1) vertices, we can connect all the components in S. We do this inductively. If there are two
components separated by a path of at most two unselected vertices, we select all the vertices on
this path and continue inductively. Otherwise, assume to the contrary that all pairs of components
require the selection of at least three vertices to become connected. We choose a shortest such
path of length k consisting of vertices u1 , . . . , uk , where ui is dominated by a component Ci for
all i. If C1 6= C2 , we can connect them by selecting u1 and u2 , which would be a contradiction.
If C1 = C2 , then we have found a shorter path between C1 and Ck ; also a contradiction. We
conclude that |S 0 | ≤ |S| + 2(c(S) − 1), which proves 1.
To see that the bound is tight, consider a path Pn in the standard order, where n ≡ 0 (mod 3).
Clearly, the size of a minimum dominating set S of Pn is n/3 and c(S) = n/3. On the other hand,
the size of any minimum connected dominating set of Pn is n − 2 and n − 2 = |S| + 2(c(S) − 1).
To prove 2., we label the components of R in the order in which their first vertices arrive. Thus,
let C1 , . . . , Ck be the components of R, and, for 1 ≤ i ≤ k, let vji be the first vertex of Ci that
arrives. Assume that vji arrives before vji+1 for each i = 1, . . . , k − 1. We prove that for each
component Ci of R, there is a path of length 2 joining vji with Ch in Gji for some h < i, i.e.,
a path with only one vertex not belonging to either component. Let P = vl1 , . . . , vlm , vji be a
shortest path in Gji connecting vji and some component Ch , h < i, and assume for the sake of
contradiction that m ≥ 3. In Gji , the vertex vl3 is not adjacent to a vertex in any component Ch0 ,
where h0 < i, since in that case a shorter path would exist. However, since vertices cannot be
unselected as the online algorithm proceeds, it follows that in Gl3 , vl3 is not dominated by any
vertex, which is a contradiction. Thus, selecting just one additional vertex at the arrival of vji
connects Ci to an earlier component, and the result follows inductively. To see that the bound is
tight, observe that the optimal incremental connected dominating set of Pn has n − 1 vertices,
while for even n, there is an incremental dominating set of size n/2 with n/2 components.
To obtain 3., consider an algorithm A LG processing vertices greedily, while always selecting all
vertices from S. That is, v1 and all vertices of S are always selected, and when a vertex v not in S
arrives, it is selected if and only if it is not dominated by already selected vertices, in which case
it is called a bad vertex. Clearly, A LG produces an incremental dominating set, R00 , of G.
To prove the upper bound on |R00 |, we gradually mark components of S. For a bad vertex vi , let
v be a vertex from S dominating vi , and let C be the component of S containing v. Mark C. To
prove the claim it suffices to show that each component of S can be marked at most once, since
each bad vertex leads to some component of S being marked.
Assume for the sake of contradiction that some component, C, of S is marked twice. This happens
because a vertex v of C is adjacent to a bad vertex b, and a vertex v 0 (not necessarily different from
v) of C is adjacent to some later bad vertex b0 . Since G is always-connected and b0 was bad, b and
b0 are connected by a path not including v 0 . Furthermore, v and v 0 are connected by a path in C.
Thus, the edges {b, v} and {b0 , v 0 } imply the existence of a cycle in G, contradicting the fact that
it is a tree.
To see that the bound is tight, let v1 , . . . , vm , m ≡ 2 (mod 6), be a path in the standard order. Let
G be obtained from Pm by attaching m pendant vertices (new vertices of degree 1) to each of the
vertices v2 , v5 , v8 , . . . , vm , where the pendant vertices arrive in arbitrary order, though respecting
that G should be always-connected. Each minimum incremental dominating set of G contains
each of the vertices v2 , v5 , v8 , . . . , vm , the vertex v1 , and one of the vertices v3i and v3i+1 for
each i, and thus it has size 2(m + 1)/3. On the other hand, the vertices v2 , v5 , v8 , . . . , vm form a
dominating set S of G with c(S) = (m + 1)/3.
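A procedural rendering of the argument for 1. is sketched below: as long as the selected set induces more than one component, a shortest path between two different components is found and its interior vertices are added; by the argument above, this interior has at most two vertices when G is always-connected and S is dominating. This is a sketch under those assumptions, not an optimized implementation.

```python
from collections import deque

def connect_dominating_set(adj, S):
    """Grow a dominating set S of a connected graph into a connected dominating set
    by repeatedly selecting the interior vertices of a shortest path that joins two
    different components of the selected set; at most 2(c(S) - 1) vertices are added."""
    S = set(S)
    while True:
        comp, cid = {}, 0                       # component id of each selected vertex
        for s in S:
            if s in comp:
                continue
            comp[s] = cid
            queue = deque([s])
            while queue:
                u = queue.popleft()
                for v in adj[u]:
                    if v in S and v not in comp:
                        comp[v] = cid
                        queue.append(v)
            cid += 1
        if cid <= 1:
            return S
        # BFS out of component 0 through the whole graph until another component is met.
        parent = {v: None for v in S if comp[v] == 0}
        queue = deque(parent)
        while queue:
            u = queue.popleft()
            if u in S and comp[u] != 0:         # reached a different component
                interior, w = [], parent[u]
                while w is not None and w not in S:
                    interior.append(w)
                    w = parent[w]
                S.update(interior)              # select the (at most two) interior vertices
                break
            for v in adj[u]:
                if v not in parent:
                    parent[v] = u
                    queue.append(v)
```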
Theorem 1 is best possible in the sense that none of the assumptions can be omitted. Indeed,
Proposition 21 implies that it is not even possible to bound the size of an incremental (connected)
dominating set in terms of the size of a (connected) dominating set, much less to bound the size of
an incremental connected dominating set in terms of the size of a dominating set. Therefore, 1. and
2. in Theorem 1 cannot be combined even on bipartite planar graphs. The situation is different for
trees: Corollary 10 1. essentially leverages the fact that any connected dominating set D on a tree
can be produced by an incremental algorithm without increasing the size of D.
Proposition 2. For any graph G, there is a unique incremental independent dominating set.
Proof. We fix G and proceed inductively. The first vertex has to be selected due to the online
requirement. When the next vertex, vi+1 , is given, if it is dominated by a vertex in Di , it cannot be
selected, since then Di+1 would not be independent. If vi+1 is not dominated by a vertex in Di ,
then vi+1 or one of its neighbors must be selected. However, none of vi+1 ’s neighbors can be
selected, since if they were not selected already, then they are dominated, and selecting one of
them would violate the independence criteria. Thus, vi+1 must be selected. In either case, Di+1 is
uniquely defined.
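The forced behaviour described in the proof can be written down directly; the sketch below processes arrivals as (vertex, neighbours-among-previous-vertices) pairs, which is one possible encoding of the vertex-arrival model.

```python
def incremental_ids(arrivals):
    """The unique incremental independent dominating set: a vertex is selected
    exactly when it is not dominated on arrival (Proposition 2).
    `arrivals` is a list of (vertex, neighbours-among-previous-vertices)."""
    D = set()
    for v, nbrs in arrivals:
        if not any(u in D for u in nbrs):   # undominated on arrival: v is forced,
            D.add(v)                        # and none of its neighbours may be added later
    return D

# A path v1-v2-v3-v4 in the standard order: exactly the odd vertices are selected.
print(incremental_ids([(1, []), (2, [1]), (3, [2]), (4, [3])]))   # {1, 3}
```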
Since a correct incremental algorithm is uniquely defined by this proposition by a forced move in
every step, O PT INC must behave exactly the same. This fills the column for independent dominating set in Table 1.
We let PARENT denote the following algorithm for trees. The algorithm selects the first vertex.
When a new vertex v arrives, if v is not already dominated by a previously arrived vertex, then the
parent vertex that v is adjacent to is added to the dominating set. For connected dominating set on
trees, PARENT is 1-competitive, even against O PT OFF :
Proposition 3. For any tree T , PARENT(T ) outputs a connected dominating set of T and
PARENT(T ) = γC (T ) + 1 if v1 is a leaf of T , and PARENT(T ) = γC (T ) otherwise.
Proof. For trees with at least two vertices, PARENT selects the internal vertices plus at most one
leaf. Clearly, the size of the minimal connected dominating set of any tree T equals the number of
its internal vertices.
To show that for total dominating set on trees, PARENT is also 1-competitive against O PT INC , we
prove the following:
Lemma 4. For any incremental total dominating set D for an always-connected graph G, all Di
are connected.
Proof. For the sake of a contradiction, suppose that for some i, the set Di induces a subgraph of G
with at least two components, and let i be the smallest index with this property. It follows that the
vertex vi constitutes a singleton component of the subgraph induced by Di . Thus, vi cannot be
dominated by any other vertex of Di , contradicting that the solution was incremental.
Corollary 5. For any tree T on n vertices,
O PTTINC (T ) = O PTCINC (T ) = int(T ) + 1 if v1 is a leaf of T , and int(T ) otherwise,
where int(T ) is the number of internal vertices of T . Consequently, when given in the standard order, O PTCINC (Pn ) = O PTTINC (Pn ) = n − 1 for every n ≥ 3.
Proposition 6. For any positive integer n and Pn given in the standard order, O PT INC (Pn ) =
dn/2e.
Proof. Clearly, Pn admits an incremental dominating set of size dn/2e, consisting of every second
vertex, starting with v1 . Assume to the contrary that Pn has an incremental dominating set D such
that |D| ≤ dn/2e − 1. Since c(D) ≤ |D|, Theorem 1 2. implies that there is an incremental
connected dominating set C of Pn such that |C| ≤ |D| + c(D) − 1 ≤ 2dn/2e − 3 ≤ n − 2, which
contradicts Corollary 5.
Proposition 7. For any online algorithm A LG for dominating set and for any n > 0, there is a
tree T with n vertices such that the dominating set constructed by A LG for T contains at least
n − 1 vertices.
Proof. We prove that the adversary can maintain the invariant that at most one vertex is not included in the solution of A LG. The algorithm has to select the first vertex, so the invariant holds
initially. When presenting a new vertex vi , the adversary checks whether all vertices given so far
are included in A LG’s solution. If this is the case, vi is connected to an arbitrary vertex, and the
invariant still holds. Otherwise, vi is connected to the unique vertex not included in Di−1 . Now vi
is not dominated, so A LG must select an additional vertex.
Proposition 8. For any always-connected bipartite graph G, the smaller partite set of G (plus,
possibly, the vertex v1 ) forms an incremental dominating set.
Proof. The smaller partite set S of any connected bipartite graph G is a dominating set of G. If the
first presented vertex v1 belongs to S, then S is an incremental dominating set of G. Otherwise,
S ∪ {v1 } is an incremental dominating set of G.
As a corollary of Proposition 7 and Proposition 8, we get the following result.
Corollary 9. For any online algorithm A LG for DS on trees, CRINC (A LG) ≥ 2.
Corollary 10. For trees, the following hold.
1. For DS, CRINC (PARENT) = 2 and CROFF (PARENT) = 3.
2. For CDS, CRINC (PARENT) = CROFF (PARENT) = 1.
3. For TDS, CRINC (PARENT) = 1 and CROFF (PARENT) = 2.
We extend the PARENT algorithm to graphs that are not trees as follows. When a vertex vi , i > 1,
arrives that is not already dominated by one of the previously presented vertices, PARENT
selects any of the neighbors of vi in Gi .
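In the same encoding of the vertex-arrival model used above, PARENT for always-connected inputs can be sketched as follows; the choice of which neighbour to select is left arbitrary, as in the definition.

```python
def parent_online(arrivals):
    """PARENT in the vertex-arrival model, for always-connected inputs:
    select the first vertex; when a later vertex arrives undominated,
    select one of its neighbours among the vertices presented so far."""
    D = set()
    for i, (v, nbrs) in enumerate(arrivals):
        if i == 0:
            D.add(v)
        elif not any(u in D for u in nbrs):
            D.add(nbrs[0])                  # any neighbour in G_i will do
    return D

# P5 in the standard order: the internal vertices plus the first leaf are selected,
# matching Proposition 3.
print(parent_online([(1, []), (2, [1]), (3, [2]), (4, [3]), (5, [4])]))  # {1, 2, 3, 4}
```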
Proposition 11. For any always-connected graph G, the set computed by PARENT on G is an
incremental connected dominating set of G.
Proof. We prove the claim by induction on n. Since PARENT always selects v1 , the statement
holds for n = 1. Consider the graph Gi , for some i > 1, and assume that Di−1 is an incremental
connected dominating set of Gi−1 . If vi is already dominated by a vertex in Di−1 , then PARENT
keeps D unchanged (that is, Di = Di−1 ) and thus Di is an incremental connected dominating
set of Gi . If vi is not dominated by Di−1 , then PARENT chooses a neighbor v of vi in Gi−1 .
Clearly, this implies that Di is an incremental dominating set of Gi . Since Di−1 is an incremental
connected dominating set of Gi−1 and the vertex v is adjacent to the only component of Di−1 , Di
is connected, which concludes the proof.
Proposition 12. For DS and CDS on always-connected bipartite graphs, CROFF (PARENT) ≤ n/2.
Proof. If γC (G) ≥ 2, then there is nothing to prove. Therefore, we assume that there is a single
vertex v adjacent to every other vertex. Since G is bipartite, there is no edge between any of the
vertices adjacent to v, so G is a star. Since Gi is connected for each i, the vertex v arrives either as
the first or the second vertex. Furthermore, if another vertex arrives after v, then v is selected by
PARENT. Once v is selected, all future vertices are already dominated by v, so no more vertices
are selected, implying that PARENT(G) ≤ 2, which concludes the proof.
Proposition 13. Let G be a graph with n vertices and maximum degree ∆. For any graph G,
γC (G) ≥ γ(G) ≥ n/(∆ + 1) and γT (G) ≥ n/∆.
Proof. Clearly, any vertex can dominate at most itself and its at most ∆ neighbors. For total
dominating set, a vertex can only dominate its at most ∆ neighbors.
Proposition 13 implies that any algorithm computing an incremental dominating set is no worse
than (∆ + 1)-competitive.
Corollary 14. For any algorithm A LG for DS, CROFF (A LG) ≤ ∆ + 1. Furthermore, for any
algorithm A LG for TDS, CROFF (A LG) ≤ ∆.
Proposition 15. For any algorithm A LG for CDS, CROFF (A LG) ≤ ∆ − 1.
Proof. Let D be a minimum connected dominating set of G with |D| = k. Since D is connected,
any spanning tree of the subgraph induced by D contains k −1 edges and each endpoint is adjacent
to the other endpoint in the spanning tree, so the vertices of D are altogether adjacent to at least
2k − 2 vertices in G. Thus, there are at most k∆ − (2k − 2) vertices not in D which D dominates,
giving n ≤ k∆ − k + 2 = k(∆ − 1) + 2 vertices in G. It follows that γC (G) ≥ (n − 2)/(∆ − 1)
and thus, for any incremental algorithm A LG for CDS, CROFF (A LG) ≤ ∆ − 1.
The next proposition follows from the fact that on always-connected graphs with γ(G) = 1 with
at least four vertices, PARENT selects at most n − 2 vertices.
Proposition 16. For DS and CDS on always-connected graphs, for n ≥ 4, the inequality
CROFF (PARENT) ≤ n − 2
holds for the strict competitive ratio.
Proof. We need to consider only the case of γ(G) = 1, since otherwise there is nothing to prove,
and thus there is a vertex v adjacent to every other vertex of G. Since after the arrival of any
vertex, PARENT increases the size of the dominating set by at most one, it suffices to prove that,
immediately after some vertex has been processed, there are two vertices not selected by PARENT.
First note that once v is selected, PARENT does not select any other vertex and thus we can assume
that v is not the first vertex. Suppose that v arrives after vi , i ≥ 2. The vertex vi has not yet been
selected when v arrives, and v is dominated by v1 , so there are two vertices not selected. The last
remaining case is when v arrives as the second vertex. In this case we distinguish whether v3 is
adjacent to v1 , or not. If v3 is adjacent to v1 , then v is not selected, there are two vertices not
selected (v and v3 ), and we are done. If v3 is not adjacent to v1 , then PARENT selects v when v3
arrives. No further vertex will be added to the dominating set, concluding the proof.
In the next result and in Proposition 20 in Section 4 we use layers in an always-connected graph
G defined by letting L assign layer numbers to vertices in the following manner. Let L(v1 ) = 0
and for i > 1, L(vi ) = 1 + min {L(vj ) | vj is a neighbor of vi in Gi }.
Our next aim is to show that for always-connected bipartite graphs, there is an n/4-competitive
algorithm against O PT INC . This is achieved by considering the following first parent algorithm, denoted F IRST PARENT, which generalizes PARENT. For DS and CDS, the algorithm F IRST PARENT
always selects v1 and for each vertex vi , i > 1, if vi is not dominated by one of the already selected vertices, it selects a neighbor of vi with the smallest layer number. For TDS, we add the
following to F IRST PARENT, so that the dominating set produced is total: If, when vi arrives, vi
and vj (j < i) are the only vertices of a component of size 2, then besides vj , F IRST PARENT also
selects vi .
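A sketch of F IRST PARENT for DS and CDS in the same arrival encoding, maintaining the layer numbers L(·) defined above and selecting, for an undominated vertex, a neighbour of smallest layer; the additional TDS rule for components of size 2 is omitted.

```python
def first_parent(arrivals):
    """FIRSTPARENT for DS and CDS: maintain L(v1) = 0 and
    L(vi) = 1 + min{L(vj) : vj a neighbour of vi in G_i}; when an undominated
    vertex arrives, select one of its neighbours with the smallest layer number.
    Always-connected inputs are assumed."""
    layer, D = {}, set()
    for i, (v, nbrs) in enumerate(arrivals):
        layer[v] = 0 if i == 0 else 1 + min(layer[u] for u in nbrs)
        if i == 0:
            D.add(v)
        elif not any(u in D for u in nbrs):
            D.add(min(nbrs, key=lambda u: layer[u]))
    return D

# A star given centre-first: only the centre is ever selected.
print(first_parent([(0, []), (1, [0]), (2, [0]), (3, [0])]))   # {0}
```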
Theorem 17. For DS, CDS, and TDS on always-connected bipartite graphs, we have
CRINC (F IRST PARENT) ≤ n/4
for n ≥ 4.
Proof. We consider DS and CDS first. Since F IRST PARENT is an instantiation of PARENT, Proposition 11 implies that the incremental dominating set constructed by F IRST PARENT is connected.
Therefore, the fact that for any graph G with at least three vertices O PT INC (G) ≤ O PTTINC (G) ≤ O PTCINC (G) + 1 implies that it is sufficient to prove that F IRST PARENT is n/4-competitive against O PT INC . Furthermore, we only need to consider the case O PT INC (G) < 4, since otherwise
F IRST PARENT is trivially n/4-competitive. Since G is bipartite, there are no edges between vertices of a single layer. Our first aim is to bound the number of layers.
Claim: If O PT INC (G) < 4, then G has at most 6 layers.
To establish the claim, we prove that if an always-connected graph G has 2k + 1 layers, then
O PT INC (G) > k. For the sake of contradiction, suppose that there exist graphs G that are always-connected with 2k + 1 layers such that O PT INC (G) ≤ k, and among all such graphs choose one,
G, with the smallest number of vertices. Since any dominating set contains at least one vertex,
we have k ≥ 1. Let D be an incremental dominating set of G with |D| ≤ k and let l be the
largest integer such that Gl has 2k − 1 layers. Since G is the smallest counterexample, we have
O PT INC (Gl ) ≥ k. Recall that Dl is defined as D∩Gl . The fact that D is an incremental dominating
set implies that Dl is a dominating set of Gl . We claim that |Dl | = k, since otherwise Dl would be
an incremental dominating set of Gl with |Dl | < k, contradicting the fact that O PT INC (Gl ) ≥ k.
The fact that |Dl | = k is equivalent to D ⊆ V (Gl ) and, in particular, L(v) ≤ 2k − 1 for each
vertex v from D. Let w be a vertex of G such that L(w) = 2k + 1, such a vertex exists since G
has 2k + 1 layers. By the definition of layers the vertex w does not have a neighbor in any of the
first 2k − 1 layers and thus is not adjacent to any vertex of D, contradicting the fact that D is a
dominating set of G. This concludes the proof of the claim.
In the rest of the proof, we distinguish several cases according to the number of layers of G. If there
are at most two layers, then F IRST PARENT selects only the root v1 and the result easily follows.
Let li denote the size of the i-th layer and si the number of vertices selected by F IRST PARENT from
the i-th layer. For convenience, we will ignore the terms s0 and l0 , both of which are one, which
is viable since we are dealing with the asymptotic competitive ratio. Because F IRST PARENT can
add a vertex from the i-th layer to the dominating set only when a (non-dominated) vertex from
the (i + 1)-st layer arrives, we have
si ≤ li+1 .
(Ai)
Clearly,
si ≤ li .
(Bi)
The letter i in equations (A) and (B) indicates the layer for which the equation is applied. If there
are precisely three layers, then O PT INC (G) ≥ 2 and we must prove that s1 + s2 ≤ n/2. However,
s2 = 0, and s1 /2 ≤ l1 /2 by (B1) and s1 /2 ≤ l2 /2 by (A1). Adding the last two inequalities
yields s1 ≤ l1 /2 + l2 /2 = n/2, as required.
We use the same idea as for three layers also in the cases of four and five layers, albeit the counting
is slightly more complicated. First we deal separately with the case where O PT INC (G) = 2,
and, consequently, there are four layers. Note that the two vertices in the optimal solution are
necessarily in layers 0 and 2, and it follows that l2 = 1. Furthermore, (A1) implies that s1 ≤ 1
and (B2) implies that s2 ≤ 1. Since s3 = 0, F IRST PARENT always selects at most 3 vertices,
which yields the desired result. Assume now that O PT INC (G) ≥ 3 and therefore, our aim is to
prove that F IRST PARENT(G) ≤ 3n/4. Adding 1/4 times (A1), 3/4 times (B1), 1/2 times (A2),
and 1/2 times (B2) yields
s1 + s2 ≤ 3l1 /4 + 3l2 /4 + l3 /2.
(1)
If there are four layers, then s3 = 0 and the right-hand side of (1) satisfies 3l1 /4 + 3l2 /4 + l3 /2 ≤
3(l1 + l2 + l3 )/4 = 3n/4, which yields the desired result. If there are five layers, we add 3/4
times (A3) and 1/4 times (B3) to (1), which gives s1 + s2 + s3 ≤ 3(l1 + l2 + l3 + l4 )/4 = 3n/4,
as required. The last remaining case is that of six layers and O PT INC (G) = 3, which is dealt with
similarly to that of four layers and O PT INC (G) = 2. In particular, the vertices selected by O PT INC
necessarily lie in layers 0, 2, and 4, and thus l0 = l2 = l4 = 1. Now observing that s5 = 0 and
adding (Bi) for all even i to (Ai) for i = 1 and i = 3 yields that F IRST PARENT(G) ≤ 5, which
implies the result in the always-connected case.
For TDS, the additional vertices accepted by F IRST PARENT must by accepted by any incremental
online algorithm, so the result also holds for TDS.
Proposition 18. For DS, CDS, and TDS, we have CRINC (F IRST PARENT) ≤ n/2 for n ≥ 2.
Proof. Since for any graph, F IRST PARENT constructs an incremental dominating set, we need to
consider only the cases where O PT INC (G) ≤ 1, O PTCINC (G) ≤ 1, and O PTTINC (G) ≤ 1. For TDS,
either G has no edges, in which case the empty set of vertices is a feasible solution constructed
both by O PTTINC and F IRST PARENT, or G contains an edge, in which case O PTTINC (G) ≥ 2 and
the bound follows. Since O PT INC (G) ≤ O PTCINC (G), it is sufficient to consider the case where
O PT INC (G) = 1. If, at any point, Gi has more than one component, then O PT INC (Gi ) ≥ 2. Thus,
if O PT INC (Gi ) = 1, G is a star and is always-connected. Thus, the center vertex must arrive as
either the first or second request, so F IRST PARENT(G) ≤ 2 ≤ n.
Figure 1: A two-layer construction; the minimum connected dominating set is depicted in red
(Proposition 19).
Proposition 19. For any online algorithm A LG for DS, CDS, or TDS on always-connected bipartite graphs, CRINC (A LG) ≥ n/4 and CRINC (A LG) ≥ ∆/2.
Proof. We prove that for any online algorithm A LG for DS, CDS, or TDS and for any integer
∆ ≥ 2, there is a bipartite graph G with maximum degree ∆ such that A LG(G) = ∆ ≥ n/2
and O PT INC (G) = O PTCINC (G) = O PTTINC (G) = 2. Consider the graph consisting of a root v,
∆ vertices u1 , . . . , u∆ adjacent to the root and constituting the first layer, and an additional ∆ −
1 vertices w1 , . . . , w∆−1 , which will be given in that order, constituting the second layer, with
adjacencies as follows: For i = 1, . . . , ∆ − 1, the i-th vertex wi of the second layer is adjacent to
∆−i+1 vertices of the first layer in such a way that we obtain the following strict set containment
of sets of neighbors of these vertices: N (wi ) ⊃ N (wi+1 ) for all i = 1, . . . , ∆ − 2. An example
of this construction for ∆ = 4 is depicted in Figure 1. After the entire first layer is presented to
the algorithm, the vertices of the first layer are indistinguishable to the algorithm and D∆+1 does
not necessarily contain more than one vertex. For each i = 1, . . . , ∆ − 1, the neighbors of wi are
chosen from the first layer in such a way that N (wi−1 ) ⊃ N (wi ), the degree of wi is ∆−i+1, and
N (wi ) contains as many vertices not contained in the dominating set constructed by A LG so far as
possible. Consider the situation when the vertex wi arrives. It is easy to see that if the set N (wi )
does not contain a vertex from the dominating set constructed so far, then A LG must select at least
one additional vertex at this time. The last observation implies that A LG selects at least ∆ − 1
vertices from the first and second layer, plus the root.
Since there is a vertex u in the first layer that is adjacent to all vertices in the second layer, {u, v}
is an incremental connected dominating set of G, which concludes the proof.
4 The Cost of Being Incremental
This section is devoted to comparing the performance of incremental algorithms and O PT OFF .
Since O PT OFF performs at least as well as O PT INC and O PT INC performs at least as well as any
online algorithm, each lower bound in Table 2 is at least the maximum of the corresponding lower
bound in Table 1 and the corresponding lower bound for CROFF (O PT INC ). Similarly, each upper
bound in Table 1 and corresponding upper bound for CROFF (O PT INC ) is at least the corresponding
upper bound in Table 2. In both cases, we mention only bounds that cannot be obtained in this
way from cases considered already.
The following result, which improves bounds of Proposition 16, generalizes the idea of Proposition 8.
Proposition 20. For DS on always-connected graphs, CROFF (O PT INC ) ≤ n/2.
Proof. For a fixed ordering of G, consider the layers L(v) assigned to vertices of G. It is easy to
see that the set of vertices in the even layers is an incremental solution for DS and similarly for the
set of vertices in odd layers plus the vertex v1 . Therefore, O PT INC can select the smaller of these
two sets, which necessarily has at most n/2 vertices.
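The construction in the proof is easy to make explicit: compute the layer numbers from the arrival order, and commit to the smaller of the two incremental dominating sets (even layers, or odd layers together with v1). The sketch below assumes an always-connected input in the encoding used earlier.

```python
def smaller_layer_set(arrivals):
    """Compute the layer numbers from the arrival order and return the smaller of
    the two incremental dominating sets used in Proposition 20: the even-layer
    vertices, or the odd-layer vertices together with v1."""
    layer = {}
    for i, (v, nbrs) in enumerate(arrivals):
        layer[v] = 0 if i == 0 else 1 + min(layer[u] for u in nbrs)
    even = {v for v, l in layer.items() if l % 2 == 0}
    odd = {v for v, l in layer.items() if l % 2 == 1} | {arrivals[0][0]}
    return min(even, odd, key=len)
```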
Proposition 21. The following hold for the strict competitive ratio. For DS on bipartite planar
graphs, CROFF (O PT INC ) ≥ n − 1 and CROFF (O PT INC ) ≥ ∆. For CDS on bipartite planar graphs,
CROFF (O PT INC ) ≥ n.
Proof. We prove that for each ∆ ≥ 3, i > 0, and n = i(∆ + 1), there is a bipartite planar graph G
INC
with n vertices and maximum degree ∆ such that O PT INC (G) = n∆/(∆ + 1), O PTC
(G) = n,
and γ(G) = γC (G) = n/(∆ + 1). Let G consist of i disjoint copies of the star on ∆ + 1
vertices, with the center of each star arriving as the last vertex among the vertices of that particular
star. Clearly, γ(G) = γC (G) = n/(∆ + 1). On the other hand, any incremental dominating
set has to contain every vertex, except the last vertex of each star, since all these vertices are
pairwise nonadjacent. In addition, any incremental connected dominating set has to contain the
centers of the stars to preserve connectedness of the solution in each component. It follows that
for dominating set, O PT INC selects n∆/(∆ + 1) vertices, which proves the claim for bounded-degree graphs. For connected bipartite planar graphs, setting ∆ = n − 1 and i = 1 in the above
construction gives the result for both dominating set and connected dominating set.
Proposition 22. For IDS and for the strict competitive ratio,
CROFF (O PT INC ) ≥ ∆ and CROFF (O PT INC ) ≥ n − 1.
Proof. Let G be a star K1,∆ , ∆ ≥ 2, where v2 is the unique vertex of degree ∆. Clearly, γI (G) =
1. Since v1 is always selected by any algorithm constructing an incremental solution, the vertex
v2 cannot be selected. Consequently, all n − 1 = ∆ vertices of degree 1 have to be selected in the
dominating set, which proves the lower bound of the first part.
To prove the upper bound, consider any graph G and let S = {s1 , . . . , sk } be an independent
dominating set of G with size k = γI (G). Let Ri be a set of vertices being dominated by si
for each i, where Ri are pairwise disjoint. Let Ri0 be the set Ri \ {si }. For each i, the vertex si
is in D if and only if all the vertices of Ri0 are not in D. It follows that |D|/|S| is bounded by
the maximum size of Ri0 , which is ∆, concluding the proof of the upper bound. The second part
follows from the first by choosing ∆ = n − 1 for each n ≥ 3.
Proposition 23. For IDS on always-connected graphs, ∆ − 1 ≤ CROFF (O PT INC ) ≤ ∆.
Proof. The upper bound follows from Proposition 22. To prove the lower bound, consider a path
Pn in the standard order, with ∆ − 2 vertices of degree 1 attached to vi for each even i, where
the vertices of degree 1 arrive after all the vertices of the path. The even vertices vi are centers
of stars of degree ∆. Furthermore, any incremental algorithm for IDS on a path in the standard
order selects exactly the odd vertices of the path and thus also select all the vertices of degree 1.
Let k = bn/2c. It follows that O PT INC selects k∆ − (k − 1) vertices, while the optimal offline
solution has size k, which implies the result.
Theorem 1 3. implies the following bound on the performance of O PT INC on trees.
Corollary 24. For DS on trees, CROFF (O PT INC ) ≤ 2.
A fan of degree ∆ is the graph obtained from a path P∆ by addition of a vertex v that is adjacent
to all vertices of the path, as in Figure 2. The adversarial order of a fan is defined by the standard
order of the underlying path, followed by the vertex v.
Proposition 25. For always-connected planar graphs (and, thus, also on general planar graphs),
the following strict competitive ratio results hold.
• For DS, CROFF (O PT INC ) ≥ n/2.
• For CDS, CROFF (O PT INC ) ≥ n − 2.
• For TDS, CROFF (O PT INC ) ≥ n/2 − 1.
Proof. Let G be a fan of degree ∆, where n = ∆ + 1, in the adversarial order. We prove that
INC
O PT INC (G) = n/2, O PTC
(G) = O PTTINC (G) = n − 2, γ(G) = γC (G) = 1, and γT (G) = 2.
Since Gn−1 induces a path, by Proposition 6 the size of any incremental dominating set of G is at
least n/2. Similarly, Corollary 5 implies the size of any incremental connected (total) dominating
set of G is at least n − 2. Moreover, it is easy to see that there is an incremental solution of size
exactly n − 2 for all considered problems. On the other hand, vn forms a connected dominating
set of size 1, and vn with, say, v1 , form a total dominating set of size 2, which concludes the
proof.
An alternating fan with k fans of degree ∆ consists of k copies of the fan of degree ∆, where the
individual copies are joined in a path-like manner by identifying some of the vertices of degree 2,
as in Figure 2. Thus, n = k(∆ + 1) − (k − 1) and k = (n − 1)/∆. The adversarial order of an
alternating fan is defined by the concatenation of the adversarial orders of the underlying fans.
Proposition 26. For DS on always-connected graphs, CROFF (O PT INC ) ≥ (∆ − 1)/2.
Figure 2: A fan with ∆ = 4 (left; Proposition 25) and an alternating fan with k = 3 and ∆ = 4
(right; Proposition 26).
Proof. Let G be an alternating fan with k fans of degree ∆ for any ∆ ≥ 4 given in the adversarial
order. We prove that O PT INC (G) > (∆ − 1)n/(2∆) and γ(G) = (n − 1)/∆. (In Figure 2,
the vertices belonging to the dominating set are red.) Since, by Proposition 6, any incremental
dominating set on a path P in the standard order has at least d|V (P )|/2e vertices, O PT INC must select at least d(n − k)/2e vertices of G. Inserting k = (n − 1)/∆ into (n − k)/2 gives (n(∆ − 1) + 1)/(2∆), which together with γ(G) = (n − 1)/∆ proves the proposition.
A modular bridge of degree ∆ with k sections, where k is even, is the graph obtained from a path
on k(∆ − 1) vertices, with an additional k chord vertices. There is a perfect matching on the
chord vertices u1 , . . . , uk with u2i is adjacent to u2i−1 for all i = 1, . . . , k/2. Furthermore, the
i-th chord vertex is adjacent to the vertices of the i-th section; see Figure 3 for an example. The
adversarial order of a modular bridge is defined by the standard order of the path, followed by the
chord vertices in any order.
Figure 3: A modular bridge with k = 4 and ∆ = 5 (Proposition 27).
Proposition 27. For TDS on always-connected graphs, CROFF (O PT INC ) ≥ ∆ − 1.
Proof. Let G be a modular bridge of degree ∆ with k sections given in the adversarial order. Let
m = k(∆ − 1). Since Gm is a path, by Corollary 5, we have A LG(G) ≥ k(∆ − 1) − 1. Clearly,
γT (G) ≤ k.
A bridge of degree ∆ with k sections is obtained from a modular bridge of degree ∆ − 1 with k
sections by joining vertices u2i and u2i+1 by an edge for each i = 1, . . . , k/2 − 1; see Figure 4
for an example. The adversarial order of a bridge is identical with the adversarial order of the
underlying modular bridge.
Figure 4: A bridge with k = 4 and ∆ = 6 (Proposition 28).
Proposition 28. For CDS on always-connected graphs, CROFF (O PT INC ) ≥ ∆ − 2.
Proof. Let G be a bridge of degree ∆ with k sections, given in the adversarial order. Let m =
k(∆ − 2). Since Gm induces a path, we have A LG(G) ≥ A LG(Gm ) = k(∆ − 2) − 1, by
Corollary 5. The chord vertices form a connected dominating set of G and, thus, γC (G) ≤ k.
A rotor of degree ∆, where ∆ ≥ 2 is even, is a graph obtained from a star, K1,∆ , on ∆ + 1 vertices
by adding the edges of a perfect matching on the pendant vertices, as in Figure 5. The adversarial
order of a rotor G of degree ∆ is any fixed order such that G2i is a graph with a perfect matching
for each i = 1, . . . , ∆/2 and the central vertex of the original star is the last vertex to arrive.
Proposition 29. For CDS, CROFF (O PT INC ) ≥ ∆ + 1, and for TDS, CROFF (O PT INC ) ≥ ∆/2.
Proof. Let G be a rotor of degree ∆ given in the adversarial order. Since any algorithm producing
an incremental solution D of either CDS or TDS on a K2 must select both vertices, D∆ contains all
vertices of G∆ . Furthermore, at least one additional vertex is required to make D∆ connected
and thus O PTCINC (G) ≥ ∆ + 1. On the other hand, the set D = V (G) is clearly an incremental
connected dominating set of G and thus O PTCINC (G) = ∆ + 1. For total dominating set, it is easy
to see that D∆ is an incremental total dominating set of G and thus O PTTINC (G) = ∆. The proof
is concluded by observing that the central vertex (the central vertex plus an arbitrary other vertex)
forms a connected (total) dominating set of G and thus γC (G) = 1 and γT (G) = 2.
For any n ≥ 2, the two-sided fan of size n is the graph obtained from a path on n − 2 vertices by
attaching two additional vertices, one to the even-numbered vertices of the path and the other to
the odd-numbered vertices of the path. The adversarial order of a two-sided fan is defined by the
standard order of the path, followed by the two additional vertices. See Figure 5 for an illustration
of a two-sided fan of size 10.
Figure 5: The rotor of degree 8 (left, Proposition 29) and two-sided fan of size 10 (right, Proposition 30).
Proposition 30. For any incremental algorithm A LG for CDS or TDS on always-connected bipartite graphs, CROFF (A LG) ≥ (n − 3)/2 holds for the strict competitive ratio.
Proof. Let Gn be a two-sided fan of size n, given in the adversarial order. It suffices to prove
that O PTCINC (Gn ) = O PTTINC (Gn ) = n − 3 and γ(Gn ) = γC (Gn ) = γT (Gn ) = 2. This is
straightforward from the facts that the first n − 2 vertices of G induce a path, that online connected
and total dominating sets coincide, and that any incremental connected dominating set on a path
of length k has size at least k − 1.
5 Conclusion and Open Problems
Online algorithms for four variants of the dominating set problem are compared using competitive analysis to O PT INC and O PT OFF , two reasonable alternatives for the optimal algorithm having
knowledge of the entire input. Several graph classes are considered, and tight results are obtained
in most cases.
The difference between O PT INC and O PT OFF is that O PT INC is required to maintain an incremental
solution (as any online algorithm), while O PT OFF is only required to produce an offline solution for
the final graph. The algorithms are compared to both O PT INC and O PT OFF , and O PT INC is compared
to O PT OFF , in order to investigate why all algorithms tend to perform poorly against O PT OFF . Is
this due to the requirement that online algorithms have to maintain an incremental solution at all
times, or is it because of the lack of knowledge of the future that both O PT INC and O PT OFF have?
Inspecting the results in the tables, perhaps the most striking conclusion is that the competitive
ratios of any online algorithm and O PT INC , respectively, against O PT OFF , are almost identical. This
indicates that the requirement to maintain an incremental dominating set is a severe restriction,
which can be offset by the full knowledge of the input only to a very small extent. On the other
hand, when we restrict our attention to online algorithms against O PT INC , it turns out that the
handicap of not knowing the future still presents a barrier, leading to competitive ratios of the
order of n or ∆ in most cases.
One could reconsider the nature of the irrevocable decisions, which originally stemmed from
practical applications. Which assumptions on irrevocability are relevant for practical applications,
and which irrevocability components make the problem hard from an online perspective? We
expect that these considerations will apply to many other online problems as well.
There is relatively little difference observed between three of the variants of dominating set considered: dominating set, connected dominating set, and total dominating set. In fact, the results
for total dominating set generally followed directly from those for connected dominating set as a
consequence of Lemma 4. The results for independent dominating set were significantly different
from the others. It can be viewed as the minimum maximal independent set problem since any
maximal independent set is a dominating set. This problem has been studied in the context of
investigating the performance of the greedy algorithm for the independent set problem. In fact,
the unique incremental independent dominating set is the set produced by the greedy algorithm
for independent set.
In yet another orthogonal dimension, we compare the results for various graph classes. Dominating set is a special case of set cover and is notoriously difficult in classical complexity, being
NP-hard [16], W [2]-hard [10], and not approximable within c log n for any constant c on general
graphs [12]. On the positive side, on planar graphs, the problem is FPT [1] and admits a PTAS [2],
and it is approximable within log ∆ on bounded degree graphs [8]. On the other hand, the relationship between the performance of online algorithms and structural properties of graphs is not
particularly well understood. In particular, there are problems where the absence of knowledge
of the future is irrelevant; examples of such problems in this work are CDS and TDS on trees,
and IDS on any graph class. As expected, for bounded degree graphs, the competitive ratios are
of the order of ∆, but closing the gap between ∆/2 and ∆ seems to require additional ideas. On
the other hand, for planar graphs, the problem, rather surprisingly, seems to be as difficult as the
general case when compared to O PT OFF . When online algorithms for planar graphs are compared
to O PT INC , we suspect there might be an algorithm with constant competitive ratio. At the same
time, this case is the most notable open problem directly related to our results. Drawing inspiration from classical complexity, one may want to eventually consider more specific graph classes
in the quest for understanding exactly what structural properties make the problem solvable. From
this perspective, our consideration of planar, bipartite, and bounded degree graphs is a natural first
step.
Acknowledgment The authors would like to thank anonymous referees for constructive comments on an earlier version of the paper.
References
[1] J. Alber, H. L. Bodlaender, H. Fernau, T. Kloks, and R. Niedermeier. Fixed parameter algorithms for dominating set and related problems on planar graphs. Algorithmica, 33(4):461–
493, 2002.
[2] B. S. Baker. Approximation algorithms for NP-complete problems on planar graphs. Journal
of the ACM, 41(1):153–180, 1994.
[3] C. Berge. Theory of Graphs and its Applications. Meuthen, London, 1962.
[4] M. Böhm, J. Sgall, and P. Veselý. Online colored bin packing. In E. Bampis and O. Svensson, editors, 12th International Workshop on Approximation and Online Algorithms (WAOA),
volume 8952 of Lecture Notes in Computer Science, pages 35–46. Springer, 2015.
[5] A. Borodin and R. El-Yaniv. Online Computation and Competitive Analysis. Cambridge
University Press, 1998.
[6] J. Boyar and K. S. Larsen. The seat reservation problem. Algorithmica, 25(4):403–417,
1999.
[7] M. Chrobak, J. Sgall, and G. J. Woeginger. Two-bounded-space bin packing revisited. In
C. Demetrescu and M. M. Halldórsson, editors, 19th Annual European Symposium (ESA),
volume 6942 of Lecture Notes in Computer Science, pages 263–274. Springer, 2011.
[8] V. Chvátal. A greedy heuristic for the set-covering problem. Mathematics of Operations
Research, 4(3):233–235, 1979.
[9] B. Das and V. Bharghavan. Routing in ad-hoc networks using minimum connected dominating sets. In IEEE International Conference on Communications (ICC), volume 1, pages
376–380, 1997.
[10] R. G. Downey and M. R. Fellows. Fixed-parameter tractability and completeness I: Basic
results. SIAM Journal on Computing, 24(4):873–921, 1995.
[11] D.-Z. Du and P.-J. Wan. Connected Dominating Set: Theory and Applications. Springer,
New York, 2013.
[12] U. Feige. A threshold of ln n for approximating set cover. Journal of the ACM, 45(4):634–
652, 1998.
[13] T. W. Haynes, S. Hedetniemi, and P. Slater. Fundamentals of Domination in Graphs. Marcel
Dekker, New York, 1998.
[14] M. Henning and A. Yao. Total Domination in Graphs. Springer, New York, 2013.
[15] A. R. Karlin, M. S. Manasse, L. Rudolph, and D. D. Sleator. Competitive snoopy caching.
Algorithmica, 3:79–119, 1988.
[16] R. M. Karp. Reducibility among combinatorial problems. In R. E. Miller and J. W. Thatcher,
editors, Complexity of Computer Computations, The IBM Research Symposia Series, pages
85–103. Plenum Press, New York, 1972.
[17] G.-H. King and W.-G. Tzeng. On-line algorithms for the dominating set problem. Inform.
Process. Lett., 61(1):11–14, 1997.
[18] D. König. Theorie der Endlichen und Unendlichen Graphen. Chelsea, New York, 1950.
[19] C. L. Liu. Introduction to Combinatorial Mathematics. McGraw-Hill, New York, 1968.
[20] O. Ore. Theory of Graphs, volume 38 of Colloquium Publications. American Mathematical
Society, Providence, 1962.
[21] D. D. Sleator and R. E. Tarjan. Amortized efficiency of list update and paging rules. Communications of the ACM, 28(2):202–208, 1985.
| 8 |
A subsystems approach for parameter estimation of ODE
models of hybrid systems
Anastasis Georgoulas
Allan Clark
SynthSys— Synthetic and Systems Biology
University of Edinburgh
Edinburgh, United Kingdom
[email protected]
[email protected]
Andrea Ocone
[email protected]
Stephen Gilmore
Guido Sanguinetti
School of Informatics
University of Edinburgh
Edinburgh, United Kingdom
[email protected]
[email protected]
We present a new method for parameter identification of ODE system descriptions based on data
measurements. Our method works by splitting the system into a number of subsystems and working
on each of them separately, thereby being easily parallelisable, and can also deal with noise in the
observations.
1 Introduction
Kinetic modelling of biochemical systems is a growing area of research in systems biology. The combination of mechanistic insight and predictive power afforded by kinetic models means that these have
become a popular tool of investigation in biological modelling. Within the class of kinetic models, ordinary differential equations (ODEs) are by far the dominant modelling paradigm. The importance of
nonlinear systems of ODEs stems not only from their value in modelling population data (e.g. microarrays, luciferase assays) but also from their role in describing the average evolution of stochastic discrete and hybrid systems, using tools such as the fluid approximation [2] or the linear noise approximation
([13, 12]). The availability of many analysis tools for ODEs means that qualitative (e.g. the presence of
bistability) and quantitative information about the system can readily be obtained from the analysis of
the average behaviour of the system.
While the use of non-linear ODEs is an undoubted success story in systems biology, it is not an
unqualified one. The high predictive power of non-linear systems of ODEs often comes at the cost
of an explosion in the number of parameters, leading to significant difficulties in calibrating models.
Parameter estimation requires a large amount of high quality data even for moderate sized systems.
From the computational point of view, this is often exceptionally intensive as it requires solving the
system of ODEs many times: typically, the number of solutions scales exponentially with the number
of parameters (a particular form of curse of dimensionality). Therefore, many algorithms for parameter
estimation inevitably do not reach convergence, calling into question the validity of model predictions.
Here, we attack the curse of dimensionality of parameter estimation by proposing an approximate
solution which effectively reduces a high dimensional problem into several weakly coupled low dimensional problems. Our strategy builds on the fact that, in many biochemical networks, knowledge of the
true trajectory of some species would effectively decouple the parameter estimation problem across different subsystems. Therefore, we propose to use a statistical procedure based on Gaussian Process (GP)
regression to infer the full trajectory of each species in the network based on the limited observations.
Parameters of each subsystem can then be updated in parallel using a statistically correct Markov Chain
Monte Carlo procedure. We show on simulated data sets that this approach is as accurate as the existing
state of the art. Furthermore, it is parallelisable, leading to significant computational speed ups, and its
statistical nature means that we can accurately quantify the uncertainty in the parameter estimates.
The rest of the paper is organised as follows; we start by introducing some essential statistical concepts which are key ingredients of our approach. We then describe how these concepts can be used in
a parameter estimation problem, and present a parallel implementation of the method. We present results on two benchmark data sets, reporting competitive accuracy against state of the art methods. We
conclude by discussing the potential and limitations of our approach, as well as indicating avenues for
further research.
2 Background
2.1 Parameter estimation
There exists a wide variety of parameter estimation methods that have been proposed in the literature and
are in use. They all share the notion of exploring the parameter space in order to obtain the optimal
values of the parameters, but they can differ significantly in their approach.
The notion of optimality can be expressed through a fitness function, which expresses how well a
certain parameter value can explain the data. Usually, calculating the fitness function involves simulating
the behaviour of the system assuming that parameter value is true, and then comparing the results to the
observed behaviour. In general, however, the function can reflect one’s prior knowledge of the problem
or any constraints that are deemed suitable— for example, by including terms to penalize large values
of the parameter. The problem of parameter estimation can then be seen as an optimization problem in
terms of the fitness function which we aim to maximize (or, equivalently, an error function which must
be minimized).
One point that should be stressed is the importance of the number of parameters. The higher this is,
the higher the dimension of the search space and the harder it becomes to efficiently search it. Intuitively,
there are more directions in which we can (or must) move in order to explore the search space, therefore
the parameter estimation task becomes more complex and time-consuming.
A particularly difficult problem arises when there are multiple parameter sets which can give rise to
very similar data. This is further exacerbated by the presence of noise, which means we are often unable
to say with certainty what the true values of the data are, making it harder to choose between slightly
different results. Recent research ([5, 7]) has shown that wide ranges of parameter sets often produce
virtually indistinguishable results, indicating that this is a widespread problem rather than a sporadic one.
2.2 Markov Chain Monte Carlo
In this work, we use a Markov Chain Monte Carlo (MCMC) approach, which allows us to selectively
and efficiently explore the parameter space (for more details, see, for example, [6]).
We first assign a prior distribution P(p) to the parameters, which represents our previous beliefs
about their values. For instance, if we have no intrinsic reason to favour one value over another, we can
use a uniform distribution as a prior, which would indicate that any parameter value would be equally
likely without seeing any data.
However, having some observations introduces additional information which may alter our prior
belief. Therefore, the posterior distribution P(p | D) represents the probability of a parameter having the
value p after observing the dataset D. For example, if we see data which are more likely to have been
generated by a specific subset of parameter values, we may start to abandon our uniform prior in favour
of a distribution which is biased towards these likelier values. In other words, the posterior distribution
represents our belief for the parameter values as determined by the additional knowledge of the observed
data.
The MCMC approach can be summarised as follows. Starting with a random set of values p for the parameters, we calculate the posterior probability of the parameters given the data, P(p | D). We then sample a new set of parameters p′ from a Gaussian distribution centred on the current values. The new parameters have probability α of being accepted as a better estimate, where α = min(P(p′ | D)/P(p | D), 1). In other words, if the new parameters give rise to a higher likelihood, they are always accepted; otherwise, they can still be accepted with a certain probability. This step is repeated; after a number of steps, the procedure converges and we can sample from the posterior distribution of parameters.
This has the advantage of not providing a single-point estimate of the parameters, but rather an entire
probability distribution. In practice, we can take a number of samples from the distribution and use
them as its representatives. For example, we can perform Kernel Density Estimation[3], which returns a
smoothed histogram approximating the distribution. We can therefore also obtain information about the
confidence or uncertainty of the estimated parameter values.
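As a concrete illustration (our sketch, not the authors' code), the sampler just described can be written in a few lines of Python; log_post stands for any routine returning the log of the unnormalised posterior P(p | D), and the smoothed density mentioned above can then be obtained from the returned samples with, e.g., scipy.stats.gaussian_kde.

import numpy as np

def metropolis(log_post, p0, n_steps=10000, step=0.1, seed=0):
    rng = np.random.default_rng(seed)
    p = np.asarray(p0, dtype=float)
    lp = log_post(p)
    samples = []
    for _ in range(n_steps):
        proposal = p + step * rng.normal(size=p.shape)   # Gaussian proposal centred on current values
        lp_new = log_post(proposal)
        if np.log(rng.uniform()) < lp_new - lp:          # accept with probability min(1, ratio)
            p, lp = proposal, lp_new
        samples.append(p.copy())
    return np.array(samples)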
3 Gaussian Processes regression
Our method relies critically on a statistical imputation of gene expression profiles, which enables us to
break down dependencies between subsets of the parameters. To achieve this, we interpolate experimental points by using a non-parametric method based on Gaussian Processes (GPs). In this section we
briefly review the statistical foundations of GPs; for a thorough review, the reader is referred to [11].
A GP is a (finite or infinite) collection of random variables, any finite subset of which is distributed according to a multivariate normal distribution. As a random function f (x) can be seen as a collection of
random variables indexed by its input argument, GPs are a natural way of describing probability distributions over function spaces. A GP is characterised by its mean function µ(x) and covariance function
k(x, x0 ), a symmetric function of two variables which has to satisfy the Mercer conditions ([11]). In
formulae, the definition of GP can be written as
f ∼ G P (µ, k) ↔ [ f (x1 ) , . . . , f (xN )] ∼ N ([µ (x1 ) , . . . , µ (xN )] , K)
for any finite set of inputs x1 , . . . , xN . Here
Ki j = k (xi , x j ) .
The choice of mean and covariance functions is largely determined by the problem under consideration.
In this paper, we will use a zero mean GP with MLP (Multi-Layer Perceptron) covariance function
[11]; this choice is motivated by the ability of the non-stationary MLP covariance to capture saturating
behaviours such as those frequently encountered in gene expression data.
Given some observations y of the function f at certain input values X, and given a noise model
p(y | f, X), one can use Bayes’ theorem to obtain a posterior over the function values at the inputs
p(f | y, X, θ) = p(y | f, X, θ) p(f | X, θ) / p(y | X, θ)    (1)
where θ denotes the parameters of the GP prior, called hyperparameters. One can then obtain a predictive
distribution for the function value f ∗ at a new input point x∗ by averaging the conditional distribution of
p(f∗ | f) under the posterior (1):
p(f∗ | y, X, x∗, θ) = ∫ p(f∗ | f, X, x∗, θ) p(f | y, X, θ) df.
If the noise model p(y | f) is Gaussian, then one can obtain an analytical expression for the posterior (1), as all the integrals involved are Gaussian. We notice that this analytical property remains even if the variance
of the Gaussian noise is different at different input points. A further advantage of the Gaussian noise
setting is the ability to obtain a closed form expression for the marginal likelihood p(y|x, θ ), which then
enables straightforward estimation of the hyperparameters (and the observation noise variance) through
type II maximum likelihood.
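The computations involved are short; the following sketch (ours) implements GP regression with Gaussian observation noise in plain numpy, using a squared-exponential kernel for brevity rather than the MLP covariance discussed above.

import numpy as np

def sq_exp(x1, x2, length=10.0, variance=1.0):
    d = x1[:, None] - x2[None, :]
    return variance * np.exp(-0.5 * (d / length) ** 2)

def gp_posterior(x_train, y_train, x_test, noise_var=0.01):
    K = sq_exp(x_train, x_train) + noise_var * np.eye(len(x_train))
    Ks = sq_exp(x_train, x_test)
    Kss = sq_exp(x_test, x_test)
    alpha = np.linalg.solve(K, y_train)
    mean = Ks.T @ alpha                              # posterior mean at the test inputs
    cov = Kss - Ks.T @ np.linalg.solve(K, Ks)        # posterior covariance at the test inputs
    return mean, cov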
4 The subsystems approach
The novelty of our approach lies in splitting the system into modules and then performing parameter
estimation for each such subsystem independently. Each subsystem is associated with some of the species
of the original system and is responsible for producing parameter estimates and simulated time-series for
them. It also has a number of inputs, which are other species that influence its behaviour— in general,
these will be the species that appear in the ODEs for the subsystem’s own species.
There are a number of ways in which the decomposition of the complete system can be realised.
For the sake of simplicity, we define each subsystem as being associated with a single species, and
consider as its inputs the GP interpolation of the time-series of the other species that appear in the
ODE for that species’s concentration. In this way, a potentially large (autonomous) system involving N
species is broken down into N non-autonomous subsystems with a single species; the input to each of
these subsystems are the GP interpolations of the chemical species which influence the subsystem in the
original large system.
As an illustration, consider the example in Figure 1, in which the nodes are species and an edge from
x to y indicates that the concentration of y depends on that of x. We want to reason about the likelihood
of a given time-series for species B but, because of the dependence of B on A, we cannot do that unless
we know the values of A. However, if we assume that we know the behaviour of A, then B can be
treated as an independent part of the system. This is based on the concept of conditional independence—
knowledge of A makes our belief about B independent of any other species. This is why we use the
interpolated time-series of A as input for the B subsystem. Similarly, the E subsystem would require the
interpolations of C and D. Essentially, using the interpolation in this way decouples each species from
the others, allowing us to follow this modular approach.
There has been a significant amount of research into various approaches to modularisation of biological models (e.g. [4]), as well as the conceptual and practical advantages this offers[9]. In our case,
there are two benefits that stand out. Firstly, as each subsystem has a reduced number of parameters
directly involved in it, the resulting parameter estimation is performed by searching over a space of
lower dimension, and is thus much simpler and more efficient. Secondly, and perhaps more significantly,
this procedure is straightforward to parallelise, with minimal synchronisation required. Indeed, each parameter estimation sub-task is independent of the others. Therefore, apart from being simpler than the
original, the resulting sub-problems can be solved in parallel.
Figure 1: An example of a system with dependences (species A, B, C, D and E).
5 Experiments
5.1 Method
We estimate the parameters of each subsystem using MCMC, as described previously. We take the prior
distribution to be the uniform over an interval [l, u] where l and u are, respectively, the lower and upper
limits of each parameter. To calculate the posterior probability of a parameter set p, we solve the ODE
that corresponds to the subsystem by using these parameter values and the input time-series. We then
compare the ODE solution x = {xi (p)} to the input time-series y = {yi } for the subsystem’s species to
obtain a measure of the likelihood of the data. This is also proportional to the posterior probability of the
parameters given the data, P(p | D). Specifically, we calculate the squared error between the simulated and the input time-series, ∑_i (yi − xi(p))², and take P(p | D) ∝ exp(−∑_i (yi − xi(p))²/(2σ²)). This corresponds to the likelihood of the simulated time-series, assuming a normal distribution with constant variance σ².
There is an additional component to our method, which involves using a GP-based interpolation method on the original data. The benefit of this is two-fold: first, the interpolation smoothes the time-series
by removing the noise— in fact, it estimates the noise level, which does not need to be known a priori; secondly, it allows us to obtain denser time-series, which we can use as inputs for the different
subsystems, as described previously.
In summary, our method is as follows. We begin by performing an interpolation on the original data,
obtaining denser time-series. We initially use these interpolated time-series as inputs for the subsystems.
For each subsystem, we estimate an optimal set of parameters and calculate the corresponding timeseries using these parameters. As mentioned previously, this can be done in parallel. We then gather the
calculated time-series and feed them back into the subsystems, replacing the interpolation results. We
can repeat this procedure until there is no noticeable difference in the results.
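In outline, the procedure can be organised as below (a sketch of our reading of the method, not the authors' implementation); gp_interpolate and estimate_subsystem are placeholders for the GP regression of Section 3 and the per-subsystem MCMC described above, and are passed in by the caller.

from multiprocessing import Pool

def fit_system(data, species, gp_interpolate, estimate_subsystem, n_rounds=3):
    # start from GP interpolations of the (noisy, sparse) observations
    trajectories = {s: gp_interpolate(data[s]) for s in species}
    estimates = {}
    for _ in range(n_rounds):
        jobs = [(s, trajectories) for s in species]
        with Pool() as pool:                         # subsystems are estimated in parallel
            results = pool.starmap(estimate_subsystem, jobs)
        for s, (params, simulated) in zip(species, results):
            estimates[s] = params
            trajectories[s] = simulated              # feed simulated series back as inputs
    return estimates, trajectories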
5.2 Test cases
We present the performance of our method on two systems. The first is a model of the genetic regulatory
network used in the parameter estimation challenge of the DREAM6 contest[1]. It involves seven species,
with ODEs for the concentrations of both proteins and mRNA. It was split into seven subsystems, each
Name | Value | Estimate
k1 | 0.07 | 0.064, 0.172
k2 | 0.6 | 0.489, 0.411, 0.345
k3 | 0.05 | 0.046, 0.058, 0.025
k4 | 0.3 | 0.306, 0.235
V | 0.017 | 0.027, 0.036
Km | 0.3 | 0.725, 1.224
Table 1: Parameter estimates for the cascade model (multiple values correspond to each parameter's
involvement in more than one subsystem)
encompassing a protein and the corresponding mRNA. The second is a model of a signalling cascade
from [14], which contains five species and was split into five subsystems as described above.
For the evaluation of our method, we focus on three aspects of the results: we look at how well the
interpolation procedure approximates the true data; we examine the fit to the data based on the estimated
parameters; and finally, we evaluate the quality of the parameter estimation itself by analysing the results
and comparing them to results obtained using state-of-the-art methods.
It is clear that these three aspects are not orthogonal. For example, an inaccurate interpolation will
negatively affect the parameter estimation, since the latter uses the former, resulting in an unsatisfactory
fit.
5.3 Results and analysis
For the signalling cascade model, Figure 2 shows the observed data, the interpolated time-series and
the real data for different levels of measurement noise. We can see that the GP interpolation is very
accurate in approximating the real data— although, as expected, the accuracy deteriorates as the noise
level increases.
Figure 3 presents the predicted output of the model compared to the true data, while Table 1 shows
the results of the parameter estimation. The first thing to note is that we do not obtain unique estimates for
each parameter, as they are all involved in more than one ODE (and thus more than one subsystem). We
do not have a trivial and fail-safe way to choose between the different estimates, so this remains an open
question for our method. However, we should point out that in reality our method does not return a
single-point estimate but rather a distribution (the maximum point of which is the value presented in
these tables), therefore there is room to “reconcile” different estimates if we consider them as intervals
rather than single values.
Another interesting aspect of the results is seeing how the subsystems approach compares to using
MCMC on the system as a whole. As can be seen by comparing Figure 4 to the previous results, the
estimates obtained by working on each subsystem separately are much closer to the real values of the
parameters, and this is reflected in a better fit to the observed data. This is an encouraging indication that
the theoretical benefits of decomposition are also observed in practice.
The results also reveal different confidence levels for the parameter estimates. Figure 5 shows an
approximation of the posterior distribution for parameters k3 and V , as obtained through Kernel Density
Estimation on the sampling results. It is clear that the first is quite sharply peaked, whereas in the second
the mass is spread over a wider range of values. This shows a higher confidence in the estimate for k3 .
Figure 2: Signalling cascade model: interpolation (solid line), real data (circles) and noisy measurements (crosses). Panels: (a) noiseless measurements; (b) noise with standard deviation 0.5; (c) noise with standard deviation 1. Axes: concentration against time.
Figure 3: Signalling cascade model: predictions (solid line) and real data (crosses). Panels: (a) noiseless measurements; (b) noise with standard deviation 0.5; (c) noise with standard deviation 1.
Figure 4: Results for the cascade model when working with the entire system. (a) GP interpolation and fit using estimated parameters; (b) estimated parameters:
Name | Value | Estimate
k1 | 0.07 | 0.348039
k2 | 0.6 | 3.0005
k3 | 0.05 | 0.160029
k4 | 0.3 | 1.41002
V | 0.017 | 0.744395
Km | 0.3 | 6.37266
Figure 5: Probability density functions for the posterior distributions of the parameters (a) k3 and (b) V in the signalling cascade model, with noise standard deviation 0.5 (similar results were obtained for other noise levels).
For the gene regulatory network, the performance of our algorithm is more mixed. The time-series for some species are matched very closely to the real data, while for others the fit is not as good. For the sake of brevity, we do not present the plots of all 14 time-series.
This is reflected in the estimated parameter values, and Table 2 shows the estimates for each parameter along with the relative error compared to the true values. We can immediately see that the errors span
a very wide range. The parameter with the worst estimate is a degradation rate for an mRNA species and
it is only involved in one ODE, in a term of the form pp7 mrna degradation rate ∗ pp7 mrna. However, the concentration of that mRNA is very low in the measured data, and so the value of the parameter
has little to no effect, as the term will always have a value of essentially zero. This makes obtaining
an accurate estimate for it impossible using these conditions. We would therefore regard this more as a
shortcoming of the data set rather than our method.
To evaluate these results, we used COPASI[8], a software package for the analysis and simulation of
biochemical networks, on the same system and data. The software comes with a number of optimization methods; for the purposes of this comparison, we used simulated annealing[10], which explores
the search space by attempting to minimize an energy function while also permitting moves towards
apparently less optimal solutions, albeit with reduced probability as the search goes on. It is perhaps
interesting that some of the other built-in methods could not always produce a result, which we believe
was due to numerical computation issues (attempting to invert a singular matrix).
The parameter estimation results using COPASI are presented in Table 2 as well, where we can also
observe a wide spread of the errors. Comparing the last two columns, we see that the mean of the errors
using our method is slightly better than that using COPASI (1.521 vs 1.862), as is the median (0.448 vs
0.503). For a more general picture, we looked at the distribution of the error values in each case, which
is presented in the histograms of figure 6. To present this more clearly, we have excluded the parameters
whose estimate had a relative error of more than 10 (i.e. the estimate was an order of magnitude off).
This resulted in the exclusion of one parameter for COPASI and of two for our method.
We can see that, using our method, the mass of the errors seems to be more concentrated towards
lower values, whereas with COPASI there are more “outliers” with higher errors. For example, if we
look at parameters whose estimate is more than 100% off the real value, we can find 5 such instances
using our method but 9 using COPASI (7 vs 10, if we include the ones excluded from Figure 6), out of a total of 48. This is consistent with the median and mean of the errors being lower with our method, as reported above.
Parameter | Value | Rel. error (MCMC) | Rel. error (COPASI)
pp7 mrna degradation rate | 0.217 | 29.23 | 33.28
v3 h | 0.647 | 13.37 | 2.468
pro2 strength | 1.614 | 4.278 | 4.376
v1 Kd | 1.752 | 3.438 | 3.837
pro5 strength | 0.374 | 2.882 | 6.979
pro7 strength | 0.984 | 1.342 | 2.463
v3 Kd | 4.245 | 1.278 | 0.028
pp2 mrna degradation rate | 4.928 | 0.955 | 0.506
v4 h | 7.094 | 0.918 | 0.380
v8 h | 5.995 | 0.908 | 0.417
v2 h | 6.838 | 0.907 | 0.364
v1 h | 9.456 | 0.904 | 0.675
rbs5 strength | 2.565 | 0.885 | 0.436
v5 h | 1.871 | 0.875 | 0.777
v2 Kd | 6.173 | 0.733 | 0.284
v7 Kd | 1.595 | 0.650 | 5.078
v10 h | 6.876 | 0.598 | 0.538
v5 Kd | 9.930 | 0.589 | 0.053
pro4 strength | 0.653 | 0.581 | 8.001
v10 Kd | 7.923 | 0.494 | 0.499
p3 degradation rate | 9.948 | 0.490 | 0.413
pp5 mrna degradation rate | 3.229 | 0.480 | 0.930
rbs3 strength | 4.432 | 0.464 | 0.565
v9 h | 4.523 | 0.453 | 0.060
pp6 mrna degradation rate | 4.716 | 0.443 | 0.284
pro6 strength | 6.953 | 0.436 | 0.090
v4 Kd | 9.674 | 0.424 | 0.397
rbs6 strength | 1.124 | 0.422 | 0.525
p6 degradation rate | 5.885 | 0.415 | 0.413
rbs4 strength | 8.968 | 0.366 | 0.317
p4 degradation rate | 4.637 | 0.323 | 0.805
pp4 mrna degradation rate | 1.369 | 0.284 | 5.767
rbs2 strength | 4.266 | 0.272 | 0.782
pp1 mrna degradation rate | 3.271 | 0.249 | 0.631
pro1 strength | 7.530 | 0.222 | 0.079
v9 Kd | 4.153 | 0.193 | 0.150
p2 degradation rate | 8.921 | 0.183 | 0.105
v8 Kd | 4.044 | 0.151 | 1.286
rbs1 strength | 3.449 | 0.149 | 0.788
rbs7 strength | 9.542 | 0.147 | 0.828
pp3 mrna degradation rate | 7.698 | 0.145 | 0.253
p1 degradation rate | 1.403 | 0.137 | 0.280
p7 degradation rate | 5.452 | 0.108 | 0.025
v6 h | 7.958 | 0.076 | 0.395
pro3 strength | 4.366 | 0.067 | 0.808
v7 h | 7.009 | 0.029 | 0.576
p5 degradation rate | 0.672 | 0.026 | 0.017
v6 Kd | 9.322 | 0.0004 | 0.360
Table 2: Real parameter values and relative errors for the gene regulatory network example, using our
method (MCMC) and COPASI.
6 Conclusions and future work
Parameter estimation remains a central challenge in dynamical modelling of biological systems. While
it is most often dealt with in the context of systems of non-linear ODEs, the importance of parameter
estimation extends to stochastic and hybrid models, due to the fact that the mean behaviour of a stochastic system is described by a differential equation. In this contribution, we presented a computational
approach to speed up parameter estimation in (potentially large) systems of ODEs by using ideas from
statistical machine learning. Our preliminary results indicate that the method is competitive with the
state of the art, but can achieve significant speed-ups through the ease of parallelisation it entails. Furthermore, the computational resources can be redirected to exploring thoroughly the parameter space of
each subsystem, which is often impossible in large systems.
There are several limitations of the current method which clearly point to subsequent developments.
Frequently, one is presented with measurements not of a single species in the system, but of a combination of species. For example, one may have access to total protein measurements, but not to measurements of phosphorylated/unphosphorylated protein levels. A further challenge may arise when the same parameter controls more than one reaction (e.g. in a mass action chemical reaction cascade). A possible solution to this could be to extend the size of the subsystems involved; automatic identification of a minimal size of subsystems however remains problematic.
Figure 6: Distributions of the relative error of the parameter estimates for the gene regulatory network, using (a) our method (2 values excluded) and (b) COPASI (1 value excluded).
Acknowledgements
The authors would like to thank Dirk Husmeier and Frank Dondelinger for their input and advice.
SynthSys (formerly “Centre for Systems Biology at Edinburgh”) is a Centre for Synthetic and Systems
Biology funded by BBSRC and EPSRC, ref. BB/D019621/1.
References
[1] DREAM6 Estimation of Model Parameters Challenge — The Dream Project. http://www.the-dream-project.org/challenges/dream6-estimation-model-parameters-challenge.
[2] Luca Bortolussi (2011): Hybrid Limits of Continuous Time Markov Chains. In: Quantitative Evaluation of
Systems (QEST), 2011 Eighth International Conference on, pp. 3 –12, doi:10.1109/QEST.2011.10.
[3] Adrian W. Bowman & Adelchi Azzalini (1997): Applied Smoothing Techniques for Data Analysis: The Kernel Approach with S-Plus Illustrations. Oxford University Press.
[4] Clive G. Bowsher (2011): Automated analysis of information processing, kinetic independence and modular architecture in biochemical networks using MIDIA. Bioinformatics 27(4), pp. 584–586, doi:10.1093/bioinformatics/btq694.
[5] Kamil Erguler & Michael P. H. Stumpf (2011): Practical limits for reverse engineering of dynamical systems:
a statistical analysis of sensitivity and parameter inferability in systems biology models. Mol. BioSyst. 7, pp.
1593–1602, doi:10.1039/C0MB00107D.
[6] Andrew Gelman, Christian Robert, Nicolas Chopin & Judith Rousseau (1995): Bayesian Data Analysis.
[7] Ryan N Gutenkunst, Joshua J Waterfall, Fergal P Casey, Kevin S Brown, Christopher R Myers & James P
Sethna (2007): Universally Sloppy Parameter Sensitivities in Systems Biology Models. PLoS Comput Biol
3(10), pp. 1871–1878, doi:10.1371/journal.pcbi.0030189.
[8] Stefan Hoops, Sven Sahle, Ralph Gauges, Christine Lee, Jürgen Pahle, Natalia Simus, Mudita Singhal, Liang Xu, Pedro Mendes & Ursula Kummer (2006): COPASI - a COmplex PAthway SImulator. Bioinformatics 22(24), pp. 3067–3074, doi:10.1093/bioinformatics/btl485.
[9] Hans-Michael Kaltenbach & Jörg Stelling (2012): Modular Analysis of Biological Networks. In Igor I. Goryanin & Andrew B. Goryachev, editors: Advances in Systems Biology, Advances in Experimental Medicine and Biology 736, Springer New York, pp. 3–17, doi:10.1007/978-1-4419-7210-1_1.
[10] S. Kirkpatrick, C. D. Gelatt & M. P. Vecchi (1983): Optimization by Simulated Annealing. Science 220(4598), pp. 671–680, doi:10.1126/science.220.4598.671.
[11] Carl Edward Rasmussen & Christopher K. I. Williams: Gaussian Processes for Machine Learning. MIT
Press.
[12] Philipp Thomas, Arthur Straube & Ramon Grima (2012): The slow-scale linear noise approximation: an
accurate, reduced stochastic description of biochemical networks under timescale separation conditions.
BMC Systems Biology 6(1), p. 39, doi:10.1186/1752-0509-6-39.
[13] N. G. Van Kampen (2007): Stochastic Processes in Physics and Chemistry. Elsevier Science & Technology.
[14] Vladislav Vyshemirsky & Mark A. Girolami (2008): Bayesian ranking of biochemical system models. Bioinformatics 24(6), pp. 833–839, doi:10.1093/bioinformatics/btm607.
| 5 |
Hohmann Transfer via Constrained Optimization
Li Xie∗
State Key Laboratory of Alternate Electrical Power System with Renewable Energy Sources
North China Electric Power University, Beijing 102206, P.R. China
arXiv:1712.01512v1 [] 5 Dec 2017
Yiqun Zhang† and Junyan Xu‡
Beijing Institute of Electronic Systems Engineering, Beijing 100854, P.R. China
In the first part of this paper, inspired by the geometric method of Jean-Pierre Marec, we
consider the two-impulse Hohmann transfer problem between two coplanar circular orbits
as a constrained nonlinear programming problem. By using the Kuhn-Tucker theorem, we
analytically prove the global optimality of the Hohmann transfer. Two sets of feasible solutions
are found, one of which corresponding to the Hohmann transfer is the global minimum, and the
other is a local minimum. In the second part, we formulate the Hohmann transfer problem as
two-point and multi-point boundary-value problems by using the calculus of variations. With
the help of the Matlab solver bvp4c, two numerical examples are solved successfully, which
verifies that the Hohmann transfer is indeed the solution of these boundary-value problems.
Via static and dynamic constrained optimization, the solution to the orbit transfer problem
proposed by W. Hohmann ninety-two years ago and its global optimality are re-discovered.
I. Introduction
In 1925, Dr. Hohmann, a civil engineer, published his seminal book [1] in which he first described the well-known optimal orbit transfer between two circular coplanar space orbits by numerical examples, and this transfer is now
generally called the Hohmann transfer. Hohmann claimed that the minimum-fuel impulsive transfer orbit is an elliptic
orbit tangent to both the initial and final circular orbits. However, a mathematical proof of its optimality was not given until the 1960s. The first proof aimed at the global optimality was presented by Barrar [2] in 1963, where the
Whittaker theorem in classical analytical dynamics was introduced and the components of the velocity at any point on
an elliptic orbit, perpendicular to its radius vector and the axis of the conic respectively, were used to be coordinates
in a plane; see also Prussing and Conway’s book [3, eq. (3.13)] in detail. Before that, Ting in [4] obtained the local
optimality of the Hohmann transfer. By using a variational method, Lawden investigated the optimal control problem
of a spacecraft in an inverse square law field, and invented the primer vector methodology [5]. According to different
thrusts, the trajectory of a spacecraft is divided into the arcs of three types: (1) null thrust arc; (2) maximum thrust arc;
∗ Professor, School of Control and Computer Engineering; the corresponding author, [email protected]
† Senior Research Scientist, [email protected]
‡ Associate Research Scientist, [email protected]
(3) intermediate arc. If we approximate the maximum thrust by an impulse thrust, then the orbit transfer problem can be studied by the primer vector theory, and the Hohmann transfer is a special case, which can also be seen in the second part of this paper. Lawden derived all necessary conditions that the primer vector must satisfy. A systematic design instruction can be found in [6].
In 2009, Pontani in [7] reviewed the literature concerning the optimality of the Hohmann transfer from the 1960s to the 1990s, for example [8–14]. Among them, Moyer verified the techniques devised respectively by Breakwell and Contensou via variational methods for general space orbit transfers. The dynamical equations involved were
established for orbital elements. Battin and Marec used a Lagrange multiplier method and a geometric method in light of
a hodograph plane, respectively. It is Marec’s method that enlightens us to consider the Hohmann transfer in a different
way. Based on Green’s theorem, Hazelrigg established the global optimal impulsive transfers. Palmore provided an
elementary proof with the help of the gradient of the characteristic velocity, and Prussing simplified Palmore's method by
utilizing the partial derivatives of the characteristic velocity, and a similar argument also appeared in [15] which was
summarized in the book [16]; see more work of Prussing in [3, Chapter 6]. Yuan and Matsushima carefully made use of
two lower bounds of velocity changes for orbit transfers and showed that the Hohmann transfer is optimal both in total
velocity change and each velocity change.
Recently, Gurfil and Seidelmann in their new book [17, Chapter 15] consider the effects of the Earth’s oblateness J2
on the Hohmann transfer. Then the velocity of a circular orbit is calculated by
v = √( (µ/r)(1 + 3 J2 r_eq² / (2 r²)) )
By using Lagrange multipliers, the optimal impulsive transfer is derived. This transfer is referred to as the extended
Hohmann transfer which degenerates to the standard Hohmann transfer as J2 = 0. Avendaño et al. in [18] present a pure
algebraic approach to the minimum-cost multi-impulse orbit-transfer problem. By using Lagrange multipliers, as a
particular example, the optimality of the Hohmann transfer is also provided by this algebraic approach. These authors
are all devoted to proving the global optimality of the Hohmann transfer.
In the first part of this paper, we present a different method to prove the global optimality of the Hohmann transfer.
Inspired by the geometric method of Marec in [10, pp. 21-32], we transform the Hohmann transfer problem into a
constrained nonlinear programming problem, and then by using the results in nonlinear programming such as the
well-known Kuhn-Tucker theorem, we analytically prove the global optimality of the Hohmann transfer. Here by the
global optimality, we mean that the Hohmann transfer is optimum among all possible two-impulse coplanar transfers.
Two sets of feasible solutions are found, one of which corresponding to the Hohmann transfer is the global minimum,
and the other is a local minimum. In the second part of the paper, we consider the Hohmann transfer problem as
dynamic optimization problems. By using variational method, we first present all necessary conditions for two-point and
2
multi-point boundary-value problems related to such dynamic optimization problems, and we then solve two numerical
examples with the help of Matlab solver bvp4c, which verifies that the Hohmann transfer is indeed the solution of these
constrained dynamic optimization problems. By formulating the Hohmann transfer problem as constrained optimization
problems, the solution to the orbit transfer problem proposed by W. Hohmann 92 years ago and its global optimality are
re-discovered.
II. The optimality of the Hohmann Transfer
The Hohmann transfer is a two-impulse orbital transfer from one circular orbit to another; for the background see,
e.g., [3, 10, 19, 20]. We use the same notation as Marec in [10, pp. 21-32]. Let O0 denote the initial circular orbit
of a spacecraft, and the radius and the velocity of the initial circular orbit are r0 and V0 respectively. Let O f denote
the final circular orbit, and its radius and velocity are r f and Vf respectively. For simplicity, let r f be greater than r0 .
We use boldface to denote vectors. The center of these two coplanar circular orbits is at the origin of the Cartesian
inertia reference coordinate system. Suppose all transfer orbits are coplanar to O0 . Hence we need only to consider the
x − y plane of the reference coordinate system. Assume that the spacecraft is initially located at the point (r0, 0) of the
initial circular orbit. At the initial time t0 , the first velocity impulse vector ∆V0 is applied, and its components in the
direction of the radius and the direction perpendicular to the radius are ∆X0, ∆Y0 respectively. Similarly, at the final time
t f , the second velocity impulse ∆V f occurs, and its components are ∆X f , ∆Yf . Hence at the time t0+ just after t0 , the
components of the velocity of the spacecraft are (∆X0, V0 + ∆Y0 ). Then in order to enter the final circular orbit, the
components of the velocity of the spacecraft must be (−∆X f , Vf − ∆Yf ) at the time t −f just before t f .
During the period from t0+ to t −f , the spacecraft is on a transfer orbit (conic) and the angular momentum and the
energy (per unit mass) are conserved, hence we have
h = r0(V0 + ∆Y0) = rf(Vf − ∆Yf)
E = (1/2)[(V0 + ∆Y0)² + ∆X0²] − µ/r0 = (1/2)[(Vf − ∆Yf)² + ∆Xf²] − µ/rf    (1)
Notice that for a circular orbit, its radius and velocity satisfy V = √(µ/r), where µ is the gravitational constant. It follows from (1) that we can express ∆X0, ∆Y0 in terms of ∆Xf, ∆Yf and vice versa. In order to simplify the equations, at first, using the initial orbit radius and velocity as reference values, we define new variables:
initial orbit radius and velocity as reference values, we define new variables:
r̄ f =
rf
,
r0
vf =
Vf
,
V0
y0 =
∆Y0
,
V0
x0 =
3
∆X0
,
V0
yf =
∆Yf
,
V0
xf =
∆X f
V0
(2)
Substituting (2) into (1) yields the following non-dimensional angular momentum and energy equalities
h = (1 + y0) = r̄f(vf − yf)
E = (1/2)[(1 + y0)² + x0²] − 1 = (1/2)[(vf − yf)² + xf²] − 1/r̄f    (3)
see [10, p.22]. Then we express x f , y f in terms of x0, y0
yf = vf − (1 + y0) r̄f^(−1) = r̄f^(−1/2) − (1 + y0) r̄f^(−1)    (4)
xf² = x0² + (1 + y0)² − 2(1 − r̄f^(−1)) − (vf − yf)²
    = x0² + (1 + y0)²(1 − r̄f^(−2)) − 2(1 − r̄f^(−1))    (5)
from which we have
∆vf² = (|∆Vf|/V0)² = xf² + yf²
     = x0² + (1 + y0)²(1 − r̄f^(−2)) − 2(1 − r̄f^(−1)) + (r̄f^(−1/2) − (1 + y0) r̄f^(−1))²
     = x0² + (y0 + 1 − r̄f^(−3/2))² − (r̄f − 1)(2r̄f² − r̄f − 1) r̄f^(−3)
     = x0² + (y0 + 1 − r̄f^(−3/2))² − (r̄f − 1)²(2r̄f + 1) r̄f^(−3)
∆v0² = x0² + y0²
where we define ∆v0 = √(x0² + y0²) > 0 and ∆vf = √(xf² + yf²) > 0. Thus the cost functional (i.e., the characteristic velocity) can be written as
∆v(x0, y0) = ∆v0 + ∆vf = √(x0² + y0²) + √( x0² + (y0 + 1 − r̄f^(−3/2))² − (r̄f − 1)²(2r̄f + 1) r̄f^(−3) )
Observing (5), it is noted that we must make the following constraint on the first impulse
x0² + (1 + y0)²(1 − r̄f^(−2)) − 2(1 − r̄f^(−1)) ≥ 0    (6)
That is, when we use the energy conservation to calculate the non-dimensional component of the second impulse x f , the
inequality (6) must hold such that a non-negative number is assigned to x 2f , which actually requires that the transfer
orbit intersects the inner circle and the outer circle.
4
The above background can be found in [10, Section 2.2] and is specialised here. Marec used the independent variables x0, y0 as coordinates in a hodograph plane and, in geometric language, gave an elegant and simple proof of the optimality of the Hohmann transfer. Marec's geometric method enlightens us to consider the Hohmann
transfer in a different way. Then the global optimality of the Hohmann transfer analytically appears.
We now formulate the Hohmann transfer problem as a nonlinear programming problem subject to the inequality
constraint (6) in which x0 and y0 are independent variables.
Theorem II.1 The classical Hohmann transfer is the solution of the following constrained optimization problem
min_{x0, y0} ∆v(x0, y0)    s.t.    x0² + (1 + y0)²(1 − r̄f^(−2)) − 2(1 − r̄f^(−1)) ≥ 0    (7)
Also this solution is the global minimum.
Proof. We say that the Hohmann transfer is the solution to the above optimization problem if the pair (x0, y0 )
corresponding to the Hohmann transfer is feasible to (7). As usual, we use Lagrange multiplier method and define the
Lagrangian function as follows
F(x0, y0) = ∆v(x0, y0) + λ[−x0² − (1 + y0)²(1 − r̄f^(−2)) + 2(1 − r̄f^(−1))]    (8)
In view of the Kuhn-Tucker theorem, a local optimum of ∆v(x0, y0 ) must satisfy the following necessary conditions
∂F/∂x0 = x0/∆v0 + x0/∆vf − 2λ x0 = 0    (9)
∂F/∂y0 = y0/∆v0 + (y0 + 1 − r̄f^(−3/2))/∆vf − 2λ(1 + y0)(1 − r̄f^(−2)) = 0    (10)
λ[−x0² − (1 + y0)²(1 − r̄f^(−2)) + 2(1 − r̄f^(−1))] = 0    (11)
−x0² − (1 + y0)²(1 − r̄f^(−2)) + 2(1 − r̄f^(−1)) ≤ 0    (12)
λ ≥ 0    (13)
Notice that we have assumed that ∆v0, ∆v f > 0. In order to find a feasible solution, we divide the proof of the first part
of this theorem into two cases.
(i) If λ = 0, then due to the equations (9) and (10), we have x0 = 0 and
y0/∆v0 + (y0 + 1 − r̄f^(−3/2))/∆vf = 0    (14)
(14)
respectively. We claim that the equality (14) does not hold because the equality x0 = 0 implies that
y0/∆v0 = y0/|y0| = ±1
which contradicts
1 < (y0 + 1 − r̄f^(−3/2))/∆vf = (y0 + 1 − r̄f^(−3/2)) / √( (y0 + 1 − r̄f^(−3/2))² − (r̄f − 1)²(2r̄f + 1) r̄f^(−3) ),    or    (y0 + 1 − r̄f^(−3/2))/∆vf < −1
Therefore the equation (14) has no solution under the assumption λ = 0.
(ii) We now consider the case λ > 0. Then the equation (11) yields
−x02 − (1 + y0 )2 (1 − r̄ f−2 ) + 2(1 − r̄ f−1 ) = 0
(15)
which together with (5) leads to x 2f = 0. Thus
∆vf = √(xf² + yf²) = √(yf²) = |yf|    (16)
and also
x0² = −(1 + y0)²(1 − r̄f^(−2)) + 2(1 − r̄f^(−1))    (17)
Then we obtain ∆v0² as follows
∆v0² = x0² + y0² = r̄f^(−2)(1 + y0)² − (1 + 2y0) + 2(1 − r̄f^(−1))
     = (r̄f^(−1)(1 + y0) − r̄f)² + 2(1 + y0) − r̄f² − (1 + 2y0) + 2(1 − r̄f^(−1))
     = (r̄f^(−1)(1 + y0) − r̄f)² + 3 − r̄f² − 2r̄f^(−1)    (18)
Thanks to (9),
x0/∆v0 + x0/∆vf − 2λ x0 = 0    (19)
We now show that x0 = 0 by contradiction. If x0 ≠ 0, then the equality (19) implies that
2λ = 1/∆v0 + 1/∆vf    (20)
Substituting (20) into (10) gives
y0/∆v0 + (y0 + 1 − r̄f^(−3/2))/∆vf = (1 + y0)(1 − r̄f^(−2)) (1/∆v0 + 1/∆vf)
Arranging the above equation and using (4) and (16), we get
(y0 − (1 + y0)(1 − r̄f^(−2)))/∆v0 = (r̄f^(−3/2) − (y0 + 1) r̄f^(−2))/∆vf = r̄f^(−1) (r̄f^(−1/2) − (y0 + 1) r̄f^(−1))/∆vf = r̄f^(−1) yf/|yf|
where yf ≠ 0 since we have assumed vf > 0 and just concluded xf = 0. By further rearranging the above equation, we have
(r̄f^(−1)(1 + y0) − r̄f)/∆v0 = yf/|yf| = { 1, if yf > 0;  −1, if yf < 0 }    (21)
It is noted that (18) gives a strict inequality
∆v0² = (r̄f^(−1)(1 + y0) − r̄f)² + 3 − r̄f² − 2r̄f^(−1) < (r̄f^(−1)(1 + y0) − r̄f)²    (22)
where the function 3 − r̄f² − 2r̄f^(−1) < 0 since it is decreasing with respect to r̄f and r̄f > 1. Hence (21) contradicts (22) and does not hold, which implies that the assumption λ > 0 and the equality x0 ≠ 0 do not hold simultaneously.
Therefore λ > 0 leads to x f = 0 and x0 = 0.
Summarizing the two cases above, we now conclude that a feasible solution to (7) must have λ∗ > 0, x ∗f = 0 and
x0∗ = 0. Then the corresponding normal component y0 can be solved from (15)
y0* = √(2r̄f/(1 + r̄f)) − 1,    ŷ0* = −√(2r̄f/(1 + r̄f)) − 1    (23)
Substituting them into (4) and (10) yields
yf* = r̄f^(−1/2) (1 − √(2/(1 + r̄f))),    ŷf* = r̄f^(−1/2) (1 + √(2/(1 + r̄f))),
λ* = [1/(2(1 + y0*)(1 − r̄f^(−2)))] (1 + (y0* + 1 − r̄f^(−3/2))/yf*) > 0,
λ̂* = [1/(2(1 + ŷ0*)(1 − r̄f^(−2)))] (1 + (ŷ0* + 1 − r̄f^(−3/2))/ŷf*) > 0    (24)
One can see that there exist two sets of feasible solutions, one of which, (x0*, y0*), corresponds to the Hohmann transfer, and its Lagrange multiplier is λ*.
We are now in a position to show that the Hohmann transfer is the global minimum. We first show that (x0∗, y0∗ ) is a
strict local minimum. Theorem 3.11 in [21] gives a second order sufficient condition for a strict local minimum. To
apply it, we need to calculate the Hessian matrix of the Lagrangian function F defined by (8) at (x0∗, y0∗ )
∇²F(x0*, y0*, λ*) = [ ∂²F/∂x0²   ∂²F/∂x0∂y0 ;   ∂²F/∂y0∂x0   ∂²F/∂y0² ] evaluated at (x0*, y0*)    (25)
After a straightforward calculation, by the equations (9) and (10) and using the fact x0∗ = 0, we have
∂²F/∂x0∂y0 (x0*, y0*) = ∂²F/∂y0∂x0 (x0*, y0*) = 0
and also
∂²F/∂x0² (x0*, y0*) = 1/y0* + 1/yf* − 2λ* = (1 + r̄f^(−1)(1 + y0*)) / (y0*(1 + y0*)(1 + r̄f^(−1))) > 0    (26)
∂²F/∂y0² (x0*, y0*) = −b/((y0* + a)² − b)^(3/2) − 2λ*(1 − r̄f^(−2)) < 0    (27)
where a = 1 − r̄ f−3/2, b = (r̄ f − 1)2 (2r̄ f + 1)r̄ f−3 > 0. Then one can see that the Hessian matrix is an indefinite matrix.
Thus we cannot use the positive definiteness of the Hessian matrix, that is,
zᵀ∇²F(x0*, y0*, λ*) z > 0,    ∀z ≠ 0
as a sufficient condition to justify the local minimum of (x0∗, y0∗ ) as usual. Fortunately, Theorem 3.11 in [21] tells us that
for the case λ∗ > 0, that is, the inequality constraint is active, when we use the positive definiteness of the Hessian
matrix to justify the local minimum, we need only to consider the positive definiteness of the first block of the Hessian
matrix with the non-zero vector z ≠ 0 defined by
z ∈ Z(x0∗, y0∗ ) = {z : z T ∇g(x0∗, y0∗ ) = 0}
where g(x0, y0 ) is the constraint function
g(x0, y0) = −x0² − (1 + y0)²(1 − r̄f^(−2)) + 2(1 − r̄f^(−1))    (28)
With this kind of z, if zT ∇2 F(x0∗, y0∗, λ∗ )z > 0, then (x0∗, y0∗ ) is a strict local minimum. Specifically, the vector z defined
by the set (28) satisfies
zᵀ∇g(x0*, y0*) = [z1  z2] [−2x0 ;  −2(1 + y0)(1 − r̄f^(−2))] |_(x0*, y0*) = 0    (29)
Notice that x0* = 0 and −2(1 + y0*)(1 − r̄f^(−2)) ≠ 0. Hence (29) implies that the components z1 ≠ 0 and z2 = 0. Obviously it follows from (26) that
[z1  0] ∇²F(x0*, y0*, λ*) [z1 ; 0] = z1 (∂²F/∂x0²)(x0*, y0*) z1 > 0    (30)
Therefore in light of Theorem 3.11 in [21, p.48], we can conclude that (x0∗, y0∗ ) is a strict local minimum. The same
argument can be used to show that ( x̂0∗, ŷ0∗ ) is also a strict local minimum in view of (30). Meanwhile a straightforward
calculation gives
∆v(x0∗, y0∗ ) < ∆v( x̂0∗, ŷ0∗ )
Hence the pair (x0∗, y0∗ ) is a global minimum, and further it is the global minimum due to the uniqueness; see [22, p.194]
for the definition of a global minimum. This completes the proof of the theorem.
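The theorem is easy to check numerically. The following sketch (not part of the proof) minimises ∆v(x0, y0) subject to the constraint in (7) with scipy (which selects SLSQP for constrained problems) and compares the result with the analytic solution (23), here for the illustrative ratio r̄f = 4; the result should be x0 ≈ 0, y0 ≈ y0*, with a total cost of roughly 0.45.

import numpy as np
from scipy.optimize import minimize

r = 4.0                                        # r̄_f, ratio of final to initial radius

def delta_v(z):
    x0, y0 = z
    inner = x0**2 + (y0 + 1 - r**-1.5)**2 - (r - 1)**2 * (2*r + 1) / r**3
    return np.hypot(x0, y0) + np.sqrt(max(inner, 0.0))   # guard against slight constraint violation

constraint = {"type": "ineq",
              "fun": lambda z: z[0]**2 + (1 + z[1])**2 * (1 - r**-2) - 2 * (1 - 1/r)}

res = minimize(delta_v, x0=[0.2, 0.3], constraints=[constraint])
y0_star = np.sqrt(2 * r / (1 + r)) - 1         # analytic value from (23)
print(res.x, res.fun)                          # expected: x0 close to 0 and y0 close to y0_star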
III. The Hohmann transfer as dynamic optimization problems
In this section, the orbit transfer problem is formulated as two optimal control problems of a spacecraft in an inverse
square law field, driven by velocity impulses, with boundary and interior point constraints. The calculus of variations is
used to solve the resulting two-point and multi-point boundary-value problems (BVPs).
A. Problem formulation
Consider the motion of a spacecraft in the inverse square gravitational field, and the state equation is
ṙ = v,    v̇ = −(µ/r³) r    (31)
where r(t) is the spacecraft position vector and v(t) is its velocity vector. The state vector consists of r(t) and v(t). We
use (31) to describe the state of the transfer orbit, which defines a conic under consideration; see, e.g., [19, Chapter 2].
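For later reference, equation (31) is straightforward to integrate numerically between impulses; the sketch below (ours, in nondimensional units with µ = 1 and r0 = 1) propagates the transfer ellipse produced by the first Hohmann impulse for r̄f = 4 and should arrive at radius r̄f after half the transfer-orbit period.

import numpy as np
from scipy.integrate import solve_ivp

mu = 1.0

def two_body(t, s):
    r, v = s[:3], s[3:]
    return np.concatenate([v, -mu * r / np.linalg.norm(r)**3])

rf = 4.0
dv0 = np.sqrt(2 * rf / (1 + rf)) - 1.0              # first (tangential) Hohmann impulse
state0 = np.array([1.0, 0.0, 0.0, 0.0, 1.0 + dv0, 0.0])
t_half = np.pi * ((1 + rf) / 2) ** 1.5              # half period of the transfer ellipse
sol = solve_ivp(two_body, (0.0, t_half), state0, rtol=1e-10, atol=1e-12)
print(np.linalg.norm(sol.y[:3, -1]))                # expected to be close to rf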
Problem III.1 Given the initial position and velocity vectors of a spacecraft on the initial circular orbit, r(t0 ), v(t0 ).
The terminal time t1 is not specified. Let t0+ signify just after t0 and t1− signify just before t1 .∗ During the period from t0+
to t1−, the state evolves over time according to the equation (31). To guarantee that at the time t1+ the spacecraft enters the final circular orbit, we impose the following equality constraints on the final state
gr1 (r(t1+ )) = r(t1+ ) − r f = 0
gv1 (v(t1+ )) = v(t1+ ) − v f = 0
g2 (r(t1+ ), v(t1+ )) = r(t1+ ) · v(t1+ ) = 0
where v f is the orbit velocity of the final circular orbit v f =
p
(32)
µ/r f . Suppose that there are velocity impulses at time
instants t0 and t1
v(ti+ ) = v(ti− ) + ∆vi,
i = 0, 1
(33)
where v(t0−) = v(t0). The position vector r(t) is continuous at these instants. The optimal control problem is to design
Δvi that minimize the cost functional

J = |Δv0| + |Δv1|        (34)

subject to the constraints (32).
In control theory, such an optimal control problem is called an impulse control problem, in which there are state or
control jumps. Historically, an optimal control problem with the cost functional (34) is also referred to as the minimum-fuel
problem. By Problem III.1,† the orbit transfer problem has been formulated as a dynamic optimization problem instead
of the static one considered in Section II. The advantage of this formulation is that the assumption that the transfer
orbit is coplanar with the initial orbit is removed, at the cost of additional computational complexity.
In the next problem we consider the orbit transfer problem as a dynamic optimization problem with interior point
constraints.
Problem III.2 Consider a similar situation as in Problem III.1. Let t HT be the Hohmann transfer time. Instead of the
unspecified terminal time, here the terminal time instant t f > t HT is given and the time instant t1 now is an unspecified
interior time instant. The conditions (32) become a set of interior boundary conditions. The optimal control problem is
to design ∆vi that minimize the cost functional (34) subject to the interior boundary conditions (32).
* In mathematical language, t0+ represents the limit of t0 approached from the right side and t1− represents the limit of t1 approached from the left side.
† A Matlab script for the Hohmann transfer was given as a numerical example in the free version of the Matlab-based software GPOPS, using a direct
method. The constraints (32) were also used there to describe the terminal conditions. We here use the variational method (i.e., the indirect method) to solve the optimal
control problems.
B. Two-point boundary conditions for Problem III.1
Problem III.1 is a constrained optimization problem subject to static and dynamic constraints. We use Lagrange
multipliers to convert it into an unconstrained one. Define the augmented cost functional
J̃ := |Δv0| + |Δv1|
    + q_r1ᵀ (r(t0+) − r(t0)) + q_r2ᵀ (r(t1+) − r(t1−))
    + q_v1ᵀ (v(t0+) − v(t0) − Δv0) + q_v2ᵀ (v(t1+) − v(t1−) − Δv1)
    + γ_r1 g_r1(r(t1+)) + γ_v1 g_v1(v(t1+)) + γ_2 g_2(r(t1+), v(t1+))
    + ∫_(t0+)^(t1−) [ p_rᵀ (v − ṙ) + p_vᵀ (−(μ/r³) r − v̇) ] dt
where the Lagrange multipliers p_r, p_v are also called costate vectors; in particular, Lawden [5] termed −p_v the primer
vector. Introducing the Hamiltonian function

H(r, v, p) := p_rᵀ v − p_vᵀ (μ/r³) r,   p = [p_r ; p_v]        (35)
By taking into account all perturbations, the first variation of the augmented cost functional is

δJ̃ = (Δv0ᵀ/|Δv0|) δv0 + (Δv1ᵀ/|Δv1|) δv1
    + q_r1ᵀ (dr(t0+) − dr(t0−)) + q_r2ᵀ (dr(t1+) − dr(t1−))
    + q_v1ᵀ (dv(t0+) − dv(t0−) − δv0) + q_v2ᵀ (dv(t1+) − dv(t1−) − δv1)
    + drᵀ(t1+) γ_r1 [∂g_r1(r(t1+))/∂r(t1+)]* + dvᵀ(t1+) γ_v1 [∂g_v1(v(t1+))/∂v(t1+)]*
    + drᵀ(t1+) γ_2 [∂g_2(r(t1+), v(t1+))/∂r(t1+)]* + dvᵀ(t1+) γ_2 [∂g_2(r(t1+), v(t1+))/∂v(t1+)]*
    (∗) + [H − p_rᵀ ṙ − p_vᵀ v̇]*|_(t1−∗) δt1
    (∗∗) + p_rᵀ(t0+) δr(t0+) − p_rᵀ(t1−∗) δr(t1−∗) + p_vᵀ(t0+) δv(t0+) − p_vᵀ(t1−∗) δv(t1−∗)
    + ∫_(t0+)^(t1−∗) [ (∂H(r, v, p)/∂r + ṗ_r(t))ᵀ* δr + (∂H(r, v, p)/∂v + ṗ_v(t))ᵀ* δv ] dt        (36)
where we use d(·) to denote the difference between the varied path and the optimal path taking into account the
differential change in a time instant (i.e., differential in x), for example,

dv(t1+) = v(t1+) − v*(t1+∗)

and δ(·) is the variation, for example, δv(t1*) is the variation of v as an independent variable at t1*. Notice that
dr(t0−) = 0, dv(t0−) = 0 since t0 is fixed. The parts of δJ̃ in (36) marked with asterisks are respectively due to the linear
term of

∫_(t1−∗)^(t1−∗ + δt1) ( H(r, v, p) − p_rᵀ ṙ − p_vᵀ v̇ ) dt
and the first term in the right-hand side of the following equation
∫_(t0+)^(t1−∗) p_rᵀ δṙ dt = [ p_rᵀ δr ]_(t0+)^(t1−∗) − ∫_(t0+)^(t1−∗) ṗ_rᵀ δr dt
obtained by integrating by parts. In order to derive boundary conditions, we next use the following relation
dv(t1−) = δv(t1−∗) + v̇(t1−∗) δt1        (37)
see [23, Section 3.5] and [20, Section 3.3].
In view of the necessary condition δ J˜ = 0 and the fundamental lemma, we have the costate equations
ṗ_r(t) = −∂H(r, v, p)/∂r,   ṗ_v(t) = −∂H(r, v, p)/∂v        (38)
Based on the definition of the Hamiltonian function in (35), the costate equation (38) can be rewritten as
ṗ_r = ∂/∂r ( (μ/r³) rᵀ p_v ) = −(μ/r³) ( (3/r²) r rᵀ − I_{3×3} ) p_v,   ṗ_v = −p_r        (39)
where I3×3 is the 3 × 3 identity matrix.
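As a sanity check of (38)-(39), the combined state-costate dynamics can be written as a single first-order system; the sketch below is one possible Python encoding (illustrative only, not the authors' code), with the costate part following (39).

# Sketch (not from the paper): combined state/costate dynamics r' = v, v' = -mu r/|r|^3,
# p_r' = -(mu/|r|^3)(3 r r^T/|r|^2 - I) p_v, p_v' = -p_r, as in (38)-(39).
import numpy as np

MU = 3.986e14  # m^3/s^2

def state_costate_rhs(t, y):
    r, v, pr, pv = y[:3], y[3:6], y[6:9], y[9:12]
    rn = np.linalg.norm(r)
    a = -MU * r / rn**3
    M = (MU / rn**3) * (3.0 * np.outer(r, r) / rn**2 - np.eye(3))
    return np.concatenate([v, a, -M @ pv, -pr])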
Using (37) and regrouping terms in (36) yields the following terms for Δv0, Δv1, dv(t0+), dv(t1−), dv(t1+)

( Δv0ᵀ/|Δv0| − q_v1ᵀ ) Δv0,   ( Δv1ᵀ/|Δv1| − q_v2ᵀ ) Δv1
q_v1ᵀ dv(t0+) + p_vᵀ(t0+) δv(t0+) = ( q_v1 + p_v(t0+) )ᵀ dv(t0+)
−q_v2ᵀ dv(t1−) − p_vᵀ(t1−) ( δv(t1−∗) + v̇(t1−∗) δt1 ) = −( q_v2 + p_v(t1−) )ᵀ dv(t1−)
( q_v2 + γ_v1 [∂g_v1(v(t1+))/∂v(t1+)]* + γ_2 [∂g_2(r(t1+), v(t1+))/∂v(t1+)]* )ᵀ dv(t1+)        (40)
As usual, in order to assure δJ̃ = 0, we choose the Lagrange multipliers to make the coefficients of Δv0, Δv1, dv(t0+), dv(t1−),
dv(t1+) in (40) vanish respectively

q_v1 − Δv0/|Δv0| = 0,   q_v2 − Δv1/|Δv1| = 0
p_v(t0+) + q_v1 = 0,   p_v(t1−) + q_v2 = 0
q_v2 + γ_v1 [∂g_v1(v(t1+))/∂v(t1+)]* + γ_2 [∂g_2(r(t1+), v(t1+))/∂v(t1+)]* = 0        (41)
Applying a similar argument as above for p_r gives

p_r(t0+) + q_r1 = 0,   p_r(t1−) + q_r2 = 0
q_r2 + γ_r1 [∂g_r1(r(t1+))/∂r(t1+)]* + γ_2 [∂g_2(r(t1+), v(t1+))/∂r(t1+)]* = 0        (42)
We now choose H(t1−) to cause the coefficient of δt1 in (36) to vanish

H(t1−) = p_rᵀ(t1−) v(t1−) − p_vᵀ(t1−) (μ/r³(t1−)) r(t1−) = 0
Finally, by solving the first and second component equations of the last vector equation in (41), we obtain the
Lagrange multipliers γ_v1, γ_2, and then substituting them into the first component equation of the last vector equation
in (42) yields the Lagrange multiplier γ_r1. With these Lagrange multipliers and rearranging the second and third
component equations of the last vector equation in (42), we obtain a boundary value equation denoted by

g(r(t1+), v(t1+); p_r(t1−), p_v(t1−)) = 0
List III.1 Two-point boundary conditions for Problem III.1
(1) r(t0) − r0 = 0,   v(t0+) − v(t0−) − Δv0 = 0
(2) p_v(t0+) + Δv0/|Δv0| = 0,   p_v(t1−) + Δv1/|Δv1| = 0
(3) H(t1−) = 0
(4) r(t1+) − r_f = 0,   v(t1+) − v_f = 0,   r(t1+) · v(t1+) = 0
(5) g(r(t1+), v(t1+); p_r(t1−), p_v(t1−)) = 0
In summary, a complete list including 19 BCs is shown in List III.1 where r(t1+ ) = r(t1− ), v(t1+ ) = v(t1− ) + ∆v1 .
C. Multi-point boundary conditions for Problem III.2
Define the augmented cost functional

J̃ := |Δv0| + |Δv1|
    + q_r1ᵀ (r(t0+) − r(t0)) + q_r2ᵀ (r(t1+) − r(t1−))
    + q_v1ᵀ (v(t0+) − v(t0) − Δv0) + q_v2ᵀ (v(t1+) − v(t1−) − Δv1)
    + γ_r1 g_r1(r(t1+)) + γ_v1 g_v1(v(t1+)) + γ_2 g_2(r(t1+), v(t1+))
    + ∫_(t0+)^(t1−) [ p_rᵀ (v − ṙ) + p_vᵀ (−(μ/r³) r − v̇) ] dt + ∫_(t1+)^(t_f) [ p_rᵀ (v − ṙ) + p_vᵀ (−(μ/r³) r − v̇) ] dt
We introduce the Hamiltonian functions for the two time sub-intervals [t0+, t1−] ∪ [t1+, t_f]

H_i(r, v, p_i) := p_riᵀ v − p_viᵀ (μ/r³) r,   p_i = [p_ri ; p_vi],   i = 1, 2        (43)
A similar, if tedious, argument as used in Problem III.1 can be applied to Problem III.2 to derive the costate
equations and boundary conditions. The two-phase costate equations are given by

ṗ_ri(t) = −∂H_i(r, v, p_i)/∂r,   ṗ_vi(t) = −∂H_i(r, v, p_i)/∂v,   i = 1, 2

A complete list of the boundary conditions is given in List III.2.
List III.2 31 boundary conditions for Problem III.2
(1) r(t0) − r0 = 0,   v(t0−) − v(t0+) − Δv0 = 0
    r(t1−) − r(t1+) = 0,   v(t1−) − v(t1+) − Δv1 = 0
    p_v1(t0+) + Δv0/|Δv0| = 0,   p_v1(t1−) + Δv1/|Δv1| = 0
(2) p_v2(t_f) = 0,   p_r2(t_f) = 0
(3) H1(t1−) − H2(t1+) = 0  or  −p_r1ᵀ(t1−) Δv1 = 0
(4) r(t1+) − r_f = 0,   v(t1+) − v_f = 0,   r(t1+) · v(t1+) = 0
(5) g(r(t1+), v(t1+); p_r1(t1−), p_r1(t1+), p_v1(t1−), p_v1(t1+)) = 0
IV. Numerical Examples
Fig. 1  The Hohmann transfer and the magnitude of the primer vector
In order to use Matlab solver bvp4c or bvp5c to solve two-point and multi-point boundary value problems with
unspecified switching time instants in Section III, two time changes must be introduced. For the time change, we refer
to, e.g., [20, Appendix A] and [24, Section 3] for details. The solver bvp4c or bvp5c accepts boundary value problems
with unknown parameters; see [25] and [26].
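The same pattern is available in open-source tools: SciPy's solve_bvp also accepts unknown parameters, so the unspecified switching time can be handled by the usual change of time variable τ = t/T with the duration T treated as an unknown parameter. The toy sketch below is illustrative only (a simple second-order example, not the authors' Matlab code).

# Illustrative sketch: handling an unspecified duration with scipy.integrate.solve_bvp
# by rescaling time to tau in [0, 1] and passing T as an unknown parameter
# (the analogue of bvp4c's unknown parameters).
import numpy as np
from scipy.integrate import solve_bvp

def rhs(tau, y, p):
    T = p[0]                                   # unknown duration
    return T * np.vstack([y[1], -y[0]])        # toy dynamics y0' = y1, y1' = -y0

def bc(ya, yb, p):
    # y0(0) = 0, y0(T) = 1, y1(T) = 0: two states plus one unknown parameter.
    return np.array([ya[0], yb[0] - 1.0, yb[1]])

tau = np.linspace(0.0, 1.0, 20)
y_guess = np.vstack([tau, np.ones_like(tau)])
sol = solve_bvp(rhs, bc, tau, y_guess, p=[2.0], tol=1e-6)
print(sol.p)   # estimated duration T (about pi/2 for this toy problem)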
Example IV.1 This example in [10, p.25] is used to illustrate that the Hohmann transfer is the solution to Problem
III.1. The altitude of the initial circular orbit is 300km, and the desired final orbit is geostationary, that is, its radius is
42164km. By the formulas of the Hohmann transfer, the magnitudes of two velocity impulses are given by
Δv1 = 2.425726280326563e+03,   Δv2 = 1.466822833675619e+03   m/s        (44)
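The figures in (44), and those in (45) below, follow from the standard Hohmann-transfer formulas; a short Python check (not part of the paper) is:

# Quick check of (44)-(45) using the standard Hohmann-transfer formulas.
import numpy as np

MU, RE = 3.986e14, 6378145.0                         # m^3/s^2, m (values from Table 1)

def hohmann(r0, rf, mu=MU):
    a = 0.5 * (r0 + rf)                              # transfer-ellipse semi-major axis
    dv1 = np.sqrt(mu * (2/r0 - 1/a)) - np.sqrt(mu/r0)   # first impulse (at r0)
    dv2 = np.sqrt(mu/rf) - np.sqrt(mu * (2/rf - 1/a))   # second impulse (at rf)
    t_ht = np.pi * np.sqrt(a**3 / mu)                # half-period of the transfer ellipse
    return dv1, dv2, t_ht

print(hohmann(RE + 300e3, 42164e3))     # ~ (2425.7, 1466.8, ...)     -> Example IV.1
print(hohmann(RE + 200e3, RE + 400e3))  # ~ (58.06, 57.63, 2715.6 s)  -> Example IV.2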
Setting the tolerance of bvp4c equal to 1e-6, we have the solution to the minimum-fuel problem III.1 with an unspecified
terminal time given by bvp4c
dv1 = 1.0e+03 * [0.000000000000804;  2.425726280326426;  0]
dv2 = 1.0e+03 * [0.000000000000209; -1.466822833675464;  0]
Compared with (44), the accuracy of the numerical solution is satisfactory: only the last three of the fifteen digits after
the decimal place differ. It is desirable to speed up the computational convergence by scaling
the state variables, although bvp4c already has a scaling procedure. The computation time is about 180 seconds on an Intel
Core i5 (2.4 GHz, 2 GB). Figure 1 shows the Hohmann transfer and the magnitude of the primer vector. The solution
provided by bvp4c depends upon initial values. Figure 2 shows the transfer orbit and the magnitude of the primer vector
corresponding to the local minimum given by (24) in the proof of Theorem II.1.
Fig. 2  The orbit transfer and the magnitude of the primer vector for the local minimum
Fig. 3  The Hohmann transfer and the magnitude of the primer vector
Example IV.2 Consider the Hohmann transfer as the multi-point boundary value problem (Problem III.2). The altitudes
of the initial and final circular orbits are 200km and 400km respectively. By the formulas of the Hohmann transfer, the
magnitudes of two velocity impulses are given by
Δv1 = 58.064987253967857,   Δv2 = 57.631827424189602        (45)
The Hohmann transfer time t HT = 2.715594949192177e + 03 seconds, hence we choose the terminal time t f = 2800
seconds. The constants, the initial values of the unknown parameters, and the solver and its tolerance are specified in
Table 1.
Table 1  Constants and initial values
Constants                                     Gravitational constant µ = 3.986e+14     Earth's radius Re = 6378145
The initial values of the state               r0 (m)  = (6578145, 0, 0)                v0 (m/s) = (0, 7.7843e+03, 0)
The initial values of the costate             pr = (-0.0012, 0, 0)                     pv = (0, -0.9, 0)
The initial values of the velocity impulses   ∆v1 = (0, 102, 0)                        ∆v2 = (0, 102, 0)
The initial value of the scaled time          0.9
Solver bvp4c                                  Tolerance = 1e-7
By trial and error, this example is successfully solved by using bvp4c. We obtain the following solution message
The solution was obtained on a mesh of 11505 points.
The maximum residual is
4.887e-08.
There were 2.2369e+06 calls to the ODE function.
There were 1442 calls to the BC function.
Elapsed time is 406.320776 seconds.
The first velocity impulse vector dv1 = [0.000000000000001, 58.064987253970472, 0]
The second velocity impulse vector dv2 = [0.000000000000924, -57.631827424187151, 0]
The scaled instant of the second velocity impulse 0.969855338997211
The time instant of the second velocity impulse 2.715594949192190e+03
The maximal error of boundary conditions 4.263256e-13
Figure 3 shows the Hohmann transfer and the magnitude of the primer vector. The magnitude of the velocity and the
velocity vector on the vx − vy plane are shown in Figure 4, where the symbol + corresponds to the time instant at which
the velocity impulse occurs.
We now assume the initial values of the velocity impulses
dv10=[0, 45, 0],
dv20=[0, -60, 0]
Fig. 4  The magnitude of the velocity and the velocity vector on the vx − vy plane
Instead of bvp4c, we use the other solver bvp5c. Setting Tolerance=1e-6, the solution corresponding to the local
minimum is found.
The solution was obtained on a mesh of 2859 points.
The maximum error is
5.291e-14.
There were 809723 calls to the ODE function.
There were 1382 calls to the BC function.
Elapsed time is 190.920474 seconds.
The first velocity impulse vector  dv1 = 1.0e+4*[0.000000000000001, -1.562657038966310, 0]
The second velocity impulse vector dv2 = 1.0e+4*[0.000000000000011, -1.527946697280544, 0]
The scaled instant of the second velocity impulse 0.969855338997205
The time instant of the second velocity impulse 2.715594949192175e+03
The maximal error of boundary conditions 4.403455e-10
Figure 5 shows the orbit transfer and the magnitude of the primer vector corresponding to the local minimum, where the
spacecraft starts from the initial point on the initial orbit, moves clockwise along the transfer orbit until the second impulse
point, and then travels counterclockwise along the final circular orbit until the terminal point marked with the symbol ×.
V. Conclusion
In this paper, by a static constrained optimization, we study the global optimality of the Hohmann transfer using a
nonlinear programming method. Specifically, an inequality presented by Marec in [10, pp. 21-32] is used to define
an inequality constraint, then we formulate the Hohmann transfer problem as a constrained nonlinear programming
Fig. 5  The orbit transfer and the magnitude of the primer vector corresponding to the local minimum
problem. A natural application of the well-known results in nonlinear programming such as the Kuhn-Tucker theorem
clearly shows the global optimality of the Hohmann transfer. In the second part of the paper, we introduce two
optimal control problems with two-point and multi-point boundary value constraints respectively. With the help of
the Matlab solver bvp4c, the Hohmann transfer problem is solved successfully.
Acknowledgments
The first author is supported by the National Natural Science Foundation of China (no. 61374084).
References
[1] Hohmann, W., The Attainability of Heavenly Bodies (1925), NASA Technical Translation F-44, 1960.
[2] Barrar, R. B., “An Analytic Proof that the Hohmann-Type Transfer is the True Minimum Two-Impulse Transfer,” Astronautica
Acta, Vol. 9, No. 1, 1963, pp. 1–11.
[3] Prussing, J. E., and Conway, B. A., Orbital Mechanics, 1st ed., Oxford University Press, 1993.
[4] Ting, L., “Optimum Orbital Transfer by Impulses,” ARS Journal, Vol. 30, 1960, pp. 1013–1018.
[5] Lawden, D. F., Optimal Trajectories for Space Navigation, Butter Worths, London, 1963.
[6] Prussing, J. E., “Primer Vector Theory and Applications,” Spacecraft Trajectory Optimization, edited by B. A. Conway,
Cambridge, 2010, Chap. 2, pp. 16–36.
[7] Pontani, M., “Simple Method to Determine Globally Optimal Orbital Transfers,” Journal of Guidance, Control, and Dynamics,
Vol. 32, No. 3, 2009, pp. 899–914.
[8] Moyer, H. G., “Minimum Impulse Coplanar Circle-Ellipse Transfer,” AIAA Journal, Vol. 3, No. 4, 1965, pp. 723–726.
[9] Battin, R. H., An Introduction to the Mathematics and Methods of Astrodynamics, AIAA Education Series, AIAA, New York,
1987.
[10] Marec, J.-P., Optimal Space Trajectories, Elsevier, New York, 1979.
[11] Hazelrigg, G. A., “Globally Optimal Impulsive Transfers via Green’s Theorem,” Journal of Guidance, Control, and Dynamics,
Vol. 7, No. 4, 1984, pp. 462–470.
[12] Palmore, J. I., “An Elementary Proof the Optimality of Hohmann Transfers,” Journal of Guidance, Control, and Dynamics,
Vol. 7, No. 5, 1984, pp. 629–630.
[13] Prussing, J. E., “Simple Proof the Global Optimality of the Hohmann Transfer,” Journal of Guidance, Control, and Dynamics,
Vol. 15, No. 4, 1992, pp. 1037–1038.
[14] Yuan, F., and Matsushima, K., “Strong Hohmann Transfer Theorem,” Journal of Guidance, Control, and Dynamics, Vol. 18,
No. 2, 1995, pp. 371–373.
[15] Vertregt, M., “Interplanetary orbits,” Journal of the British Interplanetary Society, Vol. 16, 1958, pp. 326–354.
[16] Cornelisse, J. W., Schöyer, H. F. R., and Walker, K. F., Rocket Propulsion and Spaceflight Dynamics, Pitman, London, 1979.
[17] Gurfil, P., and Seidelmann, P. K., Celestial Mechanics and Astrodynamics Theory and Practice, Springer, 2016.
[18] Avendaño, M., Martín-Molina, V., Martín-Morales, J., and Ortigas-Galindo, J., “Algebraic Approach to the Minimum-Cost
Multi-Impulse Orbit-Transfer Problem,” Journal of Guidance, Control, and Dynamics, Vol. 39, No. 8, 2016, pp. 1734–1743.
[19] Curtis, H. D., Orbital Mechanics for Engineering Students, 3rd ed., Elsevier Ltd., 2014.
[20] Longuski, J. M., Guzmán, J. J., and Prussing, J. E., Optimal Control with Aerospace Applications, Springer, 2014.
[21] Avriel, M., Nonlinear Programming: Analysis and Methods, an unabridged republication of the edition published by
Prentice-Hall in 1976 ed., Dover Publications Inc., 2003.
[22] Bertsekas, D., Nonlinear Programming, 2nd ed., Athena Scientific, 1999.
[23] Bryson Jr., A. E., and Ho, Y.-C., Applied Optimal Control, Hemisphere Publishing Corp., London, 1975.
[24] Zefran, M., Desai, J. P., and Kumar, V., “Continuous Motion Plans for Robotic Systems with Changing Dynamic Behavior,”
The Second International Workshop on the Algorithmic Foundations of Robotics, Toulouse, France, 1996.
[25] Shampine, L., Gladwell, I., and Thompson, S., Solving ODEs with Matlab, Cambridge University Press, 2003.
[26] Kierzenka, J., “Studies in the numerical solution of ordinary differential equations,” Ph.D. thesis, Department of Mathematics,
Southern Methodist University, Dallas, TX., 1998.
| 3 |
DeepPath: A Reinforcement Learning Method for
Knowledge Graph Reasoning
Wenhan Xiong and Thien Hoang and William Yang Wang
Department of Computer Science
University of California, Santa Barbara
Santa Barbara, CA 93106 USA
{xwhan,william}@cs.ucsb.edu, [email protected]
arXiv:1707.06690v2 [cs.CL] 8 Jan 2018
Abstract
We study the problem of learning to reason
in large scale knowledge graphs (KGs).
More specifically, we describe a novel reinforcement learning framework for learning multi-hop relational paths: we use a
policy-based agent with continuous states
based on knowledge graph embeddings,
which reasons in a KG vector space by
sampling the most promising relation to
extend its path. In contrast to prior work,
our approach includes a reward function
that takes the accuracy, diversity, and efficiency into consideration. Experimentally, we show that our proposed method
outperforms a path-ranking based algorithm and knowledge graph embedding
methods on Freebase and Never-Ending
Language Learning datasets.1
1 Introduction
In recent years, deep learning techniques have obtained many state-of-the-art results in various classification and recognition
problems (Krizhevsky et al., 2012; Hinton et al.,
2012; Kim, 2014). However, complex natural language processing problems often require multiple inter-related decisions, and empowering deep
learning models with the ability of learning to reason is still a challenging issue. To handle complex queries where there are no obvious answers,
intelligent machines must be able to reason with
existing resources, and learn to infer an unknown
answer.
More specifically, we situate our study in the
context of multi-hop reasoning, which is the task
1 Code and the NELL dataset are available at https://github.com/xwhan/DeepPath.
of learning explicit inference formulas, given a
large KG. For example, if the KG includes the
beliefs such as Neymar plays for Barcelona, and
Barcelona are in the La Liga league, then machines should be able to learn the following formula: playerPlaysForTeam(P,T) ∧ teamPlaysInLeague(T,L) ⇒ playerPlaysInLeague(P,L). In the
testing time, by plugging in the learned formulas,
the system should be able to automatically infer
the missing link between a pair of entities. This
kind of reasoning machine will potentially serve
as an essential component of complex QA systems.
In recent years, the Path-Ranking Algorithm
(PRA) (Lao et al., 2010, 2011a) emerges as a
promising method for learning inference paths in
large KGs. PRA uses a random-walk with restarts
based inference mechanism to perform multiple
bounded depth-first search processes to find relational paths. Coupled with elastic-net based learning, PRA then picks more plausible paths using
supervised learning. However, PRA operates in
a fully discrete space, which makes it difficult to
evaluate and compare similar entities and relations
in a KG.
In this work, we propose a novel approach
for controllable multi-hop reasoning: we frame
the path learning process as reinforcement learning (RL). In contrast to PRA, we use a translation-based knowledge embedding method (Bordes et al., 2013) to encode the continuous state of
our RL agent, which reasons in the vector space
environment of the knowledge graph. The agent
takes incremental steps by sampling a relation to
extend its path. To better guide the RL agent for
learning relational paths, we use policy gradient
training (Mnih et al., 2015) with a novel reward
function that jointly encourages accuracy, diversity, and efficiency. Empirically, we show that our
method outperforms PRA and embedding based
methods on a Freebase and a Never-Ending Language Learning (Carlson et al., 2010a) dataset.
Our contributions are three-fold:
• We are the first to consider reinforcement
learning (RL) methods for learning relational
paths in knowledge graphs;
• Our learning method uses a complex reward
function that considers accuracy, efficiency,
and path diversity simultaneously, offering
better control and more flexibility in the pathfinding process;
• We show that our method can scale up to
large scale knowledge graphs, outperforming PRA and KG embedding methods in two
tasks.
In the next section, we outline related work in
path-finding and embedding methods in KGs. We
describe the proposed method in Section 3. We
show experimental results in Section 4. Finally,
we conclude in Section 5.
2 Related Work
The Path-Ranking Algorithm (PRA) method (Lao
et al., 2011b) is a primary path-finding approach
that uses random walk with restart strategies for
multi-hop reasoning. Gardner et al. (2013; 2014)
propose a modification to PRA that computes feature similarity in the vector space. Wang and
Cohen (2015) introduce a recursive random walk
approach for integrating the background KG and
text—the method performs structure learning of
logic programs and information extraction from
text at the same time. A potential bottleneck for
random walk inference is that supernodes connecting to a large number of formulas will create huge
fan-out areas that significantly slow down the inference and affect the accuracy.
Toutanova et al. (2015) provide a convolutional
neural network solution to multi-hop reasoning.
They build a CNN model based on lexicalized dependency paths, which suffers from the error propagation issue due to parse errors. Guu et al. (2015)
uses KG embeddings to answer path queries. Zeng
et al. (2014) described a CNN model for relational extraction, but it does not explicitly model
the relational paths. Neelakantan et al. (2015) propose a recurrent neural networks model for modeling relational paths in knowledge base completion
(KBC), but it trains too many separate models, and
therefore it does not scale. Note that many of the
recent KG reasoning methods (Neelakantan et al.,
2015; Das et al., 2017) still rely on first learning
the PRA paths, which only operates in a discrete
space. Comparing to PRA, our method reasons
in a continuous space, and by incorporating various criteria in the reward function, our reinforcement learning (RL) framework has better control
and more flexibility over the path-finding process.
Neural symbolic machine (Liang et al., 2016)
is a more recent work on KG reasoning, which
also applies reinforcement learning but has a different flavor from our work. NSM learns to compose programs that can find answers to natural language questions, while our RL model tries to add
new facts to knowledge graph (KG) by reasoning
on existing KG triples. In order to get answers,
NSM learns to generate a sequence of actions that
can be combined as an executable program. The action space in NSM is a set of predefined tokens. In
our framework, the goal is to find reasoning paths,
thus the action space is relation space in the KG. A
similar framework (Johnson et al., 2017) has also
been applied to visual reasoning tasks.
3 Methodology
In this section, we describe in detail our RL-based
framework for multi-hop relation reasoning. The
specific task of relation reasoning is to find reliable predictive paths between entity pairs. We
formulate the path finding problem as a sequential decision making problem which can be solved
with a RL agent. We first describe the environment and the policy-based RL agent. By interacting with the environment designed around the KG,
the agent learns to pick the promising reasoning
paths. Then we describe the training procedure of
our RL model. After that, we describe an efficient
path-constrained search algorithm for relation reasoning with the paths found by the RL agent.
3.1 Reinforcement Learning for Relation Reasoning
The RL system consists of two parts (see Figure 1). The first part is the external environment
E which specifies the dynamics of the interaction
between the agent and the KG. This environment
is modeled as a Markov decision process (MDP).
A tuple < S, A, P, R > is defined to represent
the MDP, where S is the continuous state space,
A = {a1 , a2 , ..., an } is the set of all available ac-
Figure 1: Overview of our RL model. Left: The KG environment E modeled by an MDP. The dotted arrows (partially) show the
existing relation links in the KG and the bold arrows show the reasoning paths found by the RL agent. −1 denotes the inverse
of a relation. Right: The structure of the policy network agent. At each step, by interacting with the environment, the agent
learns to pick a relation link to extend the reasoning paths.
tions, P(St+1 = s′ | St = s, At = a) is the transition probability matrix, and R(s, a) is the reward
function of every (s, a) pair.
The second part of the system, the RL
agent, is represented as a policy network
πθ (s, a) = p(a|s; θ) which maps the state vector to a stochastic policy. The neural network
parameters θ are updated using stochastic gradient descent. Compared to Deep Q Network
(DQN) (Mnih et al., 2013), policy-based RL
methods turn out to be more appropriate for our
knowledge graph scenario. One reason is that
for the path finding problem in KG, the action
space can be very large due to complexity of the
relation graph. This can lead to poor convergence
properties for DQN. Besides, instead of learning
a greedy policy which is common in value-based
methods like DQN, the policy network is able to
learn a stochastic policy which prevents the agent
from getting stuck at an intermediate state. Before
we describe the structure of our policy network,
we first describe the components (actions, states,
rewards) of the RL environment.
Actions Given the entity pairs (es , et ) with
relation r, we want the agent to find the most
informative paths linking these entity pairs.
Beginning with the source entity es, the agent uses
the policy network to pick the most promising
relation to extend its path at each step until it
reaches the target entity et . To keep the output
dimension of the policy network consistent, the
action space is defined as all the relations in the
KG.
States The entities and relations in a KG are
naturally discrete atomic symbols. Since existing practical KGs like Freebase (Bollacker et al.,
2008) and NELL (Carlson et al., 2010b) often have
huge amounts of triples, it is impossible to directly model all the symbolic atoms in states. To
capture the semantic information of these symbols, we use translation-based embeddings such as
TransE (Bordes et al., 2013) and TransH (Wang
et al., 2014) to represent the entities and relations.
These embeddings map all the symbols to a lowdimensional vector space. In our framework, each
state captures the agent’s position in the KG. After
taking an action, the agent will move from one entity to another. These two are linked by the action
(relation) just taken by the agent. The state vector
at step t is given as follows:
st = (et , etarget − et )
where et denotes the embeddings of the current
entity node and etarget denotes the embeddings of
the target entity. At the initial state, et = esource .
We do not incorporate the reasoning relation in
the state, because the embedding of the reasoning
relation remains constant during path finding,
which is not helpful in training. However, we
find out that by training the RL agent using a set
of positive samples for one particular relation,
the agent can successfully discover the relation
semantics.
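For concreteness, the state construction can be sketched as follows (illustrative Python, not the released code; the embedding table is assumed to come from a pretrained TransE model, and the embedding dimension of 100 is assumed so that the state is 200-dimensional, as stated in Section 4.2):

# Sketch (not the released implementation): building the continuous state
# s_t = (e_t, e_target - e_t) from pretrained entity embeddings.
import numpy as np

def make_state(entity_emb, current_entity, target_entity):
    """entity_emb: dict mapping entity id -> 100-d TransE vector (dimension assumed)."""
    e_t = entity_emb[current_entity]
    e_target = entity_emb[target_entity]
    return np.concatenate([e_t, e_target - e_t])   # 200-d state vector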
Rewards There are a few factors that contribute to
the quality of the paths found by the RL agent. To
encourage the agent to find predictive paths, our
reward functions include the following scoring criteria:
Global accuracy: For our environment settings,
the number of actions that can be taken by the
agent can be very large. In other words, there are
many more incorrect sequential decisions than the
correct ones. The number of these incorrect decision sequences can increase exponentially with
the length of the path. In view of this challenge,
the first reward function we add to the RL model
is defined as follows:
rGLOBAL = { +1, if the path reaches etarget
          { −1, otherwise
the agent is given an offline positive reward +1 if
it reaches the target after a sequence of actions.
Path efficiency: For the relation reasoning task,
we observe that short paths tend to provide more
reliable reasoning evidence than longer paths.
Shorter chains of relations can also improve the
efficiency of the reasoning by limiting the length
of the RL’s interactions with the environment. The
efficiency reward is defined as follows:
rEFFICIENCY = 1 / length(p)
where path p is defined as a sequence of relations
r1 → r2 → ... → rn .
Path diversity: We train the agent to find paths using positive samples for each relation. These training samples (esource, etarget) have similar state representations in the vector space. The agent tends
to find paths with similar syntax and semantics.
These paths often contain redundant information
since some of them may be correlated. To encourage the agent to find diverse paths, we define a diversity reward function using the cosine similarity
between the current path and the existing ones:
rDIVERSITY = −(1/|F|) Σ_{i=1}^{|F|} cos(p, p_i)

where p = Σ_{i=1}^{n} r_i represents the path embedding for the relation chain r1 → r2 → ... → rn.
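Putting the three criteria together, one possible computation of the episode reward is sketched below (illustrative only, not the released code); the weights λ1, λ2, λ3 used in Algorithm 1 are not given numeric values in this excerpt, so the ones below are placeholders.

# Illustrative sketch of the three reward terms; lambda weights are placeholders.
import numpy as np

def path_embedding(path_relations, rel_emb):
    return sum(rel_emb[r] for r in path_relations)            # p = sum_i r_i

def total_reward(reached_target, path_relations, found_paths, rel_emb,
                 lam=(1.0, 0.1, 0.1)):
    r_global = 1.0 if reached_target else -1.0
    r_efficiency = 1.0 / len(path_relations)
    p = path_embedding(path_relations, rel_emb)
    if found_paths:                                           # F: previously found path embeddings
        cos = [np.dot(p, q) / (np.linalg.norm(p) * np.linalg.norm(q)) for q in found_paths]
        r_diversity = -float(np.mean(cos))
    else:
        r_diversity = 0.0
    return lam[0]*r_global + lam[1]*r_efficiency + lam[2]*r_diversity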
Policy Network We use a fully-connected neural network to parameterize the policy function
π(s; θ) that maps the state vector s to a probability distribution over all possible actions. The
neural network consists of two hidden layers, each
followed by a rectifier nonlinearity layer (ReLU).
The output layer is normalized using a softmax
function (see Figure 1).
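A minimal PyTorch rendering of this architecture could look like the following; the hidden-layer widths are our own choice, since only the 200-d input and the two ReLU hidden layers are fixed by the text.

# Minimal sketch of the policy network: two ReLU hidden layers + softmax output.
# Hidden sizes (512, 1024) are assumptions; only the 200-d input is fixed by the paper.
import torch
import torch.nn as nn

class PolicyNetwork(nn.Module):
    def __init__(self, state_dim=200, hidden=(512, 1024), num_relations=400):
        super().__init__()
        self.body = nn.Sequential(
            nn.Linear(state_dim, hidden[0]), nn.ReLU(),
            nn.Linear(hidden[0], hidden[1]), nn.ReLU(),
            nn.Linear(hidden[1], num_relations),
        )

    def forward(self, state):
        return torch.softmax(self.body(state), dim=-1)   # pi(a|s)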
3.2 Training Pipeline
In practice, one big challenge of KG reasoning is
that the relation set can be quite large. For a typical KG, the RL agent is often faced with hundreds (thousands) of possible actions. In other
words, the output layer of the policy network often has a large dimension. Due to the complexity
of the relation graph and the large action space,
if we directly train the RL model by trial and error, which is typical for RL algorithms, the RL
model will show very poor convergence properties. After a long period of training, the agent fails
to find any valuable path. To tackle this problem, we start our training with a supervised policy
which is inspired by the imitation learning pipeline
used by AlphaGo (Silver et al., 2016). In the Go
game, the player is facing nearly 250 possible legal moves at each step. Directly training the agent
to pick actions from the original action space can
be a difficult task. AlphaGo first trains a supervised
policy network using expert moves. In our case,
the supervised policy is trained with a randomized
breadth-first search (BFS).
Supervised Policy Learning For each relation,
we use a subset of all the positive samples (entity pairs) to learn the supervised policy. For each
positive sample (esource, etarget), a two-sided BFS
is conducted to find correct paths between
the entities. For each path p with a sequence of
relations r1 → r2 → ... → rn , we update the parameters θ to maximize the expected cumulative
reward using Monte-Carlo Policy Gradient (RE-
INFORCE) (Williams, 1992):
J(θ) = E_{a∼π(a|s;θ)} ( Σ_t R_{s_t, a_t} )
     = Σ_t Σ_{a∈A} π(a|s_t; θ) R_{s_t, a_t}        (1)
where J(θ) is the expected total rewards for one
episode. For supervised learning, we give a reward of +1 for each step of a successful episode.
By plugging in the paths found by the BFS, the
approximated gradient used to update the policy
network is shown below:
∇_θ J(θ) = Σ_t Σ_{a∈A} π(a|s_t; θ) ∇_θ log π(a|s_t; θ)
         ≈ ∇_θ Σ_t log π(a = r_t|s_t; θ)        (2)
Algorithm 1: Retraining Procedure with reward functions
1  Restore parameters θ from the supervised policy;
2  for episode ← 1 to N do
3      Initialize state vector st ← s0
4      Initialize episode length num_steps ← 0
5      while num_steps < max_length do
6          Randomly sample action a ∼ π(a|st)
7          Observe reward Rt, next state st+1
8          if Rt = −1 then            // if the step fails
9              Save <st, a> to Mneg
10         if success or num_steps = max_length then
11             break
12         Increment num_steps
where rt belongs to the path p.
However, the vanilla BFS is a biased search algorithm which prefers short paths. When plugging in these biased paths, it becomes difficult
for the agent to find longer paths which may potentially be useful. We want the paths to be
controlled only by the defined reward functions.
To prevent the biased search, we adopt a simple trick to add some random mechanisms to the
BFS. Instead of directly searching the path between esource and etarget, we randomly pick an intermediate node einter and then conduct two BFS
between (esource , einter ) and (einter , etarget ). The
concatenated paths are used to train the agent. The
supervised learning saves the agent great efforts
learning from failed actions. With the learned experience, we then train the agent to find desirable
paths.
Retraining with Rewards To find the reasoning
paths controlled by the reward functions, we use
reward functions to retrain the supervised policy
network. For each relation, the reasoning with one
entity pair is treated as one episode. Starting with
the source node esource , the agent picks a relation
according to the stochastic policy π(a|s), which is
a probability distribution over all relations, to extend its reasoning path. This relation link may lead
to a new entity, or it may lead to nothing. These
failed steps will cause the agent to receive negative
rewards. The agent will stay at the same state after these failed steps. Since the agent is following
a stochastic policy, the agent will not get stuck by
repeating a wrong step. To improve the training efficiency, we limit the episode length with an upper
13         Update θ using g ∝ ∇_θ Σ_{Mneg} log π(a = rt|st; θ)(−1)            // penalize failed steps
14     if success then
15         Rtotal ← λ1 rGLOBAL + λ2 rEFFICIENCY + λ3 rDIVERSITY
           Update θ using g ∝ ∇_θ Σ_t log π(a = rt|st; θ) Rtotal
bound max length. The episode ends if the agent
fails to reach the target entity within max length
steps. After each episode, the policy network is
updated using the following gradient:
∇_θ J(θ) = ∇_θ Σ_t log π(a = rt|st; θ) Rtotal        (3)
where Rtotal is the linear combination of the defined reward functions. The details of the retraining
process are shown in Algorithm 1. In practice, θ is
updated using the Adam Optimizer (Kingma and
Ba, 2014) with L2 regularization.
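For reference, one way to realize the update in (3) with automatic differentiation is sketched below (illustrative PyTorch, not the authors' released implementation):

# Illustrative REINFORCE-style update for (3): maximize sum_t log pi(a_t|s_t) * R_total.
import torch

def reinforce_update(policy, optimizer, states, actions, r_total):
    """states: (T, 200) tensor, actions: (T,) long tensor, r_total: scalar reward."""
    probs = policy(states)                                            # (T, num_relations)
    log_probs = torch.log(probs.gather(1, actions.unsqueeze(1)).squeeze(1) + 1e-10)
    loss = -(log_probs.sum() * r_total)                               # gradient ascent on J(theta)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()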
3.3 Bi-directional Path-constrained Search
Given an entity pair, the reasoning paths learned
by the RL agent can be used as logical formulas
to predict the relation link. Each formula is verified using a bi-directional search. In a typical KG,
one entity node can be linked to a large number
of neighbors with the same relation link. A simple example is the relation personNationality−1 ,
which denotes the inverse of personNationality.
Following this link, the entity United States can
reach numerous neighboring entities. If the for-
Algorithm 2: Bi-directional search for path verification
1  Given a reasoning path p : r1 → r2 → ... → rn
2  for (ei, ej) in test set D do
3      start ← 0; end ← n
4      left ← ∅; right ← ∅
5      while start < end do
6          leftEx ← ∅; rightEx ← ∅
7          if len(left) < len(right) then
8              Extend path on the left side
9              Add connected nodes to leftEx
10             left ← leftEx
Dataset      # Ent.   # R.   # Triples   # Tasks
FB15K-237    14,505   237    310,116     20
NELL-995     75,492   200    154,213     12

Table 1: Statistics of the Datasets. # Ent. denotes the number of unique entities and # R. denotes the number of relations
mula consists of such links, the number of intermediate entities can exponentially increase as we
follow the reasoning formula. However, we observe that for these formulas, if we verify the formula from the inverse direction, the number of intermediate nodes can be tremendously decreased.
Algorithm 2 shows a detailed description of the
proposed bi-directional search.
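In plain Python, the idea of Algorithm 2 amounts to expanding whichever frontier is currently smaller and checking whether the two frontiers meet; a simplified sketch over a dictionary-based KG (not the released implementation, and the "_inv" suffix for inverse relations is an assumed naming convention) is:

# Simplified sketch of the bi-directional path verification of Algorithm 2.
# kg[entity][relation] -> set of neighbouring entities; inverse relations are
# assumed to be stored under relation + '_inv', as the datasets add inverse triples.
def verify_path(kg, e_head, e_tail, path):
    left, right = {e_head}, {e_tail}
    start, end = 0, len(path)
    while start < end:
        if len(left) <= len(right):                 # extend the smaller frontier
            rel = path[start]
            left = {nbr for e in left for nbr in kg.get(e, {}).get(rel, ())}
            start += 1
        else:
            rel = path[end - 1] + '_inv'            # walk the last relation backwards
            right = {nbr for e in right for nbr in kg.get(e, {}).get(rel, ())}
            end -= 1
    return bool(left & right)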
are subsets of larger datasets. The triples in
FB15K-237 (Toutanova et al., 2015) are sampled
from FB15K (Bordes et al., 2013) with redundant relations removed. We perform the reasoning
tasks on 20 relations which have enough reasoning paths. These tasks consist of relations from
different domains like Sports, People, Locations,
Film, etc. Besides, we present a new NELL subset that is suitable for multi-hop reasoning from
the 995th iteration of the NELL system. We first
remove the triples with relation generalizations or
haswikipediaurl. These two relations appear more
than 2M times in the NELL dataset, but they have
no reasoning values. After this step, we only select the triples with Top-200 relations. To facilitate
path finding, we also add the inverse triples. For
each triple (h, r, t), we append (t, r−1 , h) to the
datasets. With these inverse triples, the agent is
able to step backward in the KG.
For each reasoning task ri , we remove all the
triples with ri or ri−1 from the KG. These removed
triples are split into train and test samples. For
the link prediction task, each h in the test triples
{(h, r, t)} is considered as one query. A set of
candidate target entities are ranked using different
methods. For fact prediction, the true test triples
are ranked with some generated false triples.
11         else
12             Extend path on the right side
13             Add connected nodes to rightEx
14             right ← rightEx
15         if left ∩ right ≠ ∅ then
16             return True
17         else
18             return False
4 Experiments
To evaluate the reasoning formulas found by our
RL agent, we explore two standard KG reasoning tasks: link prediction (predicting target entities) and fact prediction (predicting whether an
unknown fact holds or not). We compare our
method with both path-based methods and embedding based methods. After that, we further analyze
the reasoning paths found by our RL agent. These
highly predictive paths validate the effectiveness
of the reward functions. Finally, we conduct an experiment to investigate the effect of the supervised
learning procedure.
4.1 Dataset and Settings
Table 1 shows the statistics of the two datasets
we conduct our experiments on. Both of them
4.2 Baselines and Implementation Details
Most KG reasoning methods are based on either
path formulas or KG embeddings. We explore
methods from both of these two classes in our experiments. For path based methods, we compare
our RL model with the PRA (Lao et al., 2011a)
algorithm, which has been used in a couple of reasoning methods (Gardner et al., 2013; Neelakantan et al., 2015). PRA is a data-driven algorithm
using random walks (RW) to find paths and obtain
path features. For embedding based methods, we
evaluate several state-of-the-art embeddings designed for knowledge base completion, such as
TransE (Bordes et al., 2013), TransH (Wang et al.,
2014), TransR (Lin et al., 2015) and TransD (Ji
et al., 2015) .
The implementation of PRA is based on the
                       FB15K-237                                      NELL-995
Tasks                  PRA     RL      TransE  TransR   Tasks                  PRA     RL      TransE  TransR
teamSports             0.987   0.955   0.896   0.784    athletePlaysForTeam    0.547   0.750   0.627   0.673
birthPlace             0.441   0.531   0.403   0.417    athletePlaysInLeague   0.841   0.960   0.773   0.912
personNationality      0.846   0.823   0.641   0.720    athleteHomeStadium     0.859   0.890   0.718   0.722
filmDirector           0.349   0.441   0.386   0.399    athletePlaysSport      0.474   0.957   0.876   0.963
filmWrittenBy          0.601   0.457   0.563   0.605    teamPlaySports         0.791   0.738   0.761   0.814
filmLanguage           0.663   0.670   0.642   0.641    orgHeadquaterCity      0.811   0.790   0.620   0.657
tvLanguage             0.960   0.969   0.804   0.906    worksFor               0.681   0.711   0.677   0.692
capitalOf              0.829   0.783   0.554   0.493    bornLocation           0.668   0.757   0.712   0.812
organizationFounded    0.281   0.309   0.390   0.339    personLeadsOrg         0.700   0.795   0.751   0.772
musicianOrigin         0.426   0.514   0.361   0.379    orgHiredPerson         0.599   0.742   0.719   0.737
...                                                      ...
Overall                0.541   0.572   0.532   0.540    Overall                0.675   0.796   0.737   0.789

Table 2: Link prediction results (MAP) on two datasets.
code released by (Lao et al., 2011a). We use the
TopK negative mode to generate negative samples
for both train and test samples. For each positive sample, there are approximately 10 corresponding negative samples. Each negative sample
is generated by replacing the true target entity t with a faked one t′ in each triple (h, r, t). These
positive and negative test pairs generated by PRA
make up the test set for all methods evaluated in
this paper. For TransE,R,H,D, we learn a separate
embedding matrix for each reasoning task using
the positive training entity pairs. All these embeddings are trained for 1,000 epochs. 2
Our RL model makes use of TransE to get the
continuous representation of the entities and relations. We use the same dimension as TransE, R
to embed the entities. Specifically, the state vector we use has a dimension of 200, which is also
the input size of the policy network. To reason
using the path formulas, we adopt a similar linear regression approach as in PRA to re-rank the
paths. However, instead of using the random walk
probabilities as path features, which can be computationally expensive, we simply use binary path
features obtained by the bi-directional search. We
observe that with only a few mined path formulas,
our method can achieve better results than PRA’s
data-driven approach.
4.3 Results
4.3.1 Quantitative Results
Link Prediction This task is to rank the target entities given a query entity. Table 2 shows the mean
average precision (MAP) results on two datasets.
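For completeness, the ranking metric used here, mean average precision over queries, can be computed as in the following sketch (the standard definition, not tied to the released code):

# Standard mean average precision over queries (sketch): each query supplies a
# ranked list of candidate tail entities together with 0/1 relevance labels.
def average_precision(ranked_labels):
    hits, precisions = 0, []
    for i, rel in enumerate(ranked_labels, start=1):
        if rel:
            hits += 1
            precisions.append(hits / i)
    return sum(precisions) / max(hits, 1)

def mean_average_precision(all_queries):
    return sum(average_precision(q) for q in all_queries) / len(all_queries)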
2 The implementation we used can be found at https://github.com/thunlp/Fast-TransX
Fact Prediction Results
Methods    FB15K-237   NELL-995
RL         0.311       0.493
TransE     0.277       0.383
TransH     0.309       0.389
TransR     0.302       0.406
TransD     0.303       0.413

Table 3: Fact prediction results (MAP) on two datasets.
# of Reasoning Paths
Tasks                      PRA     RL
worksFor                   247     25
teamPlaySports             113     27
teamPlaysInLeague          69      21
athletehomestadium         37      11
organizationHiredPerson    244     9
...
Average #                  137.2   20.3

Table 4: Number of reasoning paths used by PRA and our RL model. RL achieved better MAP with a more compact set of learned paths.
Since path-based methods generally work better
than embedding methods for this task, we do not
include the other two embedding baselines in this
table. Instead, we spare the room to show the detailed results on each relation reasoning task.
For the overall MAP shown in the last row of the
table, our approach significantly outperforms both
the path-based method and embedding methods on
two datasets, which validates the strong reasoning
ability of our RL model. For most relations, since
the embedding methods fail to use the path infor-
mation in the KG, they generally perform worse
than our RL model or PRA. However, when there
are not enough paths between entities, our model
and PRA can give poor results. For example,
for the relation filmWrittenBy, our RL model only
finds 4 unique reasoning paths, which means there
is actually not enough reasoning evidence existing
in the KG. Another observation is that we always
get better performance on the NELL dataset. By
analyzing the paths found from the KGs, we believe the potential reason is that the NELL dataset
has more short paths than FB15K-237 and some
of them are simply synonyms of the reasoning relations.
Fact Prediction Instead of ranking the target entities, this task directly ranks all the positive and
negative samples for a particular relation. The
PRA is not included as a baseline here, since the
PRA code only gives a target entity ranking for
each query node instead of a ranking of all triples.
Table 3 shows the overall results of all the methods. Our RL model gets even better results on this
task. We also observe that the RL model beats all
the embedding baselines on most reasoning tasks.
4.3.2 Qualitative Analysis of Reasoning Paths
To analyze the properties of reasoning paths, we
show a few reasoning paths found by the agent
in Table 5. To illustrate the effect of the efficiency reward function, we show the path length
distributions in Figure 2. To interpret these paths,
take the personNationality relation for example,
the first reasoning path indicates that if we know
facts placeOfBirth(x,y) and locationContains(z,y)
then it is highly possible that person x has nationality z. These short but predictive paths indicate
the effectiveness of the RL model. Another important observation is that our model uses much
Figure 2: The distribution of path lengths on two datasets
Figure 3: The success ratio (succ10 ) during training. Task:
athletePlaysForTeam.3
fewer reasoning paths than PRA, which indicates
that our model can actually extract the most reliable reasoning evidence from KG. Table 4 shows
some comparisons about the number of reasoning
paths. We can see that, with the pre-defined reward functions, the RL agent is capable of picking
the strong ones and filtering out similar or irrelevant
ones.
4.3.3 Effect of Supervised Learning
As mentioned in Section 3.2, one major challenge
for applying RL to KG reasoning is the large action space. We address this issue by applying
supervised learning before the reward retraining
step. To show the effect of the supervised training, we evaluate the agent’s success ratio of reaching the target within 10 steps (succ10 ) after different number of training episodes. For each training episode, one pair of entities (esource , etarget )
in the train set is used to find paths. All the correct paths linking the entities will get a +1 global
reward. We then plug in some true paths for training. The succ10 is calculated on a held-out test set
that consists of 100 entity pairs. For the NELL995 dataset, since we have 200 unique relations,
the dimension of the action space will be 400 after we add the backward actions. This means that
random walks will get very low succ10 since there
may be nearly 400^10 invalid paths. Figure 3 shows
the succ10 during training. We see that even if the
agent has not seen the entity before, it can actually
pick the promising relation to extend its path. This
also validates the effectiveness of our state representations.
3 The confidence band is generated using 50 different runs.
Relation                  Reasoning Path
filmCountry               filmReleaseRegion
                          featureFilmLocation → locationContains−1
                          actorFilm−1 → personNationality
personNationality         placeOfBirth → locationContains−1
                          peoplePlaceLived → locationContains−1
                          peopleMarriage → locationOfCeremony → locationContains−1
tvProgramLanguage         tvCountryOfOrigin → countryOfficialLanguage
                          tvCountryOfOrigin → filmReleaseRegion−1 → filmLanguage
                          tvCastActor → filmLanguage
personBornInLocation      personBornInCity
                          graduatedUniversity → graduatedSchool−1 → personBornInCity
                          personBornInCity → atLocation−1 → atLocation
athletePlaysForTeam       athleteHomeStadium → teamHomeStadium−1
                          athletePlaysSport → teamPlaysSport−1
                          athleteLedSportsTeam
personLeadsOrganization   worksFor
                          organizationTerminatedPerson−1
                          mutualProxyFor−1

Table 5: Example reasoning paths found by our RL model. The first three relations come from the FB15K-237 dataset. The others are from NELL-995. Inverses of existing relations are denoted by −1.
5 Conclusion and Future Work
In this paper, we propose a reinforcement learning framework to improve the performance of relation reasoning in KGs. Specifically, we train an
RL agent to find reasoning paths in the knowledge
base. Unlike previous path finding models that are
based on random walks, the RL model allows us
to control the properties of the found paths. These
effective paths can also be used as an alternative to
PRA in many path-based reasoning methods. For
two standard reasoning tasks, using the RL paths
as reasoning formulas, our approach generally outperforms two classes of baselines.
For future studies, we plan to investigate
the possibility of incorporating adversarial learning (Goodfellow et al., 2014) to give better rewards than the human-defined reward functions
used in this work. Instead of designing rewards
according to path characteristics, a discriminative
model can be trained to give rewards. Also, to address the problematic scenario when the KG does
not have enough reasoning paths, we are interested
in applying our RL framework to joint reasoning
with KG triples and text mentions.
Acknowledgments
We gratefully acknowledge the support of
NVIDIA Corporation with the donation of one Titan X Pascal GPU used for this research.
References
Kurt Bollacker, Colin Evans, Praveen Paritosh, Tim
Sturge, and Jamie Taylor. 2008. Freebase: a collaboratively created graph database for structuring human knowledge. In Proceedings of the 2008 ACM
SIGMOD international conference on Management
of data, pages 1247–1250. ACM.
Antoine Bordes, Nicolas Usunier, Alberto GarciaDuran, Jason Weston, and Oksana Yakhnenko.
2013. Translating embeddings for modeling multirelational data. In Advances in neural information
processing systems, pages 2787–2795.
Andrew Carlson, Justin Betteridge, Bryan Kisiel, Burr
Settles, Estevam R. Hruschka Jr., and Tom M.
Mitchell. 2010a. Toward an architecture for neverending language learning. In AAAI.
Andrew Carlson, Justin Betteridge, Bryan Kisiel, Burr
Settles, Estevam R. Hruschka Jr., and Tom M.
Mitchell. 2010b. Toward an architecture for neverending language learning. In Proceedings of the
Twenty-Fourth Conference on Artificial Intelligence
(AAAI 2010).
Rajarshi Das, Arvind Neelakantan, David Belanger,
and Andrew McCallum. 2017. Chains of reasoning
over entities, relations, and text using recurrent neural networks. EACL.
Ni Lao, Jun Zhu, Xinwang Liu, Yandong Liu, and
William W Cohen. 2010. Efficient relational learning with hidden variable detection. In NIPS, pages
1234–1242.
Matt Gardner, Partha Pratim Talukdar, Bryan Kisiel,
and Tom M Mitchell. 2013. Improving learning
and inference in a large knowledge-base using latent
syntactic cues. In EMNLP, pages 833–838.
Chen Liang, Jonathan Berant, Quoc Le, Kenneth D
Forbus, and Ni Lao. 2016.
Neural symbolic
machines: Learning semantic parsers on freebase with weak supervision.
arXiv preprint
arXiv:1611.00020.
Matt Gardner, Partha Pratim Talukdar, Jayant Krishnamurthy, and Tom Mitchell. 2014. Incorporating vector space similarity in random walk inference over
knowledge bases.
Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza,
Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron
ON THE LENGTH AND DEPTH OF FINITE GROUPS
TIMOTHY C. BURNESS, MARTIN W. LIEBECK, AND ANER SHALEV
arXiv:1802.02194v1 [] 6 Feb 2018
With an appendix by D.R. Heath-Brown
Abstract. An unrefinable chain of a finite group G is a chain of subgroups G = G0 >
G1 > · · · > Gt = 1, where each Gi is a maximal subgroup of Gi−1 . The length (respectively, depth) of G is the maximal (respectively, minimal) length of such a chain. We
studied the depth of finite simple groups in a previous paper, which included a classification of the simple groups of depth 3. Here we go much further by determining the
finite groups of depth 3 and 4. We also obtain several new results on the lengths of finite
groups. For example, we classify the simple groups of length at most 9, which extends
earlier work of Janko and Harada from the 1960s, and we use this to describe the structure of arbitrary finite groups of small length. We also present a number-theoretic result
of Heath-Brown, which implies that there are infinitely many non-abelian simple groups
of length at most 9.
Finally we study the chain difference of G (namely the length minus the depth). We
obtain results on groups with chain difference 1 and 2, including a complete classification
of the simple groups with chain difference 2, extending earlier work of Brewster et al.
We also derive a best possible lower bound on the chain ratio (the length divided by the
depth) of simple groups, which yields an explicit linear bound on the length of G/R(G)
in terms of the chain difference of G, where R(G) is the soluble radical of G.
1. Introduction
An unrefinable chain of length t of a finite group G is a chain of subgroups
G = G0 > G1 > · · · > Gt−1 > Gt = 1,
(1)
where each Gi is a maximal subgroup of Gi−1 . The length of G, denoted by l(G), is the
maximal length of an unrefinable chain. This notion arises naturally in several different
contexts, finding a wide range of applications. For example, Babai [3] investigated the
length of symmetric groups in relation to the computational complexity of algorithms for
finite permutation groups. In a different direction, Seitz, Solomon and Turull studied the
length of finite groups of Lie type in a series of papers in the early 1990s [30, 32, 33],
motivated by applications to fixed-point-free automorphisms of finite soluble groups. In
fact, the notion predates both the work of Babai and Seitz et al. Indeed, Janko and
Harada studied the simple groups of small length in the 1960s, culminating in Harada’s
description of the finite simple groups of length at most 7 in [16].
Given the definition of l(G), it is also natural to consider the minimal length of an
unrefinable chain for G. Following [8], we call this number the depth of G, denoted by
λ(G). For example, if G is a cyclic group of order n > 2, then λ(G) = Ω(n), the number
of prime divisors of n (counting multiplicities). In particular, λ(G) = 1 if and only if G
has prime order. This notion appears in the earlier work of several authors. For example,
in [31] Shareshian and Woodroofe investigate the length of various chains of subgroups of finite groups G in the context of lattice theory (in their paper, the depth of G is denoted by minmaxl(G)). There are also several papers on the so-called chain difference cd(G) = l(G) − λ(G) of a finite group G. For example, a well known theorem of Iwasawa [18] states that cd(G) = 0 if and only if G is supersoluble. The simple groups G with cd(G) = 1 have been determined by Brewster et al. [7] (also see [17] and [28] for related results).

The third author acknowledges the hospitality of Imperial College, London, while part of this work was carried out. He also acknowledges the support of ISF grant 686/17 and the Vinik chair of mathematics which he holds.
Date: February 8, 2018.
2010 Mathematics Subject Classification. Primary 20E32, 20E15; Secondary 20E28.
In [8], we focus on the depth of finite simple groups. One of the main results is [8,
Theorem 1], which determines the simple groups of depth 3 (it is easy to see that λ(G) ≥ 3 for every non-abelian simple group G); the groups that arise are recorded in Table 1. We also show that alternating groups have bounded depth (indeed, λ(An) ≤ 23 for all n, whereas l(An) tends to infinity with n) and we obtain upper bounds on the depth of each
simple group of Lie type. We refer the reader to [9] for results on analogous notions of
length and depth for connected algebraic groups over algebraically closed fields.
G       | Conditions
Ap      | p and (p − 1)/2 prime, p ∉ {7, 11, 23}
L2(q)   | (q + 1)/(2, q − 1) or (q − 1)/(2, q − 1) prime, q ≠ 9; or
        | q prime and q ≡ ±3, ±13 (mod 40); or
        | q = 3^k with k ≥ 3 prime
Lǫn(q)  | n and (q^n − ǫ)/((q − ǫ)(n, q − ǫ)) both prime, n ≥ 3 and
        | (n, q, ǫ) ≠ (3, 4, +), (3, 3, −), (3, 5, −), (5, 2, −)
2B2(q)  | q − 1 prime
M23, B  |
Table 1. The simple groups G with λ(G) = 3
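The L2(q) conditions in Table 1 are easy to test by machine. The following Python sketch (using sympy for primality testing, and assuming the conditions exactly as stated in the table above) lists the small prime powers q for which L2(q) is a candidate of depth 3:

```python
# A minimal check of the L2(q) row of Table 1: list small prime powers
# q = p^f satisfying at least one of the stated arithmetic conditions.
from sympy import isprime

def l2_depth3_candidate(p, f):
    q = p ** f
    d = 2 if q % 2 == 1 else 1                # d = (2, q - 1)
    if q != 9 and (isprime((q + 1) // d) or isprime((q - 1) // d)):
        return True
    if f == 1 and q % 40 in (3, 13, 27, 37):  # q prime, q = +-3, +-13 (mod 40)
        return True
    if p == 3 and f >= 3 and isprime(f):      # q = 3^k with k >= 3 prime
        return True
    return False

candidates = [p ** f for p in (2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31)
              for f in range(1, 6)
              if p ** f >= 4 and l2_depth3_candidate(p, f)]
print(sorted(candidates))
```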
Our goal in this paper is to extend the depth results in [8] in several different directions,
both for simple groups, as well as arbitrary finite groups. We also revisit some of the
aforementioned results of Janko and Harada from the 1960s, providing a precise description
of the simple groups of small length. In turn, this allows us to describe the structure of
arbitrary finite groups of small length and we can use this to classify the simple groups G
with cd(G) = 2, which extends one of the main results in [7].
By a theorem of Shareshian and Woodroofe [31, Theorem 1.4], it follows that λ(G) > 3
for every insoluble finite group G. Our first main result determines all finite groups of
depth 3. In particular, notice that an obvious consequence of the theorem is that almost
simple groups of depth 3 are simple.
Theorem 1. A finite group G has depth 3 if and only if either G is soluble of chief length
3, or G is a simple group as in Table 1.
The next result classifies the finite groups of depth 4. In part (iv) of the statement, a
twisted wreath product T twrφ S for non-abelian simple groups S, T is as defined and studied
in [4]. The ingredients are a transitive action of S on k points with point stabiliser S1 , and
a homomorphism φ : S1 → Aut(T ) with image containing Inn(T ). Thus T is isomorphic
to a proper section of S; indeed, the subgroups C = φ^{−1}(Inn(T)) and D = ker(φ) ∩ C satisfy C/D ≅ T, and (C, D) forms an S-maximal section of S, as defined in [4, Definition 4.1]. Moreover T twr_φ S is a semidirect product T^k.S having S as a maximal subgroup.
Theorem 2. Suppose G is a finite group of depth 4. Then one of the following holds,
where p is a prime:
(i) G is soluble of chief length 4.
(ii) G = T × T, T × Cp or T^p.Cp, where T is simple of depth 3 (as in Table 1).
(iii) G = (Cp)^k.T (a possibly nonsplit extension), where T is simple of depth 3, and acts irreducibly on (Cp)^k.
(iv) G = T twr_φ S, a twisted wreath product, where S, T are simple, T is a proper section of S, and S has depth 3.
(v) G is quasisimple, and Z(G) is either 1 or Cp.
(vi) G is almost simple with socle T, and G/T is either 1 or Cp.
Remark 1. Let us make some comments on the statement of Theorem 2.
(a) Note that in part (iv), and also (ii) (apart from T^p.Cp), the group G has a maximal subgroup that is simple of depth 3. In (iii), for split extensions this is also the case; for nonsplit, G must have a maximal subgroup M = p^k.M0 with M0 maximal in T acting irreducibly on p^k (so λ(M0) = 2 and λ(M) = 3). See Remark 3.1 for further comments on the nonsplit groups G = p^k.T arising in part (iii).
(b) In (iv), being of depth 3, the possibilities for S can be found in Table 1. Every proper non-abelian simple section T of S can occur. The simple sections of the groups Ap and Lǫn(q) in Table 1 cannot be listed. However, the proper simple sections of the other groups in the table can be listed: for L2(q) they are A5, L2(q0) (with q = q0^k and q0 ≥ 4); for 2B2(q) there are none (since q − 1 is prime);
and those for M23 and B can be listed using [12].
(c) Consider case (v), where G is quasisimple with Z(G) = Cp (p prime) and let
T = G/Z(G), a simple group. Here λ(G) = λ(T ) + 1, as explained in Remark 3.2,
so λ(G) = 4 if and only if T is one of the simple groups in Table 1. In particular, by
considering the Schur multipliers of the simple groups in Table 1 (see [21, Theorem
5.1.4], for example), we deduce that a quasisimple group with nontrivial centre has
depth 4 if and only if it appears in Table 2.
G              | Conditions
2.Ap           | p and (p − 1)/2 prime, p ∉ {7, 11, 23}
SLǫn(q)        | n = (n, q − ǫ) and n and (q^n − ǫ)/(n(q − ǫ)) both prime, n ≥ 3 and
               | (n, q, ǫ) ≠ (3, 4, +), (3, 5, −)
SL2(q)         | (q + 1)/2 or (q − 1)/2 prime, q ≠ 9; or
               | q prime and q ≡ ±3, ±13 (mod 40); or
               | q = 3^k with k ≥ 3 prime
2.2B2(8), 2.B  |
Table 2. The quasisimple groups G with Z(G) ≠ 1 and λ(G) = 4
The next result sheds further light on the almost simple groups of depth 4 (case (vi) of
Theorem 2).
Theorem 3. Let G be an almost simple group with socle T and depth 4. Then one of the
following holds:
(i) G/T = Cp , p prime and λ(T ) = 3;
(ii) (G, T ) is one of the cases in Table 3 (in each case, λ(T ) = 4);
(iii) G = T has a soluble maximal subgroup M of chief length 3; the possibilities for
(G, M ) are listed in Table 4;
(iv) G = T has a simple maximal subgroup of depth 3.
In Table 4, the required conditions when G = L2(q) and q is odd are rather complicated to state (mainly due to the fact that we need λ(G) ≠ 3). To simplify the presentation of the table, we refer to the following conditions on q:
   Ω(q ± 1) ≥ 3,  q ≢ ±3, ±13 (mod 40) if q is prime,  and  q ≠ 3^k with k ≥ 3 prime.   (2)
We refer the reader to Remark 3.3 for further details on the simple groups that arise in
part (iv) of Theorem 3.
T              | G              | Conditions
A6             | PGL2(9), M10   |
A7, A11, A23   | S7, S11, S23   |
L2(q)          | PGL2(q)        | q prime, q ≡ ±11, ±19 (mod 40), Ω(q ± 1) ≥ 3
L3(4)          | PGL3(4)        |
U3(5)          | PGU3(5)        |
Table 3. The almost simple groups G = T.p with λ(G) = λ(T) = 4
G        | M                                    | Conditions
Ap       | p:((p − 1)/2)                        | p prime, Ω(p − 1) = 3
A6       | S4, 3^2:4                            |
L2(q)    | Fq:((q − 1)/2), D_{q−1}              | q odd, Ω(q − 1) = 3, (2) holds
L2(q)    | D_{q+1}                              | q odd, Ω(q + 1) = 3, (2) holds
L2(q)    | S4                                   | q prime, q ≡ ±1 (mod 8), (2) holds
L2(q)    | Fq:(q − 1), D_{2(q−1)}               | q even, Ω(q − 1) = 2, Ω(q + 1) ≥ 2
L2(q)    | D_{2(q+1)}                           | q even, Ω(q + 1) = 2, Ω(q − 1) ≥ 2
Lǫ3(q)   | (C_{q−ǫ})^2:S3                       | q ≥ 8 even, q − ǫ prime, Ω(q^2 + ǫq + 1) ≥ 2
Lǫn(q)   | ((q^n − ǫ)/((q − ǫ)(n, q − ǫ))):n    | n ≥ 3 prime, Ω((q^n − ǫ)/((q − ǫ)(n, q − ǫ))) = 2
2B2(q)   | D_{2(q−1)}                           | Ω(q − 1) = 2
2B2(q)   | (q ± √(2q) + 1):4                    | q ± √(2q) + 1 prime, Ω(q − 1) ≥ 2
2G2(q)   | (q ± √(3q) + 1):6                    | q ± √(3q) + 1 prime, q > 3
3D4(q)   | (q^4 − q^2 + 1):4                    | q^4 − q^2 + 1 prime
J1       | 7:6, 11:10, 19:6, 2^3:7:3            |
J4       | 43:14                                |
Ly       | 67:22                                |
Fi′24    | 29:14                                |
Th       | 31:15                                |
Table 4. The simple groups G of depth 4 with a soluble maximal subgroup M of depth 3
Next we turn to our main results on the lengths of finite groups. Recall that the finite
simple groups of small length were studied by Janko and Harada in the 1960s, beginning
with [19] which classifies the simple groups of length 4 (since λ(G) > 3, Iwasawa’s theorem
implies that l(G) > 4 for every non-abelian simple group G). In a second paper [20], Janko
describes the simple groups of length 5 and this was extended by Harada [16] to length at
most 7. In both papers, the main results state that either G = L2 (q) for some unspecified
prime powers q, or G belongs to a short list of specific groups.
The following result extends this earlier work by giving a precise classification of the
simple groups of length at most 9.
Theorem 4. Let G be a non-abelian finite simple group of length at most 9. Then G and
l(G) are recorded in Table 5, where p is a prime number.
l(G) | G                   | Conditions
4    | A5, L2(q)           | q = p > 5, max{Ω(q ± 1)} = 3 and q ≡ ±3, ±13 (mod 40)
5    | L2(q)               | q ∈ {7, 8, 9, 11, 19, 27, 29}, or q = p and max{Ω(q ± 1)} = 4
6    | A7, J1, L2(q)       | q ∈ {25, 125}, or q = p and max{Ω(q ± 1)} = 5
7    | M11, U3(3), U3(5)   |
     | L2(q)               | q ∈ {16, 32, 49, 121, 169}, or q = p and max{Ω(q ± 1)} = 6, or
     |                     | q = p^3, Ω(q − 1) = 4 and Ω(q + 1) ≤ 6
8    | M12, 2B2(8), L3(3)  |
     | L2(q)               | q = p and max{Ω(q ± 1)} = 7, or
     |                     | q = p^2, Ω(q − 1) = 6 and Ω(q + 1) ≤ 7, or
     |                     | q = p^3, Ω(q − 1) = 5 and Ω(q + 1) ≤ 7, or
     |                     | q = p^3, Ω(q − 1) ≤ 4 and Ω(q + 1) = 7, or
     |                     | q = p^5, Ω(q − 1) = 3 and Ω(q + 1) ≤ 7
9    | A8, U4(2), L3(4)    |
     | U3(q)               | q ∈ {4, 11, 13, 29}, or q = p, Ω(q ± 1) = 3, Ω(q^2 − q + 1) ≤ 8,
     |                     | q ≡ 2 (mod 3) and q ≡ ±3, ±13 (mod 40)
     | L2(q)               | q ∈ {81, 128, 2187}, or q = p and max{Ω(q ± 1)} = 8, or
     |                     | q = p^2, Ω(q − 1) = 7 and Ω(q + 1) ≤ 8, or
     |                     | q = p^2, Ω(q − 1) = 6 and Ω(q + 1) = 8, or
     |                     | q = p^3, Ω(q − 1) = 6 and Ω(q + 1) ≤ 8, or
     |                     | q = p^3, Ω(q − 1) ≤ 5 and Ω(q + 1) = 8, or
     |                     | q = p^5, Ω(q − 1) = 4 and Ω(q + 1) ≤ 8, or
     |                     | q = p^5, Ω(q − 1) = 3 and Ω(q + 1) = 8
Table 5. The simple groups G of length at most 9
The proof of Theorem 4 is given in Section 4.1, together with the proof of the following
corollary, which describes the structure of finite groups of small length. Recall that soluble
groups G have length Ω(|G|), so we focus on insoluble groups.
Corollary 5. Let G be a finite insoluble group, in which case l(G) > 4.
(i) l(G) = 4 if and only if G is simple as in line 1 of Table 5.
(ii) l(G) = 5 if and only if one of the following holds:
(a) G is simple as in line 2 of Table 5; or
(b) G = T × Cp with T simple of length 4 (as in Table 5) and p a prime; or
(c) G = SL2 (q) or PGL2 (q), and either q = 5, or q > 5 is a prime such that
max{Ω(q ± 1)} = 3 and q ≡ ±3, ±13 (mod 40).
(iii) l(G) = 6 if and only if one of the following holds:
(a) G is simple as in line 3 of Table 5; or
(b) G = T ×Cp , or a quasisimple group p.T , or an almost simple group T.p, where
T is simple of length 5 (as in Table 5) and p a prime; the quasisimple groups
occurring are SL2 (q), 3.L2 (9), and the almost simple groups are PGL2 (q),
M10 , S6 , L2 (8).3 and L2 (27).3; or
(c) G = L2 (q) × (p.r), (L2 (q) × p).2, SL2 (q) × p or 2.L2 (q).2, where p, r are primes
and L2 (q) has length 4, as in Table 5.
Let G = L2 (q), where q is a prime, and consider the conditions on q in the first row
of Table 5. One checks that the first ten primes that satisfy the given conditions are as
follows:
q ∈ {13, 43, 67, 173, 283, 317, 653, 787, 907, 1867},
but it is not known if there are infinitely many such primes. The following more general
problem is addressed in [1]: Does there exist an infinite set S of non-abelian finite simple
groups and a positive integer N such that l(G) 6 N for all G ∈ S? The main result of [1]
gives a positive answer to this question. The key ingredient is a purely number theoretic
result [1, Theorem C], which states that for each positive integer n, there is an infinite
set of primes P and a positive integer N such that Ω(pn − 1) 6 N for all p ∈ P. More
precisely, for n = 2 they show that the conclusion holds with N = 21, which immediately
implies that there are infinitely many primes p with l(L2 (p)) 6 20. The same problem
arises in work of Gamburd and Pak (see [14, p.416]), who state that l(L2 (p)) 6 13 for
infinitely many primes p (giving [15] as a reference).
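The conditions in the first row of Table 5 are straightforward to verify numerically; a minimal Python sketch (assuming sympy for factorisation) that checks the ten primes listed above:

```python
# Check that the ten primes listed above satisfy the first-row conditions of
# Table 5: max{Omega(q - 1), Omega(q + 1)} = 3 and q = +-3, +-13 (mod 40).
from sympy import factorint

def bigomega(n):
    return sum(factorint(n).values())

for q in (13, 43, 67, 173, 283, 317, 653, 787, 907, 1867):
    ok = max(bigomega(q - 1), bigomega(q + 1)) == 3 and q % 40 in (3, 13, 27, 37)
    print(q, ok)       # every line should print True
```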
We establish the following strengthening of the results in [1, 14].
Theorem 6. There are infinitely many finite non-abelian simple groups G with l(G) 6 9.
In fact we show that l(L2 (p)) 6 9 for infinitely many primes p. As explained in Section
4.2, this is easily deduced from the following number-theoretic result of Heath-Brown,
which is of independent interest.
Theorem 7 (Heath-Brown). There are infinitely many primes p ≡ 5 (mod 72) for which Ω((p^2 − 1)/24) ≤ 7.
See Appendix A for the proof of this theorem, which implies that there are infinitely
many primes p for which max{Ω(p ± 1)} 6 8.
Next we consider the chain difference of simple groups. The following result determines
the simple groups of chain difference two (see Section 5.1 for the proof). This extends
earlier work of Brewster et al. [7, Theorem 3.3] (also see Theorem 5.1), who described the
simple groups of chain difference one.
Theorem 8. Let G be a finite simple group. Then cd(G) = 2 if and only if one of the
following holds:
(i) G = A7 , J1 or U3 (5).
(ii) G = L2 (q) and either q ∈ {7, 8, 11, 27, 125}, or q is a prime and one of the following
holds:
(a) max{Ω(q ± 1)} = 4 and either min{Ω(q ± 1)} = 2, or q ≡ ±3, ±13 (mod 40).
(b) max{Ω(q ± 1)} = 5, min{Ω(q ± 1)} > 3 and q 6≡ ±3, ±13 (mod 40).
The chain ratio of a finite group G is given by cr(G) = l(G)/λ(G) and the next result establishes a best possible lower bound on the chain ratio of simple groups. In the
statement, we define
G = {G : G simple, l(G) = 5 and λ(G) = 4}.
(3)
By combining Theorem 4 with [8, Theorem 1], it follows that the groups in G are of the form
L2 (q) and either q ∈ {9, 19, 29}, or q is a prime with max{Ω(q ±1)} = 4, min{Ω(q ±1)} > 3
and q 6≡ ±3, ±13 (mod 40).
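The primes q contributing to G beyond the three exceptional values can be enumerated directly; a short Python sketch (assuming sympy, and encoding the description of G just given) lists the first few:

```python
# Enumerate the first few primes q with max{Omega(q +- 1)} = 4,
# min{Omega(q +- 1)} >= 3 and q != +-3, +-13 (mod 40), i.e. the prime values
# of q (beyond q in {9, 19, 29}) in the description of G above.
from sympy import isprime, factorint

def bigomega(n):
    return sum(factorint(n).values())

hits, q = [], 5
while len(hits) < 8:
    q += 2
    if isprime(q):
        a, b = bigomega(q - 1), bigomega(q + 1)
        if max(a, b) == 4 and min(a, b) >= 3 and q % 40 not in (3, 13, 27, 37):
            hits.append(q)
print(hits)            # begins 41, 89, 101, 103, ...
```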
Theorem 9. Let G be a non-abelian finite simple group. Then
cr(G) ≥ 5/4,
with equality if and only if G ∈ G.
It follows from [8, Corollary 9] that there exists an absolute constant a such that l(G) ≤ a cd(G) for every non-abelian simple group G. As an immediate corollary of Theorem 9, we deduce that a = 5 is the best possible constant.
Corollary 10. Let G be a non-abelian finite simple group. Then l(G) ≤ 5 cd(G), with equality if and only if G ∈ G.
Our final result, which applies Theorem 9, relates the structure of an arbitrary finite
group G with its chain difference. We let R(G) denote the soluble radical of G.
Theorem 11. Let G be a finite group. Then
   l(G/R(G)) ≤ 10 cd(G).
Combining this theorem with [1, Proposition 2.2], it follows that
   Ω(|G/R(G)|) ≤ 100 cd(G)^2.
Note that the length of G itself need not be bounded in terms of cd(G); indeed, if G is supersoluble then cd(G) = 0 while l(G) may be arbitrarily large. However, we show in Proposition 5.10 below that, if ss(G) denotes the direct product of the non-abelian composition factors of G (with multiplicities), then
   l(ss(G)) ≤ 5 cd(G).
This extends Corollary 10 dealing with simple groups, and serves as a useful tool in the
proof of Theorem 11 above.
2. Preliminaries
We begin by recording some preliminary results, which will be needed in the proofs of
our main theorems. Given a finite group G, we write chiefl(G) for the length of a chief
series of G. Recall that l(G) and λ(G) denote the length and depth of G, respectively, as
defined in the Introduction. Let cd(G) = l(G) − λ(G) be the chain difference of G.
Lemma 2.1. Let G be a finite group and let N be a normal subgroup of G.
(i) l(G) = l(N ) + l(G/N ).
(ii) λ(G/N) ≤ λ(G) ≤ λ(N) + λ(G/N).
Proof. This is straightforward. For example, see [10, Lemma 2.1] for part (i).
Notice that Lemma 2.1(i) implies that the length of a finite group is equal to the sum
of the lengths of its composition factors. In particular, if G is soluble then l(G) = Ω(|G|),
which is the number of prime divisors of |G| (counting multiplicities).
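A minimal Python sketch (assuming sympy) illustrating the point, together with an insoluble example where length and Ω(|G|) differ:

```python
# Omega(n): the number of prime divisors of n counted with multiplicity.
# For soluble G the length equals Omega(|G|), e.g. l(S4) = Omega(24) = 4;
# for insoluble G it does not, e.g. |L2(13)| = 1092 has Omega(1092) = 5
# while l(L2(13)) = 4 (see Table 5).
from sympy import factorint

def bigomega(n):
    return sum(factorint(n).values())

print(bigomega(24), bigomega(1092))   # 4 5
```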
The next result is [7, Lemma 1.3].
Lemma 2.2. If G is a finite group, B ≤ G and A is a normal subgroup of B, then
   cd(G) ≥ cd(B/A) + cd(A).
In particular, cd(G) ≥ cd(L) for every section L of G.
Lemma 2.3. Let G be a finite group.
(i) If G is soluble, then λ(G) = chiefl(G).
(ii) If G is insoluble, then λ(G) ≥ chiefl(G) + 2.
Proof. Part (i) is [22, Theorem 2] and part (ii) is [31, Theorem 1.4].
Lemma 2.4. Let H be soluble, p a prime, and suppose G = H^p⟨α⟩ where α^p ∈ H^p and α permutes the p factors transitively. Then λ(G) ≥ λ(H) + 1.
Proof. We proceed by induction on |H|. The base case H = 1 is trivial. Now assume H ≠ 1, and let N be a minimal normal subgroup of H. By Lemma 2.3(i) we have λ(H/N) = λ(H) − 1.
Let M = ∏_{i=0}^{p−1} N^{α^i}. Then 1 ≠ M ⊳ G, and λ(G) ≥ λ(G/M) + 1, again by Lemma 2.3(i). Applying the induction hypothesis to G/M ≅ (H/N)^p.p, we have
   λ(G/M) ≥ λ(H/N) + 1.
It follows that λ(G) ≥ λ(H/N) + 2 = λ(H) + 1, as required.
The next lemma on the length of L2 (q) will be useful later.
Lemma 2.5. Let G = L2(q), where q = p^f ≥ 5 and p is a prime.
(i) If q is even, then l(G) = Ω(q − 1) + f + 1.
(ii) If q is odd, then either
   l(G) = max{Ω(q − 1) + f, Ω(q + 1) + 1},   (4)
or q ∈ {7, 11, 19, 29} and l(G) = 5, or q = 5 and l(G) = 4.
Proof. Part (i) is a special case of [32, Theorem 1], noting that 2^f:(2^f − 1) is a Borel subgroup of G. Now assume q is odd. The case f = 1 follows from [10, Proposition 5.2], so let us assume f ≥ 2. We proceed by induction on Ω(f).
First assume Ω(f) = 1, so f is a prime, and let M be a maximal subgroup of G. By inspecting [6, Tables 8.1, 8.2], either M = PGL2(p) or A5 (for f = 2 only), or M = p^f:((p^f − 1)/2), D_{p^f ± 1} or L2(p), which gives
   l(G) = max{Ω(q − 1) + f, Ω(q + 1) + 1, l(L2(p)) + 1 + δ_{2,f}},
where δ_{i,j} is the familiar Kronecker delta. It is easy to check that (4) holds if p ∈ {3, 5, 7, 11, 19, 29}. For example, if p = 29 and f = 2, then Ω(q − 1) = 6 and l(L2(p)) = 5. For any other prime p,
   l(L2(p)) + 1 + δ_{2,f} = max{Ω(p ± 1)} + 2 + δ_{2,f} ≤ max{Ω(q − 1) + f, Ω(q + 1) + 1}
and the result follows.
Similarly, if Ω(f) ≥ 2 then
   l(G) = max{Ω(q − 1) + f, Ω(q + 1) + 1, l(L2(q^{1/r})) + 1 + δ_{2,r} : r ∈ π(f)},
where π(f) is the set of prime divisors of f, and induction gives
   l(L2(q^{1/r})) = max{Ω(q^{1/r} − 1) + f/r, Ω(q^{1/r} + 1) + 1}.
Therefore
   l(L2(q^{1/r})) + 1 + δ_{2,r} ≤ max{Ω(q − 1) + f, Ω(q + 1) + 1}
and we conclude that (4) holds.
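Lemma 2.5 reduces the computation of l(L2(q)) to factorisation; a small Python sketch (assuming sympy, with the exceptional odd values of part (ii) treated separately) evaluating the formula:

```python
# The length of L2(q) from Lemma 2.5, with the exceptional odd values
# q in {5, 7, 11, 19, 29} of part (ii) handled separately.
from sympy import factorint

def bigomega(n):
    return sum(factorint(n).values())

def length_L2(p, f):
    q = p ** f
    if p == 2:
        return bigomega(q - 1) + f + 1                     # part (i)
    if q == 5:
        return 4
    if q in (7, 11, 19, 29):
        return 5
    return max(bigomega(q - 1) + f, bigomega(q + 1) + 1)   # equation (4)

# A few values consistent with Table 5: l(L2(8)) = 5, l(L2(9)) = 5,
# l(L2(25)) = 6, l(L2(13)) = 4.
print(length_L2(2, 3), length_L2(3, 2), length_L2(5, 2), length_L2(13, 1))
```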
3. Depth
In this section we prove our results on depths, namely Theorems 1, 2 and 3.
3.1. Proof of Theorem 1. Let G be a finite group. For soluble groups, the theorem is
an immediate corollary of Lemma 2.3(i), so let us assume G is insoluble. Here Lemma
2.3(ii) implies that λ(G) > chiefl(G) + 2. Hence if λ(G) = 3, then chiefl(G) = 1 and thus
G is simple, and is as in Table 1 by [8, Theorem 1]. Conversely, the groups in the table
indeed have depth 3.
3.2. Proof of Theorem 2. Suppose G is a finite group and λ(G) = 4. If G is soluble
then it has chief length 4 by Lemma 2.3(i), as in part (i) of Theorem 2. Now assume G is
insoluble. Since λ(G) > chiefl(G) + 2 by Lemma 2.3(ii), it follows that chiefl(G) 6 2. If
chiefl(G) = 1 then G is simple, so (v) holds.
Now assume that chiefl(G) = 2. Then G has a minimal normal subgroup N ≅ T^k for some simple (possibly abelian) group T, and G/N ≅ S is simple (also possibly non-abelian).
Suppose first that k = 1 and S, T are both non-abelian. Then G ≅ S × T by the Schreier hypothesis. When S ≇ T, any maximal subgroup of G is of the form S0 × T or S × T0 (with S0, T0 maximal in S, T respectively), and neither of these can have depth 3, by Theorem 1. Hence S ≅ T. Now G has a maximal subgroup M of depth 3, and M cannot be of the above form S0 × T or S × T0. It follows that M is a diagonal subgroup isomorphic to T, and hence λ(T) = 3 and G is as in conclusion (ii) of Theorem 2.
Next suppose k = 1 and S or T is an abelian simple group Cp . Then G is one of T × Cp ,
S ×Cp, a quasisimple group p.S or an almost simple group T.p. The latter two possibilities
are conclusions (v) and (vi). Now assume G = T × Cp , and let M be a maximal subgroup
of G of depth 3. Then M is either T or T0 × Cp , where T0 is maximal in T . In the
latter case Theorem 1 shows that T0 × Cp is soluble of chief length 3, hence chiefl(T0 ) = 2.
Therefore in both cases T has depth 3, and so G is as in conclusion (ii).
We may now assume that k > 1. Suppose T is non-abelian and S = Cp. Then k = p and G = N⟨α⟩ = T^p⟨α⟩, where α^p ∈ N and α permutes the p factors transitively. Let M be a maximal subgroup of G of depth 3. Then M ≠ N by Theorem 1, so, replacing α by another element in the coset Nα if necessary, M is of the form (∏_{i=0}^{p−1} H^{α^i})⟨α⟩ ≅ H^p.p for some maximal subgroup H of T. Also M is soluble, again by Theorem 1. It now follows from Lemma 2.4 that λ(M) ≥ λ(H) + 1, and hence λ(H) ≤ 2. Therefore λ(T) ≤ 3, so λ(T) = 3 and conclusion (ii) of Theorem 2 holds.
Next consider the case where T = Cp and S is non-abelian. Here G = (Cp)^k.S, where S acts irreducibly on V := (Cp)^k. Let M be a maximal subgroup of G of depth 3. If V ≰ M then M maps onto S, hence M ≅ S by Theorem 1, and conclusion (iii) holds. Now assume V ≤ M, so that M is soluble, by Theorem 1. Then M/V = S0, a maximal subgroup of S, and by Lemma 2.3(i), λ(S0) < λ(M) = 3. Hence λ(S) = 3 and again (iii) holds.
It remains to handle the case where both T and S are non-abelian. Here G = T^k.S and S acts transitively on the k factors. Let M be a maximal subgroup of G of depth 3. Then T^k ≰ M by Theorem 1. Hence M maps onto S, so M ≅ S, again by Theorem 1. In particular, λ(S) = 3. Write Ω = (G : M), the coset space of M in G. As M is a core-free maximal subgroup of G, it follows that G acts primitively on Ω and T^k is a regular normal subgroup. At this point the O'Nan–Scott theorem (see [24], for example) implies that G is a twisted wreath product T twr_φ S, as in conclusion (iv) of Theorem 2.
This completes the proof of Theorem 2.
Remark 3.1. Let us consider the nonsplit groups G = p^k.T arising in part (iii) of Theorem 2. Here T is a simple group of depth 3 (so the possibilities for T are given in Table 1) and V = (Cp)^k is a nontrivial irreducible module for T over Fq with q = p^f for a prime p. In particular, dim V ≥ 2, p divides |T| and the second cohomology H^2(T, V) is nontrivial. As noted in Remark 1(a), G has a maximal subgroup M = p^k.S with λ(M) = 3, where S < T is maximal and acts irreducibly on V. Note that S is soluble and has chief length 2. It will be difficult to give a complete classification of the depth 4 groups of this form, but we can identify some genuine examples:
Example. Let T = M23, so S = 23:11. Now T has an 11-dimensional irreducible module V over F2. Moreover, one checks that S acts irreducibly on V and H^2(T, V) ≠ 0, hence there is a nonsplit group 2^11.M23 of depth 4.
Example. Take T = A5 and S = A4. Set q = 5^f with f ≥ 1 and let V be a 3-dimensional irreducible module for T over Fq. Then S acts irreducibly on V and H^2(T, V) ≠ 0, so for each f ≥ 1 there is a nonsplit group 5^{3f}.A5 of depth 4.
Example. Suppose T = Ln(r), where n ≥ 3 is a prime and (n, r − 1) = 1, so S = ((r^n − 1)/(r − 1)):n. Let V be the natural module for T = SLn(r). Then S acts irreducibly on V, and a theorem of Bell [5] implies that H^2(T, V) ≠ 0 if and only if
   (n, r) ∈ {(3, 3^a) with a ≥ 1, (3, 2), (3, 5), (4, 2), (5, 2)}.
In particular, there is a nonsplit group 3^9.L3(27) of depth 4 (note that we need λ(L3(r)) = 3, which in this case means that r^2 + r + 1 is a prime). Thanks to Bell's result, there are also nonsplit groups 2^3.L3(2), 5^3.L3(5) and 2^5.L5(2), each of which has depth 4.
Remark 3.2. Let G be a quasisimple group with Z(G) = Cp (p prime) and G/Z(G) = T. We claim that λ(G) = λ(T) + 1, so λ(G) = 4 if and only if T is one of the simple groups in Table 1. To see this, first note that λ(T) ≤ λ(G) ≤ λ(T) + 1 by Lemma 2.1(ii). Also observe that if M is a maximal subgroup of a group of the form N.K, where N = Cp is normal, then either M = N.L with L < K maximal, or M ≅ K. Seeking a contradiction, suppose λ(G) = λ(T) = t and consider an unrefinable chain of G as in (1). Set Z = Z(G). Since λ(G) = λ(T), it follows that G1 ≇ T, so G1 = Z.T1 where T1 < T is maximal. Similarly, G2 = Z.T2 with T2 < T1 maximal, and so on. In particular, Gt−1 = Z.Tt−1 with Tt−1 < Tt−2 maximal. But Tt−1 ≠ 1 and thus |Gt−1| is composite. This is a contradiction and the claim follows.
3.3. Proof of Theorem 3. Let G be an almost simple group with λ(G) = 4 and socle
T . First assume G 6= T , so Theorem 2 implies that G/T = Cp for a prime p. If λ(T ) = 3,
then we are in case (i) of Theorem 3. Now assume λ(T ) > 4. We claim that (ii) holds, so
λ(T ) = 4 and (G, T ) is one of the cases in Table 3.
To see this, let M be a maximal subgroup of G of depth 3. Then M 6= T , so G = T M
and M ∩ T ⊳ M has index p. By Theorem 1, M is soluble and Lemma 2.3(i) implies that
λ(M ∩ T ) = 2, so M ∩ T is not maximal in T . Therefore, G has a novelty soluble maximal
subgroup M of chief length 3. This property is highly restrictive and we can determine
all the possibilities for G and M .
First assume T = An is an alternating group, so G = An .2. If n = 6 then λ(T ) = 4 and
one checks that λ(PGL2 (9)) = λ(M10 ) = 4, while λ(S6 ) = 5. Now assume n 6= 6, so that
G = Sn . By part (I) of the main theorem of [23], the novelty soluble maximal subgroups
of Sn are S2 ≀ S4 < S8 and Cp :Cp−1 < Sp for p ∈ {7, 11, 17, 23}. Of these, only Cp :Cp−1
for p ∈ {7, 11, 23} have depth 3, so S7 , S11 and S23 are the only depth 4 groups arising in
this case.
If T is a sporadic group then G = T.2 and one checks (by inspection of the Atlas [12])
that G does not have a maximal subgroup M with the required properties.
Now assume T is a simple group of Lie type over Fq . If T is an exceptional group of Lie
type, then all the maximal soluble subgroups of G are known (see [11, 25]) and one checks
that there are no relevant examples (it is helpful to note that if T = 2 B2 (q), 2 G2 (q), 2 F4 (q)
or 3 D4 (q), then G does not have any novelty maximal subgroups). Finally, suppose T is a
classical group. For the low-rank groups, it is convenient to consult the relevant tables in
[6]; in this way, one checks that the only cases that arise are the ones listed in Table 3 (in
each case, λ(T ) = 4). For example, if G = PGL2 (q), where q is a prime and q ≡ ±11, ±19
(mod 40), then G has a maximal subgroup M = S4 and M ∩ T = A4 is non-maximal in T
(note that the additional condition Ω(q ± 1) > 3 is needed to ensure that λ(T ) > 4, which
means that λ(T ) = 4 by [8, Lemma 3.1]). By inspecting [21], it is easy to check that no
examples arise when G is one of the remaining classical groups not covered by [6]. We
conclude that part (ii) of Theorem 3 holds.
To complete the proof, we may assume G = T has depth 4. Let M be a maximal
subgroup of G with λ(M ) = 3. By Theorem 1, either M is simple (and we are in part
(iv) of Theorem 3), or M is soluble of chief length 3. It remains to show that in the latter
case, the possibilities for G and M are given in Table 4. To do this, we essentially repeat
the above argument, but now there are more cases to consider because G = T and there
is no novelty condition.
First assume G = An . It is easy to verify the result for n 6 16 (with the aid of Magma,
for example), so let us assume n > 17. By the O’Nan-Scott theorem (see [24]), the only
soluble maximal subgroups of G are of the form M = AGL1 (p) ∩ G = Cp :C(p−1)/2 , with
n = p a prime. Here Lemma 2.3(i) implies that λ(M ) = Ω(p − 1), which explains the
condition Ω(p − 1) = 3 in Table 4.
Next assume G is a sporadic group. The groups with λ(G) = 4 can be read off from
[8, Lemma 3.3] and the cases appearing in Table 4 are obtained by inspecting the lists of
maximal subgroups of G in the Atlas [12].
Finally suppose G is a simple group of Lie type over Fq . As noted above, if G is an
exceptional group then all of the soluble maximal subgroups of G are known and it is
routine to read off the cases with such a subgroup of depth 3 (for G = 2 B2 (q), note that
we need the extra condition Ω(q −1) > 2 to ensure that λ(G) = 4). Similarly, the result for
classical groups is obtained by carefully inspecting [6] (for the low-rank groups) and [21]
(in the remaining cases). Once again, extra conditions on q are needed to get λ(G) = 4.
This completes the proof of Theorem 3.
Remark 3.3. It is not feasible to give a complete description of the depth 4 examples
(G, M ) arising in part (iv) of Theorem 3, but we can give some partial information:
Sporadic groups. By inspecting the Atlas [12], it is easy to show that if G 6= M is a
sporadic group, then (G, M ) is one of the following:
(M11, L2(11)), (M12, L2(11)), (M22, L2(11)), (M24, L2(7)), (M24, L2(23)), (M24, M23), (J1, L2(11)), (J2, A5), (Suz, L2(25)), (Co2, M23), (Co3, M23), (Fi23, L2(23)), (Th, L3(3)).
At present, the maximal subgroups of the Monster group have not been completely determined; among the 44 conjugacy classes of known maximal subgroups, L2 (59) is the only
simple maximal group of depth 3. According to Wilson’s recent survey article [34], any
additional maximal subgroup of the Monster is almost simple with socle L2 (8), L2 (13),
L2 (16), U3 (4) and U3 (8). Note that each of these simple groups has depth 3. The latter
possibility is eliminated in [35], and Wilson reports that he has also ruled out L2 (8) and
U3 (4) in unpublished work (see the final paragraph in Section 3 of [34]). This leaves open
the two remaining possibilities.
Alternating groups. Let G = An be an alternating group. With the aid of Magma, it is
easy to check that for n 6 100, the possibilities for (M, n) are as follows:
(A5, 6), (L2(7), 7), (L3(3), 13), (L2(13), 14), (M23, 23), (L5(2), 31), (L3(5), 31), (L2(37), 38), (L2(43), 44), (L2(47), 48), (A47, 48), (L2(53), 54), (L2(59), 60), (A59, 60), (L2(61), 62), (L2(25), 65), (L2(67), 68), (L2(73), 74), (L2(13), 78), (L2(83), 84), (A83, 84), (A87, 88).
The main theorem of [24] on the maximal subgroups of symmetric and alternating groups
provides some useful information in the general case, but it is not possible to state a precise
result.
Exceptional groups. Let G be an exceptional group of Lie type over Fq. If G = 2B2(q), 2G2(q), 2F4(q)′, 3D4(q) or G2(q), then the maximal subgroups of G are known and we can read off the relevant examples: either G = 2F4(2)′ and M = L2(25), or
• G = 2B2(q) and M = 2B2(q0) with q = q0^k, q0 > 2 and both k and q0 − 1 are primes; or
• G = G2(q) and M = L2(13) or L2(8), with various conditions on q needed for the maximality of M (see [6], for example).
In the remaining cases, by combining results of Liebeck and Seitz [27] with recent work
of Craven [13], we deduce that there are no examples with M an alternating or sporadic
group. Strong restrictions on the remaining possibilities when M is a group of Lie type can
be obtained by applying [26, Theorem 8] (defining characteristic) and the main theorem
of [27] (non-defining characteristic).
Classical groups. Finally, suppose G is a simple classical group with natural n-dimensional
module V . By Aschbacher’s subgroup structure theorem [2], either M belongs to a collection C(G) of geometric subgroups, or M ∈ S(G) is almost simple and acts irreducibly on
V . By inspecting [6, 21], it is possible to determine the relevant examples with M ∈ C(G)
(the precise list of cases will depend on some delicate number-theoretic conditions). For
example, suppose G = Un (q) is a unitary group, where n = q + 1 and q > 5 is a prime. If
(q n−1 + 1)/(q + 1) is also a prime, then G has a simple maximal subgroup M = Un−1 (q)
of depth 3 (here M is the stabiliser of a non-degenerate 1-space). For instance, G = U6 (5)
has a maximal subgroup M = U5 (5) of depth 3. It is not feasible to determine all the
cases that arise with M ∈ S(G), although this can be achieved for the low-dimensional
classical groups (that is, the groups with n 6 12) by inspecting the relevant tables in [6,
Chapter 8].
4. Length
In this section we prove our main results on length, namely Theorems 4 and 6, and
Corollary 5.
4.1. Proof of Theorem 4 and Corollary 5. Let G be a non-abelian finite simple group.
First recall that λ(G) > 3 and cd(G) > 1, so l(G) > 4. The simple groups of length 4 were
classified by Janko [19, Theorem 1]. In a second paper [20], he proves that every simple
group of length 5 is of the form L2 (q) for some prime power q (but he does not give any
further information on the prime powers that arise). In later work of Harada [16], this
result was extended to simple groups of length at most 7.
Theorem 4.1 (Harada, [16]). Let G be a finite simple group with l(G) 6 7. Then either
(i) G = U3 (3), U3 (5), A7 , M11 , J1 ; or
(ii) G = L2 (q) for some prime power q.
For the groups in part (i) of Theorem 4.1, it is easy to check that A7 and J1 have length
6, the others have length 7. We are now ready to prove Theorem 4.
Lemma 4.2. Theorem 4 holds if G ≅ L2(q).
Proof. Write q = pf , where p is a prime. In view of Lemma 2.5, the result is clear if p = 2
or f = 1, so let us assume p > 3 and f > 2, in which case
l(G) = max{Ω(q − 1) + f, Ω(q + 1) + 1}.
Since l(G) 6 9, it follows that f 6 7 and it is easy to verify the result when p ∈ {3, 5}.
Now assume p > 7, in which case Ω(p2 − 1) > 5 and f ∈ {2, 3, 5}.
Suppose f = 5, so Ω(q − 1) > 3 and we have l(G) ∈ {8, 9}. For l(G) = 8 we must have
Ω(q − 1) = 3 and Ω(q + 1) 6 7; one checks that there are primes p with these properties:
p ∈ {3, 7, 23, 83, 263, 1187, . . .}.
Similarly, for l(G) = 9 we need Ω(q − 1) = 4 and Ω(q + 1) 6 8, or Ω(q − 1) = 3 and
Ω(q + 1) = 8; in both cases, there are primes satisfying these conditions.
Next consider the case f = 3, so Ω(q − 1) > 3 and l(G) ∈ {6, 7, 8, 9}. First assume
l(G) = 6, so Ω(q − 1) = 3 and thus (p − 1)/2 and p2 + p + 1 are both primes. In particular,
p ≡ −1 (mod 12) and p > 11. Therefore, Ω(p + 1) > 4 and p2 − p + 1 is divisible by
3, hence Ω(q + 1) > 6 and we have reached a contradiction. Next assume l(G) = 7, so
Ω(q − 1) = 3 or 4. If Ω(q − 1) = 3 then we need Ω(q + 1) = 6, which forces Ω(p + 1) = 4
and p2 − p + 1 = 3r for some prime r. But 7 divides p6 − 1, so 7 must divide p + 1 and
thus p = 83 is the only possibility. But then p2 + p + 1 is composite, so this case does not
arise. However, there are primes
p ∈ {7, 11, 83, 1523, 20507, 28163, . . .}
satisfying the conditions Ω(q − 1) = 4 and Ω(q + 1) 6 6, so this case is recorded in Table
5. Similarly, if l(G) = m ∈ {8, 9} then either Ω(q − 1) = m − 3 and Ω(q + 1) 6 m − 1, or
Ω(q − 1) 6 m − 4 and Ω(q + 1) = m − 1. Moreover, one can check that there are primes p
satisfying these conditions.
Finally, let us assume f = 2. Here the condition p > 7 implies that Ω(q − 1) > 5, so
l(G) ∈ {7, 8, 9}. Suppose l(G) = 7. Here Ω(q − 1) = 5 and either (p − 1)/2 or (p + 1)/2
is a prime. Suppose (p − 1)/2 is a prime, so p ≡ 3 (mod 4) and (p + 1)/2 = 2r for some
prime r. Therefore, r, 2r − 1 and 4r − 1 are all primes. If r ∈ {2, 3} then p ∈ {7, 11} and
one checks that l(G) = 7. Now assume r > 5. If r ≡ 1 (mod 3) then 4r − 1 is divisible by
3. Similarly, if r ≡ 2 (mod 3) then 3 divides 2r − 1, so there are no examples with r > 5.
A similar argument applies if we assume (p + 1)/2 is a prime: here we need a prime r such
that 2r + 1 and 4r + 1 are also primes, and one checks that r = 3 is the only possibility,
which corresponds to the case G = L2 (169) with l(G) = 7.
Next assume l(G) = 8 and f = 2, so Ω(q − 1) = 5 or 6. The case Ω(q − 1) = 5 is
ruled out by arguing as in the previous paragraph. On the other hand, if Ω(q − 1) = 6
then we need Ω(q + 1) 6 7 and there are primes p with these properties. Finally, let us
assume l(G) = 9, so Ω(q − 1) ∈ {5, 6, 7}. The case Ω(q − 1) = 5 is ruled out as above,
whereas there are primes p such that Ω(q − 1) = 7 and Ω(q + 1) 6 8, or Ω(q − 1) = 6 and
Ω(q + 1) = 8.
In view of Theorem 4.1, it remains to determine the simple groups G ∼
6 L2 (q) with
=
l(G) = 8 or 9.
Lemma 4.3. Theorem 4 holds if l(G) = 8.
Proof. Let G be a simple group with l(G) = 8. By inspection, if G is a sporadic group then
G = M12 is the only example (also see [10, Tables III and IV]). For alternating groups,
[10, Theorem 1] gives
   l(An) = ⌊(3n − 1)/2⌋ − bn − 1,   (5)
where bn is the number of ones in the base 2 expansion of n. From the formula, it is easy
to check that no alternating group has length 8.
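A short Python sketch of this check, assuming formula (5) exactly as displayed above:

```python
# Evaluate (5): l(A_n) = floor((3n - 1)/2) - b_n - 1, where b_n is the number
# of ones in the binary expansion of n.  No value 8 occurs, and only n = 8
# gives the value 9.
def length_An(n):
    b = bin(n).count("1")
    return (3 * n - 1) // 2 - b - 1

print([(n, length_An(n)) for n in range(5, 15)])
# [(5, 4), (6, 5), (7, 6), (8, 9), (9, 10), (10, 11), (11, 12), (12, 14), (13, 15), (14, 16)]
```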
Now assume G is a simple group of Lie type over Fq , where q = pf with p a prime.
First we handle the exceptional groups. If G = 2 B2 (q) then p = 2, f > 3 is odd and
l(G) = Ω(q − 1) + 2f + 1
by [32, Theorem 1], hence l(G) = 8 if and only if f = 3. Next assume G = 2 G2 (q), so
p = 3 and f > 3 is odd. Since a Borel subgroup of G has order q 3 (q − 1), it follows that
l(G) > Ω(q − 1) + 3f + 1 > 8.
If G = G2 (q) then q > 3 and l(G) > l(GL2 (q)) + 5f + 1 > 8, so no examples arise. All of
the other exceptional groups can be eliminated in a similar fashion.
Finally, let us assume G is a classical group. If G = PΩǫn(q) is an orthogonal group with n ≥ 7, then it is clear that l(G) > 8 (indeed, a Sylow p-subgroup of G has length greater than 8). Similarly, we can eliminate symplectic groups PSpn(q) with n ≥ 6. Now assume G = PSp4(q) with q = p^f ≥ 3. Here l(G) ≥ l(SL2(q)) + 3f + 1, so we may assume f = 1, in which case G has a maximal subgroup 2^4.A5 or 2^4.S5 (according to the value of q modulo 8) and thus l(G) ≥ 1 + 4 + l(A5) = 9. Similarly, it is easy to show that l(Lǫn(q)) > 8 if n ≥ 4.
To complete the proof, we may assume that G = U3(q) or L3(q). Let B be a Borel subgroup of G and first assume G = U3(q), so q ≥ 3. If q is even, then [32, Theorem 1] gives
   l(G) = Ω(|B|) + 1 = Ω(q^2 − 1) + 3f + 1 − Ω((3, q + 1))   (6)
and thus l(G) ≥ 9. For q odd we have
   l(G) ≥ Ω(q^2 − 1) + 3f + 1 − Ω((3, q + 1))   (7)
and we quickly deduce that f = 1, so q ≥ 7 (since U3(3) and U3(5) have length 7). Now Ω(q^2 − 1) ≥ 5, so we must have Ω(q^2 − 1) = 5 and q ≡ 2 (mod 3), hence (q − 1)/2 and (q + 1)/6 are both prime. This implies that q = 6r − 1, where r and 3r − 1 are primes, so r = 2 is the only option and one checks that l(U3(11)) = 9.
Finally, let us assume G = L3(q). If q is even then
   l(G) = Ω(|B|) + 2 = 2Ω(q − 1) + 3f + 2 − Ω((3, q − 1))   (8)
and it is easy to see that l(G) ≠ 8. Now assume q is odd. If q = 3 then one can check that l(G) = 8, so let us assume q ≥ 5. Let H be a maximal parabolic subgroup of G. Then l(G) ≥ l(H) + 1, so
   l(G) ≥ l(L2(q)) + 2f + 2 + Ω(q − 1) − Ω((3, q − 1))   (9)
and we deduce that l(G) ≥ 9.
Lemma 4.4. Theorem 4 holds if l(G) = 9.
Proof. This is very similar to the proof of the previous lemma. Let G be a simple group
with l(G) = 9. By inspection, G is not a sporadic group. In view of (5), G = A8 is the
only alternating group of length 9. Now assume G is a group of Lie type over Fq , where
q = pf with p a prime. The exceptional groups are easily eliminated by arguing as in the
proof of Lemma 4.3. Similarly, if G is a classical group then it is straightforward to reduce
to the cases G = PSp4(q)′ and Lǫ3(q) (note that L4(2) ≅ A8 and U4(2) ≅ PSp4(3)).
Suppose G = PSp4(q)′. If q = 2 then G ≅ A6 and l(G) = 5. Now assume q ≥ 3. As
noted in the proof of the previous lemma, l(G) > l(SL2 (q)) + 3f + 1 and so we may assume
q = p is odd. Now G has a maximal subgroup H of type Sp2 (q) ≀ S2 , which implies that
l(G) > l(H) + 1 = 3 + 2l(L2 (p)).
If p = 3 then this lower bound is equal to 9 and one checks that l(PSp4 (3)) = 9. For p > 3
we get l(G) > 11.
Next assume G = L3 (q). If q is even, then (8) holds and one checks that l(G) = 9 if and
only if q = 4. Now assume q is odd. We have already noted that l(L3 (3)) = 8, so we may
assume q > 5. Moreover, in view of (9), we may assume that q = p. Since l(L2 (p)) > 4
and Ω(p − 1) > 2, it follows that l(L2 (p)) = 4, Ω(p − 1) = 2 and p ≡ 1 (mod 3). Clearly,
p = 7 is the only prime satisfying the latter two conditions, but l(L2 (7)) = 5.
To complete the proof of the lemma, we may assume G = U3 (q). If q is even then (6)
holds and we deduce that l(G) = 9 if and only if q = 4. Now suppose q is odd, so (7)
holds. If f > 2 then Ω(q 2 − 1) > 5 and thus l(G) > 11. Therefore, we may assume q = p
is odd. We have already noted that l(U3 (3)) = l(U3 (5)) = 7 and l(U3 (11)) = 9, and it is
straightforward to check that l(U3 (7)) = 10. Now assume q > 13. If Ω(q 2 − 1) > 7 then
(7) implies that l(G) > 10, so we must have Ω(q 2 − 1) = 5 or 6.
If Ω(q 2 − 1) = 5 then q = 13 is the only possibility (see the proof of Lemma 4.2) and
one checks that l(U3 (13)) = 9. Now assume Ω(q 2 − 1) = 6, so q ≡ 2 (mod 3) by (7). By
considering the maximal subgroups of G (see [6, Tables 8.5 and 8.6]), we see that
l(G) = max{9, Ω(q + 1) + l(L2 (q)) + 1, Ω(q 2 − q + 1) + 1}.
Note that Ω(q + 1) > 3 and l(L2 (q)) > 4 since q > 13 and q ≡ 2 (mod 3). If Ω(q + 1) > 4
then l(L2 (q)) > 5 and thus l(G) > 10. Therefore, Ω(q +1) = Ω(q −1) = 3 and l(L2 (q)) 6 5,
so either q = 29 or q ≡ ±3, ±13 (mod 40). In addition, we need Ω(q 2 − q + 1) 6 8 and one
checks there are primes p that satisfy these conditions:
p ∈ {173, 317, 653, 2693, 3413, 3677, . . .}.
This completes the proof of Theorem 4.
Proof of Corollary 5. First observe that part (i) is clear from the additivity of length (see
Lemma 2.1(i)) and the fact that every non-abelian simple group has length at least 4.
Next, assume G is a finite insoluble group with l(G) = 5. The simple groups of length 5
are given in Theorem 4, so we may assume that G is not simple. Therefore, G must have
exactly two composition factors: a non-abelian simple group T of length 4 and depth 3,
and a cyclic group Cp of prime order. In particular, λ(G) = 4 and so G is one of the groups
in Theorem 2. By inspecting the various possibilities, we see that either G = T × Cp , or
G is quasisimple with G/Z(G) = T and Z(G) = Cp , or G is almost simple with socle
T and G/T = Cp . Since l(T ) = 4, [19, Theorem 1] implies that T = L2 (q) and q is a
prime satisfying the conditions in the first row of Table 5. In particular, the only valid
quasisimple and almost simple groups are of the form SL2 (q) and PGL2 (q), respectively.
This completes the proof of part (ii) of Corollary 5.
Finally, suppose G is insoluble of length 6. Again, the simple groups of length 6 are
given by Theorem 4, so we may assume G is not simple. Then G has a unique non-abelian
composition factor T of length 4 or 5, and 6 − l(T ) abelian composition factors. It is
readily checked that the possibilities for G when l(T ) = 5 (resp. 4) are those in (iii)(b)
(resp. (c)) of Corollary 5.
4.2. Proof of Theorem 6. Set G = L2(p), where p ≥ 5 is a prime. By Lemma 2.5(ii),
   l(G) ≤ 1 + max{4, Ω(p ± 1)}.
The result follows from Theorem 7 (see Appendix A), which implies that there are infinitely many primes p with max{Ω(p ± 1)} ≤ 8.
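The numerical content behind Theorems 6 and 7 is easy to reproduce; a Python sketch (assuming sympy) listing the first few primes p ≡ 5 (mod 72) with Ω((p^2 − 1)/24) ≤ 7:

```python
# Search for primes p = 5 (mod 72) with Omega((p^2 - 1)/24) <= 7; for such p
# the bound above yields l(L2(p)) <= 9.
from sympy import isprime, factorint

def bigomega(n):
    return sum(factorint(n).values())

found, p = [], 5
while len(found) < 10:
    if isprime(p) and bigomega((p * p - 1) // 24) <= 7:
        found.append(p)
    p += 72
print(found)
```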
5. Chain differences and ratios
In this section we prove our main results concerning chain differences and chain ratios,
namely Theorems 8, 9 and 11.
5.1. Proof of Theorem 8. For comparison, we start by recalling [7, Theorem 3.3], which
describes the simple groups of chain difference one.
Theorem 5.1 (Brewster et al. [7]). Let G be a finite simple group. Then cd(G) = 1 if
and only if G = L2 (q) and either q ∈ {4, 5, 9}, or q is a prime and one of the following
holds:
(i) 3 ≤ Ω(q ± 1) ≤ 4 and either q ≡ ±1 (mod 10) or q ≡ ±1 (mod 8).
(ii) Ω(q ± 1) ≤ 3 and q ≡ ±3, ±13 (mod 40).
We begin the proof of Theorem 8 by handling the alternating and sporadic groups.
Lemma 5.2. Let G be a simple alternating or sporadic group. Then cd(G) = 2 if and
only if G = A7 or J1 .
Proof. First assume G = An is an alternating group. A formula for l(G) is given in (5) and
it is easy to compute λ(An ) directly for small values of n: we get cd(A5 ) = cd(A6 ) = 1,
cd(A7 ) = 2 and cd(A8 ) = 4. By Lemma 2.2, it follows that cd(An ) > 4 for all n > 8.
The length and depth of each sporadic group G is given in [10, Tables III and IV] and [8,
Table 2], respectively, and we immediately deduce that cd(G) = 2 if and only if G = J1 .
(Note that the “possible values” for l(G) listed in [10, Table IV] are in fact the exact
values.)
Lemma 5.3. Let G be a simple group of Lie type over Fq with G ≇ L2(q). Then cd(G) = 2 if and only if G = U3(5).
Proof. We will follow a similar approach to the proof of [7, Theorem 3.3]. Set q = p^f, where p is a prime and f ≥ 1.
First assume G = L3(q), in which case q ≥ 3 since L3(2) ≅ L2(7). If q = 3 then l(G) = 8 and λ(G) = 3, so we may assume q ≥ 4 and thus l(G) ≥ 9 by Theorem 4. If p is odd then L3(p) has a maximal subgroup SO3(p) ≅ PGL2(p), so λ(L3(p)) ≤ 6 by [8, Corollary 3.4] and thus cd(L3(p)) ≥ 3. In view of Lemma 2.2, this implies that cd(G) ≥ 3 since L3(p) ≤ G. Now assume p = 2 and let H = QL be a maximal parabolic subgroup of G, where Q is elementary abelian of order q^2 and L ≤ GL2(q) has index d = (3, q − 1). Note that L acts irreducibly on Q, so L is a maximal subgroup of H. Therefore, l(H) = Ω((q − 1)/d) + 2f + l(L2(q)) and
   λ(H) ≤ λ(L) + 1 = Ω((q − 1)/d) + λ(L2(q)) + 1,
so cd(H) ≥ 2f − 1 + cd(L2(q)) ≥ 2f and the result follows since f ≥ 2.
Now assume G = Ln (q) with n > 4. Since G has a subgroup L3 (q), the desired result
follows from Lemma 2.2 if q > 3. Similarly, if q = 2 then G has a subgroup L4 (2) and it
is easy to check that cd(L4 (2)) = 4.
Next suppose G is a classical group of rank r ≥ 3, or an untwisted exceptional group, or one of 3D4(q) or 2E6(q). It is easy to see that G has a section isomorphic to L3(q) and thus cd(G) ≥ 3 if q ≥ 3. Now assume q = 2. If G = G2(2)′ ≅ U3(3) then cd(G) = 3. Since U3(3) < Sp6(2) < Ω8^−(2), it follows that cd(G) ≥ 3 if G = Sp6(2) or Ω8^−(2). In each of the remaining cases (with q = 2), G has a section isomorphic to L4(2) and thus cd(G) ≥ 4.
Next assume G = U3(q), so q ≥ 3. Let H = QL be a Borel subgroup of G, where d = (3, q + 1). Here Q = q^{1+2}, |L| = (q^2 − 1)/d and Q/Z(Q) is elementary abelian of order q^2. Moreover, L acts irreducibly on Q/Z(Q). Therefore, l(H) = 3f + Ω(|L|) and λ(H) ≤ f + 1 + Ω(|L|), so cd(H) ≥ 2f − 1 and we may assume q = p. If p ∈ {3, 7, 11} then it is easy to check that cd(G) ≥ 3, whereas cd(G) = 2 if p = 5. For p > 11, Theorem 4 implies that l(G) ≥ 9 and we get cd(G) ≥ 3 since G has a maximal subgroup SO3(p) ≅ PGL2(p) of depth at most 5. Now assume G = Un(q) with n ≥ 4. If q is even then G has a section isomorphic to U4(2) and one checks that cd(U4(2)) = 4. Similarly, if q is odd and q ≠ 5 then G has a section U3(q) with cd(U3(q)) ≥ 3. Finally, suppose q = 5. One checks that λ(U4(5)) = 5 and thus cd(U4(5)) ≥ 5 by Theorem 4. The result now follows since G has a section isomorphic to U4(5).
To complete the proof, we may assume G = PSp4(q), 2F4(q)′, 2G2(q) or 2B2(q). Suppose G = PSp4(q) with q ≥ 3. If q is even then G has a maximal subgroup H = L2(q) ≀ S2. Now l(H) = 2l(L2(q)) + 1 and λ(H) ≤ λ(L2(q)) + 2, so cd(H) ≥ l(L2(q)) ≥ 4. Similarly, if q is odd then H = 2^4.Ω4^−(2) < G and the result follows since cd(H) = 4. Next assume G = 2F4(q)′. One checks that the Tits group 2F4(2)′ has depth 4, so cd(2F4(2)′) ≥ 6 by Theorem 4 and thus cd(G) ≥ 6 by Lemma 2.2.
Next assume G = 2G2(q), so q = 3^f and f ≥ 3 is odd. Let H be a Borel subgroup of G and let K = 2 × L2(q) be the centralizer of an involution. Then
   l(G) ≥ l(H) + 1 = Ω(q − 1) + 3f + 1,
   λ(G) ≤ λ(K) + 1 ≤ λ(L2(q)) + 2 ≤ Ω(q − 1) + 3,
and thus cd(G) ≥ 3f − 2 ≥ 7. Finally, suppose G = 2B2(q), where q = 2^f and f ≥ 3 is odd. Here l(G) = Ω(q − 1) + 2f + 1 by [32, Theorem 1] and λ(G) ≤ Ω(q − 1) + 2 (since G has a maximal subgroup D_{2(q−1)}). Therefore, cd(G) ≥ 2f − 1 ≥ 5.
The next result completes the proof of Theorem 8.
Lemma 5.4. If G = L2 (q) with q > 5, then cd(G) = 2 if and only if
(i) q ∈ {7, 8, 11, 27, 125}; or
(ii) q is a prime and one of the following holds:
(a) max{Ω(q ± 1)} = 4 and either min{Ω(q ± 1)} = 2, or q ≡ ±3, ±13 (mod 40).
(b) max{Ω(q ± 1)} = 5, min{Ω(q ± 1)} > 3 and q 6≡ ±3, ±13 (mod 40).
Proof. As before, write q = pf . First assume p = 2, so q > 4. Here l(G) = Ω(q − 1) + f + 1
by Lemma 2.5(i) and λ(G) 6 Ω(q−1)+2 (since D2(q−1) is a maximal subgroup). Therefore,
cd(G) > f − 1 and thus f ∈ {2, 3}. If f = 2 then cd(G) = 1, while cd(G) = 2 if f = 3
(the case q = 8 is recorded in part (ii)(a) of Theorem 8).
Now assume p > 3. For q 6 11, one checks that cd(G) = 2 if and only if q = 7 or 11,
so we may assume q > 13. This implies that Dq−1 is a maximal subgroup of G and thus
λ(G) 6 Ω(q − 1) + 1. Now l(G) > Ω(q − 1) + f by Lemma 2.5(ii), so cd(G) > f − 1 and
thus we may assume f ∈ {1, 2, 3}.
First assume q = p ≥ 13. By [8, Corollary 3.4] we have λ(G) = 3 if min{Ω(p ± 1)} = 2 or p ≡ ±3, ±13 (mod 40), and λ(G) = 4 otherwise.
If λ(G) = 3 then we need l(G) = 5, in which case Theorem 4 implies that max{Ω(p±1)} =
4. There are primes p that satisfy these conditions. For example, if
p ∈ {23, 59, 83, 227, 347, 563, . . .},
then Ω(p − 1) = 2 and Ω(p + 1) = 4. Similarly, we have λ(G) = 4 and l(G) = 6 if and
only if Ω(p ± 1) > 3, p 6≡ ±3, ±13 (mod 40) and max{Ω(p ± 1)} = 5. Once again, there
are primes p with these properties.
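A quick Python check (assuming sympy) that the primes listed above satisfy the stated conditions:

```python
# Check that the primes listed above satisfy Omega(p - 1) = 2 and
# Omega(p + 1) = 4, so that max{Omega(p +- 1)} = 4 and min{Omega(p +- 1)} = 2.
from sympy import factorint

def bigomega(n):
    return sum(factorint(n).values())

for p in (23, 59, 83, 227, 347, 563):
    print(p, bigomega(p - 1), bigomega(p + 1))   # expect 2 and 4 in each row
```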
Next assume q = p2 with p > 5. Since PGL2 (p) is a maximal subgroup of G, it
follows that λ(G) 6 6 and thus l(G) 6 8. Now, if l(G) 6 7 then Theorem 4 implies
that p ∈ {5, 7, 11, 13} and in each case one checks that cd(G) > 3. Therefore, we may
assume λ(G) = 6 and l(G) = 8. By Theorem 4, l(G) = 8 if and only if Ω(q − 1) = 6
and Ω(q + 1) 6 7. Similarly, λ(G) = 6 if and only if Ω(q ± 1) > 5, p ≡ ±1 (mod 10) and
λ(L2 (p)) = 4. The latter constraint yields the additional condition Ω(p ± 1) > 3, so we
need Ω(p − 1) = Ω(p + 1) = 3 since Ω(q − 1) = 6. We claim that there are no primes p that
satisfy these conditions. For example, suppose p ≡ 1 (mod 10). Then (p − 1)/10 must be
a prime. If p ≡ 1 (mod 3) then p = 31 is the only possibility, but this gives Ω(p + 1) = 5.
On the other hand, if p ≡ 2 (mod 3) then (p + 1)/6 is a prime. But p2 ≡ 1 (mod 8) and
thus (p − 1)/10 = 2 or (p + 1)/6 = 2, which implies that p = 11 and Ω(p − 1) = 2. A very
similar argument handles the case p ≡ −1 (mod 10).
Finally, suppose q = p^3. If p = 3 then one checks that λ(G) = 3 and l(G) = 5, so cd(G) = 2 in this case. Now assume p ≥ 5. Since L2(p) is a maximal subgroup of G, it follows that λ(G) = 4 if min{Ω(p ± 1)} = 2 or p ≡ ±3, ±13 (mod 40), and λ(G) = 5 otherwise; thus l(G) ≤ 7. By applying Theorem 4, we see that λ(G) = 4 and l(G) = 6 if and
only if p = 5. Similarly, λ(G) = 5 and l(G) = 7 if and only if Ω(q − 1) = 4, Ω(q + 1) 6 6,
Ω(p ± 1) > 3 and p 6≡ ±3, ±13 (mod 40). Note that the conditions Ω(q − 1) = 4 and
Ω(p ± 1) > 3 imply that Ω(p − 1) = 3 and p2 + p + 1 is a prime, so p ≡ 2 (mod 3) and
p2 − p + 1 is divisible by 3, whence Ω(p2 − p + 1) > 2. There are primes p such that
Ω(p3 − 1) = 4, Ω(p3 + 1) 6 6 and Ω(p ± 1) > 3: the smallest one is 433373. However, we
claim that there is no prime p that also satisfies the condition p 6≡ ±3, ±13 (mod 40).
If p ≡ 1, 9, 17, 33 (mod 40) then p − 1 is divisible by 8 and thus Ω(p − 1) > 4. Similarly,
if p ≡ 7, 23, 31, 39 (mod 40) then p + 1 is divisible by 24, and p 6= 23 since we need
Ω(p − 1) = 3, so Ω(p + 1) > 5. But we have already noted that Ω(p2 − p + 1) > 2,
whence Ω(p3 + 1) > 7. Finally, suppose p ≡ 11, 19, 29 (mod 40). These cases are similar,
so let us assume p ≡ 11 (mod 40). Here (p − 1)/10 is a prime and p + 1 is divisible by
12, so Ω(p + 1) > 4 since p 6= 11. Since Ω(p3 + 1) 6 6, it follows that (p + 1)/12 and
(p2 − p + 1)/3 are both primes. Now p6 ≡ 1 (mod 7), so one of (p − 1)/10, p2 + p + 1,
(p + 1)/12 or (p2 − p + 1)/3 must be equal to 7, but it is easy to see that this is not
possible. For example, if (p − 1)/10 = 7 then p = 71 does not satisfy the required
congruence condition.
5.2. Proof of Theorem 9. Recall that cr(G) = l(G)/λ(G) is the chain ratio of G. In this
section we prove Theorem 9, which states that cr(G) > 5/4 for every finite non-abelian
simple group G, with equality if and only if G ∈ G (see (3)). We partition the proof into
several cases.
Lemma 5.5. Theorem 9 holds if G is a sporadic or alternating group.
Proof. First assume G is a sporadic group. The length and depth of G is given in [10,
Tables III and IV] and [8, Table 2], respectively, and we immediately deduce that cr(G) >
3/2, with equality if and only if G = J1 .
Now assume G = An is an alternating group. The length of G is given in (5) and [8,
Theorem 2] states that λ(G) 6 23. One checks that this bound is sufficient if n > 23. For
example, if n = 23 then l(G) = 34 − 4 − 1 = 29 and thus cr(G) > 29/23 > 5/4. For n < 23
we can use Magma to show that λ(G) 6 6, with equality if and only if n = 16. In view of
the above formula for l(G), we deduce that cr(G) > 5/4 if n > 8. For the smallest values
of n, we get cr(A5 ) = 4/3, cr(A6 ) = 5/4 and cr(A7 ) = 3/2.
Lemma 5.6. Theorem 9 holds if G is an exceptional group of Lie type.
Proof. Let G be a finite simple exceptional group of Lie type over Fq, where q = p^k and p is a prime. Let B be a Borel subgroup of G and let r be the twisted Lie rank of G. Then
   l(G) ≥ Ω(|B|) + r   (10)
and [8, Theorem 4] gives λ(G) ≤ 3Ω(k) + 36 if G ≠ 2B2(q).
First assume G = E8 (q). Here |B| = q^120 (q − 1)^8, so
    l(G) ≥ 120k + 8 > (5/4)(3 log2 k + 36) ≥ (5/4)(3Ω(k) + 36) ≥ (5/4)λ(G)
and the result follows. The case E7 (q) is handled in exactly the same way, and similarly E6^ε (q) and F4 (q) with k ≥ 2. Suppose G = F4 (p), so l(G) ≥ 28 by (10). If p = 2 then 2F4 (2) < G is maximal and λ(2F4 (2)) = 5, so λ(G) ≤ 6 and the result follows. Similarly, if p is odd then
    λ(F4 (p)) ≤ λ(2.Ω9 (p)) + 1 ≤ λ(Ω9 (p)) + 2
and one of A10 , S10 or A11 is a maximal subgroup of Ω9 (p) (see [6, Table 8.59]). Therefore λ(Ω9 (p)) ≤ 7, so λ(G) ≤ 9 and once again we deduce that cr(G) > 5/4. If G = E6^ε (p) then F4 (p) < G is maximal, so the previous argument yields λ(G) ≤ 10 and the result quickly follows.
Next assume G = G2 (q)′. If q = 2 then G ≅ U3 (3) and one checks that λ(G) = 4 and l(G) = 7. Now assume q > 2. Since |B| = q^6 (q − 1)^2 it follows that l(G) ≥ 6k + 4 and by considering a chain of subfield subgroups we deduce that λ(G) ≤ Ω(k) + λ(G2 (p)). If p ≥ 5 then G2 (2) < G2 (p) is maximal, so λ(G2 (p)) ≤ 6 and it is easy to check that the same bound holds if p = 2 or 3. Therefore λ(G) ≤ Ω(k) + 6 and we deduce that
    l(G) ≥ 6k + 4 > (5/4)(log2 k + 6) ≥ (5/4)(Ω(k) + 6) ≥ (5/4)λ(G).
If G = 3D4 (q) then G2 (q) < G is maximal and thus λ(G) ≤ Ω(k) + 7. In addition, |B| = q^12 (q^3 − 1)(q − 1), so l(G) ≥ 12k + 2 and the result follows.
To complete the proof of the lemma, we may assume G = 2F4 (q)′, 2G2 (q) or 2B2 (q). Suppose G = 2F4 (q)′, so q = 2^k with k odd. If k = 1 then λ(G) = 4 and l(G) ≥ 13 since G has a soluble maximal subgroup of the form 2.[2^8].5.4. Similarly, if k > 1 then λ(G) ≤ Ω(k) + 5 (see the proof of [8, Theorem 4]), l(G) ≥ 12k + 2 and these bounds are sufficient. The case G = 2G2 (q)′, where q = 3^k with k odd, is very similar. If k = 1 then G ≅ L2 (8), so λ(G) = 3 and l(G) = 5. If k > 1, then the proof of [8, Theorem 4] gives λ(G) ≤ Ω(k) + 4 and we have l(G) ≥ 3k + 2 since |B| = q^3 (q − 1). It is easy to check that these bounds are sufficient.
Finally, let us assume G = 2B2 (q), where q = 2^k with k ≥ 3 odd. Since |B| = q^2 (q − 1), it follows that l(G) ≥ 2k + 1 + Ω(q − 1) ≥ 2k + 2. By [8, Theorem 4], we also have
    λ(G) ≤ Ω(k) + 1 + Ω(q − 1) < Ω(k) + k + 1.
Therefore,
    l(G) ≥ 2k + 2 > (5/4)(log2 k + k + 1) > (5/4)λ(G)
as required.
Lemma 5.7. Theorem 9 holds if G is a classical group and G ≇ L2 (q).
Proof. Let G be a finite simple classical group over Fq and r be the twisted rank of G. As
before, write q = pk with p a prime. Let B be a Borel subgroup of G and recall that (10)
holds. Our initial aim is to reduce the problem to groups of small rank. To do this, we
will consider each family of classical groups in turn.
First assume G = Lr+1 (q). We claim that cr(G) > 5/4 if r ≥ 9. To see this, first observe that
    l(G) ≥ Ω(|B|) + r ≥ (1/2)kr(r + 1) + r
and λ(G) ≤ 3Ω(k) + 36 by [8, Theorem 4]. For r ≥ 9, it is routine to check that
    l(G) ≥ (1/2)kr(r + 1) + r > (5/4)(3 log2 k + 36) ≥ (5/4)λ(G),
which justifies the claim. In a similar fashion, we can reduce the problem to r ≤ 6 when G = PSp2r (q), Ω2r+1 (q) or PΩ+2r (q); r ≤ 5 when G = PΩ−2r+2 (q); and r ≤ 4 for G = U2r (q). Finally, suppose G = U2r+1 (q). Here l(G) ≥ kr(2r + 1) + r and [8, Theorem 4] states that λ(G) ≤ 3Ω(k) + 36 if q or k is odd. If q = 2^k and k is even, then the same theorem gives
    λ(G) ≤ 3Ω(k) + 35 + 2Ω(2^{2^a} + 1),
where k = 2^a b and b is odd. Since Ω(2^{2^a} + 1) ≤ k, it follows that λ(G) ≤ 3Ω(k) + 2k + 35 for all possible values of q and k, and one checks that
    l(G) ≥ kr(2r + 1) + r > (5/4)(3 log2 k + 2k + 35) ≥ (5/4)(3Ω(k) + 2k + 35) ≥ (5/4)λ(G)
if r ≥ 5.
Therefore, in order to complete the proof of the lemma, we may assume that we are in one of the following cases:
(a) G = Ω2r+1 (q) with 3 ≤ r ≤ 6 and q odd;
(b) G = PSp2r (q) with 2 ≤ r ≤ 6;
(c) G = PΩ+2r (q) with 4 ≤ r ≤ 6;
(d) G = PΩ−2r+2 (q) with 3 ≤ r ≤ 5;
(e) G = Lr+1 (q) with 2 ≤ r ≤ 8;
(f) G = U2r (q) with 2 ≤ r ≤ 4;
(g) G = U2r+1 (q) with 1 ≤ r ≤ 4.
Let us start by handling case (a). By considering a chain of subfield subgroups, we see that λ(G) ≤ 2Ω(k) + λ(Ω2r+1 (p)). In addition, the proof of [8, Theorem 4] implies that λ(Ω2r+1 (p)) ≤ 4 + λ(Sn ) for some n ≤ 2r + 3 ≤ 15. One checks that λ(Sn ) ≤ 6 for n ≤ 15, hence λ(G) ≤ 2Ω(k) + 10. Now |B| = (1/2)(q − 1)^r q^{r^2}, so (10) yields l(G) ≥ kr^2 + 2r − 1 and one checks that
    kr^2 + 2r − 1 > (5/4)(2 log2 k + 10)
for all possible values of k and r. The result follows.
Next consider (b). First assume p = 2, in which case λ(G) ≤ Ω(k) + λ(Sp2r (2)) and λ(Sp2r (2)) ≤ 4 + λ(Sn ) for some n ≤ 14 (see the proof of [8, Theorem 4]). Therefore, λ(G) ≤ Ω(k) + 10. Since l(G) ≥ kr^2 + r by (10), the result follows unless (r, k) = (3, 1),
or if r = 2 and k ≤ 3. If (r, k) = (2, 1) then G ≅ A6 and cr(G) = 5/4. In each of the remaining cases we have λ(G) ≤ 5 and cd(G) ≥ 2, which implies the desired bound. Now assume p is odd, so l(G) ≥ kr^2 + 2r − 1 and the proof of [8, Theorem 4] yields
    λ(G) ≤ 2Ω(k) + λ(PSp2r (p)) ≤ 2Ω(k) + 8 + λ(Sr ) ≤ 2Ω(k) + 13.
This gives cr(G) > 5/4 unless (r, k) = (3, 1), or if r = 2 and k ≤ 4. If G = PSp6 (p) then
    G > L2 (p^3).3 > L2 (p^3) > L2 (p)
is unrefinable, so λ(G) ≤ 7 and the result follows since cd(G) ≥ 2. Now assume r = 2 and k ≤ 4. Note that |B| = (1/2)q^4 (q − 1)^2, so Ω(|B|) = 4k + 2Ω(q − 1) − 1. If k = 4 then λ(G) ≤ 4 + λ(PSp4 (p)) and we note that one of A6 , S6 or S7 is a maximal subgroup of PSp4 (p), so λ(PSp4 (p)) ≤ 7 and thus λ(G) ≤ 11. In addition, Ω(q − 1) ≥ 5, so l(G) ≥ 27 and the result follows. Similarly, if k = 2 or 3 then λ(G) ≤ 8 and l(G) ≥ 15. Finally, if k = 1 then λ(G) ≤ 7 and the result follows since cd(G) ≥ 2.
Now let us turn to case (c), so G = PΩ+2r (q) and r = 4, 5 or 6. First assume p = 2, in which case l(G) ≥ kr(r − 1) + r and λ(G) ≤ Ω(k) + λ(Ω+2r (2)). It is easy to check that λ(Ω+2r (2)) ≤ 9. For example, if r = 6 then there is an unrefinable chain
    Ω+12 (2) > Sp10 (2) > Ω−10 (2).2 > Ω−10 (2) > A12
and λ(A12 ) = 5, so λ(Ω+12 (2)) ≤ 9. Therefore, l(G) ≥ 12k + 4, λ(G) ≤ Ω(k) + 9 and one checks that these bounds are sufficient. Now assume p > 2. Here l(G) ≥ kr(r − 1) + 2r − 2 and λ(G) ≤ 3Ω(k) + λ(PΩ+2r (p)). One checks that λ(PΩ+2r (p)) ≤ 9. For instance, if r = 6 then there is an unrefinable chain
    PΩ+12 (p) > PSO11 (p) > Ω11 (p) > H
with H = A12 , S12 or A13 , and the claim follows since λ(H) ≤ 6. Therefore, l(G) ≥ 12k + 6, λ(G) ≤ 3Ω(k) + 9 and we conclude that cr(G) > 5/4. A very similar argument applies in case (d) and we omit the details.
Next consider case (e), so G = Lr+1 (q) and λ(G) ≤ 2Ω(k) + λ(Lr+1 (p)). Note that
    |B| = q^{r(r+1)/2} (q − 1)^r / (r + 1, q − 1).
Suppose r ∈ {3, 5, 7} is odd. Now Lr+1 (p) has a maximal subgroup of the form PSpr+1 (p) or PSpr+1 (p).2, and we noted that λ(PSpr+1 (p)) ≤ 13 in the analysis of case (b), whence λ(G) ≤ 2Ω(k) + 15. One now checks that the bound l(G) ≥ kr(r + 1)/2 + r from (10) is sufficient when r = 5 or 7. Now suppose r = 3. If q = 2 then cr(G) = 9/5, so we can assume q > 2, in which case l(G) ≥ 6k + 5. Now λ(L4 (p)) = 5 if p = 2 or 3, and λ(L4 (p)) ≤ 9 if p ≥ 5 (this follows from the fact that PSp4 (p).2 is a maximal subgroup of L4 (p)). Therefore, λ(G) ≤ 2Ω(k) + 9 and the result follows if k > 1. Finally suppose G = L4 (p) with p ≥ 3. If p = 3 then λ(G) = 5 and l(G) ≥ 11. Similarly, λ(G) ≤ 9 and l(G) ≥ 13 if p ≥ 5. The result follows.
Now let us assume G = Lr+1 (q) and r ∈ {2, 4, 6, 8}. First assume p is odd. There is an unrefinable chain Lr+1 (p) > PSOr+1 (p) > Ωr+1 (p), so λ(Lr+1 (p)) ≤ 12 and thus λ(G) ≤ 2Ω(k) + 12. Now l(G) ≥ kr(r + 1)/2 + 2r − 1 and the desired bound follows if r > 2. Now assume G = L3 (q). Since Ω3 (p) ≅ L2 (p) we deduce that λ(L3 (p)) ≤ 6, so λ(G) ≤ 2Ω(k) + 6 and one checks that the bound l(G) ≥ 3k + 3 is good enough if k > 2. If G = L3 (p^2) then l(G) ≥ 11 and λ(G) ≤ 8 since there is an unrefinable chain
    L3 (p^2) > L3 (p).2 > L3 (p) > PSO3 (p) > Ω3 (p).
Similarly, if G = L3 (p) then λ(G) ≤ 6 and we note that l(G) ≥ 10 if p ≥ 5 (this follows from Theorem 4). Finally, if G = L3 (3) then cr(G) = 8/3.
To complete the analysis of case (e), let us assume r ∈ {2, 4, 6, 8} and p = 2. Here l(G) ≥ kr(r + 1)/2 + r and λ(G) ≤ 2Ω(k) + λ(Lr+1 (2)). As noted in the proof of [8, Theorem 4], there is an unrefinable chain Lr+1 (2) > 2^r.Lr (2) > Lr (2) and one can check that λ(Lr (2)) ≤ 5, so λ(G) ≤ 2Ω(k) + 7. This gives the desired bound unless r = 2 and k ≤ 3. We can exclude the case k = 1 since L3 (2) ≅ L2 (7). For k ∈ {2, 3} we get λ(G) ≤ 4, l(G) ≥ 9 and the result follows.
To complete the proof of the lemma, it remains to handle the unitary groups of dimension at most 9 arising in cases (f) and (g). First consider (f), so G = U2r (q), r ∈ {2, 3, 4} and
    |B| = q^{r(2r−1)} (q^2 − 1)^r / (2r, q + 1).
By arguing as in the proof of [8, Theorem 4], we see that λ(G) ≤ λ(PSp2r (q)) + 2. If p = 2, it follows that
    λ(G) ≤ Ω(k) + 2 + λ(Sp2r (2)) ≤ Ω(k) + 12
(recall that λ(Sp2r (2)) ≤ 10). In view of (10) we have l(G) ≥ kr(2r − 1) + 2r − 1 and one checks that these bounds are sufficient unless r = 2 and k = 1, 2. Here we compute λ(U4 (4)) = λ(U4 (2)) = 5 and the result follows. Now assume p > 2. Here
    λ(G) ≤ 2Ω(k) + 2 + λ(PSp2r (p)) ≤ 2Ω(k) + 15
and (10) gives l(G) ≥ kr(2r − 1) + 4r − 2. These estimates give the result, unless r = 2 and k = 1, 2. There is an unrefinable chain
    U4 (p^2) > PSp4 (p^2).2 > PSp4 (p^2) > L2 (p^2) > L2 (p).2 > L2 (p)
and thus λ(G) ≤ 9 for G = U4 (p^2). Similarly, one checks that λ(G) ≤ 9 if G = U4 (p). In both cases l(G) ≥ 12 and the result follows.
Finally, let us consider case (g), where G = U2r+1 (q), r ∈ {1, 2, 3, 4} and
    |B| = q^{r(2r+1)} (q^2 − 1)^r / (2r + 1, q + 1).
First assume p > 2. Here
    λ(G) ≤ λ(Ω2r+1 (q)) + 2 ≤ 2Ω(k) + 2 + λ(Ω2r+1 (p)) ≤ 2Ω(k) + 12
and (10) gives l(G) ≥ kr(2r + 1) + 4r − 2. One checks that these bounds are sufficient unless r = 1 and k ≤ 5. Suppose G = U3 (p^k) with k ≤ 5. If k = 3 or 5 then λ(G) ≤ 2 + λ(U3 (p)) ≤ 8 and the result follows since l(G) ≥ 11. If k = 4 then l(G) ≥ 14 and there is an unrefinable chain
    U3 (p^4) > PSO3 (p^4) > Ω3 (p^4) > L2 (p^2).2 > L2 (p^2) > L2 (p).2 > L2 (p),
so λ(G) ≤ 10. Similarly, if k = 2 then λ(G) ≤ 8 and l(G) ≥ 11. Finally, suppose G = U3 (p). If p ≥ 7 then U3 (p) > PSO3 (p) > Ω3 (p) is unrefinable, so λ(G) ≤ 6. It is easy to check that the same bound holds when p = 3 or 5, and the desired result now follows since cd(G) ≥ 2.
Now suppose p = 2. If k is odd then by considering a chain of subfield subgroups we get λ(G) ≤ 2Ω(k) + λ(U2r+1 (2)) and one checks that λ(U2r+1 (2)) ≤ 6. For example, J3 < U9 (2) is maximal and λ(J3 ) = 5, so λ(U9 (2)) ≤ 6. Therefore, λ(G) ≤ 2Ω(k) + 6. Since l(G) ≥ kr(2r + 1) + 2r − 1, the result follows unless r = 1 and k = 3 (note that (r, k) ≠ (1, 1) since U3 (2) is soluble). A routine computation gives cr(U3 (8)) = 4.
Finally, suppose p = 2 and k is even. Here l(G) ≥ kr(2r + 1) + 3r − 1 and we recall that λ(G) ≤ 3Ω(k) + 2k + 35. These bounds are sufficient unless (r, k) = (3, 2), or r = 2 and k ∈ {2, 4, 6}, or if r = 1. If (r, k) = (3, 2) then G = U7 (4) has a maximal subgroup
3277:7 of depth 3, so λ(G) ≤ 4 and the result follows. Similarly, if r = 2 and k ∈ {2, 4, 6} then by considering an unrefinable chain through the maximal subgroup
    ((q^5 + 1)/((q + 1)(5, q + 1))):5,
we deduce that λ(G) ≤ 5 and the result follows since l(G) ≥ 10k + 5. Finally, let us assume G = U3 (2^k) with k even. Now G has a reducible maximal subgroup of the form
    ((q + 1)/(3, q + 1)).L2 (q)
and thus
    λ(G) ≤ λ(L2 (q)) + Ω(q + 1) − Ω((3, q + 1)) + 1 ≤ Ω(q^2 − 1) + Ω(k) + 2 − Ω((3, q + 1))
since λ(L2 (q)) ≤ Ω(q − 1) + Ω(k) + 1 by [8, Theorem 4]. We also have
    l(G) = Ω(q^2 − 1) + 3k + 1 − Ω((3, q + 1))
by [32, Theorem 1], and one checks that
    Ω(q^2 − 1) + 3k + 1 − Ω((3, q + 1)) > (5/4)(Ω(q^2 − 1) + Ω(k) + 2 − Ω((3, q + 1)))
if Ω(q^2 − 1) + 6 < 7k. Since q = 2^k we have Ω(q^2 − 1) + 6 < 2k + 6 ≤ 7k and the result follows.
Lemma 5.8. Theorem 9 holds if G ≅ L2 (q).
Proof. Write q = p^k with p a prime. If k = 1 then λ(G) ∈ {3, 4} by [8, Corollary 3.4], and we have l(G) ≥ λ(G) + 1, so cr(G) ≥ 5/4 and equality holds if and only if λ(G) = 4 and l(G) = 5 (that is, if and only if G ∈ G; see (3)). The result now follows by combining Theorems 4 and 5.1. For the remainder, we may assume k ≥ 2.
Suppose p = 2. Here l(G) = Ω(q − 1) + k + 1 and λ(G) ≤ Ω(q − 1) + Ω(k) + 1 by [32, Theorem 1] and [8, Theorem 4(i)]. Now
    Ω(q − 1) + k + 1 > (5/4)(Ω(q − 1) + Ω(k) + 1)
if and only if
    Ω(q − 1) + 5Ω(k) + 1 < 4k.    (11)
Since
    Ω(q − 1) + 5Ω(k) + 1 < k + 5 log2 k + 1,
it is routine to check that (11) holds for all k ≥ 2.
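Inequality (11) is also easy to verify numerically for small k; the following sketch (assuming Python with sympy, and checking only a finite range, so it is an illustration rather than a proof) confirms it for 2 ≤ k ≤ 40:

from sympy import factorint

def Omega(n):
    return sum(factorint(n).values())

for k in range(2, 41):
    q = 2**k
    # inequality (11): Omega(q - 1) + 5*Omega(k) + 1 < 4k
    assert Omega(q - 1) + 5 * Omega(k) + 1 < 4 * k, k
print("inequality (11) holds for 2 <= k <= 40")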
Now suppose p > 2 and observe that l(G) ≥ Ω(q − 1) + k by Lemma 2.5(ii). First assume k ≥ 3 is odd. By considering a chain of subfield subgroups (as in the proof of [8, Theorem 4]), we deduce that λ(G) ≤ Ω(k) + λ(L2 (p)). Therefore, λ(G) ≤ Ω(k) + 2 if p = 3, so
    l(G) ≥ Ω(q − 1) + k ≥ k + 2 > (5/4)(log3 k + 2) ≥ (5/4)(Ω(k) + 2) ≥ (5/4)λ(G)
as required. Similarly, if p ≥ 5 then l(G) ≥ k + 3, λ(G) ≤ Ω(k) + 4 and for k > 3 the result follows in the same way. If k = 3 then λ(G) ≤ 5 and Theorem 5.1 implies that cd(G) ≥ 2, so cr(G) > 5/4.
Finally, let us assume p > 2 and k is even. If q = 9 then G ≅ A6 and we have already noted that cr(G) = 5/4 in this case. Now assume q > 9, so Ω(q − 1) ≥ 4 and thus l(G) ≥ k + 4. Also observe that λ(G) ≤ 2Ω(k) + λ(L2 (p)). If p = 3 then k ≥ 4, λ(G) ≤ 2Ω(k) + 2 and one checks that
    k + 4 > (5/4)(2 log2 k + 2),
which gives the desired result.
which gives the desired result. Now assume p > 5. Here λ(G) 6 2Ω(k) + 4 and we have
5
k + 4 > (2 log2 k + 4)
4
if k > 8. If k ∈ {4, 6, 8} then Ω(q − 1) > 6, so l(G) > k + 6 and the result follows. Finally,
if k = 2 then λ(G) 6 6 (since PGL2 (p) < G is maximal) and cd(G) > 2 by Theorem 5.1,
so cr(G) > 5/4 as required.
This completes the proof of Theorem 9. Notice that Corollary 10 follows immediately.
Indeed, we have l(G) 6 a cd(G) if and only if cr(G) > a/(a − 1), so Theorem 9 implies
that a = 5 is the best possible constant.
5.3. Proof of Theorem 11. We begin by recording some immediate consequences of
Lemma 2.2.
Proposition 5.9. Let G be a finite group.
(i) If 1 = Gm ◁ Gm−1 ◁ · · · ◁ G1 ◁ G0 = G is a chain of subgroups of G, then cd(G) ≥ ∑_i cd(G_{i−1}/G_i).
(ii) If G = G1 × · · · × Gm , then cd(G) ≥ ∑_i cd(Gi ).
(iii) If T1 , . . . , Tm are the composition factors of G, listed with multiplicities, then cd(G) ≥ ∑_i cd(Ti ).
Note that in part (iii) above we have cd(Ti ) = 0 if Ti is abelian, so only the non-abelian composition factors contribute to ∑_i cd(Ti ). Now let T1 , . . . , Tm be the non-abelian composition factors of G (listed with multiplicities), and let
    ss(G) = ∏_{i=1}^{m} Ti
be their direct product.
Proposition 5.10. We have l(ss(G)) ≤ 5 cd(G) for every finite group G.
Proof. With the above notation we have
    l(ss(G)) = ∑_{i=1}^{m} l(Ti ).
By Corollary 10 we have l(Ti ) ≤ 5 cd(Ti ) for each i, and by combining this with Proposition 5.9(iii) we obtain
    l(ss(G)) ≤ ∑_{i=1}^{m} 5 cd(Ti ) = 5 ∑_{i=1}^{m} cd(Ti ) ≤ 5 cd(G)
as required.
By a semisimple group we mean a direct product of (non-abelian) finite simple groups.
Lemma 5.11. If G is a finite semisimple group, then l(Aut(G)) ≤ 2 l(G).
Proof. Write G = ∏_{i=1}^{m} Ti^{k_i}, where the Ti are pairwise non-isomorphic finite (non-abelian) simple groups and k_i ≥ 1. Then
    Aut(G) ≅ ∏_{i=1}^{m} Aut(Ti^{k_i}) ≅ ∏_{i=1}^{m} Aut(Ti ) ≀ S_{k_i}.
Hence Out(G) ≅ ∏_{i=1}^{m} Out(Ti ) ≀ S_{k_i}, so
    l(Out(G)) = ∑_{i=1}^{m} (k_i l(Out(Ti )) + l(S_{k_i})) ≤ ∑_{i=1}^{m} k_i (log2 |Out(Ti )| + 3/2),
where the last inequality follows from the main theorem of [10] on the length of the symmetric group. Using the well known orders of Out(T) for the finite simple groups T (for example, see [21, pp. 170–171]), it is easy to verify that log2 |Out(Ti )| + 3/2 ≤ l(Ti ) for all i. We conclude that
    l(Out(G)) ≤ ∑_{i=1}^{m} k_i l(Ti ) = l(G)
and thus l(Aut(G)) ≤ 2 l(G) as required.
We are now ready to prove Theorem 11. Let R(G) be the soluble radical of G and consider the semisimple group Soc(G/R(G)). Applying Proposition 5.10 to the group G/R(G) we obtain
    l(Soc(G/R(G))) ≤ l(ss(G/R(G))) ≤ 5 cd(G/R(G)).
It is well known that G/R(G) ≤ Aut(Soc(G/R(G))). Applying the inequality above with Lemma 5.11 we obtain
    l(G/R(G)) ≤ l(Aut(Soc(G/R(G)))) ≤ 2 l(Soc(G/R(G))) ≤ 10 cd(G/R(G)) ≤ 10 cd(G)
and the result follows. This completes the proof of Theorem 11.
Appendix A. On the number of prime divisors of p ± 1
by D.R. Heath-Brown
In this appendix, we prove the following result.
Theorem A.1. There are infinitely many primes p ≡ 5 (mod 72) for which
    Ω((p^2 − 1)/24) ≤ 7.
Hence there are infinitely many primes p for which
    max{Ω(p ± 1)} ≤ 8.
We begin by showing how the second claim follows from the first. For any prime p ≡ 5 (mod 72) one has 24 | (p^2 − 1). Indeed, for such primes one has (p − 1, 72) = 4 and (p + 1, 72) = 6. One necessarily has Ω((p − 1)/4) ≥ 1 when p > 5, so that if Ω((p^2 − 1)/24) ≤ 7 one must have Ω((p + 1)/6) ≤ 6. It then follows that Ω(p + 1) ≤ 8. The proof that Ω(p − 1) ≤ 8 is similar.
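The elementary reduction above is easy to check by machine. The following sketch (assuming Python with sympy; a purely illustrative finite search, not part of the proof) verifies the gcd computations for primes p ≡ 5 (mod 72) in a small range and lists a few small primes with max{Ω(p − 1), Ω(p + 1)} ≤ 8:

from math import gcd
from sympy import primerange, factorint

def Omega(n):
    return sum(factorint(n).values())

for p in primerange(5, 10**4):
    if p % 72 == 5:
        # for p = 5 (mod 72): (p - 1, 72) = 4 and (p + 1, 72) = 6, so 24 | p^2 - 1
        assert gcd(p - 1, 72) == 4 and gcd(p + 1, 72) == 6

examples = [p for p in primerange(5, 10**4)
            if max(Omega(p - 1), Omega(p + 1)) <= 8][:10]
print(examples)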
To handle the first statement of the theorem we use sieve methods, as described in the book by Halberstam and Richert [15], and in particular the weighted sieve, as in [15, Chapter 10]. To be specific, we apply [15, Theorem 10.2] to the set
    A = {(p^2 − 1)/24 : p ≡ 5 (mod 72), p ≤ x}
and the set P of all primes. The expected value of
    |A_d| := #{n ∈ A : d | n}
is Xω(d)/d, with X = Li(x)/24, and where ω(d) is a multiplicative function satisfying ω(p) = 0 if p = 2, 3, and ω(p) = 2 if p ≥ 5.
Condition (Ω1), see [15, p. 29], is then satisfied with A1 = 2, while condition (Ω2*(κ)), see [15, p. 252], holds with κ = 2 and a suitable numerical constant A2. Moreover |A_{p^2}| = O(x p^{-2}), which shows that condition (Ω3), see [15, p. 253], also holds, for an appropriate numerical constant A3. Finally we consider the condition (Ω(R(2, α))) given in [15, p. 219]. The primes p ≡ 5 (mod 72) for which d divides (p^2 − 1)/24 fall into ω(d) residue classes modulo 72d, so that (Ω(R(2, 1/2))) holds by an appropriate form of the Bombieri–Vinogradov theorem, as in [15, Lemma 3.5]. This verifies all the necessary conditions for Theorem 10.2 of [15], and the inequality (2.2) of [15, p. 278] is satisfied (with α = 1/2) for any constant µ > 4, if x is large enough.
Theorem 10.2 of [15] then tells us that there are ≫ X(log X)^{-2} elements n ∈ A which are "Pr-numbers" (that is to say, one has Ω(n) ≤ r), provided that
    r > 2u − 1 + [ 2 ∫_u^v σ2(v(α − 1/t)) (1 − u/t) dt/t ] / [ 1 − η2(αv) ].
Finally, we refer to the calculations of Porter [29], and in particular the last 3 lines of [29, p. 420], according to which it will suffice to have r > 6.7 if one takes u = 2.2 and v = 22. Since we then have α^{-1} < u < v and αv = 11 > ν2 = 4.42 . . ., by Porter [29, Table 2], the final conditions (2.3) of [15, Theorem 10.2] are satisfied, and our theorem follows.
References
[1] K. Alladi, R. Solomon and A. Turull, Finite simple groups of bounded subgroup chain length, J. Algebra
231 (2000), 374–386.
[2] M. Aschbacher, On the maximal subgroups of the finite classical groups, Invent. Math. 76 (1984),
469–514.
[3] L. Babai, On the length of subgroup chains in the symmetric group, Comm. Algebra 14 (1986), 1729–
1736.
[4] R.W. Baddeley, Primitive permutation groups with a regular nonabelian normal subgroup, Proc. London Math. Soc. 67 (1993), 547–595.
[5] G.W. Bell, On the cohomology of the finite special linear groups, I, J. Algebra 54 (1978), 216–238.
[6] J.N. Bray, D.F. Holt and C.M. Roney-Dougal, The maximal subgroups of the low-dimensional finite
classical groups, London Math. Soc. Lecture Note Series, vol. 407, Cambridge University Press, 2013.
[7] B. Brewster, M. Ward and I. Zimmermann, Finite groups having chain difference one, J. Algebra 160
(1993), 179–191.
[8] T.C. Burness, M.W. Liebeck and A. Shalev, The depth of a finite simple group, Proc. Amer. Math.
Soc., to appear.
[9] T.C. Burness, M.W. Liebeck and A. Shalev, The length and depth of algebraic groups, submitted
(arXiv:1712.08214).
[10] P.J. Cameron, R. Solomon and A. Turull, Chains of subgroups in symmetric groups, J. Algebra 127
(1989), 340–352.
[11] A.M. Cohen, M.W. Liebeck, J. Saxl, and G.M. Seitz, The local maximal subgroups of exceptional
groups of Lie type, finite and algebraic, Proc. London Math. Soc. 64 (1992), 21–48.
[12] J.H. Conway, R.T. Curtis, S.P. Norton, R.A. Parker and R.A. Wilson, Atlas of Finite Groups, Oxford
University Press, 1985.
[13] D.A. Craven, Alternating subgroups of exceptional groups of Lie type, Proc. Lond. Math. Soc. 115 (2017),
449–501.
[14] A. Gamburd and I. Pak, Expansion of product replacement graphs, Combinatorica 26 (2006), 411–429.
[15] H. Halberstam and H.-E. Richert, Sieve methods, LMS Monographs, Academic Press, London, 1974.
[16] K. Harada, Finite simple groups with short chains of subgroups, J. Math. Soc. Japan 20 (1968),
655-672.
[17] M.A. Hartenstein and R.M. Solomon, Finite groups of chain difference one, J. Algebra 229 (2000),
601–622.
[18] K. Iwasawa, Über die endlichen Gruppen und die Verbände ihrer Untergruppen, J. Fac. Sci. Imp. Univ.
Tokyo. Sect. I. 4 (1941), 171–199.
[19] Z. Janko, Finite groups with invariant fourth maximal subgroups, Math. Z. 82 (1963), 82–89.
[20] Z. Janko, Finite simple groups with short chains of subgroups, Math. Z. 84 (1964), 428–437.
[21] P. Kleidman and M.W. Liebeck, The subgroup structure of the finite classical groups, London Math.
Soc. Lecture Note Series, vol. 129, Cambridge University Press, 1990.
[22] J. Kohler, A note on solvable groups, J. London Math. Soc. 43 (1968), 235–236.
[23] M.W. Liebeck, C.E. Praeger and J. Saxl, A classification of the maximal subgroups of the finite
alternating and symmetric groups, J. Algebra 111 (1987), 365–383.
[24] M.W. Liebeck, C.E. Praeger and J. Saxl, On the O’Nan-Scott Theorem for finite primitive permutation
groups, J. Austral. Math. Soc. 44 (1988), 389–396.
[25] M.W. Liebeck, J. Saxl, and G.M. Seitz, Subgroups of maximal rank in finite exceptional groups of Lie
type, Proc. London Math. Soc. 65 (1992), 297–325.
[26] M.W. Liebeck and G.M. Seitz, A survey of maximal subgroups of exceptional groups of Lie type,
in Groups, combinatorics & geometry (Durham, 2001), 139–146, World Sci. Publ., River Edge, NJ,
2003.
[27] M.W. Liebeck and G.M. Seitz, On finite subgroups of exceptional algebraic groups, J. reine angew.
Math. 515 (1999), 25–72.
[28] J. Petrillo, On the length of finite simple groups having chain difference one, Arch. Math. 88 (2007),
297–303.
[29] J.W. Porter, Some numerical results in the Selberg sieve method, Acta Arith. 20 (1972), 417–421.
[30] G.M. Seitz, R. Solomon and A. Turull, Chains of subgroups in groups of Lie type, II, J. London Math.
Soc. 42 (1990), 93–100.
[31] J. Shareshian and R. Woodroofe, A new subgroup lattice characterization of finite solvable groups, J.
Algebra 351 (2012), 448–458.
[32] R. Solomon and A. Turull, Chains of subgroups in groups of Lie type, I, J. Algebra 132 (1990),
174–184.
[33] R. Solomon and A. Turull, Chains of subgroups in groups of Lie type, III. J. London Math. Soc. 44
(1991), 437–444.
[34] R.A. Wilson, Maximal subgroups of sporadic groups, in Finite Simple Groups: Thirty Years of the
Atlas and Beyond, 57–72, Contemp. Math., 694, Amer. Math. Soc., Providence, RI, 2017.
[35] R.A. Wilson, The uniqueness of PSU3 (8) in the Monster, Bull. London Math. Soc. 49 (2017), 877–880.
T.C. Burness, School of Mathematics, University of Bristol, Bristol BS8 1TW, UK
E-mail address: [email protected]
D.R. Heath-Brown, Mathematical Institute, Radcliffe Observatory Quarter, Woodstock
Road, Oxford OX2 6GG, UK
E-mail address: [email protected]
M.W. Liebeck, Department of Mathematics, Imperial College, London SW7 2BZ, UK
E-mail address: [email protected]
A. Shalev, Institute of Mathematics, Hebrew University, Jerusalem 91904, Israel
E-mail address: [email protected]
KNAPSACK PROBLEMS FOR WREATH PRODUCTS
arXiv:1709.09598v2 [] 2 Oct 2017
MOSES GANARDI, DANIEL KÖNIG, MARKUS LOHREY, AND GEORG ZETZSCHE
Abstract. In recent years, knapsack problems for (in general non-commutative) groups
have attracted attention. In this paper, the knapsack problem for wreath products
is studied. It turns out that decidability of knapsack is not preserved under wreath
product. On the other hand, the class of knapsack-semilinear groups, where solution sets of knapsack equations are effectively semilinear, is closed under wreath product. As
a consequence, we obtain the decidability of knapsack for free solvable groups. Finally,
it is shown that for every non-trivial abelian group G, knapsack (as well as the related
subset sum problem) for the wreath product G ≀ Z is NP-complete.
1. Introduction
In [23], Myasnikov, Nikolaev, and Ushakov began the investigation of classical discrete
optimization problems, which are formulated over the integers, for arbitrary (possibly noncommutative) groups. The general goal of this line of research is to study to what extent
results from the commutative setting can be transferred to the non-commutative setting.
Among other problems, Myasnikov et al. introduced for a finitely generated group G the
knapsack problem and the subset sum problem. The input for the knapsack problem is a
sequence of group elements g1 , . . . , gk , g ∈ G (specified by finite words over the generators
of G) and it is asked whether there exists a solution (x1 , . . . , xk ) ∈ Nk of the equation
g1x1 · · · gkxk = g. For the subset sum problem one restricts the solution to {0, 1}k . For
the particular case G = Z (where the additive notation x1 · g1 + · · · + xk · gk = g is usually
preferred) these problems are NP-complete (resp., TC0 -complete) if the numbers g1 , . . . , gk , g
are encoded in binary representation [11, 8] (resp., unary notation [3]).
Another motivation is that decidability of knapsack for a group G implies that the membership problem for polycyclic subgroups of G is decidable. This follows from the well-known
fact that every polycyclic group A has a generating set {a1 , . . . , ak } such that every element
of A can be written as an1 1 · · · ank k for n1 , . . . , nk ∈ N, see e.g. [27, Chapter 9].
In [23], Myasnikov et al. encode elements of the finitely generated group G by words
over the group generators and their inverses, which corresponds to the unary encoding of
integers. There is also an encoding of words that corresponds to the binary encoding of
integers, so called straight-line programs, and knapsack problems under this encodings have
been studied in [18]. In this paper, we only consider the case where input words are explicitly
represented. Here is a (non-complete) list of known results concerning knapsack and subset
sum problems:
• Subset sum and knapsack can be solved in polynomial time for every hyperbolic
group [23]. In [4] this result was extended to free products of any number of hyperbolic groups and finitely generated abelian groups.
• For every virtually nilpotent group, subset sum belongs to NL (nondeterministic
logspace) [12]. On the other hand, there are nilpotent groups of class 2 for which
knapsack is undecidable. Concrete examples are direct products of sufficiently many copies of the discrete Heisenberg group H3 (Z) [12], and free nilpotent groups of class 2 and sufficiently high rank [22].
• Knapsack for the discrete Heisenberg group H3 (Z) is decidable [12]. In particular, together with the previous point it follows that decidability of knapsack is not preserved under direct products.
• For the following groups, subset sum is NP-complete (whereas the word problem can be solved in polynomial time): free metabelian non-abelian groups of finite rank, the wreath product Z ≀ Z, Thompson’s group F , the Baumslag-Solitar group BS(1, 2) [23], and every polycyclic group that is not virtually nilpotent [26].
• Knapsack is decidable for every co-context-free group (a group is co-context-free if the set of all words over the generators that do not represent the group identity is a context-free language) [12].
• Knapsack belongs to NP for every virtually special group [18]. A group is virtually special if it is a finite extension of a subgroup of a graph group. For graph groups (also known as right-angled Artin groups) a complete classification of the complexity of knapsack was obtained in [19]: If the underlying graph contains an induced path or cycle on 4 nodes, then knapsack is NP-complete; in all other cases knapsack can be solved in polynomial time (even in LogCFL).
• Decidability of knapsack is preserved under finite extensions, HNN-extensions over finite associated subgroups and amalgamated free products over finite subgroups [18].
1991 Mathematics Subject Classification. 20F10.
Key words and phrases. knapsack, wreath products, decision problems in group theory.
The fourth author is supported by a fellowship within the Postdoc-Program of the German Academic Exchange Service (DAAD) and by Labex DigiCosme, Univ. Paris-Saclay, project VERICONISS.
In this paper, we study the knapsack problem for wreath products. The wreath product
is a fundamental construction in group theory and semigroup theory, see Section 4 for the
definition. An important application of wreath products in group theory is the Magnus
embedding theorem [20], which allows to embed the quotient group Fk /[N, N ] into the
wreath product Zk ≀ (Fk /N ), where Fk is a free group of rank k and N is a normal subgroup
of Fk . From the algorithmic point of view, wreath products have some nice properties: The
word problem for a wreath product G ≀ H is AC0 -reducible to the word problems for the
factors G and H, and the conjugacy problem for G ≀ H is TC0 -reducible to the conjugacy
problems for G and H and the so called power problem for H [21].
As in the case of direct products, it turns out that decidability of knapsack is not preserved
under wreath products: For this we consider direct products of the form H3 (Z) × Zℓ , where
H3 (Z) is the discrete 3-dimensional Heisenberg group. It was shown in [12] that for every
ℓ ≥ 0, knapsack is decidable for H3 (Z) × Zℓ . We prove in Section 6 that for every non-trivial
group G and every sufficiently large ℓ, knapsack for G ≀ (H3 (Z) × Zℓ ) is undecidable.
By the above discussion, we need stronger assumptions on G and H to obtain decidability
of knapsack for G≀H. We exhibit a very weak condition on G and H, knapsack-semilinearity,
which is sufficient for decidability of knapsack for G ≀ H. A finitely generated group G is
knapsack-semilinear if for every knapsack equation, the set of all solutions (a solution can
be seen as a vector of natural numbers) is effectively semilinear.
Clearly, for every knapsack-semilinear group, the knapsack problem is decidable. While
the converse is not true, the class of knapsack-semilinear groups is extraordinarily wide.
The simplest examples are finitely generated abelian groups, but it also includes the rich
class of virtually special groups [18], all hyperbolic groups (see Appendix A), and all co-context-free groups [12]. Furthermore, it is known to be closed under direct products (an
easy observation), finite extensions, HNN-extensions over finite associated subgroups and
amalgamated free products over finite subgroups (the last three closure properties are simple
extensions of the transfer theorems in [18]). In fact, the only non-knapsack-semilinear groups
with a decidable knapsack problem that we are aware of are the groups H3 (Z) × Zn .
We prove in Section 7 that the class of knapsack-semilinear groups is closed under wreath
products. As a direct consequence of the Magnus embedding, it follows that knapsack is
decidable for every free solvable group. Recall that, in contrast, knapsack for free nilpotent
groups is in general undecidable [22].
Finally, we consider the complexity of knapsack for wreath products. We prove that
for every non-trivial finitely generated abelian group G, knapsack for G ≀ Z is NP-complete
(the hard part is membership in NP). This result includes important special cases like for
instance the lamplighter group Z2 ≀ Z and Z ≀ Z. Wreath products of the form G ≀ Z with G
abelian turn out to be important in connection with subgroup distortion [1]. Our proof also
shows that for every non-trivial finitely generated abelian group G, the subset sum problem
for G ≀ Z is NP-complete. In [23] this result is only shown for infinite abelian groups G.
2. Preliminaries
We assume standard notions concerning groups. A group G is finitely generated if there
exists a finite subset Σ ⊆ G such that every element g ∈ G can be written as g = a1 a2 · · · an
with a1 , a2 , . . . , an ∈ Σ. We also say that the word a1 a2 · · · an ∈ Σ∗ evaluates to g (or
represents g). The set Σ is called a finite generating set of G. We always assume that Σ
is symmetric in the sense that a ∈ Σ implies a^{-1} ∈ Σ. An element g ∈ G is called a torsion element if there is an n ≥ 1 with g^n = 1. The smallest such n is the order of g and is denoted ord(g). If g is not a torsion element, we set ord(g) = ∞.
A set of vectors A ⊆ N^k is linear if there exist vectors v0 , . . . , vn ∈ N^k such that
    A = {v0 + λ1 · v1 + · · · + λn · vn | λ1 , . . . , λn ∈ N}.
The tuple of vectors (v0 , . . . , vn ) is a linear representation of A. A set A ⊆ N^k is semilinear if it is a finite union of linear sets A1 , . . . , Am . A semilinear representation of A is a list of linear representations for the linear sets A1 , . . . , Am . It is well-known that the semilinear subsets of N^k are exactly the sets definable in Presburger arithmetic. These are those sets that can be defined with a first-order formula ϕ(x1 , . . . , xk ) over the structure (N, 0, +, ≤) [7]. Moreover, the transformations between such a first-order formula and an equivalent semilinear representation are effective. In particular, the semilinear sets are effectively closed under Boolean operations.
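As a small illustration of how a linear representation can be used algorithmically, the following sketch (plain Python, brute force over bounded coefficients; not an efficient Presburger procedure) tests membership in a linear subset of N^k given by a base vector v0 and period vectors v1 , . . . , vn :

from itertools import product

def in_linear_set(v0, periods, target):
    k = len(target)
    periods = [v for v in periods if any(v)]   # zero periods contribute nothing
    # each coefficient of a nonzero period v is bounded by target[j] // v[j] for some j with v[j] > 0
    bounds = [min(target[j] // v[j] for j in range(k) if v[j] > 0) for v in periods]
    for lam in product(*(range(b + 1) for b in bounds)):
        vec = [v0[j] + sum(l * v[j] for l, v in zip(lam, periods)) for j in range(k)]
        if vec == list(target):
            return True
    return False

# the linear set {(1,0) + a*(2,1) + b*(0,3) : a, b in N} contains (5, 8) via a = 2, b = 2
print(in_linear_set((1, 0), [(2, 1), (0, 3)], (5, 8)))   # True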
3. Knapsack for groups
Let G be a finitely generated group with the finite symmetric generating set Σ. Moreover, let V be a set of formal variables that take values from N. For a subset U ⊆ V , we use N^U to denote the set of maps ν : U → N, which we call valuations. An exponent expression over G is a formal expression of the form E = v0 u1^{x1} v1 u2^{x2} v2 · · · uk^{xk} vk with k ≥ 0 and words ui , vi ∈ Σ*. Here, the variables do not have to be pairwise distinct. If every variable in an exponent expression occurs at most once, it is called a knapsack expression. Let V_E = {x1 , . . . , xk } be the set of variables that occur in E. For a valuation ν ∈ N^U such that V_E ⊆ U (in which case we also say that ν is a valuation for E), we define ν(E) = v0 u1^{ν(x1)} v1 u2^{ν(x2)} v2 · · · uk^{ν(xk)} vk ∈ Σ*. We say that ν is a solution of the equation E = 1 if ν(E) evaluates to the identity element 1 of G. With Sol(E) we denote the set of all solutions ν ∈ N^{V_E} of E. We can view Sol(E) as a subset of N^k. The length of E is defined as |E| = |v0 | + ∑_{i=1}^{k} (|ui | + |vi |), whereas k is its depth. If the length of a knapsack expression is not needed, we will write an exponent expression over G also as E = h0 g1^{x1} h1 g2^{x2} h2 · · · gk^{xk} hk where gi , hi ∈ G. We define solvability of exponent equations over G, ExpEq(G) for short, as the following decision problem:
Input: A finite list of exponent expressions E1 , . . . , En over G.
Question: Is ⋂_{i=1}^{n} Sol(Ei ) non-empty?
The knapsack problem for G, KP(G) for short, is the following decision problem:
Input: A single knapsack expression E over G.
Question: Is Sol(E) non-empty?
We also consider the uniform knapsack problem for powers
    G^m = G × · · · × G   (m many factors).
We denote this problem with KP(G*). Formally, it is defined as follows:
Input: A number m ≥ 0 (represented in unary notation) and a knapsack expression E over the group G^m.
Question: Is Sol(E) non-empty?
It turns out that the problems KP(G∗ ) and ExpEq(G) are interreducible:
Proposition 3.1. For every finitely generated group G, KP(G∗ ) is decidable if and only if
ExpEq(G) is decidable.
Proof. Clearly, every instance of KP(G*) can be translated to an instance of ExpEq(G) by projecting onto the m factors of a power G^m. For the converse direction, assume that KP(G*) is decidable. Then in particular, G has a decidable word problem. Let Ej = h_{0,j} g_{1,j}^{x_{1,j}} h_{1,j} · · · g_{k,j}^{x_{k,j}} h_{k,j} be an exponent expression over G for every j ∈ [1, m]. By adding dummy powers of the form 1^x we may assume that the Ej have the same depth k. We distinguish two cases.
Case 1. G is a torsion group. Since G has a decidable word problem, we can compute ℓ ∈ N so that g_{i,j}^ℓ = 1 for every i ∈ [1, k] and j ∈ [1, m]. Then there is a solution to the exponent equation system if and only if there is a solution ν with 0 ≤ ν(x) < ℓ for every variable x. Hence, solvability is clearly decidable.
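The bounded search of Case 1 can be spelled out as follows. This is only a sketch: the group is assumed to be given by a multiplication function mul and an identity element e, with equality of normal forms decidable via ==, which is an assumption on the chosen representation of group elements:

from itertools import product

def solvable_bounded(expressions, variables, ell, mul, e):
    # each expression is a list of pairs (g, var); var is None for a constant factor
    def ev(expr, nu):
        r = e
        for g, var in expr:
            for _ in range(1 if var is None else nu[var]):
                r = mul(r, g)
        return r
    return any(all(ev(E, dict(zip(variables, vals))) == e for E in expressions)
               for vals in product(range(ell), repeat=len(variables)))

# example in Z/6Z (written additively): 2*x + 4 = 0 has the solution x = 1
mul6 = lambda a, b: (a + b) % 6
print(solvable_bounded([[(2, 'x'), (4, None)]], ['x'], 6, mul6, 0))   # True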
Case 2. There is some a ∈ G with ord(a) = ∞. We first rename the variables in E1 , . . . , Em such that every variable occurs at most once in the entire system of expressions. Let E′1 , . . . , E′m be the resulting system of knapsack expressions and let U be the set of variables that occur in E′1 , . . . , E′m. We can compute an equivalence relation ∼ ⊆ U × U such that the system E1 = 1, . . . , Em = 1 has a solution if and only if the system E′1 = 1, . . . , E′m = 1 has a solution ν with ν(x) = ν(x′) for x ∼ x′. We can equip U with a linear order ≤ so that if x occurs left of x′ in some E′j, then x < x′.
Now for each pair (x, x′) ∈ U × U with x ∼ x′ and x < x′, we add the knapsack expression a^x (a^{-1})^{x′}. This yields knapsack expressions E′1 , . . . , E′_{m+ℓ} for some ℓ ≥ 0 such that E′1 = 1, . . . , E′_{m+ℓ} = 1 is solvable if and only if E1 = 1, . . . , Em = 1 is solvable. Moreover, whenever x occurs to the left of x′ in some expression, then x < x′.
By padding the expressions with trivial powers, we turn E′1 , . . . , E′_{m+ℓ} into expressions E′′1 , . . . , E′′_{m+ℓ} that all exhibit the same variables (in the same order). Now, it is easy to turn E′′1 , . . . , E′′_{m+ℓ} into a single knapsack expression over G^{m+ℓ}.
Note that the equation v0 u1^{x1} v1 u2^{x2} v2 · · · uk^{xk} vk = 1 is equivalent to
    (v0 u1 v0^{-1})^{x1} (v0 v1 u2 v1^{-1} v0^{-1})^{x2} · · · (v0 · · · v_{k−1} uk v_{k−1}^{-1} · · · v0^{-1})^{xk} (v0 · · · vk ) = 1.
Hence, it suffices to consider exponent expressions of the form u1^{x1} u2^{x2} · · · uk^{xk} v.
The group G is called knapsack-semilinear if for every knapsack expression E over G, the
set Sol(E) is a semilinear set of vectors and a semilinear representation can be effectively
computed from E. The following classes of groups only contain knapsack-semilinear groups:
• virtually special groups [17]: these are finite extensions of subgroups of graph groups
(aka right-angled Artin groups). The class of virtually special groups is very rich.
It contains all Coxeter groups, one-relator groups with torsion, fully residually free
groups, and fundamental groups of hyperbolic 3-manifolds.
• hyperbolic groups: see Appendix A
• co-context-free groups [12], i.e., groups where the set of all words over the generators
that do not represent the identity is a context-free language. Lehnert and Schweitzer
[14] have shown that the Higman-Thompson groups are co-context-free.
Since the emptiness of the intersection of finitely many semilinear sets is decidable, we have:
Lemma 3.2. If G is knapsack-semilinear, then KP(G∗ ) and ExpEq(G) are decidable.
An example of a group G, where KP(G) is decidable but KP(G∗ ) (and hence ExpEq(G))
are undecidable is the Heisenberg group H3 (Z), see [12]. It is the group of all matrices of
the following form, where a, b, c ∈ Z:
1 a c
0 1 b
0 0 1
In particular, H3 (Z) is not knapsack-semilinear.
4. Wreath products
Let G and H be groups. Consider the direct sum K = ⊕_{h∈H} G_h, where G_h is a copy of G. We view K as the set G^{(H)} of all mappings f : H → G such that supp(f) = {h ∈ H | f(h) ≠ 1} is finite, together with pointwise multiplication as the group operation. The set supp(f) ⊆ H is called the support of f. The group H has a natural left action on G^{(H)} given by (hf)(a) = f(h^{-1}a), where f ∈ G^{(H)} and h, a ∈ H. The corresponding semidirect product G^{(H)} ⋊ H is the wreath product G ≀ H. In other words:
• Elements of G ≀ H are pairs (f, h), where h ∈ H and f ∈ G^{(H)}.
• The multiplication in G ≀ H is defined as follows: Let (f1 , h1 ), (f2 , h2 ) ∈ G ≀ H. Then (f1 , h1 )(f2 , h2 ) = (f, h1 h2 ), where f(a) = f1 (a) f2 (h1^{-1} a).
The following intuition might be helpful: An element (f, h) ∈ G ≀ H can be thought of
as a finite multiset of elements of G \ {1G } that are sitting at certain elements of H (the
mapping f ) together with the distinguished element h ∈ H, which can be thought of as
a cursor moving in H. If we want to compute the product (f1 , h1 )(f2 , h2 ), we do this as
follows: First, we shift the finite collection of G-elements that corresponds to the mapping
f2 by h1 : If the element g ∈ G \ {1G } is sitting at a ∈ H (i.e., f2 (a) = g), then we remove
g from a and put it to the new location h1 a ∈ H. This new collection corresponds to the
mapping f2′ : a ↦ f2 (h1^{-1} a). After this shift, we multiply the two collections of G-elements
pointwise: If in a ∈ H the elements g1 and g2 are sitting (i.e., f1 (a) = g1 and f2′ (a) = g2 ),
then we put the product g1 g2 into the location a. Finally, the new distinguished H-element
(the new cursor position) becomes h1 h2 .
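This description translates directly into code. The following sketch (plain Python; elements of G^{(H)} are represented as dictionaries mapping finitely many elements of H to non-identity elements of G, an encoding chosen here purely for illustration) implements the multiplication rule above:

def wreath_mul(x, y, mul_G, mul_H, e_G):
    (f1, h1), (f2, h2) = x, y
    f = dict(f1)
    for a, g in f2.items():                  # shift f2 by h1, then multiply pointwise
        b = mul_H(h1, a)
        f[b] = mul_G(f.get(b, e_G), g)
        if f[b] == e_G:
            del f[b]                         # keep the support finite and reduced
    return (f, mul_H(h1, h2))

# example in the lamplighter group Z_2 wr Z: square "light the lamp at 0 and step right"
mul_G = lambda a, b: (a + b) % 2
mul_H = lambda a, b: a + b
x = ({0: 1}, 1)
print(wreath_mul(x, x, mul_G, mul_H, 0))     # ({0: 1, 1: 1}, 2)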
By identifying f ∈ G^{(H)} with (f, 1_H) ∈ G ≀ H and h ∈ H with (1_{G^{(H)}}, h), we regard G^{(H)} and H as subgroups of G ≀ H. Hence, for f ∈ G^{(H)} and h ∈ H, we have f h = (f, 1_H)(1_{G^{(H)}}, h) = (f, h). There are two natural projection morphisms σ_{G≀H} : G ≀ H → H and τ_{G≀H} : G ≀ H → G^{(H)} with
    σ_{G≀H}(f, h) = h,    (1)
    τ_{G≀H}(f, h) = f.    (2)
If G (resp. H) is generated by the set Σ (resp. Γ) with Σ ∩ Γ = ∅, then G ≀ H is generated
by the set {(fa , 1H ) | a ∈ Σ} ∪ {(f1G , b) | b ∈ Γ}, where for g ∈ G, the mapping fg : H → G
is defined by fg (1H ) = g and fg (x) = 1G for x ∈ H \ {1H }. This generating set can be
identified with Σ ⊎ Γ. We will need the following embedding lemma:
Lemma 4.1. Let G, H, K be finitely generated groups where K has a decidable word problem.
Then, given n ∈ N with n ≤ |K|, one can compute an embedding of Gn ≀ H into G ≀ (H × K).
Proof. Let Σ, Γ, and Θ be finite generating sets of G, H, and K, respectively. Suppose
n ∈ N is given. Since K has a decidable word problem and |K| ≥ n, we can compute words
w1 , . . . , wn ∈ Θ∗ that represent pairwise distinct elements k1 , . . . , kn of K.
Let πi : Gn → G be the projection on the i-th coordinate. Since the statement of the
lemma does not depend on the chosen generating sets of Gn ≀ H and G ≀ (H × K), we may
choose one. The group G^n is generated by the tuples si := (1, . . . , 1, s, 1, . . . , 1) ∈ G^n, for s ∈ Σ and i ∈ [1, n], where s is at the i-th coordinate. Hence, ∆ = {si | s ∈ Σ, i ∈ [1, n]} ⊎ Γ is a finite generating set of G^n ≀ H.
The embedding ι : ∆* → (Σ ∪ Γ ∪ Θ)* is defined by ι(si ) = wi s wi^{-1} for s ∈ Σ, i ∈ [1, n]
and ι(t) = t for t ∈ Γ. It remains to be shown that ι induces an embedding of Gn ≀ H into
G ≀ (H × K).
Consider the injective morphism ϕ : (G^n)^{(H)} → G^{(H×K)} where for ζ ∈ (G^n)^{(H)}, we have
    [ϕ(ζ)](h, k) = πi (ζ(h)) if k = ki , and [ϕ(ζ)](h, k) = 1 if k ∉ {k1 , . . . , kn }.
We claim that ϕ extends to an injective morphism ϕ̂ : (G^n)^{(H)} ⋊ H → G^{(H×K)} ⋊ H, where H acts on G^{(H×K)} by (hζ)(a, k) = ζ(h^{-1}a, k) for h, a ∈ H, k ∈ K. To show this, it suffices to establish ϕ(hζ) = hϕ(ζ) for all ζ ∈ (G^n)^{(H)}, h ∈ H, i.e., the action of H commutes with the morphism ϕ. To see this, note that
    [ϕ(hζ)](a, ki ) = πi ((hζ)(a)) = πi (ζ(h^{-1}a)) = [ϕ(ζ)](h^{-1}a, ki ) = [hϕ(ζ)](a, ki )
and if k ∉ {k1 , . . . , kn }, we have
    [ϕ(hζ)](a, k) = 1 = [ϕ(ζ)](h^{-1}a, k) = [hϕ(ζ)](a, k).
Since the above action of H on G^{(H×K)} is the restriction of the action of H × K on G^{(H×K)}, we have G^{(H×K)} ⋊ H ≤ G^{(H×K)} ⋊ (H × K) = G ≀ (H × K). Thus ϕ̂ can be viewed as an embedding ϕ̂ : G^n ≀ H → G ≀ (H × K).
We complete the proof by showing that ι represents ϕ̂, i.e., ϕ̂ maps the element of (G^n)^{(H)} ⋊ H represented by a word w ∈ ∆* to the element of G ≀ (H × K) represented by ι(w). It suffices to prove this in the case w ∈ ∆ ⊆ (G^n)^{(H)} ⋊ H. If w = si with s ∈ Σ, i ∈ [1, n], we observe that ϕ̂(si ) = ki s ki^{-1} = ι(si ). Moreover, for t ∈ Γ ⊆ H we have ϕ̂(t) = t = ι(t).
5. Main results
In this section, we state the main results of the paper. We begin with a general necessary
condition for knapsack to be decidable for a wreath product. Note that if H is finite, then
G ≀ H is a finite extension of G^{|H|} [16, Proposition 1], meaning that KP(G ≀ H) is decidable if and only if KP(G^{|H|}) is decidable [18, Theorem 11]¹. Therefore, we are only interested in
the case that H is infinite.
Proposition 5.1. Suppose H is infinite. If KP(G ≀ H) is decidable, then KP(H) and
KP(G∗ ) are decidable.
Proof. As a subgroup of G ≀ H, H inherits decidability of the knapsack problem. According
to Lemma 4.1, given m ∈ N, we can compute an embedding of Gm into G ≀ H and thus solve
knapsack instances over Gm uniformly in m.
Proposition 5.1 shows that KP(H3 (Z) ≀ Z) is undecidable: It was shown in [12] that
KP(H3 (Z)) is decidable, whereas for some m > 1, the problem KP(H3 (Z)m ) is undecidable.
Proposition 5.1 raises the question whether decidability of KP(H) and KP(G∗ ) implies
decidability of KP(G ≀ H). The answer turns out to be negative. Let us first recall the
following result from [12]:
Theorem 5.2 ([12]). For every ℓ ∈ N, KP(H3 (Z) × Zℓ ) is decidable.
Hence, by the following result, which is shown in Section 6, decidability of KP(H) and
KP(G∗ ) does in general not imply decidability of KP(G ≀ H):
1Strictly speaking, only preservation of NP-membership was shown there. However, the proof also yields
preservation of decidability.
Theorem 5.3. There is an ℓ ∈ N such that for every group G 6= 1, KP(G ≀ (H3 (Z) × Zℓ ))
is undecidable.
We therefore need to strengthen the assumptions on H in order to show decidability of
KP(G ≀ H). By adding the weak assumption of knapsack-semilinearity for H, we obtain a
partial converse to Proposition 5.1. In Section 7 we prove:
Theorem 5.4. Let H be knapsack-semilinear. Then KP(G ≀ H) is decidable if and only if
KP(G∗ ) is decidable.
In fact, in case G is also knapsack-semilinear, our algorithm constructs a semilinear
representation of the solution set. Therefore, we get:
Theorem 5.5. The group G ≀ H is knapsack-semilinear if and only if both G and H are
knapsack-semilinear.
Since every free abelian group is clearly knapsack-semilinear, it follows that the iterated wreath products G_{1,r} = Z^r and G_{d+1,r} = Z^r ≀ G_{d,r} are knapsack-semilinear. By the well-known Magnus embedding, the free solvable group S_{d,r} embeds into G_{d,r}. Hence, we get:
Corollary 5.6. Every free solvable group is knapsack-semilinear. Hence, solvability of
exponent equations is decidable for free solvable groups.
Finally, we consider the complexity of knapsack for wreath products. We prove NP-completeness for an important special case:
Theorem 5.7. For every non-trivial finitely generated abelian group G, KP(G ≀ Z) is NP-complete.
6. Undecidability: Proof of Theorem 5.3
Our proof of Theorem 5.3 employs the undecidability of the knapsack problem for certain
powers of H3 (Z). In fact, we need a slightly stronger version, which states undecidability
already for knapsack instances of bounded depths.
Theorem 6.1 ([12]). There is a fixed constant m and a fixed list of group elements g1 , . . . , gk ∈ H3 (Z)^m such that membership in the product ∏_{i=1}^{k} ⟨gi⟩ is undecidable. In particular, there are k, m ∈ N such that solvability of knapsack instances of depth k is undecidable for H3 (Z)^m.
We prove Theorem 5.3 by showing the following.
Proposition 6.2. There are m, ℓ ∈ N such that for every non-trivial group G, the knapsack
problem for Gm ≀ (H3 (Z) × Zℓ ) is undecidable.
Let k and m be the constants from Theorem 6.1. In order to prove Proposition 6.2, consider a knapsack expression
    E = g1^{x1} · · · gk^{xk} g_{k+1}    (3)
with g1 , . . . , g_{k+1} ∈ H3 (Z)^m. We can write gi = (g_{i,1}, . . . , g_{i,m}) for i ∈ [1, k + 1], which leads to the expressions
    Ej = g_{1,j}^{x_{1,j}} · · · g_{k,j}^{x_{k,j}} g_{k+1,j}.    (4)
Let ℓ = m·k and let α : H3 (Z) × Z^ℓ → H3 (Z) and β : H3 (Z) × Z^ℓ → Z^ℓ be the projections onto the left and right component, respectively. For each p ∈ [1, ℓ], let e_p ∈ Z^ℓ be the p-th unit vector e_p = (0, . . . , 0, 1, 0, . . . , 0). For j ∈ [1, m] we define the following knapsack expressions over H3 (Z) × Z^ℓ (here 0 denotes the zero vector of dimension ℓ):
    E′_j = ∏_{i=1}^{k} (g_{i,j}, e_{(j−1)k+i})^{x_{i,j}} · (g_{k+1,j}, 0)    and    M_j = ∏_{t=1}^{ℓ} (1, −e_t)^{y_{j,t,0}} (1, e_t)^{y_{j,t,1}}.
Note that the term (j − 1)k + i assumes all numbers 1, . . . , m · k as i ranges over 1, . . . , k
and j ranges over 1, . . . , m.
Since G is non-trivial, there is some a ∈ G \ {1}. For each j ∈ [1, m], let aj = (1, . . . , 1, a, 1, . . . , 1) ∈ G^m, where the a is in the j-th coordinate. With this, we define
    C = ∏_{i=1}^{k} ∏_{j=1}^{m} (1, −e_{(j−1)k+i})^{z_i}    and    F = ∏_{j=1}^{m} (a_j E′_j) · C · ∏_{j=1}^{m} (a_j^{-1} M_j).
Since G^m and H3 (Z) × Z^ℓ are subgroups of G^m ≀ (H3 (Z) × Z^ℓ), we can treat F as a knapsack expression over G^m ≀ (H3 (Z) × Z^ℓ). We will show that Sol(F) ≠ ∅ if and only if Sol(E) ≠ ∅.
For this we need another simple lemma:
Lemma 6.3. Let G, H be groups and let a ∈ G \ {1} and f, g, h ∈ H. Regard G and H as subsets of G ≀ H. Then f a g a^{-1} h = 1 if and only if g = 1 and f h = 1.
Proof. The right-to-left direction is trivial. For the converse, suppose f a g a^{-1} h = 1 and g ≠ 1. By definition of G ≀ H, we can write f a g a^{-1} h = (ζ, p) with ζ ∈ G^{(H)} and p ∈ H, where ζ(f) = a ≠ 1, ζ(fg) = a^{-1} ≠ 1, and p = fgh. This clearly implies f a g a^{-1} h ≠ 1, a contradiction. Hence, f a g a^{-1} h = 1 implies g = 1 and thus f h = 1.
In the proof of the following lemma, we use the simple fact that every morphism ϕ : G →
G′ extends uniquely to a morphism ϕ̂ : G ≀ H → G′ ≀ H such that ϕ̂↾G = ϕ and ϕ̂↾H = idH
(the identity mapping on H).
Lemma 6.4. A valuation ν for F satisfies ν(F) = 1 if and only if for every i ∈ [1, k], j ∈ [1, m], t ∈ [1, m − 1], we have
    ν(Ej ) = 1,   ν(x_{i,j}) = ν(z_i),    (5)
    ν(M_t) = ν(E′_t),   ν(M_1 · · · M_m) = 1.    (6)
Proof. Let πj : G^m → G be the projection morphism onto the j-th coordinate and let π̂j : G^m ≀ (H3 (Z) × Z^ℓ) → G ≀ (H3 (Z) × Z^ℓ) be its extension with π̂j↾_{H3(Z)×Z^ℓ} = id_{H3(Z)×Z^ℓ}. Of course, for g ∈ G^m ≀ (H3 (Z) × Z^ℓ), we have g = 1 if and only if π̂j(g) = 1 for every j ∈ [1, m]. Observe that
    π̂r(ν(F)) = ν( ∏_{j=1}^{r−1} E′_j · a · ∏_{j=r}^{m} E′_j · C · ∏_{j=1}^{r−1} M_j · a^{-1} · ∏_{j=r}^{m} M_j )
for every r ∈ [1, m]. Therefore, according to Lemma 6.3, ν(F ) = 1 holds if and only if for
every r ∈ [1, m], we have
    ν(E′_1 · · · E′_m C M_1 · · · M_m) = 1   and   ν(E′_r · · · E′_m C M_1 · · · M_{r−1}) = 1.    (7)
We claim that Eq. (7) holds for all r ∈ [1, m] if and only if
    ν(E′_1 · · · E′_m C) = 1,   ν(E′_t) = ν(M_t)   and   ν(M_1 · · · M_m) = 1    (8)
for all t ∈ [1, m − 1]. First assume that Eq. (8) holds for all t ∈ [1, m − 1]. We clearly get ν(E′_1 · · · E′_m C M_1 · · · M_m) = 1 and ν(E′_r · · · E′_m C M_1 · · · M_{r−1}) = 1 for r = 1. The equations ν(E′_r · · · E′_m C M_1 · · · M_{r−1}) = 1 for r ∈ [2, m] are obtained by conjugating ν(E′_1 · · · E′_m C) = 1 with ν(E′_1) = ν(M_1), . . . , ν(E′_{r−1}) = ν(M_{r−1}). Now assume that Eq. (7) holds for all r ∈ [1, m]. Taking r = 1 yields ν(E′_1 · · · E′_m C) = 1 and hence ν(M_1 · · · M_m) = 1. Moreover, we have ν(E′_1 · · · E′_{r−1}) = ν(E′_1 · · · E′_m C M_1 · · · M_{r−1}) = ν(M_1 · · · M_{r−1}) for all r ∈ [1, m], which implies ν(E′_t) = ν(M_t) for all t ∈ [1, m − 1].
Observe that by construction of E′_j and C, we have
    α(ν(E′_j)) = ν(E_j),   π_{(j−1)k+i}(β(ν(E′_1 · · · E′_m))) = ν(x_{i,j}),    (9)
    α(ν(C)) = 1,   π_{(j−1)k+i}(β(ν(C))) = −ν(z_i),    (10)
for every i ∈ [1, k] and j ∈ [1, m].
Note that the equations in Eq. (8) only involve elements of H3 (Z) × Z^ℓ. Since for elements g ∈ H3 (Z) × Z^ℓ, we have g = 1 if and only if α(g) = 1 and β(g) = 1, the equation ν(E′_1 · · · E′_m C) = 1 is equivalent to α(ν(E′_1 · · · E′_m C)) = 1 and β(ν(E′_1 · · · E′_m C)) = 1. By Eqs. (9) and (10), this is equivalent to ν(E_1 · · · E_m) = 1 and ν(x_{i,j}) = ν(z_i) for all i ∈ [1, k] and j ∈ [1, m]. Finally, ν(E′_t) = ν(M_t) implies ν(E_t) = α(ν(E′_t)) = α(ν(M_t)) = 1 for all t ∈ [1, m − 1] and hence also ν(E_m) = 1. Thus, Eq. (8) is equivalent to the conditions in the lemma.
Lemma 6.5. Sol(F) ≠ ∅ if and only if Sol(E) ≠ ∅.
Proof. If ν(F) = 1, then according to Lemma 6.4, the valuation also satisfies ν(E_j) = 1 and ν(x_{i,j}) = ν(z_i) for i ∈ [1, k] and j ∈ [1, m]. In particular ν(x_{i,j}) = ν(x_{i,j′}) for j, j′ ∈ [1, m]. Thus, we have
    g_1^{ν(x_{1,1})} · · · g_k^{ν(x_{k,1})} g_{k+1} = 1
and hence Sol(E) ≠ ∅.
Suppose now that Sol(E) ≠ ∅. Then there is a valuation ν with ν(E_j) = 1 and ν(x_{i,j}) = ν(x_{i,j′}) for i ∈ [1, k] and j, j′ ∈ [1, m]. We shall prove that we can extend ν so as to satisfy the conditions of Lemma 6.4.
The left-hand equation in Eq. (5) is fulfilled already. Since ν(x_{i,j}) = ν(x_{i,j′}), setting ν(z_i) = ν(x_{i,1}) will satisfy the right-hand equation of Eq. (5). Finally, observe that by assigning suitable values to the variables y_{j,s,b} for j ∈ [1, m], s ∈ [1, ℓ], and b ∈ {0, 1}, we can enforce any value from {1} × Z^ℓ for ν(M_j). Therefore, we can extend ν so that it satisfies Eq. (6) as well.
This completes the proof of Proposition 6.2, which allows us to prove Theorem 5.3.
Proof of Theorem 5.3. By Proposition 6.2, there are ℓ, m ∈ N such that the knapsack problem is undecidable for Gm ≀(H3 (Z)×Zℓ ). According to Lemma 4.1, the group Gm ≀(H3 (Z)×Zℓ )
is a subgroup of G≀(H3 (Z)×Zℓ+1 ), meaning that the latter also has an undecidable knapsack
problem.
7. Decidability: Proof of Theorem 5.4 and Theorem 5.5
Let us fix a wreath product G ≀ H. Recall the projection homomorphisms σ = σ_{G≀H} : G ≀ H → H and τ = τ_{G≀H} : G ≀ H → G^{(H)} from (1). For g ∈ G ≀ H we write supp(g) for supp(τ(g)). A knapsack expression E = h0 g1^{x1} h1 · · · gk^{xk} hk over G ≀ H is called torsion-free if for each i ∈ [1, k], either σ(gi ) = 1 or σ(gi ) has infinite order. A map ϕ : N^a → N^b is called affine if there is a matrix A ∈ N^{b×a} and a vector µ ∈ N^b such that ϕ(ν) = Aν + µ for every ν ∈ N^a.
Proposition 7.1. Let knapsack be decidable for H. For every knapsack expression E over G ≀ H, one can construct torsion-free expressions E1 , . . . , Er and affine maps ϕ1 , . . . , ϕr such that Sol(E) = ⋃_{i=1}^{r} ϕi(Sol(Ei )).
Proof. First of all, note that since knapsack is decidable for H, we can decide for which i the element σ(gi ) ∈ H has finite or infinite order. For a knapsack expression F = h0 g1^{x1} h1 · · · gk^{xk} hk, let t(F) be the set of indices i ∈ [1, k] such that σ(gi ) ≠ 1 and σ(gi ) has finite order. We show that if |t(E)| > 0, then one can construct expressions E0 , . . . , E_{r−1} and affine maps ϕ0 , . . . , ϕ_{r−1} such that |t(Ej )| < |t(E)| and Sol(E) = ⋃_{j=0}^{r−1} ϕj(Sol(Ej )). This suffices, since the composition of affine maps is again an affine map.
Suppose E = h0 g1^{x1} h1 · · · gk^{xk} hk and σ(gi ) ≠ 1 has finite order r. Note that we can compute r. For every j ∈ [0, r − 1], let
    Ej = h0 g1^{x1} h1 · · · g_{i−1}^{x_{i−1}} h_{i−1} (gi^r)^{xi} (gi^j hi ) g_{i+1}^{x_{i+1}} h_{i+1} · · · gk^{xk} hk.
Let X = {x1 , . . . , xk }. Moreover, let ϕj : N^X → N^X be the affine map such that for ν ∈ N^X, we have ϕj(ν)(xℓ) = ν(xℓ) for ℓ ≠ i and ϕj(ν)(xi) = r · ν(xi) + j. Note that then σ(gi^r) = σ(gi )^r = 1 and thus t(Ej ) = t(E) \ {i}. Furthermore, we clearly have Sol(E) = ⋃_{j=0}^{r−1} ϕj(Sol(Ej )).
Since the image of a semilinear set under an affine map is again semilinear, Proposition 7.1
tells us that it suffices to prove Theorems 5.4 and 5.5 for torsion-free knapsack expressions.
For the rest of this section let us fix a torsion-free knapsack expression E over G ≀ H. We can assume that E = g1^{x1} g2^{x2} · · · gk^{xk} g_{k+1} (note that if g has infinite order, then c^{-1} g c also has infinite order). We partition the set V_E = {x1 , . . . , xk } of variables in E as V_E = S ⊎ M, where S = {xi ∈ V_E | σ(gi ) = 1} and M = {xi ∈ V_E | ord(σ(gi )) = ∞}. In this situation, the following notation will be useful. If U = A ⊎ B for a set of variables U ⊆ V and µ ∈ N^A and κ ∈ N^B, then we write µ ⊕ κ ∈ N^U for the valuation with (µ ⊕ κ)(x) = µ(x) for x ∈ A and (µ ⊕ κ)(x) = κ(x) for x ∈ B.
Computing powers. A key observation in our proof is that in order to compute the group
element τ (g m )(h) (in the cursor intuition, this is the element labelling the point h ∈ H in the
wreath product element g m ) for h ∈ H and g ∈ G ≀ H, where σ(g) has infinite order, one only
has to perform at most |supp(g)| many multiplications in G, yielding a bound independent
of m. Let us make this precise. Suppose h ∈ H has infinite order. For h′ , h′′ ∈ H, we
write h′ ⪯h h′′ if there is an n ≥ 0 with h′ = hn h′′ . Then, ⪯h is transitive. Moreover,
since h has infinite order, ⪯h is also anti-symmetric and thus a partial order. Observe that
if knapsack is decidable for H, given h, h′ , h′′ ∈ H, we can decide whether h has infinite
order and whether h′ ⪯h h′′ . It turns out that for g ∈ G ≀ H, the order ⪯σ(g) tells us how
to evaluate the mapping τ (g m ) at a certain element of H. Before we make this precise, we
need some notation.
We will sometimes want to multiply all elements ai for i ∈ I such that the order in which
we multiply is specified by some linear order on I. If (I, ≤) is a finite linearly ordered set
with I = {i1 , . . . , in }, i1 < i2 < . . . < in , then we write ∏^{≤}_{i∈I} ai for ∏_{j=1}^{n} a_{ij}. If the order ≤
is clear from the context, we just write ∏_{i∈I} ai.
Lemma 7.2. Let g ∈ G ≀ H such that ord(σ(g)) = ∞ and let h ∈ H, m ∈ N. Moreover let
F = supp(g) ∩ {σ(g)^{−i} h | i ∈ [0, m − 1]}. Then F is linearly ordered by ⪯σ(g) and

τ(g^m)(h) = ∏^{⪯σ(g)}_{h′∈F} τ(g)(h′).
Proof. By definition of G≀H, we have τ (g1 g2 )(h) = τ (g1 )(h)·τ (g2 )(σ(g1 )−1 h). By induction,
this implies
τ(g^m)(h) = ∏_{i=0}^{m−1} τ(g)(σ(g)^{−i} h) = ∏_{j=1}^{n} τ(g)(σ(g)^{−i_j} h),

where {i1 , . . . , in } = {i ∈ [0, m − 1] | σ(g)^{−i} h ∈ supp(g)} with i1 < · · · < in . Note
that then F = {σ(g)^{−i_j} h | j ∈ [1, n]}. Since σ(g)^{−i_j} h = σ(g)^{i_{j+1}−i_j} σ(g)^{−i_{j+1}} h, we have
σ(g)^{−i_1} h ⪯σ(g) · · · ⪯σ(g) σ(g)^{−i_n} h.
Lemma 7.3. Let g ∈ G ≀ H with σ(g) = 1 and h ∈ H. Then τ (g m )(h) = (τ (g)(h))m .
Proof. Recall that for g1 , g2 ∈ G ≀ H, we have τ(g1 g2)(h) = τ(g1)(h) · τ(g2)(σ(g1)^{−1} h).
Therefore, if σ(g) = 1, then τ(g^m)(h) = ∏_{i=0}^{m−1} τ(g)(σ(g)^{−i} h) = (τ(g)(h))^m.
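The following small sketch (ours, not from the text) illustrates Lemma 7.2 and Lemma 7.3 in the concrete wreath product Z ≀ Z, with the base group written additively: an element is stored as a pair (f, d), where f is a finite-support map (the τ-part) and d is the cursor shift (the σ-part). The chosen element and all names are illustrative assumptions.

from collections import defaultdict

def mult(a, b):
    """Multiply (f1, d1) * (f2, d2) in Z wr Z:
    tau(g1 g2)(h) = tau(g1)(h) + tau(g2)(h - d1), written additively."""
    f1, d1 = a
    f2, d2 = b
    f = defaultdict(int, f1)
    for h, v in f2.items():
        f[h + d1] += v          # the second factor is shifted by d1
    return ({h: v for h, v in f.items() if v != 0}, d1 + d2)

def power(g, m):
    res = ({}, 0)               # identity element
    for _ in range(m):
        res = mult(res, g)
    return res

# g with sigma(g) = 1 (infinite order in Z) and support {0, 2}
g = ({0: 3, 2: -1}, 1)
m, h = 7, 4

# Left-hand side of Lemma 7.2: tau(g^m)(h)
lhs = power(g, m)[0].get(h, 0)

# Right-hand side: "product" (here: sum) over F = supp(g) ∩ {sigma(g)^{-i} h}
F = [h - i for i in range(m) if (h - i) in g[0]]   # ordered along the order of Lemma 7.2
rhs = sum(g[0][x] for x in F)

assert lhs == rhs
print(lhs, rhs)   # both equal 2 (contributions f(2) + f(0) = -1 + 3)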
Addresses. A central concept in our proof is that of an address. Intuitively, a solution
to the equation E = 1 can be thought of as a sequence of instructions on how to walk
through the Cayley graph of H and place elements of G at those nodes. Here, being a
solution means that in the end, all the nodes contain the identity of G. In order to express
that every node carries 1 in the end, we want to talk about at which points in the product
E = g1x1 g2x2 · · · gkxk gk+1 a particular node is visited. An address is a datum that contains just
enough information about such a point to determine which element of G has been placed
during that visit.
A pair (i, h) with i ∈ [1, k + 1] and h ∈ H is called an address if h ∈ supp(gi ). The set
of addresses of the expression E is denoted by A. Note that A is finite and computable. To
each address (i, h), we associate the group element γ(i, h) = gi of the expression E.
A linear order on addresses. We will see that if a node is visited more than once, then
(i) each time2 it does so at a different address and (ii) the order of these visits only depends
on the addresses. To capture the order of these visits, we define a linear order on addresses.
We partition A = ⋃_{i∈[1,k+1]} Ai , where Ai = {(i, h) | h ∈ supp(gi )} for i ∈ [1, k + 1]. Then,
for a ∈ Ai and a′ ∈ Aj , we let a < a′ if and only if i < j. It remains to order addresses
within each Ai . Within Ak+1 , we pick an arbitrary order. If i ∈ [1, k] and σ(gi ) = 1, we also
order Ai arbitrarily. Finally, if i ∈ [1, k] and σ(gi ) has infinite order, then we pick a linear
order ≤ on Ai so that for h, h′ ∈ supp(gi ), h ⪯σ(gi) h′ implies (i, h) ≤ (i, h′ ). Note that this
is possible since ⪯σ(gi) is a partial order on H.
Cancelling profiles. In order to express that a solution for E yields the identity at every
node of the Cayley graph of H, we need to compute the element of G that is placed after
the various visits at a particular node. We therefore associate to each address a ∈ A an
expression over G that yields the element placed during a visit at this address. In analogy
to τ (g) for g ∈ G ≀ H, we denote this expression by τ (a). If a = (k + 1, h), then we set
τ (a) = τ (gk+1 )(h). Now, let a = (i, h) for i ∈ [1, k]. If σ(gi ) = 1, then τ (a) = τ (gi )(h)xi .
Finally, if σ(gi ) has infinite order, then τ (a) = τ (gi )(h).
This allows us to express the element of G that is placed at a node h ∈ H if h has
been visited with a particular set of addresses. To each subset C ⊆ A, we assign the
expression EC = ∏_{a∈C} τ(a), where the order of multiplication is given by the linear order
on A. Observe that only variables in S ⊆ {x1 , . . . , xk } occur in EC . Therefore, given κ ∈ NS ,
we can evaluate κ(EC ) ∈ G. We say that C ⊆ A is κ-cancelling if κ(EC ) = 1.
In order to record which sets of addresses can cancel simultaneously (meaning: for the
same valuation), we use profiles. A profile is a subset of P(A) (the power set of A). A profile
P ⊆ P(A) is said to be κ-cancelling if every C ∈ P is κ-cancelling. A profile is cancelling if
it is κ-cancelling for some κ ∈ NS .
Clusters. We also need to express that there is a node h ∈ H that is visited with a particular
set of addresses. To this end, we associate to each address a ∈ A another expression σ(a).
As opposed to τ (a), the expression σ(a) is over H and variables M ′ = M ∪ {yi | xi ∈ M }.
Let a = (i, h) ∈ A. When we define σ(a), we will also include factors σ(gj )xj and σ(gj )yj
where σ(gj ) = 1. However, since these factors do not affect the evaluation of the expression,
this should be interpreted as leaving out such factors.
(1) If i = k + 1 then σ(a) = σ(g1 )x1 · · · σ(gk )xk h.
(2) If i ∈ [1, k] then σ(a) = σ(g1 )x1 · · · σ(gi−1 )xi−1 σ(gi )yi h.
We now want to express that when multiplying g1^{ν(x1)} · · · gk^{ν(xk)} gk+1 , there is a node h ∈ H
such that the set of addresses with which one visits h is precisely C ⊆ A. In this case, we
will call C a cluster.
Let µ ∈ NM and µ′ ∈ NM′. We write µ′ ⊏ µ if µ′(xi) = µ(xi) for xi ∈ M and
µ′(yi) ∈ [0, µ(xi) − 1] for every yi ∈ M′. We can now define the set of addresses at which
one visits h ∈ H: For h ∈ H, let

Aµ,h = {a ∈ A | µ′(σ(a)) = h for some µ′ ∈ NM′ with µ′ ⊏ µ}.

A subset C ⊆ A is called a µ-cluster if C ≠ ∅ and there is an h ∈ H such that C = Aµ,h.
2 Here, we count two visits inside the same factor gi, i ∈ [1, k], with σ(gi) = 1 as one visit.
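As a small illustration of addresses and clusters (our own sketch, not part of the proof), the following code enumerates the addresses of a torsion-free expression E = g1^{x1} g2^{x2} g3 over G ≀ Z and computes the sets Aµ,h for a fixed valuation µ; every non-empty such set is a µ-cluster. The concrete elements and all names are assumptions made for this example.

# g1, g2 have non-zero shift (their variables lie in M), g3 is the constant part.
g = {1: ({0: "a"}, 2), 2: ({0: "b", 1: "c"}, 3), 3: ({0: "d"}, 0)}
mu = {1: 4, 2: 2}                     # mu(x1) = 4, mu(x2) = 2

addresses = [(i, h) for i in g for h in g[i][0]]

def sigma_of_address(i, h, y):
    """Evaluate sigma(a) for a = (i, h) with the extra variable y_i set to y."""
    shift = sum(g[j][1] * mu[j] for j in range(1, i))
    return shift + (g[i][1] * y if i in mu else 0) + h

clusters = {}
for (i, h) in addresses:
    ys = range(mu[i]) if i in mu else [0]      # y_i ranges over [0, mu(x_i)-1]
    for y in ys:
        clusters.setdefault(sigma_of_address(i, h, y), set()).add((i, h))

for node, C in sorted(clusters.items()):
    print(node, sorted(C))                     # each printed set is a mu-cluster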
Lemma 7.4. Let ν ∈ NVE with ν = µ ⊕ κ for µ ∈ NM and κ ∈ NS . Moreover, let h ∈ H
and C = Aµ,h . Then τ (ν(E))(h) = κ(EC ).
Proof. Recall that for k1 , k2 ∈ G ≀ H and h ∈ H, we have τ(k1 k2)(h) = τ(k1)(h) · τ(k2)(σ(k1)^{−1} h).
Therefore, we can calculate τ(ν(E))(h) as

τ(ν(E))(h) = ∏_{i=1}^{k} τ(gi^{ν(xi)})(σ(pi−1)^{−1} h) · τ(gk+1)(σ(pk)^{−1} h),

where pi = g1^{ν(x1)} · · · gi^{ν(xi)} for i ∈ [0, k]. On the other hand, by definition of the linear order
on A, we have

κ(EC) = ∏_{a∈C} κ(τ(a)) = (∏_{a∈C∩A1} κ(τ(a))) · · · (∏_{a∈C∩Ak} κ(τ(a))) · ∏_{a∈C∩Ak+1} κ(τ(a)).

Therefore, it suffices to show that

(11)  τ(gi^{ν(xi)})(σ(pi−1)^{−1} h) = ∏_{a∈C∩Ai} κ(τ(a))   for i ∈ [1, k],

(12)  τ(gk+1)(σ(pk)^{−1} h) = ∏_{a∈C∩Ak+1} κ(τ(a)).
We begin with Eq. (12). Note that by definition of C = Aµ,h , if a ∈ C ∩ Ak+1 =
Aµ,h ∩ Ak+1 with a = (k + 1, t), then there is a µ′ ∈ NM′ with µ′ ⊏ µ such that µ′(σ(a)) = h.
Moreover, since a ∈ Ak+1 , σ(a) contains only variables in M and thus µ′ (σ(a)) = µ(σ(a)) =
ν(σ(a)). Note that then
h = µ′ (σ(a)) = ν(σ(a)) = ν(σ(g1 )x1 · · · σ(gk )xk t) = σ(pk )t,
meaning that there is only one such t, namely t = σ(pk )−1 h. Moreover, recall that if
a = (k + 1, t), then τ (a) = τ (gk+1 )(t) ∈ G. Therefore, the right-hand side of Eq. (12) is
κ(τ(a)) = τ(gk+1)(t) = τ(gk+1)(σ(pk)^{−1} h),
which is the left-hand side of Eq. (12).
It remains to verify Eq. (11). Let us analyze the addresses in C ∩Ai for i ∈ [1, k]. Consider
a ∈ C ∩ Ai = Aµ,h ∩ Ai with a = (i, t). Since a ∈ Aµ,h , there is a µ′ ⊏ µ with µ′ (σ(a)) = h.
Since i ∈ [1, k] we have
(13)  h = µ′(σ(a)) = µ′(σ(g1)^{x1} · · · σ(gi−1)^{xi−1} σ(gi)^{yi} t)
      = σ(g1)^{ν(x1)} · · · σ(gi−1)^{ν(xi−1)} σ(gi)^{µ′(yi)} t = σ(pi−1) σ(gi)^{µ′(yi)} t.

Here again, if σ(gj) = 1, we mean that the factor σ(gj)^{ν(xj)} (resp., σ(gi)^{µ′(yi)}) does not
appear. We now distinguish two cases.
Case 1. σ(gi ) = 1. In this case, Eq. (13) tells us that h = σ(pi−1 )t, i.e., t = σ(pi−1 )−1 h.
Thus, C ∩ Ai = {(i, σ(pi−1 )−1 h)}. Moreover, since σ(gi ) = 1, τ (a) is defined as (τ (gi )(t))xi .
Therefore, the right-hand side of Eq. (11) reads

(τ(gi)(t))^{κ(xi)} = (τ(gi)(t))^{ν(xi)} = τ(gi^{ν(xi)})(t) = τ(gi^{ν(xi)})(σ(pi−1)^{−1} h),

where the second equality is due to Lemma 7.3. This is precisely the left-hand side of
Eq. (11).
Case 2. σ(gi ) has infinite order. Let
F = supp(gi ) ∩ {σ(gi )−j σ(pi−1 )−1 h | j ∈ [0, ν(xi ) − 1]}.
We claim that t ∈ F if and only if (i, t) ∈ C. If (i, t) ∈ C then Equation (13) directly
implies that t ∈ F . Conversely, assume that t ∈ F and let t = σ(gi )−j σ(pi−1 )−1 h for
j ∈ [0, ν(xi ) − 1]. Then, according to Eq. (13), setting µ′ (yi ) := j guarantees µ′ (σ(a)) = h
for a = (i, t), i.e., (i, t) ∈ C.
Observe that F is linearly ordered by ⪯σ(gi): If j < j′, then

σ(gi)^{−j} σ(pi−1)^{−1} h = σ(gi)^{j′−j} σ(gi)^{−j′} σ(pi−1)^{−1} h.
Therefore, we can compute the right-hand side of Eq. (11) as

∏_{a∈C∩Ai} κ(τ(a)) = ∏_{a∈C∩Ai} τ(a) = ∏^{⪯σ(gi)}_{t∈F} τ(gi)(t).
According to Lemma 7.2, this equals the left-hand side of Eq. (11).
Proposition 7.5. Let ν ∈ NVE with ν = µ ⊕ κ for µ ∈ NM and κ ∈ NS . Then ν(E) = 1
if and only if σ(ν(E)) = 1 and there is a κ-cancelling profile P such that every µ-cluster is
contained in P .
Proof. Note that ν(E) = 1 if and only if τ (ν(E)) = 1 and σ(ν(E)) = 1. Therefore, we show
that τ (ν(E)) = 1 if and only if there is a κ-cancelling profile P such that every µ-cluster is
contained in P .
First, suppose that there is a κ-cancelling profile P such that every µ-cluster is contained in P .
We need to show that then τ(ν(E)) = 1, meaning τ(ν(E))(h) = 1 for every
h ∈ H. Consider the set C = Aµ,h . If C = ∅, then by definition, we have EC = 1. Thus,
κ(EC ) = 1, which by Lemma 7.4 implies τ(ν(E))(h) = 1. If C ≠ ∅, then C is a µ-cluster
and hence κ-cancelling. Therefore, by Lemma 7.4, τ (ν(E))(h) = κ(EC ) = 1. This shows
that τ (ν(E)) = 1.
Now suppose τ (ν(E)) = 1 and let P ⊆ P(A) be the profile consisting of all sets Aµ,h with
h ∈ H. Then P is κ-cancelling, because if C ∈ P with C = Aµ,h , then by Lemma 7.4, we
have κ(EC ) = τ (ν(E))(h) = 1.
Lemma 7.6. Suppose KP(G∗ ) is decidable. Given an instance of knapsack for G ≀ H, we
can compute the set of cancelling profiles. If G is knapsack-semilinear, then for each profile
P , the set of κ such that P is κ-cancelling is semilinear.
Proof. A profile P ⊆ P(A) is κ-cancelling if and only if κ(EC ) = 1 for every C ∈ P .
Together, the expressions EC for C ∈ P constitute an instance of ExpEq(G) (and according
to Proposition 3.1, ExpEq(G) is decidable if KP(G∗ ) is decidable) and this instance is
solvable if and only if P is cancelling. This proves the first statement of the lemma. The
second statement holds because the set of κ ∈ NS such that P is κ-cancelling is precisely
⋂_{C∈P} Sol(EC) and because the class of semilinear sets is closed under Boolean operations.
Let LP ⊆ NM be the set of all µ ∈ NM such that every µ-cluster belongs to P .
Lemma 7.7. Let H be knapsack-semilinear. For every profile P ⊆ P(A), the set LP is
effectively semilinear.
Proof. We claim that the fact that every µ-cluster belongs to P can be expressed in Presburger arithmetic. This implies the lemma.
In addition to the variables in M′, we will use the variables in M̄′ = {x̄ | x ∈ M′}. For a
knapsack expression F = r0 s1^{z1} r1 · · · sm^{zm} rm with variables in M′, let
F^{−1} = rm^{−1} (sm^{−1})^{zm} · · · r1^{−1} (s1^{−1})^{z1} r0^{−1}.
Moreover, let F̄ = r0 s1^{z̄1} r1 · · · sm^{z̄m} rm. For µ ∈ NM′, the valuation µ̄ ∈ NM̄′ is defined by
µ̄(x̄) = µ(x) for all x ∈ M′. Furthermore, for µ ∈ NM̄′, we define the valuation µ̄ ∈ NM′ by
µ̄(x) = µ(x̄) for x ∈ M′. Thus, if µ ∈ NM′ or µ ∈ NM̄′, then applying the bar operation twice gives back µ.
As a first step, for each pair a, b ∈ A, we construct a Presburger formula ηa,b with free
variables M′ ∪ M̄′ such that for µa ∈ NM′ and µb ∈ NM′, we have µa ⊕ µ̄b |= ηa,b if and
only if µa(σ(a)) = µb(σ(b)). This is possible because µa(σ(a)) = µb(σ(b)) is equivalent to
(µa ⊕ µ̄b)(σ(a) · F̄^{−1}) = 1 for F = σ(b), and the solution set of this knapsack expression is
effectively semilinear by assumption.
Next, for each non-empty subset C ⊆ A, we construct a formula γC with free variables
in M ′ such that µ |= γC if and only if C is a µ-cluster. Since C ≠ ∅, we can pick a fixed
a ∈ C and let γC express the following:

(14)  ∃µ′ ∈ NM′ : µ′ ⊏ µ ∧ ⋀_{b∈C} (∃µ′′ ∈ NM′ : µ′′ ⊏ µ ∧ µ′(σ(a)) = µ′′(σ(b)))
      ∧ ⋀_{b∈A\C} (∀µ′′ ∈ NM′ : µ′′ ⊏ µ → ¬(µ′(σ(a)) = µ′′(σ(b)))).

Observe that µ′ ⊏ µ and µ′′ ⊏ µ are easily expressible in Presburger arithmetic.
Let us show that in fact µ |= γC if and only if C is a µ-cluster. Consider some C ⊆ A and
let a ∈ C be the element picked to define γC . If µ |= γC , then there is a µ′ ∈ NM′ with the
properties stated in Eq. (14). We claim that with h := µ′(σ(a)), we have C = Aµ,h . The
second of the three conjuncts in Eq. (14) states that for every b ∈ C there is a µ′′ ∈ NM′
such that µ′′ ⊏ µ and µ′′(σ(b)) = µ′(σ(a)) = h. Thus, b ∈ Aµ,h , proving C ⊆ Aµ,h . The
third conjunct states that the opposite is true for every b ∈ A \ C, so that b ∉ Aµ,h for all
b ∈ A \ C. In other words, we have Aµ,h ⊆ C and thus Aµ,h = C.
Conversely, suppose C ≠ ∅ and C = Aµ,h . Let a ∈ C be the element chosen to define γC .
Since a ∈ Aµ,h , there is a µ′ ⊏ µ with h = µ′(σ(a)). Moreover, for every b ∈ C, there is a
µ′′ ⊏ µ with µ′′(σ(b)) = h = µ′(σ(a)). Hence, the second conjunct is satisfied. Furthermore,
for every b ∈ A \ Aµ,h , there is no µ′′ ⊏ µ with µ′′(σ(b)) = h, meaning that the third conjunct
is satisfied as well. Hence, µ |= γC . Altogether, we have µ |= γC if and only if C is a µ-cluster.
Finally, we get a formula with free variables M that expresses that every µ-cluster belongs
to P by writing ⋀_{C∈P(A)\P, C≠∅} ¬γC .
We are now ready to prove Theorems 5.4 and 5.5. Let H be knapsack-semilinear and let
KP(G∗ ) be decidable. For each profile P ⊆ P(A), let KP ⊆ NS be the set of all κ ∈ NS
such that P is κ-cancelling.
Observe that for ν = µ⊕κ, where µ ∈ NM and κ ∈ NS , the value of σ(ν(E)) only depends
on µ. Moreover, the set T ⊆ NM of all µ such that σ(ν(E)) = 1 is effectively semilinear
because H is knapsack-semilinear. Proposition 7.5 tells us that Sol(E) = ⋃_{P⊆P(A)} KP ⊕
(LP ∩ T ) and Lemma 7.7 states that LP is effectively semilinear. This implies Theorem 5.4:
We can decide solvability of E by checking, for each of the finitely many profiles P , whether
KP ≠ ∅ (which is decidable by Lemma 7.6) and whether LP ∩ T ≠ ∅. Moreover, if G is
knapsack-semilinear, then Lemma 7.6 tells us that KP and thus Sol(E) is semilinear as well.
This proves Theorem 5.5.
8. Complexity: Proof of Theorem 5.7
Throughout the section we fix a finitely generated group G. The goal of this section is to
show that if G is abelian and non-trivial, then KP(G ≀ Z) is NP-complete.
8.1. Periodic words over groups. In this section we define a countable subgroup of Gω
(the direct product of ℵ0 many copies of G) that consists of all periodic sequences over
G. We show that the membership problem for certain subgroups of this group can be
solved in polynomial time if G is abelian. We believe that this is a result of independent
interest which might have other applications. Therefore, we prove the best possible complexity bound, which is TC0 .3 This is the class of all problems that can be solved with
uniform threshold circuits of polynomial size and constant depth. Here, uniformity means
3Alternatively, the reader can always replace TC0 by polynomial time in the further arguments.
DLOGTIME-uniformity, see e.g. [10] for more details. Complete problems for TC0 are multiplication and division of binary encoded integers (or, more precisely, the question whether
a certain bit in the output number is 1) [10]. TC0 -complete problems in the context of
group theory are the word problem for any infinite finitely generated solvable linear group
[13], the subgroup membership problem for finitely generated nilpotent groups [25], the conjugacy problem for free solvable groups and wreath products of abelian groups [21], and the
knapsack problem for finitely generated abelian groups [19].
With G+ we denote the set of all tuples (g0 , . . . , gq−1 ) over G of arbitrary length q ≥ 1.
With Gω we denote the set of all mappings f : N → G. Elements of Gω can be seen as
infinite sequences (or words) over the set G. We define the binary operation ◦ on Gω by
pointwise multiplication: (f ◦ g)(n) = f (n)g(n). In fact, Gω together with the multiplication
◦ is the direct product of ℵ0 many copies of G. The identity element is the mapping id with
id(n) = 1 for all n ∈ N. For f1 , f2 , . . . , fn ∈ Gω we write ∏_{i=1}^{n} fi for f1 ◦ f2 ◦ · · · ◦ fn . If G
is abelian, we write ∑_{i=1}^{n} fi for ∏_{i=1}^{n} fi .
f (k) = f (k + q) for all k ≥ 0. Note that in this situation, f might also be periodic with a
smaller period q ′ < q. Of course, a periodic function f with period q can be specified by the
tuple (f (0), . . . , f (q − 1)). Vice versa, a tuple u = (g0 , . . . , gq−1 ) ∈ G+ defines the periodic
function fu ∈ Gω with
fu (n · q + r) = gr for n ≥ 0 and 0 ≤ r < q.
One can view this mapping as the sequence uω obtained by taking infinitely many repetitions
of u. Let Gρ be the set of all periodic functions from Gω . If f1 is periodic with period q1 and
f2 is periodic with period q2 , then f1 ◦ f2 is periodic with period q1 q2 (in fact, lcm(q1 , q2 )).
Hence, Gρ forms a countable subgroup of Gω . Note that Gρ is not finitely generated: The
subgroup generated by elements fi ∈ Gρ with period qi (1 ≤ i ≤ n) contains only functions
with period lcm(q1 , . . . , qn ). Nevertheless, using the representation of periodic functions by
elements of G+ we can define the word problem for Gρ , WP(Gρ ) for short:
Input: Tuples u1 , . . . , un ∈ G+ (elements of G are represented by finite words over Σ).
Question: Does ∏_{i=1}^{n} fui = id hold?
For n ≥ 0 we define the subgroup Gρn of all f ∈ Gρ with f (k) = 1 for all 0 ≤ k ≤ n − 1.
We also consider the uniform membership problem for subgroups Gρn , Membership(Gρ∗ ) for
short:
Input: Tuples u1 , . . . , un ∈ G+ (elements of G are represented by finite words over Σ) and
a binary encoded number m.
Question: Does ∏_{i=1}^{n} fui belong to Gρm ?
Lemma 8.1. WP(Gρ ) is TC0 -reducible to Membership(Gρ∗ ).
Proof. Let u1 , . . . , un ∈ G+ and let qi be the length of ui . Let m = lcm(q1 , . . . , qn ). We
have ∏_{i=1}^{n} fui = id if and only if ∏_{i=1}^{n} fui belongs to Gρm .
Theorem 8.2. For every finitely generated abelian group G, Membership(Gρ∗ ) belongs to
TC0 .
Proof. Since the word problem for a finitely generated abelian group belongs to TC0 , it
suffices to show the following claim:
Claim: Let u1 , . . . , un ∈ G+ and let qi be the length of ui . Let f = ∑_{i=1}^{n} fui . If there
exists a position m such that f(m) ≠ 0, then there exists a position m < ∑_{i=1}^{n} qi such that
f(m) ≠ 0.

Let m ≥ ∑_{i=1}^{n} qi . We show that if f(j) = 0 for all j with m − ∑_{i=1}^{n} qi ≤ j < m, then also
f(m) = 0, which proves the above claim.

Hence, let us assume that f(j) = 0 for all j with m − ∑_{i=1}^{n} qi ≤ j < m. Note that
fui(j) = fui(j − qi) for all j ≥ qi and 1 ≤ i ≤ n. For M ⊆ [1, n] let qM = ∑_{i∈M} qi .
Moreover, for 1 ≤ k ≤ n let Mk = {M ⊆ [1, n], |M | = k}. For all 1 ≤ k ≤ n − 1 we get
∑_{M∈Mk} ∑_{i∈M} fui(m − qM) = − ∑_{M∈Mk} ∑_{i∈[1,n]\M} fui(m − qM)
                              = − ∑_{M∈Mk} ∑_{i∈[1,n]\M} fui(m − qM − qi)
                              = − ∑_{i=1}^{n} ∑_{M∈Mk, i∉M} fui(m − qM∪{i})
                              = − ∑_{i=1}^{n} ∑_{M∈Mk+1, i∈M} fui(m − qM)
                              = − ∑_{M∈Mk+1} ∑_{i∈M} fui(m − qM).

We can write

f(m) = ∑_{i=1}^{n} fui(m) = ∑_{i=1}^{n} fui(m − qi) = ∑_{M∈M1} ∑_{i∈M} fui(m − qM).

From the above identities we get by induction:

f(m) = (−1)^{n+1} ∑_{M∈Mn} ∑_{i∈M} fui(m − qM)
     = (−1)^{n+1} ∑_{i∈[1,n]} fui(m − q[1,n])
     = (−1)^{n+1} f(m − ∑_{i=1}^{n} qi) = 0.

This proves the claim and hence the theorem.
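The following randomized sketch (ours, not from the text) checks the above claim for G = (Z, +): whenever the pointwise sum f of the periodic functions fu1 , . . . , fun vanishes on all positions below q1 + · · · + qn , it vanishes everywhere (we test a finite window). The tuple contents and all names are arbitrary choices made for the test.

import random

def periodic_sum(tuples, length):
    """Evaluate f = sum_i f_{u_i} on positions 0 .. length-1."""
    return [sum(u[p % len(u)] for u in tuples) for p in range(length)]

random.seed(0)
for _ in range(1000):
    tuples = [[random.randint(-2, 2) for _ in range(random.randint(1, 4))]
              for _ in range(random.randint(1, 4))]
    bound = sum(len(u) for u in tuples)          # q_1 + ... + q_n
    prefix = periodic_sum(tuples, bound)
    tail = periodic_sum(tuples, 5 * bound)[bound:]
    # If f vanishes on [0, bound), it must vanish everywhere (tested on a window).
    if all(v == 0 for v in prefix):
        assert all(v == 0 for v in tail)
print("claim verified on random instances")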
8.2. Automata for Cayley representations. The goal of this section is to show that if
ExpEq(G) and Membership(Gρ∗ ) both belong to NP, then also KP(G ≀ Z) belongs to NP.
An interval [a, b] ⊆ Z supports an element (f, d) ∈ G ≀ Z if {0, d} ∪ supp(f ) ⊆ [a, b]. If
(f, d) ∈ G ≀ Z is a product of length n over the generators, then the minimal interval [a, b]
which supports (f, d) satisfies b − a ≤ n. A knapsack expression E = v0 ux1 1 v1 · · · uxk k vk is
called rigid if each ui evaluates to an element (fi , 0) ∈ G ≀ Z. Intuitively, the movement of
the cursor is independent from the values of the variables xi up to repetition of loops. In
particular, every variable-free expression is rigid.
In the following we define so called Cayley representations of rigid knapsack expressions.
This is a finite word, where every symbol is a marked knapsack expression over G. A marked
knapsack expression over G is of the form E, Ē, E̲, or Ē̲, where E is a knapsack expression
over G and the bar above (resp. below) indicates a top (resp. bottom) marking. We say that
Ē and Ē̲ (resp., E̲ and Ē̲) are top-marked (resp., bottom-marked).
Let E = v0 ux1 1 v1 · · · uxk k vk be a rigid knapsack expression over G ≀ Z. For an assignment
ν let (fν , d) ∈ G ≀ Z be the element to which ν(E) evaluates, i.e. (fν , d) = ν(E). Note that
d does not depend on ν. Because of the rigidity of E, there is an interval [a, b] ⊆ Z that
supports (fν , d) for all assignments ν. For each j ∈ [a, b] let Ej be a knapsack expression
over G with the variables x1 , . . . , xk such that fν (j) = ν(Ej ) for all assignments ν. Then
we call the formal expression

r = Ea Ea+1 · · · E−1 Ē0 E1 · · · Ed−1 E̲d Ed+1 · · · Eb   if d > 0,
r = Ea Ea+1 · · · E−1 Ē̲0 E1 · · · Eb                      if d = 0,
r = Ea Ea+1 · · · Ed−1 E̲d Ed+1 · · · E−1 Ē0 E1 · · · Eb   if d < 0
a Cayley representation of E (or E is represented by r).

Figure 1. Cayley representation

Formally, a Cayley representation is
a sequence of marked knapsack expressions. For a Cayley representation r, we denote by |r|
the number of knapsack expressions in the sequence. If necessary, we separate consecutive
marked knapsack expressions in r by commas. For instance, if a1 and a2 are generators of
G, then a1 , a2 a1 , a2 is a Cayley representation of length 3, whereas a1 , a2 , a1 , a2 is a Cayley
representation of length 4. By this definition, r depends on the chosen supporting interval
[a, b]. However, compared to the representation of the minimal supporting interval, any
other Cayley representation differs only by adding 1’s (i.e., trivial knapsack expressions over
G) at the left and right end of r.
A Cayley representation of E records for each point in Z an expression that describes
which element will be placed at that point. Multiplying an element of G ≀ Z always begins
at a particular cursor position; in a Cayley representation, the marker on top specifies
the expression that is placed at the cursor position in the beginning. Moreover, a Cayley
representation describes how the cursor changes when multiplying ν(E): The marker on the
bottom specifies where the cursor is located in the end.
Example 8.3. Let us consider the wreath product F2 ≀Z where F2 is the free group generated
by {a, b} and Z is generated by t. Consider the rigid knapsack expression E = ux1 u2 uy3 u54
where
• u1 = at−1 at2 bt−1 , represented by a a b,
• u2 = t, represented by 1 1,
• u3 = btbtbt−2 , represented by b b b,
• u4 = at−1 bt2 b−1 tatat−1 , represented by b a b−1 a a.
A Cayley representation of ux1 is ax ax bx and a Cayley representation of uy3 is by by by . The
diagram in Fig. 1 illustrates how to compute a Cayley representation r of E, which is shown
in the bottom line. Here, we have chosen the minimal supporting interval. Note that if
we replace the exponent 5 in u54 by a larger number, then we only increase the number of
repetitions of the factor a, a2 in the Cayley representation.
Example 8.3 also illustrates the concept of so called consistent tuples, which will be
used later. A tuple (γ1 , . . . , γn ), where every γi is a marked knapsack expression over G
is consistent if, whenever γi is bottom-marked and i < n, then γi+1 is top-marked. Every
column in Fig. 1 is a consistent tuple.
Let E be an arbitrary knapsack expression over G ≀ Z. We can assume that E has the
form ux1 1 · · · uxk k uk+1 . We partition the set of variables X = {x1 , . . . , xk } as X = X0 ∪ X1 ,
where X0 contains all variables xi where ui evaluates to an element (f, 0) ∈ G ≀ Z, and
X1 contains all other variables. For a partial assignment ν : X1 → N we obtain a rigid
knapsack expression Eν by replacing in E every variable xi ∈ X1 by ν(xi ). A set R of
Cayley representations is a set representation of E if
• for each assignment ν : X1 → N there exists r ∈ R such that r represents Eν ,
• for each r ∈ R there exists an assignment ν : X1 → N such that r represents Eν and
ν(x) ≤ |r| for all x ∈ X1 .
Example 8.4. Let us consider again the wreath product F2 ≀ Z and consider the (non-rigid) knapsack expression E ′ = ux1 u2 uy3 uz4 where u1 , u2 , u3 , u4 are taken from Example 8.3.
We have X0 = {x, y} and X1 = {z}. For z = 5 we obtained in Example 8.3 the Cayley
representation
ax , ax b, bx by a, by , by a2 , a, a2 , a, a2 , a, a2 , ab−1 , a, a.
A set representation R of E ′ consists of the following Cayley representations:
• ax , ax , bx by , by , by for ν(z) = 0,
• ax , ax b, bx by a, by b−1 , by a, a for ν(z) = 1,
• ax , ax b, bx by a, by , by a2 , a, a2 , . . . , a, a2 , ab−1 , a, a for ν(z) ≥ 2, where the segment
  a, a2 , . . . , a, a2 consists of ν(z) − 2 repetitions of a, a2 .
Only finitely many different marked knapsack expressions appear in this set representation
R, and R is clearly a regular language over the finite alphabet consisting of these finitely
many marked knapsack expressions.
In the following, we will show that for every knapsack expression E = ux1 1 · · · uxk k uk+1
there exists a non-deterministic finite automaton (NFA) that accepts a set representation of
E, whose size is exponential in n = |E|. First, we consider the blocks ux1 1 , . . . , uxk k , uk+1 .
Lemma 8.5. One can compute in polynomial time for each 1 ≤ i ≤ k + 1 an NFA Ai of
size |ui|^{O(1)} that recognizes a set representation of ui^{xi} (for i ∈ [1, k]) or of uk+1 (for i = k + 1).
Proof. Let us do a case distinction.
Case 1. Consider an expression uxi i where xi ∈ X0 , i.e. ui evaluates to some element
(f, 0) ∈ G ≀ Z. Let [a, b] be the minimal interval which supports (f, 0). Thus, b − a ≤ |ui |.
Then
ri = f (a)xi · · · f (−1)xi f (0)xi f (1)xi · · · f (b)xi
is a Cayley representation of uxi i where |ri | = b − a + 1 ≤ |ui | + 1. Clearly, {ri } is a set
representation of uxi i , which is recognized by an NFA Ai of size |ri | + 1 ≤ |ui | + 2.
Case 2. Similarly, for the word uk+1 we obtain a Cayley representation rk+1 as above except
that the exponents xi are not present. Again, {rk+1 } is a set representation of uk+1 , which
is recognized by an NFA Ak+1 of size |uk+1 | + 2.
Case 3. Consider an expression uxi i where xi ∈ X1 , i.e., ui evaluates to some element
(f, d) ∈ G ≀ Z where d 6= 0. Let [a, b] be a minimal interval which supports (f, d), hence
b − a ≤ |ui |.
We only consider the case d > 0; at the end we say how to modify the construction for
d < 0. Consider the word
ri = f(a) · · · f(−1) f(0) f(1) · · · f(d) · · · f(b),

where f(0) is top-marked and f(d) is bottom-marked, which is a Cayley representation of (f, d).
We will prove that there is an NFA Ai with ε-transitions of size O(|ri|²) = O(|ui|²) which
recognizes a set representation of ui^{xi}. This set representation has to contain a Cayley
representation of every ui^m (a variable-free knapsack expression over G) for m ≥ 0.
First we define an auxiliary automaton B. Example 8.6 shows an example of the following
construction. Let Γ be the alphabet of ri (a set of possibly marked elements of G) and define
g : [a, b] → Γ by letting g(c) be the top-marked version of f(0) if c = 0, the bottom-marked
version of f(d) if c = d, and f(c) otherwise.
Figure 2. A run of the automaton for (at−1 bt2 b−1 tatat−1 )x
The state set of B is the set Q of all decreasing arithmetic progressions (s, s−d, s−2d, . . . , s−
ℓd) in the interval [a, b] where ℓ ≥ 0 together with a unique final state ⊤. It is not hard to
see that |Q| = O(|ri|²). For each state (s0 , . . . , sℓ ) ∈ Q we define the marked G-element
α(s0 , . . . , sℓ ) as follows: it is the product f(s0) · · · f(sℓ), top-marked if g(s0) is top-marked,
bottom-marked if g(sℓ) is bottom-marked, and unmarked if neither is the case.
Since d > 0 it cannot happen that g(s0 ) is top-marked and at the same time g(sℓ ) is
bottom-marked. The initial state is the 1-tuple (a). For each state (s0 , . . . , sℓ ) ∈ Q and
γ = α(s0 , . . . , sℓ ) the automaton has the following transitions:
• (s0 , . . . , sℓ ) −ε→ (s0 , . . . , sℓ , a) if sℓ = a + d
• (s0 , . . . , sℓ ) −γ→ (s0 + 1, . . . , sℓ + 1) if s0 < b
• (s0 , . . . , sℓ ) −γ→ (s1 + 1, . . . , sℓ + 1) if s0 = b and ℓ ≥ 1
• (s0 , . . . , sℓ ) −γ→ ⊤ if s0 = b and ℓ = 0
Finally we take the union with another automaton which accepts the singleton {1}. This
yields the desired automaton Ai .
If d < 0 we can consider the group element (f ′ , −d) with f ′ : [−b, −a] → G, f ′ (c) = f (−c)
for −b ≤ c ≤ −a. We then do the above automaton construction for (f ′ , −d). From the
resulting NFA we finally construct an automaton for the reversed language. This proves the
lemma.
Example 8.6. Below is a run of the automaton for (at−1 bt2 b−1 tatat−1 )x on the word
b, a, 1, (a2 , a)3 , a2 , ab−1 , a, a.
Fig. 2 shows how this word is produced from (at−1 bt2 b−1 tatat−1 )5 . The last line shows the
tuple of relative positions in the currently “active” copies of b, a, b−1 , a, a. The positions
are −1, 0, 1, 2, 3. For instance, the tuple (3, 1, −1) means that currently three copies of
b, a, b−1 , a, a are active. The current position in the first copy is 3, the current position in
the second copy is 1, and the current position in the third copy is -1. These tuples are
states in the run below. The only additional states (1) and (3, 1) in the run are origins of
ε-transitions, which add new copies of b, a, b−1 , a, a.
(−1) −b→ (0) −a→ (1) −ε→ (1, −1) −1→
(2, 0) −a2→ (3, 1) −ε→ (3, 1, −1) −a→
(2, 0) −a2→ (3, 1) −ε→ (3, 1, −1) −a→
(2, 0) −a2→ (3, 1) −ε→ (3, 1, −1) −a→
(2, 0) −a2→ (3, 1) −ab−1→ (2) −a→ (3) −a→ ⊤
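The word that this automaton is run on can be recomputed with the following small sketch (ours, not from the text): it multiplies m shifted copies of an element (f, d) of F2 ≀ Z and prints the resulting label of every position, with free-group elements encoded as reduced words over a, b, A (= a−1), B (= b−1). All function names are our own.

def reduce_word(w):
    """Freely reduce a word over a, b, A, B (upper case = inverse letter)."""
    out = []
    for c in w:
        if out and out[-1] == c.swapcase():
            out.pop()
        else:
            out.append(c)
    return "".join(out)

def power_labels(f, d, m):
    """Labels of Z after multiplying m copies of (f, d); copy i is shifted by i*d."""
    labels = {}
    for i in range(m):
        for pos, word in f.items():
            p = pos + i * d
            labels[p] = reduce_word(labels.get(p, "") + word)
    lo, hi = min(labels), max(labels)
    return [labels.get(p, "") or "1" for p in range(lo, hi + 1)]

# u4 = a t^-1 b t^2 b^-1 t a t a t^-1 from Example 8.3: f(-1)=b, f(0)=a,
# f(1)=b^-1, f(2)=a, f(3)=a, and d = 2.
f4 = {-1: "b", 0: "a", 1: "B", 2: "a", 3: "a"}
print(power_labels(f4, 2, 5))
# ['b', 'a', '1', 'aa', 'a', 'aa', 'a', 'aa', 'a', 'aa', 'aB', 'a', 'a']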
A language L ⊆ Σ∗ is bounded if there exist words β1 , . . . , βn ∈ Σ∗ such that L ⊆ β1∗ · · · βn∗ .
It will be convenient to use the following characterization. For states p, q of an automaton
B, let Lp,q (B) be the set of all words read on a path from p to q. An NFA B recognizes
a bounded language if and only if for every state q, the language Lq,q (B) is commutative,
meaning that uv = vu for any u, v ∈ Lq,q (B) [5].
Lemma 8.7. Given an NFA B that recognizes a bounded language, one can compute in
polynomial time words β1 , . . . , βn with L(B) ⊆ β1∗ · · · βn∗ .
Proof. For any two states p, q with Lp,q(B) ≠ ∅, compute a shortest word wp,q ∈ Lp,q(B)
and let Pq = u1∗ · · · um∗ , where wq,q = u1 · · · um and u1 , . . . , um are letters.
We first prove the lemma for the languages Lp,q = Lp,q(B) if p, q lie in the same strongly
connected component. Any two words in Lp,q have to be comparable in the prefix order:
Otherwise we could construct two distinct words of equal length in Lp,p , contradicting the
commutativity of Lp,p . Since wp,q wq,q∗ ⊆ Lp,q , this means that every word in Lp,q must be
a prefix of a word in wp,q wq,q∗ . In particular, we have Lp,q ⊆ wp,q wq,q∗ Pq .
In the general case, we assume that B has only one initial state s. We decompose B into
strongly connected components, yielding a directed acyclic graph Γ with vertices V . For
i ≤ |V |, let Di = {v ∈ V | v has distance i from [s] in Γ}, where [s] denotes the strongly
connected component of s. Observe that L(B) ⊆ ∏_{i=0}^{|V|} ∏_{v∈Di} ∏_{p,q∈v} Lp,q , where the two
innermost products are carried out in an arbitrary order. Since we have established the
lemma in the case of the Lp,q , this tells us how to perform the computation for L(B).
Lemma 8.8. The NFAs Ai from Lemma 8.5 recognize bounded languages.
Proof. The statement is clear for the automata which recognize singleton languages in cases
1. and 2. Consider the constructed automaton B from case 3. It is almost deterministic
in the following sense: Every state in B has at most one outgoing transition labelled by a
symbol from the alphabet and at most one outgoing ε-transition.
We partition its state set as Q = Q0 ⊎ Q1 , where Q0 consists of those states (s0 , . . . , sℓ )
where sℓ ≤ a + d. Since there is no transition from Q1 to Q0 , every strongly connected
component is either entirely within Q0 or entirely within Q1 . If a state q has an outgoing
ε-transition, then q ∈ Q0 and all non-ε-transitions from q lead into Q1 . Therefore, every
state in B has at most one outgoing transition that leads into the same strongly connected
component. Thus, every strongly connected component is a directed cycle, meaning that
Lq,q (B) = w∗ , where w is the word read on that cycle. Hence, B recognizes a bounded
language. Hence also L(Ai ) = L(B) ∪ {1} is bounded.
Lemma 8.9. There exists an NFA A of size ∏_{i=1}^{k+1} O(|ui|) ≤ 2^{O(n log n)} which recognizes a
set representation of E, where n = |E|.
Proof. Reconsider the automata Ai from Lemma 8.5. We first ensure that for all 1 ≤ i ≤ k+1
we have L(Ai ) = 1∗ L(Ai ) 1∗ , which can be achieved using two new states in Ai . Let Ei be
the finite alphabet of marked knapsack expressions that occur as labels in Ai and let E be
the set of consistent tuples in the cartesian product E1 × · · · × Ek+1 .
Let A′ be the following product NFA over the alphabet E. It stores a (k + 1)-tuple of
states (one for each NFA Ai ). On input of a consistent tuple (γ1 , . . . , γk+1 ) ∈ E it reads γi
into Ai . The size of A′ is ∏_{i=1}^{k+1} O(|ui|) ≤ 2^{O(n log n)} . To obtain the NFA A we project the
transition labels of A′ as follows: Let (γ1 , . . . , γk+1 ) ∈ E and let (χ1 , . . . , χk+1 ) be obtained by
removing all markings from the γi . We then replace the transition label (γ1 , . . . , γk+1 ) by
• χ1 · · · χk+1 (unmarked) if neither γ1 is top-marked nor γk+1 is bottom-marked,
• χ1 · · · χk+1 top-marked if γ1 is top-marked and γk+1 is not bottom-marked,
• χ1 · · · χk+1 bottom-marked if γ1 is not top-marked and γk+1 is bottom-marked,
• χ1 · · · χk+1 top- and bottom-marked if γ1 is top-marked and γk+1 is bottom-marked.
One can verify that A recognizes a set representation of E.
Proposition 8.10. Let G be a finitely generated abelian group. If ExpEq(G) ∈ NP and
Membership(Gρ∗ ) ∈ NP, then also KP(G ≀ Z) ∈ NP.
Proof. We first claim that, if E = 1 is solvable, then there exists a solution ν such that ν(x) is
exponentially bounded in n for all x ∈ X1 . Assume that ν is a solution for E = 1. From the
NFA A, we obtain an automaton A′ by replacing each knapsack expression in the alphabet
of A by its value under ν in G. Then, A′ has the same number of states as A, hence at most
2O(n log n) . Moreover, A′ accepts a Cayley representation of the identity of G ≀ Z (which is
just a sequence of 1’s). Due to the size bound, A′ accepts such a representation of length
2O(n log n) . Since A accepts a set representation of E, this short computation corresponds to
a solution ν ′ . By definition of a set representation, for each x ∈ X1 , A′ makes at least ν ′ (x)
steps. Therefore, ν ′ (x) is bounded exponentially for x ∈ X1 .
Since each Ai accepts a set representation of uxi i , i ∈ [1, k] or of uk+1 , this implies that
solvability of E is witnessed by words α1 , . . . , αk+1 with αi ∈ L(Ai ) for i ∈ [1, k + 1] whose
length is bounded exponentially.
In the following we will encode exponentially long words as follows: A cycle compression
of a word w is a sequence (β1 , ℓ1 , . . . , βm , ℓm ) where each βi is a word and each ℓi ≥ 0 is a
binary encoded integer such that there exists a factorization w = w1 · · · wm and each factor
wi is the prefix of βiω of length ℓi . Each wi is called a cycle factor in w.
We need the following simple observation. Let (β1 , ℓ1 , . . . , βm , ℓm ) be a cycle compression
of a word w with the corresponding factorization w = w1 · · · wm . Given a position p in w
which yields factorizations w = uv, u = w1 · · · wi−1 wi′ , v = wi′′ wi+1 · · · wm and wi = wi′ wi′′ .
Splitting (β1 , ℓ1 , . . . , βm , ℓm ) at position p yields the unique cycle compression of w of the
form
(β1 , ℓ1 , . . . , βi−1 , ℓi−1 , βi′ , ℓ′i , βi′′ , ℓ′′i , . . . , βm , ℓm )
where |wi′ | = ℓ′i and |wi′′ | = ℓ′′i . Clearly, splitting can be performed in polynomial time.
With the help of splitting operations we can also remove a given set of positions from a
cycle compressed word in polynomial time.
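The following sketch (ours, not from the text) implements the splitting operation just described: a word is stored as pairs (βi , ℓi), and splitting at a position p refines the pair containing p. That the second piece uses a rotated copy of βi is our implementation choice for keeping it a prefix of a power; all names are illustrative.

def expand(comp):
    """Expand a cycle compression [(beta_1, l_1), ...] into the word it encodes."""
    return "".join((b * (l // len(b) + 1))[:l] for b, l in comp)

def split(comp, p):
    """Split the cycle compression at position p (0-based) of the encoded word."""
    out, seen = [], 0
    for b, l in comp:
        if seen < p < seen + l:
            cut = p - seen
            out.append((b, cut))
            off = cut % len(b)                        # the tail starts mid-cycle,
            out.append((b[off:] + b[:off], l - cut))  # so rotate the base word
        else:
            out.append((b, l))
        seen += l
    return out

comp = [("ab", 5), ("c", 3)]                          # encodes "ababa" + "ccc"
print(expand(comp))                                   # ababaccc
print(expand(split(comp, 3)) == expand(comp))         # True: splitting preserves the word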
This leads us to our NP-algorithm: First we construct the NFAs Ai as above. By
Lemma 8.8 each NFA Ai recognizes a bounded language. Hence for each i ∈ [1, k + 1],
Lemma 8.7 allows us to compute in polynomial time words βi,1 , . . . , βi,mi such that L(Ai) ⊆
βi,1∗ · · · βi,mi∗ . For each Ai we guess a cycle compression (βi,1 , ℓi,1 , . . . , βi,mi , ℓi,mi ) of a word
αi such that the words α1 , . . . , αk+1 have equal length ℓ. Then, we test in polynomial time
whether αi is accepted by Ai (this is a restricted case of the compressed membership problem
of a regular language [15]). Next we verify in polynomial time whether the markers of the
αi are consistent and whether the position of the origin in α1 coincides with the position of
the cursor in αk+1 . If so, we remove all markers from the words αi .
Finally we reduce to instances of ExpEq(G) and Membership(Gρ∗ ). Denote with P =
{p1 , . . . , pr } ⊆ [1, ℓ] the set of positions p such that there exists a variable xi ∈ X0 occurring
in αi [p], which is the expression at position p in αi . Note that if a variable xi ∈ X0
occurs in αi , then by definition of X0 and set representations, αi contains at most |ui |O(1)
positions with an expression ≠ 1. We can therefore compute P in polynomial time and obtain
an instance of ExpEq(G) containing the expression α1 [pj ] · · · αk+1 [pj ] for each j ∈ [1, r].
We then remove the positions in P from the words αi and compute cycle compressions
(βi,1 , ℓi,1 , . . . , βi,mi , ℓi,mi ) of the new words αi in polynomial time.
The remaining words reduce to instances of Membership(Gρ∗ ) as follows: Consider the set
of at most ∑_{i=1}^{k+1} mi positions at which some cycle factor begins in αi . By splitting all words
αi along these positions we obtain new cycle compressions of the form (βi,1 , ℓ1 , . . . , βi,m , ℓm )
of αi , i.e., the j-th cycle factor has uniform length across all αi . From this representation
one easily obtains m instances of Membership(Gρ∗ ).
Proposition 8.10 yields the NP upper bound for Theorem 5.7: If G is a finitely generated
abelian group, then G ≅ Zn ⊕ ⨁_{i=1}^{m} (Z/riZ) for some n, r1 , . . . , rm ∈ N, so that ExpEq(G)
corresponds to the solvability problem for linear equation systems over the integers, possibly
with modulo-constraints (if m > 0). This is a well known problem in NP. Moreover,
Membership(Gρ∗ ) belongs to TC0 by Theorem 8.2.
It remains to prove the NP-hardness part of Theorem 5.7, which is the content of the
next section.
8.3. NP-hardness.
Theorem 8.11. If G is non-trivial, then KP(G ≀ Z) is NP-hard.
Proof. Since every non-trivial group contains a non-trivial cyclic group, we may assume
that G is non-trivial and abelian. We reduce from 3-dimensional matching, 3DM for short.
In this problem, we have a set of triples T = {e1 , . . . , et } ⊆ [1, q] × [1, q] × [1, q] for some
q ≥ 1, and the question is whether there is a subset M ⊆ T such that |M| = q and all pairs
(i, j, k), (i′, j′, k′) ∈ M with (i, j, k) ≠ (i′, j′, k′) satisfy i ≠ i′, j ≠ j′ and k ≠ k′; such a set
M is called a matching. Since we will write all group operations multiplicatively, we denote
the generator of Z by a.
Let G be a non-trivial group and g ∈ G \ {1}. We reduce 3DM to KP(G ≀ Z) in the
following way: for every el = (i, j, k) ∈ T let
wl = a^i g a^{q−i+j} g a^{q−j+k} g a^{−2q−k+(3q+1)l} g a^{−(3q+1)l}
   = a^i g a^{q−i+j} g a^{q−j+k} g a^{−2q−k} · a^{(3q+1)l} g a^{−(3q+1)l},

where ul = a^i g a^{q−i+j} g a^{q−j+k} g a^{−2q−k} and vl = a^{(3q+1)l} g a^{−(3q+1)l} denote the two
factors in the second line.
Intuitively, ul is the word that puts g on positions i, q + j and 2q + k, and vl puts g on
position (3q + 1)l and then moves the cursor back to 0. Hence, vl is contained in G(Z) and
thus commutes with every element of G ≀ Z (recall that G is abelian).
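As a concrete illustration of the reduction (ours, not from the text), the following sketch writes down the words wl for G = Z/2Z, so that g = g−1 and the single letter 'g' suffices; 'a' and 'A' stand for the generator of Z and its inverse. The helper names and the tiny instance are assumptions made for the example.

def t(n):
    """The word a^n (negative n gives inverse letters A)."""
    return ("a" * n) if n >= 0 else ("A" * (-n))

def w(l, i, j, k, q):
    """The word w_l for the triple e_l = (i, j, k)."""
    return (t(i) + "g" + t(q - i + j) + "g" + t(q - j + k) + "g"
            + t(-2 * q - k + (3 * q + 1) * l) + "g" + t(-(3 * q + 1) * l))

# T = {(1,1,1), (2,2,2)} with q = 2; choosing both triples is a matching.
print(w(1, 1, 1, 1, 2))
print(w(2, 2, 2, 2, 2))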
We define the knapsack expression
E = w1^{x1} · · · wt^{xt} (ag^{−1})^{3q} a^{−3q} ∏_{i=1}^{q} (a^{(3q+1)yi} g^{−1}) · a^{−(3q+1)y_{q+1}}
with variables x1 , . . . , xt , y1 , . . . , yq+1 . For all values of these variables, the following equivalences hold:

w1^{x1} · · · wt^{xt} (ag^{−1})^{3q} a^{−3q} ∏_{i=1}^{q} (a^{(3q+1)yi} g^{−1}) a^{−(3q+1)y_{q+1}} = 1
⇔ u1^{x1} · · · ut^{xt} (ag^{−1})^{3q} a^{−3q} v1^{x1} · · · vt^{xt} ∏_{i=1}^{q} (a^{(3q+1)yi} g^{−1}) a^{−(3q+1)y_{q+1}} = 1
⇔ E1 = 1 and E2 = 1,

where E1 = u1^{x1} · · · ut^{xt} (ag^{−1})^{3q} a^{−3q} and E2 = v1^{x1} · · · vt^{xt} ∏_{i=1}^{q} (a^{(3q+1)yi} g^{−1}) a^{−(3q+1)y_{q+1}} .
The second equivalence holds because (i) for all values of the variables, the word E1 only
affects positions from the interval [1, 3q], whereas the word E2 only affects positions that
are multiples of 3q + 1 and (ii) E2 represents a word in G(Z) .
First assume that there is a matching M ⊆ T . We define a valuation ν for E by ν(xi) = 1
if ei ∈ M and ν(xi) = 0 if ei ∉ M . Let M = {em1 , . . . , emq } such that mi < mj for i < j
and let m0 = 0. Then we set ν(yi) = mi − mi−1 for 1 ≤ i ≤ q, and ν(yq+1) = mq . Since M
is a matching, we have

ν(u1^{x1} · · · ut^{xt}) = ∏_{el∈M} ul = (ag)^{3q} a^{−3q}
and thus ν(E1) = 1. Furthermore, we have

ν(v1^{x1} · · · vt^{xt}) = ∏_{i=1}^{q} a^{(3q+1)mi} g a^{−(3q+1)mi} = ∏_{i=1}^{q} (a^{(3q+1)(mi−mi−1)} g) · a^{−(3q+1)mq}

and thus ν(E2) = 1.
Now assume that there is a valuation ν for E with ν(E1 ) = ν(E2 ) = 1. Let ni = ν(xi )
and mi = ν(yi ). For every 1 ≤ l ≤ t, we must have g nl ∈ {1, g}, i.e., nl ≡ 0 mod ord(g) or
nl ≡ 1 mod ord(g). We first show that q′ := #{l | nl ≡ 1 mod ord(g)} = q. This follows from
ν(E2) = 1 and the fact that the effect of ∏_{i=1}^{q} a^{(3q+1)mi} g^{−1} is to multiply the G-elements
at exactly q many positions p (p ≡ 0 mod (3q + 1)) with g^{−1} . Hence, the effect of v1^{n1} · · · vt^{nt}
must be to multiply the G-elements at exactly q many positions p (p ≡ 0 mod (3q + 1)) with
g. But this means that q ′ = q.
So we can assume that q ′ = q. We finally show that M = {el | nl ≡ 1 mod ord(g)} ⊆ T
is a matching: Assume that there are e = (i, j, k) ∈ M and e′ = (i′ , j ′ , k ′ ) ∈ M with
i = i′ , j = j ′ or k = k ′ . Since q ′ = q this would imply that at most 3q − 1 positions p with
1 ≤ p ≤ 3q can be set to g by the word u1^{n1} · · · ut^{nt} . But then, (ag^{−1})^{3q} a^{−3q} would leave
a position with value g^{−1} , and hence ν(E1) ≠ 1. Hence, M must be a matching. Notice
that the argument of the whole proof still works if we allow the variables
x1 , . . . , xt , y1 , . . . , yq+1 to range over the integers instead of the naturals.
Note that the above NP-hardness proof also works for the subset sum problem, where the
range of the valuation is restricted to {0, 1}. Moreover, if the word problems for two groups
G and H can be solved in polynomial time, then the word problem for G ≀ H can be solved in
polynomial time as well [21]. This implies that subset sum for G ≀ H belongs to NP. Thus,
we obtain:
Theorem 8.12. Let G and H be non-trivial finitely generated groups and assume that H
contains an element of infinite order. Then, the subset sum problem for G ≀ H is NP-hard. If,
moreover, the word problems for G and H can be solved in polynomial time, then the subset
sum problem for G ≀ H is NP-complete.
9. Open problems
Our results yield decidability of KP(G ≀ H) for almost all groups G and H that are known
to satisfy the necessary conditions. However, we currently have no complete characterization
of those G and H for which KP(G ≀ H) is decidable.
Several interesting open problems concerning the complexity of knapsack for wreath products remain. We are confident that our NP upper bound for KP(G ≀ Z), where G is finitely
generated abelian, can be extended to KP(G ≀ F ) for a finitely generated free group F as well
as to KP(G ≀ Zk ). Another question is whether the assumption on G being abelian can be
weakened. In particular, we want to investigate whether polynomial time algorithms exist
for Membership(Gρ∗ ) for certain non-abelian groups G.
The complexity of knapsack for free solvable groups is open as well. Our decidability proof
uses the preservation of knapsack-semilinearity under wreath products (Theorem 5.5). Our
construction in the proof of Theorem 5.5 adds for every application of the wreath product a
∀∗ ∃∗ -quantifier prefix in the formula describing the solution set. Since a free solvable group
of class d and rank r is embedded into a d-fold iterated wreath product of Zr , this leads to a
Π2(d−1) -formula (for d = 1, we clearly have a Π0 -formula). The existence of a solution is then
expressed by a Σ2d−1 -formula. Haase [9] has shown that the Σi+1 -fragment of Presburger
arithmetic is complete for the i-th level of the so-called weak EXP hierarchy. In addition
to the complexity resulting from the quantifier alternations in Presburger arithmetic, our
algorithm incurs a doubly exponential increase in the formula size for each application of
the wreath product. This leads to the question whether there is a more efficient algorithm
for knapsack over free solvable groups.
Finally, we are confident that with our techniques from [18] one can also show preservation
of knapsack-semilinearity under graph products.
References
[1] Tara C. Davis and Alexander Yu. Olshanskii. Subgroup distortion in wreath products of cyclic groups.
Journal of Pure and Applied Algebra, 215(12):2987–3004, 2011.
[2] Samuel Eilenberg and Marcel P. Schützenberger. Rational sets in commutative monoids. Journal of
Algebra, 13:173–191, 1969.
[3] Michael Elberfeld, Andreas Jakoby, and Till Tantau. Algorithmic meta theorems for circuit classes of
constant and logarithmic depth. Electronic Colloquium on Computational Complexity (ECCC), 18:128,
2011.
[4] Elizaveta Frenkel, Andrey Nikolaev, and Alexander Ushakov. Knapsack problems in products of groups.
Journal of Symbolic Computation, 74:96–108, 2016.
[5] Pawel Gawrychowski, Dalia Krieger, Narad Rampersad, and Jeffrey Shallit. Finding the growth rate of a
regular or context-free language in polynomial time. International Journal of Foundations of Computer
Science, 21(04):597–618, 2010.
[6] Etienne Ghys and Pierre de la Harpe. Sur les groupes hyperboliques d’après Mikhael Gromov. Progress
in mathematics. Birkhäuser, 1990.
[7] Seymour Ginsburg and Edwin H. Spanier. Semigroups, Presburger formulas, and languages. Pacific
Journal of Mathematics, 16(2):285–296, 1966.
[8] Christoph Haase. On the complexity of model checking counter automata. PhD thesis, University of
Oxford, St Catherine’s College, 2011.
[9] Christoph Haase. Subclasses of Presburger arithmetic and the weak EXP hierarchy. In Joint Meeting
of the Twenty-Third EACSL Annual Conference on Computer Science Logic (CSL) and the TwentyNinth Annual ACM/IEEE Symposium on Logic in Computer Science (LICS), CSL-LICS 2014, pages
47:1–47:10. ACM, 2014.
[10] William Hesse, Eric Allender, and David A. Mix Barrington. Uniform constant-depth threshold circuits
for division and iterated multiplication. Journal of Computer and System Sciences, 65:695–716, 2002.
[11] Richard M. Karp. Reducibility among combinatorial problems. In R. E. Miller and J. W. Thatcher,
editors, Complexity of Computer Computations, pages 85–103. Plenum Press, 1972.
[12] Daniel König, Markus Lohrey, and Georg Zetzsche. Knapsack and subset sum problems in nilpotent,
polycyclic, and co-context-free groups. In Algebra and Computer Science, volume 677 of Contemporary
Mathematics, pages 138–153. American Mathematical Society, 2016.
[13] Daniel König and Markus Lohrey. Evaluation of circuits over nilpotent and polycyclic groups. Algorithmica, 2017.
[14] Jörg Lehnert and Pascal Schweitzer. The co-word problem for the Higman–Thompson group is context-free. Bulletin of the London Mathematical Society, 39(2):235–241, 2007.
[15] Markus Lohrey. Algorithmics on SLP-compressed strings: A survey. Groups Complexity Cryptology,
4(2):241–299, 2012.
[16] Markus Lohrey, Benjamin Steinberg, and Georg Zetzsche. Rational subsets and submonoids of wreath
products. Information and Computation, 243:191–204, 2015.
[17] Markus Lohrey and Georg Zetzsche. Knapsack in graph groups, HNN-extensions and amalgamated
products. CoRR, abs/1509.05957, 2015.
[18] Markus Lohrey and Georg Zetzsche. Knapsack in graph groups, HNN-extensions and amalgamated
products. In Nicolas Ollinger and Heribert Vollmer, editors, Proc. of the 33rd International Symposium on Theoretical Aspects of Computer Science (STACS 2016), volume 47 of Leibniz International
Proceedings in Informatics (LIPIcs), pages 50:1–50:14, Dagstuhl, Germany, 2016. Schloss Dagstuhl–
Leibniz-Zentrum fuer Informatik.
[19] Markus Lohrey and Georg Zetzsche. The complexity of knapsack in graph groups. In Proceedings of
the 34th Symposium on Theoretical Aspects of Computer Science, STACS 2017, volume 66 of LIPIcs,
pages 52:1–52:14. Schloss Dagstuhl - Leibniz-Zentrum für Informatik, 2017.
[20] Wilhelm Magnus. On a theorem of Marshall Hall. Annals of Mathematics. Second Series, 40:764–768,
1939.
[21] Alexei Miasnikov, Svetla Vassileva, and Armin Weiß. The conjugacy problem in free solvable groups
and wreath products of abelian groups is in TC0 . In Computer Science – Theory and Applications –
12th International Computer Science Symposium in Russia, CSR 2017, Proceedings, volume 10304 of
Lecture Notes in Computer Science, pages 217–231. Springer, 2017.
[22] Alexei Mishchenko and Alexander Treier. Knapsack problem for nilpotent groups. Groups Complexity
Cryptology, 9(1):87–98, 2017.
[23] Alexei Myasnikov, Andrey Nikolaev, and Alexander Ushakov. Knapsack problems in groups. Mathematics of Computation, 84:987–1016, 2015.
[24] Alexei Myasnikov and Andrey Nikolaev. Verbal subgroups of hyperbolic groups have infinite width.
Journal of the London Mathematical Society, 90(2):573–591, 2014.
[25] Alexei Myasnikov and Armin Weiß. TC0 circuits for algorithmic problems in nilpotent groups. CoRR,
abs/1702.06616, 2017.
[26] Andrey Nikolaev and Alexander Ushakov. Subset sum problem in polycyclic groups. Journal of Symbolic
Computation, 84:84–94, 2018.
[27] Charles Sims. Computation with finitely presented groups. Cambridge University Press, 1994.
Appendix A. Hyperbolic groups
Let G be a finitely generated group with the finite symmetric generating set Σ. The
Cayley-graph of G (with respect to Σ) is the undirected graph Γ = Γ(G) with node set G
and all edges (g, ga) for g ∈ G and a ∈ Σ. We view Γ as a geodesic metric space, where every
edge (g, ga) is identified with a unit-length interval. It is convenient to label the directed
edge from g to ga with the generator a. The distance between two points p, q is denoted
with dΓ (p, q). For g ∈ G let |g| = dΓ (1, g). For r ≥ 0, let Br (1) = {g ∈ G | dΓ (1, g) ≤ r}.
Given a word w ∈ Σ∗ , one obtains a unique path P [w] that starts in 1 and is labelled
with the word w. This path ends in the group element represented by w. More generally, for
g ∈ G we denote with g · P [w] the path that starts in g and is labelled with w. We will only
consider paths of the form g · P [w]. One views g · P [w] as a continuous mapping from the
real interval [0, |w|] to Γ. Such a path P : [0, n] → Γ is geodesic if dΓ (P (0), P (n)) = n; it is a
(λ, ǫ)-quasigeodesic if for all points p = P (a) and q = P (b) we have |a − b| ≤ λ · dΓ (p, q) + ǫ.
We say that a path P : [0, n] → Γ is a path from P (0) to P (n). A word w ∈ Σ∗ is geodesic if
the path P [w] is geodesic.
A geodesic triangle consists of three points p, q, r ∈ G and geodesic paths Pp,q , Pp,r , Pq,r
(the three sides of the triangle), where Px,y is a path from x to y. For δ ≥ 0, the group
G is δ-hyperbolic, if for every geodesic triangle, every point p on one of the three sides has
distance at most δ from a point belonging to one of the two sides that are opposite of p.
Finally, G is hyperbolic, if it is δ-hyperbolic for some δ ≥ 0. Finitely generated free groups
are for instance 0-hyperbolic. The property of being hyperbolic is independent of the
chosen generating set. The word problem for every hyperbolic group is decidable in linear
time. This allows one to compute for a given word w an equivalent geodesic word; the best
known algorithm is quadratic.
Let us fix a δ-hyperbolic group G with the finite symmetric generating set Σ for the
further discussion.
Lemma A.1 (c.f. [6, 8.21]). Let g ∈ G be of infinite order and let n ≥ 1. Let u be a
geodesic word representing g. Then the path P [un ] is a (λ, ǫ)-quasigeodesic, where λ = |g|N ,
ǫ = 2|g|2 N 2 + 2|g|N and N = |B2δ (1)|.
Consider two paths P1 : [0, n1 ] → Γ, P2 : [0, n2 ] → Γ and let K be a positive real number.
We say that P1 and P2 asynchronously K-fellow travel if there exist two continuous nondecreasing mappings ϕ1 : [0, 1] → [0, n1 ] and ϕ2 : [0, 1] → [0, n2 ] such that ϕ1 (0) = ϕ2 (0) = 0,
ϕ1 (1) = n1 , ϕ2 (1) = n2 and for all 0 ≤ t ≤ 1, dΓ (P1 (ϕ1 (t)), P2 (ϕ2 (t))) ≤ K. Intuitively, this
means that one can travel along the paths P1 and P2 asynchronously with variable speeds
such that at any time instant the current points have distance at most K.
Lemma A.2 (c.f. [24]). Let P1 and P2 be (λ, ǫ)-quasigeodesic paths in ΓG and assume that
Pi starts in gi and ends in hi . Assume that dΓ (g1 , h1 ), dΓ (g2 , h2 ) ≤ h. Then there exists a
computable bound K = K(δ, λ, ǫ, h) ≥ h such that P1 and P2 asynchronously K-fellow travel.
A.1. Hyperbolic groups are knapsack-semilinear. In this section, we prove the following result:
Theorem A.3. Every hyperbolic group is knapsack-semilinear.
Let us fix a δ-hyperbolic group G and let Σ be a finite symmetric generating set for G.
We first consider knapsack instances of depth 2.
Lemma A.4. For all g1 , h1 , g2 , h2 ∈ G such that g1 and g2 have infinite order, the set
{(x1 , x2 ) | h1 g1x1 = g2x2 h2 in G} is effectively semilinear.
Proof. The semilinear subsets of Nk are exactly the rational subsets of Nk [2]. A subset
A ⊆ Nk is rational if it is a homomorphic image of a regular set of words. In other words,
there exists a finite automaton with transitions labeled by elements of Nk such that A is
the set of v ∈ Nk that are obtained by summing the transition labels along a path from
the initial state to a final state. We prove that the set {(x1 , x2 ) | h1 g1x1 = g2x2 h2 in G} is
effectively rational.
Let ui be a geodesic word representing gi and let ℓi = |ui |. Assume that n1 , n2 ≥ 1
are such that h1 g1n1 = g2n2 h2 . Let P1 = h1 · P [un1 1 ] and let P2 = P [un2 2 ]. By Lemma A.1,
P1 and P2 are (λ, ǫ)-quasigeodesics, where λ and ǫ only depend on δ, |u1 | and |u2 |. By
Lemma A.2, the paths P1 and P2 asynchronously K-fellow travel, where K is a computable
bound that only depends on δ, λ, ǫ, |g1 |, |h1 |, |g2 |, |h2 |. Let ϕ1 : [0, 1] → [0, n1 · ℓ1 ] and
ϕ2 : [0, 1] → [0, n2 · ℓ2 ] be the corresponding continuous non-decreasing mappings.
Let p1,i = h1 g1i = P1 (i · ℓ1 ) for 0 ≤ i ≤ n1 and p2,j = g2j = P2 (j · ℓ2 ) for 0 ≤ j ≤ n2 . Thus,
p1,i is a point on P1 and p2,j is a point on P2 . We define the binary relation R ⊆ {p1,i | 0 ≤
i ≤ n1 } × {p2,j | 0 ≤ j ≤ n2 } by
R = {(p1,i , p2,j ) | ∃r ∈ [0, 1] : ϕ1 (r) ∈ [i · ℓ1 , (i + 1) · ℓ1 ), ϕ2 (r) ∈ [j · ℓ2 , (j + 1) · ℓ2 )}.
Thus, we take all pairs (P1 (ϕ1 (r)), P2 (ϕ2 (r))), and push the first (resp., second) point in this
pair back along P1 (resp., P2 ) to the next point p1,i (resp., p2,j ). Then R has the following
properties:
• (0, 0), (n1 , n2 ) ∈ R
• If (p1,i , p2,j ) ∈ R and (i, j) ≠ (n1 , n2 ) then one of the following pairs also belongs to
R: (p1,i+1 , p2,j ), (p1,i , p2,j+1 ), (p1,i+1 , p2,j+1 ).
• If (p1,i , p2,j ) ∈ R, then dΓ (p1,i , p2,j ) ≤ K + ℓ1 + ℓ2 .
Let r = K + ℓ_1 + ℓ_2. We can now construct a finite automaton over N × N that accepts the set {(x_1, x_2) | h_1 g_1^{x_1} = g_2^{x_2} h_2 in G}. The set of states consists of B_r(1). The initial state is h_1, the final state is h_2. Finally, the transitions are the following:
• p —(0,1)→ q for p, q ∈ B_r(1) if p = g_2 q
• p —(1,0)→ q for p, q ∈ B_r(1) if p g_1 = q
• p —(1,1)→ q for p, q ∈ B_r(1) if p g_1 = g_2 q
By the above consideration, it is clear that this automaton accepts the set {(x_1, x_2) | h_1 g_1^{x_1} = g_2^{x_2} h_2 in G}.
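As an illustration of this construction (ours, not part of the original proof), the small sketch below instantiates the automaton for the free group F(a, b), which is hyperbolic, with group elements represented as reduced words. The ball radius r is taken here as an explicit parameter rather than the computable bound K + ℓ_1 + ℓ_2 from Lemmas A.1 and A.2, and accepted pairs are only enumerated up to a bound; the transitions are exactly the ones defined above.

def inv(w):
    # Inverse of a reduced word over letters 'a','A','b','B' ('A' stands for a^-1).
    flip = {'a': 'A', 'A': 'a', 'b': 'B', 'B': 'b'}
    return ''.join(flip[c] for c in reversed(w))

def mul(u, v):
    # Product of two reduced words, with free reduction.
    u = list(u)
    for c in v:
        if u and u[-1] == inv(c):
            u.pop()
        else:
            u.append(c)
    return ''.join(u)

def accepted_pairs(h1, g1, g2, h2, r, bound):
    # Pairs (x1, x2) with 0 <= x1, x2 <= bound such that h1 g1^x1 = g2^x2 h2,
    # obtained by running the automaton whose states are the elements of length <= r.
    # Invariant: after reading counter values (x1, x2) the state is g2^-x2 h1 g1^x1.
    g2i = inv(g2)
    reach = {(0, 0): ({h1} if len(h1) <= r else set())}
    result = set()
    for x1 in range(bound + 1):
        for x2 in range(bound + 1):
            states = reach.get((x1, x2), set())
            if h2 in states:
                result.add((x1, x2))
            for p in states:
                steps = [((x1 + 1, x2), mul(p, g1)),                # label (1,0)
                         ((x1, x2 + 1), mul(g2i, p)),               # label (0,1)
                         ((x1 + 1, x2 + 1), mul(g2i, mul(p, g1)))]  # label (1,1)
                for key, q in steps:
                    if len(q) <= r and key[0] <= bound and key[1] <= bound:
                        reach.setdefault(key, set()).add(q)
    return result

# Example: g1 = g2 = 'a', h1 = '', h2 = 'a' encodes a^x1 = a^x2 * a, i.e. x1 = x2 + 1.
print(sorted(accepted_pairs('', 'a', 'a', 'a', r=3, bound=5)))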
We can now prove Theorem A.3.
Proof of Theorem A.3. Consider a knapsack expression E = v_1 u_1^{x_1} v_2 u_2^{x_2} v_3 · · · u_k^{x_k} v_{k+1}. We want to show that the set of all solutions of E = 1 is a semilinear subset of N^k. For this we construct a Presburger formula with free variables x_1, . . . , x_k that is equivalent to E = 1. We do this by induction on the depth k. Hence, our Presburger formula may also contain knapsack equations of the form F = 1, where F has depth at most k − 1.
Let g_i ∈ G be the group element represented by the word u_i. In a hyperbolic group the order of torsion elements is bounded by a fixed constant that only depends on the group (see also the proof of [23, Theorem 6.7]). This makes it possible to check for each g_i whether it has finite order, and to compute the order in the positive case. Assume that g_i has finite order m_i. We then produce for every number 0 ≤ d ≤ m_i − 1 a knapsack instance of depth k − 1 by replacing u_i^{x_i} by u_i^d, which by induction can be transformed into an equivalent Presburger formula. We then take the disjunction of all these Presburger formulae for all 0 ≤ d ≤ m_i − 1. A similar argument shows that it suffices to construct a Presburger formula describing all solutions in N_+^k (where N_+ = N \ {0}).
Figure 3. The (k + 1)-gon for k = 5 from the proof of Theorem A.3 (sides labelled u_1^{n_1}, . . . , u_5^{n_5} and v).
By the above discussion, we can assume that all u_i represent group elements of infinite order. The case k ≤ 2 is covered by Lemma A.4. Hence, we assume that k ≥ 3. By the above remark, we only need to consider valuations ν such that ν(x_i) > 0 for all i ∈ [1, k]. Moreover, we can assume that E has the form u_1^{x_1} · · · u_k^{x_k} v, where all u_i and v are geodesic words. By Lemma A.1, for every valuation ν all words u_i^{ν(x_i)} are (λ, ε)-quasigeodesics for certain constants λ and ε.
Consider a solution ν and let n_i = ν(x_i) for i ∈ [1, k]. Consider the polygon obtained by traversing the closed path labelled with u_1^{n_1} · · · u_k^{n_k} v. We partition this path into segments P_1, . . . , P_k, Q, where P_i is the subpath labelled with u_i^{n_i} and Q is the subpath labelled with v. We consider these subpaths as the sides of a (k + 1)-gon, see Fig. 3. Since all sides of this (k + 1)-gon are (λ, ε)-quasigeodesics, we can apply [23, Lemma 6.4]: Every side of the (k + 1)-gon is contained in the h-neighborhoods of the other sides, where h = κ + κ log(k + 1) for a constant κ that only depends on the constants δ, λ, ε.
Let us now consider the side P_2 of the quasigeodesic (k + 1)-gon. It is labelled with u_2^{x_2}. Its neighboring sides are P_1 and P_3 (recall that k ≥ 3) and are labelled with u_1^{x_1} and u_3^{x_3}.[4] We now distinguish the following cases. In each case we cut the (k + 1)-gon into smaller pieces along paths of length ≤ h, and these smaller pieces will correspond to knapsack instances of smaller depth. When we speak of a point on the (k + 1)-gon, we mean a node of the Cayley graph (i.e., an element of the group G) and not a point in the interior of an edge. Moreover, when we speak of the successor point of a point p, we refer to the clockwise order on the (k + 1)-gon, where the sides are traversed in the order P_1, . . . , P_k, Q.
Case 1: There is a point p ∈ P_2 that has distance at most h from a node q ∈ P_4 · · · P_k. Let us assume that q ∈ P_i where i ∈ [4, k]. We now construct two new knapsack instances F_t and G_t for all words w ∈ Σ* of length at most h and all factorizations u_2 = u_{2,1} u_{2,2} and u_i = u_{i,1} u_{i,2}, where t = (i, w, u_{2,1}, u_{2,2}, u_{i,1}, u_{i,2}):
F_t = u_1^{x_1} u_2^{y_2} (u_{2,1} w u_{i,2}) u_i^{z_i} u_{i+1}^{x_{i+1}} · · · u_k^{x_k} v   and
G_t = u_{2,2} u_2^{z_2} u_3^{x_3} · · · u_{i-1}^{x_{i-1}} u_i^{y_i} (u_{i,1} w^{-1})
Here y2 , z2 , yi , zi are new variables. The situation looks as follows, where the case i = k = 5
is shown:
[4] We take the side P_2 since Q is not a neighboring side of P_2. This avoids some additional cases in the following case distinction.
[Figure: the (k + 1)-gon cut into two smaller polygons along the connecting path w, shown for the case i = k = 5.]
Note that F_t and G_t have depth at most k − 1. Let us say that a tuple t = (i, w, u_{2,1}, u_{2,2}, u_{i,1}, u_{i,2}) is valid for case 1 if i ∈ [4, k], w ∈ Σ*, |w| ≤ h, u_2 = u_{2,1} u_{2,2} and u_i = u_{i,1} u_{i,2}. Moreover, let A_1 be the following formula, where t ranges over all tuples that are valid for case 1, and i is the first component of the tuple t:
A_1 = ⋁_t ∃y_2, z_2, y_i, z_i : x_2 = y_2 + 1 + z_2 ∧ x_i = y_i + 1 + z_i ∧ F_t = 1 ∧ G_t = 1
Case 2: There is a point p ∈ P_2 that has distance at most h from a node q ∈ Q. We construct two new knapsack instances F_t and G_t for all words w ∈ Σ* of length at most h and all factorizations u_2 = u_{2,1} u_{2,2} and v = v_1 v_2, where t = (w, u_{2,1}, u_{2,2}, v_1, v_2):
F_t = u_1^{x_1} u_2^{y_2} (u_{2,1} w v_2)   and
G_t = u_{2,2} u_2^{z_2} u_3^{x_3} · · · u_k^{x_k} (v_1 w^{-1})
As in case 1, y2 , z2 are new variables and Ft and Gt have depth at most k − 1. The situation
looks as follows:
[Figure: the (k + 1)-gon cut along the connecting path w between P_2 and Q, shown for k = 5.]
We say that a tuple t = (w, u_{2,1}, u_{2,2}, v_1, v_2) is valid for case 2 if w ∈ Σ*, |w| ≤ h, u_2 = u_{2,1} u_{2,2} and v = v_1 v_2. Moreover, let A_2 be the following formula, where t ranges over all tuples that are valid for case 2:
A_2 = ⋁_t ∃y_2, z_2 : x_2 = y_2 + 1 + z_2 ∧ F_t = 1 ∧ G_t = 1
Case 3: Every point p ∈ P2 has distance at most h from a point on P1 . Let q be the
unique point in P2 ∩ P3 and let p ∈ P1 be a point with dΓ (p, q) ≤ h. We construct two new
knapsack instances Ft and Gt for all words w ∈ Σ∗ of length at most h and all factorizations
u1 = u1,1 u1,2 , where t = (w, u1,1 , u1,2 ):
F_t = u_1^{y_1} (u_{1,1} w) u_3^{x_3} · · · u_k^{x_k} v   and
G_t = u_{1,2} u_1^{z_1} u_2^{x_2} w^{-1}
Since k ≥ 3, Ft and Gt have depth at most k − 1. The situation looks as follows:
[Figure: the (k + 1)-gon cut along the connecting path w between P_1 and the common endpoint of P_2 and P_3, shown for k = 5.]
We say that a triple t = (w, u_{1,1}, u_{1,2}) is valid for case 3 if w ∈ Σ*, |w| ≤ h and u_1 = u_{1,1} u_{1,2}. Moreover, let A_3 be the following formula, where t ranges over all tuples that are valid for case 3:
A_3 = ⋁_t ∃y_1, z_1 : x_1 = y_1 + 1 + z_1 ∧ F_t = 1 ∧ G_t = 1
Case 4: Every point p ∈ P2 has distance at most h from a point on P3 . This case is of course
completely analogous to case 3 and yields a corresponding formula A4 .
Case 5: Every point p ∈ P2 has distance at most h from a point on P1 ∪ P3 but P2 is neither
contained in the h-neighborhood of P_1 nor in the h-neighborhood of P_3. Hence there exist points p_1, p_3 ∈ P_2 which are connected by an edge and such that p_1 has distance at most h
from P1 and p3 has distance at most h from P3 . Therefore, p1 has distance at most h+1 from
P1 as well as distance at most h + 1 from P3 . We construct three new knapsack instances Ft ,
Gt , Ht for all words w1 , w2 ∈ Σ∗ with |w1 |, |w2 | ≤ h + 1 and all factorizations u1 = u1,1 u1,2 ,
u2 = u2,1 u2,2 , and u3 = u3,1 u3,2 , where t = (w1 , w2 , u1,1 , u1,2 , u2,1 , u2,2 , u3,1 , u3,2 ):
F_t = u_1^{y_1} (u_{1,1} w_1 w_2 u_{3,2}) u_3^{z_3} u_4^{x_4} · · · u_k^{x_k} v,
G_t = u_{1,2} u_1^{z_1} u_2^{y_2} u_{2,1} w_1^{-1},
H_t = u_{2,2} u_2^{z_2} u_3^{y_3} u_{3,1} w_2^{-1}
Since k ≥ 3, Ft , Gt and Ht have depth at most k − 1. The situation looks as follows:
[Figure: the (k + 1)-gon cut along the two connecting paths w_1 and w_2, shown for k = 5.]
We say that a tuple t = (w_1, w_2, u_{1,1}, u_{1,2}, u_{2,1}, u_{2,2}, u_{3,1}, u_{3,2}) is valid for case 5 if w_1, w_2 ∈ Σ*, |w_1|, |w_2| ≤ h + 1, u_1 = u_{1,1} u_{1,2}, u_2 = u_{2,1} u_{2,2}, and u_3 = u_{3,1} u_{3,2}. Moreover, let A_5 be the following formula, where t ranges over all tuples that are valid for case 5:
A_5 = ⋁_t ∃y_1, z_1, y_2, z_2, y_3, z_3 : x_1 = y_1 + 1 + z_1 ∧ x_2 = y_2 + 1 + z_2 ∧ x_3 = y_3 + 1 + z_3 ∧ F_t = 1 ∧ G_t = 1 ∧ H_t = 1.
Our final formula is A_1 ∨ A_2 ∨ A_3 ∨ A_4 ∨ A_5. It is easy to check that a valuation ν : {x_1, . . . , x_k} → N satisfies ν(E) = 1 if and only if ν makes A_1 ∨ A_2 ∨ A_3 ∨ A_4 ∨ A_5 true. If ν(E) = 1 holds, then one of the above five cases holds, in which case ν makes the corresponding formula A_i true. Conversely, if ν makes one of the formulas A_i true then ν(E) = 1 holds.
Universität Siegen, Germany, {ganardi,koenig,lohrey}@eti.uni-siegen.de
LSV, CNRS & ENS Paris-Saclay, France, [email protected]
| 4 |
arXiv:1510.05886v2 [cs.DM] 13 Mar 2017
Approximation Algorithm for Minimum Weight Connected m-Fold Dominating Set
Zhao Zhang¹, Jiao Zhou², Ker-I Ko³, Ding-zhu Du⁴
¹ College of Mathematics Physics and Information Engineering, Zhejiang Normal University, Jinhua, Zhejiang, 321004, China
² College of Mathematics and System Sciences, Xinjiang University, Urumqi, Xinjiang, 830046, China
³ Department of Computer Science, National Chiao Tung University, Hsinchu, 30050, Taiwan
⁴ Department of Computer Science, University of Texas at Dallas, Richardson, Texas, 75080, USA
Abstract: Using a connected dominating set (CDS) to serve as a virtual backbone in a wireless network can save energy and reduce interference. Since nodes may fail due to accidental damage or energy depletion, it is desirable that the virtual backbone has some fault-tolerance. A k-connected m-fold dominating set ((k, m)-CDS) of a graph G is a node set D such that every node in V \ D has at least m neighbors in D and the subgraph of G induced by D is k-connected. Using a (k, m)-CDS can tolerate the failure of min{k − 1, m − 1} nodes. In this paper, we study the Minimum Weight (1, m)-CDS problem ((1, m)-MWCDS), and present an (H(δ + m) + 2H(δ − 1))-approximation algorithm, where δ is the maximum degree of the graph and H(·) is the Harmonic number. Notice that there is a 1.35 ln n-approximation algorithm for the (1, 1)-MWCDS problem, where n is the number of nodes in the graph. Though our constant in O(ln ·) is larger than 1.35, n is replaced by δ. Such a replacement enables us to obtain a 3.67-approximation for the connecting part of the (1, m)-MWCDS problem on unit disk graphs.
Keywords: m-fold dominating set, connected dominating set, non-submodular function, greedy algorithm, unit disk graph.
1 Introduction
A wireless sensor network (WSN) consists of spatially distributed autonomous sensors that monitor physical or environmental conditions and cooperatively pass their data through the network. During recent years, WSNs have been widely used in many fields,
such as environment and habitat monitoring, disaster recovery, health applications, etc.
Since there is no fixed or predefined infrastructure in WSNs, frequent flooding of control
messages from sensors may cause a lot of redundant contentions and collisions. Therefore,
people have proposed the concept of virtual backbone which corresponds to a connected
dominating set in a graph (Das and Bhargharan [4] and Ephremides et al. [8]).
Given a graph G with node set V and edge set E, a subset of nodes C ⊆ V is said to
be a dominating set (DS) of G if any v ∈ V \ C is adjacent to at least one node of C. We
say that a dominating set C of G is a connected dominating set of G if the subgraph of
G induced by C, denoted by G[C], is connected. Nodes in C are called dominators, the
nodes in V \ C are called dominatees.
Because sensors in a WSN are prone to failures due to accidental damage or battery
depletion, it is important to maintain a certain degree of redundancy such that the virtual
backbone is more fault-tolerant. In a more general setting, every sensor has a cost, it
is desirable that under the condition that tasks can be successfully accomplished, the
whole cost of virtual backbone is as small as possible. These considerations lead to the
Minimum Node-Weighted k-Connected m-Fold Dominating Set problem (abbreviated as
(k, m)-MWCDS), which is defined as follows:
Definition 1.1 ((k, m)-MWCDS). Let G be a connected graph, k and m be two positive
integers, c : V → R+ be a cost function on nodes. A node subset D ⊆ V is an m-fold
dominating set (m-DS) if every node in V \ D has at least m neighbors in D. It is a k-connected m-fold dominating set ((k, m)-CDS) if, furthermore, the subgraph of G induced by D is k-connected. The (k, m)-MWCDS problem is to find a (k, m)-CDS D such that the cost of D is minimized, that is, c(D) = Σ_{u∈D} c(u) is as small as possible.
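As a small illustration (ours, not from the paper), the following sketch checks the k = 1 case of this definition: it verifies that a given node set D is an m-fold dominating set and that G[D] is connected. The graph is assumed to be given as a dictionary mapping each node to its set of neighbours.

from collections import deque

def is_1m_cds(adj, D, m):
    # adj: dict node -> set of neighbours; D: candidate backbone; m: domination requirement.
    D = set(D)
    # m-fold domination: every node outside D has at least m neighbours in D.
    for v in adj:
        if v not in D and len(adj[v] & D) < m:
            return False
    # 1-connectivity of G[D]: BFS restricted to D must reach every node of D.
    if not D:
        return False
    start = next(iter(D))
    seen, queue = {start}, deque([start])
    while queue:
        u = queue.popleft()
        for w in adj[u] & D:
            if w not in seen:
                seen.add(w)
                queue.append(w)
    return seen == D

# Example: a path 1-2-3-4 with D = {2, 3} is a (1, 1)-CDS, while D = {2, 4} is not.
adj = {1: {2}, 2: {1, 3}, 3: {2, 4}, 4: {3}}
print(is_1m_cds(adj, {2, 3}, 1))   # True
print(is_1m_cds(adj, {2, 4}, 1))   # False: G[{2, 4}] is disconnected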
After Dai and Wu [3] proposed using (k, k)-CDS as a model for fault-tolerant virtual
backbone, a lot of approximation algorithms emerged, most of which are on unit disk
graphs. In a unit disk graph (UDG), every node corresponds to a sensor on the plane,
two nodes are adjacent if and only if the Euclidean distance between the corresponding
sensors is at most one unit. There are a lot of studies on fault-tolerant virtual backbone
in UDG which assume unit weight on each disk. However, for a general graph with a
general weight function, related studies are rare.
In this paper, we present a (H(δ + m) + 2H(δ − 1))-approximation algorithm for the (1, m)-MWCDS problem on a general graph, where δ is the maximum degree of the graph, and H(γ) = Σ_{i=1}^{γ} 1/i is the Harmonic number. It is a two-phase greedy algorithm. First, it constructs an m-fold dominating set D_1 of G. Then it connects D_1 by adding a set of connectors D_2. It is well known that if the potential function and the cost function associated with a greedy algorithm are monotone increasing and submodular, then an O(ln n) performance ratio can be achieved. Unfortunately, for various minimum CDS problems, no such potential functions are known. Nevertheless, we manage, in this paper, to deal with a nonsubmodular potential function and achieve an O(ln δ) performance ratio.
It should be pointed out that for a general graph, Guha and Khuller [11] proposed a
1.35 ln n-approximation for the (1, 1)-MWCDS problem. Though our constant in O(ln n)
is larger, the parameter n in the performance ratio is replaced by δ. In many cases, δ might be substantially smaller than n. In particular, for a UDG, due to such a replacement, after having found an m-DS, the connecting part has a performance ratio at most 3.67. In [34], Zou et al. proposed a method for the connecting part with performance ratio at most 3.875, which makes use of a 1.55-approximation algorithm [16] for the classic Minimum Steiner Tree problem. If the currently best ratio for the Minimum Steiner Tree problem [1] is used, then their ratio is at most 3.475. Notice that the algorithm in [1] uses randomized iterative rounding. So, although our ratio 3.67 is a little larger than 3.475, our algorithm has the advantage that it is purely combinatorial. Furthermore, we believe that our method is also of more theoretical interest and may find more applications in the study of other related problems.
The rest of this paper is organized as follows. Section 2 introduces related works.
Some notation and some preliminary are given in Section 3. In Section 4, the algorithm
is presented. Section 5 analyzes the performance ratio. Section 6 improves the ratio on
unit disk graph. Section 7 concludes the paper.
2 Related work
The idea of using a connected dominating set as a virtual backbone for WSN was
proposed by Das and Bhargharan [4] and Ephremides et al. [8]. Constructing a CDS
of the minimum size is NP-hard. In fact, Guha and Khuller [11] proved that a minimum CDS cannot be approximated within ρ ln n for any 0 < ρ < 1 unless NP ⊆
DTIME(n^{O(log log n)}). In the same paper, they proposed two greedy algorithms with performance ratios of 2(H(δ) + 1) and H(δ) + 2, respectively, where δ is the maximum degree
of the graph and H(·) is the harmonic number. This was improved by Ruan et al. [17]
to 2 + ln δ. Du et al. [5] presented a (1 + ε) ln(δ − 1)-approximation algorithm, where ε
is an arbitrary positive real number.
For unit disk graphs, a polynomial time approximation scheme (PTAS) was given by
Cheng et al. [2], which was generalized to higher dimensional space by Zhang et al. [28].
There are a lot of studies on distributed algorithms for this problem. For a comprehensive
study on CDS in UDG, the readers may refer to the book [7].
Considering the weighted version of the CDS problem, Guha and Khuller [11] proposed
a (cn +1) ln n-approximation algorithm in a general graph, where cn ln k is the performance
ratio for the node weighted Steiner tree problem (k is the number of terminal nodes to
be connected). Later, they [12] improved it to an algorithm of performance ratio at most
(1.35 + ε) ln n. For the minimum weight CDS in UDG, Zou et al. [34] gave a (9.875 + ε)approximation.
The problem of constructing fault-tolerant virtual backbones was introduced by Dai
and Wu [3]. They proposed three heuristic algorithms for the minimum (k, k)-CDS problem. However, no performance ratio analysis was given. A lot of works have been done for
the CDS problem in UDG. The first constant approximation algorithm in this aspect was
given by Wang et al. [24], who obtained a 72-approximation for the (2, 1)-CDS problem
in UDG. Shang et al. [18] gave an algorithm for the minimum (1, m)-CDS problem and
an algorithm for the minimum (2, m)-CDS problem in UDG; the performance ratios are 5 + 5/m for m ≤ 5 and 7 for m > 5, and 5 + 25/m for 2 ≤ m ≤ 5 and 11 for m > 5, respectively. Constant approximation algorithms also exist for (3, m)-CDS in UDG [25, 26].
Recently, Shi et al. [20] presented the first constant approximation algorithm for general
(k, m)-CDS on UDG.
For the fault-tolerant CDS problem in a general graph, Zhang et al. [30] gave a
2rH(δr + m − 1)-approximation for the minimum r-hop (1, m)-CDS problem, where δr is
the maximum degree in Gr , the r-th power graph of G. A node u is r-hop dominated by
a set D if it is at most r-hops away from D. In particular, taking r = 1, the algorithm in
[30] has performance ratio at most 2H(δ + m − 1) for the minimum (1, m)-CDS problem.
This was improved by our recent work [33] to 2 + H(δ + m − 2). We also gave an
(ln δ + o(ln δ))-approximation algorithm for the minimum (2, m)-CDS problem [19] and
the minimum (3, m)-CDS problem [32] on a general graph.
For the weighted version of fault-tolerant CDS problem on UDG, as a consequence of
recent work [13], the minimum weight (1, 1)-CDS problem admits a PTAS. Combining
the constant approximation algorithm for the minimum weight m-fold dominating set
problem [9] and the 3.475-approximation algorithm for the connecting part [34], (1, m)-MWCDS on UDG admits a constant approximation. Recently, we [20, 31] gave the first
constant approximation algorithm for the general minimum weight (k, m)-CDS problem
on unit disk graph.
As far as we know, there is no previous work on the approximation of the weighted
version of fault-tolerant CDS problem in a general graph.
Notice that in [34], the 3.875-approximation for the connecting part is based on the
1.55-approximation algorithm [16] for the classic minimum Steiner tree problem. If the
best known ratio for the Steiner tree problem is used, which is 1.39 currently, then their
connecting part has performance ratio at most 3.475. Notice that the 1.39-approximation
for the Steiner tree problem uses randomized iterative rounding. So, although our performance ratio is larger than 3.475, it has the advantage that it is purely combinatorial.
3 Preliminaries
In this section, we introduce some notation and give some preliminary results. For a node u ∈ V(G), denote by N_G(u) the set of neighbors of u in G, and deg_G(u) = |N_G(u)| is the degree of node u in G. For a node subset D ⊆ V(G), N_G(D) = ⋃_{u∈D} N_G(u) \ D is the neighbor set of D, and G[D] is the subgraph of G induced by D. When there is no confusion in the context, the vertex set of a subgraph will be used to denote the subgraph itself.
For an element set U, suppose f : 2^U → R_+ is a set function on U (called a potential
function). For two element sets C, D ⊆ U, let
△_D f(C) = f(C ∪ D) − f(C)
be the marginal profit obtained by adding D into C. For simplicity, △_u f(C) will be used to denote △_{{u}} f(C) when u is a node. Potential function f is monotone increasing if f(C) ≤ f(D) holds for any subsets C ⊆ D ⊆ V. It is submodular if and only if △_u f(C) ≥ △_u f(D) holds for any C ⊆ D ⊆ V and any u ∈ V \ D. A monotone increasing and submodular function f with f(∅) = 0 is called a polymatroid. Given an element set U with cost function c : U → R_+ and given a polymatroid f : 2^U → R_+, denote by Ω_f = {C ⊆ U : △_u f(C) = 0 for any u ∈ U}. The Submodular Cover problem is:
min c(C) = Σ_{u∈C} c(u)   s.t. C ∈ Ω_f.
The following is a classic result which can be found in [6] Theorem 2.29.
Theorem 3.1. The greedy algorithm for the submodular cover problem has performance ratio H(γ), where γ = max{f({u}) : u ∈ U} and H(γ) = Σ_{i=1}^{γ} 1/i is the Harmonic number.
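For concreteness, the following sketch (an illustration of ours, not code from [6]) implements this greedy algorithm for a potential function f given as a Python callable on node sets, together with a toy weighted set-cover instance, which is a special case of submodular cover.

def greedy_submodular_cover(U, f, c):
    # Repeatedly pick the element with the best marginal-profit-to-cost ratio
    # until no element has positive marginal profit (i.e. the current set is in Omega_f).
    C = set()
    while True:
        best, best_ratio = None, 0.0
        for u in U - C:
            gain = f(C | {u}) - f(C)
            if gain > 0 and gain / c[u] > best_ratio:
                best, best_ratio = u, gain / c[u]
        if best is None:
            return C
        C.add(best)

# Toy use: weighted set cover, where f(C) is the number of ground elements covered.
sets = {'s1': {1, 2, 3}, 's2': {3, 4}, 's3': {4, 5}, 's4': {1, 5}}
cost = {'s1': 3.0, 's2': 1.0, 's3': 1.0, 's4': 1.0}
f = lambda C: len(set().union(*(sets[s] for s in C)) if C else set())
print(greedy_submodular_cover(set(sets), f, cost))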
4 The Algorithm
4.1 The Algorithm
For a node set C, denote by p(C) the number of components in G[C]. For a node u ∈ V \ C, let N_C(u) denote the set of neighbors of u in C. Define
q_C(u) = max{m − |N_C(u)|, 0} for u ∈ V \ C, and q_C(u) = 0 for u ∈ C,
and
q(C) = m|V| − Σ_{u∈V} q_C(u).
For a node set U ⊆ V, denote by NC_C(U) the set of components of G[C] which are adjacent to U. Every component in NC_C(U) is called a component neighbor of U in G[C]. For a node u ∈ V, we shall use S_u to denote some star with center u, that is, S_u is a subgraph of G induced by edges between node u and some of u's neighbors in G. In particular, a node is a star of cardinality one and an edge is a star of cardinality two. To abuse the notation a little, we also use S_u to denote the set of nodes in S_u. Suppose S_u \ {u} = {u_1, u_2, . . . , u_s}, where c(u_1) ≤ c(u_2) ≤ . . . ≤ c(u_s). Define
p'_C(S_u) = |NC_C(u)| − 1 + Σ_{i=1}^{s} min{1, −△_{u_i} p(C ∪ {u, u_1, . . . , u_{i−1}})}.   (1)
Call e_C(S_u) = p'_C(S_u)/c(S_u) the efficiency of S_u with respect to C.
The algorithm is presented in Algorithm 1.
Algorithm 1
Input: A connected graph G = (V, E).
Output: A (1, m)-CDS D_G of G.
1: Set D_1 ← ∅
2: while there exists a node u ∈ V \ D_1 such that △_u q(D_1) > 0 do
3:    select u which maximizes △_u q(D_1)/c(u)
4:    D_1 ← D_1 ∪ {u}
5: end while
6: Set D_2 ← ∅
7: while there exists a star S_u ⊆ V \ (D_1 ∪ D_2) such that p'_{D_1∪D_2}(S_u) > 0 do
8:    select a star S_u with the largest efficiency with respect to D_1 ∪ D_2
9:    D_2 ← D_2 ∪ S_u
10: end while
11: Output D_G ← D_1 ∪ D_2
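The following sketch (ours, for illustration only) implements the first phase of Algorithm 1 (lines 1-5), computing the marginal gain △_u q(D_1) directly from the definition of q; the star-selection step of the second phase is sketched after the procedure of Subsection 5.2 below.

def residual_demand(adj, D1, m, v):
    # q_{D1}(v): how many more dominators node v still needs.
    return 0 if v in D1 else max(m - len(adj[v] & D1), 0)

def marginal_gain(adj, D1, m, u):
    # Delta_u q(D1) = q(D1 + u) - q(D1): u's own demand drops to 0 and each
    # undominated neighbour loses one unit of demand.
    gain = residual_demand(adj, D1, m, u)
    for v in adj[u]:
        if v not in D1:
            before = residual_demand(adj, D1, m, v)
            after = max(m - len(adj[v] & D1) - 1, 0)
            gain += before - after
    return gain

def greedy_m_fold_ds(adj, cost, m):
    D1 = set()
    while True:
        cands = [(marginal_gain(adj, D1, m, u), u) for u in adj if u not in D1]
        cands = [(g / cost[u], u) for g, u in cands if g > 0]
        if not cands:           # no node has positive marginal gain: D1 is an m-fold DS
            return D1
        _, u = max(cands)
        D1.add(u)

# Example: a star K_{1,3} with unit costs and m = 1 is dominated by its centre.
adj = {0: {1, 2, 3}, 1: {0}, 2: {0}, 3: {0}}
print(greedy_m_fold_ds(adj, {v: 1.0 for v in adj}, m=1))   # {0}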
4.2 The Idea of the Algorithm
The idea underlying the algorithm is as follows. Potential function qD1 (u) measures
how many more times that node u needs to be dominated, and q(D1 ) is the total residual
domination requirement. As can be seen from Lemma 5.2, at the end of the first phase,
we have an m-fold dominating set D1 . Then, the second phase aims to connect it by
adding a connector set D2 . A natural potential function for connection is p(D1 ∪ D2 ), the
number of components of G[D1 ∪ D2 ]. That is, every iteration chooses a node set S to
be added into D2 which reduces the number of components by the largest amount until
p(D_1 ∪ D_2) reaches 1. It is a folklore result (see Lemma 5.5) that simultaneously adding at most two nodes can reduce the number of components (even when adding any single node does not reduce the number of components). So, it is natural to use
max{−△_S p(D_1 ∪ D_2)/c(S) : S ⊆ V \ (D_1 ∪ D_2), 1 ≤ |S| ≤ 2}   (2)
to work as a criterion for the choice of the node set S to be added into D_2. However, choosing at most two nodes might yield a solution with a very bad performance ratio. Consider the example shown in Fig.1: its optimal solution OPT = {u, v_1, . . . , v_d} has cost opt = 1 + (d + 1)ε. If (2) is used as the greedy criterion, then the output is ⋃_{i=1}^{d} {u_i, v_i}, whose cost is c(⋃_{i=1}^{d} {u_i, v_i}) = d(1 + ε) ≈ d · opt = ((n − 2)/3) · opt. To overcome such a shortcoming,
an idea is to choose some star Su to maximize − △Su p(D1 ∪ D2 )/c(Su ) in each iteration.
However, the computation of a most efficient star will take exponential time. This is why
we define p′ as in (1) to be used in the greedy criterion. On one hand, p′ also plays the
role of counting the reduction on components. On the other hand, a most efficient Su
to maximize p′D1 ∪D2 (Su )/c(Su ) can be found in polynomial time, which will be shown in
Subsection 5.2.
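A quick numerical check of the costs quoted for the instance of Fig.1 (illustration only; d and ε below are arbitrary sample values):

d, eps = 10, 0.01
opt_cost = (1 + eps) + d * eps          # c(u) + sum of c(v_i) = 1 + (d+1)*eps
pair_greedy_cost = d * (1 + eps)        # sum of c(u_i) + c(v_i) over i = 1..d
print(opt_cost, pair_greedy_cost, pair_greedy_cost / opt_cost)   # ratio is roughly d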
Figure 1: Solid nodes represent nodes in C. The costs on circled nodes are c(u1 ) = c(u2 ) =
. . . = c(ud ) = 1, c(v1 ) = c(v2 ) = . . . = c(vd ) = ε, and c(u) = 1 + ε.
5 The Analysis of the Algorithm
In this section, we analyze the performance ratio of Algorithm 1.
5.1 The Analysis of D1
Lemma 5.1. Function q is a polymatroid.
Proof. Obviously, q(∅) = 0. It is easy to see that for any node u ∈ V , function −qC (u) is
monotone increasing and submodular with respect to C. So, function q, being the summation of a constant function and some monotone increasing and submodular functions,
is also monotone increasing and submodular.
Lemma 5.2. The final node set D1 in Algorithm 1 is an m-fold dominating set of G.
Proof. If there exists a node u ∈ V \ D_1 with |N_{D_1}(u)| < m, then −△_u q_{D_1}(u) = −q_{D_1∪{u}}(u) + q_{D_1}(u) = −0 + (m − |N_{D_1}(u)|) > 0, and −△_u q_{D_1}(v) ≥ 0 for any node v ∈ V \ {u} (by the monotonicity of −q_{D_1}(v) with respect to D_1). Thus △_u q(D_1) = −Σ_{v∈V} △_u q_{D_1}(v) > 0, and the algorithm does not terminate at this stage.
Theorem 5.3. The final node set D1 in Algorithm 1 has weight w(D1) ≤ H(m + δ) · opt,
where δ is the maximum degree of graph G, and opt is the optimal value for the m-MWCDS
problem.
Proof. By Lemma 5.1 and Lemma 5.2, the minimum weight m-fold dominating set problem is a special submodular cover problem with potential function q. So by Theorem 3.1, we have w(D_1) ≤ H(γ) · opt′, where opt′ is the optimal value for the m-MWDS problem. Then, the result follows from the observation that opt′ ≤ opt and γ = max_{u∈V} q({u}) = max_{u∈V} {m|V| − Σ_{v∈V\{u}} (m − |N_{{u}}(v)|)} = max_{u∈V} {m + d_G(u)} = m + δ.
5.2 The Computation of an Optimal Star for Greedy Choice
The idea for the definition of p'_C(S_u) is as follows: Adding node u into C will merge those components in NC_C(u) into one component of G[C ∪ {u}], say C̃, which reduces the number of components by |NC_C(u)| − 1. Then the nodes in S_u \ {u} are added sequentially according to the increasing order of their costs. Notice that
−△_{u_i} p(C ∪ {u, u_1, . . . , u_{i−1}}) = |NC_C(u_i) \ NC_C(u, u_1, . . . , u_{i−1})|   (3)
is the number of components newly merged into C̃. So, the term in the summation of definition (1) indicates that if adding u_i merges at least one more component, we regard its contribution to p'_C(S_u) as one. The advantage of such a counting is that an optimal star for the greedy criterion is polynomial-time computable. This claim is based on the following lemma. A simple relation will be used in the proof: for four positive real numbers a, b, c, d,
(a + b)/(c + d) ≥ b/d ⇒ a/c ≥ b/d.   (4)
Lemma 5.4. Suppose C is a dominating set of graph G. Then, there exists a most
efficient star Su with respect to C such that |NCC (v)| = 1 for every node v ∈ Su \ {u}.
Furthermore, if we denote by Cv the unique component in NCC (v), then components in
{Cv }v∈Su \{u} are all distinct and they are also distinct from those components in NCC (u).
Proof. Suppose Su is a most efficient star with Su \ {u} = {u1, . . . , us } such that c(u1 ) ≤
. . . ≤ c(us ). We first show that for any i = 1, . . . , s,
− △ui p(C ∪ {u, u1, . . . , ui−1}) ≥ 1.
(5)
Suppose this is not true. Let i be the first index with − △ui p(C ∪ {u, u1, . . . , ui−1 }) = 0.
Then by (3), we have NCC (ui ) ⊆ NCC (u, u1, . . . , ui−1 ) and thus NCC (u, u1, . . . , ui−1) =
NCC (u, u1, . . . , ui ). It follows that for any j > i,
− △uj p(C ∪ {u, u1, . . . , ui−1 , ui+1, . . . , uj−1}) = − △ui p(C ∪ {u, u1, . . . , uj−1}).
So, for the star Su′ = Su \ {ui }, we have p′C (Su′ ) = p′C (Su ) and thus p′C (Su′ )/c(Su′ ) >
p′C (Su )/c(Su ), contradicting the maximality of p′C (Su )/c(Su ).
As a consequence of (5),
p′C (Su ) = |NCC (u)| − 1 + s.
Suppose there is a node ui with |NCC (ui )| ≥ 2, we choose ui to be such that i is as
small as possible. Let Su′ = Su \ {ui }. Notice that for any j > i,
− △uj p(C ∪ {u, u1, . . . , ui−1 , ui+1 , . . . , uj−1}) ≥ − △ui p(C ∪ {u, u1, . . . , uj−1}).
Combining this with (5), we have
p′C (Su′ ) = |NCC (u)| − 1 + (s − 1) = p′C (Su ) − 1.
By the maximality of S_u, we have
p'_C(S_u′)/c(S_u′) ≤ p'_C(S_u)/c(S_u) = (p'_C(S_u′) + 1)/(c(S_u′) + c(u_i)).
Then by (4),
p'_C(S_u′)/c(S_u′) ≤ 1/c(u_i).
It follows that
p'_C(S_u)/c(S_u) = (p'_C(S_u′) + 1)/(c(S_u′) + c(u_i)) ≤ (c(S_u′)·(1/c(u_i)) + 1)/(c(S_u′) + c(u_i)) = 1/c(u_i) ≤ p'_C(u_i)/c(u_i),
where the last inequality holds because p'_C(u_i) = |NC_C(u_i)| − 1 ≥ 1. Hence u_i alone is also a most efficient star. It is a trivial star satisfying the requirement of the lemma.
Next, suppose Su is a nontrivial star in which every v ∈ Su \ {u} has |NCC (v)| = 1.
Notice that an equivalent statement of (5) is that |NCC (ui ) \ NCC (u, u1, . . . , ui−1)| ≥ 1
for any i = 1, . . . , s. The second part of this lemma follows.
By Lemma 5.4, a most efficient star with respect to C can be found in the following
way. Guessing the center of the star requires time O(n). Suppose u is the guessed center.
Let N (u) = {v : v is a neighbor of u in G and |NCC (v)| = 1}. Order the nodes in N (u)
as u1 , . . . , us such that c(u1 ) ≤ . . . ≤ c(us ). For i = 1, . . . , s, scan ui sequentially. If
ui has NCC (ui ) ⊆ NCC (u, u1, . . . , ui−1), then remove it from N (u) . For convenience of
statement, suppose the remaining set N (u) = {u1 , u2 . . . , ut}. Then, the node set of a most
efficient star centered at u must be of the form {u, u1, u2 , . . . , ul } for some l ∈ {0, . . . , t}.
So, it suffices to compute the efficiency of the t + 1 sets {u, u1, . . . , ul } for l = 0, . . . , t
and choose the most efficient one from them. Clearly, such a computation can be done in
polynomial time.
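The following sketch (ours, assuming the graph and costs are given as Python dictionaries) implements this procedure: it computes the component neighbours NC_C(·), scans the candidate leaves in order of increasing cost as described above, and returns a star maximising p'_C(S_u)/c(S_u) (or None if no star has positive p').

def components_of(adj, C):
    # List the components of G[C] as frozensets of nodes.
    C, comps, seen = set(C), [], set()
    for s in C:
        if s in seen:
            continue
        comp, stack = set(), [s]
        while stack:
            x = stack.pop()
            if x in comp:
                continue
            comp.add(x)
            stack.extend(adj[x] & C)
        comps.append(frozenset(comp))
        seen |= comp
    return comps

def most_efficient_star(adj, cost, C):
    comps = components_of(adj, C)
    comp_of = {v: i for i, comp in enumerate(comps) for v in comp}
    NC = lambda v: {comp_of[w] for w in adj[v] if w in comp_of}   # component neighbours of v
    best_ratio, best_star = 0.0, None
    for u in adj:
        if u in C:
            continue
        covered, nodes = set(NC(u)), [u]
        gain, price = len(covered) - 1, cost[u]
        if gain / price > best_ratio:
            best_ratio, best_star = gain / price, set(nodes)
        # leaves with exactly one component neighbour, scanned by increasing cost
        for v in sorted((w for w in adj[u] if w not in C and len(NC(w)) == 1),
                        key=lambda w: cost[w]):
            if NC(v) <= covered:     # contributes no new component: drop it (Lemma 5.4 scan)
                continue
            covered |= NC(v)
            nodes.append(v)
            gain, price = gain + 1, price + cost[v]
            if gain / price > best_ratio:
                best_ratio, best_star = gain / price, set(nodes)
    return best_star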
5.3 Correctness of the Algorithm
The following result is a folklore in the study of CDS (see, for example, [22]).
Lemma 5.5. Suppose D is a dominating set of G such that G[D] is not connected. Then,
two nearest components of G[D] are at most three hops away.
Theorem 5.6. The output DG of Algorithm 1 is a (1, m)-CDS.
Proof. By Lemma 5.2, D1 is an m-fold DS, and thus DG is also an m-DS. If G[DG ] is not
connected, consider two nearest components of G[DG ], say G1 and G2 . Let P = u0 u1 . . . ut
be a shortest path between G1 and G2 , where u0 ∈ V (G1 ) and ut ∈ V (G2 ). By Lemma
5.5, we have t = 2 or 3. Then, u1 (in the case t = 2) or u1u2 (in the case t = 3) is a star
Su1 with p′DG (Su1 ) > 0. The algorithm will not terminate.
5.4 Decomposition of Optimal Solution
The following lemma, as well as its proof, can be illustrated by Fig.2.
Lemma 5.7. Let G be a connected graph, C be a dominating set of G and C* be a connected dominating set of G. Then C* \ C can be decomposed into the union of node sets C* \ C = Y_0 ∪ Y_1 ∪ Y_2 ∪ · · · ∪ Y_h such that:
(i) for 1 ≤ i ≤ h, subgraph G[Y_i] contains a star;
(ii) subgraph G[C ∪ Y_1 ∪ Y_2 ∪ · · · ∪ Y_h] is connected;
(iii) for 1 ≤ i ≤ h, |NC_C(Y_i)| ≥ 2;
(iv) any node of C* \ C belongs to at most two sets of {Y_0, Y_1, Y_2, . . . , Y_h}.
Proof. Let H be the graph obtained from G[C ∪ C ∗ ] by contracting every component of
G[C] into a super-node (call it a terminal node). Since G[C ∪ C ∗ ] is connected, H is also
connected, and thus H contains a spanning tree T . Recursively pruning non-terminal
leaves, we obtain a tree T ′ in which every leaf is a terminal node (see Fig.2(a)(b)). Let
Y0 be the set of pruned nodes. We may assume that
every non-terminal node has at least one terminal neighbor in T ′ .
(6)
In fact, if this is not true, then we can modify T ′ into another tree satisfying this assumption. See Fig.2(c)(d) for an illustration. In (c), node u does not have a terminal neighbor
in T ′ . Since C is a dominating set, u is adjacent with some component of G[C], say the
component corresponding to terminal node v. Adding edge uv creates a unique cycle in
T ′ + uv. Removing the edge on this cycle which is incident with u in T ′ , we have another
tree in which u has a terminal neighbor (see Fig.2(d)). Notice that the removed edge is
between two non-terminal nodes. So, the number of non-terminal nodes which have no
terminal neighbors is strictly reduced. Recursively making such a modification results in
a tree satisfying assumption (6).
Tree T′ can be viewed as a Steiner tree. By splitting off at non-leaf terminal nodes, T′ can be decomposed into full components T′_1, . . . , T′_l (a full component in a Steiner tree is a subtree in which a node is a leaf if and only if it is a terminal node, see Fig.2(e)). Let T_i be the subtree of T′_i induced by those non-terminal nodes.
For each i ∈ {1, . . . , l}, let v_i be an arbitrary node of T_i and view T_i as a tree rooted at v_i. For each node v ∈ V(T_i), let S_v be the star centered at node v which contains all children of v in T_i. Let S_i = {S_v : v ∈ V(T_i) and |NC_C(S_v)| ≥ 2}. Let S = ⋃_{i=1}^{l} S_i. Then {Y_v = V(S_v) : S_v ∈ S} is a desired decomposition of C* \ (C ∪ Y_0) (see Fig.2(f)).
It should be noted that property (6) is used to guarantee that no node is missed in the decomposition. For example, in Fig.2(c), which does not satisfy this assumption, if we choose x to be the root of the lower subtree, then S_u = uy and S_y = y are stars which do not have at least two terminal neighbors in T′, and thus they are excluded from S. But then, node y does not belong to any star in the decomposition.
Figure 2: An example for the decomposition of C ∗ \ C. Solid nodes are terminal nodes
which correspond to components of G[C]. In (a), double circle nodes are pruned, yielding
tree T ′ in (b). Figures in (c) and (d) are used to show how to obtain a tree satisfying
assumption (6). In (c), the dashed edge uv is in G but not in T ′ . Adding edge uv and
removing edge ux results in the tree in (d). Full components of T ′ are shown in (e). Figure
(f ) shows the decomposed stars.
5.5 The Performance Ratio
Theorem 5.8. The connector set D2 of Algorithm 1 has cost c(D2 ) ≤ 2H(δ − 1)opt.
Proof. Let S_1, S_2, . . . , S_g be the sets chosen by Algorithm 1 in the order of their selection into set D_2. Let D_G^{(0)} = D_1. For 1 ≤ i ≤ g, let
D_G^{(i)} = D_G^{(0)} ∪ S_1 ∪ . . . ∪ S_i.
For i = 1, . . . , g, denote
r_i = −△_{S_i} p(D_G^{(i−1)}) and w_i = c(S_i) / (−△_{S_i} p(D_G^{(i−1)})).
Suppose {Y_0, Y_1, . . . , Y_h} is the decomposition of OPT \ D_1 as in Lemma 5.7, where Y_i is a star centered at node v_i. For i = 1, . . . , g + 1 and j = 1, . . . , h, denote
a_{i,j} = p'_{D_G^{(i−1)}}(Y_j).   (7)
For 1 ≤ j ≤ h, define
f(Y_j) = Σ_{i=1}^{g} (a_{i,j} − a_{i+1,j}) w_i,
and let
f(OPT) = Σ_{j=1}^{h} f(Y_j).   (8)
Claim 1. For any j = 1, . . . , h, f(Y_j) ≤ H(a_{1,j}) c(Y_j).
Since S_i is chosen according to Lemma 5.4, the special structure of S_i implies that
p'_{D_G^{(i−1)}}(S_i) = −△_{S_i} p(D_G^{(i−1)}).
Hence,
w_i = c(S_i) / p'_{D_G^{(i−1)}}(S_i).
Then, by the greedy choice of S_i, we have
w_i = c(S_i)/p'_{D_G^{(i−1)}}(S_i) ≤ c(Y_j)/p'_{D_G^{(i−1)}}(Y_j) = c(Y_j)/a_{i,j}.   (9)
By the definition of p', it can be seen that a_{i,j} is a decreasing function of the variable i. Hence a_{i,j} − a_{i+1,j} ≥ 0. Combining this with (9),
f(Y_j) ≤ Σ_{i=1}^{g} (a_{i,j} − a_{i+1,j}) · c(Y_j)/a_{i,j} ≤ c(Y_j) Σ_{i=1}^{g} (H(a_{i,j}) − H(a_{i+1,j})) = c(Y_j) (H(a_{1,j}) − H(a_{g+1,j})),
where the second inequality uses the fact that for any integers a ≥ b,
(a − b)/a = Σ_{l=b+1}^{a} 1/a ≤ Σ_{l=b+1}^{a} 1/l = H(a) − H(b).
Observe that a_{g+1,j} = 0 since D_G^{(g)} is connected; the claim follows.
Claim 2. c(D_2) ≤ f(OPT).
Notice that c(D_2) and f(Y_j) can be rewritten as
c(D_2) = Σ_{i=1}^{g} r_i w_i = Σ_{i=1}^{g} (Σ_{l=i}^{g} r_l − Σ_{l=i+1}^{g} r_l) w_i = (Σ_{l=1}^{g} r_l) w_1 + Σ_{i=2}^{g} (Σ_{l=i}^{g} r_l)(w_i − w_{i−1})   (10)
and
f(Y_j) = a_{1,j} w_1 + Σ_{i=2}^{g} a_{i,j}(w_i − w_{i−1}).   (11)
By the monotonicity of p', we have p'_{D_G^{(i−1)}}(S_i) ≤ p'_{D_G^{(i−2)}}(S_i). Combining this with the greedy choice of S_{i−1}, we have
w_i = c(S_i)/p'_{D_G^{(i−1)}}(S_i) ≥ c(S_i)/p'_{D_G^{(i−2)}}(S_i) ≥ c(S_{i−1})/p'_{D_G^{(i−2)}}(S_{i−1}) = w_{i−1}.
In other words, w_i − w_{i−1} ≥ 0 for i = 1, . . . , g. Then, by (8), (10), and (11), it can be seen that to prove Claim 2, it suffices to prove that for i = 1, . . . , g,
Σ_{j=1}^{h} a_{i,j} ≥ Σ_{l=i}^{g} r_l.   (12)
The right hand side is
Σ_{l=i}^{g} r_l = Σ_{l=i}^{g} (−△_{S_l} p(D_G^{(l−1)})) = Σ_{l=i}^{g} (p(D_G^{(l−1)}) − p(D_G^{(l)})) = p(D_G^{(i−1)}) − p(D_G^{(g)}) = p(D_G^{(i−1)}) − 1.
So, proving (12) is equivalent to proving
Σ_{j=1}^{h} p'_{D_G^{(i−1)}}(Y_j) + 1 ≥ p(D_G^{(i−1)}).   (13)
This inequality can be illustrated by Fig.3. In this figure, OPT is decomposed into four stars. The value p'_D(Y_1) = 3 can be understood as follows: in Fig.3(c), the double-circled components are merged into the triangled component; call the new component C̃. Similarly, p'_D(Y_2) = 3 means that in Fig.3(d) the double-circled components are merged into the triangled component. Notice that this triangled component is contained in C̃, and thus we can regard it as C̃ in this reading. Continue this procedure sequentially in such a way that Y_1 ∪ . . . ∪ Y_l is connected for l = 1, . . . , 4. Finally, all components of G[D] are merged into one component, and the reduction on the number of components is p(D) − 1. Notice that the inequality might be strict because some components are counted more than once in the summation part. For example, the component labeled by u_4 is counted repeatedly.
Figure 3: An illustration for inequality (13). Solid circles indicate components.
Claim 2 is proved.
By Lemma 5.7 (iv), we have
Σ_{i=1}^{h} c(Y_i) ≤ 2opt.   (14)
Then by Claim 1, Claim 2, and the observation that a_{1,j} ≤ δ − 1, we have
c(D_2) ≤ Σ_{j=1}^{h} c(Y_j) H(δ − 1) ≤ 2H(δ − 1) opt.
The theorem is proved.
Combining Theorem 5.3 and Theorem 5.8, we have the following result.
Theorem 5.9. Algorithm 1 is a polynomial-time (H(δ + m) + 2H(δ − 1))-approximation for the m-MWCDS problem.
6 Implementation on Unit Disk Graphs
Notice that the maximum degree δ in the performance ratio of the second part of the algorithm comes from a_{1,j} ≤ δ − 1. So, to improve the approximation factor when the graph under consideration is a unit disk graph, we first improve the upper bound for a_{1,j}. We use the notation T_1, . . . , T_l from the proof of Lemma 5.7. Recall that each T_i has V(T_i) ⊆ C* \ C and is decomposed into some stars, and the final decomposition of C* \ C is the union of these stars and a set of pruned nodes. We shall use ‖·‖ to denote Euclidean length.
Lemma 6.1. In a unit disk graph, there exists a set of subtrees T1 , . . . , Tl in the proof of
Lemma 5.7 such that any node v ∈ V (Ti ) has |NCC (v)| + degTi (v) ≤ 5.
Proof. Construct a spanning tree of G[C ∪C ∗ ] as follows. First, replace each component of
G[C] by a spanning tree of that component, which is called a tree component. Then, find
a minimum length tree T which spans all nodes of C ∗ \ C and all tree components, where
“minimum length” is with respect to Euclidean distance. For convenience of statement,
we shall call a tree which spans all nodes of C ∗ \ C and all tree components as a valid tree.
Each component of G[C] is called a component node of such a tree and is dealt with as a
whole in the following. Similarly to the proof of Lemma 5.7, by pruning leaves in C* \ C and by splitting off at component nodes, we obtain a set of full components T′_1, . . . , T′_l. Let T_i be the subtree of T′_i with component nodes removed. We choose the tree T such that
Σ_{i=1}^{l} Σ_{u∈V(T_i)} deg_{T_i}(u) is as small as possible.   (15)
Suppose there is a node v ∈ V (Ti ) with |NCC (v)|+degTi (v) = t > 6, assume NCC (v)∪
NTi (v) = {x1 , . . . , xt }, where xj is a node in C ∗ \ C or a node in a component neighbor
of v (if v is adjacent with more than one node of a component neighbor, only one node
of that component neighbor is chosen to appear in {x1 , . . . , xt }), and x1 , . . . , xt are in a
clockwise order around node v. Since t > 6, there is an index j with ∠x_j v x_{j+1} < π/3 (x_{t+1} is viewed as x_1). Then ‖x_j x_{j+1}‖ < max{‖v x_j‖, ‖v x_{j+1}‖}, say ‖x_j x_{j+1}‖ < ‖v x_j‖. Replacing edge v x_j by x_j x_{j+1}, we obtain another valid tree whose Euclidean length is shorter than that of T, a contradiction. So,
every node v ∈ V(T_i) has |NC_C(v)| + deg_{T_i}(v) ≤ 6.   (16)
A node v with |NC_C(v)| + deg_{T_i}(v) = 6 is called bad.
A similar argument shows that for any bad node v ∈ V(T_i), ∠x_j v x_{j+1} = π/3 for j = 1, . . . , 6 and ‖v x_1‖ = · · · = ‖v x_6‖. In other words, x_1, . . . , x_6 are located at the corners of a regular hexagon with center v. First, suppose node v has a component neighbor, say x_1 is in a component neighbor of v. If x_2 ∈ C, then x_2 must be in the same component of G[C] as x_1, because ‖x_1 x_2‖ = ‖v x_1‖ ≤ 1, contradicting our convention that one component has at most one node appearing in {x_1, . . . , x_6}. So, x_2 ∈ C* \ C. Then, T̃ = T − {v x_2} + {x_1 x_2}
is a valid tree with the same length as T. Notice that Σ_{i=1}^{l} Σ_{u∈V(T_i)} deg_{T_i}(u) is decreased by two (edge x_1 x_2 does not contribute to the degree sum since one of its ends is in a component neighbor and thus does not belong to T_i), contradicting the choice of T (see (15)). So, for j = 1, . . . , 6, x_j ∈ C* \ C. Since T̃ = T − {v x_{j−1}, v x_{j+1}} + {x_j x_{j−1}, x_j x_{j+1}} is also a valid tree whose length is the same as T, we have |NC_C(x_j)| + deg_{T̃_i}(x_j) ≤ 6. By noticing that deg_{T̃_i}(x_j) = deg_{T_i}(x_j) + 2 (since both x_{j−1}, x_{j+1} ∈ C* \ C), we have
|NC_C(x_j)| + deg_{T_i}(x_j) ≤ 4.   (17)
This inequality holds for any j = 1, . . . , 6. Notice that T̂ = T − {v x_1} + {x_1 x_2} is a valid tree with the same length as T and Σ_{i=1}^{l} Σ_{u∈V(T̂_i)} deg_{T̂_i}(u) = Σ_{i=1}^{l} Σ_{u∈V(T_i)} deg_{T_i}(u). By properties (16) and (17), we have
|NC_C(u)| + deg_{T̂_i}(u) = |NC_C(u)| + deg_{T_i}(u) − 1 ≤ 5 for u = v,
|NC_C(u)| + deg_{T̂_i}(u) = |NC_C(u)| + deg_{T_i}(u) + 1 ≤ 5 for u = x_1 or x_2,
|NC_C(u)| + deg_{T̂_i}(u) = |NC_C(u)| + deg_{T_i}(u) ≤ 6 for u ≠ v, x_1, x_2.
So, the number of bad nodes in Tb is strictly reduced. By recursively executing such an
operation, we have a tree satisfying the requirement of this lemma.
Recall that condition (6) plays an important role in the decomposition. This does not pose any difficulty here, because by the modification method in the proof of Lemma 5.7, if u ∈ C* \ C is not adjacent with any component neighbor in T′, then we may just add an edge between u and a component neighbor, and remove an edge between u and another node in C* \ C. Such an operation does not increase the value of |NC_C(u)| + deg_{T_i}(u).
Theorem 6.2. When applied to unit disk graphs, the node set D2 produced by Algorithm
1 has cost c(D2 ) ≤ 3.67opt.
Proof. Let T_1, . . . , T_l be the subtrees in Lemma 6.1. Choose node v_i = arg max{c(v) : v ∈ V(T_i)} to be the root of T_i. Decompose C* \ C as in the proof of Lemma 5.7. Let {Y_v^{(i)}} be the set of stars coming from the decomposition of T_i. To avoid ambiguity, we use a_{1,Y_v^{(i)}} to denote p'_{D_1}(Y_v^{(i)}) (which is a_{1,j} in the proof of Theorem 5.8). By Lemma 6.1, if v = v_i, then a_{1,Y_v^{(i)}} ≤ |NC_{D_1}(v)| − 1 + deg_{T_i}(v) ≤ 4; if v ≠ v_i, then a_{1,Y_v^{(i)}} ≤ |NC_{D_1}(v)| − 1 + deg_{T_i}(v) − 1 ≤ 3 (this is because the parent of v is not in Y_v if v ≠ v_i).
Notice that v_i belongs to exactly one star in the decomposition of C* \ C. Hence inequality (14) can be improved to
Σ_{i=1}^{l} Σ_{Y_v^{(i)}} c(Y_v^{(i)}) + Σ_{i=1}^{l} c(v_i) ≤ 2opt.   (18)
Combining Lemma 6.1 with Claim 1 and Claim 2 of Theorem 5.8,
c(D_2) ≤ Σ_{i=1}^{l} Σ_{Y_v^{(i)}} c(Y_v^{(i)}) H(a_{1,Y_v^{(i)}}) ≤ Σ_{i=1}^{l} ( H(4) c(Y_{v_i}^{(i)}) + H(3) Σ_{Y_v^{(i)}, v≠v_i} c(Y_v^{(i)}) ).   (19)
Since D_1 is an m-fold dominating set, every v_i has at least one component neighbor. Then by Lemma 6.1, deg_{T_i}(v_i) ≤ 4, and thus Y_{v_i}^{(i)} has at most five nodes. Since v_i has the maximum cost in T_i, we have c(Y_{v_i}^{(i)}) ≤ 5c(v_i). So,
H(4) c(Y_{v_i}^{(i)}) = H(3) c(Y_{v_i}^{(i)}) + c(Y_{v_i}^{(i)})/4 ≤ H(3) c(Y_{v_i}^{(i)}) + 5c(v_i)/4 < H(3) ( c(Y_{v_i}^{(i)}) + c(v_i) ).
Combining this inequality with (18) and (19),
c(D_2) ≤ H(3) Σ_{i=1}^{l} ( Σ_{Y_v^{(i)}} c(Y_v^{(i)}) + c(v_i) ) ≤ 2H(3) opt < 3.67 opt.
The theorem is proved.
7 Conclusion
In this paper, we presented a (H(δ + m) + 2H(δ − 1))-approximation algorithm for the
minimum weight (1, m)-CDS problem, where δ is the maximum degree of the graph. Compared with the 1.35 ln n-approximation algorithm for the minimum (1, 1)-CDS problem
[12], our constant is larger. However, since in many cases, the maximum degree is much
smaller than the number of nodes, our result is an improvement on the performance ratio
in some sense. In particular, the replacement of n by δ in the performance ratio makes
it possible to obtain a 3.67-approximation for the connecting part when the topology of
the network is a unit disk graph. In fact, Zou et al. obtained a 2.5ρ-approximation for
the connecting part in a unit disk graph, where ρ is the performance ratio for the minimum Steiner tree problem. If the best ρ = 1.39 is used, their algorithm has performance
ratio 3.475. Notice that the 1.39-approximation algorithm for the minimum Steiner tree
problem uses a combination of iterated rounding and random rounding. Our algorithm has the advantage of being purely combinatorial and deterministic. Furthermore, we expect our method to be of theoretical value and to find applications in other related problems.
Acknowledgements
This research is supported by NSFC (61222201, 11531011), SRFDP (20126501110001), and Xingjiang Talent Youth Project (2013711011). It was accomplished while the second author was visiting National Chiao Tung University, Taiwan, sponsored by the “Aiming for the Top University Program” of the National Chiao Tung University and Ministry of Education, Taiwan.
References
[1] Byrka J, Grandoni F, Rothvoss T, Sanità L (2013) J. ACM, 60(1), Article 6.
[2] Cheng X, Huang X, Li D, Wu W, Du DZ (2003) A polynomial-time approximation scheme for the minimum-connected dominating set in ad hoc wireless networks.
Networks, 42(4), 202–208.
[3] Dai F, Wu J (2006) On constructing k-connected k-dominating set in wireless ad hoc
and sensor networks. J Parallel Distrib Comput 66(7), 947–958.
[4] Das B, Bharghavan V (1997) Routing in ad-hoc networks using minimum connected
dominating sets. IEEE International Conference on Comunications, Montreal, 376–
380.
[5] Du DZ, Graham RL, Pardalos PM, Wan PJ, Wu WL, Zhao W (2008) Analysis of
greedy approximation with nonsubmodular potential functions. SODA’08, 167–175.
[6] Du DZ, Ko K, Hu X (2012) Design and analysis of approximation algorithms.
Springer, New York.
[7] Du DZ, Wan PJ (2013) Connected Dominating Set: Theory and Applications,
Springer, New York.
[8] Ephremides A, Wieselthier J, Baker D (1987) A design concept for reliable mobile
radio networks with frequency hopping signaling. Proc IEEE 56–73.
[9] Fukunage T (2015) Constant-approximation algorithms for highly-connected multidominating sets in unit disk graphs. arXiv:1511.09156.
[10] Gao X, Wang Y, Li X, Wu W (2009) Analysis on theoretical bounds for approximating dominating set problems. Discrete Math. Algo. Appl., 1(1), 71–84.
[11] Guha S, Khuller S (1998) Approximation algorithms for connected dominating sets.
Algorithmica, 20(4), 374–387.
[12] Guha S, Khuller S (1999) Improved methods for approximating node weithed steiner
trees and connected dominating sets. Information and Computation 150, 57–74.
[13] Li J, Jin Y (2015) A PTAS for the weighted unit disk cover problem. ICALP’15.
[14] Li M, Wan P, Yao F (2009) Tighter approximation bounds for minimum CDS in
wireless ad hoc networks. ISAAC’09, LNCS 5878, 699–709.
[15] Li Y, Wu Y, Ai C, Beyah R (2012) On the construction of k-connected m-dominating
sets in wireless networks. J. Comb. Optim. 23, 118–139.
[16] Robins G, Zelikovsky A (2000) Improved Steiner tree approximation in graphs.
SODA’00, 770–779.
[17] Ruan L, Du H, Jia X, Wu W, Li Y, Ko K (2004) A greedy approximation for minimum
connected dominating sets. Theor. Comput. Sci. 329(1), 325–330.
[18] Shang W, Yao F, Wan P, Hu X (2008) On minimum m-connected k-dominating set
problem in unit disc graphs. J. Comb. Optim. 16, 99–106.
[19] Shi Y, Zhang Y, Zhang Z, Wu W (2016) A greedy algorithm for the minimum 2connected m-fold dominating set problem. J. Comb. Optim. 31, 136–151.
[20] Shi Y, Zhang Z, Mo Y, Du D-Z (2017) Approximation algorithm for minimum weight fault-tolerant virtual backbone in unit disk graphs. IEEE/ACM Trans. Net. doi:10.1109/TNET.2016.2607723.
[21] Thai M, Zhang N, Tiwari R, Xu X (2007) On approximation algorithms of kconnected m-dominating sets in disk graphs. Theor. Comput. Sci. 385, 49–59.
[22] Wan PG, Alzoubi KM, Frieder O (2002) Distributed construction of connected dominating set in wireless ad hoc networks. INFOCOM’02 1597–1604, Mobile Networks
and Applications, 9, 141–149.
[23] Wan PJ, Wang L, Yao F (2008) Two-phased approximation algorithms for minimum
CDS in wireless ad hoc networks. ICDCS’08, 337–344.
[24] Wang F, Thai M, Du DZ (2009) On the construction of 2-connected virtual backbone
in wireless networks. IEEE Trans. Wirel. Commun. 8(3), 1230–1237.
[25] Wang W, Kim D, An M, Gao W, Li X, Zhang Z, Wu W (2013) On construction
of quality fault-tolerant virtual backbone in wireless networks. IEEE/ACM Trans.
Netw. 21(5), 1499–1510.
[26] Wang W, Liu B, Kim D, Li D, Wang J, Jiang Y (2015) A better constant approximation for minimum 3-connected m-dominating set problem in unit disk graph using
Tutte decomposition, to appear in INFOCOM’15.
[27] Wu W, Du H, Jia X, Li Y, Huang S (2006) Minimum connected dominating sets and
maximal independent sets in unit disk graphs. Theor. Comput. Sci. 352, 1–7.
[28] Zhang Z, Gao X, Wu W, Du DZ (2009) A PTAS for minimum connected dominating
set in 3-dimensional wireless sensor networks. J. Glob. Optim. 45, 451–458.
[29] Zhang Z, Willson J, Wu W, Zhu X, Du DZ (2015) (3 + ε)-approximation for the minimum weight k-cover problem, in submission. A preliminary version was Computer
Communications (INFOCOM), 2015 IEEE Conference on, pp 1364–1372.
[30] Zhang Z, Liu Q, Li D (2009) Two algorithms for connected r-hop k-dominationg set.
Discrete Math. Algo. Appl. 1(4), 485–498.
[31] Zhang Z, Shi Y (2015) Approximation algorithm for minimum weight fault-tolerant
virtual backbone in homogeneous wireless sensor network. Computer Communications (INFOCOM), 2015 IEEE Conference on, pp 1080–1085.
[32] Zhang Z, Zhou J, Mo Y, Du D-Z (2016) Performance-guaranteed approximation
algorithm for fault-tolerant connected dominating set in wireless networks. INFOCOM2016.
[33] Zhou J, Zhang Z, Wu W, Xing K (2014) A greedy algorithm for the fault-tolerant
connected dominating set in a general graph. J. Comb. Optim. 28(1), 310–319.
[34] Zou F, Li X, Gao S, Wu W (2009) Node-weighted Steiner tree approximation in unit
disk graphs. J. Comb. Optim. 18, 342–349.
[35] Zou F, Wang Y, Xu X, Du H, Li X, Wan P, Wu W (2011) New approximations
for weighted dominating sets and connected dominating sets in unit disk graphs.
Theoret. Comput. Sci. 412(3), 198–208.
| 8 |
arXiv:0711.4444v2 [cs.MS] 29 Nov 2007
INSTITUT NATIONAL DE RECHERCHE EN INFORMATIQUE ET EN AUTOMATIQUE
Building the Tangent and Adjoint codes of the Ocean
General Circulation Model OPA with the Automatic
Differentiation tool TAPENADE
M.H. Tber — L. Hascoët — A. Vidard — B. Dauvergne
N° 6372
Novembre 2007
Thème NUM
apport
de recherche
Building the Tangent and Adjoint codes of the Ocean General Circulation Model OPA with the Automatic Differentiation tool TAPENADE
M.H. Tber*, L. Hascoët*, A. Vidard†, B. Dauvergne*
Thème NUM — Systèmes numériques
Projet Tropics
Rapport de recherche n° 6372 — Novembre 2007 — 28 pages
Abstract: The ocean general circulation model OPA is developed by the LODYC team at Paris VI university. OPA has recently undergone a major rewriting, migrating to FORTRAN95, and its adjoint code needs to be rebuilt. For earlier versions, the adjoint of OPA was written by hand at a high development cost. We use the Automatic Differentiation tool TAPENADE to build mechanically the tangent and adjoint codes of OPA. We validate the differentiated codes by comparison with divided differences, and also with an identical twin experiment. We apply state-of-the-art methods to improve the performance of the adjoint code. In particular we implement the Griewank and Walther's binomial checkpointing algorithm which gives us an optimal trade-off between time and memory consumption. We apply a specific strategy to differentiate the iterative linear solver that comes from the implicit time stepping scheme.
Key-words: OPA, general circulation model, TAPENADE, Automatic Differentiation, reverse mode, Adjoint Code, Checkpointing
* INRIA Sophia Antipolis, France
† INRIA Grenoble, France
Construction of the Tangent and Adjoint codes of the ocean general circulation model OPA with the Automatic Differentiation tool TAPENADE
Résumé: The ocean general circulation model OPA is developed by the LODYC team at Paris VI university. The new version 9 of OPA is a major evolution, with in particular a migration to FORTRAN95. The Tangent Linear and Adjoint codes of OPA, which were previously written by hand, therefore need to be redeveloped. We use the Automatic Differentiation tool TAPENADE to build the Tangent and Adjoint codes of OPA 9. We validate the obtained derivatives by comparison with Divided Differences and on two test applications including twin experiments. We use the binomial recursive checkpointing scheme of Griewank and Walther to improve the performance of the adjoint code. We use a specific strategy to differentiate the iterative linear solver coming from the implicit time-stepping scheme. Our results show a reasonable cost, both in memory consumption and in the run time of the adjoint.
Mots-clés: OPA, Ocean Circulation, TAPENADE, Automatic Differentiation, reverse mode, Adjoint Code, Checkpointing
1 Introduction
The development of tangent and adjoint models is an important step in addressing sensitivity analysis and variational data assimilation problems in Oceanography. Sensitivity analysis is the study of how model output varies with changes in model inputs. The sensitivity information given by the adjoint model is used directly to gain an understanding of the physical processes. In data assimilation, one considers a cost function which is a measure of the model-data misfit. The adjoint sensitivities are used to build the gradient for descent algorithms. Similarly the tangent model is used in the context of the incremental algorithms [3] to linearize the cost function around a background control. For the previous version 8 of the Ocean General Circulation Model OPA [17], Weaver et al [22] developed the numerical tangent and adjoint codes by hand using classical techniques [5, 19]. Since then, the OPA model has undergone a major update. Particularly, the new versions are fully rewritten in FORTRAN95. In this paper, we report on the development of tangent and adjoint codes of OPA using the Automatic Differentiation (AD) tool TAPENADE [12]. A brief description of the OPA model and the configuration used in this work is given in the next section. In section 3 we present the principles of AD and how they are reflected into the functionalities of the AD tool TAPENADE. In section 4 we focus on the most interesting difficulties that we encountered in the application of AD to such a large code. Section 5 shows some experiments that validate our derivatives and presents two illustrative applications, focusing on computational aspects rather than implications for oceanography. An outlook of further work is given in the conclusion.
2 The Ocean General Circulation Model OPA
Developed by the LODYC team at Paris VI university, OPA is a flexible ocean circulation model that can be used either in a regional or in a global ocean configuration. OPA is the ocean model component of NEMO (Nucleus For European Modelling of the Ocean) and is widely used in the scientific community. Moreover it is becoming a major actor in operational oceanography (Mercator, ECMWF, UK-Met office). Its formulation is based on the so-called primitive equations for the temporal evolution of ocean velocity currents, temperature and salinity in its three horizontal and vertical dimensions. These equations are derived from Navier-Stokes equations coupled with a state equation for water density and heat equation, under Boussinesq and hydrostatic approximations.
Let us introduce the following variables: U = U_h + w k the velocity vector (the subscript h denotes the local horizontal vector), T the potential temperature, S the salinity, p the pressure and ρ the in-situ density. The vector invariant form of the primitive equations in an orthogonal set of unit vectors linked to the earth are written as follows:

∂U_h/∂t = −[(∇ × U) × U + (1/2) ∇(U²)]_h − f k × U_h − (1/ρ₀) ∇_h p + D^U
∂p/∂z = −ρ g
∇ · U = 0
∂T/∂t = −∇ · (T U) + D^T
∂S/∂t = −∇ · (S U) + D^S
ρ = ρ(T, S, p)

where ∇ is the generalized derivative vector operator, t the time, z the vertical coordinate, ρ₀ a reference density, f the Coriolis acceleration, and g the gravity acceleration. D^U, D^T and D^S are the parametrizations of small scale physics for momentum, temperature and salinity, including surface forcing terms. A full description of the model basics, discretization, physical and numerical details can be found in [17].
Through this paper, OPA is used in its global free surface configuration ORCA-2. In this configuration the model uses a rotated grid with poles on North America and Asia in order to avoid the singularity problem on the North Pole. The space resolution is roughly equivalent to a geographical mesh of 2° by 1.3°, with a meridional resolution of 0.5° near the Equator (see figure 1). The vertical domain, spreading from the surface to a depth of 5000 m, is meshed using 31 levels, with levels 1 to 10 in the top 100 meters. The time step is 96 minutes, so that there are 15 time steps per day. The model is forced by heat, freshwater, and momentum fluxes from the atmosphere and/or the sea-ice.

Figure 1: ORCA 2 Mesh
The solar radiation penetrates the upper layers of the ocean. Zero fluxes of heat and salt are applied through the bottom. On the lateral solid boundaries a no-slip condition is also applied. Initialization of the model for temperature and salinity is based on the Levitus et al. (1998) climatology with a null initial velocity field. For more details about the space time-domain and the ocean physics of ORCA-2, we refer to the page dedicated to this configuration on the official website of NEMO-OPA¹.

The configuration ORCA-2 is routinely used by MERCATOR/Meteo-France to compute the oceanic component of their seasonal forecasting system. The size of OPA-9, 200 modules defining 800 procedures with over 100 000 lines of FORTRAN95, makes it the largest application differentiated by TAPENADE to date. The computational kernel which is actually differentiated accounts for 330 procedures.

¹ http://www.lodyc.jussieu.fr/NEMO/general/description/ORCA_config.html
3 Principles of AD and the tool TAPENADE

TAPENADE [12] is an AD tool developed by the Tropics² team at INRIA. Given the source of an original program that evaluates a mathematical function, and given a selection of input and output variables to be differentiated, TAPENADE produces a new source program that computes the partial derivatives of the selected outputs with respect to the selected inputs.

Basically, TAPENADE does that by inserting additional statements into a copy of the original program. Like other AD tools, TAPENADE is based on the fundamental observation that the original program P, whatever its size and run time, computes a function F : X ∈ IR^m ↦ Y ∈ IR^n which is the composition of the elementary functions computed by each run-time instruction. In other words, if P executes a sequence of elementary statements I_k, k ∈ [1..p], then P actually evaluates

F = f_p ◦ f_{p-1} ◦ · · · ◦ f_1 ,

where each f_k is the function implemented by I_k. Therefore one can apply the chain rule of derivative calculus to get the Jacobian matrix F′, i.e. the partial derivatives of each component of Y with respect to each component of X. Calling X_0 = X and X_k = f_k(X_{k-1}) the successive values of all intermediate variables, i.e. the successive states of the memory throughout execution of P, we get

F′(X) = f′_p(X_{p-1}) × f′_{p-1}(X_{p-2}) × · · · × f′_1(X_0) .   (1)

The derivatives f′_k of each elementary instruction are easily built, and must be inserted in the differentiated program so that each of them has the values X_{k-1} directly available for use. This process yields analytic derivatives, that are exact up to numerical accuracy.
In practice, two sorts of derivatives are of particular importance in scientific computing: the tangent (or directional) derivatives, and the adjoint (or reverse) derivatives. In particular, tangent and adjoint are the two sorts of derivative programs required for OPA, and TAPENADE provides both. The tangent derivative is the product Ẏ = F′(X) × Ẋ of the full Jacobian times a direction Ẋ in the input space. From equation (1), we find

Ẏ = F′(X) × Ẋ = f′_p(X_{p-1}) × f′_{p-1}(X_{p-2}) × · · · × f′_1(X_0) × Ẋ   (2)

which is most cheaply executed from right to left because matrix×vector products are much cheaper than matrix×matrix products. This is also the most convenient execution order because it uses the intermediate values X_k in the same order as the program P builds them. On the other hand the adjoint derivative is the product X̄ = F′*(X) × Ȳ of the transposed Jacobian times a weight vector Ȳ in the output space. The resulting X̄ is the gradient of the dot product (Y · Ȳ). From equation (1), we find

X̄ = F′*(X) × Ȳ = f′*_1(X_0) × · · · × f′*_{p-1}(X_{p-2}) × f′*_p(X_{p-1}) × Ȳ   (3)

which is also most cheaply executed from right to left. However, this uses the intermediate values X_k in the inverse of their building order in P.

Regarding the runtime cost for obtaining the derivatives, both the tangent Ẏ and the adjoint X̄ cost only a small multiple of the original program P. The slowdown factor is less than 4 in theory. In practice it can be less than 2 for the tangent, whereas it can reach up to 10 for the adjoint for a reason discussed below. Despite its higher cost, the adjoint code is still by far the cheapest way to obtain gradients. To get the gradient with the tangent mode would require m runs of the tangent code, one per dimension of X, whereas this cost is independent of m with the adjoint mode.

² http://www-sop.inria.fr/tropics/
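To make the two evaluation orders concrete, here is a minimal numerical sketch in Python (a toy two-statement program with hand-written elementary Jacobians standing in for the AD-generated statements; the function names and values are ours, not OPA's):

# Toy illustration of equations (2) and (3): the tangent mode propagates a
# direction Xdot forward through the elementary Jacobians, the adjoint mode
# propagates a weight Ybar backward through their transposes, and both reuse
# the same intermediate states X_k.
import numpy as np

def f1(x):  return np.array([x[0] * x[1], x[1] + 2.0])
def J1(x):  return np.array([[x[1], x[0]], [0.0, 1.0]])
def f2(x):  return np.array([np.sin(x[0]), x[0] * x[1]])
def J2(x):  return np.array([[np.cos(x[0]), 0.0], [x[1], x[0]]])

def tangent(X, Xdot):
    X0, X1 = X, f1(X)                        # intermediate states, built forward
    return f2(X1), J2(X1) @ (J1(X0) @ Xdot)  # Y and Ydot = F'(X) Xdot

def adjoint(X, Ybar):
    X0, X1 = X, f1(X)                        # the X_k are needed again, in reverse order
    return J1(X0).T @ (J2(X1).T @ Ybar)      # Xbar = F'*(X) Ybar

X, Xdot, Ybar = np.array([1.0, 2.0]), np.array([0.3, -0.7]), np.array([1.0, -1.0])
Y, Ydot = tangent(X, Xdot)
Xbar = adjoint(X, Ybar)
print(np.dot(Ybar, Ydot), np.dot(Xbar, Xdot))   # equal: (Ybar . F'Xdot) = (F'*Ybar . Xdot)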
The difficulty of the adjoint mode lies in the fact that it needs the intermediate values X_k in reverse order. To this end, TAPENADE basically uses a two-sweeps strategy, called Store-All. In the first sweep (the forward sweep), a copy of the original program P is run, together with Push statements that store intermediate values on a stack just before they get overwritten. In the second sweep (the backward sweep), the derivative statements compute the elementary derivatives f′*_k(X_{k-1}) for k = p down to 1, using Pop statements to restore the intermediate values as they are required. This incurs a cost in memory space, as the maximum stack size needed is attained at the end of the forward sweep, and is thus proportional to the length of the program P. There is also a runtime penalty for these stack manipulations. TAPENADE implements a number of strategies [11] to mitigate this cost, based on static data-flow analysis of the program's control flow graph, reducing the number of values X_k that need to be stored. However for very long programs such as OPA, involving unsteady simulations, Store-All cannot work alone. TAPENADE combines it with a storage/recomputation trade-off called checkpointing.
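The Store-All mechanism itself can be sketched in a few lines (plain Python, with an explicit list playing the role of TAPENADE's Push/Pop stack; the three statements form an arbitrary toy example):

# Forward sweep: run the original statements, pushing each value of x just
# before it is overwritten.  Backward sweep: pop the values in reverse order
# and apply the elementary adjoints f_k'* to the running adjoint xb.
import math

def adjoint_store_all(x, xb):
    stack = []
    stack.append(x); x = x * x          # statement 1
    stack.append(x); x = math.sin(x)    # statement 2
    stack.append(x); x = 3.0 * x        # statement 3
    y = x
    x = stack.pop(); xb = 3.0 * xb          # adjoint of statement 3
    x = stack.pop(); xb = math.cos(x) * xb  # adjoint of statement 2, needs the old x
    x = stack.pop(); xb = 2.0 * x * xb      # adjoint of statement 1, needs the old x
    return y, xb

y, grad = adjoint_store_all(0.7, 1.0)
print(grad)   # equals d(3*sin(x*x))/dx at x = 0.7, i.e. 6*x*cos(x*x)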
Checkpointing reduces the maximum stack size at the cost of duplicated executions. Consider a piece C of the original program P. Checkpointing C as illustrated on figure 2 means that during the main forward sweep, C pushes no value on the stack. When the backward sweep reaches back to the place where intermediate values are now missing on the stack, it runs C a second time, this time with the Store-All strategy, i.e. pushing values on the stack. The backward sweep can then resume safely. To run C twice requires that enough of its input values, a snapshot, are stored, but the size of a snapshot is generally much less than the stack size used by C. Obviously, this also slows down the adjoint program. When C is well chosen, checkpointing can divide the peak size of the stack by a factor of two. Checkpoints can be nested, in which case both the stack's peak size and the adjoint runtime slowdown grow as little as the logarithm of the size of P. In its default mode, TAPENADE applies checkpointing to each procedure call.
Figure 2: Checkpointing applied to the program piece C. Rightwards arrows represent forward sweeps, thick when they store intermediate values on the stack, thin otherwise. Leftwards arrows represent backward sweeps. Black dots are stores, white dots are retrieves. Small dots are Pushes and Pops, big dots are snapshots.
TAPENADE's capacity to generate robust and efficient tangent and adjoint codes has been demonstrated on several real-world test applications [15, 7, 1, 13, 16]. Regarding the application language, it can handle programs written in FORTRAN. Taking into account the new programming constructs provided by FORTRAN95 has required an important programming effort in the past few years, mostly to handle modules, structured data types, array notation, pointers, and dynamic memory allocation. Since the new OPA 9 is now written in FORTRAN95, differentiation of OPA is a very realistic test for the new TAPENADE 2.2.
There exist several other AD tools. Restricting to the tools which, like TAPENADE, operate by source transformation, provide tangent and adjoint modes, use global program analysis to optimize the differentiated code, and have demonstrated their applicability on large industrial codes, we can mention TAF [4], a pioneer of AD for meteorology, now the standard AD tool for the popular MIT Global Circulation Model. Unlike TAPENADE's, the adjoint mode of TAF regenerates the intermediate values X_k by recomputation from a given initial point. This is called a Recompute-All strategy. Comparison with the Store-All strategy is getting blurred by nested checkpointing, as the adjoint codes grow more alike as more checkpoints are inserted. OpenAD [20], successor of ADIFOR and ADIC, uses the Store-All strategy. There are experiments to also apply OpenAD to the MIT GCM. The tool Adol-C [10], although using operator overloading instead of source transformation, is very popular and has been applied successfully to many industrial applications. Its adjoint mode can be seen as an extension of the Store-All strategy: not only the intermediate values are stored on the stack, but also the computation graph to be differentiated. This allows the AD tool to perform further optimizations on this graph, at the cost of a higher memory consumption.
4 Applying TAPENADE to OPA

We generated working tangent and adjoint codes for the computational kernel of OPA, using TAPENADE. Depending on the final application (cf. section 5), the actual function to differentiate as well as the input and output variables may be different, but the technical difficulties that we encountered are essentially the same. This section describes these points.

4.1 FORTRAN95 constructs

The new OPA 9 uses extensively the modular constructs of FORTRAN95. We had to extend the call-graph internal representation of TAPENADE to handle the nesting of modules and procedures. Essentially this nesting is mirrored into the differentiated code. Because a module can define private components, subroutines in the differentiated modules do not have access to all variables of the original module. Therefore the differentiated module must contain its own copy of all the original module's variables, types, and procedures. This is a change in TAPENADE's differentiation model: the differentiated code cannot just call or use parts of the original code; it must contain its own copies of those. In other words, the differentiated code need not be linked with the original.

The interface mechanism of FORTRAN95 is a way to implement overloaded procedures. This is static overloading, which is resolved at compile time. Therefore we had to extend the TAPENADE type-checking phase to completely resolve the calls to interfaced procedures. Conversely, TAPENADE is now able to generate interfaces on the differentiated procedures, so that the general structure of the code is preserved.

The array notation of FORTRAN95 is used systematically in OPA. At the same time, differentiation requires that many calls to intrinsic functions be split to propagate the derivatives. When these functions are used on arrays ("elemental" intrinsics) TAPENADE must generate a code which is far from trivial. For instance the single statement from OPA:
zws(:,:,:) = SQRT(ABS(psal(:,:,:)))
generates in the adjoint mode
abs1 = ABS(psal(:,:,:))
mask = (psal(:,:,:) .GT. 0.0)
...
WHERE (abs1 .EQ. 0.0)
abs1b = 0.0
ELSEWHERE
abs1b = zwsb(:, :, :)/(2.0*SQRT(abs1))
END WHERE
WHERE (.NOT.mask(:, :, :))
psalb(:, :, :) = psalb(:, :, :) - abs1b
ELSEWHERE
psalb(:, :, :) = psalb(:, :, :) + abs1b
END WHERE
Without going in too much detail into the adjoint differentiation model, we observe that the test that is needed to protect the differentiated code against the non-differentiability of SQRT at 0, as well as the test that controls the differentiation of ABS, have been turned into WHERE constructs to keep the runtime benefits of array notation. Some temporary variables are introduced automatically to store control-flow decisions (e.g. abs1 and mask), although TAPENADE still doesn't do this in an optimal way on the example.
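For readers more used to array languages, the following numpy transliteration (ours, purely illustrative) performs the same computation as the generated adjoint above, with np.where playing the role of the WHERE constructs:

# Same logic as the generated adjoint of zws = SQRT(ABS(psal)): protect the
# derivative of SQRT at 0, and let the sign test of ABS decide whether the
# contribution abs1b is added to or subtracted from psalb.
import numpy as np

def adjoint_sqrt_abs(psal, zwsb, psalb):
    abs1 = np.abs(psal)
    mask = psal > 0.0
    abs1b = np.zeros_like(zwsb)
    np.divide(zwsb, 2.0 * np.sqrt(abs1), out=abs1b, where=(abs1 != 0.0))
    return np.where(mask, psalb + abs1b, psalb - abs1b)

psal = np.array([-4.0, 0.0, 9.0])
print(adjoint_sqrt_abs(psal, np.ones(3), np.zeros(3)))   # [-0.25, 0., 0.1666...]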
OPA uses pointers and dynamic memory allocation (calls to ALLOCATE and DEALLOCATE). This is an application for the pointer analysis now available in TAPENADE, finding whether a variable has a derivative, even when this variable is accessed through a pointer. Unfortunately, dynamic allocation is handled partly, i.e. only in the tangent mode of TAPENADE. In the adjoint mode, we have no general strategy for memory allocation and TAPENADE sometimes cannot produce a working code. We understand that the adjoint of an ALLOCATE should be a DEALLOCATE, and vice-versa, but some changes must be made by hand on the differentiated code to make it work.
4.2 Checkpointing and hidden variables

OPA reads and writes several data files, not only during the pre- and post-processing stages, but also during the computational kernel itself. Source terms such as the wind stress are being read at intermediate time steps. Also, some modules and procedures define private SAVE variables, whose value is preserved but cannot be accessed from outside. Although unrelated, these two points are just examples of a common problem: they can make a procedure non reentrant. If a called procedure modifies an internal SAVE variable, it becomes impossible from the outside calling context to call the procedure a second time with an identical result. Similarly if the called procedure reads from a previously opened file, and just moves the read pointer further in the file, then it becomes impossible to call the procedure twice and obtain the same values read.
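A toy illustration (Python, unrelated to the actual OPA routines) of why such hidden state defeats duplicate execution:

# A routine with hidden persistent state: re-running it on the same visible
# input does not reproduce the same result, which is exactly what the
# checkpointing mechanism would need.
class Routine:
    def __init__(self):
        self._save = 0.0        # plays the role of a private SAVE variable

    def step(self, x):
        self._save += x         # hidden state changes at every call
        return x + self._save

r = Routine()
print(r.step(1.0))   # 2.0
print(r.step(1.0))   # 3.0 -- the duplicate execution gives a different value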
Non reentrant procedures are a problem for the checkpointing strategy of the adjoint mode. We saw in section 3 that checkpointing relies on calling the checkpointed piece twice, in such a way that the second call is equivalent to the first. To this end, a sufficient subset of the execution context, the snapshot, must be saved and restored. Hidden variables like an internal SAVE variable or the read pointer inside an opened file cannot be saved nor restored in general. When checkpointing would require hidden variables to be put in the snapshot, then checkpointing should be forbidden.

Similarly, when a procedure only allocates some memory, the allocation must not be done twice. If this procedure is checkpointed, then one must deallocate the memory when restoring the snapshot before the duplicate call. TAPENADE is not yet able to do this automatically.

TAPENADE has some functionalities to cope with this hidden variables problem, but in all cases interaction with the user is necessary. First, TAPENADE issues a warning message when a subroutine cannot be checkpointed because of a private SAVE variable. The message is issued only when this variable would be part of the snapshot for this procedure. When this happened for OPA, we just turned by hand the variables in question into public global variables in the original code. In principle this could also be done automatically. However there are only a handful of such variables, thus developing this is not our priority.

When a subroutine is not reentrant because of I/O file pointers or because of isolated memory allocation or deallocation, then TAPENADE lets the user label the subroutine so that it must not be checkpointed. For OPA, we took another strategy: we modified the main I/O subroutines so that they always first make sure that the file is opened and then only use direct reads into the file without using a read I/O pointer. Thus all I/O subroutines are reentrant.
4.3 Binomial Checkpointing

Automatic Differentiation of OPA is one of the most ambitious applications of TAPENADE so far. It means building the adjoint of a piece of code that performs an unsteady nonlinear simulation over a very large number of time steps. Each time step computes a new state whose size ranges in the hundreds of megabytes. In adjoint mode, if no checkpointing was applied, which means that all intermediate values were to be stored on a stack, we could execute only a handful of time steps before we run out of memory even on our largest workstation. Checkpointing is compulsory to compute the adjoint over several thousands of time steps, which is our goal.

We saw in section 3 that TAPENADE applies checkpointing at the level of subroutine calls, i.e. each call is checkpointed. This easy strategy is often far from optimal. On one hand, several calls are better not checkpointed, and TAPENADE now offers the option to mark selected calls for not checkpointing. On the other hand, checkpointing should be applied at other locations. For example, at the top level of the simulation program is a loop over many time steps. We definitely need an efficient checkpointing scheme applied at this level of time iterations.

One classical solution, used by TAF on the MIT GCM code [14], is called multi level recursive checkpointing. Basically, it splits the complete time interval into a small number of equidistant intervals, then applies the same strategy to each of the sub-intervals. For instance 64 time steps can be split into 4 large intervals of 4 small intervals of 4 time steps, as sketched on figure 3. This consumes a maximum of 9 simultaneous snapshots, and the average number of duplicate executions for a time step is 2.25. In a more realistic situation, 1000 time steps can be split into 10 large intervals of 10 small intervals of 10 time steps, and one can figure out that this consumes a maximum of 27 simultaneous snapshots, and the average number of duplicate executions for a time step is 2.7.
Figure 3: Three-levels checkpointing with 64 time steps and 9 snapshots. Forward computations go right, adjoint computations go left. Black circles represent writing/taking a snapshot, white circles represent reading an available snapshot.
However, it was shown in [21] that this strategy is not optimal. Under the reasonable assumptions that all time steps cost the same run time, and that the snapshot needed to run again from time step n to n+1 is the same as to run from step n to any later step n+x, Griewank and Walther have characterized the optimal distribution of nested checkpoints, which follows a binomial law. With this optimal strategy, both the spatial and temporal complexity of the adjoint code grow logarithmically with respect to the number of time steps of the original simulation. In other words, both the slowdown factor, which grows like the number of times each time step is executed, and the memory, which grows like the number of simultaneous snapshots, grow logarithmically with the total number of time steps.

In real applications, run-time and memory space do not behave symmetrically. One can always wait a little longer for the result, whereas the memory space is bounded. Therefore the maximum number of snapshots d that can be stored simultaneously is fixed. Then [8] shows that the optimal strategy gives a slowdown factor that grows only like the d-th root of the total number of time steps, which is still very good. Figure 4 shows the optimal checkpointing strategy for the same problem as figure 3, i.e. 64 time steps with memory for 9 snapshots. The average number of duplicate executions for a time step is only 2. For the more realistic situation (1000 time steps and memory for 27 snapshots) the average number of duplicate executions is only 2.57.

We implemented this optimal strategy in the adjoint code of OPA. We made our first experiments by hand modification of the adjoint code produced by TAPENADE. Still, TAPENADE produced automatically the procedures that store and retrieve the snapshot, and therefore the hand modification was benign: given the number of time steps, a general procedure³ schedules the optimal sequence of actions (store snapshot, retrieve snapshot, run time step, run adjoint time step) to differentiate the complete simulation. Further versions of TAPENADE will fully automate this process. Figure 5 shows the performances on OPA. They are in good agreement with the theory. Notice in particular the two small inflection points on the curve around 150 iterations and 800 iterations.

³ A FORTRAN95 implementation of this scheduling procedure can be found at www-sop.inria.fr/tropics/ftp/Hicham_Tber/
Figure 4: Optimal binomial checkpointing with 64 time steps and 9 snapshots.

Figure 5: Optimal binomial checkpointing with 15 snapshots: slowdown factor as a function of the total length of the initial simulation. The slowdown factor is the run-time ratio of the adjoint code compared to the original code.
Going back to the optimality proof in [8], we see that the optimal strategy is particularly efficient when the number of time steps is exactly

η(d, t) = (d + t)! / (d! t!)

where d is the number of snapshots and t is the number of duplicate executions allowed per time step. For our target machine d = 15, and we find η(15, 2) = 136 and η(15, 3) = 816, which corresponds to the inflection points of figure 5.
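These values are easy to check with a two-line sketch:

# eta(d, t) = (d + t)! / (d! t!) is the number of time steps that binomial
# checkpointing handles with d snapshots and at most t duplicate executions
# per step; d = 15 reproduces the inflection points of figure 5.
from math import comb

def eta(d, t):
    return comb(d + t, d)

print(eta(15, 2), eta(15, 3))   # 136 816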
For the previous version OPA 8, the adjoint was written by hand. Nevertheless, even a hand-written adjoint must implement strategies to retrieve intermediate states in reverse order, that is, something very close to checkpointing. Looking at this hand-written adjoint, we first observe that the checkpointing strategy is neither multi level nor optimal binomial. It is more like a single level strategy, with one snapshot stored every fixed number of time steps. During the reverse sweep, states between two stored snapshots are rebuilt approximately using linear interpolation. The advantage is that few time steps are evaluated twice, and therefore the slowdown factor remains well below 4. We can see at least two drawbacks. First, this hand manipulation requires deep knowledge of the original program and of the underlying equations. This method does not blend easily with Automatic Differentiation. It is not yet automated in any AD tool and therefore tedious and error-prone code manipulations would still be necessary. Second, this introduces approximation errors into the computed derivatives, whose mathematical behavior is unclear. The gradient obtained in the end is used in complex optimizations or data-assimilation loops, and small errors may result in poor convergence. In any case, for very large numbers of time steps, we believe a trade-off between exact binomial checkpointing and approximate interpolation is worth experimenting. Interpolation is probably good enough for many variables that vary very slowly, and which could be designated by the end-user, and only the other variables would need to be stored.
4.4 Iterative linear solver

The OPA model solves an elliptic equation at the end of each time step, using an iterative method that generates a sequence of approximations of the exact solution. The mechanical application of AD on this kind of method gives a sequence of derivatives of the approximate solutions, with the same number of iterations as the original solver. The reason is that AD keeps the flow of control of the original program in the differentiated program. In particular the convergence tests are still based only on the non-differentiated variables. Naturally, one may ask whether and how AD-produced derivatives are reasonable approximations to the desired derivative of the exact solution. The issues of derivative convergence for iterative solvers in relation to AD are discussed in [6, 9, 2].
OPA provides two alternative algorithms to solve the elliptic equation: PCG, for Preconditioned Conjugate Gradient, and SOR, for the Successive Over-Relaxation method. Both algorithms give correct results for the original code, but PCG is generally preferred thanks to its efficiency and vectorization properties. However, the AD-differentiated code gives different results using the two algorithms. Figure 6 compares the AD-derivatives with approximate derivatives obtained by divided differences. We see that the derivatives obtained with the SOR algorithm remain correct when the number of time steps increases. On the contrary, the derivatives obtained with the PCG algorithm become completely wrong after 80 time steps. Notice that this occurs in tangent mode as well as in adjoint mode: the derivatives obtained with PCG, although wrong, remain identical in tangent and adjoint. Our explanation is that each iteration of PCG involves the computation of scalar products of variables that depend on the state vector, thus making the numerical algorithm nonlinear even though the elliptic equation is linear. In [6], Gilbert has shown that the application of AD to a fixed point iteration gives a derivative fixed point iteration that converges R-linearly to the desired derivative, in particular in the case of a large contractive iterate or secant updating. Unfortunately this is not the case for quasi-Newton iterative solvers such as PCG, for which there is no similar convergence result to our knowledge.

To solve this problem for the tangent-differentiated OPA we exploit the linearity of the elliptic system, and for the adjoint-differentiated OPA we exploit the self-adjointness property of the elliptic operator [22]. We can thus use the original PCG routine itself to solve for the differentiated linear systems. Practically, we do this using the so-called black-box feature provided in TAPENADE. Figure 6 shows that (here for the tangent mode) the PCG gives the same accuracy as the SOR solver.
Figure 6: Evolution of the relative error between tangent derivatives and divided differences, for the three strategies: SOR and straightforward AD, PCG and straightforward AD, PCG with the black-box strategy.
In another experiment, we tried to use straightforward AD with the PCG solver, but this time fixing the number of PCG iterations to some very high value. We observed that the derivatives become coherent again with divided differences. This could be another way to solve our problem, but it is certainly expensive and the choice of the high iteration number is delicate. This problem definitely deserves further study, and confirms the general recommendation not to differentiate solvers of a nonlinear kind, and to use a black-box strategy instead.
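The tangent-mode version of this black-box strategy can be sketched on a toy dense system (numpy stands in for OPA's iterative solver; the system A(p)x = b(p) and its derivatives are invented for the illustration): since A x = b implies A ẋ = ḃ − Ȧ x, the same solver is simply called once more on the derivative right-hand side.

import numpy as np

def solver(A, rhs):                    # stands in for the un-differentiated PCG/SOR routine
    return np.linalg.solve(A, rhs)

def A_of(p):    return np.array([[4.0 + p, 1.0], [1.0, 3.0]])
def b_of(p):    return np.array([1.0, 2.0 * p])
def Adot(pdot): return np.array([[pdot, 0.0], [0.0, 0.0]])
def bdot(pdot): return np.array([0.0, 2.0 * pdot])

p, pdot, eps = 0.5, 1.0, 1e-6
x    = solver(A_of(p), b_of(p))
xdot = solver(A_of(p), bdot(pdot) - Adot(pdot) @ x)   # black-box tangent solve

dd = (solver(A_of(p + eps * pdot), b_of(p + eps * pdot)) - x) / eps
print(xdot, dd)   # agree up to the divided-difference error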
5 Validation Experiments

5.1 Correctness test

The classical way to check for correctness of the automatically generated tangent and adjoint codes is as follows:

1. Choose an arbitrary input X and an arbitrary direction Ẋ. Compute the Divided Difference

   DD = (F(X + εẊ) − F(X)) / ε

   for a good enough small ε.

2. Using the tangent differentiated program, compute the derivative Ẏ = F′(X) × Ẋ.

3. Using the adjoint differentiated program, compute X̄ = F′*(X) × Ẏ and finally check that (DD · DD) = (Ẏ · Ẏ) = (X̄ · Ẋ).

We performed this test for the complete global ORCA-2 simulation on 1000 time steps and its differentiated codes.
The results are shown in table 1.

Table 1: Dot product test for 1000 time steps

(DD · DD) (ε = 10⁻⁷)    4.405352760987440e+08
(Ẏ · Ẏ)                 4.405346876439977e+08
(X̄ · Ẋ)                 4.405346876439867e+08
The values match, and (Ẏ · Ẏ) and (X̄ · Ẋ) match very well, up to the last few digits, which shows that the tangent and adjoint codes really compute the same derivatives, only in a different computation order, as shown by equations (2) and (3). The values of (DD · DD) and (Ẏ · Ẏ) don't match so well, because of the weakness of the Divided Differences approximation. Figure 7 shows this weakness: for a small value of ε, the dominant error is due to machine accuracy. For a large value of ε, the dominant error is due to the second derivatives of F. The best ε minimizes both errors, but cannot eliminate them completely.

Figure 7: Relative error of Divided Differences with respect to AD-generated derivatives, computed for various values of the step size ε.
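On a toy function the three steps look as follows (a self-contained Python sketch; the hand-written Jacobian stands in for the AD-generated tangent and adjoint codes):

import numpy as np

def F(X):                 # toy nonlinear function from IR^3 to IR^2
    return np.array([np.sin(X[0]) * X[1], X[1] * X[2] + X[0] ** 2])

def jac(X):               # its Jacobian, playing the role of the AD derivatives
    return np.array([[np.cos(X[0]) * X[1], np.sin(X[0]), 0.0],
                     [2.0 * X[0],          X[2],         X[1]]])

rng = np.random.default_rng(0)
X, Xdot = rng.normal(size=3), rng.normal(size=3)
eps = 1e-7

DD   = (F(X + eps * Xdot) - F(X)) / eps   # step 1: divided differences
Ydot = jac(X) @ Xdot                      # step 2: tangent,  Ydot = F'(X) Xdot
Xbar = jac(X).T @ Ydot                    # step 3: adjoint,  Xbar = F'*(X) Ydot

print(np.dot(DD, DD), np.dot(Ydot, Ydot), np.dot(Xbar, Xdot))
# the last two agree to round-off; the first only to the accuracy of DD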
5.2 Sensitivity analysis on a long simulation

One of the main applications of adjoint models is sensitivity analysis, i.e. the study of how model output varies with changes in model inputs. Using direct or statistical methods would require many integrations of the non linear model, while one adjoint model integration is enough to compute this sensitivity. As an example, figure 8 shows the output map of the sensitivity of the North Atlantic meridional heat flux at 29°N to changes in the initial sea surface temperature (SST_t0) over a one year integration period, starting January 1, 1998. This is done by computing the gradient with respect to SST_t0 of

J = ∫_{t0}^{tN} ∫∫_Ω T·v dx dz dt

where Ω is the zonal cross section at 29°N in the North Atlantic, T is the temperature and v is the meridional current velocity.

Contours in figure 8 show where a variation of the initial SST would have the most effect upon the heat transport at 29°N. It shows large scale patterns mainly located north of the 29°N parallel and in the Caribbean sea, with a strong spot off Morocco. These results are consistent with those obtained by Marotzke et al. ([18]).

This map was computed by the TAPENADE-generated adjoint of OPA on the global ORCA2 grid, over 5475 time steps (1 year). This experiment was done with the SOR algorithm as the iterative linear solver. The TAPENADE-generated adjoint computed this sensitivity map in a time that is only 8.03 times that of the original simulation.
Figure 8: Sensitivity map of the North Atlantic heat transport at 29°N (dotted line), with respect to changes in the initial surface temperature.

5.3 Data Assimilation

For further validation of the automatically generated derivatives, we carried out a data assimilation experiment. This was done in a so-called twin experiment framework, whereby the direct model trajectory is used to generate synthetic observations. The initial sea surface temperature is perturbed by a white noise and it has to be recovered using variational data assimilation techniques. Synthetic observations are given by the sea surface height (SSH) and the sea surface salinity (SSS) generated from the model's original outputs starting from the unperturbed SST. The cost function to be minimised is

J(SST(t0)) = ∫_{t0}^{tN} ‖SSH(t) − SSH°(t)‖² + ‖SSS(t) − SSS°(t)‖² dt   (4)

where the superscript ° stands for synthetic observation and SSH(t) and SSS(t) are model outputs.

For computational cost reasons, only the Antarctic zoom of ORCA2 is considered, and the minimisation is done by an iterative gradient search algorithm where the gradient of J is computed using adjoint techniques. Figure 9 illustrates the performance of the optimization loop for an integration period of 1 month, i.e. 450 time steps. The cost function decreases by two orders of magnitude. Figure 10 indicates that the true solution (top panel) is recovered with a good approximation (bottom panel) from the randomly perturbed one (middle panel), showing the quality of the derivatives obtained.
Figure 9: Twin experiment: Convergence of the cost function.

Figure 10: Twin experiment: True field (top), initial perturbed field (middle) and identified optimal sea surface temperatures (bottom).
6 Conclusion and Outlook

The effort to build the tangent and adjoint codes for the previous version 8 of the OPA ocean General Circulation Model has cost several months of development from an experienced researcher. For the new version OPA 9, written in FORTRAN95, the use of the AD tool TAPENADE significantly reduces this effort. Our first numerical applications show the quality of the derivatives obtained. This work validates the choice of AD as the strategy to obtain the tangent and adjoint for OPA 9, and for the versions to come.

At the same time, OPA is the largest FORTRAN95 application differentiated with TAPENADE. This work has pointed at a number of limitations of TAPENADE that have been lifted. Other limitations remain, such as the non-reentrant procedures, which need to be addressed in future work. Successful differentiation of OPA definitely increases our confidence in TAPENADE.

This work is also an additional illustration of the superiority of the binomial checkpointing strategy, compared to multi level checkpointing. By the standards of other application fields, e.g. CFD, a slowdown of the adjoint code of only 7 for a nonsteady simulation on 1000 time steps would be considered very good. By the standards of weather simulation or ocean modeling however, scientists expect yet faster adjoints, at the cost of a radical approximation. Even if we consider that these approximations change the mathematical nature of the optimization process, we understand they are necessary and we shall study how they can be proposed as an option by the AD tool.

This work has underlined several directions for further research in AD and AD tools. Some of them are already being studied by researchers in our groups. Considering the application language, two constructs need to be differentiated better:

- The next experiment to be made very soon is to apply TAPENADE to the parallelized version of OPA. This is necessary before the generated tangent and adjoint codes can be used in a production context.

- The OPA source makes extensive use of preprocessor directives such as #IFDEF. TAPENADE does not deal with these directives because they do not respect the syntactic structure of a code. Handling these directives in the AD tool is in our opinion hopeless. What might be done though, is to generate differentiated codes for each possible preprocessed code, and devise a tool to put the directives back into the differentiated codes. This is made easier if the differentiated code closely follows the structure of the original, as is the case with TAPENADE.

Considering specifically adjoint differentiation, we hope to obtain more efficient code through a more systematic exploitation of self-adjointness, e.g. of the elliptic operator. We also hope to optimize the checkpointing strategy. In its present version, TAPENADE applies checkpointing to each procedure call. Using profiling information, we believe we can detect several procedure calls for which checkpointing is useless or counter-productive. TAPENADE is already able to use this information to produce a better adjoint.
References

[1] W. Castaings, D. Dartus, M. Honnorat, F.-X. Le Dimet, Y. Loukili, and J. Monnier. Automatic differentiation: A tool for variational data assimilation and adjoint sensitivity analysis for flood modeling. In Bücker et al., editor, Automatic Differentiation: Applications, Theory, and Implementations, LNCSE, pages 249–262. Springer, 2005.

[2] B. Christianson. Reverse accumulation and attractive fixed points. Optimization Methods and Software, 3:311–326, 1994.

[3] P. Courtier, J.-N. Thépaut, and A. Hollingsworth. A strategy for operational implementation of 4D-Var, using an incremental approach. Q. J. R. Meteorol. Soc., 120:1367–1387, 1994.

[4] R. Giering. Tangent linear and Adjoint Model Compiler, Users manual, 1997. http://www.autodiff.com/tamc.

[5] R. Giering and T. Kaminski. Recipes for adjoint code construction. ACM Transactions on Mathematical Software, 24(4):437–474, 1998.

[6] J.C. Gilbert. Automatic differentiation and iterative processes. Optimization Methods and Software, 1:13–21, 1992.

[7] M.B. Giles, D. Ghate, and M.C. Duta. Using automatic differentiation for adjoint CFD code development. In Uthup et al., editor, Recent Trends in Aerospace Design and Optimization, pages 363–373. Tata-McGraw Hill, New Delhi, 2005. Post-SAROD-2005, Bangalore, India.

[8] A. Griewank. Achieving logarithmic growth of temporal and spatial complexity in reverse automatic differentiation. Optimization Methods and Software, 1:35–54, 1992.

[9] A. Griewank, C. Bischof, G. Corliss, A. Carle, and K. Williamson. Derivative convergence for iterative equation solvers. Optimization Methods and Software, 2:321–355, 1993.

[10] A. Griewank, D. Juedes, J. Srinivasan, and C. Tyner. ADOL-C, a package for the automatic differentiation of algorithms written in C/C++. ACM Trans. Math. Software, 22(2):131–167, 1996.

[11] L. Hascoët and M. Araya-Polo. The adjoint data-flow analyses: Formalization, properties, and applications. In Automatic Differentiation: Applications, Theory, and Tools, Lecture Notes in Computational Science and Engineering. Springer, 2005. Selected papers from AD2004, Chicago.

[12] L. Hascoët and V. Pascual. Tapenade 2.1 user's guide. Technical Report 0300, INRIA, 2004.

[13] L. Hascoët, M. Vázquez, and A. Dervieux. Automatic differentiation for optimum design, applied to sonic boom reduction. In Kumar et al., editor, Proceedings of ICCSA'03, Montreal, Canada, pages 85–94. LNCS 2668, Springer, 2003.

[14] P. Heimbach, C. Hill, and R. Giering. An efficient exact adjoint of the parallel MIT general circulation model, generated via automatic differentiation. Future Generation Computer Systems, 21(8):1356–1371, 2005.

[15] J. Kim, E. Hunke, and W. Lipscomb. Sensitivity analysis and parameter tuning scheme for global sea-ice modeling. Ocean Modeling Journal, 14(1-2):61–80, 2006.

[16] C. Lauvernet, F. Baret, L. Hascoët, and F.-X. LeDimet. Improved estimates of vegetation biophysical variables from MERIS TOA images by using spatial and temporal constraints. In Proceedings of the 9th ISPMSRS, Beijing, China, 2005.

[17] G. Madec, P. Delecluse, M. Imbard, and C. Levy. OPA 8.1 ocean general circulation model reference manual. Technical report, Pôle de Modélisation, IPSL, 1998.

[18] J. Marotzke, R. Giering, K.Q. Zhang, D. Stammer, C. Hill, and T. Lee. Construction of the adjoint MIT ocean General Circulation Model and application to Atlantic heat transport sensitivity. J. Geophys. Res., 104(C12):29,529–29,548, 1999.

[19] O. Talagrand. The use of adjoint equations in numerical modeling of the atmospheric circulation. In A. Griewank and G. Corliss, editors, Automatic Differentiation of Algorithms: Theory, Implementation and Application, pages 169–180. SIAM, Philadelphia, Penn., 1991.

[20] J. Utke, U. Naumann, M. Fagan, N. Tallent, M. Strout, P. Heimbach, C. Hill, and C. Wunsch. OpenAD/F: A modular, open-source tool for Automatic Differentiation of Fortran codes. Technical report ANL/MCS-P1230-0205, Argonne National Laboratory, 2006. Submitted to ACM TOMS.

[21] A. Walther and A. Griewank. Advantages of binomial checkpointing for memory-reduced adjoint calculations. In Feistauer et al., editor, Numerical Mathematics and Advanced Applications, pages 834–843. Springer, Berlin, 2003. Proceedings of ENUMATH 2003.

[22] A. Weaver, J. Vialard, and D. Anderson. Three- and four-dimensional variational assimilation with an ocean general circulation model of the tropical Pacific ocean: I. Formulation, internal diagnostics and consistency checks. Monthly Weather Review, 131(7):1360–1378, 2003.
Fine-Grained Complexity for Sparse Graphs
Udit Agarwal⋆ and Vijaya Ramachandran∗
arXiv:1611.07008v3 [] 19 Oct 2017
October 20, 2017
Abstract
We consider the fine-grained complexity of sparse graph problems that currently have Õ(mn)
time algorithms, where m is the number of edges and n is the number of vertices in the input
graph. This class includes several important path problems on both directed and undirected
graphs, including APSP, MWC (minimum weight cycle), and Eccentricities, which is the problem
of computing, for each vertex in the graph, the length of a longest shortest path starting at that
vertex.
We introduce the notion of a sparse reduction which preserves the sparsity of graphs, and
we present near linear-time sparse reductions between various pairs of graph problems in the
Õ(mn) class. Surprisingly, very few of the known nontrivial reductions between problems in the
Õ(mn) class are sparse reductions. In the directed case, our results give a partial order on a
large collection of problems in the Õ(mn) class (along with some equivalences), and many of our
reductions are very nontrivial. In the undirected case we give two nontrivial sparse reductions:
from MWC to APSP, and from unweighted ANSC (all nodes shortest cycles) to a natural variant
of APSP. The latter reduction also gives an improved algorithm for ANSC (for dense graphs).
We propose the MWC Conjecture, a new conditional hardness conjecture that the weight of
a minimum weight cycle in a directed graph cannot be computed in time polynomially smaller
than mn. Our sparse reductions for directed path problems in the Õ(mn) class establish that
several problems in this class, including 2-SiSP (second simple shortest path), s-t Replacement
Paths, Radius, and Eccentricities, are MWCC hard. We also identify Eccentricities as a key
problem in the Õ(mn) class which is simultaneously MWCC-hard, SETH-hard and k-DSH-hard,
where SETH is the Strong Exponential Time Hypothesis, and k-DSH is the hypothesis that a
dominating set of size k cannot be computed in time polynomially smaller than nk .
Our framework using sparse reductions is very relevant to real-world graphs, which tend to
be sparse and for which the Õ(mn) time algorithms are the ones typically used in practice, and
not the Õ(n3 ) time algorithms.
∗ Dept. of Computer Science, University of Texas, Austin TX 78712. Email: [email protected], [email protected]. This work was supported in part by NSF Grant CCF-1320675. The first author’s research was also supported in part by a Calhoun Fellowship.
1 Introduction
In recent years there has been considerable interest in determining the fine-grained complexity of
problems in P, see e.g. [40]. For instance, the 3SUM [15] and OV (Orthogonal Vectors) [41, 11]
problems have been central to the fine-grained complexity of several problems with quadratic time
algorithms, in computational geometry and related areas for 3SUM and in edit distance and related
areas for OV. APSP (all pairs shortest paths) has been central to the fine-grained complexity of
several path problems with cubic time algorithms on dense graphs [42]. 3SUM has a quadratic time
algorithm but no sub-quadratic (i.e., O(n2−ǫ ) for some constant ǫ > 0) time algorithm is known. It
has been shown that a sub-quadratic time algorithm for any of a large number of problems would
imply a sub-quadratic time algorithm for 3SUM [15, 41]. In a similar vein no sub-quadratic time
algorithm is known for OV, and it has been shown that a sub-quadratic time algorithm for LCS or
Edit Distance would imply a sub-quadratic time algorithm for finding orthogonal vectors (OV) [11].
For several graph problems related to shortest paths that currently have Õ(n³)¹ time algorithms,
equivalence under sub-cubic reductions has been shown in work starting with [42]: between all pairs
shortest paths (APSP) in either a directed or undirected weighted graph, finding a second simple
shortest path from a given vertex u to a given vertex v (2-SiSP) in a weighted directed graph,
finding a minimum weight cycle (MWC) in a directed or undirected graph, finding a minimum
weight triangle (Min-Wt-∆), and a host of other problems. This gives compelling evidence that a
large class of problems on dense graphs is unlikely to have sub-cubic algorithms as a function of n,
the number of vertices, unless fundamentally new algorithmic techniques are developed.
The Strong Exponential Time Hypothesis (SETH) [22] states that for every δ < 1, there is a k
such that k-SAT cannot be solved in O(2δ·n ) time. It has been shown that a sub-quadratic time
algorithm for OV would falsify SETH [41]. No hardness results relative to SETH are known for
either 3SUM or cubic APSP, and the latter problem, in fact, does not have a SETH hardness result
for deterministic algorithms unless NSETH, a nondeterministic version of SETH, is falsified [13].
Other hardness conjectures have been proposed, for instance, for k Clique [1] and for k Dominating
Set (k-DSH) [31].
In this paper we consider a central collection of graph problems related to APSP, which refines the
subcubic equivalence class. We let n be the number of vertices and m the number of edges. All of
the sub-cubic equivalent graph problems mentioned above (and several others) have Õ(mn) time
algorithms; additionally, many sub-cubic equivalent problems related to minimum triangle detection
and triangle listing have lower O(m3/2 ) time complexities for sparse graphs [23]. (Checking whether
a graph contains a triangle has an even faster O(m1.41 ) time algorithm [9] but can also be computed
in sub-cubic time using fast matrix multiplication.) For APSP with arbitrary edge weights, there is
an O(mn+n2 log log n) time algorithm for directed graphs [32] and an even faster O(mn log α(m, n))
time algorithm for undirected graphs [33], where α is a certain natural inverse of the Ackermann’s
function. (For integer weights, there is an O(mn) time algorithm for undirected graphs [39] and
an O(mn + n2 log log n) time algorithm for directed graphs [19].) When a graph is truly sparse
with m = O(n) these bounds are essentially optimal or very close to optimal, since the size of the
output for APSP is n2 . Thus, a cubic in n bound for APSP does not fully capture what is currently
achievable by known algorithms, especially since graphs that arise in practice tend to have m close
to linear in n or at least are sparse, i.e., have m = O(n1+δ ) for δ < 1. This motivates our study of
the fine-grained complexity of graph path problems that currently have Õ(mn) time algorithms.
¹ Õ and Θ̃ can hide sub-polynomial factors; in our new results they only hide polylog factors.
Another fundamental problem in the Õ(mn) class is MWC (Minimum Weight Cycle). In both
directed and undirected graphs, MWC can be computed in Õ(mn) time using an algorithm for
APSP. Very recently Orlin and Sedeno-Noda gave an improved O(mn) time algorithm [29] for
directed MWC. This is an important result but the bound still remains Ω(mn). Finding an MWC
algorithm that runs faster than mn time is a long-standing open problem in graph algorithms.
Fine-grained reductions and hardness results with respect to time bounds that consider only n or
only m, such as bounds of the form m2 , n2 , nc , m1+δ , are given in [34, 6, 4, 5, 21]. One exception is
in [3] where some reductions are given for problems with Õ(mn) time algorithms, such as diameter
and some betweenness centrality problems, that preserve graph sparsity. Several other related
results on fine-grained complexity are in [21, 43, 30, 27, 24, 10, 5, 4, 6, 40].
In this paper we present both fine-grained reductions and hardness results for graph problems with
Õ(mn) time algorithms, most of which are equivalent under sub-cubic reductions on dense graphs,
but now taking sparseness of edges into consideration. We use the current long-standing upper
bound of Õ(mn) for these problems as our reference, both for our fine-grained reductions and for
our hardness results. Our results give a partial order on hardness of several problems in the Õ(mn)
class, with equivalence within some subsets of problems, and a new hardness conjecture (MWCC)
for this class.
Our results appear to the first that consider a hardness class with respect to two natural parameters
in the input (the Õ(mn) class in our case). Further, the Õ(mn) time bound has endured for a long
time for a large collection of important problems, and hence merits our detailed study.
2 Our Contributions
We will deal with either an unweighted graph G = (V, E) or a weighted graph G = (V, E, w), where
the weight function is w : E → R+ . We assume that the vertices have distinct labels with ⌈log n⌉
bits. Let M and m denote the largest and the smallest edge weight, and let the edge weight ratio
be ρ = M /m . Let dG (x, y) denote the length (or weight) of a shortest path from x to y in G, and
for a cycle C in G, let dC (x, y) denote the length of the shortest path from x to y in C. We deal
with the APSP problem² whose output is the shortest path weights for all pairs of vertices, together
with a concise representation of the shortest paths, which in our case is an n × n matrix, Last_G,
that contains, in position (x, y), the predecessor vertex of y on a shortest path from x to y. We also
consider the APSD (All Pairs Shortest Distances) problem [16]² which only involves computing
the weights of the shortest paths. Most of the currently known APSD algorithms, including matrix
multiplication based methods for small integer weights [37, 38, 45], can compute APSP in the same
bound as APSD. We deal with only simple graphs in this paper.
I. Sparse Reductions and the mn Partial Order. In Definition 2.1 below, we define the notion of a sparsity preserving reduction (or sparse reduction for short) from a graph problem P to
a graph problem Q that allows P to inherit Q’s time bound for the graph problem as a function
of both m and n, as long as the reduction is efficient. Our definition is in the spirit of a Karp
reduction [25], but slightly more general, since we allow a constant number of calls to Q instead of
just one call in a Karp reduction (and we allow polylog calls for Õ(·) bounds).
² In an earlier version of this write-up, we used APSP′ for APSP, and APSP for APSD, respectively.
Reduction: MWC ≤ APSD
  Prior results (undirected): Õ(n²) reduction [35]. (a) goes through Min-Wt-∆; (b) Θ(n²) edges in reduced graphs.
  New results (undirected): sparse Õ(n²) reduction. (a) no intermediate problem; (b) Θ(m) edges in reduced graphs.

Reduction: ANSC (unweighted) ≤ APSP
  Prior results (undirected): sparse Õ(m·n^((3−ω)/2)) reduction [44]. (a) randomized; (b) polynomial calls to APSP; (c) gives randomized Õ(n^((ω+3)/2)) time algorithm [44].
  New results (undirected): sparse Õ(n²) reduction. (a) deterministic; (b) Õ(1) calls to APSP; (c) gives deterministic Õ(n^ω) time algorithm.

Table 1: Our sparse reduction results for undirected graphs. These results are in Sections 3 and 7. Note that Min-Wt-∆ can be solved in m^(3/2) time.
One could consider a more general notion of a sparsity preserving reduction in the spirit of Turing
reductions as in [17] (which considers functions of a single variable n). However, for all of the
many sparsity preserving reductions we present here, the simpler notion defined below suffices. It
should also be noted that the simple and elegant definition of a Karp reduction suffices for the
vast majority of known NP-completeness reductions. The key difference between our definition and
other definitions of fine-grained reductions is that it is fine-grained with regard to both m and n,
and respects the dependence on both parameters.
It would interesting to see if some of the open problems left by our work on fine-grained reductions
for the mn class can be solved by moving to a more general sparsity preserving reduction in the
spirit of a Turing reduction applied to functions of both m and n. We do not consider this more
general version here since we do not need it for our reductions.
Definition 2.1 (Sparsity Preserving Graph Reductions). Given graph problems P and Q, there is a sparsity preserving f(m, n) reduction from P to Q, denoted by P ≤^{sprs}_{f(m,n)} Q, if given an algorithm for Q that runs in T_Q(m, n) time on graphs with n vertices and m edges, we can solve P in O(T_Q(m, n) + f(m, n)) time on graphs with n vertices and m edges, by making a constant number of oracle calls to Q.

For simplicity, we will refer to a sparsity preserving graph reduction as a sparse reduction, and we will say that P sparse reduces to Q. Similar to Definition 2.1, we will say that P tilde-f(m, n) sparse reduces to Q, denoted by P ≲^{sprs}_{f(m,n)} Q, if, given an algorithm for Q that runs in T_Q(m, n) time, we can solve P in Õ(T_Q(m, n) + f(m, n)) time (by making polylog oracle calls to Q on graphs with Õ(n) vertices and Õ(m) edges). We will also use ≡^{sprs}_{f(m,n)} and ≅^{sprs}_{f(m,n)} in place of ≤^{sprs}_{f(m,n)} and ≲^{sprs}_{f(m,n)} when there are reductions in both directions. In a weighted graph we allow the Õ term to have a log ρ factor. (Recall that ρ = M/m.)
We present several sparse reductions for problems that currently have Õ(mn) time algorithms. This
gives rise to a partial order on problems that are known to be sub-cubic equivalent, and currently
have Õ(mn) time algorithms. For the most part, our reductions take Õ(m+n) time (many are in fact
O(m + n) time), except reductions to APSP take Õ(n2 ) time. This ensures that any improvement in
the time bound for the target problem will give rise to the same improvement to the source problem,
to within a polylog factor. Surprisingly, very few of the known sub-cubic reductions carry over to
the sparse case due to one or both of the following features.
1. A central technique used in many of these earlier reductions has been to reduce to or from a
suitable triangle finding problem. As noted above, in the sparse setting, all triangle finding and
Reduction: MWC ≤ 2-SiSP
  Prior results (directed): Õ(n²) reduction [35, 42]. (a) goes through Min-Wt-∆; (b) Θ(n²) edges in reduced graphs.
  New results (directed): sparse O(m) reduction. (a) no intermediate problem; (b) O(m) edges in reduced graph.

Reduction: 2-SiSP ≤ Radius; ≤ BC
  Prior results (directed): Õ(n²) reductions [18, 42, 3]. (a) goes through Min-Wt-∆ and a host of other problems; (b) Θ(n²) edges in reduced graphs.
  New results (directed): sparse Õ(m) reduction. (a) no intermediate problem; (b) Õ(m) edges in reduced graph.

Reduction: Replacement paths ≤ ANSC; ≤ Eccentricities
  Prior results (directed): Õ(n²) reductions [18, 42, 3]. (a) goes through Min-Wt-∆ and a host of other problems; (b) Θ(n²) edges in reduced graphs.
  New results (directed): sparse O(m) (resp. Õ(m)) reduction to ANSC (resp. Eccentricities). (a) no intermediate problem; (b) O(m) edges for ANSC and Õ(m) edges for Eccentricities in reduced graph.

Reduction: ANSC ≤ ANBC
  Prior results (directed): Õ(n²) reduction [42, 3]. (a) goes through Min-Wt-∆ and a host of other problems; (b) Θ(n²) edges in reduced graphs.
  New results (directed): sparse Õ(m) reduction. (a) no intermediate problem; (b) Õ(m) edges in reduced graphs.

Table 2: Our sparse reduction results for directed graphs. These results are in Sections 4, 8 and 9. The definitions for these problems are in Appendix A.1. Note that Min-Wt-∆ can be solved in m^(3/2) time.
Figure 1: Our sparse reductions for weighted directed graphs (reductions related to centrality problems are in Figure 8). The regular edges represent sparse O(m + n) reductions, the squiggly edges represent tilde-sparse O(m + n) reductions, and the dashed edges represent reductions that are trivial. The n² label on the dashed edge to APSP denotes an O(n²) time reduction.
enumeration problems can be computed in Õ(m3/2 ) time, which is an asymptotically smaller bound
than mn when the graph is sparse.
2. Many of the known sub-cubic reductions convert a sparse graph into a dense one, and all known
subcubic reductions from a problem in the mn class to a triangle finding problem create a dense
graph. If any such reduction had been sparse, it would have given an O(m3/2 ) time algorithm for
a problem whose current fastest algorithm is in the mn class, a major improvement.
We present a suite of new sparse reductions for the Õ(mn) time class. Many of our reductions
are quite intricate, and for some of our reductions we introduce a new technique of bit-sampling
(previously called ‘bit-fixing’ but re-named here to avoid confusion with an un-named technique
used in [3]). The full definitions of the problems we consider are in the Appendix. Tables 1 and 2
summarize the improvements our reductions achieve over prior results. We now give some highlights
of our results.
(a) Sparse Reductions for Undirected Graphs: Finding the weight of a minimum weight cycle (MWC) is a fundamental problem. A simple sparse O(m + n) reduction from MWC to APSD is known for directed graphs, but it does not work in the undirected case, mainly because an edge can be traversed in either direction in an undirected graph, and known algorithms for the directed case would create non-simple paths when applied to an undirected graph. Roditty and Williams [35], in a follow-up to [42], pointed out the challenges of reducing from undirected MWC to APSD in sub-n^ω time, where ω is the matrix multiplication exponent, and then gave an Õ(n²) reduction from undirected MWC to undirected Min-Wt-∆ in a dense bipartite graph. But a reduction that increases the density of the graph is not helpful in our sparse setting. Instead, in this paper we give a sparse Õ(n²) time reduction from undirected MWC to APSD. Similar techniques allow us to obtain a sparse Õ(n²) time reduction from undirected ANSC (All Nodes Shortest Cycles [44, 36], which asks for a shortest cycle through every vertex) to APSP. This reduction improves the running time for unweighted ANSC in dense graphs [44], since we can now solve it in Õ(n^ω) time using the unweighted APSP algorithm in [37, 8]. Our ANSC reduction and the resulting improved algorithm are only for unweighted graphs, and extending them to weighted graphs appears to be challenging.
We introduce a new bit-sampling technique³ in these reductions. This technique yields a simple construction with exactly ⌈log n⌉ hash functions for Color Coding [9] with 2 colors (described in detail in Section 3). Our bit-sampling method also gives the first near-linear time algorithm for k-SiSC
in weighted undirected graphs. k-SiSC is the cycle variant of k-SiSP [26], and our reduction that
gives a fast algorithm for k-SiSC in weighted undirected graphs is given in Appendix A.2.
Section 3 summarizes the proof of Theorem 2.2 below, and Section 7 proves Theorems 2.2 and 2.3
in full.
Theorem 2.2. In a weighted undirected n-node m-edge graph with edge weight ratio ρ, MWC can be computed with 2 · log n · log ρ calls to APSD on graphs with 2n nodes, at most 2m edges, and edge weight ratio at most ρ, with O(n + m) cost for constructing each reduced graph, and with additional O(n² · log n · log(nρ)) processing time. Additionally, edge weights are preserved, and every edge in the reduced graph retains its corresponding edge weight from the original graph. Hence, MWC ≲^sprs_{n²} APSD.
In undirected graphs with integer weights at most M, APSD can be computed in Õ(M · n^ω) time [37, 38]. In [35], the authors give an Õ(M · n^ω) time algorithm for undirected MWC in such graphs by preprocessing using a result in [28] and then making Õ(1) calls to an APSD algorithm. By applying our sparse reduction in Theorem 2.2 we can get an alternate, simpler Õ(M · n^ω) time algorithm for MWC in undirected graphs (the sparsity of our reduction is not relevant here except for the fact that it is also an Õ(n²) reduction).
The following result gives an improved algorithm for ANSC in undirected unweighted graphs.
Theorem 2.3. In undirected unweighted graphs, ANSC ≲^sprs_{n²} APSP, and ANSC can be computed in Õ(n^ω) time.
(b) Directed Graphs. We give several nontrivial sparse reductions starting from MWC in directed graphs, as noted in the following theorem (also highlighted in Figure 1).
Theorem 2.4 (Directed Graphs). In weighted directed graphs:
(i) MWC ≤^sprs_{m+n} 2-SiSP ≤^sprs_{m+n} s-t replacement paths ≡^sprs_{m+n} ANSC ≲^sprs_{m+n} Eccentricities
(ii) 2-SiSP ≲^sprs_{m+n} Radius ≤^sprs_{m+n} Eccentricities
(iii) 2-SiSP ≲^sprs_{m+n} Betweenness Centrality

³ In an earlier version, we called this bit-fixing, but that term was often confused with a 'bit-encoding' technique used in [3].
In Section 4 we present a brief overview of our sparse reduction from 2-SiSP to Radius for directed
graphs. The remaining reductions are in Section 8, and in Section 9 we also present nontrivial sparse
reductions from 2-SiSP and ANSC to versions of the betweenness centrality problem, to complement
a collection of sparse reductions in [3] for betweenness centrality problems.
II. Conditional Hardness Results. Conditional hardness under fine-grained reductions falls into several categories: for example, 3SUM hardness [15] holds for several problems to which 3SUM reduces in sub-quadratic time, and OV-hardness holds for several problems to which OV has sub-quadratic reductions. By a known reduction from SETH to OV [41], OV-hardness implies SETH-hardness as well. The n³ equivalence class for path problems in dense graphs [42] gives sub-cubic hardness for APSP in dense graphs and for the other problems in this class.

In this paper our focus is on the mn class, the class of graph path problems for which the current best algorithms run in Õ(mn) time. This class differs from all previous classes considered for fine-grained complexity since it depends on two parameters of the input, m and n. To formalize hardness results for this class we first make precise the notion of a sub-mn time bound, which we capture with the following definition.
Definition 2.5 (Sub-mn). A function g(m, n) is sub-mn if g(m, n) = O(m^α · n^β), where α, β are constants such that α + β < 2.

A straightforward application of the notions of sub-cubic and sub-quadratic to the two-variable function mn would have resulted in a simpler but less powerful definition for sub-mn, namely requiring a time bound O((mn)^δ) for some δ < 1. Another weaker form of the above definition would have been to require α ≤ 1 and β ≤ 1 with at least one of the two being strictly less than 1. The above definition is more general than either of these. It considers a bound of the form m³/n² to be sub-mn even though such a bound would be larger than n³ for dense graphs. Thus, it is a very strong definition of sub-mn when applied to hardness results. (Such a definition could be abused when giving sub-mn reductions, but as noted in part I above, all of our sub-mn reductions are linear or near-linear in the sizes of the input and output, and thus readily satisfy the sub-mn definition while being very efficient.)
Based on our fine-grained reductions for directed graphs, we propose the following conjecture:
Conjecture 1. (MWCC: Directed Min-Wt-Cycle Conjecture) There is no sub-mn time algorithm
for MWC (Minimum Weight Cycle) in directed graphs.
Directed MWC is a natural candidate for hardness for the mn class since it is a fundamental problem for which a simple Õ(mn) time algorithm has been known for many decades and, very recently, an O(mn) time algorithm was obtained [29]. But a sub-mn time algorithm remains as elusive as ever. Further, through the fine-grained reductions that we present in this paper, many other problems in the mn class have MWCC hardness for sub-mn time, as noted in the following theorem.
Theorem 2.6. Under MWCC, the following problems on directed graphs do not have sub-mn time
algorithms: 2-SiSP, 2-SiSC, s-t Replacement Paths, ANSC, Radius, Betweenness Centrality, and
Eccentricities.
The problems in the above theorem are a subset of the problems that are sub-cubic equivalent to directed MWC, and hence one could also strengthen the MWCC conjecture to state that directed MWC has neither a sub-mn time algorithm nor a sub-cubic (in n) time algorithm. As noted above, these two classes of time bounds are incomparable as neither one is contained in the other.
SETH and Related Problems. Recall that SETH conjectures that for every δ < 1 there exists a k such that there is no O(2^{δ·n}) time algorithm for k-SAT. No SETH-based conditional lower bound is known for either 3SUM or for dense APSP. In fact, it is unlikely that dense APSP would have a hardness result for deterministic sub-cubic algorithms, relative to SETH, since this would falsify NSETH [13].

Despite the above-mentioned negative result in [13] for SETH hardness for sub-cubic equivalent problems, we now observe that SETH-hardness does hold for a key MWCC-hard problem in the mn class. In particular, we observe that a SETH hardness construction, used in [34] to obtain sub-m² hardness for Eccentricities in graphs with m = O(n), can be used to establish the following result.
Theorem 2.7 (Sub-mn Hardness Under SETH). Under SETH, Eccentricities does not have a
sub-mn time algorithm in an unweighted or weighted graph, either directed or undirected.
The k Dominating Set Hypothesis (k-DSH) [31] states that there exists a k₀ (k₀ = 3 in [31]) such that for all k ≥ k₀, a dominating set of size k in an undirected graph on n vertices cannot be found in O(n^{k−ε}) time for any constant ε > 0. This conjecture formalizes a long-standing open problem, and it was shown in [31] that falsifying k-DSH would falsify SETH.

For even values of k, it was shown in [34] that solving Eccentricities in O(m^{2−ε}) time, for any constant ε > 0, would falsify k-DSH. We extend this result to all values of k, both odd and even, and establish it for the sub-mn class (which includes O(m^{2−ε})), to obtain the following theorem.

Theorem 2.8 (Conditional Hardness Under k-DSH). Under k-DSH, Eccentricities does not have a sub-mn time algorithm in an unweighted or weighted graph, either directed or undirected. More precisely, if Eccentricities can be solved in O(m^α n^{2−α−ε}) time, then for any k ≥ 3 + (2α/ε), a dominating set of size k can be found in O(n^{k−ε}) time.
We give the proof of Theorem 2.8 in Section 5, and this also establishes Theorem 2.7, since hardness
under k-DSH implies hardness under SETH.
Eccentricities as a Central Problem for mn. We observe that directed Eccentricities is a
central problem in the mn class: If a sub-mn time algorithm is obtained for directed Eccentricities,
not only would it refute k-DSH, SETH and MWCC (Conjecture 1), it would also imply sub-mn time
algorithms for several MWCC-hard problems: 2-SiSP, 2-SiSC, s-t Replacement Paths, ANSC and
Radius in directed graphs as well as Radius and Eccentricities in undirected graphs. (The undirected
versions of 2-SiSP, 2-SiSC, and s-t Replacement Paths have near-linear time algorithms and are not
relevant, and there is no known sparse reduction from undirected ANSC to Eccentricities.)
APSP. APSP has a special status in the mn class. Since its output size is n², it has near-optimal algorithms [39, 19, 32, 33] for graphs with m = O(n). Also, the n² size of the APSP output means that any inference made through sparse reductions to APSP will not be based on a sub-mn time bound but instead on a sub-(mn + n²) time bound. It also turns out that the SETH and k-DSH hardness results for Eccentricities depend crucially on staying with a purely sub-mn bound, and hence even though Eccentricities has a simple sparse n² reduction to APSP, we do not have SETH or k-DSH hardness for computing APSP in sub-(mn + n²) time.
Betweenness Centrality (BC). We discuss this problem in Section 9. Sparse reductions for
several variants of BC were given in [3] but none established MWCC-hardness. In Section 9 we give
nontrivial sparse reductions to establish MWCC hardness for some important variants of BC.
Time Bounds for Sparse Graphs and their Separation under SETH. It is readily seen that the Õ(m^{3/2}) bound for triangle finding problems is a better bound than the Õ(mn) bound for the mn class. But imposing a total ordering on functions of two variables requires some care. For example, maximal 2-connected subgraphs of a given directed graph can be computed in O(m^{3/2}) time [14] as well as in O(n²) time [20]. Here, m^{3/2} is a better bound for very sparse graphs, and n² for very dense graphs. In Section 6 we give a natural definition of what it means for one time bound to be smaller than another time bound for sparse graphs. By our definitions in Section 6, m^{3/2} is a smaller time bound than both mn and n² for sparse graphs. Our definitions in Section 6, in conjunction with Theorem 2.7, establish that the problems related to triangle listing must have provably smaller time bounds for sparse graphs than Eccentricities under k-DSH or SETH.
Sub-cubic equivalence for Õ(n³) versus the sub-mn Partial Order for Õ(mn). Our sparse reductions give a partial order on hardness for several graph problems with Õ(mn) time bounds, and a hardness conjecture for directed graphs relative to a specific problem, MWC. While this partial order may appear to be weaker than the sub-cubic equivalence class of problems with Õ(n³) time bound for n-node graphs [42], we also show that under SETH or k-DSH there is a provable separation between Eccentricities, a problem in the Õ(mn) partial order, and the O(m^{3/2}) class of triangle finding problems, even though all of these problems are equivalent under sub-cubic reductions. Thus, if we assume SETH, the equivalences achieved under sub-cubic reductions cannot hold for the sparse versions of the problems (i.e., when parameterized by both m and n).
The results for conditional hardness relative to SETH and k-DSH are fairly straightforward, and
they adapt earlier hardness results to our framework. The significance of these hardness results is
in the new insights they give into our inability to make improvements to some long-standing time
bounds for important problems on sparse graphs. On the other hand, many of our sparse reductions
are highly intricate, and overall these reductions give a partial order (with several equivalences) on
the large class of graph problems that currently have Õ(mn) time algorithms.
Roadmap. The rest of the paper is organized as follows. Sections 3 and 4 present key examples
of our sparse reductions for undirected and directed graphs, with full details in Sections 7 and 8.
In Sections 5 and 6 we present SETH and k-DSH hardness results, and the resulting provable split
of the sub-cubic equivalence class under these hardness results for sparse time bounds. Section 9
discusses betweenness centrality. The Appendix gives the definitions of the problems we consider.
3  Weighted Undirected Graphs: MWC ≲^sprs_{n²} APSD
In undirected graphs, the only known sub-cubic reduction from MWC to APSD [35] uses a dense reduction to Min-Wt-∆. Described in [35] for integer edge weights of value at most M, it first uses an algorithm in [28] to compute, in O(n² · log n · log(nM)) time, a 2-approximation W to the weight of a minimum weight cycle as well as shortest paths between all pairs of vertices with path length at most W/2. The reduced graph for Min-Wt-∆ is constructed as a (dense) bipartite graph with edges to represent all of these shortest paths, together with the edges of the original graph on one side of the bipartition. This results in each triangle in the reduced graph corresponding to a cycle in the original graph, with a minimum weight cycle guaranteed to be present as a triangle. An MWC is then constructed using a simple explicit construction for Color Coding [35] with 2 colors.
The approach in [35] does not work in our case, as we are dealing with sparse reductions. Instead, we give a sparse reduction directly from MWC to APSD. In contrast to [35], where finding a minimum weight triangle gives the MWC in the original graph, in our reduction the MWC is constructed as a path P in a reduced graph followed by a shortest path in the original graph. One may ask whether we can sparsify the dense reduction from MWC to Min-Wt-∆ in [35], but such a reduction, though very desirable, would immediately refute MWCC and would achieve a major breakthrough by giving an Õ(m^{3/2}) time algorithm for undirected MWC.
We now sketch our sparse reduction from undirected MWC to APSP. (In Section 7 we refine this to a sparse reduction to APSD.) It is well known that in any cycle C = ⟨v₁, v₂, . . . , v_l⟩ in a weighted undirected graph G = (V, E, w) there exists an edge (v_i, v_{i+1}) on C such that ⌈w(C)/2⌉ − w(v_i, v_{i+1}) ≤ d_C(v₁, v_i) ≤ ⌊w(C)/2⌋ and ⌈w(C)/2⌉ − w(v_i, v_{i+1}) ≤ d_C(v_{i+1}, v₁) ≤ ⌊w(C)/2⌋. The above edge, (v_i, v_{i+1}), is called the critical edge of C with respect to the start vertex v₁ in [35].
We will make use of the following simple observation proved in Section 7.
Observation 3.1. Let G = (V, E, w) be a weighted undirected graph. Let C = ⟨v₁, v₂, . . . , v_l⟩ be a minimum weight cycle in G, and let (v_p, v_{p+1}) be its critical edge with respect to v₁. WLOG assume that d_G(v₁, v_p) ≥ d_G(v₁, v_{p+1}). If G′ is obtained by removing edge (v_{p−1}, v_p) from G, then the path P = ⟨v₁, v_l, . . . , v_{p+1}, v_p⟩ is a shortest path from v₁ to v_p in G′.
In our reduction we construct a collection of graphs Gi,j,k , each with 2n vertices (containing 2 copies
of V ) and O(m) edges, with the guarantee that, for the minimum weight cycle C, in at least one of
the graphs the edge (vp−1 , vp ) (in Observation 3.1) will not connect across the two copies of V and
the path P of Observation 3.1 will be present. Then, if a call to APSP computes P as a shortest
path from v1 to vp (across the two copies of V ), we can verify that edge (vp , vp+1 ) is not the last
edge on the computed shortest path from v1 to vp in G, and so we can form the concatenation of
these two paths as a possible candidate for a minimum weight cycle. The challenge is to construct
a small collection of graphs where we can ensure that the path we identify in one of the derived
graphs is in fact the simple path P in the input graph.
Each G_{i,j,k} has two copies of each vertex u ∈ V: u¹ ∈ V₁ and u² ∈ V₂. All edges in G are present on the vertex set V₁, but there is no edge that connects any pair of vertices within V₂. In G_{i,j,k} there is an edge from u¹ ∈ V₁ to v² ∈ V₂ iff there is an edge from u to v in G, u's i-th bit is j, and M/2^k < w(u, v) ≤ M/2^{k−1}. All the edges in G_{i,j,k} retain their weights from G. Thus, the edge (u¹, v²) is present in G_{i,j,k} with weight w only if (u, v) is an edge in G with the same weight w and, further, the conditions described above hold for the indices i, j, k. Here, 1 ≤ i ≤ ⌈log n⌉, j ∈ {0, 1} and k ∈ {1, 2, . . . , ⌈log ρ⌉}, so we have 2 · log n · log ρ graphs. Figure 2 depicts the construction of graph G_{i,j,k}.

The first condition for an edge (u¹, v²) to be present in G_{i,j,k} is that u's i-th bit must be j. This ensures that there exists a graph where the edge (v_{p−1}¹, v_p²) is absent and the edge (v_{p+1}¹, v_p²) is present (as v_{p−1} and v_{p+1} differ on at least one bit).
[Figure 2: Construction of G_{i,j,k}. All edges of E are present within V₁; there are no edges between vertices of V₂; a cross edge (u¹, v²) is present iff (u, v) ∈ E, u's i-th bit is j, and M/2^k < w(u, v) ≤ M/2^{k−1}, with edge weights retained from G.]
To contrast with a similar step in [35], we need to find the path P in the sparse derived graph, while in [35] it suffices to look for the 2-edge path that represents P in a triangle in their dense reduced graph.
The second condition — that an edge (u¹, v²) is present only if M/2^k < w(u, v) ≤ M/2^{k−1} — ensures that there is a graph G_{i,j,k} in which not only is the edge (v_{p+1}¹, v_p²) present and the edge (v_{p−1}¹, v_p²) absent, as noted for the first condition, but also the shortest path from v₁¹ to v_p² is in fact the path P in Observation 3.1, and does not correspond to a false path where an edge in G is traversed twice. In particular, we show that this second condition allows us to exclude a shortest path from v₁¹ to v_p² of the following form: take the shortest path from v₁ to v_p in G on vertices in V₁, then take an edge (v_p¹, x¹), and then the edge (x¹, v_p²). Such a path, which has weight d_C(v₁, v_p) + 2w(x, v_p), could be shorter than the desired path, which has weight d_C(v₁, v_{p+1}) + w(v_{p+1}, v_p). In our reduction we avoid selecting this ineligible path by requiring that the weight of the selected path should not exceed d_G(v₁, v_p) by more than M/2^{k−1}. We show that these conditions suffice to ensure that P is identified in one of the G_{i,j,k}, and no spurious path of shorter length is identified. Notice that, in contrast to [35], we do not estimate the MWC weight by computing a 2-approximation. Instead, this second condition allows us to identify the critical edge in the appropriate graph.
The following lemma establishes the correctness of the resulting sparse reduction described in
Algorithm MWC-to-APSP. The proof and full details are in Section 7, where Figure 4 illustrates
how Observation 3.1 applies to the MWC in a suitable Gi,j,k .
Lemma 3.2. Let C = ⟨v₁, v₂, . . . , v_l⟩ be a minimum weight cycle in G and let (v_p, v_{p+1}) be its critical edge with respect to the start vertex v₁. Assume d_G(v₁, v_p) ≥ d_G(v₁, v_{p+1}). Then there exist i ∈ {1, . . . , ⌈log n⌉}, j ∈ {0, 1} and k ∈ {1, 2, . . . , ⌈log ρ⌉} such that the following conditions hold:
(i) d_{G_{i,j,k}}(v₁¹, v_p²) + d_G(v₁, v_p) = w(C);
(ii) Last_{G_{i,j,k}}(v₁¹, v_p²) ≠ Last_G(v₁, v_p);
(iii) d_{G_{i,j,k}}(v₁¹, v_p²) ≤ d_G(v₁, v_p) + M/2^{k−1}.
The converse also holds: if there exist vertices y, z in G with the above three properties satisfied for y = v₁, z = v_p in one of the graphs G_{i,j,k}, using a weight wt in place of w(C) in part (i), then there exists a cycle in G that passes through z of weight at most wt.
MWC-to-APSP(G)
1: wt ← ∞
2: for 1 ≤ i ≤ ⌈log n⌉, j ∈ {0, 1}, and 1 ≤ k ≤ ⌈log ρ⌉ do
3:     Compute APSP on G_{i,j,k}
4:     for y, z ∈ V do
5:         if d_{G_{i,j,k}}(y¹, z²) ≤ d_G(y, z) + M/2^{k−1} then check if Last_{G_{i,j,k}}(y¹, z²) ≠ Last_G(y, z)
6:         if both checks in Step 5 hold then wt ← min(wt, d_{G_{i,j,k}}(y¹, z²) + d_G(y, z))
7: return wt
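To make the shape of this reduction concrete, here is a small Python sketch under the following assumptions: vertices are numbered 0, . . . , n − 1, the hypothetical helper dijkstra stands in for an APSP routine that also returns Last (predecessor) information, and ρ is taken to be the ratio of the maximum to the minimum edge weight. It illustrates the construction of the G_{i,j,k} and the driver loop above; it is not the paper's implementation.

```python
import heapq
from math import ceil, log2

def dijkstra(adj, src):
    """Single-source shortest paths; returns (dist, last) where last[v] is the
    predecessor of v on the computed shortest path from src."""
    dist = {v: float('inf') for v in adj}
    last = {v: None for v in adj}
    dist[src] = 0
    pq = [(0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist[u]:
            continue
        for v, w in adj[u]:
            if d + w < dist[v]:
                dist[v], last[v] = d + w, u
                heapq.heappush(pq, (d + w, v))
    return dist, last

def build_G_ijk(vertices, edges, i, j, k, M):
    """Two copies of every vertex; all edges of G inside V1, no edges inside V2,
    and a cross edge u^1 -> v^2 iff u's i-th bit is j and M/2^k < w(u,v) <= M/2^(k-1)."""
    adj = {(v, c): [] for v in vertices for c in (1, 2)}
    for u, v, w in edges:                            # undirected edge {u, v}
        adj[(u, 1)].append(((v, 1), w))
        adj[(v, 1)].append(((u, 1), w))
        for a, b in ((u, v), (v, u)):
            if (a >> i) & 1 == j and M / 2**k < w <= M / 2**(k - 1):
                adj[(a, 1)].append(((b, 2), w))
    return adj

def mwc_to_apsp(vertices, edges):
    """Driver loop of MWC-to-APSP: APSP (via repeated Dijkstra) on each G_{i,j,k}."""
    M = max(w for _, _, w in edges)
    rho = M / min(w for _, _, w in edges)            # assumed edge weight ratio
    adjG = {v: [] for v in vertices}
    for u, v, w in edges:
        adjG[u].append((v, w))
        adjG[v].append((u, w))
    dG, lastG = {}, {}
    for s in vertices:                               # APSP on G with Last information
        dG[s], lastG[s] = dijkstra(adjG, s)
    wt = float('inf')
    for i in range(ceil(log2(max(len(vertices), 2)))):
        for j in (0, 1):
            for k in range(1, ceil(log2(max(rho, 2))) + 1):
                H = build_G_ijk(vertices, edges, i, j, k, M)
                for y in vertices:
                    dH, lastH = dijkstra(H, (y, 1))
                    for z in vertices:
                        if z == y or dH[(z, 2)] > dG[y][z] + M / 2**(k - 1):
                            continue
                        if lastH[(z, 2)] != (lastG[y][z], 1):   # Last edges differ
                            wt = min(wt, dH[(z, 2)] + dG[y][z])
    return wt
```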
Bit-sampling and Color Coding: Color Coding is a method introduced by Alon, Yuster and Zwick [9]. For the special case of 2 colors, the method constructs a collection C of O(log n) different 2-colorings of an n-element set V, such that for every pair {x, y} in V, there is a 2-coloring in C that assigns different colors to x and y. When the elements of V have unique log n-bit labels, e.g., by numbering them from 0 to n − 1, our bit-sampling method on index i (ignoring indices j and k) can be viewed as an explicit construction of exactly ⌈log n⌉ hash functions for the 2-perfect hash family: the i-th hash function assigns to each element the i-th bit of its label as its color.

In our construction we actually use 2 log n functions (using both i and j) since we need a stronger version of color coding where, for any pair of vertices x, y, there is a hash function that assigns color 0 to x and 1 to y, and another that assigns 1 to x and 0 to y. This is needed in order to ensure that when x = v_{p−1} and y = v_{p+1}, the edge (v_{p−1}¹, v_p²) is absent and the edge (v_{p+1}¹, v_p²) is present. A different variant of Color Coding with 2 colors is used in [35] in their dense reduction from undirected MWC to Min-Wt-∆, and we do not immediately see how to apply our bit-sampling technique there.
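As a small, self-contained illustration of this view (with our own function names), the sketch below builds the ⌈log n⌉ bit-sampling hash functions and checks the 2-perfect-hash property, together with the ordered variant used above.

```python
from math import ceil, log2
from itertools import combinations

def bit_sampling_family(n):
    """The i-th hash function colors element x with the i-th bit of its label."""
    b = ceil(log2(max(n, 2)))
    return [lambda x, i=i: (x >> i) & 1 for i in range(b)]

n = 16
family = bit_sampling_family(n)

# 2-perfect hash property: every pair of distinct labels is separated by some function.
assert all(any(h(x) != h(y) for h in family)
           for x, y in combinations(range(n), 2))

# Stronger, ordered version used in the reduction: for every ordered pair (x, y) there
# is an (i, j) with j equal to x's i-th bit and different from y's i-th bit.
def ordered_separator(x, y, b):
    for i in range(b):
        if (x >> i) & 1 != (y >> i) & 1:
            return i, (x >> i) & 1
    return None

assert all(ordered_separator(x, y, ceil(log2(n))) is not None
           for x, y in combinations(range(n), 2))
print("bit-sampling family of size", len(family), "separates all pairs")
```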
Our bit-sampling method differs from a ‘bit-encoding’ technique used in some reductions in [3, 2],
where the objective is to preserve sparsity in the constructed graph while also preserving paths
from the original graph G = (V, E). This technique creates paths between two copies of V by
adding Θ(log n) new vertices with O(log n) bit labels, and using the O(log n) bit labels on these
new vertices to induce the desired paths in the constructed graph. The bit-encoding technique
(from [3]) is useful for certain types of reductions, and we use it in our sparse reduction from 2-SiSP
to Radius in Section 4, and from 2-SiSP to BC in Section 9.
However, the bit-sampling technique we use in our reduction here is different. Here the objective
is to selectively sample the edges from the original graph to be placed in the reduced graph, based
on the bit-pattern of the end points and the edge weight. In our construction, we create Θ(log n)
different graphs, where in each graph the copies of V are connected by single-edge paths, without
requiring additional intermediate vertices.
4  Weighted Directed Graphs: 2-SiSP ≲^sprs_{m+n} Radius
A sparse O(n²) reduction from 2-SiSP to APSP was given in [18]. Our sparse reduction from 2-SiSP to Radius refines this result and the sub-mn partial order by plugging the Radius and Eccentricities problems into the sparse reduction chain from 2-SiSP to APSP. Also, in Section 8 we show MWC ≤^sprs_{m+n} 2-SiSP, thus establishing MWC-hardness for both 2-SiSP and Radius. Our 2-SiSP to Radius reduction here is unrelated to the sparse reduction in [18] from 2-SiSP to APSP.
A sub-cubic reduction from Min-Wt-∆ to Radius is in [3]. This reduction transforms the problem of finding a minimum weight triangle to the problem of computing Radius by creating a 4-partite graph.

[Figure 3: G′′ for l = 3. The gray and the bold edges have weight (11/9)M′ and (1/3)M′ respectively. All the outgoing (incoming) edges from (to) A have weight 0 and the outgoing edges from B have weight M′.]
However, in order to use this reduction to show MWC hardness of the Radius problem we would need to show that Min-Wt-∆ is MWC-hard. But such a reduction would give an O(m^{3/2}) time algorithm for MWC, thus refuting the MWC Conjecture. Instead, we present a more complex reduction from 2-SiSP, which is MWC-hard (as shown in Section 8).
The input is G = (V, E, w), with source s and sink t in V, and a shortest path P (s = v₀ → v₁ ⇝ v_{l−1} → v_l = t). We need to compute a second simple s-t shortest path.
Figure 3 gives an example of our reduction to an input G′′ to the Radius problem for l = 3. In G′′ we first map every edge (vj, vj+1) lying on P to the vertices zjo and zji, such that the shortest path from zjo to zji corresponds to the shortest path from s to t avoiding the edge (vj, vj+1). We then add vertices yjo and yji to the graph and connect them to the vertices zjo and zji by adding edges (yjo, zjo) and (yji, zji), and then additional edges from yjo to other yko and yki vertices, such that the longest shortest path from yjo is to the vertex yji, which in turn corresponds to the shortest path from zjo to zji. In order to preserve sparsity, we implement the interconnection from each yjo vertex to all yki vertices (except for k = j) with a sparse construction using 2 log n additional vertices Cr,s, in a manner similar to a bit-encoding technique used in [3] in their reduction from Min-Wt-∆ to Betweenness Centrality (this technique, however, is different from the new bit-sampling technique used in Section 3), and we have two additional vertices A, B with suitable edges to induce connectivity among the yjo vertices. In our construction, we ensure that the center is one of the yjo vertices, and hence computing the Radius in the reduced graph gives the minimum among all the shortest paths from zjo to zji. This corresponds to a shortest replacement path from s to t. Further details are in Section 8, where the following three claims are proved.
(i) For each 0 ≤ j ≤ l − 1, the longest shortest path in G′′ from yjo is to the vertex yji .
(ii) A shortest path from zjo to zji corresponds to a replacement path for the edge (vj , vj+1 ).
(iii) One of the vertices among yjo ’s is a center of G′′ .
Thus after computing the radius in G′′ , we can use (i), (ii), and (iii) to compute the weight of a
shortest replacement path from s to t, which is a second simple shortest path from s to t. The cost
of this reduction is O(m + n log n). For full details see Section 8.
5  Conditional Hardness Under k-DSH
The following lemma shows that a sub-mn time algorithm for Diameter in an unweighted graph, either undirected or directed, would falsify k-DSH. This proves Theorem 2.8 and Theorem 2.7, since the Diameter of a graph can be computed in O(n) time after one call to Eccentricities on the same graph. Diameter is in the Õ(mn) class, but at this time we do not know if Diameter is MWCC-hard.

Lemma 5.1. Suppose for some constant α there is an O(m^α · n^{2−α−ε}) time algorithm, for some ε > 0, for solving Diameter in an unweighted m-edge n-node graph, either undirected or directed. Then there exists a k′ > 0 such that for all k ≥ k′, the k-Dominating Set problem can be solved in O(n^{k−ε}) time.
Proof. When k is even we use a construction in [34]. To determine if an undirected graph G = (V, E) has a k-dominating set, we form G′ = (V′, E′), where V′ = V₁ ∪ V₂, with V₁ containing a vertex for each subset of V of size k/2 and V₂ = V. We add an edge from a vertex v ∈ V₁ to a vertex x ∈ V₂ if the subset corresponding to v does not dominate x. We induce a clique on the vertex partition V₂. As shown in [34], G′ has diameter 3 if G has a dominating set of size k and has diameter 2 otherwise, and this gives the reduction when k is even.
If k is odd, so k = 2r + 1, we make n calls on graphs derived from G′ = (V′, E′) as follows, where now each vertex in V₁ represents a subset of r vertices in V. For each x ∈ V let Vx be the set {x} ∪ {neighbors of x in G}, and let Gx be the subgraph of G′ induced on V′ − Vx. If G has a dominating set D of size k that includes vertex x, then consider any partition of the remaining 2r vertices in D into two subsets of size r each, and let u and v be the vertices corresponding to these two sets in V₁. All paths from u to v in Gx pass through V₂ − Vx, and there is no path of length 2 from u to v since every vertex in V₂ − Vx is covered by either u or v. Hence the diameter of Gx is greater than 2 in this case. But if there is no dominating set of size k that includes x in G, then for any u, v ∈ V₁, at least one vertex in V₂ − Vx is covered by neither u nor v, and hence there is a path of length 2 from u to v. If we now compute the diameter of each of the graphs Gx, x ∈ V, we will detect a graph with diameter greater than 2 if and only if G has a dominating set of size k.

Each graph Gx has N = O(n^r) vertices and M = O(n^{r+1}) edges. If we now assume that Diameter can be computed in time O(M^α · N^{2−α−ε}), then the above algorithm for k-Dominating Set runs in time O(n · M^α · N^{2−α−ε}) = O(n^{2r+1−εr+α}), which is O(n^{k−ε}) time when k ≥ 3 + 2α/ε. The analysis is similar for even k. In the directed case, we get the same result by replacing every edge in G′ with two directed edges in opposite directions.
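The following Python sketch illustrates the even-k construction exactly as described in the proof above (subset side V₁, vertex side V₂ made into a clique, and an edge whenever a subset fails to dominate a vertex). The helper names are ours, and the sketch is meant only to show the shape and size of the reduced graph; [34] remains the authoritative construction.

```python
from itertools import combinations

def dominates(G_adj, S, x):
    """S dominates x if x is in S or some vertex of S is adjacent to x."""
    return x in S or any(x in G_adj[u] for u in S)

def build_G_prime(G_adj, k):
    """Even-k construction as described in the proof: V1 has one vertex per
    subset of V of size k/2, V2 = V, there is an edge (S, x) iff S does NOT
    dominate x, and V2 is made into a clique.  Illustrative only."""
    assert k % 2 == 0
    V = list(G_adj)
    V1 = [frozenset(S) for S in combinations(V, k // 2)]
    edges = set()
    for S in V1:
        for x in V:
            if not dominates(G_adj, S, x):
                edges.add((('sub', S), ('ver', x)))
    for x, y in combinations(V, 2):            # clique on V2
        edges.add((('ver', x), ('ver', y)))
    return edges, len(V1) + len(V)

# Toy usage: a 4-cycle with k = 2; the reduced graph has O(n^{k/2}) vertices
# and O(n^{k/2 + 1}) edges.
G = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {0, 2}}
edges, n_nodes = build_G_prime(G, 2)
print(n_nodes, len(edges))
```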
6  Time Bounds for Sparse Graphs
Let T(m, n) be a function which is defined for m ≥ n − 1. We will interpret T(m, n) as the running time of an algorithm on a connected graph, and we will refer to T(m, n) as a time bound for a graph problem. We now focus on formalizing the notion of a time bound T(m, n) being smaller than another time bound T′(m, n) for sparse graphs.
If the time bounds T(m, n) and T′(m, n) are of the form m^α n^β, then one possible way to check if T(m, n) is smaller than T′(m, n) is to check whether the exponents of m and n in T(m, n) are individually smaller than the corresponding exponents in T′(m, n). But using this approach, we would not be able to compare the time bounds m^{1/2}·n and m·n^{1/2}. Another possible way is to use a direct extrapolation from the single-variable case and define T(m, n) to be (polynomially) smaller than T′(m, n) if T(m, n) = O((T′(m, n))^{1−ε}) for some constant ε > 0. But such a definition would completely ignore the dependence of the functions on each of their two variables. We would want our definition to take into account the sparsity of the graph, i.e., as a graph becomes sparser, the smaller time bound has a smaller running time. To incorporate this idea, our definition below asks for T(m, n) to be a factor of m^ε smaller than T′(m, n), for some ε > 0. Further, this requirement is placed only on sufficiently sparse graphs (and for a weakly smaller time bound, we also require a certain minimum edge density). The consequence of this definition is that when one time bound is not dominated by the other for all values of m, the domination needs to hold for sufficiently sparse graphs in order for the dominated function to be a smaller time bound for sparse graphs.
Definition 6.1 (Comparing Time Bounds for Sparse Graphs). Given two time bounds T(m, n) and T′(m, n),
(i) T(m, n) is a smaller time bound than T′(m, n) for sparse graphs if there exist constants γ, ε > 0 such that T(m, n) = O((1/m^ε) · T′(m, n)) for all values of m = O(n^{1+γ}).
(ii) T(m, n) is a weakly smaller time bound than T′(m, n) for sparse graphs if there exists a positive constant γ such that for any constant δ with γ > δ > 0, there exists an ε > 0 such that T(m, n) = O((1/m^ε) · T′(m, n)) for all values of m in the range m = O(n^{1+γ}) and m = Ω(n^{1+δ}).
Part (i) in the above definition requires a polynomially smaller (in m) bound for T(m, n) relative to T′(m, n) for sufficiently sparse graphs. For example, m³/n² is a smaller time bound than m^{3/2}, which in turn is a smaller time bound than mn; m² is a smaller time bound than n³. A time bound of n·√m is a weakly smaller bound than m·√n for sparse graphs by part (ii), but not a smaller bound, since the two bounds coincide when m = O(n).

Our definition for comparing time bounds for sparse graphs is quite strong, as it allows us to compare a wide range of time bounds. For example, using this definition we can say that n·√m is a weakly smaller bound than m·√n for sparse graphs, whereas with the possible approaches discussed before we would not be able to compare these two time bounds.
With Definition 6.1 in hand, the following lemma is straightforward.
Lemma 6.2. Let T₁(m, n) = O(m^{α₁} n^{β₁}) and T₂(m, n) = O(m^{α₂} n^{β₂}) be two time bounds, where α₁, β₁, α₂, β₂ are constants.
(i) T₁(m, n) is a smaller time bound than T₂(m, n) for sparse graphs if α₂ + β₂ > α₁ + β₁.
(ii) T₁(m, n) is a weakly smaller time bound than T₂(m, n) for sparse graphs if α₂ + β₂ = α₁ + β₁ and α₂ > α₁.
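Lemma 6.2 turns the comparison of monomial time bounds into a simple check on exponents; the following small helper (our own illustration, not part of the paper) applies it to the examples mentioned above.

```python
def compare_for_sparse(a1, b1, a2, b2):
    """Compare T1 = m^a1 * n^b1 with T2 = m^a2 * n^b2 for sparse graphs,
    following Lemma 6.2."""
    if a2 + b2 > a1 + b1:
        return "T1 is smaller than T2 for sparse graphs"
    if a2 + b2 == a1 + b1 and a2 > a1:
        return "T1 is weakly smaller than T2 for sparse graphs"
    return "T1 is not (weakly) smaller than T2 for sparse graphs"

# Examples from the text.
print(compare_for_sparse(1.5, 0.0, 1.0, 1.0))   # m^{3/2}   vs  mn         -> smaller
print(compare_for_sparse(2.0, 0.0, 0.0, 3.0))   # m^2       vs  n^3        -> smaller
print(compare_for_sparse(0.5, 1.0, 1.0, 0.5))   # n*sqrt(m) vs  m*sqrt(n)  -> weakly smaller
```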
Definition 6.1, in conjunction with Lemma 6.2 and Theorem 2.7, leads to the following provable separation of time bounds for sparse graph problems in the sub-cubic equivalence class:
Theorem 6.3 (Split of Time Bounds for Sparse Graphs.). Under either SETH or k-DSH,
triangle finding problems in the sub-cubic equivalence class have algorithms with a smaller time
bound for sparse graphs than any algorithm we can design for Eccentricities.
7  Reduction Details for Undirected Graphs
In Section 3, we gave an overview of our sparse reduction from MWC to APSP. Here in Section 7.1, we provide full details of this reduction, and refine it to a reduction to APSD. We then describe an Õ(n²) sparse reduction from ANSC to APSP in unweighted undirected graphs in Section 7.2.
[Figure 4: Panel (a) shows the MWC C in G, with the shortest path π_{y,z} from y to z highlighted in bold. Panel (b) shows the shortest path π_{y¹,z²} from y¹ to z² in G_{i,j,k}, where the edge (v_{p−1}¹, v_p²) is absent due to the i, j bits; the indices i, j, k are such that j is the i-th bit of v_{p+1}, the i-th bit of v_{p−1} differs from j, and M/2^k < w(v_p, v_{p+1}) ≤ M/2^{k−1}. The paths π_{y,z} in G and π_{y¹,z²} in G_{i,j,k} together comprise the MWC C, and Algorithm MWC-to-APSP computes w(C) in this G_{i,j,k} in wt (line 6).]
We start by stating the notion of a 'critical edge' from [35].
Lemma 7.1 ([35]). Let G = (V, E, w) be a weighted undirected graph, where w : E → R⁺, and let C = ⟨v₁, v₂, . . . , v_l⟩ be a cycle in G. There exists an edge (v_i, v_{i+1}) on C such that ⌈w(C)/2⌉ − w(v_i, v_{i+1}) ≤ d_C(v₁, v_i) ≤ ⌊w(C)/2⌋ and ⌈w(C)/2⌉ − w(v_i, v_{i+1}) ≤ d_C(v_{i+1}, v₁) ≤ ⌊w(C)/2⌋.

The edge (v_i, v_{i+1}) is called the critical edge of C with respect to the start vertex v₁.
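To illustrate Lemma 7.1, the following hedged sketch (our own helper, not from the paper) scans a cycle once and returns an edge satisfying the two inequalities, using the in-cycle distances d_C.

```python
from math import ceil, floor

def critical_edge(cycle, w):
    """Return a critical edge of `cycle` with respect to its first vertex,
    per Lemma 7.1.  `cycle` is the vertex sequence (v1, ..., vl) and w(u, v)
    is the weight of edge (u, v)."""
    l = len(cycle)
    total = sum(w(cycle[t], cycle[(t + 1) % l]) for t in range(l))
    up, down = ceil(total / 2), floor(total / 2)
    # d_C(v1, v_i): distance from v1 to v_i inside the cycle C (shorter arc).
    prefix, dC = 0, []
    for t in range(l):
        dC.append(min(prefix, total - prefix))
        prefix += w(cycle[t], cycle[(t + 1) % l])
    for t in range(l):
        u, v = cycle[t], cycle[(t + 1) % l]
        if up - w(u, v) <= dC[t] <= down and up - w(u, v) <= dC[(t + 1) % l] <= down:
            return u, v
    return None

# Example: a 5-cycle with unit edge weights; the critical edge w.r.t. v1 = 0 is (2, 3).
print(critical_edge([0, 1, 2, 3, 4], lambda u, v: 1))
```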
7.1  Reducing Minimum Weight Cycle to APSD
We first describe a useful property of a minimum weight cycle.
Lemma 7.2. Let C be a minimum weight cycle in a weighted undirected graph G. Let x and y be two vertices lying on the cycle C and let π¹_{x,y} and π²_{x,y} be the two paths from x to y in C. WLOG assume that w(π¹_{x,y}) ≤ w(π²_{x,y}). Then π¹_{x,y} is a shortest path from x to y and π²_{x,y} is a second simple shortest path from x to y, i.e., a path from x to y that is shortest among all paths from x to y that are not identical to π¹_{x,y}.
Proof. Assume to the contrary that π³_{x,y} is a second simple shortest path from x to y of weight less than w(π²_{x,y}). Let the path π³_{x,y} deviate from the path π¹_{x,y} at some vertex u and then merge back at some vertex v. Then the subpaths from u to v in π¹_{x,y} and π³_{x,y} together form a cycle of weight strictly less than w(C), resulting in a contradiction as C is a minimum weight cycle in G.
Observation 3.1 follows, since the path P there must be either a shortest path or a second simple
shortest path in G by the above lemma, so in G′ it must be a shortest path.
Consider the graphs G_{i,j,k} as described in Section 3. In the following two lemmas we identify three key properties of a path π from y¹ to z² (y ≠ z) in a G_{i,j,k} that (I) will be satisfied by the path P in Observation 3.1 for y¹ = v₁¹ and z² = v_p² in some G_{i,j,k} (Lemma 7.3), and (II) will cause a simple cycle in G to be contained in the concatenation of π with the shortest path from y to z computed by APSP (Lemma 7.4). Once we have these two lemmas in hand, we have a method to find a minimum weight cycle in G by calling APSP on each G_{i,j,k} and then identifying all pairs y¹, z² in each graph that satisfy these properties. Since the path P is guaranteed to be one of the pairs, and no spurious path will be identified, the minimum weight cycle can be identified. We now fill in the details.
Lemma 7.3. Let C = ⟨v₁, v₂, . . . , v_l⟩ be a minimum weight cycle in G and let (v_p, v_{p+1}) be its critical edge with respect to the start vertex v₁. WLOG assume that d_G(v₁, v_p) ≥ d_G(v₁, v_{p+1}). Then there exist i ∈ {1, . . . , ⌈log n⌉}, j ∈ {0, 1} and k ∈ {1, 2, . . . , ⌈log ρ⌉} such that the following conditions hold:
(i) d_{G_{i,j,k}}(v₁¹, v_p²) + d_G(v₁, v_p) = w(C)
(ii) Last_{G_{i,j,k}}(v₁¹, v_p²) ≠ Last_G(v₁, v_p)
(iii) d_{G_{i,j,k}}(v₁¹, v_p²) ≤ d_G(v₁, v_p) + M/2^{k−1}
Proof. Let i, j and k be such that v_{p−1} and v_{p+1} differ on the i-th bit, j is the i-th bit of v_{p+1}, and k satisfies M/2^k < w(v_p, v_{p+1}) ≤ M/2^{k−1}. Hence, the edge (v_{p−1}¹, v_p²) is not present and the edge (v_{p+1}¹, v_p²) is present in G_{i,j,k}, and so Last_{G_{i,j,k}}(v₁¹, v_p²) ≠ Last_G(v₁, v_p), satisfying part (ii) of the lemma.

Let us map the path P in Observation 3.1 to the path P′ in G_{i,j,k}, such that all vertices except v_p are mapped to V₁ and v_p is mapped to V₂ (the bold path from v₁¹ to v_p² in Figure 4b). Then, if P′ is a shortest path from v₁¹ to v_p² in G_{i,j,k}, both parts (i) and (iii) of the lemma will hold. So it remains to show that P′ is a shortest path. But if not, an actual shortest path from v₁¹ to v_p² in G_{i,j,k} would create a shorter cycle in G than C, and if that cycle were not simple, one could extract from it an even shorter cycle, contradicting the fact that C is a minimum weight cycle in G.
Lemma 7.4. If there exist i ∈ {1, . . . , ⌈log n⌉}, j ∈ {0, 1}, k ∈ {1, 2, . . . , ⌈log ρ⌉} and y, z ∈ V such that the following conditions hold:
(i) d_{G_{i,j,k}}(y¹, z²) + d_G(y, z) = wt for some wt
(ii) Last_{G_{i,j,k}}(y¹, z²) ≠ Last_G(y, z)
(iii) d_{G_{i,j,k}}(y¹, z²) ≤ d_G(y, z) + M/2^{k−1}
then there exists a simple cycle C containing z of weight at most wt in G.
Proof. Let π_{y,z} be a shortest path from y to z in G (see Figure 4a) and let π_{y¹,z²} be a shortest path from y¹ to z² in G_{i,j,k} (Figure 4b). Let π′_{y,z} be the path corresponding to π_{y¹,z²} in G.

We now need to show that the path π′_{y,z} is simple. Assume that π′_{y,z} is not simple. This implies that the path π_{y¹,z²} must contain x¹ and x² for some x ∈ V. Now if x ≠ z, then we can remove the subpath from x¹ to x² (or from x² to x¹) to obtain an even shorter path from y¹ to z².

This implies that the path π_{y¹,z²} contains z¹ as an internal vertex. Let π_{z¹,z²} be the subpath of π_{y¹,z²} from vertex z¹ to z². If π_{z¹,z²} contains at least 2 internal vertices then it would give a simple cycle of weight less than wt, and we are done. Otherwise, the path π_{z¹,z²} contains exactly one internal vertex (say x¹). Hence the path π_{z¹,z²} corresponds to the edge (z, x) traversed twice in graph G. But the weight of the edge (x, z) must be greater than M/2^k (as the edge (x¹, z²) is present in G_{i,j,k}). Hence w(π_{z¹,z²}) > M/2^{k−1} and therefore d_{G_{i,j,k}}(y¹, z²) ≥ d_G(y, z) + w(π_{z¹,z²}) > d_G(y, z) + M/2^{k−1}, resulting in a contradiction as condition (iii) states otherwise. (It is for this property that the index k in G_{i,j,k} is used.) Thus the path π_{y¹,z²} does not contain z¹ as an internal vertex and hence π′_{y,z} is simple.

If the paths π_{y,z} and π′_{y,z} do not have any internal vertices in common, then π_{y,z} ◦ π′_{y,z} corresponds to a simple cycle C in G of weight wt that passes through y and z. Otherwise, we can extract from π_{y,z} ◦ π′_{y,z} a cycle of weight smaller than wt. This establishes the lemma.
Proof of Theorem 2.2: To compute the weight of a minimum weight cycle in G in Õ(n² + T_APSP) time, we use the procedure MWC-to-APSP described in Section 3. By Lemmas 7.3 and 7.4, the value wt returned by this algorithm is the weight of a minimum weight cycle in G.
Sparse Reduction to APSD: We now describe how to avoid using the Last matrix in the reduction. A 2-approximation algorithm for finding a cycle of weight at most 2t, where t is such that the minimum weight cycle's weight lies in the range (t, 2t], as well as distances between pairs of vertices within distance at most t, was given by Lingas and Lundell [28]. This algorithm can also compute the last edge on each shortest path it computes, and its running time is Õ(n² log(nρ)). For a minimum weight cycle C = ⟨v₁, v₂, . . . , v_l⟩ where the edge (v_p, v_{p+1}) is a critical edge with respect to the start vertex v₁, the shortest path length from v₁ to v_p or to v_{p+1} is at most t. Thus using this algorithm, we can compute the last edge on a shortest path for such pairs of vertices in Õ(n² log(nρ)) time.

In our reduction to APSD, we first run the 2-approximation algorithm on the input graph G to obtain Last(y, z) for certain pairs of vertices. Then, in Step 5 we check whether Last_{G_{i,j,k}}(y¹, z²) ≠ Last_G(y, z) only if Last_G(y, z) has been computed (otherwise the current path is not a candidate for computing a minimum weight cycle). It appears from the algorithm that the Last values are also needed in the G_{i,j,k}. However, instead of computing the Last values in each G_{i,j,k}, we check for the shortest path from y to z only in those graphs G_{i,j,k} where Last_G(y, z) has been computed and the corresponding edge is not present in G_{i,j,k}. In other words, if Last(y, z) = q, we will only consider the shortest paths from y¹ to z² in those graphs G_{i,j,k} where q's i-th bit is not equal to j. Thus our reduction to APSD goes through without needing APSP to output the Last matrix.
7.2  Reducing ANSC to APSP in Unweighted Undirected Graphs
For our sparse Õ(n²) reduction from ANSC to APSP in unweighted undirected graphs, we use the graphs from the previous section, but we do not use the index k, since the graph is unweighted. Our reduction exploits the fact that in unweighted graphs, every edge in a cycle is a critical edge with respect to some vertex. Thus we construct 2⌈log n⌉ graphs G_{i,j}, and in order to construct a shortest cycle through vertex z in G, we will set z = v_p² in the reduction from the previous section. Then, by letting one of the two edges incident on z in the shortest cycle through z be the critical edge for the cycle, the construction from the previous section will allow us to find the length of a minimum length cycle through z, for each z ∈ V, with the following post-processing algorithm.
ANSC-to-APSP(G)
1: for each vertex z ∈ V do
2:     wt[z] ← ∞
3: for 1 ≤ i ≤ ⌈log n⌉ and j ∈ {0, 1} do
4:     Compute APSP on G_{i,j}
5:     for y, z ∈ V do
6:         if d_{G_{i,j}}(y¹, z²) ≤ d_G(y, z) + 1 then check if Last_{G_{i,j}}(y¹, z²) ≠ Last_G(y, z)
7:         if both checks in Step 6 hold then wt[z] ← min(wt[z], d_{G_{i,j}}(y¹, z²) + d_G(y, z))
8: return the wt array
Correctness of the above sparse reduction follows from the following two lemmas, which are similar
to Lemmas 7.3 and 7.4.
Lemma 7.5. Let C = ⟨z, v₂, v₃, . . . , v_q⟩ be a minimum length cycle passing through vertex z ∈ V. Let (v_p, v_{p+1}) be its critical edge such that p = ⌊q/2⌋ + 1. Then there exist i ∈ {1, . . . , ⌈log n⌉} and j ∈ {0, 1} such that the following conditions hold:
(i) d_{G_{i,j}}(v_p¹, z²) + d_G(v_p, z) = len(C)
(ii) Last_{G_{i,j}}(v_p¹, z²) ≠ Last_G(v_p, z)
(iii) d_{G_{i,j}}(v_p¹, z²) ≤ d_G(v_p, z) + 1

Lemma 7.6. If there exist i ∈ {1, . . . , ⌈log n⌉}, j ∈ {0, 1} and y, z ∈ V such that the following conditions hold:
(i) d_{G_{i,j}}(y¹, z²) + d_G(y, z) = q for some q with d_G(y, z) = ⌊q/2⌋
(ii) Last_{G_{i,j}}(y¹, z²) ≠ Last_G(y, z)
(iii) d_{G_{i,j}}(y¹, z²) ≤ d_G(y, z) + 1
then there exists a simple cycle C passing through z of length at most q in G.
Proof of Theorem 2.3: We now show that the entries in the wt array returned by the above algorithm correspond to the ANSC output for G. Let z ∈ V be an arbitrary vertex in G and let q = wt[z]. Let y′ be the vertex in Step 5 for which we obtain this value of q. Hence by Lemma 7.6, there exists a simple cycle C passing through z of length at most q in G. If there were a cycle through z of length q′ < q then, by Lemma 7.5, there would exist a vertex y′′ such that the conditions in Step 6 hold for q′, and the algorithm would have returned a smaller value than wt[z], which is a contradiction. This is a sparse Õ(n²) reduction since it makes O(log n) calls to APSP and spends Õ(n²) additional time.
It would be interesting to see if we can obtain a reduction from weighted ANSC to APSD or APSP.
The above reduction does not work for the weighted case since it exploits the fact that for any cycle
C through a vertex z, an edge in C that is incident on z is a critical edge for some vertex in C.
However, this property need not hold in the weighted case.
8  Fine-grained Reductions for Directed Graphs
In Section 4, we gave an overview of our tilde-sparse O(m + n log n) reduction from directed 2-SiSP
to the Radius problem. Here we give full details of this reduction along with the rest of our sparse
reductions for directed graphs (except for the reductions related to Centrality problems which are
described in Section 9).
I. 2-SiSP to Radius and s-t Replacement Paths to Eccentricities: Here we give the details of the sparse reduction from 2-SiSP to Radius described in Section 4 and a related sparse reduction from s-t Replacement Paths to Eccentricities.
We start by pointing out some differences between the sparse reduction we give below and an earlier sub-cubic reduction to Radius in [3]. In that sub-cubic reduction to Radius, the starting problem is Min-Wt-∆. This reduction constructs a 4-partite graph that contains paths that correspond to triangles in the original graph. However, Min-Wt-∆ can be solved in O(m^{3/2}) time and hence, to use this reduction to show MWC hardness for Radius, we would first need to show that Min-Wt-∆ is MWC-hard. But this would achieve a major breakthrough by giving an Õ(m^{3/2}) time algorithm for MWC (and would refute MWCC). If we tried to adapt this reduction in [3] to the mn class by starting from MWC instead of Min-Wt-∆, it appears that we would need an n-partite graph, since a minimum weight cycle could pass through all n vertices in the graph. This would not be a sparse reduction. Hence, here we instead present a more complex reduction from 2-SiSP. As in [3], we also use bit-encoding to preserve sparsity in the reduced graph. However, the rest of the reduction is different.
Lemma 8.1. In weighted directed graphs, 2-SiSP ≲^sprs_{m+n} Radius and s-t Replacement Paths ≲^sprs_{m+n} Eccentricities.
Proof. We are given an input graph G = (V, E), a source vertex s and a sink/target vertex t, and we wish to compute the second simple shortest path from s to t. Let P (s = v₀ → v₁ ⇝ v_{l−1} → v_l = t) be the shortest path from s to t in G.
Constructing the reduced graph G′′: We first create the graph G′, which contains G and l additional vertices z₀, z₁, . . . , z_{l−1}. We remove the edges lying on P from G′. For each 0 ≤ i ≤ l − 1, we add an edge from zi to vi of weight dG(s, vi) and an edge from vi+1 to zi of weight dG(vi+1, t). Also, for each 1 ≤ i ≤ l − 1, we add a zero weight edge from zi to zi−1.

Now form G′′ from G′. For each 0 ≤ j ≤ l − 1, we replace vertex zj by vertices zji and zjo, we place a directed edge of weight 0 from zji to zjo, and we replace each incoming edge to (outgoing edge from) zj with an incoming edge to zji (outgoing edge from zjo) in G′.

Let M be the largest edge weight in G and let M′ = 9nM. For each 0 ≤ j ≤ l − 1, we add additional vertices yji and yjo, and we place a directed edge of weight 0 from yjo to zjo and an edge of weight (11/9)M′ from zji to yji.
We add 2 additional vertices A and B, and we place a directed edge from A to B of weight 0. We also add l incoming edges to A (outgoing edges from B) from (to) each of the yjo's, of weight 0 (M′). We would also like to add edges of weight 2M′/3 from yjo to yki (for each k ≠ j), but the addition of these O(n²) edges would make the graph dense. To solve this problem, we instead add a gadget to our construction that ensures that, for all 0 ≤ j ≤ l − 1, there is at least one path of length 2 and weight equal to 2M′/3 from yjo to yki (for each k ≠ j), similar to [3]. In this gadget, we add 2⌈log n⌉ vertices of the form Cr,s for 1 ≤ r ≤ ⌈log n⌉ and s ∈ {0, 1}. Now for each 0 ≤ j ≤ l − 1, 1 ≤ r ≤ ⌈log n⌉ and s ∈ {0, 1}, we add an edge of weight M′/3 from yjo to Cr,s if j's r-th bit is equal to s. We also add an edge of weight M′/3 from Cr,s to yji if j's r-th bit is not equal to s. So overall we add 2n log n edges incident to the Cr,s vertices; for each yjo we add log n outgoing edges to Cr,s vertices and for each yji we add log n incoming edges from Cr,s vertices.
We can observe that for 0 ≤ j ≤ l − 1, there is at least one path of weight 2M′/3 from yjo to yki (for each k ≠ j), and that the gadget does not add any new paths from yjo to yji. The reason is that for every distinct j, k, there is at least one bit (say r) where j and k differ; let s be the r-th bit of j. Then there must be an edge from yjo to Cr,s and an edge from Cr,s to yki, resulting in a path of weight 2M′/3 from yjo to yki. And by the same argument we can also observe that this gadget does not add any new paths from yjo to yji.
We call this graph G′′. Figure 3 depicts the full construction of G′′ for l = 3. We now establish the following three properties.
(i) For each 0 ≤ j ≤ l − 1, the longest shortest path in G′′ from yjo is to the vertex yji. It is easy to see that the shortest path from yjo to any of the vertices in G or to any of the z's has weight at most nM, and the shortest paths from yjo to the vertices A and B have weight 0. For k ≠ j, the shortest paths from yjo to yko and to yki have weight M′ and (2/3)M′ respectively. In contrast, the shortest path from yjo to yji has weight at least 10nM, as it includes the last edge (zji, yji) of weight (11/9)M′ = 11nM. It is easy to observe that the shortest path from yjo to yji corresponds to the shortest path from zjo to zji.
(ii) The shortest path from zjo to zji corresponds to the replacement path for the edge (vj, vj+1) lying on P. Suppose not, and let Pj (s ⇝ vh ⇝ vk ⇝ t) (where vh is the vertex where Pj separates from P and vk is the vertex where it joins P) be the replacement path from s to t for the edge (vj, vj+1). But then the path πj (zjo → z(j−1)i ⇝ zho → vh ◦ Pj(vh, vk) ◦ vk → zki → zko ⇝ zji) (where Pj(vh, vk) is the subpath of Pj from vh to vk) from zjo to zji has weight equal to wt(Pj), resulting in a contradiction, as the shortest path from zjo to zji has weight greater than that of Pj.
(iii) One of the vertices among the yjo's is a center of G′′. It is easy to see that none of the vertices in G could be a center of the graph G′′, as there is no path from any v ∈ V to any of the yjo's in G′′. Using a similar argument, we can observe that none of the z's, or the vertices yji, could be a potential candidate for the center of G′′. For the vertices A and B, the shortest path to any of the yji's has weight exactly (5/3)M′ = 15nM, which is strictly greater than the weight of the largest shortest path from any of the yjo's. Thus one of the vertices among the yjo's is a center of G′′.
Thus, by computing the radius in G′′, from (i), (ii), and (iii), we can compute the weight of the shortest replacement path from s to t, which by the definition of 2-SiSP is the second simple shortest path from s to t. This completes the proof of 2-SiSP ≲^sprs_{m+n} Radius.

Now, if instead of computing Radius in G′′ we compute the Eccentricities of all vertices in G′′, then from (i) and (ii) we can compute the weight of the replacement path for every edge (vj, vj+1) lying on P, thus solving the replacement paths problem. So s-t Replacement Paths ≲^sprs_{m+n} Eccentricities.
Constructing G′′ takes O(m+n log n) time since we add O(n) additional vertices and O(m+n log n)
additional edges, and given the output of Radius (Eccentricities), we can compute 2-SiSP (s-t
Replacement Paths) in O(1) (O(n)) time and hence the cost of both reductions is O(m+n log n).
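For concreteness, the following Python sketch assembles the edge list of G′′ from G, the shortest path P, and the distances d_G(s, ·) and d_G(·, t), following the construction described above. The vertex naming and data layout are our own, and the sketch is illustrative rather than a verified implementation of the reduction.

```python
from math import ceil, log2

def build_radius_instance(vertices, edges, P, ds, dt):
    """Sketch of the reduced graph G'' in the 2-SiSP -> Radius reduction.
    `edges` lists directed edges (u, v, w) of G, `P` is the shortest s-t path
    as a vertex list [v0, ..., vl], ds[v] = d_G(s, v) and dt[v] = d_G(v, t)."""
    n, l = len(vertices), len(P) - 1
    M = max(w for _, _, w in edges)
    Mp = 9 * n * M                                   # M' = 9nM
    b = ceil(log2(max(n, 2)))
    path_edges = {(P[i], P[i + 1]) for i in range(l)}
    out = [(u, v, w) for u, v, w in edges if (u, v) not in path_edges]
    for j in range(l):
        zi, zo, yi, yo = ('zi', j), ('zo', j), ('yi', j), ('yo', j)
        out.append((zi, zo, 0))
        out.append((zo, P[j], ds[P[j]]))             # outgoing edge of z_j to v_j
        out.append((P[j + 1], zi, dt[P[j + 1]]))     # incoming edge v_{j+1} to z_j
        if j >= 1:
            out.append((('zo', j), ('zi', j - 1), 0))
        out.append((yo, zo, 0))
        out.append((zi, yi, 11 * n * M))             # weight (11/9) * M'
        out.append((yo, 'A', 0))
        out.append(('B', yo, Mp))
        for r in range(b):                           # bit-encoding gadget on C_{r,s}
            s_bit = (j >> r) & 1
            out.append((yo, ('C', r, s_bit), 3 * n * M))       # weight M'/3
            out.append((('C', r, 1 - s_bit), yi, 3 * n * M))   # weight M'/3
    out.append(('A', 'B', 0))
    # Per properties (i)-(iii), the weight of the second simple shortest s-t
    # path can then be read off as Radius(G'') - (11/9) * M'.
    return out
```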
II. ANSC and Replacement Paths: We first describe a sparse reduction from directed MWC
to 2-SiSP, which we will use for reducing ANSC to the s-t replacement paths problem. This reduction
is adapted from a sub-cubic non-sparse reduction from Min-Wt-∆ to 2-SiSP in [42]. The reduction
in [42] reduces Min-Wt-∆ to 2-SiSP by creating a tripartite graph. Since starting from Min-Wt-∆
is not appropriate for our results (as discussed in our sparse reduction to directed Radius), we start
instead from MWC, and instead of the tripartite graph used in [42] we use the original graph G
with every vertex v replaced with 2 copies, vi and vo .
In this reduction, as in [42], we first create a path of length n with vertices labeled from p0 to pn ,
which will be the initial shortest path. We then map every edge (pi , pi+1 ) to the vertex i in the
original graph G such that the replacement path from p0 to pn for the edge (pi , pi+1 ) corresponds
to the shortest cycle passing through i in G. Thus computing 2-SiSP (i.e., the shortest replacement
path) from p0 to pn in the constructed graph corresponds to the minimum weight cycle in the
original graph. The details of the reduction are in the proof of the lemma below.
Lemma 8.2. In weighted directed graphs, MWC ≤^sprs_{m+n} 2-SiSP.
Proof. To compute MWC in G, we first create the graph G′, where we replace every vertex z by vertices zi and zo, we place a directed edge of weight 0 from zi to zo, and we replace each incoming edge to (outgoing edge from) z with an incoming edge to zi (outgoing edge from zo). We also add a path P (p₀ → p₁ ⇝ p_{n−1} → p_n) of length n and weight 0.

Let Q = n · M, where M is the maximum weight of any edge in G. For each 1 ≤ j ≤ n, we add an edge of weight (n − j + 1)Q from p_{j−1} to jo and an edge of weight jQ from ji to p_j in G′ to form G′′. Figure 5 depicts the full construction of G′′ for n = 3. This is an (m + n) reduction, and it can be seen that the second simple shortest path from p₀ to p_n in G′′ corresponds to a minimum weight cycle in G.
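A short Python sketch of this construction follows; vertices of G are assumed to be numbered 1, . . . , n, and the helper name and output format are our own illustration.

```python
def mwc_to_2sisp_instance(n, edges):
    """Sketch of the reduced graph G'' in the reduction MWC <=^sprs_{m+n} 2-SiSP
    (Lemma 8.2).  `edges` lists directed edges (u, v, w) of G."""
    M = max(w for _, _, w in edges)
    Q = n * M
    out = []
    for z in range(1, n + 1):                        # split z into in/out copies
        out.append(((z, 'i'), (z, 'o'), 0))
    for u, v, w in edges:
        out.append(((u, 'o'), (v, 'i'), w))
    for j in range(n):                               # zero-weight path p_0 ... p_n
        out.append((('p', j), ('p', j + 1), 0))
    for j in range(1, n + 1):                        # attach vertex j to path edge (p_{j-1}, p_j)
        out.append((('p', j - 1), (j, 'o'), (n - j + 1) * Q))
        out.append(((j, 'i'), ('p', j), j * Q))
    # The second simple shortest path from p_0 to p_n detours through some vertex j;
    # its weight is (n + 1) * Q plus the weight of a shortest cycle through j in G.
    return out, ('p', 0), ('p', n)
```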
[Figure 5: G′′ for n = 3 in the reduction directed MWC ≤^sprs_{m+n} 2-SiSP.]
[Figure 6: G′ for l = 3 in the reduction directed s-t Replacement Paths ≤^sprs_{m+n} ANSC.]
We now establish the equivalence between ANSC and the s-t replacement paths problem under
(m + n)-reductions by first showing an (m + n)-sparse reduction from s-t replacement paths problem
to ANSC. We then describe a sparse reduction from ANSC to the s-t replacement paths problem,
which is similar to the reduction from MWC to 2-SiSP as described in Lemma 8.2.
Lemma 8.3. In weighted directed graphs, s-t replacement paths ≡^{sprs}_{m+n} ANSC.
Proof. We are given an input graph G = (V, E), a source vertex s and a sink vertex t and we wish
to compute the replacement paths for all the edges lying on the shortest path from s to t. Let
P (s = v0 → v1 → · · · → vl−1 → vl = t) be the shortest path from s to t in G.
(i) Constructing G′ : We first create the graph G′ , as described in the proof of Lemma 8.1. Figure 6
depicts the full construction of G′ for l = 3.
(ii) We now show that for each 0 ≤ i ≤ l − 1, the replacement path from s to t for the edge (vi , vi+1 )
lying on P has weight equal to the shortest cycle passing through zi . If not, assume that for some i
(0 ≤ i ≤ l − 1), the weight of the replacement path from s to t for the edge (vi , vi+1 ) is not equal
to the weight of the shortest cycle passing through zi .
Let Pi (s ⇝ vj ⇝ vk ⇝ t) (where vj is the vertex where Pi separates from P and vk is the vertex where it joins P) be the replacement path from s to t for the edge (vi, vi+1), and let Ci (zi ⇝ zp → vp ⇝ vq → zq ⇝ zi) be the shortest cycle passing through zi in G′.
If wt(Pi) < wt(Ci), then the cycle Ci′ (zi → zi−1 ⇝ zj → vj ◦ Pi(vj, vk) ◦ vk → zk → zk−1 ⇝ zi) (where Pi(vj, vk) is the subpath of Pi from vj to vk) passing through zi has weight equal to wt(Pi) < wt(Ci), resulting in a contradiction as Ci is the shortest cycle passing through zi in G′.
Now if wt(Ci) < wt(Pi), then the path Pi′ (s ⇝ vp ◦ Ci(vp, vq) ◦ vq ⇝ vl), where Ci(vp, vq) is the subpath of Ci from vp to vq, is also a path from s to t avoiding the edge (vi, vi+1), and has weight equal to wt(Ci) < wt(Pi), resulting in a contradiction as Pi is the shortest replacement path from s to t for the edge (vi, vi+1).
We then compute ANSC in G′. By (ii), the shortest cycles through each of the vertices z0, z1, . . . , zl−1 give us the replacement paths from s to t. This leads to an (m + n)-sparse reduction from the s-t replacement paths problem to ANSC.
Figure 7: Known sparse reductions for centrality problems, all from [3]. The regular edges represent sparse O(m + n) reductions, the squiggly edges represent tilde-sparse O(m + n) reductions, and the dashed edges represent reductions that are trivial. BC and Min-Wt-∆ (shaded with gray) are known to be sub-cubic equivalent to APSP [3, 42].
Now for the other direction, we are given an input graph G = (V, E) and we wish to compute the ANSC in G. We first create the graph G′′, as described in Lemma 8.2. We can see that the shortest path from p0 to pn avoiding edge (pj−1, pj) corresponds to a shortest cycle passing through j in G. This gives us an (m + n)-sparse reduction from ANSC to the s-t replacement paths problem.
9 Betweenness Centrality: Reductions
In this section, we consider sparse reductions for Betweenness Centrality and related problems. In
its full generality, the Betweenness Centrality of a vertex v is the sum, across all pairs of vertices s, t,
of the fraction of shortest paths from s to t that contain v as an internal vertex. This problem has
an Õ(mn) time algorithm due to Brandes [12]. Since there can be an exponential (in n) number of shortest paths from one vertex to another, this general problem can involve very large numbers.
In [3], a simplified variant was considered, where it is assumed that there is a unique shortest path
for each pair of vertices, and the Betweenness Centrality of vertex v, BC(v), is defined as the
number of vertex pairs s, t such that v is an internal vertex on the unique shortest path from s to
t. We will also restrict our attention to this variant here.
A number of sparse reductions relating to the following problems were given in [3].
• Betweenness Centrality (BC) of a vertex v, BC(v).
• Positive Betweenness Centrality (Pos BC) of v: determine whether BC(v) > 0.
• All Nodes Betweenness Centrality (ANBC): compute, for each v, the value of BC(v).
• Positive All Nodes Betweenness Centrality (Pos ANBC): determine, for each v, whether
BC(v) > 0.
• Reach Centrality (RC) of v: compute max_{s,t∈V : dG(s,v)+dG(v,t)=dG(s,t)} min(dG(s, v), dG(v, t)).
Figure 7 gives an overview of the previous fine-grained results given in [3] for Centrality problems.
In this figure, BC is the only centrality problem that is known to be sub-cubic equivalent to APSP,
and hence is shaded in the figure (along with Min-Wt-∆). None of these sparse reductions in [3]
imply MWCC hardness for any of the centrality problems since Diameter is not known to be MWCC-hard (or even sub-cubic equivalent to APSP), and Min-Wt-∆ has an Õ(m^{3/2}) time algorithm, and so would falsify MWCC if it were MWCC-hard. On the other hand, Diameter is known to be both SETH-hard [34]
and k-DSH Hard (Section 5) and hence all these problems in Figure 7 (except Min-Wt-∆) are also
SETH and k-DSH hard.
Figure 8: Sparse reductions for weighted directed graphs. The regular edges represent sparse O(m + n) reductions, the squiggly edges represent tilde-sparse O(m + n) reductions, and the dashed edges represent reductions that are trivial. All problems except APSP are MWCC-hard. Eccentricities, BC, ANBC and Pos ANBC are also SETH/k-DSH hard. ANBC and Pos ANBC (problems inside the dashed circles) are both not known to be subcubic equivalent to APSP.
In this section, we give a sparse reduction from 2-SiSP to BC, establishing MWCC-hardness for
BC. We also give a tilde-sparse reduction from ANSC to Pos ANBC, and thus we have MWCC-hardness for both Pos ANBC and for ANBC, though neither problem is known to be in the sub-cubic
equivalence class. (Both have Õ(mn) time algorithms, and have APSP-hardness under sub-cubic
reductions.)
Figure 8 gives an updated partial order of our sparse reductions for weighted directed graphs; this
figure augments Figure 1 by including the sparse reductions for BC problems given in this section.
I. 2-SiSP to BC: Our sparse reduction from 2-SiSP to BC is similar to the reduction from 2-SiSP to Radius described in Section 4. In our reduction, we first map every edge (vj , vj+1 ) to new
vertices yjo and yji such that the shortest path from yjo to yji corresponds to the replacement path
from s to t for the edge (vj , vj+1 ). We then add an additional vertex A and connect it to vertices
yjo ’s and yji ’s. We also ensure that the only shortest paths passing through A are from yjo to yji .
We then do binary search on the edge weights for the edges going from A to yji ’s with oracle calls to
the Betweenness Centrality problem, to compute the weight of the shortest replacement path from
s to t, which by definition of 2-SiSP, is the second simple shortest path from s to t.
Lemma 9.1. In weighted directed graphs, 2-SiSP ≲^{sprs}_{m+n} Betweenness Centrality.
Proof. We are given an input graph G = (V, E), a source vertex s and a sink/target vertex t and we
wish to compute the second simple shortest path from s to t. Let P (s = v0 → v1 → · · · → vl−1 → vl = t) be the shortest path from s to t in G.
(i) Constructing G′′ : We first construct the graph G′′ , as described in the proof of Lemma 8.1,
without the vertices A and B. For each 0 ≤ j ≤ l − 1, we change the weight of the edge from zji to
yji to M ′ (where M ′ = 9nM and M is the largest edge weight in G).
We add an additional vertex A and for each 0 ≤ j ≤ l − 1, we add an incoming (outgoing) edge
from (to) yjo (yji ). We assign the weight of the edges from yjo ’s to A as 0 and from A to yji ’s as
M ′ + q (for some q in the range 0 to nM ).
Figure 9 depicts the full construction of G′′ for l = 3.
We observe that for each 0 ≤ j ≤ l − 1, a shortest path from yjo to yji with (zji , yji ) as the last
edge has weight equal to M ′ + dG′′ (zjo , zji ).
Figure 9: G′′ for l = 3 in the reduction: directed 2-SiSP ≲^{sprs}_{m+n} Betweenness Centrality. The gray and the bold edges have weight M′ and (1/3)M′ respectively. All the outgoing (incoming) edges from (to) A have weight M′ + q (0).
(ii) We now show that the Betweenness Centrality of A, i.e. BC(A), is equal to l iff q < dG′′(zjo, zji) for each 0 ≤ j ≤ l − 1. The only paths that pass through the vertex A are from the vertices yjo to the vertices yji. For j ≠ k, as noted in the proof of Lemma 8.1, there exists some r, s such that there is a path from yjo to yki that goes through Cr,s and has weight equal to (2/3)M′. However, a path from yjo to yki passing through A has weight M′ + q, which is strictly greater than (2/3)M′, and hence the pair (yjo, yki) does not contribute to the Betweenness Centrality of A.
Now if BC(A) is equal to l, it implies that the shortest paths for all pairs (yjo, yji) pass through A and there is exactly one shortest path for each such pair. Hence for each 0 ≤ j ≤ l − 1, M′ + q < M′ + dG′′(zjo, zji). Thus q < dG′′(zjo, zji) for each 0 ≤ j ≤ l − 1.
On the other hand if q < dG′′ (zjo , zji ) for each 0 ≤ j ≤ l − 1, then the path from yjo to yji with
(zji , yji ) as the last edge has weight M ′ + dG′′ (zjo , zji ). However the path from yjo to yji passing
through A has weight M ′ + q < M ′ + dG′′ (zjo , zji ). Hence every such pair contributes 1 to the
Betweenness Centrality of A and thus BC(A) = l.
Thus, using (ii), we just need to find the minimum value of q such that BC(A) < l in order to compute the value min_{0≤j≤l−1} dG′′(zjo, zji). We can find such a q by performing a binary search over the range 0 to nM, computing BC(A) at each step. Thus we make O(log nM) calls to the Betweenness Centrality algorithm.
As observed in the proof of Lemma 8.3, we know that the shortest path from zjo to zji corresponds
to the replacement path for the edge (vj , vj+1 ) lying on P . Thus by making O(log nM ) calls to the
Betweenness Centrality algorithm, we can compute the second simple shortest path from s to t in
G. This completes the proof.
The cost of this reduction is O((m + n log n) · log nM ).
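To make the binary-search step concrete, the following sketch (ours, not the authors' code) assumes an oracle bc_of_A(q) that sets the weight of every edge from A to the y_{j,i} vertices to M′ + q, runs any Betweenness Centrality algorithm on G′′ and returns BC(A); l and nM are as in the proof above.

```python
# Binary search of Lemma 9.1 (our sketch; `bc_of_A` is an assumed oracle).
def min_replacement_distance(bc_of_A, l, nM):
    # Smallest q in [0, nM] with BC(A) < l; by part (ii) of the proof this
    # value equals min_j d_{G''}(z_{j,o}, z_{j,i}).
    lo, hi = 0, nM
    while lo < hi:
        mid = (lo + hi) // 2
        if bc_of_A(mid) < l:
            hi = mid
        else:
            lo = mid + 1
    return lo          # O(log nM) oracle calls in total
```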
II. ANSC to Pos ANBC: We now describe a tilde-sparse reduction from the ANSC problem
to the All Nodes Positive Betweenness Centrality problem (Pos ANBC). Sparse reductions from Min-Wt-∆ and from Diameter to Pos ANBC are given in [3]. However, Min-Wt-∆ can be solved in O(m^{3/2}) time, and Diameter is not known to be MWC-hard, hence neither of these reductions can be used to show hardness of the All Nodes Positive Betweenness Centrality problem relative to the MWC Conjecture. (Recall that Pos ANBC is not known to be subcubic equivalent to APSP.)

Figure 10: G′ for n = 3 in the reduction: directed ANSC ≲^{sprs}_{m+n} Pos ANBC.
Our reduction is similar to the reduction from 2-SiSP to the Betweenness Centrality problem, but
instead of computing betweenness centrality through one vertex, it computes the positive betweenness centrality values for n different nodes. We first split every vertex x into vertices xo and xi
such that the shortest path from xo to xi corresponds to the shortest cycle passing through x in
the original graph. We then add additional vertices zx for each vertex x in the original graph and
connect it to the vertices xo and xi such that the only shortest path passing through zx is from xo
to xi . We then perform binary search on the edge weights for the edges going from zx to xi with
oracle calls to the Positive Betweenness Centrality problem, to compute the weight of the shortest
cycle passing through x in the original graph. Our reduction is described in Lemma 9.2.
Lemma 9.2. In weighted directed graphs, ANSC ≲^{sprs}_{m+n} Pos ANBC.
Proof. We are given an input graph G = (V, E) and we wish to compute the ANSC in G. Let M
be the largest edge weight in G.
(i) Constructing G′ : Now we construct a graph G′ from G. For each vertex x ∈ V , we replace x
by vertices xi and xo and we place a directed edge of weight 0 from xi to xo , and we also replace
each incoming edge to (outgoing edge from) x with an incoming edge to xi (outgoing edge from xo )
in G′ . We can observe that the shortest path from xo to xi in G′ corresponds to the shortest cycle
passing through x in G.
For each vertex x ∈ V , we add an additional vertex zx in G′ and we add an edge of weight 0 from
xo to zx and an edge of weight qx (where qx lies in the range from 0 to nM ) from zx to xi .
Figure 10 depicts the full construction of G′ for n = 3.
We observe that the shortest path from xo to xi for some vertex x ∈ V passes through zx only if
the shortest cycle passing through x in G has weight greater than qx .
(ii) We now show that for each vertex x ∈ V , Positive Betweenness Centrality of zx is true, i.e.,
BC(zx) > 0 iff the shortest cycle passing through x has weight greater than qx. It is easy to see that the only paths that pass through the vertex zx are from xo to xi (as the only outgoing edge from xi is to xo and the only incoming edge to xo is from xi).
Now if BC(zx ) > 0, it implies that the shortest path from xo to xi passes through zx and hence the
path from xo to xi corresponding to the shortest cycle passing through x has weight greater than
qx .
On the other hand, if the shortest cycle passing through x has weight greater than qx , then the
shortest path from xo to xi passes through zx . And hence BC(zx ) > 0.
Then, using (ii), we just need to find the maximum value of qx such that BC(zx) > 0 in order to compute the weight of the shortest cycle passing through x in the original graph. We can find such qx by performing a binary search over the range 0 to nM, computing Positive Betweenness Centrality for all nodes at each step. Thus we make O(log nM) calls to the Pos ANBC algorithm.
This completes the proof.
The cost of this reduction is O((m + n) · log nM ).
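A sketch of the simultaneous binary search follows (ours, not the authors' code). Here pos_anbc(q) is an assumed oracle that, given a dict assigning a weight q_x to the edge z_x → x_i for every vertex x, runs any Pos ANBC algorithm on G′ and reports for each x whether BC(z_x) > 0.

```python
# Simultaneous binary search of Lemma 9.2 (our sketch; `pos_anbc` is an assumed oracle).
def all_nodes_shortest_cycles(pos_anbc, vertices, nM):
    lo = {x: 0 for x in vertices}
    hi = {x: nM for x in vertices}
    while any(lo[x] < hi[x] for x in vertices):
        mid = {x: (lo[x] + hi[x] + 1) // 2 for x in vertices}
        positive = pos_anbc(mid)        # one Pos ANBC call answers all vertices
        for x in vertices:
            if lo[x] < hi[x]:
                if positive[x]:
                    lo[x] = mid[x]      # shortest cycle through x is heavier than mid[x]
                else:
                    hi[x] = mid[x] - 1  # shortest cycle through x weighs at most mid[x]
    # For integer weights, lo[x] is the largest q_x with BC(z_x) > 0, so the
    # shortest cycle through x weighs lo[x] + 1; a vertex lying on no cycle is
    # detected by one extra call with q_x = 0.
    return lo
```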
A Appendix
A.1 Definitions of Graph Problems
All Pairs Shortest Distances (APSD). Given a graph G = (V, E), the APSD problem is to
compute the shortest path distances for every pair of vertices in G.
APSP. This is the problem of computing the shortest path distances for every pair of vertices in
G together with a concise representation of the shortest paths, which in our case is an n × n matrix,
LastG , that contains, in position (x, y), the predecessor vertex of y on a shortest path from x to y.
Minimum Weight Cycle (MWC). Given a graph G = (V, E), the minimum weight cycle
problem is to find the weight of a minimum weight cycle in G.
All Nodes Shortest Cycles (ANSC). Given a graph G = (V, E), the ANSC problem is to find
the weight of a shortest cycle through each vertex in G.
Replacement Paths. Given a graph G = (V, E) and a pair of vertices s, t, the replacement paths
problem is to find, for each edge e lying on the shortest path from s to t, a shortest path from s to
t avoiding the edge e.
k-SiSP. Given a graph G = (V, E) and a pair of vertices s, t, the k-SiSP problem is to find the k
shortest simple paths from s to t: the i-th path must be different from the first (i − 1) paths and must
have weight greater than or equal to the weight of any of these (i − 1) paths.
k-SiSC. The corresponding cycle version of k-SiSP is known as k-SiSC, where the goal is to
compute the k shortest simple cycles through a given vertex x, such that the i-th cycle generated
is different from all previously generated (i − 1) cycles and has weight greater than or equal to the
weight of any of these (i − 1) cycles.
Radius. For a given graph G = (V, E), the Radius problem is to compute the value min_{x∈V} max_{y∈V} dG(x, y). A center of the graph is a vertex x that minimizes this value.
Diameter. For a given graph G = (V, E), the Diameter problem is to compute the value max_{x,y∈V} dG(x, y).
Eccentricities. For a given graph G = (V, E), the Eccentricities problem is to compute the value max_{y∈V} dG(x, y) for each vertex x ∈ V.
Betweenness Centrality (BC). For a given graph G = (V, E) and a node v ∈ V, the Betweenness Centrality of v, BC(v), is the value Σ_{s,t∈V, s,t≠v} σ_{s,t}(v)/σ_{s,t}, where σ_{s,t} is the number of shortest paths from s to t and σ_{s,t}(v) is the number of shortest paths from s to t passing through v.
As in [3] we assume that the graph has unique shortest paths, hence BC(v) is simply the number
of s, t pairs such that the shortest path from s to t passes through v.
All Nodes Betweenness Centrality (ANBC). The all-nodes version of Betweenness Centrality: determine BC(v) for all vertices.
Positive Betweenness Centrality (Pos BC). Given a graph G = (V, E) and a vertex v, the Pos BC problem is to determine if BC(v) > 0.
All Nodes Positive Betweenness Centrality (Pos ANBC). The all-nodes version of Positive Betweenness Centrality.
Reach Centrality (RC). For a given graph G = (V, E) and a node v ∈ V, the Reach Centrality of v, RC(v), is the value max_{s,t∈V : dG(s,v)+dG(v,t)=dG(s,t)} min(dG(s, v), dG(v, t)).
A.2 k-SiSC Algorithm: Undirected Graphs
This section deals with an application of our bit-sampling technique to obtaining a new near-linear
time algorithm for k-SiSC in undirected graphs (see definition below). Note that this problem is
not in the mn class and this result is included here as an application of the bit-sampling technique.
k-SiSC is the problem of finding k simple shortest cycles passing through a vertex v. Here the
output is a sequence of k simple cycles through v in non-decreasing order of weights such that the
i-th cycle in the output is different from the previous i − 1 cycles. The corresponding path version
of this problem is known as k-SiSP and is solvable in near linear time in undirected graphs [26].
We now use our bit-sampling technique (described in Section 3) to get a near-linear time algorithm for k-SiSC, which was not previously known. We obtain this k-SiSC algorithm by giving a
tilde-sparse Õ(m + n) time reduction from k-SiSC to k-SiSP. This reduction uses our bit-sampling
technique for sampling the edges incident to v and creates ⌈log n⌉ different graphs. Here we only
use index i of our bit-sampling method.
Lemma A.1. In undirected graphs, k-SiSC ≲^{sprs}_{m+n} k-SiSP.
Proof. Let the input be G = (V, E) and let x ∈ V be the vertex for which we need to compute
k-SiSC. Let N (x) be the neighbor-set of x. We create ⌈log n⌉ graphs Gi = (Vi , Ei ) such that
∀1 ≤ i ≤ ⌈log n⌉, Gi contains two additional vertices x0,i and x1,i (instead of the vertex x) and
∀y ∈ N (x), the edge (y, x0,i ) ∈ Ei if y’s i-th bit is 0, otherwise the edge (y, x1,i ) ∈ Ei . This is our
bit-sampling method.
The construction takes O((m+n)·log n) time and we observe that every cycle through x will appear
as a path from x0,i to x1,i in at least one of the Gi . Hence, the k-th shortest path in the collection of k-SiSPs from x0,i to x1,i over the ⌈log n⌉ graphs Gi , 1 ≤ i ≤ ⌈log n⌉ (after removing duplicates), corresponds to the k-th SiSC passing through x.
Using the undirected k-SiSP algorithm in [26] that runs in O(k · (m + n log n)), we obtain an
O(k log n · (m + n log n)) time algorithm for k-SiSC in undirected graphs.
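The bit-sampling construction can be sketched as follows (our illustration, not the authors' code). The input format, the vertex numbering 0..n−1 and the names chosen for the two copies of x are our assumptions.

```python
import math

# Bit-sampling of Lemma A.1 (our sketch).  G maps each vertex u in {0,...,n-1}
# to a list of (v, w) edges, with every undirected edge listed in both directions.
def bit_sampling_graphs(G, x, n):
    graphs = []
    for i in range(max(1, math.ceil(math.log2(max(n, 2))))):
        # Copy G without x, then add the two copies x0_i and x1_i of x.
        Gi = {u: list(nbrs) for u, nbrs in G.items() if u != x}
        src, dst = ('x0', i), ('x1', i)
        Gi[src], Gi[dst] = [], []
        for y, w in G[x]:
            if y == x:
                continue                                 # ignore self-loops at x
            copy = src if ((y >> i) & 1) == 0 else dst   # sample on y's i-th bit
            Gi[copy].append((y, w))
            Gi[y] = [e for e in Gi[y] if e[0] != x] + [(copy, w)]
        graphs.append((Gi, src, dst))
    # Every simple cycle through x uses two distinct neighbours of x, which
    # differ in some bit i, so it appears as a simple path from src to dst in
    # that Gi; merging the k-SiSP outputs of all graphs (after removing
    # duplicates) gives the k shortest simple cycles through x.
    return graphs
```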
References
[1] A. Abboud, A. Backurs, and V. V. Williams. If the current clique algorithms are optimal, so
is Valiant’s parser. In Proc. FOCS, pages 98–117. IEEE, 2015.
[2] A. Abboud, K. Censor-Hillel, and S. Khoury. Near-linear lower bounds for distributed distance
computations, even in sparse networks. In Proc. ISDC, pages 29–42. Springer, 2016.
[3] A. Abboud, F. Grandoni, and V. V. Williams. Subcubic equivalences between graph centrality
problems, APSP and diameter. In Proc. SODA, pages 1681–1697, 2015.
[4] A. Abboud, V. Vassilevska Williams, and H. Yu. Matching triangles and basing hardness on
an extremely popular conjecture. In Proc. STOC, pages 41–50. ACM, 2015.
[5] A. Abboud and V. V. Williams. Popular conjectures imply strong lower bounds for dynamic
problems. In Proc. FOCS, pages 434–443. IEEE, 2014.
[6] A. Abboud, V. V. Williams, and J. Wang. Approximation and fixed parameter subquadratic
algorithms for radius and diameter in sparse graphs. In Proc. SODA, pages 377–391. SIAM,
2016.
[7] U. Agarwal and V. Ramachandran. Finding k simple shortest paths and cycles. In Proc.
ISAAC, pages 8:1–8:12, 2016.
[8] N. Alon, Z. Galil, O. Margalit, and M. Naor. Witnesses for boolean matrix multiplication and
for shortest paths. In Proc. FOCS, pages 417–426. IEEE, 1992.
[9] N. Alon, R. Yuster, and U. Zwick. Finding and counting given length cycles. Algorithmica,
17(3):209–223, 1997.
[10] A. Amir, T. M. Chan, M. Lewenstein, and N. Lewenstein. On hardness of jumbled indexing.
In Proc. ICALP, pages 114–125. Springer, 2014.
[11] A. Backurs and P. Indyk. Edit distance cannot be computed in strongly subquadratic time
(unless SETH is false). In Proc. STOC, pages 51–58. ACM, 2015.
[12] U. Brandes. A faster algorithm for betweenness centrality. Jour. Math. Soc., 25(2):163–177,
2001.
[13] M. L. Carmosino, J. Gao, R. Impagliazzo, I. Mihajlin, R. Paturi, and S. Schneider. Nondeterministic extensions of the strong exponential time hypothesis and consequences for nonreducibility. In Proc. ITCS, pages 261–270. ACM, 2016.
[14] S. Chechik, T. D. Hansen, G. F. Italiano, V. Loitzenbauer, and N. Parotsidis. Faster algorithms
for computing maximal 2-connected subgraphs in sparse directed graphs. In Proc. SODA, pages
1900–1918. SIAM, 2017.
[15] A. Gajentaan and M. H. Overmars. On a class of O(n2 ) problems in computational geometry.
Computational Geometry, 5(3):165–185, 1995.
[16] Z. Galil and O. Margalit. All pairs shortest distances for graphs with small integer length
edges. Information and Computation, 134(2):103–139, 1997.
[17] J. Gao, R. Impagliazzo, A. Kolokolova, and R. Williams. Completeness for first-order properties
on sparse structures with algorithmic applications. In Proc. SODA, pages 2162–2181. SIAM,
2017.
[18] Z. Gotthilf and M. Lewenstein. Improved algorithms for the k simple shortest paths and the
replacement paths problems. Inf. Proc. Lett., 109(7):352–355, 2009.
[19] T. Hagerup. Improved shortest paths on the word RAM. In Proc. ICALP, pages 61–72. Springer,
2000.
[20] M. Henzinger, S. Krinninger, and V. Loitzenbauer. Finding 2-edge and 2-vertex strongly
connected components in quadratic time. In Proc. ICALP, pages 713–724. Springer, 2015.
[21] M. Henzinger, S. Krinninger, D. Nanongkai, and T. Saranurak. Unifying and strengthening
hardness for dynamic problems via the online matrix-vector multiplication conjecture. In Proc.
STOC, pages 21–30. ACM, 2015.
[22] R. Impagliazzo and R. Paturi. On the complexity of k-SAT. Jour. Comput. Sys. Sci., 62(2):367–
375, 2001.
[23] A. Itai and M. Rodeh. Finding a minimum circuit in a graph. SIAM Jour. Comput., 7(4):413–
423, 1978.
[24] Z. Jafargholi and E. Viola. 3SUM, 3XOR, triangles. Algorithmica, 74(1):326–343, 2016.
[25] R. M. Karp. Reducibility among combinatorial problems. In Complexity of computer computations, pages 85–103. Springer, 1972.
[26] N. Katoh, T. Ibaraki, and H. Mine. An efficient algorithm for k shortest simple paths. Networks,
12(4):411–427, 1982.
[27] T. Kopelowitz, S. Pettie, and E. Porat. Higher lower bounds from the 3SUM conjecture. In
Proc. SODA, pages 1272–1287. SIAM, 2016.
[28] A. Lingas and E.-M. Lundell. Efficient approximation algorithms for shortest cycles in undirected graphs. Inf. Proc. Lett., 109(10):493–498, 2009.
[29] J. B. Orlin and A. Sedeno-Noda. An O(nm) time algorithm for finding the min length directed
cycle in a graph. In Proc. SODA. SIAM, 2017.
[30] M. Patrascu. Towards polynomial lower bounds for dynamic problems. In Proc. STOC, pages
603–610. ACM, 2010.
[31] M. Pătraşcu and R. Williams. On the possibility of faster SAT algorithms. In Proc. SODA,
pages 1065–1075. SIAM, 2010.
[32] S. Pettie. A new approach to all-pairs shortest paths on real-weighted graphs. Theoretical
Computer Science, 312(1):47–74, 2004.
[33] S. Pettie and V. Ramachandran. A shortest path algorithm for real-weighted undirected graphs.
SIAM Jour. Comput., 34(6):1398–1431, 2005.
[34] L. Roditty and V. Vassilevska Williams. Fast approximation algorithms for the diameter and
radius of sparse graphs. In Proc. STOC, pages 515–524. ACM, 2013.
[35] L. Roditty and V. V. Williams. Minimum weight cycles and triangles: Equivalences and
algorithms. In Proc. FOCS, pages 180–189. IEEE, 2011.
[36] P. Sankowski and K. Węgrzycki. Improved distance queries and cycle counting by Frobenius
Normal Form. In Proc. STACS, pages 56:1–56:14, 2017.
[37] R. Seidel. On the all-pairs-shortest-path problem in unweighted undirected graphs. Jour.
Comput. Sys. Sci., 51(3):400–403, 1995.
[38] A. Shoshan and U. Zwick. All pairs shortest paths in undirected graphs with integer weights.
In Proc. FOCS, pages 605–614. IEEE, 1999.
[39] M. Thorup. Undirected single source shortest paths in linear time. In Proc. FOCS, pages
12–21. IEEE, 1997.
[40] V. Vassilevska Williams. Hardness of easy problems: Basing hardness on popular conjectures
such as the strong exponential time hypothesis (invited talk). In LIPIcs-Leibniz Intl. Proc.
Informatics, volume 43. Schloss Dagstuhl-Leibniz-Zentrum fuer Informatik, 2015.
[41] R. Williams. A new algorithm for optimal 2-constraint satisfaction and its implications. Theoretical Computer Science, 348(2):357–365, 2005.
[42] V. V. Williams and R. Williams. Subcubic equivalences between path, matrix and triangle
problems. In Proc. FOCS, pages 645–654. IEEE, 2010.
[43] V. V. Williams and R. Williams. Finding, minimizing, and counting weighted subgraphs. SIAM
Jour. Comp., 42(3):831–854, 2013.
[44] R. Yuster. A shortest cycle for each vertex of a graph. Inf. Proc. Lett., 111(21):1057–1061,
2011.
[45] U. Zwick. All pairs shortest paths using bridging sets and rectangular matrix multiplication.
JACM, 49(3):289–317, 2002.
| 8 |
An Emptiness Algorithm for Regular Types with
Set Operators
arXiv:cs/9811015v1 [cs.LO] 11 Nov 1998
Lunjin Lu and John G. Cleary
Department of Computer Science
University of Waikato
Hamilton, New Zealand
Phone: +64-838-4627/4378
{lunjin,jcleary}@cs.waikato.ac.nz
Abstract. An algorithm to decide the emptiness of a regular type expression with set operators given a set of parameterised type definitions
is presented. The algorithm can also be used to decide the equivalence
of two regular type expressions and the inclusion of one regular type expression in another. The algorithm strictly generalises previous work in
that tuple distributivity is not assumed and set operators are permitted
in type expressions.
Keywords: type, emptiness, prescriptive type
1 Introduction
Types play an important role in programming languages [6]. They
make programs easier to understand and help detect errors. Types
have been introduced into logic programming in the forms of type
checking and inference [5,9,12,26,32] or type analysis [25,33,17,19,13,22,7,23]
or typed languages [16,21,28,31]. Recent logic programming systems
allow the programmer to declare types for predicates and type errors
are then detected either at compile time or at run time. The reader
is referred to [27] for more details on types in logic programming.
A type is a possibly infinite set of ground terms with a finite
representation. An integral part of any type system is its type language that specifies which sets of ground terms are types. To be
useful, types should be closed under intersection, union and complement operations. The decision problems such as the emptiness of
a type, inclusion of a type in another and equivalence of two types
should be decidable. Regular term languages [14,8], called regular
types, satisfy these conditions and have been widely used as
types [29,25,33,9,17,21,28,31,12,32,19,13,22,7,23].
Most type systems use tuple distributive regular types which are
strictly less powerful than regular types [29,25,33,17,21,28,31,12,32,19,13,22,7,23].
Tuple distributive regular types are regular types closed under tuple
distributive closure. Intuitively, the tuple distributive closure of a set
of terms is the set of all terms constructed recursively by permuting
each argument position among all terms that have the same function
symbol [32].
This paper gives an algorithm to decide if a type expression denotes an empty set of terms. The correctness of the algorithm is
proved and its complexity is analysed. The algorithm works on prescriptive types [28]. By prescriptive types, we mean that the meaning of a type is determined by a given set of type definitions. We
allow parametric and overloading polymorphism in type definitions.
Prescriptive types are useful both in compilers and other program
manipulation tools such as debuggers because they are easy to understand for programmers. Type expressions may contain set operators
with their usual interpretations. Thus, the algorithm can be used to
decide the equivalence of two type expressions and the inclusion of
one type expression in another. The introduction of set operators
into type expressions allows concise and intuitive representation of
regular types.
Though using regular term languages as types allows us to make
use of theoretical results in the field of tree automata [14], algorithms
for testing the emptiness of tree automata cannot be applied directly
as type definitions may be parameterised. For instance, in order to
decide the emptiness of a type expression given a set of type definitions, it would be necessary to construct a tree automaton from
the type expression and the set of type definitions before an algorithm for determining the emptiness of a tree automaton can be
used. When type definitions are parameterised, this would make it
necessary to construct a different automaton each time the emptiness of a type expression is tested. Thus, an algorithm that works
directly with type definitions is desirable as it avoids this repeated
construction of automata.
Attempts have been made in the past to find algorithms for regular types [25,12,32,33,31,10,9]. To our knowledge, Dart and Zobel’s
work [10] is the only one to present decision algorithms for emptiness
and inclusion problems for prescriptive regular types without the tuple distributive restriction. Unfortunately, their decision algorithm
for the inclusion problem is incorrect for regular types in general.
See [24] for a counterexample. Moreover, the type language of Dart
and Zobel is less expressive than that considered in this paper since
it doesn’t allow set operators and parameterised type definitions.
Set constraint solving has also been used in type checking and
type inference [3,2,20,18,11]. However, set constraint solving methods are intended to infer descriptive types [28] rather than for testing
emptiness of prescriptive types [28]. Therefore, they are useful in different settings from the algorithm presented in this paper. Moreover,
algorithms proposed for set constraint solving [3,4,2,1] are not applicable to the emptiness problem we considered in this paper as they
don’t take type definitions into account.
The remainder of this paper is organised as follows. Section 2
describes our language of type expressions and type definitions. Section 3 presents our algorithm for testing if a type expression denotes
an empty set of terms. Section 4 addresses the correctness of the algorithm.
Section 5 presents the complexity of the algorithm and section 6
concludes the paper. Some lemmas are presented in the appendix.
2 Type Language
Let Σ be a fixed ranked alphabet. Each symbol in Σ is called a
function symbol and has a fixed arity. It is assumed that Σ contains
at least one constant that is a function symbol of arity 0. The arity
of a symbol f is denoted as arity(f ). Σ may be considered as the set
of function symbols in a program. Let T (Φ) be the set of all terms
over Φ. T (Σ) is the set of all possible values that a program variable
can take. We shall use regular term languages over Σ as types.
A type is represented by a ground term constructed from another
ranked alphabet Π and {⊓, ⊔, ∼, 1, 0}, called type constructors. It is
assumed that (Π ∪ {⊓, ⊔, ∼, 1, 0}) ∩ Σ = ∅. Thus, a type expression
is a term in T (Π ∪ {⊓, ⊔, ∼, 1, 0}). The denotations of type constructors in Π are determined by type definitions whilst ⊓, ⊔, ∼, 1
and 0 have fixed denotations that will be given soon.
Several equivalent formalisms such as tree automata [14,8], regular term grammars [14,10,8] and regular unary logic programs [32]
have been used to define regular types. We define types by type
rules. A type rule is a production rule of the form c(ζ1 , · · · , ζm ) → τ
where c ∈ Π, ζ1 , · · · , ζm are different type parameters and τ ∈
T (Σ ∪ Π ∪ Ξm ) where Ξm = {ζ1 , · · · , ζm }. The restriction that every
type parameter in the righthand side of a type rule must occur in
the lefthand side of the type rule is often referred to as type preserving [30] and has been used in all the type definition formalisms.
Note that overloading of function symbols is permitted as a function
symbol can appear in the righthand sides of many type rules. We
denote by ∆ the set of all type rules and define Ξ ≝ ⋃_{c∈Π} Ξ_{arity(c)}. ⟨Π, Σ, ∆⟩ is a restricted form of context-free term grammar.
Example 1. Let Σ = {0, s(), nil, cons(, )} and Π = {Nat, Even, List()}.
∆ defines natural numbers, even numbers, and lists where
∆ = { Nat → 0 | s(Nat),  Even → 0 | s(s(Even)),  List(ζ) → nil | cons(ζ, List(ζ)) }
where, for instance, Nat → 0 | s(Nat) is an abbreviation of two rules
Nat → 0 and Nat → s(Nat).
∆ is called simplified if τ in each production rule c(ζ1 , · · · , ζm ) → τ
is of the form f (τ1 , · · · , τn ) such that each τj , for 1 ≤ j ≤ n, is either
in Ξm or of the form d(ζ1′ , · · · , ζk′ ) and ζ1′ , · · · , ζk′ ∈ Ξm . We shall
assume that ∆ is simplified. There is no loss of generality to use
a simplified set of type rules since every set of type rules can be
simplified by introducing new type constructors and rewriting and
adding type rules in the spirit of [10].
Example 2. The following is the simplified version of the set of type
rules in example 1. Σ = {0, s(), nil, cons(, )}, Π = {Nat, Even, Odd, List()}
and
∆ = { Nat → 0 | s(Nat),  Even → 0 | s(Odd),  Odd → s(Even),  List(ζ) → nil | cons(ζ, List(ζ)) }
A type valuation φ is a mapping from Ξ to T (Π ∪{⊓, ⊔, ∼, 1, 0}).
The instance φ(R) of a production rule R under φ is obtained by replacing each occurrence of each type parameter ζ in R with φ(ζ).
E.g., List(Nat⊓(∼Even)) → cons(Nat⊓(∼Even), List(Nat⊓(∼Even)))
is the instance of List(ζ) → cons(ζ, List(ζ)) under a type valuation
that maps ζ to Nat⊓(∼Even). Let
ground(∆) ≝ {φ(R) | R ∈ ∆ ∧ φ ∈ (Ξ → T(Π ∪ {⊓, ⊔, ∼, 1, 0}))} ∪ {1 → f(1, · · · , 1) | f ∈ Σ}
ground(∆) is the set of all ground instances of grammar rules in ∆
plus rules of the form 1 → f (1, · · · , 1) for every f ∈ Σ.
Given a set ∆ of type definitions, the type denoted by a type
expression is determined by the following meaning function.
[[1]]∆ ≝ T(Σ)
[[0]]∆ ≝ ∅
[[E1⊓E2]]∆ ≝ [[E1]]∆ ∩ [[E2]]∆
[[E1⊔E2]]∆ ≝ [[E1]]∆ ∪ [[E2]]∆
[[∼E]]∆ ≝ T(Σ) − [[E]]∆
[[ω]]∆ ≝ ⋃_{(ω→f(E1,···,En))∈ground(∆)} {f(t1, · · · , tn) | ∀1 ≤ i ≤ n. ti ∈ [[Ei]]∆}
[[·]]∆ gives fixed denotations to ⊓, ⊔, ∼, 1 and 0. ⊓, ⊔ and ∼ are interpreted by [[·]]∆ as set intersection, set union and set complement with respect to T(Σ). 1 denotes T(Σ) and 0 the empty set.
Example 3. Let ∆ be that in example 2. We have
[[Nat]]∆ = {0, s(0), s(s(0)), · · ·}
[[Even]]∆ = {0, s(s(0)), s(s(s(s(0)))), · · ·}
[[Nat⊓∼Even]]∆ = {s(0), s(s(s(0))), s(s(s(s(s(0))))), · · ·}
[[List(Nat⊓∼Even)]]∆ = {cons(s(0), nil), cons(s(s(s(0))), nil), · · ·}
Lemma 5 in the appendix states that every type expression denotes a regular term language, that is, a regular type.
We extend [[·]]∆ to sequences θ of type expressions as follows.
[[ǫ]]∆ ≝ {ǫ}
[[⟨E⟩ • θ′]]∆ ≝ [[E]]∆ × [[θ′]]∆
where ǫ is the empty sequence, • is the infix sequence concatenation operator, ⟨E⟩ is the sequence consisting of the type expression E and × is the Cartesian product operator. As a sequence of type expressions, ǫ can be thought of as consisting of zero instances of 1. We use Λ to denote the sequence consisting of zero instances of 0 and define [[Λ]]∆ = ∅.
We shall call a sequence of type expressions simply a sequence. A sequence expression is an expression consisting of sequences of the same length and ⊓, ⊔ and ∼. The length of the sequences in a sequence expression θ is called the dimension of θ and is denoted by ‖θ‖. Let θ, θ1 and θ2 be sequence expressions of the same length.
[[θ1⊓θ2]]∆ ≝ [[θ1]]∆ ∩ [[θ2]]∆
[[θ1⊔θ2]]∆ ≝ [[θ1]]∆ ∪ [[θ2]]∆
[[∼θ]]∆ ≝ T(Σ) × · · · × T(Σ) − [[θ]]∆, with ‖θ‖ copies of T(Σ) in the product
A conjunctive sequence expression is a sequence expression of the form γ1⊓ · · · ⊓γm where the γi, for 1 ≤ i ≤ m, are sequences.
3 Emptiness Algorithm
This section presents an algorithm that decides if a type expression
denotes the empty set with respect to a given set of type definitions.
The algorithm can also be used to decide if (the denotation of) one
type expression is included in (the denotation of) another because
E1 is included in E2 iff E1 ⊓∼E2 is empty.
We first introduce some terminology and notations. A type atom
is a type expression of which the principal type constructor is not a
set operator. A type literal is either a type atom or the complement
of a type atom. A conjunctive type expression C is of the form ⊓i∈I li
with li being a type literal. Let α be a type atom. F (α) defined below
is the set of the principal function symbols of the terms in [[α]]∆.
F(α) ≝ {f ∈ Σ | ∃ζ1 · · · ζk. ((α → f(ζ1, · · · , ζk)) ∈ ground(∆))}
Let f ∈ Σ. Define
A^f_α ≝ {⟨α1, · · · , αk⟩ | (α → f(α1, · · · , αk)) ∈ ground(∆)}
We have [[A^f_α]]∆ = {⟨t1, · · · , tk⟩ | f(t1, · · · , tk) ∈ [[α]]∆}. Both F(α) and A^f_α are finite even though ground(∆) is usually not finite.
The algorithm repeatedly reduces the emptiness problem of a type
expression to the emptiness problems of sequence expressions and
then reduces the emptiness problem of a sequence expression to the
emptiness problems of type expressions. Tabulation is used to break
down any possible loop and to ensure termination. Let O be a type expression or a sequence expression. Define empty(O) ≝ ([[O]]∆ = ∅).
3.1 Two Reduction Rules
We shall first sketch the two reduction rules and then add tabulation
to form an algorithm. Initially the algorithm is to decide the validity
of a formula of the form
empty(E)
(1)
where E is a type expression.
Reduction Rule One. The first reduction rule rewrites a formula of the form (1) into a conjunction of formulae of the following form:
empty(σ)    (2)
where σ is a sequence expression in which ∼ is applied to type expressions but not to any sequence expression.
It is obvious that a type expression has a unique (modulo equivalence of denotation) disjunctive normal form. Let DNF(E) be the disjunctive normal form of E. empty(E) can be written as ∧_{C∈DNF(E)} empty(C). Each C is a conjunctive type expression. We assume that C contains at least one positive type literal. This doesn't cause any loss of generality as [[1⊓C]]∆ = [[C]]∆ for any conjunctive type expression C. We also assume that C doesn't contain repeated occurrences of the same type literal.
Let C = ⊓_{1≤i≤m} ωi ⊓ ⊓_{1≤j≤n} ∼τj where the ωi and τj are type atoms. The set of positive type literals in C is denoted as pos(C) ≝ {ωi | 1 ≤ i ≤ m} while the set of complemented type atoms is denoted as neg(C) ≝ {τj | 1 ≤ j ≤ n}. lit(C) denotes the set of literals occurring in C. By lemma 3 in the appendix, empty(C) is equivalent to
∀f ∈ ∩_{α∈pos(C)} F(α). empty((⊓_{ω∈pos(C)} (⊔A^f_ω)) ⊓ (⊓_{τ∈neg(C)} ∼(⊔A^f_τ)))    (3)
The intuition behind the equivalence is as follows. [[C]]∆ is empty iff, for every function symbol f, the set of the sequences ⟨t1, · · · , tk⟩ of terms such that f(t1, · · · , tk) ∈ [[C]]∆ is empty. Only the function symbols in ∩_{α∈pos(C)} F(α) need to be considered.
We note the following two special cases of the formula (3).
(a) If ∩_{α∈pos(C)} F(α) = ∅ then the formula (3) is true because ∧∅ = true. In particular, F(0) = ∅. Thus, if 0 ∈ pos(C) then ∩_{α∈pos(C)} F(α) = ∅ and hence the formula (3) is true.
(b) If A^f_τ = ∅ for some τ ∈ neg(C) then ⊔A^f_τ = ⟨0, · · · , 0⟩ and ∼(⊔A^f_τ) = ⟨1, · · · , 1⟩. Thus, τ has no effect on the subformula for f when A^f_τ = ∅.
In order to get rid of complement operators over sequence subexpressions, the complement operator in ∼(⊔A^f_τ) is pushed inwards by the function push defined in the following.
push(∼(⊔_{i∈I} γi)) ≝ ⊓_{i∈I} push(∼γi)
push(∼⟨E1, E2, · · · , Ek⟩) ≝ ⊔_{1≤l≤k} ⟨1, · · · , 1, ∼El, 1, · · · , 1⟩ for k ≥ 1, where ∼El occupies the l-th position (preceded by l−1 and followed by k−l copies of 1)
push(∼ǫ) ≝ Λ
It follows from De Morgan's law and the definition of [[·]]∆ that [[push(∼(⊔A^f_τ))]]∆ = [[∼(⊔A^f_τ)]]∆. Substituting push(∼(⊔A^f_τ)) for ∼(⊔A^f_τ) in the formula (3) gives rise to a formula of the form (2).
Reduction Rule Two. The second reduction rule rewrites a formula of the form (2) into a conjunction of disjunctions of formulae of the form (1). Formula (2) is first written as a conjunction of formulae of the form
empty(Γ)
where Γ is a conjunctive sequence expression.
In the case ‖Γ‖ = 0, by lemma 4 in the appendix, empty(Γ) can be decided without further reduction. If Λ ∈ Γ then empty(Γ) is true because [[Λ]]∆ = ∅. Otherwise, empty(Γ) is false because [[Γ]]∆ = {ǫ}.
In the case ‖Γ‖ ≠ 0, empty(Γ) is equivalent to
∨_{1≤j≤‖Γ‖} empty(Γ↓j)
where, letting Γ = γ1⊓ · · · ⊓γk, Γ↓j ≝ ⊓_{1≤i≤k} γ_i^j with γ_i^j being the j-th component of γi. Note that Γ↓j is a type expression and empty(Γ↓j) is of the form (1).
3.2 Algorithm
The two reduction rules in the previous section form the core of the
algorithm. However, they alone cannot be used as an algorithm as
a formula empty(E) may reduce to a formula containing empty(E)
as a sub-formula, leading to nontermination. Suppose Σ = {f (), a},
Π = {Null} and ∆ = {Null → f (Null)}. Clearly, empty(Null) is
true. However, by the first reduction rule, empty(Null) reduces to
empty(⟨Null⟩) which then reduces to empty(Null) by the second
reduction rule. This process will not terminate.
The solution, inspired by [10], is to remember in a table a particular kind of formulae of which truth is being tested. When a formula
of that kind is tested, the table is first looked up. If the formula is
implied by any formula in the table, then it is determined as true.
Otherwise, the formula is added into the table and then reduced by
a reduction rule.
The emptiness algorithm presented below remembers every conjunctive type expression of which emptiness is being tested. Thus
the table is a set of conjunctive type expressions. Let C1 and C2 be conjunctive type expressions. We define (C1 ≽ C2) ≝ (lit(C1) ⊇ lit(C2)). Since Ci = ⊓_{l∈lit(Ci)} l, C1 ≽ C2 implies [[C1]]∆ ⊆ [[C2]]∆ and hence (C1 ≽ C2) ∧ empty(C2) implies empty(C1).
Adding tabulation to the two reduction rules, we obtain the following algorithm for testing the emptiness of prescriptive regular types. Let B^f_C ≝ (⊓_{ω∈pos(C)} (⊔A^f_ω)) ⊓ (⊓_{τ∈neg(C)} push(∼(⊔A^f_τ))).

etype(E) ≝ etype(E, ∅)    (4)

etype(E, Ψ) ≝ ∀C ∈ DNF(E). etype conj(C, Ψ)    (5)

etype conj(C, Ψ) ≝
  true, if pos(C) ∩ neg(C) ≠ ∅,
  true, if ∃C′ ∈ Ψ. C ≽ C′,
  ∀f ∈ ∩_{α∈pos(C)} F(α). eseq(B^f_C, Ψ ∪ {C}), otherwise.    (6)

eseq(Θ, Ψ) ≝ ∀Γ ∈ DNF(Θ). eseq conj(Γ, Ψ)    (7)

eseq conj(Γ, Ψ) ≝
  true, if ‖Γ‖ = 0 ∧ Λ ∈ Γ,
  false, if ‖Γ‖ = 0 ∧ Λ ∉ Γ,
  ∃1 ≤ j ≤ ‖Γ‖. etype(Γ↓j, Ψ), if ‖Γ‖ ≠ 0.    (8)
Equation 4 initialises the table to the empty set. Equations 5
and 6 implement the first reduction rule while equations 7 and 8 implement the second reduction rule. etype(, ) and etype conj(, ) test
the emptiness of an arbitrary type expression and that of a conjunctive type expression respectively. eseq(, ) tests emptiness of a
sequence expression consisting of sequences and ⊓ and ⊔ operators
while eseq conj(, ) tests the emptiness of a conjunctive sequence expression. The expression of which emptiness is to be tested is passed
as the first argument to these functions. The table is passed as the
second argument. It is used in etype conj(, ) to detect a conjunctive
type expression of which emptiness is implied by the emptiness of
a tabled conjunctive type expression. As we shall show later, this
ensures the termination of the algorithm. Each of the four binary
functions returns true iff the emptiness of the first argument is implied by the second argument and the set of type definitions.
Tabling any other kind of expressions such as arbitrary type expressions can also ensure termination. However, tabling conjunctive
type expressions makes it easier to detect the implication of the
emptiness of one expression by that of another because lit(C) can
be easily computed given a conjunctive type expression C. In an implementation, a conjunctive type expression C in the table can be
represented as lit(C).
The first two definitions for etype conj(C, Ψ) in equation 6 terminate the algorithm when the emptiness of C can be decided by
C and Ψ without using type definitions. The first definition also excludes from the table any conjunctive type expression that contains
both a type atom and its complement.
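To make equations (4)–(8) concrete, the following is a small executable sketch of ours, not the authors' implementation: the tuple encoding of type expressions, the RULES/SIGMA dictionaries and all function names are our own choices, and the rules shown are those of example 2. Arguments of type atoms are written as tuples so that literals can be tabled in sets.

```python
from itertools import product

# Our encoding (not the paper's): a type expression is ('atom', c, (args...)),
# ('and', e1, e2), ('or', e1, e2), ('not', e), ('one',) or ('zero',).
# RULES maps a type constructor to (parameters, alternatives f(t1,...,tk)),
# where each ti is a parameter name or an ('atom', d, [parameters]) template,
# i.e. a simplified Delta; SIGMA gives the arity of every function symbol.
SIGMA = {'0': 0, 's': 1, 'nil': 0, 'cons': 2}
RULES = {
    'Nat':  ([], [('0', []), ('s', [('atom', 'Nat', [])])]),
    'Even': ([], [('0', []), ('s', [('atom', 'Odd', [])])]),
    'Odd':  ([], [('s', [('atom', 'Even', [])])]),
    'List': (['z'], [('nil', []), ('cons', ['z', ('atom', 'List', ['z'])])]),
}

def subst(t, env):                       # instantiate a rule template
    if isinstance(t, str):
        return env[t]
    return ('atom', t[1], tuple(subst(a, env) for a in t[2]))

def F(atom):                             # principal function symbols of [[atom]]
    if atom == ('one',):
        return set(SIGMA)
    return {f for f, _ in RULES[atom[1]][1]}

def A(atom, f):                          # argument sequences A^f_atom
    if atom == ('one',):
        return [tuple(('one',) for _ in range(SIGMA[f]))]
    params, alts = RULES[atom[1]]
    env = dict(zip(params, atom[2]))
    return [tuple(subst(a, env) for a in args) for g, args in alts if g == f]

def dnf(e, pos=True):                    # list of conjunctions of (sign, atom)
    if e[0] == 'not':
        return dnf(e[1], not pos)
    if e[0] in ('and', 'or'):
        l, r = dnf(e[1], pos), dnf(e[2], pos)
        return [a + b for a in l for b in r] if (e[0] == 'and') == pos else l + r
    if e[0] == 'one':
        return [[]] if pos else []
    if e[0] == 'zero':
        return [] if pos else [[]]
    return [[(pos, e)]]

def empty_conj(lits, table):             # etype conj, equation (6)
    pos = [a for s, a in lits if s] or [('one',)]
    neg = [a for s, a in lits if not s]
    if set(pos) & set(neg):
        return True
    key = frozenset(lits)
    if any(key >= seen for seen in table):        # tabulation: key covers a tabled C'
        return True
    table = table | {key}
    for f in set.intersection(*(F(a) for a in pos)):
        k = SIGMA[f]
        blocks = [A(w, f) for w in pos]                   # join over A^f_w
        for t in neg:
            for seq in A(t, f):                           # push(~seq), one block per seq
                blocks.append([('neg', j, seq[j]) for j in range(k)] or [None])
        for choice in product(*blocks):                   # a DNF conjunct Gamma
            if None in choice:
                continue                                  # contains Lambda: empty
            def col(j):                                   # Gamma restricted to position j
                e = ('one',)
                for c in choice:
                    if c[0] == 'neg':
                        if c[1] == j:
                            e = ('and', e, ('not', c[2]))
                    else:
                        e = ('and', e, c[j])
                return e
            if k == 0 or not any(empty_expr(col(j), table) for j in range(k)):
                return False                              # a non-empty conjunct found
    return True

def empty_expr(e, table=frozenset()):    # etype, equations (4) and (5)
    return all(empty_conj(c, table) for c in dnf(e))

nat, even, odd = ('atom', 'Nat', ()), ('atom', 'Even', ()), ('atom', 'Odd', ())
print(empty_expr(('and', nat, ('and', ('not', even), ('not', odd)))))   # True
print(empty_expr(('atom', 'List', (('and', even, ('not', nat)),))))     # False: contains nil
```

The two calls at the end replay examples 4 and 5 below: the first conjunction is empty while List(Even⊓∼Nat) is not, since it contains nil.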
3.3 Examples
We now illustrate the algorithm with some examples.
Example 4. Let type definitions be given as in example 2. The tree
in figure 1 depicts the evaluation of etype(Nat⊓∼Even⊓∼Odd) by
the algorithm. Nodes are labeled with function calls. We will identify a node with its label. Arcs from a node to its children are labeled with the number of the equation that is used to evaluate the node. Abbreviations used in the labels are defined in the legend of the figure. Though [[A]]∆ = [[B]]∆, A and B are syntactically different type expressions. The evaluation returns true, verifying [[Nat⊓∼Even⊓∼Odd]]∆ = ∅. Consider etype conj(B, {A}). We have B ≽ A as lit(A) = lit(B). Thus, by equation 6, etype conj(B, {A}) = true.
Fig. 1. Evaluation of etype(Nat⊓∼Even⊓∼Odd). Legend: A = Nat⊓∼Even⊓∼Odd, B = Nat⊓∼Odd⊓∼Even, C = ⟨Nat⟩⊓⟨∼Odd⟩⊓⟨∼Even⟩.
Example 5. Let type definitions be given as in example 2. The tree
in figure 2 depicts the evaluation of etype(List(Even⊓∼Nat)) by the
algorithm. The evaluation returns false, verifying [[List(Even⊓∼Nat)]]∆ ≠ ∅. Indeed, [[List(Even⊓∼Nat)]]∆ = {nil}. The rightmost node is not evaluated as its sibling returns false, which is enough to establish the falsity of their parent node.
Fig. 2. Evaluation of etype(List(Even⊓∼Nat)). Legend: A = List(Even⊓∼Nat), B = Even⊓∼Nat.
Example 6. The following is a simplified version of the type definitions that is used in [24] to show the incorrectness of the algorithm
by Dart and Zobel for testing inclusion of one regular type in another [10].
Let Π = {α, β, θ, σ, ω, ζ, η}, Σ = {a, b, g(), h(, )} and
∆ = { α → g(ω), β → g(θ) | g(σ), θ → a | h(θ, ζ), σ → b | h(σ, η), ω → a | b | h(ω, ζ) | h(ω, η), ζ → a, η → b }
Let t = g(h(h(a, b), a)). t ∈ [[α]]∆ and t ∉ [[β]]∆; see example 3 in [24] for more details. So, [[α]]∆ ⊈ [[β]]∆. This is verified by our algorithm as follows. Let Ψ1 = {α⊓∼β} and Ψ2 = Ψ1 ∪ {ω⊓∼θ⊓∼σ}. By applying equations 4, 5, 6, 7, 8 and 5 in that order, we have etype(α⊓∼β) = etype conj(ω⊓∼θ⊓∼σ, Ψ1). By equation 6, we have
etype(α⊓∼β) = eseq(ǫ⊓Λ⊓ǫ, Ψ2) ∧ eseq(ǫ⊓ǫ⊓Λ, Ψ2) ∧ eseq(Θ, Ψ2)
where Θ = (⟨ω, ζ⟩⊔⟨ω, η⟩)⊓(⟨∼θ, 1⟩⊔⟨1, ∼ζ⟩)⊓(⟨∼σ, 1⟩⊔⟨1, ∼η⟩). We choose not to simplify expressions such as ǫ⊓ǫ⊓Λ so as to make the example easy to follow. By applying equations 7 and 8, we have both eseq(ǫ⊓Λ⊓ǫ, Ψ2) = true and eseq(ǫ⊓ǫ⊓Λ, Ψ2) = true. So, etype(α⊓∼β) = eseq(Θ, Ψ2). Let Γ = ⟨ω, ζ⟩⊓⟨∼θ, 1⟩⊓⟨1, ∼η⟩. To show etype(α⊓∼β) = false, it suffices to show eseq conj(Γ, Ψ2) = false by equation 7 because Γ ∈ DNF(Θ) and etype(α⊓∼β) = eseq(Θ, Ψ2).
Figure 3 depicts the evaluation of eseq conj(Γ, Ψ2). The node that
is linked to its parent by a dashed line is not evaluated because one of
its siblings returns false, which is sufficient to establish the falsity of
its parent. It is clear from the figure that eseq conj(Γ, Ψ2) = false and hence etype(α⊓∼β) = false.
4 Correctness
This section addresses the correctness of the algorithm. We shall
first show that tabulation ensures the termination of the algorithm
because the table can only be of finite size. We then establish the
partial correctness of the algorithm.
Fig. 3. Evaluation of eseq conj(Γ, Ψ2). Legend: Θ1 = (⟨ω, ζ⟩⊔⟨ω, η⟩)⊓(⟨∼θ, 1⟩⊔⟨1, ∼ζ⟩), Ψ3 = Ψ2 ∪ {ω⊓∼θ}, Ψ4 = Ψ2 ∪ {ζ⊓∼η}, Γ = ⟨ω, ζ⟩⊓⟨∼θ, 1⟩⊓⟨1, ∼η⟩.
4.1 Termination
Given a type expression E, a top-level type atom in E is a type atom in E that is not a proper sub-term of any type atom in E. The set of top-level type atoms in E is denoted by TLA(E). For instance, letting E = ∼List(Nat)⊔Tree(Nat⊓∼Even), TLA(E) = {List(Nat), Tree(Nat⊓∼Even)}. We extend TLA(·) to sequences by TLA(⟨E1, E2, · · · , Ek⟩) ≝ ⋃_{1≤i≤k} TLA(Ei).
Given a type expression E0 , the evaluation tree for etype(E0 ) contains nodes of the form etype(E, Ψ ), etype conj(C, Ψ ), eseq(Θ, Ψ )
and eseq conj(Γ, Ψ ) in addition to the root that is etype(E0 ). Only
nodes of the form etype conj(C, Ψ ) add conjunctive type expressions to the table. Other forms of nodes only pass the table around.
Therefore, it suffices to show that the type atoms occurring in the
first argument of the nodes are from a finite set because any conjunctive type expression added into the table is the first argument
of a node of the form etype conj(C, Ψ ).
The set RTA(E0 ) of type atoms relevant to a type expression E0
is the smallest set of type atoms satisfying
– TLA(E0 ) ⊆ RTA(E0 ), and
– if τ is in RTA(E0 ) and τ → f (τ1 , τ2 , · · · , τk ) is in ground(∆) then
TLA(τi ) ⊆ RTA(E0 ) for 1 ≤ i ≤ k.
The height of τi is no more than that of τ for any τ → f (τ1 , τ2 , · · · , τk )
in ground(∆). Thus, the height of any type atom in RTA(E0 ) is
finite. There are only a finite number of type constructors in Π.
Thus, RTA(E0 ) is of finite size. It follows by examining the algorithm
that type atoms in the first argument of the nodes in the evaluation
tree for etype(E0 ) are from RTA(E0 ) which is finite. Therefore, the
algorithm terminates.
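The fixpoint computation of RTA(E0) can be sketched as follows (ours, not the authors' code). It reuses the tuple encoding of type expressions and the RULES format from the sketch in Section 3.2, both of which are our own conventions; the two rules shown here are just a minimal stand-in.

```python
# Fixpoint computation of RTA(E0) (our sketch).
RULES = {
    'Nat':  ([], [('0', []), ('s', [('atom', 'Nat', [])])]),
    'List': (['z'], [('nil', []), ('cons', ['z', ('atom', 'List', ['z'])])]),
}

def subst(t, env):
    return env[t] if isinstance(t, str) else ('atom', t[1], tuple(subst(a, env) for a in t[2]))

def TLA(e):                     # top-level type atoms of a type expression
    if e[0] == 'atom':
        return {e}
    if e[0] in ('and', 'or'):
        return TLA(e[1]) | TLA(e[2])
    if e[0] == 'not':
        return TLA(e[1])
    return set()                # ('one',) and ('zero',) contain no atoms

def RTA(e0):                    # smallest set closed under the two conditions above
    rta, todo = set(), list(TLA(e0))
    while todo:
        atom = todo.pop()
        if atom in rta:
            continue
        rta.add(atom)
        params, alts = RULES[atom[1]]
        env = dict(zip(params, atom[2]))
        for _f, args in alts:   # every rule instance atom -> f(t1,...,tk)
            for arg in args:
                todo.extend(TLA(subst(arg, env)))
    return rta

nat = ('atom', 'Nat', ())
print(RTA(('atom', 'List', (('not', nat),))))   # {List(~Nat), Nat}
```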
4.2 Partial Correctness
The partial correctness of the algorithm is established by showing
etype(E0) = true iff empty(E0). Let Ψ be a set of conjunctive type expressions. Define ρΨ ≝ ∧_{C∈Ψ} empty(C). The following two lemmas
form the core of our proof of the partial correctness of the algorithm.
Lemma 1. Let Ψ be a set of conjunctive type expressions, E a type
expression, C a conjunctive type expression, Θ a sequence expression
and Γ a conjunctive sequence expression.
(a) If ρΨ |= empty(C) then etype conj(C, Ψ) = true, and
(b) If ρΨ |= empty(E) then etype(E, Ψ) = true, and
(c) If ρΨ |= empty(Γ) then eseq conj(Γ, Ψ) = true, and
(d) If ρΨ |= empty(Θ) then eseq(Θ, Ψ) = true.
Proof. The proof is done by induction on the size of the complement
of Ψ with respect to the set of all possible conjunctive type expressions
in which type atoms are from RTA(E0 ) where E0 is a type expression.
Basis. The complement is empty. Ψ contains all possible conjunctive type expressions in which type atoms are from RTA(E0 ). We
have C ∈ Ψ and hence etype conj(C, Ψ ) = true by equation 6. Therefore, (a) holds. (b) follows from (a) and equation 5. (c) follows from
(b), equation 8 and lemma 4 in the appendix, and (d) follows from
(c) and equation 7.
Induction. By lemma 3 in the appendix, ρΨ |= empty(C) implies ρΨ |= empty(B^f_C) for any f ∈ ∩_{α∈pos(C)} F(α). Thus, ρΨ∪{C} |= empty(B^f_C). The complement of Ψ ∪ {C} is smaller than the complement of Ψ. By the induction hypothesis, we have eseq(B^f_C, Ψ ∪ {C}) =
true. By equation 6, etype conj(C, Ψ ) = true. Therefore, (a) holds.
(b) follows from (a) and equation 5. (c) follows from (b), equation 8
and lemma 4 in the appendix and (d) follows from (c) and equation 7.
This completes the proof of the lemma.
Lemma 1 establishes the completeness of etype(, ), etype conj(, ),
eseq(, ) and eseq conj(, ) while the following lemma establishes their
soundness.
Lemma 2. Let Ψ be a set of conjunctive type expressions, E a type
expression, C a conjunctive type expression, Θ a sequence expression
and Γ a conjunctive sequence expression.
(a) ρΨ |= empty(C) if etype conj(C, Ψ) = true, and
(b) ρΨ |= empty(E) if etype(E, Ψ) = true, and
(c) ρΨ |= empty(Γ) if eseq conj(Γ, Ψ) = true, and
(d) ρΨ |= empty(Θ) if eseq(Θ, Ψ) = true.
Proof. It suffices to prove (a) since (b),(c) and (d) follow from (a)
as in lemma 1. The proof is done by induction on dp(C, Ψ ) the depth
of the evaluation tree for etype conj(C, Ψ ).
Basis. dp(C, Ψ) = 1. etype conj(C, Ψ) = true implies either (i) pos(C) ∩ neg(C) ≠ ∅ or (ii) ∃C′ ∈ Ψ. C ≽ C′. In case (i), empty(C) is true and ρΨ |= empty(C). Consider case (ii). By the definition of ≽ and ρΨ, we have that etype conj(C, Ψ) = true implies ρΨ |= empty(C).
Induction. dp(C, Ψ) > 1. Assume etype conj(C, Ψ) = true and ρΨ |= ¬empty(C). By lemma 3, there is f ∈ ∩_{α∈pos(C)} F(α) such that ρΨ |= ¬empty(B^f_C). We have ρΨ∪{C} |= ¬empty(B^f_C). dp(B^f_C, Ψ ∪ {C}) < dp(C, Ψ). By the induction hypothesis, we have eseq(B^f_C, Ψ ∪ {C}) = false, for otherwise ρΨ∪{C} |= empty(B^f_C). By equation 6, etype conj(C, Ψ) = false, which contradicts etype conj(C, Ψ) = true. So, ρΨ |= empty(C) if etype conj(C, Ψ) = true. This completes the induction and the proof of the lemma.
The following theorem is a corollary of lemmas 1 and 2.
Theorem 1. For any type expression E, etype(E) = true iff empty(E).
Proof. By equation 4, etype(E) = etype(E, ∅). By lemma 1.(b) and
lemma 2.(b), we have etype(E, ∅) = true iff ρ∅ |= empty(E). The
result follows since ρ∅ = true.
5 Complexity
We now address the issue of complexity of the algorithm. We only
consider the worst-case time complexity of the algorithm. The time
spent on evaluating etype(E0 ) for a given type expression E0 can be
measured in terms of the number of nodes in the evaluation tree for
etype(E0 ).
The algorithm cycles through etype(, ), etype conj(, ), eseq(, ) and
eseq conj(, ). Thus, children of a node of the form etype(E, Ψ ) can
only be of the form etype conj(C, Ψ ), and so on.
Let |S| be the number of elements in a given set S. The largest possible table in the evaluation of etype(E0) contains all the conjunctive type expressions of which type atoms are from RTA(E0). Therefore, the table can contain at most 2^{|RTA(E0)|} conjunctive type expressions. So, the height of the tree is bounded by O(2^{|RTA(E0)|}).
We now show that the branching factor of the tree is also bounded by O(2^{|RTA(E0)|}). By equation 5, the number of children of etype(E, Ψ) is bounded by two to the power of the number of type atoms in E, which is bounded by |RTA(E0)| because E can only contain type atoms from RTA(E0). By equation 6, the number of children of etype conj(C, Ψ) is bounded by |Σ|. The largest number of children of a node eseq(Θ, Ψ) is bounded by two to the power of the number of sequences in Θ where Θ = B^f_C. For each τ ∈ neg(C), |push(∼(⊔A^f_τ))| is O(arity(f)) and |C| < |RTA(E0)|. Thus, the number of sequences in Θ is O(arity(f) · |RTA(E0)|) and hence the number of children of eseq(Θ, Ψ) is O(2^{|RTA(E0)|}) since arity(f) is a constant. By equation 8, the number of children of eseq conj(Γ, Ψ) is bounded by max_{f∈Σ} arity(f). Therefore, the branching factor of the tree is bounded by O(2^{|RTA(E0)|}).
The above discussion leads to the following conclusion.
Proposition 1. The time complexity of the algorithm is O(2^{|RTA(E0)|}).
The fact that the algorithm is exponential in time is expected because the complexity coincides with the complexity of deciding the emptiness of any tree automaton constructed from the type expression and the type definitions. A deterministic frontier-to-root tree automaton recognising [[E0]]∆ will consist of 2^{|RTA(E0)|} states as observed in the proof of lemma 5. It is well-known that the decision of the emptiness of the language of a deterministic frontier-to-root tree automaton takes time polynomial in the number of the states of the tree automaton. Therefore, the worst-case complexity of the algorithm is the best we can expect from an algorithm for deciding the emptiness of regular types that contain set operators.
6 Conclusion
We have presented an algorithm for deciding the emptiness of prescriptive regular types. Type expressions are constructed from type
constructors and set operators. Type definitions prescribe the meaning of type expressions.
The algorithm uses tabulation to ensure termination. Though the
tabulation is inspired by Dart and Zobel [10], the decision problem
we consider in this paper is more complex as type expressions may
contain set operators. Because set operators are supported, the algorithm can also be used to decide the inclusion and equivalence problems of regular types. The
way we use tabulation leads to a correct algorithm for regular types
while the Dart-Zobel algorithm has been proved incorrect for regular
types [24] in general. To the best of our knowledge, our algorithm is
the only correct algorithm for prescriptive regular types.
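The reduction from inclusion and equivalence to the emptiness test is the standard one for type languages closed under intersection and complement; a sketch, with ⊑ and ≈ used here for inclusion and equivalence (these symbols are assumptions of this note, not notation taken from the paper):

```latex
% Inclusion and equivalence reduce to the emptiness test decided by the algorithm:
E_1 \sqsubseteq E_2 \iff \mathit{empty}(E_1 \sqcap \mathord{\sim} E_2)
\qquad
E_1 \approx E_2 \iff \mathit{empty}(E_1 \sqcap \mathord{\sim} E_2)
                \land \mathit{empty}(E_2 \sqcap \mathord{\sim} E_1)
```

This holds because [[E1 ]]∆ ⊆ [[E2 ]]∆ exactly when [[E1 ⊓ ∼E2 ]]∆ = ∅.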
In addition to correctness, our algorithm generalises the work of
Dart and Zobel [10] in that type expressions can contain set operators and type definitions can be parameterised. Parameterised
type definitions are more natural than monomorphic type definitions [12,26,32] while set operators make type expressions concise.
The combination of these two features allows more natural type declarations. For instance, the type of the logic program append can be
declared or inferred as append(List(α), List(β), List(α⊔β)).
The algorithm is exponential in time. This coincides with the complexity of deciding the emptiness of the language recognised by a tree automaton
constructed from the type expression and the type definitions. However, the algorithm avoids the construction of the tree automaton
which cannot be constructed a priori when type definitions are parameterised.
Another related field is set constraint solving [3,2,20,18,11]. However, set constraint solving methods are intended to infer descriptive
types [28] rather than for testing the emptiness of a prescriptive
type [28]. Therefore, they are useful in different settings from the al-
gorithm presented in this paper. In addition, algorithms proposed for
solving set constraints [3,4,2,1] are not applicable to the emptiness
problem we considered in this paper. Take for example the constructor rule in [3,2] which states that emptiness of f (E1 , E2 , · · · , Em ) is
equivalent to the emptiness of Ei for some 1 ≤ i ≤ m. However,
empty(List(0)) is not equivalent to empty(0). The latter is true
while the former is false since [[List(0)]]∆ = {nil}. The constructor
rule doesn’t apply because it deals with function symbols only but
doesn’t take the type definitions into account.
References
1. A. Aiken, D. Kozen, M. Vardi, and E. Wimmers. The complexity of set constraints.
In Proceedings of 1993 Computer Science Logic Conference, pages 1–17, 1992.
2. A. Aiken and T.K. Lakshman. Directional type checking of logic programs. In
B. Le Charlier, editor, Proceedings of the First International Static Analysis Symposium, pages 43–60. Springer-Verlag, 1994.
3. A. Aiken and E. Wimmers. Solving systems of set constraints. In Proceedings of
the Seventh IEEE Symposium on Logic in Computer Science, pages 329–340. The
IEEE Computer Society Press, 1992.
4. A. Aiken and E. Wimmers. Type inclusion constraints and type inference. In
Proceedings of the 1993 Conference on Functional Programming Languages and
Computer Architecture, pages 31–41, Copenhagen, Denmark, June 1993.
5. C. Beierle. Type inferencing for polymorphic order-sorted logic programs. In
L. Sterling, editor, Proceedings of the Twelfth International Conference on Logic
Programming, pages 765–779. The MIT Press, 1995.
6. L. Cardelli and P. Wegner. On understanding types, data abstraction, and polymorphism. ACM computing surveys, 17(4):471–522, 1985.
7. M. Codish and V. Lagoon. Type dependencies for logic programs using aciunification. In Proceedings of the 1996 Israeli Symposium on Theory of Computing
and Systems, pages 136–145. IEEE Press, June 1996.
8. H. Comon, M. Dauchet, R. Gilleron, D. Lugiez, S. Tison, and M. Tommasi. Tree
Automata Techniques and Applications. Draft, 1998.
9. P.W. Dart and J. Zobel. Efficient run-time type checking of typed logic programs.
Journal of Logic Programming, 14(1-2):31–69, 1992.
10. P.W. Dart and J. Zobel. A regular type language for logic programs. In Frank
Pfenning, editor, Types in Logic Programming, pages 157–189. The MIT Press,
1992.
11. P. Devienne, J-M. Talbot, and S. Tison. Co-definite set constraints with membership expressions. In J. Jaffar, editor, Proceedings of the 1998 Joint Conference and
Symposium on Logic Programming, pages 25–39. The MIT Press, 1998.
12. T. Fruhwirth, E. Shapiro, M.Y. Vardi, and E. Yardeni. Logic programs as types
for logic programs. In Proceedings of Sixth Annual IEEE Symposium on Logic in
Computer Science, pages 300–309. The IEEE Computer Society Press, 1991.
13. J.P. Gallagher and D.A. de Waal. Fast and precise regular approximations of logic
programs. In M. Bruynooghe, editor, Proceedings of the Eleventh International
Conference on Logic Programming, pages 599–613. The MIT Press, 1994.
14. F. Gécseg and M. Steinby. Tree Automata. Akadémiai Kiadó, 1984.
15. F. Gécseg and M. Steinby. Tree languages. In G. Rozenberg and A. Salomma,
editors, Handbook of Formal Languages, pages 1–68. Springer-Verlag, 1996.
16. M. Hanus. Horn clause programs with polymorphic types: semantics and resolution.
Theoretical Computer Science, 89(1):63–106, 1991.
17. N. Heintze and J. Jaffar. A finite presentation theorem for approximating logic
programs. In Proceedings of the seventh Annual ACM Symposium on Principles of
Programming Languages, pages 197–209. The ACM Press, 1990.
18. N. Heintze and J. Jaffar. A decision procedure for a class of set constraints. Technical Report CMU-CS-91-110, Carnegie-Mellon University, February 1991. (Later
version of a paper in Proc. 5th IEEE Symposium on LICS).
19. N. Heintze and J. Jaffar. Semantic types for logic programs. In Frank Pfenning,
editor, Types in Logic Programming, pages 141–155. The MIT Press, 1992.
20. N. Heintze and J. Jaffar. Set constraints and set-based analysis. In Alan Borning,
editor, Principles and Practice of Constraint Programming, volume 874 of Lecture
Notes in Computer Science. Springer, May 1994. (PPCP’94: Second International
Workshop, Orcas Island, Seattle, USA).
21. D. Jacobs. Type declarations as subtype constraints in logic programming. SIGPLAN Notices, 25(6):165–73, 1990.
22. L. Lu. Type analysis of logic programs in the presence of type definitions. In
Proceedings of the 1995 ACM SIGPLAN Symposium on Partial Evaluation and
Semantics-Based program manipulation, pages 241–252. The ACM Press, 1995.
23. L. Lu. A polymorphic type analysis in logic programs by abstract interpretation.
Journal of Logic Programming, 36(1):1–54, 1998.
24. L. Lu and J. Cleary. On Dart-Zobel algorithm for testing regular type inclusion.
Technical report, Department of Computer Science, The University of Waikato,
October 1998. http://xxx.lanl.gov/ps/cs/9810001.
25. P. Mishra. Towards a theory of types in Prolog. In Proceedings of the IEEE international Symposium on Logic Programming, pages 289–298. The IEEE Computer
Society Press, 1984.
26. A. Mycroft and R.A. O’Keefe. A polymorphic type system for Prolog. Artificial
Intelligence, 23:295–307, 1984.
27. Frank Pfenning, editor. Types in logic programming. The MIT Press, Cambridge,
Massachusetts, 1992.
28. U.S. Reddy. Types for logic programs. In S. Debray and M. Hermenegildo, editors,
Logic Programming. Proceedings of the 1990 North American Conference, pages
836–40. The MIT Press, 1990.
29. M. Soloman. Type definitions with parameters. In Conference Record of the Fifth
ACM Symposium on Principles of Programming Languages, pages 31–38, 1978.
30. J. Tiuryn. Type inference problems: A survey. In B. Roven, editor, Proceedings of
the Fifteenth International Symposium on Mathematical Foundations of Computer
Science, pages 105–120. Springer-Verlag, 1990.
31. E. Yardeni, T. Fruehwirth, and E. Shapiro. Polymorphically typed logic programs.
In K. Furukawa, editor, Logic Programming. Proceedings of the Eighth International
Conference, pages 379–93. The MIT Press, 1991.
32. E. Yardeni and E. Shapiro. A type system for logic programs. Journal of Logic
Programming, 10(2):125–153, 1991.
33. J. Zobel. Derivation of polymorphic types for Prolog programs. In J.-L. Lassez, editor, Logic Programming: Proceedings of the fourth international conference, pages
817–838. The MIT Press, 1987.
Appendix
Lemma 3. Let C be a conjunctive type expression. Then empty(C) iff, for every f ∈ ∩α∈pos(C) F (α),
empty((⊓ω∈pos(C) (⊔Afω )) ⊓ (⊓τ ∈neg(C) ∼(⊔Afτ ))).
Proof. Let t be a sequence of terms and f a function symbol. By the definition of [[·]]∆ , f (t) ∈ [[C]]∆ iff f ∈ ∩α∈pos(C) F (α) and t ∈ [[⊓ω∈pos(C) (⊔Afω )]]∆ \ [[⊔τ ∈neg(C) (⊔Afτ )]]∆ . Moreover, t ∈ [[⊓ω∈pos(C) (⊔Afω )]]∆ \ [[⊔τ ∈neg(C) (⊔Afτ )]]∆ iff t ∈ [[(⊓ω∈pos(C) (⊔Afω )) ⊓ (⊓τ ∈neg(C) ∼(⊔Afτ ))]]∆ . Thus, empty(C) iff empty((⊓ω∈pos(C) (⊔Afω )) ⊓ (⊓τ ∈neg(C) ∼(⊔Afτ ))) for each f ∈ ∩α∈pos(C) F (α).
Lemma 4. Let Γ be a conjunctive sequence expression. Then empty(Γ ) iff ∃ 1 ≤ j ≤ ‖Γ ‖. empty(Γ↓j).
Proof. Let ‖Γ ‖ = n and Γ = γ1 ⊓ γ2 ⊓ · · · ⊓ γm with γi = ⟨γi,1 , γi,2 , · · · , γi,n ⟩. We have [[Γ ]]∆ = ∩1≤i≤m [[γi ]]∆ and Γ↓j = γ1,j ⊓ γ2,j ⊓ · · · ⊓ γm,j . Now ∃1 ≤ j ≤ n. empty(Γ↓j) iff ∃1 ≤ j ≤ n. ∩1≤i≤m [[γi,j ]]∆ = ∅ iff [[Γ ]]∆ = ∅ iff empty(Γ ).
Lemma 5. [M]]∆ is a regular term language for any type expression
M.
Proof. The proof is done by constructing a regular term grammar
for M [14]. We first consider the case M ∈ T (Π ∪ {1, 0}). Let
R = ⟨RTA(M), Σ, ∅, Υ, M⟩ with
Υ = {(α → f (α1 , · · · , αk )) ∈ ground(∆) | α ∈ RTA(M)}
R is a regular term grammar. It now suffices to prove that t ∈ [M]]∆
iff M ⇒∗R t.
– Sufficiency. Assume M ⇒∗R t. The proof is done by induction on
derivation steps in M ⇒∗R t.
• Basis. M ⇒R t. t must be a constant and M → t is in Υ
which implies M → t is in ground(∆). By the definition of [[·]]∆ , t ∈ [[M]]∆ .
• Induction. Suppose M ⇒R f (M1 , · · · , Mk ) ⇒R^(n−1) t. Then t = f (t1 , · · · , tk ) and Mi ⇒R^ni ti with ni ≤ (n − 1). By the induction hypothesis, ti ∈ [[Mi ]]∆ and hence t ∈ [[M]]∆ by the definition of [[·]]∆ .
– Necessity. Assume t ∈ [M]]∆ . The proof is done by the height of
t, denoted as height(t).
• height(t) = 0 implies that t is a constant. t ∈ [M]]∆ implies
that M → t is in ground(∆) and hence M → t is in Υ .
Therefore, M ⇒R t.
• Let height(t) = n. Then t = f (t1 , · · · , tk ). t ∈ [M]]∆ implies
that (M → f (M1, · · · , Mk )) ∈ ground(∆) and ti ∈ [Mi]∆ .
By the definition of Υ , we have (M → f (M1 , · · · , Mk )) ∈
Υ . By the definition of RTA(·), we have Mi ∈ RTA(M).
By the induction hypothesis, Mi ⇒∗R ti . Therefore, M ⇒R
f (M1 , · · · , Mk ) ⇒∗R f (t1 , · · · , tk ) = t.
Now consider the case M ∈ T (Π ∪ {⊓, ⊔, ∼, 1, 0}). We complete
the proof by induction on the height of M.
– height(M) = 0. Then M contains no set operators. We have
already proved that [M]]∆ is a regular term language.
– Now suppose height(M) = n. If M contains no set operators then the lemma has already been proved. If the principal constructor of M is a set operator then the result follows immediately
as regular term languages are closed under union, intersection
and complement operators [14,15,8]. It now suffices to prove the
case M = c(M1 , · · · , Ml ) with c ∈ Π. Let N = c(X1 , · · · , Xl )
where each Xj is a different new type constructor of arity 0.
Let Π ′ = Π ∪ {X1 , · · · , Xl }, Σ ′ = Σ ∪ {x1 , · · · , xl } and ∆′ = ∆ ∪
{Xj → xj |1 ≤ j ≤ l}. [N ]∆′ is a regular term language on
Σ ∪ {x1 , · · · , xl } because N doesn’t contain set operators. By the
induction hypothesis, [Mj]∆ is a regular term language. By the
definition of [·]]· , we have
[[M]]∆ = [[N ]]∆′ [x1 := [[M1 ]]∆ , · · · , xl := [[Ml ]]∆ ]
which is a regular term language [14,15,8]. S[y1 := Sy1 , · · · , ] is
the set of terms each of which is obtained from a term in S by
replacing each occurrence of yj with a (possibly different) term
from Syj . This completes the induction and the proof.
The proof also indicates that a non-deterministic frontier-to-root tree automaton that recognises [[M]]∆ has |RTA(M)| states and that a deterministic frontier-to-root tree automaton that recognises [[M]]∆ has O(2^|RTA(M)|) states.
A Learning-to-Infer Method for Real-Time
Power Grid Topology Identification
arXiv:1710.07818v1 [cs.LG] 21 Oct 2017
Yue Zhao, Member, IEEE, Jianshu Chen, Member, IEEE, and H. Vincent Poor, Fellow, IEEE
Abstract—Identifying arbitrary topologies of power networks
in real time is a computationally hard problem due to the
number of hypotheses that grows exponentially with the network
size. A new “Learning-to-Infer” variational inference method
is developed for efficient inference of every line status in the
network. Optimizing the variational model is transformed to
and solved as a discriminative learning problem based on Monte
Carlo samples generated with power flow simulations. A major
advantage of the developed Learning-to-Infer method is that
the labeled data used for training can be generated in an
arbitrarily large amount fast and at very little cost. As a result,
the power of offline training is fully exploited to learn very
complex classifiers for effective real-time topology identification.
The proposed methods are evaluated in the IEEE 30, 118 and
300 bus systems. Excellent performance in identifying arbitrary
power network topologies in real time is achieved even with
relatively simple variational models and a reasonably small
amount of data.
Index Terms—Topology identification, line outage detection,
power system, smart grid, machine learning, variational inference, Monte Carlo method, neural networks
I. INTRODUCTION
Lack of situational awareness in abnormal system conditions is a major cause of blackouts in power networks [3].
Network component failures such as transmission line outages,
if not timely identified and contained, can quickly escalate to
cascading failures. In particular, when line failures happen,
the power network topology changes instantly, newly stressed
areas can unexpectedly emerge, and subsequent failures may
be triggered that lead to increasingly complex network topology changes. While the power system is usually protected
against the so called “N − 1” failure scenarios (i.e., only one
component fails), as failures accumulate, effective automatic
protection is no longer guaranteed. Thus, when cascading
failures start developing, real-time protective actions critically
depend on correct and timely knowledge of the network
status. Indeed, without knowledge of the network topology
changes, protective control methods have been observed to
further aggravate the failure scenarios [4]. Thus, real-time
network topology identification is essential to all network
Some preliminary results from this work were presented in part at the IEEE
Workshop on Statistical Signal Processing, Palma de Mallorca, Spain, 2016
[1] and in part at the IEEE Global Conference on Signal and Information
Processing (GlobalSIP), Arlington, VA, USA, 2016 [2]. This material is based
in part on work supported by DARPA.
Y. Zhao is with the Dept. of Electrical and Computer Engineering, Stony Brook University, Stony Brook, NY, 11794 USA (e-mail:
[email protected]).
J. Chen is with Microsoft Research, Redmond, WA, 98052 USA. (e-mail:
[email protected])
H. V. Poor is with the Dept. of Electrical Engineering, Princeton University,
Princeton, NJ 08544 USA (e-mail: [email protected])
control decisions for mitigating failures. In particular, since
the first few line outages may have already been missed, the
ability to identify in real time the network topology with an
arbitrary number of line outages becomes critical to prevent
system collapse.
Real-time topology identification is however a very challenging problem, especially when unknown line statuses in the
network quickly accumulate as in scenarios that cause large-scale blackouts [3]. The number of possible topologies grows
exponentially with the number of unknown line statuses, making real time topology identification fundamentally hard. Other
limitations in practice such as behaviors of human operators
under time pressure, missing and contradicting information,
and privacy concerns over data sharing can make this problem
even harder. Assuming a small number of line failures, exhaustive search methods have been developed in [5], [6], [7] and
[8] based on hypothesis testing, and in [9] and [10] based on
logistic regression. To overcome the prohibitive computational
complexity of exhaustive search methods, [11] has developed
sparsity exploiting outage identification methods with overcomplete observations to identify sparse multi-line outages.
Without assuming sparsity of line outages, a graphical model
based approach has been developed for identifying arbitrary
network topologies [12].
On a related note, non-real-time topology identification has
also been extensively studied: the underlying topology stays
the same, while many data are collected over a relatively
long period of time before the topology can be identified.
A variety of data have been exploited for addressing this
problem, e.g., data of power injections [13], voltage correlation
[14], and energy prices [15]. For power distribution systems
in particular, graphical model based approaches have been
developed [16], [17].
In this paper, we focus on real-time identification of arbitrary grid topologies based on instantly collected measurements in the power system. We start with a probabilistic model
of the variables in a power system (topology, power injections,
voltages, power flows, currents etc.) and in its monitoring
system (sensor measurements on all kinds of physical quantities). We then formulate the topology identification problem
in a Bayesian inference framework, where we aim to compute
the posterior probabilities of the topologies given any instant
measurements.
To overcome the fundamental computational complexity
due to the exponentially large number of possible topologies,
we develop a variational inference framework, in which we
aim to approximate the desired posterior probabilities using
models that allow computationally easy marginal inference
of line statuses. Importantly, we develop “end-to-end” vari-
ational models for topology identification, and allow arbitrary
variational model structures and complexities. In order to
find effective end-to-end variational models, we transform
optimizing a variational model to a discriminative learning
problem leveraging a Monte Carlo approach: a) Based on
full-blown power flow equations, data samples of network
topology, network states, and sensor measurements in the
network can be efficiently generated according to a generative
model of these quantities, and b) With these simulated data,
discriminative models are learned offline, which then offer
real-time prediction of the network topology based on newly
observed instant measurements from the real world. We thus
term the proposed method “Learning-to-Infer”. It is important
to note that this Learning-to-Infer method is not limited by any
potential lack of real-world data, as the entire offline training
procedure can be conducted entirely based on simulated data.
A major strength of the proposed Learning-to-Infer method
is that the labeled data set for training the variational model
can be generated in an arbitrarily large amount, at very little
cost. As such, we can fully exploit the benefit of offline model
training in order to get accurate online topology identification
performance. The proposed approach is also not restricted to
specific models and learning methods, but can exploit any
powerful models such as deep neural networks [18]. As a
result, variational models of very high complexities can be
adopted, yet without worrying about overfitting since more
labeled training data can always be generated had overfitting
been observed.
The developed Learning-to-Infer method is evaluated in
the IEEE 30, 118, and 300 bus systems [19] for identifying
topologies with an arbitrary number of line outages. It is
demonstrated that, even with relatively simple variational models and a reasonably small amount of data, the performance
is surprisingly good for this very challenging task.
The remainder of the paper is organized as follows. Section
II introduces the system model, and formulates real-time topology identification as a Bayesian inference problem. Section III
develops the Learning-to-Infer variational inference method.
Section IV discusses the architectures of neural networks
employed in this study. Section V presents the results from
our numerical experiments. Section VI concludes the paper.
II. PROBLEM FORMULATION
A. Power Flow Models
We consider a power system with N buses, and its baseline
topology (i.e., the network topology when there is no line
outage) with L lines. We denote the incidence matrix of the
baseline topology by M ∈ {−1, 0, 1}N ×L [20]. We use a
binary variable sl to denote the status of a line l, with sl = 1
for a connected line l, and 0 otherwise. The actual topology
of the network can then be represented by s = [s1 , . . . , sL ]T .
Generalizing this notation, we also employ smn ∈ {1, 0} to
denote whether two buses m and n are connected by a line or
not. Given a network topology s, the system’s bus admittance
matrix Y can be determined accordingly with the physical
parameters of the system [21]: Ymn = smn (Gmn + jBmn ),
where Gmn and Bmn denote conductance and susceptance,
respectively. Note that, when two buses m and n are not
connected, Ymn = smn = 0.
We denote the real and reactive power injections at all the
buses by P , Q ∈ RN , and the voltage magnitudes and phase
angles by V , θ ∈ RN . Given the bus admittance matrix Y ,
the nodal power injections and the nodal voltages satisfy the
following AC power flow equations [21]:
Pm = Vm ∑_{n=1}^{N} Vn smn (Gmn cos(θm − θn) + Bmn sin(θm − θn)),
Qm = Vm ∑_{n=1}^{N} Vn smn (Gmn sin(θm − θn) − Bmn cos(θm − θn)),   (1)
where a subscript m denotes the mth component of a vector. In
particular, given the network topology s and a set of controlled
input values {P , Qin , V in }, (where Qin and V in consist of
some subsets of Q and V , respectively,) the remaining values
of {Q, V , θ} can be determined by solving (1). Typically,
apart from a slack bus, most buses are “P Q buses” at which
the real and reactive power injections are controlled inputs,
and the remaining buses are “P V buses” at which the real
power injection and voltage magnitude are controlled inputs
[21]. We refer the readers to [21] for more details of solving
AC power flow equations.
A useful approximation of the AC power flow model is
the DC power flow model: under a topology s, the nodal
real power injections and voltage phase angles approximately
satisfy the following equation [21],
P = M SΓM T θ,
(2)
where S = diag(s1 , . . . , sL ), Γ = diag(1/x1 , . . . , 1/xL ), and xl
is the reactance of line l. We note that, in the DC power
flow model, reactive power is not considered, and all voltage
magnitudes are approximated by a constant.
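As a concrete illustration of (2), the sketch below assembles M S Γ M^T for a small hypothetical 3-bus network and evaluates the injections from a given angle vector. All numerical values and the NumPy-based setup are illustrative assumptions, not data from this paper.

```python
import numpy as np

# Hypothetical 3-bus, 3-line example: line 1 joins buses 1-2, line 2 joins 1-3,
# line 3 joins 2-3 (values chosen only to illustrate the algebra of eq. (2)).
M = np.array([[ 1,  1,  0],
              [-1,  0,  1],
              [ 0, -1, -1]])            # N x L incidence matrix
x = np.array([0.1, 0.2, 0.25])          # line reactances x_l
s = np.array([1, 1, 0])                 # line statuses; line 3 is in outage

S = np.diag(s)                          # S = diag(s_1, ..., s_L)
Gamma = np.diag(1.0 / x)                # Gamma = diag(1/x_1, ..., 1/x_L)
B = M @ S @ Gamma @ M.T                 # the matrix M S Gamma M^T of eq. (2)

theta = np.array([0.0, -0.05, -0.08])   # bus phase angles (radians)
P = B @ theta                           # nodal real power injections
```

Because B is a weighted Laplacian, the computed injections sum to zero, as expected for a lossless DC model.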
B. Observation Models
To monitor the power system, we consider real time measurements taken by sensors measuring nodal voltage magnitudes and phase angles, current magnitudes and phase angles
on lines, real and reactive power flows on lines, nodal real
and reactive power injections, etc. In general, the observation
model can be written as the following,
y = h(s, P , Qin , V in ) + v,   (3)
where a) y ∈ R^K collects all the noisy measurements, b) h(s, P , Qin , V in ) = [h1 (s, P , Qin , V in ), . . . , hK (s, P , Qin , V in )]^T denotes the noiseless values of the measured quantities, and the forms of {hk (·)} depend on the specific locations and types of the sensors, and c) v denotes the measurement noise.
Remark 1: A noiseless measurement function hk (s, P , Qin , V in ) may not have a closed form. For
example, given s, P , Qin and V in , while the nodal voltage
magnitude and phase angle at a particular P Q bus can be
solved from (1), such a solution can only be obtained using
numerical methods, and a closed form expression is not
available. For discussions on the existence and uniqueness
of the solution to the power flow equations (1), we refer the
readers to [22].
The observation model can be significantly simplified
under the approximate DC power flow model (2). For example,
measurements of θ provided by phasor measurement units
(PMUs) located at a subset of the buses M can be modeled
as
y = θM + v,
(4)
where θM is formed by entries of θ from buses in M. From
the DC power flow model (2), we have

θ = (M SΓM^T)^+ P ,   (5)
where (·)+ denotes pseudoinverse1 . We note that, while the
noiseless voltage phase angle measurements enjoy a closed
form (5) and are linear in the power injections P , they are
not linear in the line statuses s (= diag(S)).
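A matching sketch of the simplified observation model (4)–(5), again on hypothetical values: angles are recovered from injections via the pseudoinverse (with the reference-bus convention of the footnote) and observed with additive Gaussian noise at a subset of buses.

```python
import numpy as np

rng = np.random.default_rng(0)

# Reuse B = M S Gamma M^T from the previous sketch and a balanced injection vector.
M = np.array([[1, 1, 0], [-1, 0, 1], [0, -1, -1]])
Gamma = np.diag(1.0 / np.array([0.1, 0.2, 0.25]))
B = M @ np.diag([1, 1, 0]) @ Gamma @ M.T

P = np.array([0.8, -0.3, -0.5])          # injections sum to zero
theta = np.linalg.pinv(B) @ P            # eq. (5): minimum-norm solution
theta -= theta[0]                        # pin the reference bus to angle 0 (footnote 1)

pmu_buses = [0, 2]                       # hypothetical PMU locations
noise = rng.normal(scale=0.01 * np.pi / 180, size=len(pmu_buses))
y = theta[pmu_buses] + noise             # eq. (4): noisy phase-angle measurements
```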
C. Topology Identification as Bayesian Inference
We are interested in identifying the network topology s
in real time based on instant measurements y collected in
the power system. We formulate this topology identification
problem as a Bayesian inference problem. First, we model
s, P , Qin , V in and y with a joint probability distribution,
p(s, P , Qin , V in , y) = p(s, P , Qin , V in ) · p(y|s, P , Qin , V in ).   (6)
However, as the number of hypotheses of s grows exponentially with the number of unknown line
statuses, performing such a hypothesis testing based on an
exhaustive search becomes computationally intractable. In
general, as there are up to 2L possibilities for s, computing,
or even listing the probabilities p(s|y), ∀s has an exponential
complexity.
Posterior Marginal Probabilities: As an initial step towards
addressing the fundamental challenge of computational complexity, instead of computing p(s|y), we focus on computing
the posterior marginal conditional probabilities p(sl |y), l =
1, . . . , L. We note that the posterior marginals are characterized by just L numbers, P(sl = 1|y), l = 1, . . . , L, as
opposed to 2L −1 numbers required for characterizing p(s|y).
Accordingly, the hypothesis testing problem on s is decoupled
into L separate binary hypothesis testing problems: for each
line l, the MAP detector identifies argmaxsl ∈{0,1} p(sl |y, P ).
As a result, instead of minimizing the identification error
probability of the vector s (i.e., “symbol” error probability),
the binary MAP detectors minimize the identification error
probability of each line status sl (i.e., “bit” error probability).
Although listing the posterior marginals p(sl |y) is
tractable, computing them, however, still remains intractable.
In particular, even with p(s|y) given, summing out all sk , k ≠
l to obtain p(sl |y) still requires an exponential computational
complexity [25]. As a result, even a binary MAP detection
decision of sl cannot be made in a computationally tractable
way. This challenge will be addressed by a novel method we
will develop in the next section.
III. A LEARNING-TO-INFER METHOD
It is important to note that, given s, P , Qin , V in , the noiseless
measurements h (cf. (3)) can be exactly computed by solving
the AC power flow equations (1). Adding noises to h then
leads to p(y|s, P , Qin , V in ).
Remark 2 (Generative Model): (6) represents a generative
model [23] with which a) the topology and the controlled
inputs of power injections and voltage magnitudes are generated according to a prior distribution p(s, P , Qin , V in ), and
b) all the quantities h measured in the system can then be
computed by solving the power flow equations (1), based on
which the actual noisy measurements y follow the conditional
probability distribution p(y|s, P , Qin , V in ).
Our objective is to infer the topology of the power grid s
given the observed measurements y. Thus, under a Bayesian
inference framework, we are interested in computing the
posterior conditional probabilities: ∀s,
p(s|y) = ∫ p(s, P , Qin , V in ) p(y|s, P , Qin , V in ) dP dQin dV in / p(y).   (7)
Given the observations y, a maximum a-posteriori probability
(MAP) detector would pick argmaxs p(s|y) as the topology
identification decision, which minimizes the identification error probability [24].
1 For a connected network, the solution of θ given P is made unique by
setting the phase angle at a reference bus to be zero.
A. A Variational Inference Framework
In this section, we develop a variational method for approximate inference of the posterior marginal conditional
probabilities p(sl |y), l = 1, . . . , L. The general idea is to find
a variational conditional distribution q(s|y) which
a) approximates the original p(s|y) very closely, and
b) offers fast and accurate topology identification results.
In particular, we consider that q(s|y) is modeled by some
parametric form (e.g., neural networks), and is hence chosen
from some family of parameterized conditional probability distributions {qβ (s|y)}, where β is a vector of model parameters.
It is worth noting that q(s|y) is a function of both s and y, and
the parameters β associate both s and y with the probability
value qβ (s|y).
To achieve the two goals above, we aim to choose a family
of probability distributions {qβ (s|y)} to satisfy the following:
• The parametric form of {qβ (s|y)} has sufficient expressive power to represent very complicated functions, so that our approximation to the true p(sl |y) can be made sufficiently precise.
• It is easy to compute the marginal qβ (sl |y), so that we can use it to infer sl based on the observed y with low computation complexity.
From a family of parameterized distributions {qβ (s|y)}, we
would like to choose a qβ (s|y) that approximates p(s|y)
TABLE I. THE LEARNING-TO-INFER METHOD

Offline computation:
1. Generate a labeled data set {s^t, y^t} using Monte Carlo simulations with the full-blown power flow and sensor models.
2. Select a parameterized variational model {qβ (s|y)}.
3. Train the model parameters β using the generated data set.

Online inference (in real time):
1. Collect instant measurements y from the system.
2. Compute the approximate posterior marginals qβ∗ (sl |y), l = 1, . . . , L, and infer the line statuses {sl }.
as closely as possible. For this, we employ the Kullback-Leibler (KL) divergence as a metric of closeness between two probability distributions,

D(p‖qβ ) ≜ Σ_s p(s|y) log [ p(s|y) / qβ (s|y) ].   (8)
Note that, for any particular realization of observations y, a
KL divergence D(pkqβ ) can be computed. Thus, D(pkqβ )
can be viewed as a function of y. Since we would like
the parameterized conditional qβ (s|y) to closely approximate
p(s|y) for all y, we would like to minimize the expected KL
divergence as follows:
min_β Ey [D(p‖qβ )]
⇔ min_β Σ_y p(y) Σ_s p(s|y) log [ p(s|y) / qβ (s|y) ]
⇔ min_β Σ_{s,y} p(s, y) log [ p(s|y) / qβ (s|y) ]
⇔ max_β E_{s,y} [log qβ (s|y)],   (9)
where the expectation is taken with respect to the true distribution p(s, y).
B. From Generative Model to Discriminative Learning
Evaluating Es,y [log qβ (s|y)] is, however, very difficult,
primarily because it again requires the summation of an
exponentially large number of terms. To address this, the
key step forward is that we can approximate the expectation
by the empirical mean of log qβ (s|y) over a large number
of Monte Carlo samples, generated according to the true
joint probability p(s, P , Qin , V in , y) (cf. (6)). We denote
the relevant Monte Carlo samples by {st , y t ; t = 1, . . . , T }.
Accordingly, (9) is approximated by the following,

max_β (1/T) Σ_{t=1}^{T} log qβ (s^t | y^t).   (10)
With a data set {st , y t } generated using Monte Carlo simulations, (10) can then be solved as a deterministic optimization
problem. The optimal solution of the model parameters β ∗
approaches that for the original problem (9) as T → ∞.
In fact, the problem (10) can be viewed as an empirical
risk minimization problem in machine learning [26], as it
Fig. 1. Overall architecture of the Learning-to-Infer method.
trains a discriminative model qβ (s|y) with a data set {st , y t }
generated from a generative model p(s, P , Qin , V in , y) (cf.
Remark 2). As a result of this offline learning / training process
(10), an approximate posterior function qβ∗ (s|y) is obtained.
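To make the learned objective concrete, the sketch below assumes one simple variational family, which is an illustrative choice and not prescribed by the paper: qβ(s|y) factorizes over lines as independent Bernoullis whose logits come from a neural network f_beta with L outputs. Under that assumption the Monte Carlo objective (10) becomes an average binary cross-entropy.

```python
import torch

# Assumption (for illustration only): q_beta(s|y) = prod_l Bernoulli(sigmoid(f_beta(y)_l)),
# so log q_beta(s|y) decomposes over lines and (10) is an average cross-entropy.
def neg_empirical_objective(f_beta, y_batch, s_batch):
    """Monte Carlo estimate of -(1/T) * sum_t log q_beta(s^t | y^t)."""
    logits = f_beta(y_batch)                              # shape (T, L)
    return torch.nn.functional.binary_cross_entropy_with_logits(
        logits, s_batch.float(), reduction="mean")
```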
C. Offline Learning for Online Inference
It is important to note that,
a) the training process to obtain the function qβ∗ (s|y) is
conducted completely offline;
b) the use of the trained function qβ∗ (s|y), however, is in
real time, i.e., online.
In particular, in real time, given whatever newly observed
measurements y of the system, based on qβ∗ (s|y), the approximate posterior marginals qβ∗ (sl |y), l = 1, . . . , L will
be computed. Based on such instantly computed qβ∗ (sl |y), a
detection decision of whether line l (= 1, . . . , L) is connected
or not in the current topology will be made. For example, a
MAP detector would make the following decision: for all l = 1, . . . , L,

ŝl = 0 if qβ∗ (sl = 0|y) > 0.5, and ŝl = 1 otherwise.   (11)
Accordingly, we name our proposed methodology
“Learning-to-Infer”: To perform real time inference of
network topology, we exploit offline learning to train a
detector based on labeled data simulated from the full-blown
physical model of the power system. The methodology is
summarized in Table I. A system diagram is plotted in Figure
1.
Remark 3 (Training Binary Classifiers): For any detector
that identifies the status of a line l, (e.g., a binary MAP
detector), it can also be viewed as a binary classifier sˆl (y) ∈
{0, 1}: For each possible realization of y, this classifier outputs
an inferred status of line l. From this perspective, solving
(10) is exactly a supervised learning process based on a
labeled data set, {st , y t }, where {st } are the output labels
that correspond to the input data {y t }. As a result, the rich
literature on supervised learning for training binary classifiers
directly apply to our problem under this Learning-to-Infer
framework.
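Viewed this way, the real-time stage reduces to thresholding the learned marginals, and a natural figure of merit is the per-line ("bit") error. A minimal sketch, with f_beta denoting the trained network assumed in the earlier sketch:

```python
import torch

def identify_topology(f_beta, y):
    """Real-time decision rule (11): threshold each approximate marginal at 0.5."""
    with torch.no_grad():
        probs = torch.sigmoid(f_beta(y))      # q_beta(s_l = 1 | y), l = 1..L
        return (probs > 0.5).long()           # s_hat

def misidentified_lines(s_hat, s_true):
    """Average number of misidentified line statuses per sample ("bit" errors)."""
    return (s_hat != s_true).float().sum(dim=-1).mean()
```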
D. Advantages of the Proposed Method
One great advantage of this Learning-to-Infer method is
that we can generate labeled data very efficiently. Specifically, we can efficiently sample from the generative model
of p(s, P , Qin , V in , y) (cf. (6)) as long as we have some
prior p(s, P , Qin , V in ) that is easy to sample from. While
historical data and expert knowledge would surely help in
forming such priors, using simple uninformative priors can
already suffice as will be shown later in the numerical examples. As a result, we can obtain an arbitrarily large set
of data with very little cost to train the discriminative model.
This is quite different from the typical situations encountered
in machine learning problems, where obtaining a large amount
of labeled data is usually expensive as it requires extensive
human annotation effort.
Furthermore, once the approximate posterior distribution
qβ (s|y) is learned, it can be deployed to infer the power
grid topology in real-time as the computation complexity of
qβ (sl |y) is very low by design. This is especially important
in monitoring large-scale power grids in real time, because,
although training qβ (s|y) could take a reasonable amount of
time, the inference speed is very fast. Therefore, the learned
predictor q can be used in real-time with low-cost hardware.
Limitations of Historical Data and Power of Simulated
Data: In overcoming the computational complexity challenges
of real-time topology identification, it is particularly worth
noting the fundamental limitation of using real historical data.
Even with the explosion of data available from pervasive
sensors in power systems, the data are often collected under
a very limited set of system scenarios. For example, most
historical data are collected under normal system topologies.
Even with data collected under slowly updated systems or
faulty systems, the underlying topologies in these real world
cases only represent a very small fraction of the entire,
exponentially large model space. It would not be prudent to
postulate that a fault event in the future would always resemble
some of the earlier faults that happened in the past for which
the data have been collected. Consequently,
historical data are fundamentally insufficient for real time
grid topology identification especially under rare faults and
cascading failures.
Simulated data, as evidenced in the proposed Learning-to-Infer framework, offer great potential beyond what historical
data can offer. A set of scenarios that is orders of magnitude richer
can be generated, and a learning procedure based on these
simulated data can provide very powerful classifiers for identifying arbitrary topologies that may appear in the future, but
have rarely, if not at all, appeared in the past.
IV. NEURAL NETWORK ARCHITECTURES FOR LEARNING CLASSIFIERS
To perform binary MAP inference of each line status, the
decision boundary of the MAP detector is highly nonlinear (cf.
Remark 3). We investigate classifiers based on neural networks
to capture such complex nonlinear decision boundaries. As
we have L lines, a straightforward design architecture is to
train a separate classifier for each single line l: the input
layer of the neural network consists of y, and the output
layer consists of just one node predicting either sl = 0 or 1.
Thus, a total of L classifiers need to be trained. For training
and testing, we generate labeled data {st , y t } randomly that
Fig. 2. L separately trained neural networks, (which could have
multiple hidden layers).
Fig. 3. A single jointly trained neural network (which could have
multiple hidden layers) whose features are shared for inferring all L
line statuses.
satisfy the power flow equations and the observation models.
Each st = [st1 , . . . , stL ]T consists of L labels used by the
L classifiers respectively. A diagram illustrating this neural
network architecture is depicted in Figure 2. The function of
the neural network for classifying sl can be understood as
follows: The hidden layers of neurons compute a number of
nonlinear features of the input y, and the output layer applies
a binary linear classifier to these features to make a decision
on sl .
Next, we introduce a second architecture that allows classifiers for different lines to share features, which can lead to
more efficient learning of the classifiers. Specifically, instead
of training L separate neural networks each with one node
in its output layer, we train one neural network whose output
layer consists of L nodes each predicting a different line’s
status. An illustration of this architecture is depicted in Figure
3. As a result, the features computed by the hidden layers
can all be used in classifying any line’s status. The idea of
using shared features is that certain common features may
provide good predictive power in inferring many different
lines’ statuses in a power network.
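A minimal sketch of the two architectures of Figures 2 and 3, assuming PyTorch and a single hidden ReLU layer; the layer sizes are placeholders rather than the values used in the experiments.

```python
import torch.nn as nn

def separate_classifiers(K, hidden, L):
    # Figure 2: L separately trained networks, each with a single output node.
    return [nn.Sequential(nn.Linear(K, hidden), nn.ReLU(), nn.Linear(hidden, 1))
            for _ in range(L)]

def shared_classifier(K, hidden, L):
    # Figure 3: one network whose hidden features are shared by all L output nodes.
    return nn.Sequential(nn.Linear(K, hidden), nn.ReLU(), nn.Linear(hidden, L))
```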
Furthermore, using a single neural network with feature
sharing can drastically reduce the computational complexity of
both the training and the testing processes. Indeed, while using
separate neural networks requires training of L classifiers,
using a neural network that allows feature sharing involves
training of only a single classifier. Note that, with similar
sizes of neural networks, adding nodes in the output layer
incurs only a very small increase in the training time. As a
result, there is an O(L) reduction in computation time for
this architecture with shared features, which can be significant
savings for large power networks.
Evidently, compared with L separate neural networks, a
shared neural network of the same size would have a performance degradation in classification due to a reduced expressive
power of the model. However, such a performance degradation
can be erased by increasing the size of the shared neural network. In fact, increasing the size of the shared neural network
to be the sum of that of the separate neural networks leads
to a classifier model that is strictly more general, and hence
offers a performance enhancement as opposed to degradation.
As will be shown later, it is sufficient to increase the size
of the shared neural network architecture by a much smaller
factor to achieve the same performance as the separate neural
network architecture does.
With the proposed Learning-to-Infer method, since labeled
data can be generated in an arbitrarily large amount using
Monte Carlo simulations, whenever overfitting is observed,
it can in principle always be overcome by generating more
labeled data for training. Thus, as long as the computation
time allows, we can use neural network models of whatever
complexity for approximating the binary MAP detectors, without worrying about overfitting.
TABLE II. DATA SET SIZE VS. THE ENTIRE SEARCH SPACE

The (reduced) IEEE 30 bus system with 38 lines
  Number of all topologies: 2^38 = 2.75 × 10^11
  Number of topologies with 8 disconnected lines: C(38, 8) = 4.89 × 10^7
  The generated data set: 3 × 10^5

The (reduced) IEEE 118 bus system with 170 lines
  Number of all topologies: 2^170 = 1.50 × 10^51
  Number of topologies with 13 disconnected lines: C(170, 13) = 9.94 × 10^18
  The generated data set: 8 × 10^5

The (reduced) IEEE 300 bus system with 322 lines
  Number of all topologies: 2^322 = 8.54 × 10^96
  Number of topologies with 12 disconnected lines: C(322, 12) = 2.11 × 10^21
  The generated data set: 2.2 × 10^6
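The counts in Table II are plain combinatorial quantities and can be checked directly:

```python
from math import comb

for L, k in [(38, 8), (170, 13), (322, 12)]:
    print(f"2^{L} = {2**L:.2e}   C({L},{k}) = {comb(L, k):.2e}")
# 2^38 = 2.75e+11   C(38,8) = 4.89e+07
# 2^170 = 1.50e+51  C(170,13) = 9.94e+18
# 2^322 = 8.54e+96  C(322,12) = 2.11e+21
```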
V. NUMERICAL EXPERIMENTS
We evaluate the proposed Learning-to-Infer method for
grid topology identification with three benchmark systems of
increasing sizes, the IEEE 30, 118, and 300 bus systems, as the
baseline topologies. As opposed to considering only a small
number of simultaneous line outages as in existing works, we
allow any number of line outages, and investigate whether the
learned discriminative classifiers can successfully recover the
topologies.
A. Data Set Generation
In our experiments, we employ the DC power flow model
(2) to generate the data sets. Accordingly, the set of controlled
inputs {P , Qin , V in } reduce to just {P }, and the generative
model (6) reduces to p(s, P , y) = p(s, P )p(y|s, P ). To
generate a data set {st , P t , y t , t = 1, . . . , T }, we assume the
prior distribution p(s, P ) factors as p(s)p(P ). As such, we
generate the network topologies s and the power injections P
independently:
• We generate the line statuses {sl } using independent and
identically distributed (IID) Bernoulli random variables,
with P(sl = 1) = 0.6, 0.9 and 0.96 for the IEEE 30, 118,
and 300 bus systems, respectively. We do not consider
disconnected networks in this study, and exclude the line
status samples if they lead to disconnected networks.
As such, considering that some lines must always be
connected to ensure network connectivity, after some
network reduction, the equivalent networks for the IEEE
30, 118, and 300 bus systems have 38, 170, and 322 lines
that can possibly be in outage, respectively.
• We would like our predictor to be able to identify
the topology for arbitrary values of power injections
as opposed to fixed ones. Accordingly, we generate P
using the following procedure: For each data sample,
we first generate bus voltage phase angles θ as IID
uniformly distributed random variables in [0, 0.2π], and
then compute P according to (2) under the baseline
topologies.
With each pair of generated st and P t , we consider two types
of measurements that constitute y: nodal voltage phase angle
measurements and nodal power injection measurements. For
these, a) we generate IID Gaussian voltage phase angle measurement noises with a standard deviation of 0.01 degree, the
state-of-the-art PMU accuracy [27], and b) we assume power
injections are measured accurately. Here, we consider that
measurements of voltage phase angles and power injections are
collected at all the buses. The effect of number and locations
of sensors will be discussed toward the end of this section.
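Putting the sampling choices together, a sketch of one data-generation step under the DC model. The incidence matrix, reactances and helper names are hypothetical, and recomputing the angles under the sampled topology before forming y is this sketch's reading of the setup rather than a detail stated explicitly above.

```python
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import connected_components

def is_connected(M, s):
    """True if the lines with s_l = 1 keep the network connected."""
    N = M.shape[0]
    adj = np.zeros((N, N))
    for l in np.flatnonzero(s):
        a, b = np.nonzero(M[:, l])[0]    # the two end buses of line l
        adj[a, b] = adj[b, a] = 1
    return connected_components(csr_matrix(adj), directed=False)[0] == 1

def generate_sample(rng, M, x, p_connected, noise_std):
    """One labeled pair (s, y): topology label and noisy measurement vector."""
    N, L = M.shape
    Gamma = np.diag(1.0 / x)
    s = (rng.random(L) < p_connected).astype(int)
    while not is_connected(M, s):                    # reject disconnected topologies
        s = (rng.random(L) < p_connected).astype(int)
    theta0 = rng.uniform(0.0, 0.2 * np.pi, size=N)   # IID angles in [0, 0.2*pi]
    P = M @ Gamma @ M.T @ theta0                     # injections under the baseline topology
    theta = np.linalg.pinv(M @ np.diag(s) @ Gamma @ M.T) @ P   # angles under topology s
    y = np.concatenate([theta + rng.normal(scale=noise_std, size=N), P])
    return s, y
```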
In this study, we generate 300K, 800K, and 2.2M data
samples for the IEEE 30, 118, and 300 bus systems, respectively. The 300K data for the IEEE 30 bus system are divided
into 200K, 50K and 50K samples for training, validation,
and testing, respectively; the 800K data for the IEEE 118 bus
system are divided into 600K, 100K and 100K samples; and
the 2.2M data for the IEEE 300 bus system are divided into
1.8M , 200K and 200K samples. We note that over 99% of
the generated 300K 30-bus topologies are distinct from each
other, so are that of the generated 800K 118 bus topologies
and that of the 2.2M 300 bus topologies. As a result, these
generated data set can very well evaluate the generalizability
of the trained classifiers, as (almost) all data samples in the
test set have topologies unseen in the training set.
Moreover, in the generated data sets, the average numbers
of disconnected lines relative to the baseline topology are
7.8, 13.4 and 11.6 for the IEEE 30, 118 and 300 bus systems,
respectively. These numbers of simultaneous line outages are
significantly higher than those typically assumed in sparse line
outage studies. Furthermore, we would like to compare the
size of the generated data set to the total number of possible
topology hypotheses, as highlighted in Table II. Clearly, a) it
is computationally prohibitive to perform line status inference
based on exhaustive search, and b) the generated 300K, 800K
and 2.2M data sets are only a tiny fraction of the entire
space of all topologies. Yet, we will show that the classifiers
trained with the generated data sets exhibit excellent inference
Fig. 4. Progressions of training and validation losses, IEEE 30, 118 and 300 bus systems.
Fig. 5. Progressions of testing accuracies, IEEE 30, 118 and 300 bus systems.
performance and generalizability.
B. Neural Network Structure and Training
We employ two-layer (i.e., one hidden layer) fully connected neural networks for both the separate training architecture and the feature sharing architecture. Rectified Linear
Units (ReLUs) are employed as the activation functions in
the hidden layer. In the output layer we employ hinge loss as
the loss function. In training the classifiers, we use stochastic
gradient descent (SGD) with momentum update and Nesterov’s acceleration [28]. While this optimization algorithm
works sufficiently well for our experiments, we note that other
algorithms may further accelerate the training procedure [29].
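A sketch of this training setup, assuming PyTorch; the multi-output hinge loss on ±1 targets and the SGD options follow the description above, while the learning rate, momentum value and batching are placeholder assumptions.

```python
import torch

def hinge_loss(outputs, s):
    # Hinge loss per output node, with line statuses mapped to targets in {-1, +1}.
    targets = 2.0 * s.float() - 1.0
    return torch.clamp(1.0 - targets * outputs, min=0.0).mean()

def train(model, loader, epochs=2000, lr=0.01):
    opt = torch.optim.SGD(model.parameters(), lr=lr, momentum=0.9, nesterov=True)
    for _ in range(epochs):
        for y_batch, s_batch in loader:        # simulated (measurement, topology) pairs
            opt.zero_grad()
            loss = hinge_loss(model(y_batch), s_batch)
            loss.backward()
            opt.step()
    return model
```

At test time the trained model's outputs are thresholded per line as in (11) to obtain the identified topology.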
Fig. 6. Progressions of average numbers of misidentified line statuses,
IEEE 30, 118 and 300 bus systems.
C. Evaluation Results
1) The Separate Training Architecture vs. the Feature Sharing Architecture: We compare the performance of the separate
training architecture (cf. Figure 2) and the feature sharing
architecture (cf. Figure 3) on the IEEE 30 bus system. We
train the neural network classifiers and obtain the accuracy of
identifying each line status. In the remainder of this section,
the average accuracies across all binary inference of line
statuses are presented.
For separately training a neural network for each line status
inference (out of 38 in total), we employ 75 neurons in the
hidden layer, whereas for training a single neural network
with feature sharing we employ 300 neurons. We note that
75 × 38 ≫ 300. The sizes of the models are chosen such
that both the separate training architecture and the feature
sharing architecture achieve the same average accuracy of
0.989, and their training times can be fairly compared. For
all neural networks, we run SGD for 2000 epochs in training.
On a laptop with an Intel Core i7 3.1-GHz CPU and 8 GB
of RAM, with the 200K training samples, it takes about
14.7 hours to separately train 38 neural networks of size 75,
but only 1.4 hours to train the one neural network of size
300 with feature sharing. We observe that the feature sharing
architecture is about 11 times faster to train than the separate
training architecture while achieving the same performance.
Such a speed advantage of the feature sharing architecture
will become even more pronounced in larger power networks.
As discussed in Section III-D, while the offline data generation and training procedures may take a reasonable amount
of time, the testing procedure, i.e., real time topology identification, is performed extremely fast: In all of our numerical
experiments, the testing time per data sample is under a
millisecond. The extremely fast testing speed demonstrates that
the proposed approach applies very well to real-time tasks,
such as failure identification during cascading failures.
2) Performance of the Learning-to-Infer Method: From this
point on, all simulations are performed with the feature sharing
architecture (cf. Figure 3). We now present the results for the
IEEE 30, 118 and 300 bus systems.
In particular, we continue to employ neural networks with
one hidden layer, and use 300, 1000 and 3000 neurons in
the hidden layer for the IEEE 30, 118 and 300 bus systems,
respectively. For all the three systems, we plot in Figure 4 the
achieved training and validation losses for every epoch, and
in Figure 5 the achieved testing accuracies for every epoch. It
is clear that the training and validation losses stay very close
Fig. 7. Effect of model size and sample size, IEEE 30 bus system.
Fig. 8. Effect of model size and sample size, IEEE 118 bus system.
to each other for all the three systems, and thus no overfitting
is observed. Moreover, very high testing accuracies, 0.989,
0.990 and 0.997 are achieved for the IEEE 30, 118 and 300
bus systems, respectively.
The testing accuracies can be equivalently understood by
the average numbers of misidentified line statuses, plotted in
Figure 6. We observe that, at the beginning of the training
procedures, the average numbers of misidentified line statuses
are 7.8, 13.4 and 11.6 for the IEEE 30, 118 and 300 bus
systems, which are exactly the average numbers of disconnected lines in the respective generated data sets (cf. Section
V-A). Indeed, this coincides with the result from a naive
identification decision rule of always claiming all the lines
as connected (i.e., a trivial majority rule). As the training
procedures progress, the average numbers of misidentified line
statuses are drastically reduced to eventually 0.4, 1.7 and 1.0.
In other words, for the IEEE 300 bus system for example,
facing on average 11.6 simultaneous line outages, only 1
line status would be misidentified on average by the learned
classifier. We again emphasize that such a performance is
achieved with identification decisions made in real time, under
a millisecond.
We would like to further emphasize that the topologies and
the power injections used to train the classifier are different
from the ones in the validation and test sets. This is of
particular interest because it means that our learned classifier
is able to generalize well on the unseen test topologies and
power injections based on its knowledge learned from the
training data set. It is also worth noting that we have generated
the training, validation and testing data sets with uniformly
random voltage phase angles, and hence considerably variable
power injections. In practice, there is often more informative
prior knowledge about the power injections based on historical
data and load forecasts. With such information, the model can
be trained with much less variable samples of power injections,
and the identification performance can be further improved
significantly.
3) Model Size, Sample Complexity, and Scalability: In the
proposed Learning-to-Infer method, obtaining labeled data is
not an issue since data can be generated in an arbitrarily
large amount using Monte Carlo simulations. This leads to
Fig. 9. Effect of model size and sample size, IEEE 300 bus system.
two questions that are of particular interest: to learn a good
classifier, a) what size of a neural network is needed? and
b) how much data needs to be generated? To answer these
questions,
• for the IEEE 30 bus system, we vary the size of the
hidden layer of the neural network from 100 neurons to
300 neurons, as well as the training data size from 10K
to 200K, and evaluate the learned classifiers;
• for the IEEE 118 bus system, we vary the size of the
hidden layer of the neural network from 300 neurons to
1000 neurons, as well as the training data size from 50K
to 600K, and evaluate the learned classifiers.
• for the IEEE 300 bus system, we vary the size of the
hidden layer of the neural network from 1000 neurons
to 3000 neurons, as well as the training data size from
150K to 1.8M , and evaluate the learned classifiers.
We plot the testing results for the IEEE 30, 118 and 300
bus systems in Figure 7, 8 and 9, respectively. We have the
following observations:
• For the IEEE 30 bus system: a) With 10K training
data, the neural network models of size 200 and 300
are severely overfit, but the levels of overfitting are
significantly reduced as the size of training data increases
Fig. 10. Scalability of the Learning-to-Infer method, from the IEEE 30
bus system to the IEEE 300 bus system.
to above 50K; b) The best performance is achieved with 300 neurons and 200K training data, where no overfitting is observed.
• For the IEEE 118 bus system: a) With 50K data, the three neural network models of sizes 300, 600 and 1000 all severely overfit, but the levels of overfitting are significantly reduced as the size of training data increases to above 150K; b) The best performance is achieved with 1000 neurons and 600K training data, where no overfitting is observed.
• For the IEEE 300 bus system: a) With 150K data, the three neural network models of sizes 1000, 2000 and 3000 all severely overfit, but the levels of overfitting are significantly reduced as the size of training data increases to above 450K; b) The best performance is achieved with 3000 neurons and 1.8M training data, where no overfitting is observed.
Based on all these experiments, we now examine the
scalability of the proposed Learning-to-Infer method as the
problem size increases. We observe that training data sizes
of 200K, 600K and 1.8M and neural network models of
sizes 300, 1000 and 3000 ensure very high and comparable
performance with no overfitting for the IEEE 30, 118 and 300
bus systems, respectively. When these data sizes are reduced
by a half, some levels of overfitting then appeared for these
models in all the three systems. We plot the training data
sizes compared to the problem sizes for the three systems
in Figure 10. We observe that the required training data size
increases approximately linearly with the problem size. This
linear scaling behavior implies that the proposed Learning-toInfer method can be effectively implemented for large-scale
systems with reasonable computation resources.
4) Effect of Number and Locations of Sensors: We close
this section with a look into the effect of sensor placement in
topology identification. It is clear that the performance of real-time topology identification would closely depend on where
and what types of sensor measurements are collected. Given
limited sensing resources, optimizing the sensor placement is
a hard problem that many studies have addressed (see,
Fig. 11. The IEEE 30 bus system, and a set of locations of PMUs.
e.g., [7] among others). Here, we present a case study on
the IEEE 30 bus system, for which voltage phase angles are
collected only at 19 buses (as opposed to all the buses as in the
previous experiments), as depicted in Figure 11. Interestingly,
the achieved average identification accuracy only drops to
0.978 (from 0.989 when all the buses are monitored.) This
translates to on average only 0.83 misidentified line statuses
among a total of 38 lines. A more comprehensive study of
sensor placement for real-time topology identification is left
for future work.
VI. C ONCLUSION
We have developed a new Learning-to-Infer variational inference method for real-time topology identification of power
grids. The computational complexity due to the exponentially
large number of topology hypotheses is overcome by efficient marginal inference with optimized variational models.
Optimization of the variational model is transformed to and
solved as a discriminative learning problem, based on Monte
Carlo samples efficiently generated with full-blown power
flow models. The developed Learning-to-Infer method has
the major advantages that a) the training process takes place completely offline, and b) labeled data sets can be generated quickly, at very little cost, and in arbitrarily large amounts. As a result, very complex variational models can be employed without worrying about overfitting, since more labeled training data can always be generated if overfitting is observed. With the classifiers learned offline, their actual use is in real time, and topology identification decisions are made in under a millisecond. We have evaluated the proposed method with the
IEEE 30, 118 and 300 bus systems. It has been demonstrated
that arbitrary network topologies can be identified in real time
with excellent performance using classifiers trained with a
reasonably small amount of generated data.
R EFERENCES
[1] Y. Zhao, J. Chen, and H. V. Poor, “Learning to infer: A new variational
inference approach for power grid topology identification,” in Proc.
IEEE Workshop on Statistical Signal Processing, Jun. 2016, pp. 1–5.
[2] ——, “Efficient neural network architecture for topology identification
in smart grid,” in Proc. IEEE Global Conference on Signal and Information Processing (GlobalSIP), Dec. 2016, pp. 811–815.
[3] US-Canada Power System Outage Task Force, Final Report on the
August 14, 2003 Blackout in the United States and Canada, 2004.
[4] Arizona-Southern California Outages on September 8, 2011: Causes and
Recommendations. FERC, NERC, 2012.
[5] J. E. Tate and T. J. Overbye, “Line outage detection using phasor angle
measurements,” IEEE Transactions on Power Systems, vol. 23, no. 4,
pp. 1644 – 1652, Nov. 2008.
[6] ——, “Double line outage detection using phasor angle measurements,”
in Proc. IEEE Power and Energy Society General Meeting, Jul. 2009.
[7] Y. Zhao, J. Chen, A. Goldsmith, and H. V. Poor, “Identification of
outages in power systems with uncertain states and optimal sensor
locations,” IEEE Journal of Selected Topics in Signal Processing, vol. 8,
no. 6, pp. 1140–1153, Dec. 2014.
[8] Y. Zhao, A. Goldsmith, and H. V. Poor, “On PMU location selection
for line outage detection in wide-area transmission networks,” in Proc.
IEEE Power and Energy Society General Meeting, July 2012, pp. 1–8.
[9] T. Kim and S. J. Wright, “PMU placement for line outage identification
via multinomial logistic regression,” IEEE Transactions on Smart Grid,
2017.
[10] M. Garcia, T. Catanach, S. Vander Wiel, R. Bent, and E. Lawrence,
“Line outage localization using phasor measurement data in transient
state,” IEEE Transactions on Power Systems, vol. 31, no. 4, pp. 3019–
3027, 2016.
[11] H. Zhu and G. B. Giannakis, “Sparse overcomplete representations for
efficient identification of power line outages,” IEEE Transactions on
Power Systems, vol. 27, no. 4, pp. 2215–2224, Nov. 2012.
[12] J. Chen, Y. Zhao, A. Goldsmith, and H. V. Poor, “Line outage detection
in power transmission networks via message passing algorithms,” in
Proc. 48th Asilomar Conference on Signals, Systems and Computers,
2014, pp. 350–354.
[13] M. He and J. Zhang, “A dependency graph approach for fault detection
and localization towards secure smart grid,” IEEE Transactions on Smart
Grid, vol. 2, no. 2, pp. 342–351, Jun. 2011.
[14] S. Bolognani, N. Bof, D. Michelotti, R. Muraro, and L. Schenato, “Identification of power distribution network topology via voltage correlation
analysis,” in IEEE Conference on Decision and Control, 2013, pp. 1659–
1664.
[15] V. Kekatos, G. B. Giannakis, and R. Baldick, “Online energy price matrix factorization for power grid topology tracking,” IEEE Transactions
on Smart Grid, vol. 7, no. 3, pp. 1239–1248, 2016.
[16] D. Deka, M. Chertkov, and S. Backhaus, “Structure learning in power
distribution networks,” IEEE Transactions on Control of Network Systems, 2017.
[17] Y. Weng, Y. Liao, and R. Rajagopal, “Distributed energy resources
topology identification via graphical modeling,” IEEE Transactions on
Power Systems, vol. 32, no. 4, pp. 2682–2694, 2017.
[18] Y. LeCun, Y. Bengio, and G. Hinton, “Deep learning,” Nature, vol. 521,
no. 7553, pp. 436–444, 2015.
[19] Power Systems Test Case Archive, University of Washington Electrical
Engineering, https://www.ee.washington.edu/research/pstca/.
[20] R. K. Ahuja, T. L. Magnanti, and J. B. Orlin, “Network flows: theory,
algorithms, and applications,” 1993.
[21] J. D. Glover, M. Sarma, and T. Overbye, Power System Analysis &
Design. Cengage Learning, 2011.
[22] R. Baldick, Applied optimization: formulation and algorithms for engineering systems. Cambridge University Press, 2006.
[23] C. M. Bishop, Pattern Recognition and Machine Learning. Springer,
2006.
[24] H. V. Poor, An Introduction to Signal Detection and Estimation.
Springer-Verlag, New York, 1994.
[25] M. Mezard and A. Montanari, Information, physics, and computation.
Oxford University Press, 2009.
[26] V. Vapnik, Statistical Learning Theory. Wiley, New York, 1998.
[27] A. von Meier, D. Culler, A. McEachern, and R. Arghandeh, “Microsynchrophasors for distribution systems,” in Proc. IEEE Innovative
Smart Grid Technologies (ISGT), 2013.
[28] Y. Nesterov, Introductory Lectures on Convex Optimization: A Basic
Course. Springer Science & Business Media, 2013, vol. 87.
[29] D. Kingma and J. Ba, “Adam: A method for stochastic optimization,”
arXiv preprint arXiv:1412.6980, 2014.
A PROBABILISTIC ℓ1 METHOD FOR CLUSTERING HIGH DIMENSIONAL DATA
arXiv:1504.01294v2 [] 22 Apr 2016
TSVETAN ASAMOV AND ADI BEN–ISRAEL
Abstract. In general, the clustering problem is NP–hard, and global optimality cannot be established for
non–trivial instances. For high–dimensional data, distance–based methods for clustering or classification
face an additional difficulty, the unreliability of distances in very high–dimensional spaces. We propose a
probabilistic, distance–based, iterative method for clustering data in very high–dimensional space, using the
ℓ1 –metric that is less sensitive to high dimensionality than the Euclidean distance. For K clusters in Rn ,
the problem decomposes to K problems coupled by probabilities, and an iteration reduces to finding Kn
weighted medians of points on a line. The complexity of the algorithm is linear in the dimension of the data
space, and its performance was observed to improve significantly as the dimension increases.
1. Introduction
The emergence and growing applications of big data have underscored the need for efficient algorithms
based on optimality principles, and scalable methods that can provide valuable insights at a reasonable
computational cost.
In particular, problems with high–dimensional data have arisen in several scientific and technical areas, such as genetics [19], medical imaging [29] and spatial databases [21]. These problems pose a special
challenge because of the unreliability of distances in very high dimensions. In such problems it is often
advantageous to use the ℓ1 –metric which is less sensitive to the “curse of dimensionality” than the Euclidean
distance.
We propose a new probabilistic distance–based method for clustering data in very high–dimensional
spaces. The method uses the ℓ1 –distance, and computes the cluster centers using weighted medians of
the given data points. Our algorithm resembles well–known techniques such as fuzzy clustering [9] and
K–means, and inverse distance interpolation [26].
The cluster membership probabilities are derived from necessary optimality conditions for an approximate problem, and decompose a clustering problem with K clusters in Rn into Kn one–dimensional
problems, which can be solved separately. The algorithm features a straightforward implementation and
a polynomial running time, in particular, its complexity is linear in the dimension n. In numerical experiments it outperformed several commonly used methods, with better results for higher dimensions.
While the cluster membership probabilities simplify our notation, and link our results to the theory of
subjective probability, these probabilities are not needed by themselves, since they are given in terms of
distances, that have to be computed at each iteration.
1.1. Notation. We use the abbreviation 1,K := {1, 2, . . . , K} for the indicated index set. The j-th component of a vector x_i ∈ R^n is denoted x_i[j]. The ℓ_p–norm of a vector x = (x[j]) ∈ R^n is
    \|x\|_p := \Big( \sum_{j=1}^{n} |x[j]|^p \Big)^{1/p},
Date: April 7, 2016.
2010 Mathematics Subject Classification. Primary 62H30, 90B85; Secondary 90C59.
Key words and phrases. Clustering, ℓ1 –norm, high–dimensional data, continuous location.
and the associated ℓ_p–distance between two vectors x and y is d_p(x, y) := \|x − y\|_p; in particular, the Euclidean distance with p = 2, and the ℓ_1–distance,
    d_1(x, y) = \|x − y\|_1 = \sum_{j=1}^{n} |x[j] − y[j]|.   (1)
1.2. The clustering problem. Given
• a set X = {x i : i ∈ 1,N } ⊂ Rn of N points x i in Rn ,
• their weights W = {w i > 0 : i ∈ 1,N }, and
• an integer 1 ≤ K ≤ N ,
partition X into K clusters {X k : k ∈ 1,K}, defined as disjoint sets where the points in each cluster are
similar (in some sense), and points in different clusters are dissimilar. If by similar is meant close in some
metric d(x, y), we have a metric (or distance based) clustering problem, in particular ℓ1 –clustering
if the ℓ1 –distance is used, Euclidean clustering for the ℓ2 –distance, etc.
1.3. Centers. In metric clustering each cluster has a representative point, or center, and distances to
clusters are defined as the distances to their centers. The center c k of cluster X k is a point c that minimizes
the sum of weighted distances to all points of the cluster,
    c_k := \arg\min_{c} \sum_{x_i \in X_k} w_i\, d(x_i, c).   (2)
Thus, the metric clustering problem can be formulated as follows: Given X, W and K as above, find centers {c_k : k ∈ 1,K} ⊂ R^n so as to minimize
    \min_{c_1, \dots, c_K} \sum_{k=1}^{K} \sum_{x_i \in X_k} w_i\, d(x_i, c_k),   (L.K)
where X k is the cluster of points in X assigned to the center c k .
1.4. Location problems. Metric clustering problems often arise in location analysis, where X is the set
of the locations of customers, W is the set of their weights (or demands), and it is required to locate
K facilities {c k } to serve the customers optimally in the sense of total weighted-distances traveled. The
problem (L.K) is then called a multi–facility location problem, or a location–allocation problem
because it is required to locate the centers, and to assign or allocate the points to them.
Problem (L.K) is trivial for K = N (every point is its own center) and reduces for K = 1 to the single
facility location problem: find the location of a center c ∈ Rn so as to minimize the sum of weighted
distances,
    \min_{c \in R^n} \sum_{i=1}^{N} w_i\, d(x_i, c).   (L.1)
For 1 < K < N , the problem (L.K) is NP-hard in general [24], while the planar case can be solved
polynomially in N , [13].
1.5. Probabilistic approximation. (L.K) can be approximated by a continuous problem
    \min_{c_1, \dots, c_K} \sum_{k=1}^{K} \sum_{x_i \in X} w_i\, p_k(x_i)\, d(x_i, c_k),   (P.K)
where rigid assignments x i ∈ X k are replaced by probabilistic (soft) assignments, expressed by probabilities
pk (x i ) that a point x i belongs to the cluster X k .
HIGH DIMENSIONAL CLUSTERING
3
For each point x i the cluster membership probabilities pk (x i ) sum to 1, and are assumed to depend
on the distances d(x i , c k ) as follows
    membership in a cluster is more likely the closer is its center   (A)
Given these probabilities, the problem (P.K) can be decomposed into K single facility location problems,
    \min_{c_k} \sum_{x_i \in X} p_k(x_i)\, w_i\, d(x_i, c_k),   (P.k)
for k ∈ 1,K. The solutions c k of the K problems (P.k), are then used to calculate the new distances
d(x i , c k ) for all i ∈ 1,N , k ∈ 1,K, and from them, new probabilities {pk (x i )}, etc.
1.6. The case for the ℓ1 norm. In high dimensions, distances between points become unreliable [7], and
this in particular “makes a proximity query meaningless and unstable because there is poor discrimination
between the nearest and furthest neighbor” [1]. For the Euclidean distance
    d_2(x, y) = \Big( \sum_{j=1}^{n} (x[j] − y[j])^2 \Big)^{1/2} = \big( \|x\|_2^2 − 2 \sum_{j=1}^{n} x[j]\, y[j] + \|y\|_2^2 \big)^{1/2}   (3)
between random points x, y ∈ R^n, the cross products x[j] y[j] in (3) tend to cancel for very large n, and consequently,
    d_2(x, y) ≈ ( \|x\|_2^2 + \|y\|_2^2 )^{1/2}.
In particular, if x, y are random points on the unit sphere in R^n then d_2(x, y) ≈ \sqrt{2} for very large n. This
“curse of high dimensionality” limits the applicability of distance based methods in high dimension.
The ℓ1 –distance is less sensitive to high dimensionality, and has been shown to “provide the best discrimination in high–dimensional data spaces”, [1]. We use it throughout this paper.
The plan of the paper. The ℓ1 –metric clustering problem is solved in § 2 for one center. A probabilistic approximation of (L.K) is discussed in § 3, the probabilities studied in §§ 4–5. The centers of the
approximate problem are computed in § 6. Our main result, Algorithm PCM(ℓ1 ) of § 8, uses the power
probabilities of § 7, and has running time that is linear in the dimension of the space, see Corollary 1.
Theorem 1, a monotonicity property of Algorithm PCM(ℓ1 ), is proved in § 9. Section 10 lists conclusions.
Appendix A shows relations to previous work, and Appendix B reports some numerical results.
2. The single facility location problem with the ℓ1 –norm
For the ℓ1 –distance (1) the problem (L.1) becomes
    \min_{c \in R^n} \sum_{i=1}^{N} w_i\, d_1(x_i, c),
or
    \min_{c \in R^n} \sum_{i=1}^{N} w_i \sum_{j=1}^{n} |x_i[j] − c[j]|,   (4)
in the variable c ∈ R^n, which can be solved separately for each component c[j], giving the n problems
    \min_{c[j] \in R} \sum_{i=1}^{N} w_i\, |x_i[j] − c[j]|, \quad j ∈ 1,n.   (5)
Definition 1. Let X = {x_1, · · · , x_N} ⊂ R be an ordered set of points
    x_1 ≤ x_2 ≤ · · · ≤ x_N
and let W = {w_1, · · · , w_N} be a corresponding set of positive weights. A point x is a weighted median (or W–median) of X if there exist α, β ≥ 0 such that
    \sum \{w_i : x_i < x\} + α = \sum \{w_i : x_i > x\} + β   (6)
where α + β is the weight of x if x ∈ X, and α = β = 0 if x ∉ X.
The weighted median always exists, but is not necessarily unique.
Lemma 1. For X, W as above, define
    θ_k := \frac{\sum_{i=1}^{k} w_i}{\sum_{i=1}^{N} w_i}, \quad k ∈ 1,N,   (7)
and let k_* be the smallest k with θ_{k_*} ≥ 1/2. If
    θ_{k_*} > 1/2   (8)
then x_{k_*} is the unique weighted median, with
    α = \tfrac{1}{2} \Big( w_{k_*} + \sum_{k > k_*} w_k − \sum_{k < k_*} w_k \Big), \quad β = w_{k_*} − α.   (9)
Otherwise, if
    θ_{k_*} = 1/2,   (10)
then any point in the open interval (x_{k_*}, x_{k_*+1}) is a weighted median with α = β = 0.
Proof. The statement holds since the sequence (7) is increasing from θ_1 = w_1 / \sum_{k=1}^{N} w_k to θ_N = 1.
Note: In case (10),
    \sum \{w_k : x_k ≤ x_{k_*}\} = \sum \{w_k : x_k ≥ x_{k_*+1}\},
and we can take the median as the midpoint of x_{k_*} and x_{k_*+1}, in order to conform with the classical definition of the median (for an even number of points of equal weight).
Lemma 2. Given X and W as in Definition 1, the set of minimizers c of
    \sum_{i=1}^{N} w_i\, |x_i − c|
is the set of W–medians of X.
Proof. The result is well known if all weights are 1. If the weights are integers, consider a point x i with
weight w i as w i coinciding points of weight 1 and the result follows. Same if the weights are rational.
Finally, if the weights are real, consider their rational approximations.
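For illustration only, a minimal Python sketch of the weighted-median computation behind Lemma 1 follows (the quantities θ_k of (7) are accumulated until they first reach 1/2; in the tie case (10) the sketch simply returns x_{k_*}, which is also a minimizer by Lemma 2, rather than the midpoint):

    def weighted_median(points, weights):
        """Weighted median of points on a line (Definition 1 and Lemma 1).

        points, weights: equal-length sequences, weights positive.
        Returns x_{k*} for the smallest k* with theta_{k*} >= 1/2, cf. (7).
        """
        order = sorted(range(len(points)), key=lambda i: points[i])
        total = float(sum(weights))
        cum = 0.0
        for i in order:
            cum += weights[i]
            if cum >= total / 2.0:
                return points[i]
        return points[order[-1]]   # not reached for positive weights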
3. Probabilistic approximation of (L.K)
We relax the assignment problem in (L.K) of § 1.2 by using a continuous approximation as follows,
    \min \sum_{k=1}^{K} \sum_{i=1}^{N} w_i\, p_k(x_i)\, d(x_i, c_k)   (P.K)
with two sets of variables, the centers {c_k}, and the cluster membership probabilities {p_k(x_i)},
    p_k(x_i) := \mathrm{Prob}\{x_i ∈ X_k\}, \quad i ∈ 1,N, \; k ∈ 1,K.   (11)
Because the probabilities {p k (x i )} add to 1 for each i ∈ 1,N , the objective function of (P.K) is an upper
bound on the optimal value of (L.K),
    \sum_{k=1}^{K} \sum_{i=1}^{N} w_i\, p_k(x_i)\, d(x_i, c_k) ≥ \min\,(L.K),   (12)
and therefore so is the optimal value of (P.K),
    \min\,(P.K) ≥ \min\,(L.K).   (13)
4. Axioms for probabilistic distance clustering
In this section, d_k(x) stands for d(x, c_k), the distance of x to the center c_k of the k-th cluster, k ∈ 1,K. To simplify notation, the point x is assumed to have weight w = 1.
The cluster membership probabilities {pk (x) : k ∈ 1,K} of a point x depend only on the distances
{dk (x) : k ∈ 1,K},
    p(x) = f(d(x))   (14)
where p(x) ∈ RK is the vector of probabilities (pk (x)), and d(x) is the vector of distances (dk (x)). Natural
assumptions for the relation (14) include
    d_i(x) < d_j(x) \implies p_i(x) > p_j(x), for all i, j ∈ 1,K   (15a)
    f(λ\, d(x)) = f(d(x)), for any λ > 0   (15b)
    Q\, p(x) = f(Q\, d(x)), for any permutation matrix Q   (15c)
Condition (15a) states that membership in a cluster is more probable the closer it is, which is Assumption
(A) of § 1.5. The meaning of (15b) is that the probabilities pk (x) do not depend on the scale of measurement,
i.e., f is homogeneous of degree 0. It follows that the probabilities pk (x) depend only on the ratios of the
distances {dk (x) : k ∈ 1,K}.
The symmetry of f , expressed by (15c), guarantees for each k ∈ 1, K, that the probability pk (x) does
not depend on the numbering of the other clusters.
Assuming continuity of f it follows from (15a) that
di (x) = dj (x) =⇒ pi (x) = pj (x),
for any i, j ∈ 1, K.
For any nonempty subset S ⊂ 1,K, let
    p_S(x) = \sum_{s ∈ S} p_s(x),
the probability that x belongs to one of the clusters {C_s : s ∈ S}, and let p_k(x|S) denote the conditional probability that x belongs to the cluster C_k, given that it belongs to one of the clusters {C_s : s ∈ S}.
Since the probabilities p_k(x) depend only on the ratios of the distances {d_k(x) : k ∈ 1,K}, and these ratios are unchanged in subsets S of the index set 1,K, it follows that for all k ∈ 1,K and all ∅ ≠ S ⊂ 1,K,
    p_k(x) = p_k(x|S)\, p_S(x)   (16)
which is the choice axiom of Luce [22, Axiom 1], and therefore [30],
    p_k(x|S) = \frac{v_k(x)}{\sum_{s ∈ S} v_s(x)}   (17)
where v_k(x) is a scale function, in particular,
    p_k(x) = \frac{v_k(x)}{\sum_{s ∈ 1,K} v_s(x)}.   (18)
Assuming v_k(x) ≠ 0 for all k, it follows that
    p_k(x)\, v_k(x)^{-1} = \frac{1}{\sum_{s ∈ 1,K} v_s(x)},   (19)
where the right hand side is a function of x, and does not depend on k.
Property (15a) implies that the function vk (·) is a monotone decreasing function of dk (x).
5. Cluster membership probabilities as functions of distance
Given K centers {c k }, and a point x with weight w and distances {d(x, c k ) : k ∈ 1,K} from these
centers, a simple choice for the function v_k(x) in (17) is
    v_k(x) = \frac{1}{w\, d_k(x)},   (20)
for which (19) gives¹
    w\, p_k(x)\, d(x, c_k) = D(x), \quad k ∈ 1,K,   (21)
where the function D(x), called the joint distance function (JDF) at x, does not depend on k.
For a given point x and given centers {c_k}, equations (21) are optimality conditions for the extremum problem
    \min \Big\{ w \sum_{k=1}^{K} p_k^2\, d(x, c_k) : \sum_{k=1}^{K} p_k = 1, \; p_k ≥ 0, \; k ∈ 1,K \Big\}   (22)
in the probabilities {p_k := p_k(x)}. The squares of probabilities in the objective of (22) serve to smooth the underlying non–smooth problem, see the seminal paper by Teboulle [27]. Indeed, (21) follows by differentiating the Lagrangian
    L(p, λ) = w \sum_{k=1}^{K} p_k^2\, d(x, c_k) + λ \Big( \sum_{k=1}^{K} p_k − 1 \Big),   (23)
with respect to p_k and equating the derivative to zero.
Since probabilities add to one we get from (21),
    p_k(x) = \frac{\prod_{j ≠ k} d(x, c_j)}{\sum_{\ell=1}^{K} \prod_{m ≠ \ell} d(x, c_m)}, \quad k ∈ 1,K,   (24)
and the JDF at x,
    D(x) = w\, \frac{\prod_{j=1}^{K} d(x, c_j)}{\sum_{\ell=1}^{K} \prod_{m ≠ \ell} d(x, c_m)},   (25)
which is (up to a constant) the harmonic mean of the distances {d(x, c_k) : k ∈ 1,K}, see also (A-4) below.
¹There are other ways to model Assumption (A), e.g. [5], but the simple model (21) works well enough for our purposes.
Note that the probabilities {p_k(x) : k ∈ 1,K} are determined by the centers {c_k : k ∈ 1,K} alone, while the function D(x) depends also on the weight w. For example, in case K = 2,
    p_1(x) = \frac{d(x, c_2)}{d(x, c_1) + d(x, c_2)}, \quad p_2(x) = \frac{d(x, c_1)}{d(x, c_1) + d(x, c_2)},   (26a)
    D(x) = w\, \frac{d(x, c_1)\, d(x, c_2)}{d(x, c_1) + d(x, c_2)}.   (26b)
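A small NumPy sketch of (24) and (25) for a single point follows; the handling of a zero distance (x coinciding with a center) is an implementation choice not discussed in the text.

    import numpy as np

    def membership_probabilities(dists):
        """Probabilities (24): p_k is proportional to the product of the
        distances to the *other* centers, so closer centers are more probable."""
        d = np.asarray(dists, dtype=float)
        if np.any(d == 0):                     # x coincides with one or more centers
            p = (d == 0).astype(float)
            return p / p.sum()
        prods = np.array([np.prod(np.delete(d, k)) for k in range(d.size)])
        return prods / prods.sum()

    def joint_distance_function(dists, w=1.0):
        """JDF (25): w times the product of all distances divided by the sum
        of the leave-one-out products (a scaled harmonic mean of the distances)."""
        d = np.asarray(dists, dtype=float)
        prods = np.array([np.prod(np.delete(d, k)) for k in range(d.size)])
        return w * np.prod(d) / prods.sum()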
6. Computation of centers
We use the ℓ1 –distance (1) throughout. The objective function of (P.K) is a separable function of the
cluster centers,
    f(c_1, \dots, c_K) := \sum_{k=1}^{K} f_k(c_k),   (27a)
where
    f_k(c) := \sum_{i=1}^{N} w_i\, p_k(x_i)\, d_1(x_i, c), \quad k ∈ 1,K.   (27b)
The centers problem thus separates into K problems of type (4),
    \min_{c_k ∈ R^n} \sum_{i=1}^{N} w_i\, p_k(x_i) \sum_{j=1}^{n} |x_i[j] − c_k[j]|, \quad k ∈ 1,K,   (28)
coupled by the probabilities {p_k(x_i)}. Each of these problems separates into n problems of type (5) for the components c_k[j],
    \min_{c_k[j] ∈ R} \sum_{i=1}^{N} w_i\, p_k(x_i)\, |x_i[j] − c_k[j]|, \quad k ∈ 1,K, \; j ∈ 1,n,   (29)
whose solution, by Lemma 2, is a weighted median of the points {x_i[j]} with weights {w_i\, p_k(x_i)}.
7. Power probabilities
The cluster membership probabilities {pk (x) : k ∈ 1,K} of a point x serve to relax the rigid assignment
of x to any of the clusters, but eventually it may be necessary to produce such an assignment. One way
to achieve this is to raise the membership probabilities pk (x) of (24) to a power ν ≥ 1, and normalize,
obtaining the power probabilities
pν (x)
(ν)
p k (x) := K k
,
(30)
P ν
p j (x)
j=1
which, by (24), can also be expressed in terms of the distances d(x, c k ),
Q
d(x, c j )ν
(ν)
p k (x) :=
j6=k
K
P
Q
d(x, c m
)ν
, k ∈ 1,K.
(31)
ℓ=1 m6=ℓ
(ν)
As the exponent ν increases the power probabilities p k (x) tend to hard assignments: If M is the index
set of maximal probabilities, and M has #M elements, then,
1
(ν)
#M , if k ∈ M ;
lim p k (x) =
(32)
ν→∞
0,
otherwise,
and the limit is a hard assignment if #M = 1, i.e. if the maximal probability is unique.
Numerical experience suggests an increase of ν at each iteration, see, e.g., (33) below.
8. Algorithm PCM(ℓ1 ): Probabilistic Clustering Method with ℓ1 distances
The problem (P.K) is solved iteratively, using the following updates in succession.
Probabilities computation: Given K centers {c_k}, the assignment probabilities {p_k^{(ν)}(x_i)} are calculated using (31). The exponent ν is updated at each iteration, say by a constant increment ∆ ≥ 0,
    ν := ν + ∆   (33)
starting with an initial ν_0.
Centers computation: Given the assignment probabilities {p_k^{(ν)}(x_i)}, the problem (P.K) separates into Kn problems of type (29),
    \min_{c_k[j] ∈ R} \sum_{i=1}^{N} w_i\, p_k^{(ν)}(x_i)\, |x_i[j] − c_k[j]|, \quad k ∈ 1,K, \; j ∈ 1,n,   (34)
one for each component c_k[j] of each center c_k, that are solved by Lemma 2.
These results are presented in algorithm form as follows.
Algorithm PCM(ℓ1): An algorithm for the ℓ1 clustering problem
Data:
  X = {x_i : i ∈ 1,N} data points, {w_i : i ∈ 1,N} their weights,
  K the number of clusters,
  ǫ > 0 (stopping criterion),
  ν_0 ≥ 1 (initial value of the exponent ν), ∆ > 0 (the increment in (33)).
Initialization: K arbitrary centers {c_k : k ∈ 1,K}, ν := ν_0.
Iteration:
  Step 1: compute the distances {d_1(x, c_k) : k ∈ 1,K} for all x ∈ X
  Step 2: compute the assignments {p_k^{(ν)}(x) : x ∈ X, k ∈ 1,K} (using (31))
  Step 3: compute the new centers {c_k^+ : k ∈ 1,K} (applying Lemma 2 to (34))
  Step 4: if \sum_{k=1}^{K} d_1(c_k^+, c_k) < ǫ, stop; else ν := ν + ∆, and return to Step 1
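A compact NumPy sketch of Algorithm PCM(ℓ1) is given below for illustration. The random initialization of the centers, the guard against zero distances, and the cap on the number of iterations are implementation choices (cf. Remark 1(b)); they are not prescribed by the algorithm statement above.

    import numpy as np

    def pcm_l1(X, w, K, nu0=1.0, delta=0.1, eps=1e-6, max_iter=100, seed=None):
        """Sketch of Algorithm PCM(l1).

        X: (N, n) data matrix, w: (N,) positive weights, K: number of clusters.
        Returns the K centers and the (N, K) matrix of power probabilities.
        """
        rng = np.random.default_rng(seed)
        N, n = X.shape
        centers = X[rng.choice(N, size=K, replace=False)].astype(float)
        nu = nu0
        for _ in range(max_iter):
            # Step 1: l1 distances from every point to every center, shape (N, K)
            D = np.abs(X[:, None, :] - centers[None, :, :]).sum(axis=2)
            D = np.maximum(D, 1e-12)                       # guard against zero distance
            # Step 2: power probabilities (31), computed with logs for stability
            logD = nu * np.log(D)
            logP = logD.sum(axis=1, keepdims=True) - logD  # log prod_{j != k} d_j^nu
            logP -= logP.max(axis=1, keepdims=True)
            P = np.exp(logP)
            P /= P.sum(axis=1, keepdims=True)
            # Step 3: every center component is a weighted median, cf. (34)
            new_centers = np.empty_like(centers)
            for k in range(K):
                wk = w * P[:, k]
                for j in range(n):
                    order = np.argsort(X[:, j])
                    cum = np.cumsum(wk[order])
                    idx = np.searchsorted(cum, cum[-1] / 2.0)
                    new_centers[k, j] = X[order[idx], j]
            # Step 4: stop when the centers no longer move
            if np.abs(new_centers - centers).sum() < eps:
                centers = new_centers
                break
            centers = new_centers
            nu += delta
        return centers, P

For two synthetic clusters, `centers, P = pcm_l1(X, np.ones(len(X)), K=2)` returns the centers and the soft assignments; hard labels can then be read off as `P.argmax(axis=1)`.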
Corollary 1. The running time of Algorithm PCM(ℓ1 ) is
    O(N K (K^2 + n) I),   (35)
where n is the dimension of the space, N the number of points, K the number of clusters, and I is the
number of iterations.
Proof. The number of operations in an iteration is calculated as follows:
Step 1: O(nN K), since computing the ℓ1 distance between two n-dimensional vectors takes O(n) time,
and there are N K distances between all points and all centers.
Step 2: O(N K^3), there are N K assignments, each taking O(K^2).
Step 3: O(nN K), computing the weighted median of N points in R takes O(N ) time, and K n such
medians are computed.
Step 4: O(nK), since there are K cluster centers of dimension n.
The corollary is proved by combining the above results.
Remark 1.
(a) The result (35) shows that Algorithm PCM(ℓ1 ) is linear in n, which in high–dimensional data is much
greater than N and K.
(b) The first few iterations of the algorithm come close to the final centers, and thereafter the iterations
are slow, making the stopping rule in Step 4 ineffective. A better stopping rule is a bound on the number
of iterations I, which can then be taken as a constant in (35).
(c) Algorithm PCM(ℓ1 ) can be modified to account for very unequal cluster sizes, as in [14]. This modification did not significantly improve the performance of the algorithm in our experiments.
(d) The centers here are computed from scratch at each iteration using the current probabilities, unlike
the Weiszfeld method [28] or its generalizations, [17]–[18], where the centers are updated at each iteration.
9. Monotonicity
The centers computed iteratively by Algorithm PCM(ℓ1 ) are confined to the convex hull of X, a compact
set, and therefore a subsequence converges to an optimal solution of the approximate problem (P.K), that
in general is not an optimal solution of the original problem (L.K).
The JDF of the data set X is defined as the sum of the JDFs of its points,
    D(X) := \sum_{x ∈ X} D(x).   (36)
We prove next a monotonicity result for D(X).
Theorem 1. The function D(X) decreases along any sequence of iterates of the centers.
Proof. The function D(X) can be written as
    D(X) := \sum_{x ∈ X} \Big( \sum_{k=1}^{K} p_k(x) \Big) D(x), since the probabilities add to 1,
          = \sum_{x ∈ X} \sum_{k=1}^{K} w(x)\, p_k(x)^2\, d_1(x, c_k), by (21).   (37)
The proof is completed by noting that, for each x, the probabilities {p_k(x) : k ∈ 1,K} are chosen so as to minimize the function
    \sum_{k=1}^{K} w(x)\, p_k(x)^2\, d_1(x, c_k)   (38)
for the given centers, see (22), and the centers {c_k : k ∈ 1,K} minimize the function (38) for the given probabilities.
Remark 2. The function D(X) also decreases if the exponent ν is increased in (30), for then shorter distances become more probable in (37).
10. Conclusions
In summary, our approach has the following advantages.
(1) In numerical experiments, see Appendix B, Algorithm PCM(ℓ1 ) outperformed the fuzzy clustering
ℓ1 –method, the K–means ℓ1 method, and the generalized Weiszfeld method [17].
(2) The solutions of (22) are less sensitive to outliers than the solutions of (A-5), which uses squares
of distances.
(3) The probabilistic principle (A-8) allows using other monotonic functions, in particular the exponential function φ(d) = e^d, which gives sharper results and requires only that every distance d(x, c) be replaced by exp{d(x, c)}, [5].
(4) The JDF (36) of the data set provides a guide to the “right” number of clusters for the given data [6].
References
[1] C.C. Aggarwal, A. Hinneburg and D.A. Keim, On the surprising behavior of distance metrics in high dimensional spaces,
Lecture Notes in Mathematics 1748(2000), 420–434, Springer–Verlag.
[2] A. Andoni and P. Indyk, Near-optimal hashing algorithms for approximate nearest neighbor in high dimensions, Proceedings of the 47th Annual IEEE Symposium on the Foundations of Computer Science, 2006.
[3] M. Arav, Contour approximation of data and the harmonic mean, J. Math. Inequalities 2(2008), 161–167.
[4] A. Beck and S. Sabach, Weiszfeld’s Method: Old and New Results, J. Optimiz. Th. Appl. 164(2015), 1–40.
[5] A. Ben–Israel and C. Iyigun, Probabilistic distance clustering, J. Classification 25(2008), 5–26.
[6] A. Ben–Israel and C. Iyigun, Clustering, Classification and Contour Approximation of Data, pp. 75–100 in Biomedical
Mathematics: Promising Directions in Imaging, Therapy Planning and Inverse Problems, Y. Censor, Ming Jiang and Ge
Wang (Editors), Medical Physics Publishing, Madison, Wisconsin, 2010, ISBN 978-1-930524-48-4.
[7] K. Beyer, J. Goldstein, R. Ramakrishnan and U. Shaft, When is nearest neighbors meaningful?, Int. Conf. Database
Theory (ICDT) Conference Proceedings, 1999, 217– 235.
[8] J.C. Bezdek, Fuzzy mathematics in pattern classification, Doctoral Dissertation, Cornell University, Ithaca, 1973.
[9] J.C. Bezdek, Pattern Recognition with Fuzzy Objective Function Algorithms, Plenum, New York, 1981, ISBN 0-306-406713.
[10] J.C. Bezdek and S.K. Pal, (editors), Fuzzy Models for Pattern Recognition: Methods that Search for Structure in Data,
IEEE Press, New York, 1992
[11] B. Chazelle, Finding a good neighbor, near and fast, Comm. ACM 51(2008), 115.
[12] K. R. Dixon and J. A. Chapman, Harmonic mean measure of animal activity areas, Ecology 61(1980), 1040–1044
[13] Z. Drezner, The planar two–center and two–median problems, Transportation Science 18(1984), 351–361.
[14] C. Iyigun and A. Ben–Israel, Probabilistic distance clustering adjusted for cluster size, Probability in Engineering and
Informational Sciences 22(2008), 1–19.
[15] C. Iyigun and A. Ben–Israel, Contour approximation of data: A duality theory, Lin. Algeb. and Appl. 430(2009), 2771–
2780.
[16] C. Iyigun and A. Ben–Israel, Semi–supervised probabilistic distance clustering and the uncertainty of classification, pp.
3–20 in Advances in Data Analysis, Data Handling and Business Intelligence, A. Fink, B. Lausen, W. Seidel and A. Ultsch
(Editors), Studies in Classification, Data Analysis and Knowledge Organization, Springer 2010, ISBN 978-3-642-01043-9.
[17] C. Iyigun and A. Ben–Israel, A generalized Weiszfeld method for the multi–facility location problem, O.R. Letters
38(2010), 207–214.
[18] C. Iyigun and A. Ben–Israel, Contributions to the multi–facility location problem, (to appear)
[19] K. Kailing, H. Kriegel and P. Kröger, Density-connected subspace clustering for high-dimensional data, In Proc. 4th
SIAM Int. Conf. on Data Mining (2004), 246–257
[20] F. Klawonn, What Can Fuzzy Cluster Analysis Contribute to Clustering of High-Dimensional Data?, pp. 1–14 in Fuzzy
Logic and Applications, F. Masulli, G.Pasi and R. Yager, (Editors), Lecture Notes in Artificial Intelligence, Springer 2013,
ISBN 978-3-319-03199-6.
[21] E. Kolatch, Clustering algorithms for spatial databases: A survey,
http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.28.1145&rep=rep1&type=pdf
[22] R.D. Luce, Individual Choice Behavior: A Theoretical Analysis, Wiley, New York, 1959, ISBN 0-486-44136-9.
HIGH DIMENSIONAL CLUSTERING
11
[23] MATLAB version 7.14.0.739. Natick, Massachusetts: The MathWorks Inc., 2012.
[24] N. Megiddo and K.J. Supowit, On the complexity of some common geometric location problems, SIAM Journal on
Computing 13(1984), 182–196.
[25] R.W. Stanforth, E. Kolossov and B. Mirkin, A measure of domain of applicability for QSAR modelling based on intelligent
K–means clustering, QSAR Comb. Sci. 26 (2007), 837–844.
[26] D.S. Shepard, A two-dimensional interpolation function for irregularly spaced data, Proceedings of 23rd National Conference, Association for Computing Machinery. Princeton, NJ: Brandon/Systems Press, 1968, pp. 517–524.
[27] M. Teboulle, A unified continuous optimization framework for center–based clustering methods, J. Machine Learning
8(2007), 65–102.
[28] E. Weiszfeld, Sur le point par lequel la somme des distances de n points donnés est minimum, Tohoku Math. J. 43 (1937),
355–386.
[29] D.H. Ye, K.M. Pohl, H. Litt and C. Davatzikos, Groupwise morphometric analysis based on high dimensional clustering,
IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops (CVPRW) (2010), 47–54
[30] J.I. Yellott, Jr., Luce’s choice axiom. In N.J. Smelser and P.B. Baltes, editors. International Encyclopedia of the Social &
Behavioral Sciences, pp. 9094–9097. ISBN 0-08-043076-7, 2001.
[31] B. Zhang, M. Hsu, and U. Dayal, k–Harmonic Means A Spatial Clustering Algorithm with Boosting, Temporal, Spatial,
and SpatioTemporal Data Mining, pp. 31–45, 2000.
[32] B. Zhang, M. Hsu, and U. Dayal, Harmonic Average Based Clustering Method and System, US Patent 6,584,433, 2000.
Appendix A: Relation to previous work
Our work brings together ideas from four different areas: inverse distance weighted interpolation, fuzzy
clustering, subjective probability, and optimality principles.
1. Inverse distance weighted (or IDW) interpolation was introduced in 1965 by Donald Shepard,
who published his results [26] in 1968. Shepard, then an undergraduate at Harvard, worked on the following
problem:
A function u : Rn → R is evaluated at K given points {xk : k ∈ 1,K} in Rn , giving the values
{uk : k ∈ 1,K}, respectively. These values are the only information about the function. It is required to
estimate u at any point x.
Examples of such functions include rainfall in meteorology, and altitude in topography. The point x
cannot be too far from the data points, and ideally lies in their convex hull.
Shepard estimated the value u(x) as a convex combination of the given values u_k,
    u(x) = \sum_{k=1}^{K} λ_k(x)\, u_k   (A-1)
where the weights λ_k(x) are inversely proportional to the distances d(x, x_k) between x and x_k, say
    u(x) = \sum_{k=1}^{K} \frac{1 / d(x, x_k)}{\sum_{j=1}^{K} 1 / d(x, x_j)}\, u_k,   (A-2)
giving the weights
    λ_k(x) = \frac{\prod_{j ≠ k} d(x, x_j)}{\sum_{\ell=1}^{K} \prod_{m ≠ \ell} d(x, x_m)}   (A-3)
that are identical with the probabilities (24), if the data points are identified with the centers. IDW
interpolation is used widely in spatial data analysis, geology, geography, ecology and related areas.
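For concreteness, a minimal Python sketch of the IDW estimate (A-1)–(A-2) follows; the use of the ℓ1-distance here is only for consistency with the rest of the paper, since (A-2) is written for a generic distance d.

    import numpy as np

    def idw_interpolate(x, sample_points, sample_values):
        """Shepard-type inverse distance weighted estimate, cf. (A-1)-(A-2).

        x: query point (length-n array), sample_points: (K, n), sample_values: (K,).
        If x coincides with a data point, that point's value is returned.
        """
        x = np.asarray(x, dtype=float)
        pts = np.asarray(sample_points, dtype=float)
        vals = np.asarray(sample_values, dtype=float)
        d = np.abs(pts - x).sum(axis=1)          # l1 distances d(x, x_k)
        if np.any(d == 0):
            return float(vals[d == 0][0])
        lam = (1.0 / d) / (1.0 / d).sum()        # weights lambda_k of (A-2)
        return float(lam @ vals)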
Interpolating the K distances d(x, x_k), i.e. taking u_k = d(x, x_k) in (A-2), gives
    K\, \frac{\prod_{j=1}^{K} d(x, x_j)}{\sum_{\ell=1}^{K} \prod_{m ≠ \ell} d(x, x_m)},   (A-4)
the harmonic mean of the distances {d(x, xk ) : k ∈ 1,K}, which is the JDF in (25) multiplied by a scalar.
The harmonic mean pops up in several areas of spatial data analysis. In 1980 Dixon and Chapman [12]
posited that the home–range of a species is a contour of the harmonic mean of the areas it frequents, and
this has since been confirmed for hundreds of species. The importance of the harmonic mean in clustering
was established by Teboulle [27], Stanforth, Kolossov and Mirkin [25], Zhang, Hsu, and Dayal [31]–[32],
Ben–Israel and Iyigun [5] and others. Arav [3] showed the harmonic mean of distances to satisfy a system
of reasonable axioms for contour approximation of data.
2. Fuzzy clustering, introduced by J.C. Bezdek in 1973 [8], is a relaxation of the original problem, replacing the hard assignments of points to clusters by soft, or fuzzy, assignments of points simultaneously to all clusters; the strength of association of x_i with the k-th cluster is measured by w_{ik} ∈ [0, 1].
In the fuzzy c–means (FCM) method [9] the centers {c_k} are computed by
    \min \sum_{i=1}^{N} \sum_{k=1}^{K} w_{ik}^{m}\, \|x_i − c_k\|_2^2,   (A-5)
where the weights w_{ik} are updated as²
    w_{ik} = \frac{1}{\sum_{j=1}^{K} \big( \|x_i − c_k\|_2 / \|x_i − c_j\|_2 \big)^{2/(m−1)}},   (A-6)
and the centers are then calculated as convex combinations of the data points,
    c_k = \sum_{i=1}^{N} \frac{w_{ik}^{m}}{\sum_{j=1}^{N} w_{jk}^{m}}\, x_i, \quad k ∈ 1,K.   (A-7)
²The weights (A-6) are optimal for the problem (A-5) if they are probabilities, i.e. if they are required to add to 1 for every point x_i.
The constant m ≥ 1 (the “fuzzifier”) controls the fuzziness of the assignments, which become hard assignments in the limit as m ↓ 1. For m = 1, FCM is the classical K–means method. If m = 2 then the weights w_{ik} are inversely proportional to the squared distance \|x_i − c_k\|_2^2, analogously to (21).
Fuzzy clustering is one of the best known, and most widely used, clustering methods. However, it may
need some modification if the data in question is very high–dimensional, see, e.g. [20].
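For comparison with Algorithm PCM(ℓ1), a minimal NumPy sketch of the FCM updates (A-6)–(A-7) is given below; the small constant added to the distances is only a numerical safeguard.

    import numpy as np

    def fcm_memberships(X, centers, m=2.0):
        """Membership update (A-6); rows of the returned (N, K) matrix sum to 1."""
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        d = np.maximum(d, 1e-12)
        ratio = (d[:, :, None] / d[:, None, :]) ** (2.0 / (m - 1.0))
        return 1.0 / ratio.sum(axis=2)

    def fcm_centers(X, W, m=2.0):
        """Center update (A-7): convex combinations of the data points."""
        Wm = W ** m
        return (Wm.T @ X) / Wm.sum(axis=0)[:, None]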
3. Subjective probability. There is some arbitrariness in the choice of the model and the fuzzifier m
in (A-5)–(A-6). In contrast, the probabilities (24) can be justified axiomatically. Using ideas and classical
results ([22], [30]) from subjective probability it is shown in Appendix B that the cluster membership
probabilities pk (x), and distances dk (x), satisfy an inverse relationship, such as,
    p_k(x)\, φ(d(x, c_k)) = f(x), \quad k ∈ 1,K,   (A-8)
where φ(·) is non–decreasing, and f (x) does not depend on k. In particular, the choice φ(d) = d gives (21),
which works well in practice.
4. Optimality principle. Equation (A-8) is a necessary optimality condition for the problem
    \min \Big\{ \sum_{k=1}^{K} p_k^2\, φ(d(x, c_k)) : \sum_{k=1}^{K} p_k = 1, \; p_k ≥ 0, \; k ∈ 1,K \Big\},   (A-9)
that reduces to (22) for the choice φ(d) = d. This shows the probabilities {pk (x)} of (24) to be optimal,
for the model chosen.
Remark 3. Minimizing a function of squares of probabilities seems unnatural, so a physical analogy may
help. Consider an electric circuit with K resistances {Rk } connected in parallel. A current I through
the circuit splits into K currents, with current Ik through the resistance Rk . These currents solve an
optimization problem (the Kelvin principle)
    \min_{I_1, \dots, I_K} \Big\{ \sum_{k=1}^{K} I_k^2\, R_k : \sum_{k=1}^{K} I_k = I \Big\}   (A-10)
that is analogous to (22). The optimality condition for (A-10) is Ohm’s law,
Ik Rk = constant
a statement that potential is well defined, and an analog of (21). The equivalent resistance of the circuit,
i.e. the resistance R such that I^2 R is equal to the minimal value in (A-10), is then the JDF (25) with R_j
instead of d(x, c j ) and w = 1.
Appendix B: Numerical Examples
In the following examples we use synthetic data to be clustered into K = 2 clusters. The data consists
of two randomly generated clusters, X1 with N1 points, and X2 with N2 points.
The data points x = (x1 , · · · , xn ) ∈ Rn of cluster Xk are such that all of their components xi , 1 ≤ i ≤ n
are generated by sampling from a distribution Fk with mean µk , k = 1, 2. In cluster X1 we take µ1 = 1,
and in cluster X2 , µ2 = −1.
We ran Algorithm PCM(ℓ1 ), with the parameters ν0 = 1, ∆ = 0.1, and compared its performance with
that of the fuzzy clustering method [9] with the ℓ1 norm, as well as the generalized Weiszfeld algorithm
of [18] (that uses Euclidean distances), and the ℓ1–K-Means algorithm [23]. For each method we used a stopping rule of at most 100 iterations (for Algorithm PCM(ℓ1) this replaces Step 4). For each experiment we record the average percentage of misclassification (a misclassification occurs when a point in X1 is declared to be in X2, or vice versa) over 10 independent problems. In Examples 1, 2 and 3 we choose the probability distributions to be Fk = N(µk, σ).

Example 1. In this example the clusters are of equal size, N1 = N2 = 100. Table 1 gives the percentages of misclassification under the four methods tested, for different values of σ and dimension n.

Table 1. Percentages of misclassified data in Example 1

  σ       Method           n = 10^4   n = 5·10^4   n = 10^5   n = 5·10^5   n = 10^6
  σ = 8   PCM (ℓ1)            0.0        0.0          0.0        0.0          0.0
          FCM (ℓ1)           27.1       38.6         24.4       40.9         40.1
          K-means (ℓ1)       28.9       26.8         12.7       22.4         22.9
          Gen. Weiszfeld     48.5       48.8         48.0       48.2         47.9
  σ = 16  PCM (ℓ1)            4.3        0.0          0.0        4.7          0.0
          FCM (ℓ1)           41.0       42.1         44.5       43.9         39.5
          K-means (ℓ1)       41.8       35.2         23.7       23.5         23.6
          Gen. Weiszfeld     48.0       47.0         48.4       48.6         48.0
  σ = 24  PCM (ℓ1)           42.6        8.8          0.8        4.8          0.0
          FCM (ℓ1)           46.4       45.9         47.5       39.5         45.1
          K-means (ℓ1)       45.5       42.6         35.6       28.0         24.5
          Gen. Weiszfeld     47.9       47.8         47.1       48.0         48.2
  σ = 32  PCM (ℓ1)           46.0       42.2         13.4       13.6          0.0
          FCM (ℓ1)           47.4       46.0         44.8       46.0         46.0
          K-means (ℓ1)       46.4       45.7         40.3       36.0         30.7
          Gen. Weiszfeld     48.2       48.9         48.5       48.9         47.8

Example 2. We use N1 = 200 and N2 = 100. Table 2 gives the percentages of misclassification for different values of σ and dimension n.

Example 3. In this case N1 = 1000, N2 = 10. The percentages of misclassification are included in Table 3.
In addition to experiments with normal data, we also consider instances with uniform data in Examples
4 and 5. In this case Fk is a uniform distribution with mean µk and support length |supp(Fk )|.
Example 4. We use N1 = 100, N2 = 100. The results are shown in Table 4.
Example 5. In this instance N1 = 200, N2 = 100. The results are shown in Table 5.
Table 2. Percentages of misclassified data in Example 2

  σ       Method           n = 10^4   n = 5·10^4   n = 10^5   n = 5·10^5   n = 10^6
  σ = 8   PCM (ℓ1)            0.0        0.0          0.0        0.0          0.0
          FCM (ℓ1)           11.9       19.1         25.2       30.1         22.6
          K-means (ℓ1)       20.8       25.9         18.4       31.4         13.6
          Gen. Weiszfeld     37.8       37.9         37.2       36.7         36.4
  σ = 16  PCM (ℓ1)           10.4        0.0          0.0        0.0          0.0
          FCM (ℓ1)           37.7       35.6         35.0       36.2         39.4
          K-means (ℓ1)       35.8       32.0         23.6       31.7         14.1
          Gen. Weiszfeld     38.0       37.7         35.8       36.6         37.8
  σ = 24  PCM (ℓ1)           44.1        5.9          1.2        0.0          0.0
          FCM (ℓ1)           41.3       37.7         38.9       36.7         34.6
          K-means (ℓ1)       40.3       39.9         32.7       33.3         15.5
          Gen. Weiszfeld     36.8       37.7         36.7       36.9         37.2
  σ = 32  PCM (ℓ1)           47.2       38.7         18.5        0.0          0.0
          FCM (ℓ1)           42.3       38.8         37.0       39.7         38.9
          K-means (ℓ1)       41.5       42.9         37.2       36.8         22.6
          Gen. Weiszfeld     36.7       36.9         36.0       36.5         37.4
Table 3. Percentages of misclassified data in Example 3

  σ         Method           n = 10^3   n = 5·10^3   n = 10^4   n = 5·10^4   n = 10^5
  σ = 0.4   PCM (ℓ1)           46.4       41.1         24.1        5.1          0.9
            FCM (ℓ1)           13.4        0.5          0.0       19.5         32.0
            K-means (ℓ1)       37.4       31.6         27.1       36.5         32.6
            Gen. Weiszfeld     35.4       36.7         32.6       33.7         38.7
  σ = 0.8   PCM (ℓ1)           47.4       31.4         23.4        5.4          1.8
            FCM (ℓ1)           29.3        9.5         13.0       27.3         37.2
            K-means (ℓ1)       37.5       32.0         27.1       36.4         32.6
            Gen. Weiszfeld     30.3       31.6         25.9       27.9         34.5
  σ = 1.2   PCM (ℓ1)           47.3       33.9         26.2        7.7          1.6
            FCM (ℓ1)           36.4       20.8         23.2       31.1         23.9
            K-means (ℓ1)       38.4       32.2         28.3       36.4         32.6
            Gen. Weiszfeld     22.1       23.8         26.8       21.6         25.5
  σ = 1.6   PCM (ℓ1)           47.8       35.4         27.9        9.8          3.6
            FCM (ℓ1)           41.1       27.8         30.0       27.6         24.2
            K-means (ℓ1)       37.6       32.3         28.3       36.4         33.4
            Gen. Weiszfeld     23.1       23.2         21.1       25.4         31.6
In all examples Algorithm PCM(ℓ1 ) was unsurpassed and was the clear winner in Examples 1, 2, 4 and
5.
(Tsvetan Asamov) Department of Operations Research and Financial Engineering, Princeton University, 98
Charlton Street, Princeton, NJ 08540, USA
E-mail address: [email protected]
(Adi Ben–Israel) Rutgers Business School, Rutgers University, 100 Rockafeller Road, Piscataway, NJ 08854,
USA
E-mail address: [email protected]
Table 4. Percentages of misclassified data in Example 4

  |supp(F)|        Method           n = 10^4   n = 5·10^4   n = 10^5   n = 5·10^5   n = 10^6
  |supp(F)| = 8    PCM (ℓ1)            0.0        0.0          0.0        0.0          0.0
                   FCM (ℓ1)            0.0        0.1          0.3        2.7          0.1
                   K-means (ℓ1)        5.0        5.0          4.8        0.0          0.0
                   Gen. Weiszfeld      0.0        0.0          0.0        0.0          0.0
  |supp(F)| = 16   PCM (ℓ1)            0.0        0.0          0.0        0.0          0.0
                   FCM (ℓ1)            8.9       29.1         26.6       21.9         25.6
                   K-means (ℓ1)       23.8       25.9         18.0       23.4         17.8
                   Gen. Weiszfeld     47.0       49.2         46.8       46.2         47.4
  |supp(F)| = 24   PCM (ℓ1)            0.0        0.0          0.0        0.0          0.0
                   FCM (ℓ1)           23.6       39.1         20.1       27.8         25.4
                   K-means (ℓ1)       32.0       27.2         18.7       23.2         18.1
                   Gen. Weiszfeld     47.1       47.4         48.0       47.4         47.3
  |supp(F)| = 32   PCM (ℓ1)            0.3        0.0          0.0        0.0          0.0
                   FCM (ℓ1)           28.8       39.9         36.6       42.6         38.8
                   K-means (ℓ1)       35.7       27.5         19.3       23.4         18.6
                   Gen. Weiszfeld     48.1       48.0         47.9       47.8         47.9
Table 5. Percentages of misclassified data in Example 5

  |supp(F)|        Method           n = 10^4   n = 5·10^4   n = 10^5   n = 5·10^5   n = 10^6
  |supp(F)| = 8    PCM (ℓ1)            0.0        0.0          0.0        0.0          0.0
                   FCM (ℓ1)            0.0       10.0          7.2        0.4          0.4
                   K-means (ℓ1)        4.9       13.5          0.0        4.9         14.4
                   Gen. Weiszfeld      0.0        0.0         13.1        0.0          0.0
  |supp(F)| = 16   PCM (ℓ1)            0.0        0.0          0.0        0.0          0.0
                   FCM (ℓ1)           30.8       28.0         28.3       14.8         18.3
                   K-means (ℓ1)       22.2       31.8         18.4       17.6         32.0
                   Gen. Weiszfeld     39.2       36.6         35.7       36.7         36.3
  |supp(F)| = 24   PCM (ℓ1)            0.0        0.0          0.0        0.0          0.0
                   FCM (ℓ1)           21.0       26.1         30.3       27.5         37.6
                   K-means (ℓ1)       32.3       36.3         22.6       18.1         35.1
                   Gen. Weiszfeld     37.4       38.4         37.6       36.5         37.8
  |supp(F)| = 32   PCM (ℓ1)            1.5        0.0          0.0        0.0          0.0
                   FCM (ℓ1)           38.0       35.0         36.5       38.5         33.5
                   K-means (ℓ1)       35.1       36.6         23.1       18.6         35.4
                   Gen. Weiszfeld     39.7       36.0         37.5       40.0         38.0
Best Practices for Applying Deep Learning to Novel Applications
Leslie N. Smith
Navy Center for Applied Research in Artificial Intelligence
U.S. Naval Research Laboratory, Code 5514
Washington, DC 20375
[email protected]
ABSTRACT
This report is targeted to groups who are subject matter experts in their application but
deep learning novices. It contains practical advice for those interested in testing the
use of deep neural networks on applications that are novel for deep learning. We
suggest making your project more manageable by dividing it into phases. For each
phase this report contains numerous recommendations and insights to assist novice
practitioners.
Introduction
Although my focus is on deep learning (DL) research, I am finding that more and more frequently I am
being asked to help groups without much DL experience who want to try deep learning on their novel
(for DL) application. The motivation for this NRL report derives from noticing that much of my advice
and guidance is similar for all such groups. Hence, this report discusses the aspects of applying DL that
are more universally relevant.
While there are several useful sources of advice on best practices for machine learning [1-5], there are
differences relevant to DL that this report addresses. Still, I recommend the reader read and become
familiar with these references as they contain numerous gems. In addition, there are many sources on
best practices on the topic of software engineering and agile methodologies that I assume the reader is
already familiar with (e.g., [6, 7]). The closest reference to the material in this report can be found in
Chapter 11 of “Deep Learning” [8] on “Practical Methodology” but here I discuss a number of factors
and insights not covered in this textbook.
You can see below that a deep learning application project is divided into phases. However, in practice
you are likely to find it helpful to return to an earlier phase. For example, while finding an analogy in
phase 3, you might discover new metrics that you hadn’t considered in phase 1. All of these best
practices implicitly include iteratively returning to a phase and continuous improvement as the project
proceeds.
Phase 1: Getting prepared
In this report I assume you are (or have access to) a subject matter expert for your application. You
should be familiar with the literature and research for solving the associated problem and know the
state-of-the-art solutions and performance levels. I recommend you consider here at the beginning if a
deep learning solution is a worthwhile effort. You must consider the performance level of the state-ofthe-art and if it is high, whether it is worthwhile to put in the efforts outlined in this report for an
incremental improvement. Don’t jump into deep learning only because it seems like the latest and
greatest methodology. You should also consider if you have the computer resources since each job to
train a deep network will likely take days or weeks. I have made ample use of DoD’s HPC systems in my
own research. In addition, you should consider if machine learning is appropriate at all – remember
training a deep network requires lots of labeled data, as described in phase 2.
The first step is quantitatively defining what success looks like. What will you see if this is successful,
whether it is done by human or machine? This helps define your evaluation metrics. Which metrics are
important? Which are less important? You need to specify all quantitative values that play a role in the
success of this project and determine how to weigh each of them. You also need to define objectives for
your metrics; is your goal to surpass human-level performance? Your objectives will strongly influence the
course of the project. Knowing quantitatively what human performance is on this task should guide
your objectives; how does the state-of-the-art compare to human performance? Also, knowing how a
human solves this task will provide valuable information on how the machine might solve it.
Some of these metrics can also lead to the design of the loss function, which is instrumental in guiding
the training of the networks. Don’t feel obligated to only use softmax/cross entropy/log loss just
because that is the most common loss function, although you should probably start with it. Your
evaluation metrics are by definition the quantities that are important for your application. Be willing to
test these metrics as weighted components of the loss function to guide the training (see phase 6).
Although you are likely considering deep learning because of its power, consider how to make the
network’s “job” as easy as possible. This is counter-intuitive because it is the power of deep networks that
likely motivates you to try it out. However, the easier the job that the networks must perform, the
easier it will be to train and the better the performance. Are you (or the state-of-the-art) currently
using heuristics/physics that can be utilized here? Can the data be preprocessed? While the network
can learn complex relationships, remember: “the easier the network’s job, the better it will perform”.
So it is worthwhile to spend time considering what you can leverage from previous work and what the
network needs to do for you. Let’s say you want to improve on a complex process where the physics is
highly approximated (i.e., a “spherical cow” situation); you have a choice to input the data into a deep
network that will (hopefully) output the desired result or you can train the network to find the
correction in the approximate result. The latter method will almost certainly outperform the former.
On the other hand, do not rely on manual effort to define potential heuristics – the scarcest resource is
human time so let the network learn its representations rather than require any fixed, manual
preprocessing.
In addition, you might want to write down any assumptions or expectations you have regarding this
state-of-the-art process as it will clarify them for yourself.
Phase 2: Preparing your data
Deep learning requires a great deal of training data. You are probably wondering “how much training
data do I need?” The number of parameters in the network is correlated with the amount of training
data. The number of training samples will limit your architectural choices in phase 6. The more training
data, the larger and more accurate the network can be. So the amount of training data depends on the
objectives you defined in phase 1.
In addition to training data, you will need a smaller amount of labeled validation or test data. This test
data should be similar to the training data but not the same. The network is not trained on the test data
but it is used to test the generalization ability of the network.
If the amount of training data is very limited, consider transfer learning [9] and domain adaptation [10,
11]. If this is appropriate, download datasets that are closest to your data to use for pre-training. In
addition, consider creating synthetic data. Synthetic data has the advantages that you can create plenty
of samples and make it diverse.
The project objectives also guide the choice of training data samples. Be certain that the training
data is directly relevant to the task and that it is diverse enough that it covers the problem space. Study
the statistics of each class. For example, are the classes balanced? An example of a balanced problem is cats versus dogs, while an unbalanced problem would be cats versus all other mammals (if your problem is inherently unbalanced, talk with a deep learning expert).
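A quick way to check class balance is to tabulate the label frequencies; a minimal sketch (the variable train_labels stands for whatever label container your project uses):

    from collections import Counter

    def class_fractions(labels):
        """Fraction of training samples in each class."""
        counts = Counter(labels)
        total = sum(counts.values())
        return {cls: count / total for cls, count in counts.items()}

    # e.g. class_fractions(train_labels) -> {'cat': 0.5, 'dog': 0.5}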
What preprocessing is possible? Can you zero-mean and normalize the data? This makes the network’s job easier as it removes the need to learn the mean. Normalization also makes the network’s job
easier by creating greater similarity between training samples.
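A minimal NumPy sketch of zero-mean, unit-variance preprocessing follows; note that the statistics are computed on the training data only and then reused on the test data, so that no test information leaks into training.

    import numpy as np

    def standardize(train, test):
        """Zero-mean, unit-variance scaling using training-set statistics only."""
        mean = train.mean(axis=0)
        std = train.std(axis=0) + 1e-8      # avoid division by zero
        return (train - mean) / std, (test - mean) / std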
As discussed above, investigate if there are ways to lower the dimensionality of the data using a priori
knowledge or known heuristics. You don’t need to spend time to manually determine heuristics
because the goal is to save human time and you can let the network learn its own representations. Just
know that the more irrelevant data the network has to sift through, the more training data is needed
and the more time it will take to train the network. So leverage what you can from prior art.
Phase 3: Find an analogy between your application and the closest deep learning applications
Experts know not to start from scratch for every project. This is what makes them experts. They reuse
solutions that have worked in the past and they search the deep learning literature for solutions from
other researchers. Even if no one has ever done what you are trying to do, you still need to leverage
whatever you can from the experts.
Deep learning has been applied to a variety of applications. In order to create your baseline model –
your starting point – you need to find the applications that are in some ways similar to your application.
You should search the DL literature and consider the “problems” various applications are solving to
compare with the “problem” you need to solve in your application. Find similarities and analogies
between these problems. Also, take note of the differences between your new application and the deep learning application, because these differences might require changing the architecture in phase 6.
When you find the closest application, look for code to download. Many researchers make their code
available when they publish, in a widespread effort to release reproducible research. Your first aim is to
replicate the results in the paper of the closest application. Later you should modify various aspects to
see the effects on the results in a “getting to know it” stage. If you are lucky, there will be several codes
available and you should replicate the results for all of them. This comparison will provide you with
enough information so you can create a baseline in phase 4.
There are a few “classic” applications of deep learning and well known solutions. These include image
classification/object recognition (convolutional networks), processing sequential data (RNN/LSTM/GRU)
such as language processing, and complex decision making (deep reinforcement learning). There are
also a number of other applications that are common, such as image segmentation and super-resolution
(fully convolutional networks) and similarity matching (Siamese networks). Appendix A lists a number of
recent deep learning applications, the architecture used, and links to the papers that describe this
application. This can give you some ideas but should not be your source for finding deep learning
applications. Instead you should carefully search https://scholar.google.com and https://arxiv.org for
the deep learning applications.
Phase 4: Create a simple baseline model
Always start simple, small, and easy. Use a smaller architecture than you anticipate you might need.
Start with a common objective function. Use common settings for the hyper-parameters. Use only part
of the training data. This is a good place to adopt some of the practices of agile software
methodologies, such as simple design, unit testing, and short releases. Only get the basic functionality
now and improve on it during phase 6. That is, plan on small steps, continuous updates, and to iterate.
Choose only one of the common frameworks, such as Caffe, TensorFlow, or MXnet. Plan to only use one
framework and one computer language to minimize errors from unnecessary complexity. The
framework and language choice will likely be driven by the replication effort you performed in phase 3.
If the network will be part of a larger framework, here is a good place to check that the framework APIs
are working properly.
Phase 5: Create visualization and debugging tools
Understanding what is happening in your model will affect the success of your project. Carpenters have
an expression “measure twice, cut once”. You should think “code once, measure twice”. In addition to
evaluating the output, you should visualize your architecture and measure internal entities to
understand why you are getting the results you are obtaining. Without diagnostics, you will be shooting
in the dark to fix problems or improve performance.
You should have a general understanding of problems related to high bias (underfitting: the model settles on a poor solution even on the training data) versus high variance (overfitting: the model fits the training data but generalizes poorly), because there are different solutions for each type of problem; for example, you might fix high bias problems with a larger network, but you would handle high variance problems by increasing the size of your training dataset.
Set up visualizations so you can monitor as much as possible while the architecture evolves. When
possible, set up unit tests for all of your code modifications. You should compare training error to test
error and both to human level performance. You might find your network will behave strangely and you
need ways to determine what is going on and why. Start debugging the worst problems first. Find out if
the problems are with the training data, aspects of the architecture, or the loss function.
Keep in mind that error analysis tries to explain the difference between current performance and
perfect performance. Ablative analysis tries to explain the difference between some baseline
performance and current performance. One or the other or both can be useful.
One motivation for using TensorFlow as your framework is that it has a visualization system called
TensorBoard that is part of the framework. One can output the necessary files from TensorFlow and
TensorBoard can be used to visualize your architecture, monitor the weights and feature maps, and
explore the embedded space the network creates. Hence, the debugging and visualization tools are
available in the framework. With other frameworks, you need to find these tools (they are often
available online) or create your own.
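For example, with Keras the built-in TensorBoard callback is enough to get this kind of monitoring started. This is only a sketch; the log directory name and epoch count are arbitrary placeholders.

import tensorflow as tf

tensorboard_cb = tf.keras.callbacks.TensorBoard(
    log_dir="./logs/baseline",   # inspect with: tensorboard --logdir ./logs
    histogram_freq=1,            # log weight histograms every epoch
    write_graph=True)            # export the architecture graph

# model.fit(x_train, y_train,
#           validation_data=(x_test, y_test),  # compare training vs. test error
#           epochs=10,
#           callbacks=[tensorboard_cb])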
Phase 6: Fine tune your model
This phase will likely take the most time. You should experiment extensively. And not just with factors
you believe will improve the result but try changing every factor just to learn what happens when it
changes. Change the architecture design, depth, width, pathways, weight initialization, loss function,
etc. Change each hyper-parameter to learn what the effect of increasing or decreasing it is. I
recommend using the learning rate range test [12] to learn about the behavior of your network over a
large range of learning rates. A similar program can be made to study the effect of other hyperparameters.
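A rough sketch of such a learning rate range test is shown below, assuming a compiled Keras model; the bounds and number of steps are illustrative, and [12] should be consulted for the actual procedure.

import numpy as np
import tensorflow as tf

class LRRangeTest(tf.keras.callbacks.Callback):
    """Increase the learning rate each batch and record the loss."""
    def __init__(self, min_lr=1e-6, max_lr=1.0, num_steps=1000):
        super().__init__()
        self.lrs = np.geomspace(min_lr, max_lr, num_steps)
        self.step = 0
        self.history = []   # (learning rate, loss) pairs

    def on_train_batch_begin(self, batch, logs=None):
        lr = self.lrs[min(self.step, len(self.lrs) - 1)]
        tf.keras.backend.set_value(self.model.optimizer.learning_rate, lr)

    def on_train_batch_end(self, batch, logs=None):
        lr = self.lrs[min(self.step, len(self.lrs) - 1)]
        self.history.append((float(lr), logs["loss"]))
        self.step += 1

# model.fit(x_train, y_train, epochs=1, callbacks=[LRRangeTest()])
# Plot the recorded (lr, loss) pairs and pick a rate below where the loss blows up.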
Try various regularization methods, such as data augmentation, dropout, and weight decay.
Generalization is one of the key advantages of deep networks so be certain to test regularization
methods in order to maximize this ability to generalize to unseen cases.
You should experiment with the loss function. You used a simple loss function in the baseline but you
also created several evaluation metrics that you care about and define success. The only difference
between the evaluation metrics and the loss function is that the metrics apply to the test data and the
loss function is applied to the training data in order to train the network. Can a more complicated loss
function produce a more successful result? You can add weighted components to the loss function to
reflect the importance of each metric to the results. Just be very careful not to complicate the loss
function with unimportant criteria, because it is the heart of your model.
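As a hedged sketch (the weights and the auxiliary term are placeholders, not a recipe), a weighted loss in Keras could look like the following; here the extra term is a confidence penalty, but any differentiable surrogate for a metric you care about could be substituted.

import tensorflow as tf

def weighted_loss(alpha=1.0, beta=0.1):
    ce = tf.keras.losses.SparseCategoricalCrossentropy()
    def loss_fn(y_true, y_pred):
        main = ce(y_true, y_pred)                    # main term: cross-entropy
        # Auxiliary term (placeholder): penalize over-confident predictions.
        entropy = -tf.reduce_mean(
            tf.reduce_sum(y_pred * tf.math.log(y_pred + 1e-8), axis=-1))
        return alpha * main - beta * entropy
    return loss_fn

# model.compile(optimizer="adam", loss=weighted_loss(alpha=1.0, beta=0.1))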
Earlier you found analogies between your application and existing deep learning applications and chose
the closest to be your baseline. Now compare to the second closest application. Or the third. What
happens if you follow another analogy and use that architecture? Can you imagine a combination of the
two to test?
In the beginning, you should have successes from some low-hanging fruit. As you go on, it will become
more difficult to improve the performance. The objectives you defined in phase 1 should guide how far
you want to pursue the performance improvements. Or you might now want to revise the objectives
you defined earlier.
Phase 7: End-to-end training, ensembles and other complexities
If you have the time and budget, you can investigate more complex methods and there are worlds of
complexities that are possible. There exists a huge amount of deep learning literature and more papers
are appearing daily. Most of these papers declare new state-of-the-art results with one twist or another
and some might provide you a performance boost. This section alone could fill a long report because
there are so many architectures and other options to consider, but if you are at this stage, consider
talking with someone with a great deal of deep learning expertise, because advice at this stage is likely to
be unique to your application.
However, there are two common methods you might consider: end-to-end training and ensembles.
As a general rule, end-to-end training of a connected system will outperform a system with multiple
parts because a combined system with end-to-end training allows each of the parts to adapt to the task.
Hence, it is useful to consider combining parts, if it is relevant for your application.
Ensembles of diverse learners (i.e., bagging, boosting, stacking) can also improve the performance over
a single model. However, this will require you to train and maintain all the members of the ensemble. If
your performance objectives warrant this, it is worthwhile to test an ensemble approach.
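A minimal sketch of the simplest variant is averaging the predicted class probabilities of several independently trained models (the model list is illustrative):

import numpy as np

def ensemble_predict(models, x):
    # Each model is assumed to return class probabilities of shape (n, num_classes).
    probs = np.mean([m.predict(x) for m in models], axis=0)
    return probs.argmax(axis=1)

# predictions = ensemble_predict([model_a, model_b, model_c], x_test)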
Summary
This report lays out many factors for you to consider when experimenting with deep learning on an
application where it hasn’t been previously used. Not every item here will be relevant but I hope that it
covers most of the factors you should consider during a project. I wish you much luck and success in
your efforts.
References:
1. Martin Zinkevich, “Rules of Machine Learning: Best Practices for ML Engineering”,
http://martin.zinkevich.org/rules_of_ml/rules_of_ml.pdf
2. Brett Wujek, Patrick Hall, and Funda Güneș, “Best Practices for Machine Learning Applications”,
https://support.sas.com/resources/papers/proceedings16/SAS2360-2016.pdf
3. Jason Brownlee, “How to Use a Machine Learning Checklist to Get Accurate Predictions,
Reliably”, http://machinelearningmastery.com/machine-learning-checklist/
4. Domingos, Pedro. "A few useful things to know about machine learning." Communications of the
ACM 55.10 (2012): 78-87.
5. Grégoire Montavon, Geneviève Orr, Klaus-Robert Müller, “Neural Networks: Tricks of the
Trade”, Springer, 2012
6. Fergus Henderson, “Software engineering at Google”, CoRR, arXiv:1702.01715, 2017.
7. Gamma, Erich. Design patterns: elements of reusable object-oriented software. Pearson
Education India, 1995.
8. Goodfellow, I., Bengio, Y., Courville, A., “Deep Learning”, MIT Press, 2016
9. Weiss, Karl, Taghi M. Khoshgoftaar, and DingDing Wang. "A survey of transfer learning." Journal
of Big Data 3.1 (2016): 1-40.
10. Patel, Vishal M., Raghuraman Gopalan, Ruonan Li, and Rama Chellappa. "Visual domain
adaptation: A survey of recent advances." IEEE Signal Processing Magazine 32, no. 3 (2015): 53-69.
11. Gabriela Csurka “Domain Adaptation for Visual Applications: A Comprehensive Survey”, CoRR,
arXiv:1702.05374, 2017.
12. Leslie N. Smith. Cyclical learning rates for training neural networks. In Proceedings of the IEEE
Winter Conference on Applications of Computer Vision (WACV), 2017.
Appendix A: Table of various deep learning applications
The following table lists some recent applications of deep learning, the architecture used for this
application and a few references to papers in the literature that describe the application in much more
detail.
Application | Architecture | Comments
Colorization of Black and White Images | Large, fully convolutional | http://www.cs.cityu.edu.hk/~qiyang/publications/iccv-15.pdf ; http://arxiv.org/pdf/1603.08511.pdf
Adding Sounds To Silent Movies | CNN + LSTM | http://arxiv.org/pdf/1512.08512.pdf
Automatic Machine Translation | Stacked networks of large LSTM recurrent neural networks | http://www.nlpr.ia.ac.cn/cip/ZongPublications/2015/IEEE-Zhang-8-5.pdf ; https://arxiv.org/abs/1612.06897 ; https://arxiv.org/abs/1611.04558
Object Classification in Photographs | Residual CNNs, ResNeXt, DenseNets | http://papers.nips.cc/paper/5207-deep-neural-networks-for-object-detection.pdf
Automatic Handwriting Generation | RNNs | http://arxiv.org/pdf/1308.0850v5.pdf
Character Text Generation | RNNs | http://arxiv.org/pdf/1308.0850v5.pdf
Image Caption Generation | CNN + LSTM | http://arxiv.org/pdf/1505.00487v3.pdf
Automatic Game Playing | Reinforcement learning + CNNs | http://www.nature.com/nature/journal/v529/n7587/full/nature16961.html ; https://arxiv.org/abs/1612.00380
Generating audio | WaveNet (dilated PixelCNN) | https://arxiv.org/abs/1609.03499 ; https://arxiv.org/abs/1612.07837 ; https://arxiv.org/abs/1610.09001
Object tracking | CNN + hierarchical LSTMs | https://arxiv.org/abs/1701.01909 ; https://arxiv.org/abs/1611.06878 ; https://arxiv.org/abs/1611.05666
Lip reading | CNN + LSTMs | https://arxiv.org/abs/1701.05847 ; https://arxiv.org/abs/1611.05358
Modifying synthetic data into labeled training data | GAN | https://arxiv.org/abs/1701.05524
Single image super-resolution | Deep, fully convolutional networks | https://arxiv.org/abs/1612.07919 ; https://arxiv.org/abs/1611.03679 ; https://arxiv.org/abs/1511.04587 ; https://arxiv.org/abs/1611.00591
Speech recognition | LSTMs | https://arxiv.org/abs/1701.03360 ; https://arxiv.org/abs/1701.02720
Generating molecular structures | RNNs | https://arxiv.org/abs/1701.01329
Time series analysis | ResNet + RNNs | https://arxiv.org/abs/1701.01887 ; https://arxiv.org/abs/1611.06455
Intrusion detection | RNNs or CNNs | https://arxiv.org/abs/1701.02145
Autonomous Planning | Predictron architecture | https://arxiv.org/abs/1612.08810
Object detection | — | https://arxiv.org/abs/1612.08242
Multi-modality classification | CNN + GAN | https://arxiv.org/abs/1612.07976 ; https://arxiv.org/abs/1612.00377 ; https://arxiv.org/abs/1611.06306
Health monitoring | All of the above are used | https://arxiv.org/abs/1612.07640
Robotics | CNN (perception), RL (control) | https://arxiv.org/abs/1612.07139 ; https://arxiv.org/abs/1611.00201 ; https://arxiv.org/abs/1612.06897
Domain adaptation | — | —
Self-driving | CNNs | https://arxiv.org/abs/1612.06573 ; https://arxiv.org/abs/1611.08788 ; https://arxiv.org/abs/1611.05418
Visual question answering | ResNet | https://arxiv.org/abs/1612.05386 ; https://arxiv.org/abs/1611.01604 ; https://arxiv.org/abs/1611.05896 ; https://arxiv.org/abs/1611.05546
Weather prediction | Graphical RNN | https://arxiv.org/abs/1612.05054
Detecting cancer | RBM | https://arxiv.org/abs/1612.03211
Genomics | Multiple NNs | https://arxiv.org/abs/1611.09340
Semantic segmentation | Fully convolutional DenseNet | https://arxiv.org/abs/1611.09326 ; https://arxiv.org/abs/1611.06612
Hyperspectral classification | CNN | https://arxiv.org/abs/1611.09007
Natural Language Processing (NLP) | LSTM, GRU | https://arxiv.org/abs/1606.0673
Face detection | CNN | https://arxiv.org/abs/1611.00851
International Journal on Bioinformatics & Biosciences (IJBB) Vol.3, No.2, June 2013
Application of three graph Laplacian based semi-supervised learning methods to the protein function
prediction problem
Loc Tran
University of Minnesota
[email protected]
Abstract:
Protein function prediction is an important problem in modern biology. In this paper, the un-normalized,
symmetric normalized, and random walk graph Laplacian based semi-supervised learning methods will be
applied to the integrated network combined from multiple networks to predict the functions of all yeast
proteins in these multiple networks. These multiple networks are network created from Pfam domain
structure, co-participation in a protein complex, protein-protein interaction network, genetic interaction
network, and network created from cell cycle gene expression measurements. Multiple networks are
combined with fixed weights instead of using convex optimization to determine the combination weights
due to high time complexity of convex optimization method. This simple combination method will not affect
the accuracy performance measures of the three semi-supervised learning methods. Experiment results
show that the un-normalized and symmetric normalized graph Laplacian based methods perform slightly
better than random walk graph Laplacian based method for integrated network. Moreover, the accuracy
performance measures of these three semi-supervised learning methods for integrated network are much
better than the best accuracy performance measures of these three methods for the individual network.
Keywords:
semi-supervised learning, graph Laplacian, yeast, protein, function
1. Introduction
Protein function prediction is an important problem in modern biology. Identifying the function
of proteins by biological experiments is very expensive and hard. Hence a lot of computational
methods have been proposed to infer the functions of the proteins by using various types of
information such as gene expression data and protein-protein interaction networks [1].
First, in order to predict protein function, sequence similarity algorithms [2, 3] can be employed to
find the homologies between the already annotated proteins and the un-annotated protein. Then the
annotated proteins with similar sequences can be used to assign the function to the un-annotated
protein. That is the classical way to predict protein function [4].
Second, to predict protein function, a graph (i.e. kernel), which is the natural model of the relationships
between proteins, can also be employed. In this model, the nodes represent proteins. The edges
represent the possible interactions between nodes. Then, machine learning methods such as the
Support Vector Machine [5], Artificial Neural Networks [4], the un-normalized graph Laplacian
based semi-supervised learning method [6,14], or the neighbor counting method [7] can be applied to
this graph to infer the functions of un-annotated proteins. The neighbor counting method labels the
protein with the function that occurs frequently in the protein's adjacent nodes in the protein-protein
interaction network. Hence the neighbor counting method does not utilize the full topology of
the network. However, the Artificial Neural Networks, Support Vector Machine, and un-normalized
graph Laplacian based semi-supervised learning method utilize the full topology of
the network. Moreover, the Artificial Neural Networks and Support Vector Machine are
supervised learning methods.
While the neighbor counting method, the Artificial Neural Networks, and the un-normalized
graph Laplacian based semi-supervised learning method are all based on the assumption that the
labels of two adjacent proteins in the graph are likely to be the same, SVM does not rely on this
assumption. The graphs used in the neighbor counting method, Artificial Neural Networks, and the
un-normalized graph Laplacian based semi-supervised learning method are very sparse. However, the
graph (i.e. kernel) used in SVM is fully connected.
Third, the Artificial Neural Networks method is applied to the single protein-protein interaction
network only. However, the SVM method and the un-normalized graph Laplacian based semi-supervised
learning method try to use a weighted combination of multiple networks (i.e. kernels),
such as the gene co-expression network and the protein-protein interaction network, to improve the
accuracy performance measures. While [5] (SVM method) determines the optimal weighted
combination of networks by solving a semi-definite problem, [6,14] (un-normalized graph
Laplacian based semi-supervised learning method) use a dual problem and gradient descent to
determine the weighted combination of networks.
In the last decade, the normalized graph Laplacian [8] and random walk graph Laplacian [9]
based semi-supervised learning methods have successfully been applied to some specific
classification tasks such as digit recognition and text classification. However, to the best of my
knowledge, the normalized graph Laplacian and random walk graph Laplacian based semi-supervised
learning methods have not yet been applied to the protein function prediction problem, and
hence their overall accuracy performance measure comparisons have not been done. In this paper,
we will apply the three un-normalized, symmetric normalized, and random walk graph Laplacian
based semi-supervised learning methods to the integrated network combined with fixed
weights. The five networks used for the combination are available from [6]. The main point of
these three methods is to let every node of the graph iteratively propagate its label information to
its adjacent nodes, and the process is repeated until convergence [8]. Moreover, since [6] has
pointed out that the integrated network combined with optimized weights has similar performance
to the integrated network combined with equal weights, i.e. without optimization, we will use the
integrated network combined with equal weights due to the high time-complexity of these
optimization methods. This type of combination will be discussed clearly in the next sections.
We will organize the paper as follows: Section 2 will introduce the random walk and symmetric
normalized graph Laplacian based semi-supervised learning algorithms in detail. Section 3 will
show how to derive the closed-form solutions of the normalized and un-normalized graph Laplacian
based semi-supervised learning from the regularization framework. In Section 4, we will apply these
three algorithms to the integrated network of the five networks available from [6]. These five
networks are the network created from Pfam domain structure, co-participation in a protein complex,
the protein-protein interaction network, the genetic interaction network, and the network created from
cell cycle gene expression measurements. Section 5 will conclude this paper and discuss future
research directions for this protein function prediction problem utilizing the hypergraph
Laplacian.
Claim:
We do not claim that the accuracy performance measures of these two methods will be better
than the accuracy performance measure of the un-normalized graph Laplacian based semi-supervised
learning method (i.e. the published method) in this protein function prediction problem. We just
carry out the comparisons.
To the best of my knowledge, no theoretical framework has been given to prove which
graph Laplacian method achieves the best accuracy performance measure in a classification
task. In other words, the accuracy performance measures of these three graph Laplacian based
semi-supervised learning methods depend on the datasets used. However, in [8], the authors
have pointed out that the accuracy performance measure of the symmetric normalized graph
Laplacian based semi-supervised learning method is better than the accuracy performance measures
of the random walk and un-normalized graph Laplacian based semi-supervised learning methods
in digit recognition and text categorization problems. Moreover, its accuracy performance
measure is also better than that of the Support Vector Machine method
(i.e. the best known classifier in the literature) in the two proposed digit recognition and text
categorization problems. This fact is worth investigating in the protein function prediction problem.
Again, we do not claim that our two proposed random walk and symmetric normalized graph
Laplacian based semi-supervised learning methods will perform better than the published method
(i.e. the un-normalized graph Laplacian method) in this protein function prediction problem. At
least, the accuracy performance measures of the two newly proposed methods are similar to or not
worse than the accuracy performance measure of the published method (i.e. the un-normalized
graph Laplacian method).
2. Algorithms
Given K networks in the dataset, the weights used to combine the individual networks into the
integrated network are w_1, ..., w_K.
Given a set of proteins {x_1, ..., x_l, x_{l+1}, ..., x_{l+u}}, where n = l + u is the total number of proteins
in the integrated network, let c be the total number of functional classes and let F in R^{n x c}
be the estimated label matrix for the set of proteins {x_1, ..., x_l, x_{l+1}, ..., x_{l+u}}, where the
point x_i is labeled as sign(F_{ij}) for each functional class j (1 <= j <= c). Please note that {x_1, ..., x_l}
is the set of all labeled points and {x_{l+1}, ..., x_{l+u}} is the set of all un-labeled points.
Let the initial label matrix Y in R^{n x c} for the n proteins in the network be defined as follows:

Y_{ij} = 1   if x_i is labeled as belonging to functional class j (1 <= i <= l),
Y_{ij} = -1  if x_i is labeled as not belonging to functional class j (1 <= i <= l),
Y_{ij} = 0   if x_i is un-labeled (l + 1 <= i <= l + u).

Our objective is to predict the labels of the un-labeled points x_{l+1}, ..., x_{l+u}. We can achieve this
objective by letting every node (i.e. protein) in the network iteratively propagate its label
information to its adjacent nodes; this process is repeated until convergence. These three
algorithms are based on three assumptions:
- local consistency: nearby proteins are likely to have the same function,
- global consistency: proteins on the same structure (cluster or sub-manifold) are likely to have the same function,
- these protein networks contain no self-loops.
Let W^(k) denote the k-th individual network in the dataset.
Random walk graph Laplacian based semi-supervised learning algorithm
In this section, we slightly change the original random walk graph Laplacian based semi-supervised
learning algorithm, which can be obtained from [9]. The outline of the new version of this
algorithm is as follows:

1. Form the affinity matrices W^(k) (for each k such that 1 <= k <= K):
   W^(k)_{ij} = exp(-||x_i - x_j||^2 / (2*sigma^2)) if i != j, and W^(k)_{ii} = 0.
2. Construct S^(k) = (D^(k))^{-1} W^(k), where D^(k) = diag(d^(k)_1, ..., d^(k)_n) and d^(k)_i = sum_j W^(k)_{ij}.
3. Iterate until convergence:
   F^(t+1) = alpha * sum_k w_k S^(k) F^(t) + (1 - alpha) Y, where alpha is an arbitrary parameter in [0,1].
4. Let F* be the limit of the sequence {F^(t)}. For each protein functional class j, label
   each protein x_i (l + 1 <= i <= l + u) as sign(F*_{ij}).

Next, we look for the closed-form solution of the random walk graph Laplacian based semi-supervised
learning. In other words, writing P = sum_k w_k S^(k) and F^(0) = Y, we need to show that

F^(t) = (alpha P)^t Y + (1 - alpha) * sum_{i=0}^{t-1} (alpha P)^i Y.

Thus, by induction, the above expression holds for every t. Since P is a stochastic matrix, its
eigenvalues are in [-1,1]. Moreover, since 0 < alpha < 1,

lim_{t -> inf} (alpha P)^t = 0   and   lim_{t -> inf} sum_{i=0}^{t-1} (alpha P)^i = (I - alpha P)^{-1}.

Therefore,

F* = lim_{t -> inf} F^(t) = (1 - alpha)(I - alpha P)^{-1} Y.

Now, from the above formula, we can compute F* directly.

The original random walk graph Laplacian based semi-supervised learning algorithm developed
by Zhu can be derived from the modified algorithm by setting alpha_i = 0 for 1 <= i <= l and
alpha_i = 1 for l + 1 <= i <= l + u. In other words, we can express F^(t+1) in matrix form as follows:

F^(t+1) = A P F^(t) + (I - A) Y,

where I is the identity matrix and A = diag(alpha_1, ..., alpha_n) is the diagonal matrix whose first l
diagonal entries are 0 and whose remaining u diagonal entries are 1, so that the labels of the labeled
points are clamped to their initial values.

Normalized graph Laplacian based semi-supervised learning algorithm
Next, we give a brief overview of the original normalized graph Laplacian based semi-supervised
learning algorithm, which can be obtained from [8]. The outline of this algorithm is as follows:

1. Form the affinity matrices W^(k) (for each k such that 1 <= k <= K):
   W^(k)_{ij} = exp(-||x_i - x_j||^2 / (2*sigma^2)) if i != j, and W^(k)_{ii} = 0.
2. Construct S^(k) = (D^(k))^{-1/2} W^(k) (D^(k))^{-1/2}, where D^(k) = diag(d^(k)_1, ..., d^(k)_n) and d^(k)_i = sum_j W^(k)_{ij}.
3. Iterate until convergence:
   F^(t+1) = alpha * sum_k w_k S^(k) F^(t) + (1 - alpha) Y, where alpha is an arbitrary parameter in [0,1].
4. Let F* be the limit of the sequence {F^(t)}. For each protein functional class j, label
   each protein x_i (l + 1 <= i <= l + u) as sign(F*_{ij}).

Next, we look for the closed-form solution of the normalized graph Laplacian based semi-supervised
learning. In other words, writing S = sum_k w_k S^(k), we need to show that
F* = lim_{t -> inf} F^(t) = (1 - alpha)(I - alpha S)^{-1} Y.

Suppose F^(0) = Y; then

F^(t) = (alpha S)^t Y + (1 - alpha) * sum_{i=0}^{t-1} (alpha S)^i Y.

Thus, by induction, the above expression holds for every t. Since for every integer k such that
1 <= k <= K the matrix (D^(k))^{-1} W^(k) is a stochastic matrix, the eigenvalues of
(D^(k))^{-1/2} W^(k) (D^(k))^{-1/2}, which is similar to (D^(k))^{-1} W^(k), belong to [-1,1].
Moreover, for every k, (D^(k))^{-1/2} W^(k) (D^(k))^{-1/2} is symmetric, so
sum_k w_k (D^(k))^{-1/2} W^(k) (D^(k))^{-1/2} is also symmetric. Therefore, by
using Weyl's inequality in [10] and the references therein, the largest eigenvalue of
sum_k w_k (D^(k))^{-1/2} W^(k) (D^(k))^{-1/2} is at most the sum of the largest eigenvalues of the
individual terms, and its smallest eigenvalue is at least the sum of their smallest
eigenvalues. Thus, the eigenvalues of S belong to [-1,1]. Moreover, since 0 < alpha < 1,

lim_{t -> inf} (alpha S)^t = 0   and   lim_{t -> inf} sum_{i=0}^{t-1} (alpha S)^i = (I - alpha S)^{-1}.

Therefore,

F* = lim_{t -> inf} F^(t) = (1 - alpha)(I - alpha S)^{-1} Y.

Now, from the above formula, we can compute F* directly.

3. Regularization Frameworks
In this section, we will develop the regularization framework for the normalized graph Laplacian
based semi-supervised learning iterative version. First, let’s consider the error function
E(F) = (1/2) sum_k w_k sum_{i,j=1}^{n} W^(k)_{ij} || F_i / sqrt(d^(k)_i) - F_j / sqrt(d^(k)_j) ||^2 + mu * sum_{i=1}^{n} || F_i - Y_i ||^2

In this error function E(F), F and Y belong to R^{n x c}. Please note that c is the total number of protein
functional classes, S = sum_k w_k (D^(k))^{-1/2} W^(k) (D^(k))^{-1/2}, and mu is the positive regularization parameter. Here

F = (F_1, ..., F_n)^T   and   Y = (Y_1, ..., Y_n)^T,

where F_i and Y_i denote the i-th rows of F and Y. E(F) stands for the sum of the smoothness constraint and the
square loss between the estimated label matrix and the initial label matrix.
Hence we can rewrite E(F) as follows:

E(F) = trace(F^T (I - S) F) + mu * trace((F - Y)^T (F - Y)).

Our objective is to minimize this error function. In other words, we solve

dE/dF = 0.

This will lead to

(I - S) F* + mu (F* - Y) = 0.

Let alpha = 1 / (1 + mu). Hence the solution F* of the above equation is

F* = (1 - alpha)(I - alpha S)^{-1} Y.

Also, please note that sum_k w_k (D^(k))^{-1} W^(k) is not a symmetric matrix; thus we cannot
develop the regularization framework for the random walk graph Laplacian based semi-supervised
learning iterative version.
Next, we will develop the regularization framework for the un-normalized graph Laplacian based
semi-supervised learning algorithms. First, let’s consider the error function
E(F) = (1/2) sum_k w_k sum_{i,j=1}^{n} W^(k)_{ij} || F_i - F_j ||^2 + mu * sum_{i=1}^{n} || F_i - Y_i ||^2

In this error function E(F), F and Y belong to R^{n x c}. Please note that c is the total number of
protein functional classes and mu is the positive regularization parameter. As before,

F = (F_1, ..., F_n)^T   and   Y = (Y_1, ..., Y_n)^T.

Here E(F) stands for the sum of the smoothness constraint and the square loss between the estimated
label matrix and the initial label matrix.
Hence we can rewrite E(F) as follows:

E(F) = trace(F^T L F) + mu * trace((F - Y)^T (F - Y)),

where the un-normalized Laplacian matrix of the combined network is L = sum_k w_k (D^(k) - W^(k)). Our
objective is to minimize this error function. In other words, we solve

dE/dF = 0.

This will lead to

sum_k w_k (D^(k) - W^(k)) F* + mu (F* - Y) = 0.

Hence the solution F* of the above equation is

F* = ( (1/mu) * sum_k w_k (D^(k) - W^(k)) + I )^{-1} Y.

Similarly, we can also obtain the other form of the solution F* of the normalized graph Laplacian
based semi-supervised learning algorithm as follows (note that the normalized Laplacian matrix of the
combined network is I - S, with S = sum_k w_k (D^(k))^{-1/2} W^(k) (D^(k))^{-1/2}):

F* = ( (1/mu) * (I - S) + I )^{-1} Y.
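The closed-form solutions above are straightforward to compute. The following is a minimal NumPy sketch (not the authors' Matlab code), assuming the K networks are given as dense n x n arrays and combined with equal weights as described in the text; variable names follow the formulas, not any existing implementation.

import numpy as np

def normalized_operator(W):
    d = W.sum(axis=1)
    d[d == 0] = 1.0                       # guard against isolated nodes
    d_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    return d_inv_sqrt @ W @ d_inv_sqrt    # D^(-1/2) W D^(-1/2)

def normalized_solution(W_list, Y, alpha=0.85):
    # Equal-weight combination of the per-network normalized operators.
    S = sum(normalized_operator(W) for W in W_list) / len(W_list)
    n = S.shape[0]
    return (1 - alpha) * np.linalg.solve(np.eye(n) - alpha * S, Y)

def unnormalized_solution(W_list, Y, mu=1.0):
    # Equal-weight combination of the per-network un-normalized Laplacians.
    L = sum(np.diag(W.sum(axis=1)) - W for W in W_list) / len(W_list)
    n = L.shape[0]
    return np.linalg.solve(L / mu + np.eye(n), Y)

# F = normalized_solution([W1, W2, W3, W4, W5], Y)
# predictions = np.sign(F)   # protein i is assigned class j where F[i, j] > 0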
4. Experiments and results
The Dataset
The three symmetric normalized, random walk, and un-normalized graph Laplacian based semi-supervised
learning methods are applied to the dataset obtained from [6]. This dataset is composed of 3588
yeast proteins from Saccharomyces cerevisiae, annotated with 13 highest-level functional classes
from MIPS Comprehensive Yeast Genome Data (Table 1). This dataset contains five networks of
pairwise relationships, which are very sparse. These five networks are the network created from Pfam
domain structure (W^(1)), co-participation in a protein complex (W^(2)), the protein-protein interaction
network (W^(3)), the genetic interaction network (W^(4)), and the network created from cell cycle gene
expression measurements (W^(5)).
The first network, W^(1), was obtained from the Pfam domain structure of the given genes. At the
time of the curation of the dataset, Pfam contained 4950 domains. For each protein, a binary
vector of this length was created. Each element of this vector represents the presence or absence
of one Pfam domain. The value of W^(1)_{ij} is then the normalization of the dot product between the
domain vectors of proteins i and j.
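As an illustration only (the paper does not specify the exact normalization, so cosine normalization of the binary vectors is assumed here), such a similarity matrix could be computed as follows.

import numpy as np

def pfam_similarity(domain_vectors):
    # domain_vectors: (n_proteins, n_domains) binary matrix, 1 = domain present.
    x = domain_vectors.astype(float)
    dot = x @ x.T                           # raw dot products between domain vectors
    norms = np.sqrt(np.diag(dot))
    norms[norms == 0] = 1.0                 # avoid division by zero for empty vectors
    w = dot / np.outer(norms, norms)        # normalized similarity in [0, 1]
    np.fill_diagonal(w, 0.0)                # no self-loops
    return w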
The fifth network, W^(5), was obtained from gene expression data collected by [12]. In this
network, an edge with weight 1 is created between two proteins if their gene expression profiles
are sufficiently similar.
The remaining three networks were created with data from the MIPS Comprehensive Yeast
Genome Database (CYGD). W^(2) is composed of binary edges indicating whether the given
proteins are known to co-participate in a protein complex. The binary edges of W^(3) indicate
known protein-protein physical interactions. Finally, the binary edges in W^(4) indicate known
protein-protein genetic interactions.
The protein functional classes these proteins were assigned to are the 13 functional classes
defined by CYGD at the time of the curation of this dataset. A brief description of these
functional classes is given in the following Table 1.
Table 1: 13 CYGD functional classes
Classes
1   Metabolism
2   Energy
3   Cell cycle and DNA processing
4   Transcription
5   Protein synthesis
6   Protein fate
7   Cellular transportation and transportation mechanism
8   Cell rescue, defense and virulence
9   Interaction with cell environment
10  Cell fate
11  Control of cell organization
12  Transport facilitation
13  Others
Results
In this section, we experiment with the above three methods in terms of the classification accuracy
performance measure. All experiments were implemented in Matlab 6.5 on a virtual machine.
For the comparisons discussed here, three-fold cross-validation is used to compute the
accuracy performance measures for each class and each method. The accuracy performance
measure Q is given as follows:

Q = (TP + TN) / (TP + TN + FP + FN)

True Positive (TP), True Negative (TN), False Positive (FP), and False Negative (FN) are defined
in the following table 2
Table 2: Definitions of TP, TN, FP, and FN
                      Predicted Positive        Predicted Negative
Known Positive        True Positive (TP)        False Negative (FN)
Known Negative        False Positive (FP)       True Negative (TN)
In these experiments, the parameter alpha is set to 0.85 and the regularization parameter mu to 1. For this
dataset, Table 3 shows the accuracy performance measures of the three methods applied to the
integrated network for the 13 functional classes.
Table 3: Comparisons of symmetric normalized, random walk, and un-normalized graph
Laplacian based methods using integrated network
Functional   Accuracy Performance Measures (%) — Integrated Network
Classes      Normalized   Random Walk   Un-normalized
1            76.87        76.98         77.20
2            85.90        85.87         85.81
3            78.48        78.48         77.56
4            78.57        78.54         77.62
5            86.01        85.95         86.12
6            80.43        80.49         80.32
7            82.02        81.97         81.83
8            84.17        84.14         84.17
9            86.85        86.85         86.87
10           80.88        80.85         80.52
11           85.03        85.03         85.92
12           87.49        87.46         87.54
13           88.32        88.32         88.32
From Table 3 above, we recognize that the symmetric normalized and un-normalized graph
Laplacian based semi-supervised learning methods perform slightly better than the random walk
graph Laplacian based semi-supervised learning method.
Next, we show the accuracy performance measures of the three methods for each individual
network W^(k) in the following tables:
Table 4: Comparisons of symmetric normalized, random walk, and un-normalized graph
Laplacian based methods using network W^(1)
Functional   Accuracy Performance Measures (%) — Network W^(1)
Classes      Normalized   Random Walk   Un-normalized
1            64.24        63.96         64.30
2            71.01        71.07         71.13
3            63.88        63.66         63.91
4            65.55        65.41         65.47
5            71.35        71.46         71.24
6            66.95        66.69         67.11
7            67.89        67.70         67.84
8            69.29        69.29         69.31
9            71.49        71.40         71.52
10           65.30        65.47         65.50
11           70.09        70.04         70.12
12           72.71        72.66         72.63
13           72.85        72.77         72.85
Table 5: Comparisons of symmetric normalized, random walk, and un-normalized graph
Laplacian based methods using network W^(2)
Functional   Accuracy Performance Measures (%) — Network W^(2)
Classes      Normalized   Random Walk   Un-normalized
1            24.64        24.64         24.64
2            27.84        27.84         27.79
3            23.16        23.16         23.08
4            22.60        22.60         22.52
5            26.37        26.37         26.23
6            24.39        24.39         24.19
7            26.11        26.11         26.37
8            27.65        27.65         27.62
9            28.43        28.43         28.34
10           25.81        25.81         25.22
11           27.01        27.01         25.98
12           28.43        28.43         28.40
13           28.54        28.54         28.54
Table 6: Comparisons of symmetric normalized, random walk, and un-normalized graph
Laplacian based methods using network W^(3)
Functional   Accuracy Performance Measures (%) — Network W^(3)
Classes      Normalized   Random Walk   Un-normalized
1            29.63        29.57         29.40
2            34.11        34.11         33.95
3            27.93        27.90         27.70
4            28.51        28.48         28.57
5            34.03        34.03         33.92
6            30.57        30.55         30.04
7            32.08        32.08         32.02
8            33.05        33.03         32.92
9            33.78        33.78         33.75
10           30.18        30.18         29.99
11           32.64        32.64         32.53
12           34.53        34.53         34.45
13           34.48        34.48         34.31
Table 7: Comparisons of symmetric normalized, random walk, and un-normalized graph
Laplacian based methods using network W^(4)
Functional   Accuracy Performance Measures (%) — Network W^(4)
Classes      Normalized   Random Walk   Un-normalized
1            18.31        18.28         18.26
2            20.93        20.90         20.88
3            18.09        18.06         18.09
4            18.39        18.39         18.39
5            21.07        21.07         21.04
6            18.98        18.98         18.90
7            18.73        18.73         18.67
8            19.90        19.90         19.62
9            20.04        20.04         19.93
10           17.31        17.28         17.17
11           19.18        19.18         19.09
12           20.54        20.54         20.57
13           20.54        20.54         20.48
Table 8: Comparisons of symmetric normalized, random walk, and un-normalized graph
Laplacian based methods using network W^(5)
Functional   Accuracy Performance Measures (%) — Network W^(5)
Classes      Normalized   Random Walk   Un-normalized
1            26.45        26.45         26.51
2            29.21        29.21         29.21
3            25.89        25.78         25.92
4            26.76        26.62         26.76
5            29.18        29.18         29.18
6            27.42        27.23         27.42
7            28.21        28.18         28.01
8            28.51        28.54         28.54
9            29.71        29.68         29.65
10           26.81        26.95         27.01
11           28.79        28.82         28.85
12           30.16        30.13         30.16
13           30.18        30.16         30.18
From the above tables, we can see that the un-normalized (i.e. the published) and the symmetric
normalized graph Laplacian based semi-supervised learning methods perform slightly better than the
random walk graph Laplacian based semi-supervised learning method on networks W^(1) and W^(5). For
W^(2), W^(3), and W^(4), the random walk and the symmetric normalized graph Laplacian based
semi-supervised learning methods perform slightly better than the un-normalized (i.e. the published)
graph Laplacian based semi-supervised learning method. W^(2), W^(3), and W^(4) are the three
networks created with data from the MIPS Comprehensive Yeast Genome Database (CYGD).
Moreover, the accuracy performance measures of all three methods for W^(2), W^(3), W^(4), and
W^(5) are unacceptable, since they are worse than random guessing. Again, this occurs due to the
sparseness of these four networks.
For the integrated network and every individual network except one, we recognize that the
symmetric normalized graph Laplacian based semi-supervised learning method performs slightly
better than the other two graph Laplacian based methods.
Finally, the accuracy performance measures of these three methods for the integrated network are
much better than the best accuracy performance measures of these three methods for any individual
network. Due to the sparseness of the networks, the accuracy performance measures for the
individual networks W^(2), W^(3), W^(4), and W^(5) are unacceptable: they are worse than random guessing.
The best accuracy performance measure of these three methods for an individual network is
shown in the following supplemental table.
Supplement Table: Comparisons of the un-normalized graph Laplacian based method using
network W^(1) and the integrated network
Functional   Accuracy Performance Measures (%)
Classes      Integrated network (un-normalized)   Best individual network W^(1) (un-normalized)
1            77.20                                64.30
2            85.81                                71.13
3            77.56                                63.91
4            77.62                                65.47
5            86.12                                71.24
6            80.32                                67.11
7            81.83                                67.84
8            84.17                                69.31
9            86.87                                71.52
10           80.52                                65.50
11           84.92                                70.12
12           87.54                                72.63
13           88.32                                72.85
5. Conclusion
The detailed iterative algorithms and regularization frameworks for the three normalized, random
walk, and un-normalized graph Laplacian based semi-supervised learning methods applied to
the integrated network combined from multiple networks have been developed. These three methods are
successfully applied to the protein function prediction problem (i.e. a classification problem).
Moreover, the comparison of the accuracy performance measures of these three methods has
been done.
These three methods can also be applied to cancer classification problems using gene expression
data.
Moreover, these three methods can be used not only in classification problems but also in ranking
problems. Specifically, given a set of genes (i.e. the queries) making up a protein complex/pathway,
or given a set of genes (i.e. the queries) involved in a specific disease (e.g. leukemia), these
three methods can also be used to find more potential members of the complex/pathway, or more
genes involved in the same disease, by ranking genes in the gene co-expression network (derived
from gene expression data), the protein-protein interaction network, or the integrated network of
them. The genes with the highest ranks will then be selected and checked by biologist experts
to see if the extended genes in fact belong to the same complex/pathway or are involved in the
same disease. These problems are also called complex/pathway membership determination and
biomarker discovery in cancer classification. In the cancer classification problem, only the sub-matrix
of the gene expression data corresponding to the extended gene list will be used in cancer classification
instead of the whole gene expression data.
Finally, to the best of my knowledge, the normalized, random walk, and un-normalized
hypergraph Laplacian based semi-supervised learning methods have not been applied to the
protein function prediction problem. These methods applied to protein function prediction are
worth investigating, since [11] has shown that hypergraph Laplacian based semi-supervised learning
methods outperform the graph Laplacian based semi-supervised learning methods in text
categorization and letter recognition.
References
1. Shin H.H., Lisewski A.M. and Lichtarge O. Graph sharpening plus graph integration: a synergy that
   improves protein functional classification. Bioinformatics 23(23), 3217-3224, 2007.
2. Pearson W.R. and Lipman D.J. Improved tools for biological sequence comparison. Proceedings of
   the National Academy of Sciences of the United States of America, 85(8), 2444-2448, 1998.
3. Lockhart D.J., Dong H., Byrne M.C., Follettie M.T., Gallo M.V., Chee M.S., Mittmann M., Wang
   C., Kobayashi M., Horton H., and Brown E.L. Expression monitoring by hybridization to high-density
   oligonucleotide arrays. Nature Biotechnology, 14(13), 1675-1680, 1996.
4. Shi L., Cho Y., and Zhang A. Prediction of Protein Function from Connectivity of Protein
   Interaction Networks. International Journal of Computational Bioscience, Vol. 1, No. 1, 2010.
5. Lanckriet G.R.G., Deng M., Cristianini N., Jordan M.I., and Noble W.S. Kernel-based data fusion
   and its application to protein function prediction in yeast. Pacific Symposium on Biocomputing
   (PSB), 2004.
6. Tsuda K., Shin H.H., and Schoelkopf B. Fast protein classification with multiple networks.
   Bioinformatics (ECCB'05), 21(Suppl. 2):ii59-ii65, 2005.
7. Schwikowski B., Uetz P., and Fields S. A network of protein–protein interactions in yeast. Nature
   Biotechnology, 18(12), 1257-1261, 2000.
8. Zhou D., Bousquet O., Lal T.N., Weston J. and Schölkopf B. Learning with Local and Global
   Consistency. Advances in Neural Information Processing Systems (NIPS) 16, 321-328. (Eds.) S.
   Thrun, L. Saul and B. Schölkopf, MIT Press, Cambridge, MA, 2004.
9. Zhu X. and Ghahramani Z. Learning from labeled and unlabeled data with label propagation.
   Technical Report CMU-CALD-02-107, Carnegie Mellon University, 2002.
10. Knutson A. and Tao T. Honeycombs and sums of Hermitian matrices. Notices Amer. Math. Soc.
    48, no. 2, 175-186, 2001.
11. Zhou D., Huang J. and Schölkopf B. Learning with Hypergraphs: Clustering, Classification, and
    Embedding. Advances in Neural Information Processing Systems (NIPS) 19, 1601-1608. (Eds.) B.
    Schölkopf, J.C. Platt and T. Hofmann, MIT Press, Cambridge, MA, 2007.
12. Spellman P., Sherlock G., et al. Comprehensive identification of cell cycle-regulated genes of
    the yeast Saccharomyces cerevisiae by microarray hybridization. Mol. Biol. Cell, 9:3273-3297,
    1998.
13. Luxburg U. A Tutorial on Spectral Clustering. Statistics and Computing 17(4): 395-416, 2007.
14. Shin H., Tsuda K., and Schoelkopf B. Protein functional class prediction with a combined
    graph. Expert Systems with Applications, 36:3284-3292, 2009.
XQOWL: An Extension of XQuery
for OWL Querying and Reasoning
Jesús M. Almendros-Jiménez∗
Dpto. de Informática
University of Almería
04120-Almería, SPAIN
[email protected]
One of the main aims of the so-called Web of Data is to be able to handle heterogeneous resources
where data can be expressed in either XML or RDF. The design of programming languages able to
handle both XML and RDF data is a key target in this context. In this paper we present a framework
called XQOWL that makes it possible to handle XML and RDF/OWL data with XQuery. XQOWL
can be considered an extension of the XQuery language that connects XQuery with SPARQL
and OWL reasoners. XQOWL embeds SPARQL queries (via the Jena SPARQL engine) in XQuery and
enables calls to OWL reasoners (HermiT, Pellet and FaCT++) from XQuery. It permits combining
queries against XML and RDF/OWL resources as well as reasoning with RDF/OWL data.
Therefore input data can be either XML or RDF/OWL and output data can be formatted in XML
(also using RDF/OWL XML serialization).
1 Introduction
There are two main formats to publish data on the Web. The first format is XML, which is based on a
tree-based model and for which the XPath and XQuery languages for querying, and the XSLT language
for transformation, have been proposed. The second format is RDF which is a graph-based model and for
which the SPARQL language for querying and transformation has been proposed. Both formats (XML
and RDF) can be used for describing data of a certain domain of interest. XML is used for instance in
the Dublin Core 1 , MPEG-7 2 , among others, while RDF is used in DBPedia 3 and LinkedLifeData 4 ,
among others. The number of organizations that offer their data on the Web has been increasing in recent
years. The so-called Linked Open Data initiative5 aims to interconnect the published Web data.
XML and RDF share the same end but they have different data models and query/transformation
languages. Some data can be available in XML format and not in RDF format and vice versa. The W3C
(World Wide Web Consortium) 6 proposes transformations from XML data to RDF data (called lifting),
and vice versa (called lowering). RDF has XML-based representations (called serializations) that makes
possible to represent in XML the graph based structure of RDF. However, XML-based languages are not
usually used to query/transform serializations of RDF. Rather, SPARQL, whose syntax resembles SQL
and which abstracts from the XML representation of RDF, is used to query RDF. The same happens when data
are available in XML format: queries and transformations are usually expressed in XPath/XQuery/XSLT,
instead of transforming XML to RDF and using SPARQL.
∗ This work was supported by the EU (FEDER) and the Spanish MINECO Ministry (Ministerio de Economía y Competitividad) under grant TIN2013-44742-C4-4-R, as well as by the Andalusian Regional Government (Spain) under Project P10-TIC-6114.
1 http://www.dublincore.org/.
2 http://mpeg.chiariglione.org/.
3 http://www.dbpedia.org/.
4 http://linkedlifedata.com/.
5 http://linkeddata.org/
6 http://www.w3.org/.
One of the main aims of the so-called Web of Data is to be able to handle heterogeneous resources
where data can be expressed in either XML or RDF. The design of programming languages able to
handle both XML and RDF data is a key target in this context and some recent proposals have been
presented to this end. One of the most well-known is XSPARQL [6], which is a hybrid language that combines
XQuery and SPARQL, allowing both XML and RDF to be queried. XSPARQL extends the XQuery syntax with
new expressions able to traverse an RDF graph and construct the graph of the result of a query on RDF.
One of the uses of XSPARQL is the definition of lifting and lowering from XML to RDF and vice versa.
But also XSPARQL is able to query XML and RDF data without transforming them, and obtaining the
result in any of the formats. They have defined a formal semantics for XSPARQL which is an extension
of the XQuery semantics. The SPARQL2XQuery interoperability framework [5] aims to overcome the
same problem by using SPARQL as the query language for both formats (XML and RDF), where
SPARQL queries are transformed into XQuery queries by mapping XML Schemas into RDF metadata.
In early approaches, SPARQL queries are embedded in XQuery and XSLT [8] and XPath expressions
are embedded in SPARQL queries [7].
OWL is an ontology language working with concepts (i.e., classes) and roles (i.e., object/data properties) as well as with individuals (i.e., instances) which fill concepts and roles. OWL can be considered
as an extension of RDF in which a richer vocabulary allows to express new relationships. OWL offers
more complex relationships than RDF between entities including means to limit the properties of classes
with respect to the number and type, means to infer that items with various properties are members of a
particular class, and a well-defined model of property inheritance. OWL reasoning [17] is a topic of research of increasing interest in the literature. Most of OWL reasoners (for instance, HermiT [12], Racer
[15], FaCT++ [18], Pellet [16]) are based on tableau-based decision procedures.
In this context, we can distinguish between (1) reasoning tasks and (2) querying tasks from a given
ontology. The most typical (1) reasoning tasks, with regard to a given ontology, include: (a) instance
checking, that is, whether a particular individual is a member of a given concept, (b) relation checking,
that is, whether two individuals hold a given role, (c) subsumption, that is, whether a concept is a subset
of another concept, (d) concept consistency, that is, consistency of the concept relationships, and (e) a
more general case of consistency checking is ontology consistency in which the problem is to decide
whether a given ontology has a model. However, one can be also interested in (2) querying tasks such
as: (a) instance retrieval, which means to retrieve all the individuals of a given concept, and (b) property
fillers retrieval which means to retrieve all the individuals which are related to a given individual with
respect to a given role.
SPARQL provides mechanisms for querying tasks while OWL reasoners are suitable for reasoning
tasks. SPARQL is a query language for RDF/OWL triples whose syntax resembles SQL. OWL reasoners
implement a complex deduction procedure including ontology consistency checking that SPARQL is not
able to carry out. Therefore SPARQL/OWL reasoners are complementary in the world of OWL.
In this paper we present a framework called XQOWL that makes possible to handle XML and RDF/OWL data with XQuery. XQOWL can be considered as an extension of the XQuery language that
connects XQuery with SPARQL and OWL reasoners. XQOWL embeds SPARQL queries (via Jena
SPARQL engine) in XQuery and enables to make calls to OWL reasoners (HermiT, Pellet and FaCT++)
from XQuery. It permits to combine queries against XML and RDF/OWL resources as well as to reason
with RDF/OWL data. Therefore input data can be either XML or RDF/OWL and output data can be
formatted in XML (also using RDF/OWL XML serialization). We present two case studies: the first one
consists of lowering and lifting similar to that presented in [6]; and in the second one XML
analysis is carried out by mapping XML to an ontology and using a reasoner.
Thus the framework proposes to embed SPARQL code in XQuery as well as to make calls to OWL
reasoners from XQuery. With this aim a Java API has been implemented on top of the OWL API [11] and
OWL Reasoner API [10] that makes possible to interconnect XQuery with SPARQL and OWL reasoners.
The Java API is invoked from XQuery thanks to the use of the Java Binding facility available in most of
XQuery processors (this is the case, for instance, of BaseX [9], Exist [14] and Saxon [13]). The Java API
enables to connect XQuery to HermiT, Pellet and FaCT++ reasoners as well as to Jena SPARQL engine.
The Java API returns the results of querying and reasoning in XML format which can be handled from
XQuery. It means that querying and reasoning RDF/OWL with XQOWL one can give XML format to
results in either XML or RDF/OWL. In particular, lifting and lowering is possible in XQOWL.
Therefore our proposal can be seen as an extension of the proposed approaches for combining
SPARQL and XQuery. Our XQOWL framework is mainly focused on the use of XQuery for querying and reasoning with OWL ontologies. It makes possible to write complex queries that combines
SPARQL queries with reasoning tasks. As far as we know our proposal is the first to provide such a
combination.
The implementation has been tested with the BaseX processor [9] and can be downloaded from our
Web site http://indalog.ual.es/XQOWL. There the XQOWL API and the examples of the paper are
available as well as installation instructions.
Let us remark that here we continue our previous works on combination of XQuery and the Semantic
Web. In [1] we have described how to extend the syntax of XQuery in order to query RDF triples. After,
in [2] we have presented a (Semantic Web) library for XQuery which makes possible to retrieve the
elements of an ontology as well as to use SWRL. Here, we have followed a new direction, by embedding
existent query languages (SPARQL) and reasoners in XQuery.
The structure of the paper is as follows. Section 2 will show an example of OWL ontology used
in the rest of the paper as running example. Section 3 will describe XQOWL: the Java API as well as
examples of use. Section 4 will present the case study of XML analysis by using an ontology. Finally,
Section 5 will conclude and present future work.
2 OWL
In this section we show an example of ontology which will be used in the rest of the paper as running
example. Let us suppose an ontology about a social network (see Table 1) in which we define ontology
classes: user, user_item, activity; and event, message ⊑ activity (1); and wall, album ⊑ user_item (2).
In addition, we can define (object) properties as follows: created_by which is a property whose domain
is the class activity and the range is user (3), and has two sub-properties: added_by, sent_by (4) (used
for events and messages, respectively).
We have also belongs_to which is a functional property (5) whose domain is user_item and range
is user (6); friend_of which is an irreflexive (7) and symmetric (8) property whose domain and range
is user (9); invited_to which is a property whose domain is user and range is event (10); recommended_friend_of which is a property whose domain and range is user (11), and is the composition
of friend_of and friend_of (12); replies_to which is an irreflexive property (13) whose domain and
range is message (14); written_in which is a functional property (15) whose domain is message and
range is wall (16); attends_to which is a property whose domain is user and range is event (17) and is
the inverse of the property confirmed_by (18); i_like_it which is a property whose domain is user and
Ontology
(1) event, message ⊑ activity                  (2) wall, album ⊑ user_item
(3) ∀ created_by.activity ⊑ user               (4) added_by, sent_by ⊑ created_by
(5) ⊤ ⊑ ≤ 1 belongs_to                         (6) ∀ belongs_to.user_item ⊑ user
(7) ∃ friend_of.Self ⊑ ⊥                       (8) friend_of⁻ ⊑ friend_of
(9) ∀ friend_of.user ⊑ user                    (10) ∀ invited_to.user ⊑ event
(11) ∀ recommended_friend_of.user ⊑ user       (12) friend_of · friend_of ⊑ recommended_friend_of
(13) ∃ replies_to.Self ⊑ ⊥                     (14) ∀ replies_to.message ⊑ message
(15) ⊤ ⊑ ≤ 1 written_in                        (16) ∀ written_in.message ⊑ wall
(17) ∀ attends_to.user ⊑ event                 (18) attends_to⁻ ≡ confirmed_by
(19) ∀ i_like_it.user ⊑ activity               (20) i_like_it⁻ ≡ liked_by
(21) ∀ content.message ⊑ String                (22) ∀ date.event ⊑ DateTime
(23) ∀ name.event ⊑ String                     (24) ∀ nick.user ⊑ String
(25) ∀ password.user ⊑ String                  (26) event ⊓ ∃confirmed_by.user ⊑ popular
(27) activity ⊓ ∃liked_by.user ⊑ popular       (28) activity ⊑ ≤ 1 created_by.user
(29) message ⊓ event ≡ ⊥
Table 1: Social Network Ontology (in Description Logic Syntax)
range is activity (19), which is the inverse of the property liked_by (20).
Besides, there are some (data) properties: the content of a message (21), the date (22) and name (23)
of an event, and the nick (24) and password (25) of a user. Finally, we have defined the concepts popular
which are events confirmed_by some user and activities liked_by some user ((26) and (27)) and we have
defined constraints: activities are created_by at most one user (28) and message and event are disjoint
classes (29). Let us now suppose the set of individuals and object/data property instances of Table 2.
From OWL reasoning we can deduce new information. For instance, the individual message1 is an
activity, because message is a subclass of activity, and the individual event1 is also an activity because
event is a subclass of activity. The individual wall_jesus is an user_item because wall is a subclass
of user_item. These inferences are obtained from the subclass relation. In addition, object properties
give us more information. For instance, the individuals message1, message2 and event1 have been created_by jesus, luis and luis, respectively, since the properties sent_by and added_by are sub-properties
of created_by. In addition, the individual luis is a friend_of jesus because friend_of is symmetric. More
interesting is that the individual vicente is a recommended_friend_of jesus, because jesus is a friend_of
luis, and luis is a friend_of vicente, which is deduced from the definition of recommended_friend_of,
which is the composition of friend_of and friend_of. Besides, the individual event1 is confirmed_by vicente, because vicente attends_to event1 and the properties confirmed_by and attends_to are inverses.
Finally, there are popular concepts: event1 and message2; the first one has been confirmed_by vicente
and the second one is liked_by vicente.
The previous ontology is consistent. The ontology might introduce elements that make the ontology
inconsistent. We might add a user being friend_of of him(er) self. Even more, we can define that certain
events and messages are created_by (either added_by or sent_by) more than one user. Also a message
can reply to itself. However, there are elements that do not affect ontology consistency. For instance,
event2 has not been created_by users. The ontology only requires to have at most one creator. Also,
messages have not been written_in a wall.
Ontology Instance
user(jesus), nick(jesus,jalmen),
password(jesus,passjesus), friend_of(jesus,luis)
user(luis), nick(luis,Iamluis), password(luis,luis0000)
user(vicente), nick(vicente,vicente), password(vicente,vicvicvic),
friend_of(vicente,luis), i_like_it(vicente,message2),
invited_to(vicente,event1), attends_to(vicente,event1)
event(event1), added_by(event1,luis),
name(event1,“Next conference”), date(event1,21/10/2014)
event(event2)
message(message1), sent_by(message1,jesus),
content(message1,“I have sent the paper”)
message(message2), sent_by(message2,luis),
content(message2,“good luck!”), replies_to(message2,message1)
wall(wall_jesus), belongs_to(wall_jesus,jesus)
wall(wall_luis), belongs_to(wall_luis,luis)
wall(wall_vicente), belongs_to(wall_vicente,vicente)
Table 2: Individuals and object/data properties of the ontology
Java API
public OWLReasoner getOWLReasonerHermiT(OWLOntology ontology)
public OWLReasoner getOWLReasonerPellet(OWLOntology ontology)
public OWLReasoner getOWLReasonerFact(OWLOntology ontology)
public String OWLSPARQL(String filei,String queryStr)
public <T extends OWLAxiom> String OWLQuerySetAxiom(Set<T> axioms)
public <T extends OWLEntity> String[] OWLQuerySetEntity(Set<T> elems)
public <T extends OWLEntity> String[] OWLReasonerNodeEntity(Node <T> elem)
public <T extends OWLEntity> String[] OWLReasonerNodeSetEntity(NodeSet<T> elems)
Table 3: Java API of XQOWL
3 XQOWL
XQOWL allows SPARQL queries to be embedded in XQuery. It also makes it possible to make calls to OWL
reasoners. With this aim a Java API has been developed.
3.1 The Java API
Now, we show the main elements of the Java API developed for connecting XQuery and SPARQL and
OWL reasoners. Basically, the Java API has been developed on top of the OWL API and the OWL
Reasoner API and makes possible to retrieve results from SPARQL and OWL reasoners. The elements
of the library are shown in Table 3.
The first three elements of the library: getOWLReasonerHermiT, getOWLReasonerPellet and getOWLReasonerFact make possible to instantiate HermiT, Pellet and FaCT++ reasoners. For instance, the code
of getOWLReasonerHermiT is as follows:
public OWLReasoner getOWLReasonerHermiT(OWLOntology ontology) {
  org.semanticweb.HermiT.Reasoner reasoner = new Reasoner(ontology);
  reasoner.precomputeInferences(InferenceType.CLASS_HIERARCHY,
                                InferenceType.CLASS_ASSERTIONS,
                                ...);
  return reasoner;
}
The fourth element of the library, OWLSPARQL, makes it possible to call the Jena SPARQL engine. The input of this method is an ontology stored in a file and a string representing the SPARQL query. The output is the name of a file containing the result of the query. The code of OWLSPARQL is as follows:
public String OWLSPARQL(String filei, String queryStr)
    throws FileNotFoundException {
  OntModel model = ModelFactory.createOntologyModel();
  model.read(filei);
  com.hp.hpl.jena.query.Query query = QueryFactory.create(queryStr);
  ResultSet result =
      (ResultSet) SparqlDLExecutionFactory.create(query, model).execSelect();
  String fileName = "./tmp/" + result.hashCode() + "result.owl";
  File f = new File(fileName);
  FileOutputStream file = new FileOutputStream(f);
  ResultSetFormatter.outputAsXML(file, (com.hp.hpl.jena.query.ResultSet) result);
  try { file.close(); } catch (IOException e) { e.printStackTrace(); }
  return fileName;
}
We can see in the code that the result of the query is obtained in XML format and stored in a file.
The remaining elements of the Java API (i.e., OWLQuerySetAxiom, OWLQuerySetEntity, OWLReasonerNodeSetEntity and OWLReasonerNodeEntity) make it possible to handle the results of calls to SPARQL and
OWL reasoners. OWL Reasoners implement Java interfaces of the OWL API for storing OWL elements.
The main Java interfaces are OWLAxiom and OWLEntity. OWLAxiom is a Java interface which is a
super-interface of all the types of OWL axioms: OWLSubClassOfAxiom, OWLSubDataPropertyOfAxiom, OWLSubObjectPropertyOfAxiom, etc. OWLEntity is a Java interface which is a super-interface of
all types of OWL elements: OWLClass, OWLDataProperty, OWLDatatype, etc.
The XQOWL API includes the method OWLQuerySetAxiom, which returns the name of a file in which a set of axioms is included. It also includes OWLQuerySetEntity, which returns in an array the URIs of a set of entities. Moreover, OWLReasonerNodeEntity returns in an array the URIs of a node. Finally, OWLReasonerNodeSetEntity returns in an array the URIs of a set of nodes. For instance, the code of
OWLQuerySetEntity is as follows:
public <T extends OWLEntity> String[] OWLQuerySetEntity(Set<T> elems) {
  String[] result = new String[elems.size()];
  Iterator<T> it = elems.iterator();
  for (int i = 0; i < elems.size(); i++) {
    result[i] = it.next().toStringID();
  }
  return result;
}
3.2 XQOWL: SPARQL
XQOWL is an extension of the XQuery language. Firstly, XQOWL allows XQuery queries to be written in which calls to SPARQL queries are made and in which the results of the SPARQL queries, in XML format (see [4]), can be handled by XQuery. In XQOWL, XQuery variables can be bound to the results of SPARQL queries and, vice versa, XQuery-bound variables can be used in SPARQL expressions. Therefore, in XQOWL both XQuery and SPARQL queries can share variables.
Example 3.1 For instance, the following query returns the individuals of concepts user and event in the
social network:
declare namespace spql = "http://www.w3.org/2005/sparql-results#";
declare namespace xqo = "java:xqowl.XQOWL";
let $model := "socialnetwork.owl"
for $class in ("sn:user", "sn:event")
return
  let $queryStr := concat(
    "PREFIX rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#>
     PREFIX sn: <http://www.semanticweb.org/socialnetwork.owl#>
     SELECT ?Ind
     WHERE { ?Ind rdf:type ", $class, " }")
  return
    let $xqo := xqo:new()
    let $res := xqo:OWLSPARQL($xqo, $model, $queryStr)
    return
      doc($res)/spql:sparql/spql:results/spql:result/spql:binding/spql:uri/text()
Let us observe that the names of the classes (i.e., sn:user and sn:event) are given by an XQuery variable (i.e., $class) in a for expression, which is passed as a parameter of the SPARQL expression. In addition, the result is obtained in an XQuery variable (i.e., $res). Here OWLSPARQL of the XQOWL API is used to call the Jena SPARQL engine, which returns a file name (a temporary file) in which the result is found. Now, $res can be used from XQuery to obtain the URIs of the elements:
doc($res)/spql:sparql/spql:results/spql:result/spql:binding/spql:uri/text()
In this case, we obtain the following plain text:
http://www.semanticweb.org/socialnetwork.owl#vicente
http://www.semanticweb.org/socialnetwork.owl#jesus
http://www.semanticweb.org/socialnetwork.owl#luis
http://www.semanticweb.org/socialnetwork.owl#event2
http://www.semanticweb.org/socialnetwork.owl#event1
Example 3.2 Another example of using XQOWL and SPARQL is the code of lowering from the document:
<rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
         xmlns="http://relations.org">
  <foaf:Person xmlns:foaf="http://xmlns.com/foaf/0.1/" rdf:about="#b1">
    <foaf:name>Alice</foaf:name>
    <foaf:knows>
      <foaf:Person rdf:about="#b4"/>
    </foaf:knows>
    <foaf:knows>
      <foaf:Person rdf:about="#b6"/>
    </foaf:knows>
  </foaf:Person>
  <foaf:Person xmlns:foaf="http://xmlns.com/foaf/0.1/" rdf:about="#b4">
    <foaf:name>Bob</foaf:name>
    <foaf:knows>
      <foaf:Person rdf:about="#b6"/>
    </foaf:knows>
  </foaf:Person>
  <foaf:Person xmlns:foaf="http://xmlns.com/foaf/0.1/" rdf:about="#b6">
    <foaf:name>Charles</foaf:name>
  </foaf:Person>
</rdf:RDF>
to the document:
<relations>
  <person name="Alice">
    <knows>Bob</knows>
    <knows>Charles</knows>
  </person>
  <person name="Bob">
    <knows>Charles</knows>
  </person>
  <person name="Charles"/>
</relations>
This example has been taken from [6], which shows the lowering example in XSPARQL. (XSPARQL works with blank nodes, and there the RDF document includes a nodeID tag for each RDF item. XQOWL cannot deal with blank nodes at all, and therefore a preprocessing of the RDF document is required: nodeID tags are replaced by about.) In our case the code of the lowering example is as follows:
declare namespace spql = "http://www.w3.org/2005/sparql-results#";
declare namespace xqo = "java:xqowl.XQOWL";
declare variable $model := "relations.rdf";
let $query1 :=
  "PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
   PREFIX rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#>
   PREFIX foaf: <http://xmlns.com/foaf/0.1/>
   SELECT ?Person ?Name
   WHERE {
     ?Person foaf:name ?Name
   } ORDER BY ?Name"
let $xqo := xqo:new(),
    $result := xqo:OWLSPARQL($xqo, $model, $query1)
return
<relations>{
  for $Binding in doc($result)/spql:sparql/spql:results/spql:result
  let $Name := $Binding/spql:binding[@name="Name"]/spql:literal/text(),
      $Person := $Binding/spql:binding[@name="Person"]/spql:uri/text(),
      $PersonName := functx:fragment-from-uri($Person)
  return
    <person name="{$Name}">{
      let $query2 :=
        concat(
          "PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
           PREFIX rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#>
           PREFIX rel: <http://relations.org#>
           PREFIX foaf: <http://xmlns.com/foaf/0.1/>
           SELECT ?FName
           WHERE {
             _:", $PersonName, " foaf:knows ?Friend .
             _:", $PersonName, " foaf:name ", "'", $Name, "' .
             ?Friend foaf:name ?FName
           }")
      let $result2 := xqo:OWLSPARQL($xqo, $model, $query2)
      return
        for $FName in doc($result2)/spql:sparql/spql:results/spql:result/spql:binding/
                      spql:literal/text()
        return
          <knows>{$FName}</knows>
    }</person>
}</relations>
In this example, two SPARQL queries are nested and share variables. The result of the first SPARQL
query (i.e., $PersonName and $Name) is used in the second SPARQL query.
3.3 XQOWL: OWL Reasoners
XQOWL can also be used for querying and reasoning with OWL. With this aim, the OWL API and the OWL Reasoner API have been integrated into XQuery; this integration also requires the XQOWL API. To use OWL reasoners from XQOWL, some calls have to be made from the XQuery code. Firstly, we have to instantiate the ontology manager by using createOWLOntologyManager; secondly, the ontology has to be loaded by using loadOntologyFromOntologyDocument; thirdly, in order to handle OWL elements we have to instantiate the data factory by using getOWLDataFactory; finally, in order to select a reasoner, getOWLReasonerHermiT, getOWLReasonerPellet and getOWLReasonerFact are used.
Example 3.3 For instance, we can query the object properties of the ontology using the OWL API as
follows:
let $xqo := xqo:new(),
    $man := api:createOWLOntologyManager(),
    $fileName := file:new($file),
    $ont := om:loadOntologyFromOntologyDocument($man, $fileName)
return
  doc(xqo:OWLQuerySetAxiom($xqo, o:getAxioms($ont)))/rdf:RDF/owl:ObjectProperty
obtaining the following result:
<ObjectProperty ... rdf:about="...#added_by">
  <rdfs:subPropertyOf rdf:resource="...#created_by"/>
  <rdfs:domain rdf:resource="...#event"/>
  <rdfs:range rdf:resource="...#user"/>
</ObjectProperty>
<ObjectProperty ... rdf:about="...#attends_to">
  <inverseOf rdf:resource="...#confirmed_by"/>
  <rdfs:range rdf:resource="...#event"/>
  <rdfs:domain rdf:resource="...#user"/>
</ObjectProperty>
...
Example 3.4 Another example of a query using the OWL API is the following, which requests the class axioms related to wall and event:
let $xqo := xqo:new(),
    $man := api:createOWLOntologyManager(),
    $fileName := file:new($file),
    $ont := om:loadOntologyFromOntologyDocument($man, $fileName),
    $fact := om:getOWLDataFactory($man)
return
  for $class in ("wall", "event")
  let $iri := iri:create(concat($base, $class)),
      $class := df:getOWLClass($fact, $iri)
  return
    doc(xqo:OWLQuerySetAxiom($xqo, o:getAxioms($ont, $class)))/rdf:RDF/owl:Class
in which a for expression is used to define the names of the classes to be retrieved, obtaining the following result:
<Class ... rdf:about="...#user_item"/>
<Class ... rdf:about="...#wall">
  <rdfs:subClassOf rdf:resource="...#user_item"/>
</Class>
<Class ... rdf:about="...#activity"/>
<Class ... rdf:about="...#event">
  <rdfs:subClassOf rdf:resource="...#activity"/>
  <disjointWith rdf:resource="...#message"/>
</Class>
<Class ... rdf:about="...#message"/>
Now we can see examples of how to use XQOWL for reasoning with an ontology. With this aim, we can use the OWL Reasoner API (as well as the XQOWL API). The XQOWL API makes it easy to use the HermiT, Pellet and FaCT++ reasoners.
Example 3.5 For instance, let us suppose we want to check the consistency of the ontology with the HermiT reasoner. The code is as follows:
let $xqo := xqo:new(),
    $man := api:createOWLOntologyManager(),
    $fileName := file:new($file),
    $ont := om:loadOntologyFromOntologyDocument($man, $fileName),
    $fact := om:getOWLDataFactory($man),
    $reasoner := xqo:getOWLReasonerHermiT($xqo, $ont),
    $boolean := r:isConsistent($reasoner),
    $dispose := r:dispose($reasoner)
return $boolean
which returns true. Here the HermiT reasoner is instantiated by using getOWLReasonerHermiT. In addition, the OWL Reasoner API method isConsistent is used to check ontology consistency. Once the work of the reasoner is done, a call to dispose is required.
Example 3.6 Let us suppose now we want to retrieve instances of concepts activity and user. Now,
we can write the following query using the HermiT reasoner:
for $classes in ("activity", "user")
let $xqo := xqo:new(),
    $man := api:createOWLOntologyManager(),
    $fileName := file:new($file),
    $ont := om:loadOntologyFromOntologyDocument($man, $fileName),
    $fact := om:getOWLDataFactory($man),
    $iri := iri:create(concat($base, $classes)),
    $reasoner := xqo:getOWLReasonerHermiT($xqo, $ont),
    $class := df:getOWLClass($fact, $iri),
    $result := r:getInstances($reasoner, $class, false()),
    $dispose := r:dispose($reasoner)
return
  <concept class="{$classes}">
    { for $instances in xqo:OWLReasonerNodeSetEntity($xqo, $result)
      return <instance>{substring-after($instances, '#')}</instance> }
  </concept>
obtaining the following result in XML format:
< concept class = " activity " >
< instance > message1 </ instance >
< instance > message2 </ instance >
< instance > event1 </ instance >
< instance > event2 </ instance >
</ concept >
Jesús M. Almendros-Jiménez
51
< concept class = " user " >
< instance > jesus </ instance >
< instance > vicente </ instance >
< instance > luis </ instance >
</ concept >
Here getInstances of the OWL Reasoner API is used to retrieve the instances of a given ontology class. In addition, a call to create of the OWL API, which creates the IRI of the class, and a call to getOWLClass of the OWL API, which retrieves the class, are required. The OWL reasoner is able to deduce that message1 and message2 belong to the concept activity, since they belong to the concept message and message is a subconcept of activity. The same can be said for events.
Example 3.7 Let us suppose now we want to retrieve the subconcepts of activity using the Pellet
reasoner. The code is as follows:
let $xqo := xqo:new(),
    $man := api:createOWLOntologyManager(),
    $fileName := file:new($file),
    $ont := om:loadOntologyFromOntologyDocument($man, $fileName),
    $fact := om:getOWLDataFactory($man),
    $iri := iri:create(concat($base, "activity")),
    $reasoner := xqo:getOWLReasonerPellet($xqo, $ont),
    $class := df:getOWLClass($fact, $iri),
    $result := r:getSubClasses($reasoner, $class, false()),
    $dispose := r:dispose($reasoner)
return
  for $subclass in xqo:OWLReasonerNodeSetEntity($xqo, $result)
  return <subclass>{substring-after($subclass, '#')}</subclass>
and the result in XML format is as follows:
<subclass>popular_message</subclass>
<subclass>event</subclass>
<subclass>Nothing</subclass>
<subclass>popular_event</subclass>
<subclass>message</subclass>
Here getSubClasses of the OWL Reasoner API is used.
Example 3.8 Finally, let us suppose we want to retrieve the recommended friends of jesus. Now, the
query is as follows:
let $xqo := xqo:new(),
    $man := api:createOWLOntologyManager(),
    $fileName := file:new($file),
    $ont := om:loadOntologyFromOntologyDocument($man, $fileName),
    $fact := om:getOWLDataFactory($man),
    $iri := iri:create(concat($base, "recommended_friend_of")),
    $iri2 := iri:create(concat($base, "jesus")),
    $reasoner := xqo:getOWLReasonerPellet($xqo, $ont),
    $property := df:getOWLObjectProperty($fact, $iri),
    $ind := df:getOWLNamedIndividual($fact, $iri2),
    $result := r:getObjectPropertyValues($reasoner, $ind, $property),
    $dispose := r:dispose($reasoner)
return
  for $rfriend in xqo:OWLReasonerNodeSetEntity($xqo, $result)
  return
    <recommended_friend>
      {substring-after($rfriend, '#')}
    </recommended_friend>
and the answer is as follows:
<recommended_friend>jesus</recommended_friend>
<recommended_friend>vicente</recommended_friend>
Here the OWL Reasoner API is used to deduce the friends of friends of jesus. Due to the symmetry of the friend relationship, a person is a recommended friend of him- or herself.
4 Using XQOWL for XML Analysis
Now, we show an example in which XQOWL is used to analyze the semantic content of an XML document. This example was used in our previous work [3] to illustrate the use of our Semantic Web library for XQuery. The example takes as input the following XML document:
<?xml version='1.0'?>
<conference>
  <papers>
    <paper id="1" studentPaper="true">
      <title>XML Schemas</title>
      <wordCount>1200</wordCount>
    </paper>
    <paper id="2" studentPaper="false">
      <title>XML and OWL</title>
      <wordCount>2800</wordCount>
    </paper>
    <paper id="3" studentPaper="true">
      <title>OWL and RDF</title>
      <wordCount>12000</wordCount>
    </paper>
  </papers>
  <researchers>
    <researcher id="a" isStudent="false" manuscript="1" referee="1">
      <name>Smith</name>
    </researcher>
    <researcher id="b" isStudent="true" manuscript="1" referee="2">
      <name>Douglas</name>
    </researcher>
    <researcher id="c" isStudent="false" manuscript="2" referee="3">
      <name>King</name>
    </researcher>
    <researcher id="d" isStudent="true" manuscript="2" referee="1">
      <name>Ben</name>
    </researcher>
    <researcher id="e" isStudent="false" manuscript="3" referee="3">
      <name>William</name>
    </researcher>
  </researchers>
</conference>
The document lists the papers and researchers involved in a conference. Each paper and researcher has an identifier (represented by the attribute id) and an associated set of labels: title and wordCount for papers and name for researchers. Furthermore, they have the attributes studentPaper for papers and isStudent, manuscript and referee for researchers. The meaning of manuscript and referee is that the given researcher has submitted the paper whose number is given by manuscript and has participated as a reviewer of the paper whose number is given by referee.
Now, let us suppose that we would like to analyze the content of the XML document in order to detect constraints that are violated. In particular, the revision system of the conference forbids that a student is a reviewer and that a researcher is a reviewer of his or her own paper.
In order to analyze the document, the idea is to create an ontology representing the same elements as the XML document. This ontology contains in the TBox a vocabulary to represent submissions. It includes the class names Paper and Researcher, but also PaperofSenior, PaperofStudent, Student and Senior. The individuals of PaperofSenior are the papers for which studentPaper in the XML document has been set to false, and the individuals of PaperofStudent are the papers for which studentPaper has been set to true. Analogously, the individuals of Senior and Student are the researchers for which isStudent has been set to false, respectively true. In addition, the ontology includes the object properties manuscript and referee, and the data properties wordCount, name and title.
Now, the idea is to express the revision system constraints as constraints of the ontology. Thus,
the ontology includes two restrictions to be checked: Student and Reviewer classes are disjoint while
manuscript and referee are disjoint object properties.
In order to analyze a given XML document, we can use XQOWL for two purposes:
• to transform the XML document into the ontology ABox;
• to check the consistency of the ontology.
The code of the transformation to the ontology ABox is as follows:
let $name := /conference
let $ontology1 :=
  ( for $x in $name/papers/paper return
      sw:toClassFiller(sw:ID($x/@id), "#Paper") union
      (
        let $studentPaper := $x/@studentPaper return
        if (data($studentPaper) = "true") then
          sw:toClassFiller(sw:ID($x/@id), "#PaperofStudent")
        else sw:toClassFiller(sw:ID($x/@id), "#PaperofSenior")
      ) union
      sw:toDataFiller(sw:ID($x/@id), "title", $x/title, "string") union
      sw:toDataFiller(sw:ID($x/@id), "wordCount", $x/wordCount, "integer")
  )
let $ontology2 :=
  ( for $y in $name/researchers/researcher return
      sw:toClassFiller(sw:ID($y/@id), "#Researcher") union
      sw:toDataFiller(sw:ID($y/@id), "name", $y/name, "string") union
      (
        let $student := $y/@isStudent return
        if (data($student) = "true") then
          sw:toClassFiller(sw:ID($y/@id), "#Student")
        else sw:toClassFiller(sw:ID($y/@id), "#Senior")
      ) union
      sw:toObjectFiller(sw:ID($y/@id), "manuscript", sw:ID($y/@manuscript)) union
      sw:toObjectFiller(sw:ID($y/@id), "referee", sw:ID($y/@referee)) )
return
  let $mapping := $ontology1 union $ontology2
  return
    let $doc :=
      document {
        <rdf:RDF ...>
          { doc("ontology_papers.owl")/rdf:RDF/* }
          { $mapping }
        </rdf:RDF>
      }
Here we have used the Semantic Web library for XQuery defined in [3]. Basically, we have created the instance of the ontology by using sw:toClassFiller, sw:toDataFiller and sw:toObjectFiller, which make it possible to create instances of classes, data properties and object properties, respectively. At the end of the code, the ontology TBox (which is stored in the file “ontology_papers.owl”) is incorporated. Now, the consistency check using the HermiT reasoner is as follows, where $doc is the result of the previous query:
let $xqo := xqo:new(),
    $man := api:createOWLOntologyManager(),
    $seq := file:write("ontology_analysis.owl", $doc),
    $fileName := file_io:new($file),
    $ont := om:loadOntologyFromOntologyDocument($man, $fileName),
    $fact := om:getOWLDataFactory($man),
    $reasoner := xqo:getOWLReasonerHermiT($xqo, $ont),
    $boolean := r:isConsistent($reasoner),
    $dispose := r:dispose($reasoner)
return $boolean
5 Conclusions and Future Work
In this paper we have presented an extension of XQuery called XQOWL to query XML and RDF/OWL documents, as well as to reason with RDF/OWL resources. We have described the XQOWL API, which allows calls to be made from XQuery to SPARQL and OWL reasoners, and we have shown examples of the use of XQOWL. The main advantage of the approach is the ability to handle both types of documents through the sharing of variables between XQuery and SPARQL/OWL reasoners. The implementation has been tested with the BaseX processor [9] and can be downloaded from our Web site http://indalog.ual.es/XQOWL. As future work, we would like to extend our work as follows. Firstly, we would like to extend our Java API, more concretely with the SWRL API, in order to execute rules from XQuery and to be able to provide explanations about ontology inconsistency. Secondly, we would like to use our framework in ontology transformations (refactoring, coercion, splitting, amalgamation) and matching.
References
[1] Jesús Manuel Almendros-Jiménez (2009): Extending XQuery for Semantic Web Reasoning. In Salvador
Abreu & Dietmar Seipel, editors: Applications of Declarative Programming and Knowledge Management 18th International Conference, INAP 2009, Évora, Portugal, November 3-5, 2009, Revised Selected Papers,
Lecture Notes in Computer Science 6547, Springer, pp. 117–134, doi:10.1007/978-3-642-20589-7_8.
[2] Jesús Manuel Almendros-Jiménez (2011): Querying and Reasoning with RDF(S)/OWL in XQuery. In Xiaoyong Du, Wenfei Fan, Jianmin Wang, Zhiyong Peng & Mohamed A. Sharaf, editors: Web Technologies and
Applications - 13th Asia-Pacific Web Conference, APWeb 2011, Beijing, China, April 18-20, 2011. Proceedings, Lecture Notes in Computer Science 6612, Springer, pp. 450–459, doi:10.1007/978-3-642-20291-9_51.
[3] Jesús Manuel Almendros-Jiménez (2012): Using OWL and SWRL for the Semantic Analysis of XML Resources. In Robert Meersman, Hervé Panetto, Tharam S. Dillon, Stefanie Rinderle-Ma, Peter Dadam, Xiaofang Zhou, Siani Pearson, Alois Ferscha, Sonia Bergamaschi & Isabel F. Cruz, editors: On the Move
to Meaningful Internet Systems: OTM 2012, Confederated International Conferences: CoopIS, DOA-SVI,
and ODBASE 2012, Rome, Italy, September 10-14, 2012. Proceedings, Part II, Lecture Notes in Computer
Science 7566, Springer, pp. 915–931, doi:10.1007/978-3-642-33615-7_33.
[4] Dave Beckett & Jeen Broekstra (2013): SPARQL Query Results XML Format (Second Edition). http://www.w3.org/TR/rdf-sparql-XMLres/.
[5] Nikos Bikakis, Chrisa Tsinaraki, Ioannis Stavrakantonakis, Nektarios Gioldasis & Stavros Christodoulakis (2014): The SPARQL2XQuery interoperability framework. World Wide Web, pp. 1–88, doi:10.1007/s11280-013-0257-x.
[6] Stefan Bischof, Stefan Decker, Thomas Krennwallner, Nuno Lopes & Axel Polleres (2012): Mapping between RDF and XML with XSPARQL. Journal on Data Semantics 1(3), pp. 147–185, doi:10.1007/s13740-012-0008-7.
[7] Matthias Droop, Markus Flarer, Jinghua Groppe, Sven Groppe, Volker Linnemann, Jakob Pinggera, Florian Santner, Michael Schier, Felix Schöpf & Hannes Staffler (2009): Bringing the XML and semantic web
worlds closer: transforming XML into RDF and embedding XPath into SPARQL. In: Enterprise Information
Systems, Springer, pp. 31–45, doi:10.1007/978-3-642-00670-8_3.
[8] Sven Groppe, Jinghua Groppe, Volker Linnemann, Dirk Kukulenz, Nils Hoeller & Christoph Reinke (2008):
Embedding SPARQL into XQUERY/XSLT. In: Proceedings of the 2008 ACM symposium on Applied computing, ACM, pp. 2271–2278, doi:10.1145/1363686.1364228.
[9] Christian Grun (2014): BaseX. The XML Database. http://basex.org.
[10] Matthew Horridge (2009): OWL Reasoner API. http://owlapi.sourceforge.net/javadoc/org/
semanticweb/owlapi/reasoner/OWLReasoner.html.
[11] Matthew Horridge & Sean Bechhofer (2011): The OWL API: A Java API for OWL Ontologies. Semant. web
2(1), pp. 11–21. Available at http://dl.acm.org/citation.cfm?id=2019470.2019471.
[12] Ian Horrocks, Boris Motik & Zhe Wang (2012): The HermiT OWL Reasoner. In Ian Horrocks, Mikalai
Yatskevich & Ernesto Jiménez-Ruiz, editors: Proceedings of the 1st International Workshop on OWL Reasoner Evaluation (ORE-2012), Manchester, UK, July 1st, 2012, CEUR Workshop Proceedings 858, CEURWS.org. Available at http://ceur-ws.org/Vol-858/ore2012_paper13.pdf.
[13] Michael Kay (2008): Ten Reasons Why Saxon XQuery is Fast. IEEE Data Eng. Bull. 31(4), pp. 65–74.
Available at http://sites.computer.org/debull/A08dec/saxonica.pdf.
[14] Wolfgang Meier (2003): eXist: An open source native XML database. In: Web, Web-Services, and Database
Systems, Springer, pp. 169–183, doi:10.1007/3-540-36560-5_13.
[15] Ralf Möller, Volker Haarslev & Sebastian Wandelt (2008): The Revival of Structural Subsumption in
Tableau-based Reasoners. In Franz Baader, Carsten Lutz & Boris Motik, editors: Proceedings of the
21st International Workshop on Description Logics (DL2008), Dresden, Germany, May 13-16, 2008,
CEUR Workshop Proceedings 353, CEUR-WS.org. Available at http://ceur-ws.org/Vol-353/
MoellerHaarslevWandelt.pdf.
[16] Evren Sirin, Bijan Parsia, Bernardo C. Grau, Aditya Kalyanpur & Yarden Katz (2007): Pellet: A practical
OWL-DL reasoner. Web Semantics: Science, Services and Agents on the World Wide Web 5(2), pp. 51–53,
doi:10.1016/j.websem.2007.03.004.
[17] Steffen Staab & Rudi Studer (2010): Handbook on ontologies. Springer, doi:10.1007/978-3-540-92673-3.
[18] Dmitry Tsarkov & Ian Horrocks (2006): FaCT++ Description Logic Reasoner: System Description. In
Ulrich Furbach & Natarajan Shankar, editors: Automated Reasoning, Third International Joint Conference,
IJCAR 2006, Seattle, WA, USA, August 17-20, 2006, Proceedings, Lecture Notes in Computer Science 4130,
Springer, pp. 292–297, doi:10.1007/11814771_26.
Efficient Column Generation for Cell Detection and
Segmentation
arXiv:1709.07337v1 [] 21 Sep 2017
Chong Zhang a,∗, Shaofei Wang b, Miguel A. Gonzalez-Ballester a,c, Julian Yarkony d,∗
a SimBioSys, DTIC, Universitat Pompeu Fabra, Barcelona, Spain
b A&E Technologies, Beijing, China
c ICREA, Spain
d Experian Data Lab, USA
∗ Corresponding authors. Email addresses: [email protected] (Chong Zhang), [email protected] (Julian Yarkony)
Preprint submitted to European Journal of Operational Research, September 22, 2017
Abstract
We study the problem of instance segmentation in biological images with
crowded and compact cells. We formulate this task as an integer program where
variables correspond to cells and constraints enforce that cells do not overlap.
To solve this integer program, we propose a column generation formulation
where the pricing program is solved via exact optimization of very small scale
integer programs. Column generation is tightened using odd set inequalities
which fit elegantly into pricing problem optimization. Our column generation
approach achieves fast stable anytime inference for our instance segmentation
problems. We demonstrate on three distinct light microscopy datasets, with
several hundred cells each, that our proposed algorithm rapidly achieves or
exceeds state of the art accuracy.
Keywords: Combinatorial optimization, Column generation, Integer
programming, Large scale optimization, Linear programming
1. Introduction
Cell detection and instance segmentation are fundamental tasks for the study
of bioimages (Meijering, 2012) in the era of big data. Detection corresponds
to the problem of identifying individual cells and instance segmentation corresponds to the problem of determining the pixels corresponding to each of the
cells. Cells are often in close proximity and/or occlude each other. Traditionally, bioimages were manually analyzed; however, recent advances in microscopy techniques, automation, long-term high-throughput imaging, etc., result in vast amounts of data from biological experiments, making manual analysis, and even many computer-aided methods with hand-tuned parameters, infeasible (Meijering et al., 2016; Hilsenbeck et al., 2017). The large diversity of cell lines and microscopy imaging techniques requires algorithms for these tasks that perform robustly and well across data sets.
In this paper we introduce a novel approach for instance segmentation, specialized to bioimage analysis and designed to rapidly produce high quality results with little human intervention. The technique described in this paper is applicable to images with crowded and compact cell regions, acquired from different modalities and with different cell shapes, as long as the cells produce intensity changes at their boundaries. Such patterns result from several microscopy imaging techniques, such as trans-illumination (e.g. bright field, dark field, phase contrast) and fluorescence (e.g. through membrane or cytoplasmic staining) images. Thus it is specifically suitable for images in which cells are almost transparent.
We formulate instance segmentation as the problem of selecting a set of
visually meaningful cells under the hard constraint that no two cells overlap
(share a common pixel). This problem corresponds to the classic integer linear
programming (ILP) formulation of the set packing problem (Karp, 1972) where
sets correspond to cells and elements correspond to pixels. The number of
possible cells is very large and can not be easily enumerated. We employ a
column generation approach (Barnhart et al., 1996) for solving the combinatorial
problem where the pricing problem is solved via exact optimization of very
small scale integer programs (IPs). Inference is made tractable by relying on
the assumption that cells are small and compact. When needed we tighten
the linear programming (LP) relaxation using odd set inequalities (Heismann
and Borndörfer, 2014). The use of odd set inequalities in our context does
not destroy the structure of the pricing problem so branch and price (Barnhart
et al., 1996) is not needed.
For the purpose of dimensionality reduction we employ the common technique of aggregating pixels into superpixels. Superpixels (Levinshtein et al., 2009; Achanta et al., 2012) are the output of a dimensionality reduction technique that groups pixels in close proximity with similar visual characteristics, and they are commonly used as a preprocessing for image segmentation (Arbelaez et al., 2011). Superpixels provide a gross over-segmentation of the image, meaning that they capture many boundaries not in the ground truth but miss very few boundaries that are part of the ground truth. Hence we apply our set packing formulation on the superpixels, meaning that each set corresponds to a subset of the superpixels and each element is a superpixel.
Our contributions consist of the following:
• Novel formulation of cell instance segmentation amenable to the tools and
methodology of the operations research community
• Structuring our formulation to admit tightening the corresponding LP
relaxation outside of branch/price methods
• Achieve benchmark level results on real microscopy datasets
We structure this document as follows. In Section 2 we consider the related
work in the fields of bioimage analysis and operations research. Next in Section 3
we introduce our set packing formulation of instance segmentation and our
column generation formulation with its corresponding pricing problem. Next
in Sections 4 and 5 we consider the production of anytime integral solutions
and lower bounds respectively. In Section 6 we demonstrate the applicability
of our approach to real bioimage data sets. Finally we conclude and consider
extensions in Section 7.
2. Related Work
2.1. Optimization in computer vision and bioimage analysis
Our work should be considered in the context of methods that are based on
cell boundary information and clustering of super-pixels. Relevant methodologies include contour profile pattern (Kvarnström et al., 2008; Mayer et al., 2013;
Dimopoulos et al., 2014), constrained label cost model (Zhang et al., 2014a),
correlation clustering (Zhang et al., 2014b; Yarkony et al., 2015), structured
learning (Arteta et al., 2012; Liu et al., 2014; Funke et al., 2015), and deep
learning (Ronneberger et al., 2015), etc. A comprehensive review can be found
in (Xing and Yang, 2016). Here we discuss the most relevant work.
The method of (Zhang et al., 2014b) frames instance segmentation as correlation clustering on a planar or nearly planar graph and relies heavily on
the planarity of their clustering problem’s structure in order to achieve efficient inference. Our work differs from (Zhang et al., 2014b) primarily from the
perspective of optimization. Notably, our model is not bound by planarity restrictions and instead relies on the assumption that cells are typically small and
compact. Therefore, our model is also applicable to 3D segmentation.
In (Zhang et al., 2015) the authors use depth to transform instance segmentation into a labeling problem and thus break the difficult symmetries found in
instance segmentation. They formulate the optimization as an ILP and solve it
using greedy network flow methods (Boykov et al., 2001), notably the Quadratic
Pseudo-Boolean optimization (QPBO) (Boros and Hammer, 2002; Rother et al.,
2007). The approach in (Zhang et al., 2015) also requires prior knowledge of the
number of labels present in the image, which is not realistic for images crowded
with hundreds or thousands of cells. In contrast our proposed ILP framework
does not require knowledge of the number of objects in the image.
Our inference approach is inspired by (Wang et al., 2017a) which tackles
multi-object tracking using column generation where the corresponding pricing
problem is solved using dynamic programming. This echoes the much earlier
operations research work in diverse areas such as vehicle routing (Ropke and
4
Cordeau, 2009), and cutting stock (Gilmore and Gomory, 1961) which use dynamic programming for pricing. In contrast our pricing problem optimization
is solved by many small ILPs, which can be run in parallel which echoes the
work in the operations research community of (Barahona and Jensen, 1998).
2.2. Column generation in operations research
Column generation (Gilmore and Gomory, 1961; Desaulniers et al., 2006;
Barnhart et al., 1996) is a popular approach for solving ILPs in which compact
formulations result in loose LP relaxations where the fractional solution tends
to be uninformative. Here uninformative means that the fractional solution can
not easily be rounded to a low cost integer solution. Column generation replaces
the LP with a new LP over a much larger space of variables which corresponds
to a tighter LP relaxation of the ILP (Geoffrion, 2010; Armacost et al., 2002).
The new LP retains the property from the original LP that it has a finite number
of constraints.
To solve the new LP, the dual of the new LP is considered which has a finite
number of variables and a huge number of constraints. Optimization considers
only a limited subset of the primal variables, which is initialized as empty, or
set heuristically. Optimization alternates between solving the LP relaxation
over the limited subset of the primal variables (called the master problem) and
identifying variables that correspond to violated dual constraints (which is called
pricing). Pricing often corresponds to combinatorial optimization which is often
an elegant dynamic program which has the powerful feature that many primal
variables (violated dual constraints) are generated at once. Approaches with
dynamic programming based pricing include (but are not limited to) the diverse
fields of cutting stock (Gilmore and Gomory, 1961, 1965), routing crews (Lavoie
et al., 1988; Vance et al., 1997), and routing vehicles (Ropke and Cordeau, 2009),
Column generation formulations can be tightened using branch-price methods (Barnhart et al., 2000, 1996; Vance, 1998) which is a variant of branch and
bound (Land and Doig, 1960) that is structured as to not disrupt the structure
of the pricing problem.
Term | Form               | Index  | Meaning
D    | set                | d      | set of super-pixels
Q    | set                | q      | set of cells
Q    | {0,1}^(|D|×|Q|)    | d, q   | Q_dq = 1 indicates that d is in cell q
Γ    | R^|Q|              | q      | Γ_q is the cost of cell q
C    | set                | c      | set of triples
γ    | {0,1}^|Q|          | q      | γ_q = 1 indicates that cell q is selected
θ    | R^|D|              | d      | θ_d is the cost of including d in a cell
ω    | R                  | none   | ω is the cost of instancing a cell
φ    | R^(|D|×|D|)        | d1, d2 | φ_d1d2 is the cost of including d1, d2 in the same cell
V    | R_0+^|D|           | d      | V_d is the volume of super-pixel d
S    | R_0+^(|D|×|D|)     | d1, d2 | S_d1d2 is the distance between the centers of super-pixels d1, d2
mV   | R_+                | none   | maximum volume of a cell
mR   | R_+                | none   | maximum radius of a cell
Q̂    | set                | q      | set of cells generated during column generation
Ĉ    | set                | c      | set of triples generated during column generation
Q̇    | set                | q      | set of cells generated during a given iteration of column generation
Ċ    | set                | c      | set of triples generated during a given iteration of column generation
λ    | R_0+^|D|           | d      | Lagrange multipliers corresponding to super-pixels
κ    | R_0+^|C|           | c      | Lagrange multipliers corresponding to triples
x    | {0,1}^|D|          | d      | x_d = 1 indicates that super-pixel d is included in the column being generated

Table 1: Summary of Notation
Column generation has had few applications in computer vision until recently but has included diverse variants of correlation clustering (Yarkony and
Fowlkes, 2015; Yarkony et al., 2012; Yarkony, 2015) with applications to image
partitioning, multi-object tracking (Wang et al., 2017a), and multi-human pose
estimation (Wang et al., 2017b). Column generation in (Yarkony and Fowlkes,
2015; Yarkony et al., 2012; Yarkony, 2015) is notable in that the pricing problem is solved using the max cut on a planar graph (Shih et al., 1990; Barahona,
1982, 1991; Barahona and Mahjoub., 1986) which is known to be polynomial
time solvable via a reduction to perfect matching (Fisher, 1966).
3. Problem formulation
We now discuss our approach in detail. Given an image we start with computing a set of super-pixels (generally named super-voxels in 3D), which provides
an over-segmentation of cells. These super-pixels are then clustered into “perceptually meaningful” regions by constructing an optimization problem that
either groups the super-pixels into small coherent cells or labels them as background. The solution to this optimization problem corresponds to the globally
optimal selection of cells according to our model, which we formulate/solve as an
ILP. We consider our model below and summarize the corresponding notation
in Table 1.
Definitions. Let D be the set of super-pixels in an image, Q be the set of
all possible cells, and G ∈ {0,1}^(|D|×|Q|) be the super-pixel/cell incidence matrix, where G_dq = 1 if and only if super-pixel d is part of cell q. We use S ∈ R_0+^(|D|×|D|) to describe the Euclidean distance between super-pixels, where S_d1d2 indicates the distance between the centers of the super-pixel pair d1 and d2. We use V ∈ R_+^|D| to describe the area of super-pixels, with V_d being the area of super-pixel d. The indicator vector γ ∈ {0,1}^|Q| gives a feasible segmentation solution,
where γq =1 indicates that cell q is included in the solution and γq =0 otherwise.
A collection of cells specified by γ is a valid solution if and only if each superpixel is associated with at most one active cell.
We use Γ ∈ R|Q| to define a cost vector, where Γq is the cost associated with
including cell q in the segmentation. Here we model such a cost with terms
θ ∈ R|D| and φ ∈ R|D|×|D| which are indexed by d and d1 , d2 respectively. We
use θd to denote the cost for including d in a cell and φd1 d2 to denote the cost
for including d1 and d2 in the same cell. We use ω ∈ R to denote the cost of
instancing a cell. We now define Γq in terms of θ, φ and ω.
Γ_q = ω + Σ_{d∈D} θ_d G_dq + Σ_{d1,d2∈D} φ_d1d2 G_d1q G_d2q,    (1)
Constraints. For most biological problems, it is valid to model cells of a given
type as having a maximum radius mR and a maximum area (volume if in 3D)
mV. Clearly, mV and mR are model-defined parameters that vary from one application to another, but they are also often known a priori. The radius
Term | Effect of Positive Offset
θ    | Decrease total volume of cells
φ    | Fewer pairs of super-pixels in a common cell
ω    | Decrease number of cells detected
mV   | Increase maximum volume of cells
mR   | Increase maximum radius of cells

Table 2: Summary of effect of offsetting values by positive offset
constraint can be written as follows:
∃ [d* ; G_d*q = 1]  s.t.  0 = Σ_{d2∈D} G_d2q [S_d*,d2 > mR]   ∀q ∈ Q.    (2)
For any given q ∈ Q, any argument d∗ ∈ D satisfying Eq 2 is called an anchor
of q. Similarly, we write the area constraint as follows.
mV ≥ Σ_{d∈D} G_dq V_d   ∀q ∈ Q.    (3)
ILP formulation. Given the above variable definitions, we frame instance
segmentation as an ILP that minimizes the total cost of the selected cells:
min_{γ_q∈{0,1} ∀q∈Q, Σ_{q∈Q} G_dq γ_q ≤ 1 ∀d∈D}  Σ_{q∈Q} Γ_q γ_q  =  min_{γ∈{0,1}^|Q|, Gγ≤1}  Γ^⊤γ    (4)
The effect of our modeling parameters is summarized in Table 2.
Primal and Dual formulation. The LP relaxation of Eq 4 only contains
constraints for cells that share a common super-pixel. This generally results in
a tight relaxation, although not always. We tighten the relaxation using odd
set inequalities (Heismann and Borndörfer, 2014). Specifically we use odd set
inequalities of size three (called triples), as similarly imposed in (Wang et al.,
2017a).
Triples are defined as follows: for any set of three unique super-pixels (called
a triple), the number of selected cells of Q that include two or more of the super-pixels in {d1, d2, d3} can be no larger than one, i.e.
Σ_{q∈Q} [G_d1q + G_d2q + G_d3q ≥ 2] γ_q ≤ 1.    (5)
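For intuition, consider an illustrative fractional solution (not taken from our experiments): three cells q1 = {d1, d2}, q2 = {d2, d3}, q3 = {d1, d3} with γ_q1 = γ_q2 = γ_q3 = 1/2. Every super-pixel constraint of Eq 4 holds with equality, since each d_i is covered by exactly two of the cells and 1/2 + 1/2 = 1, yet the left hand side of Eq 5 equals 3/2 > 1. The triple inequality therefore cuts off exactly the kind of fractional point that the LP relaxation of Eq 4 alone can produce.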
We denote the set of triples as C and describe it by a constraint matrix C ∈ {0,1}^(|C|×|Q|), where C_cq = 1 if and only if cell q contains two or more members of set c. The constraint matrix has a row for each triple: C_cq = [Σ_{d∈c} G_dq ≥ 2], ∀c ∈ C, q ∈ Q. The primal and dual LP relaxations of instance segmentation with constraints on inequalities corresponding to triples are written below. The dual is expressed using Lagrange multipliers λ ∈ R_0+^|D| and κ ∈ R_0+^|C|.
min_{γ≥0, Qγ≤1, Cγ≤1}  Γ^⊤γ  =  max_{λ≥0, κ≥0, Γ+Q^⊤λ+C^⊤κ≥0}  −1^⊤λ − 1^⊤κ.    (6)
3.1. Algorithm
Since Q, C are intractably large, we use a cutting plane method in the primal and dual to build sufficient subsets of Q, C. We denote the nascent subsets of Q, C as Q̂, Ĉ respectively. In Alg 1 we write the column generation algorithm. We define the cutting plane/column generation steps in Sections 3.2 and 3.3 respectively and display the optimization in Fig 1. We use Q̇, Ċ to refer to the columns and rows generated during a given iteration of our algorithm.
Algorithm 1 Dual Optimization
Q̂ ← {}, Ĉ ← {}
repeat
    Q̇ ← {}, Ċ ← {}
    [λ, κ, γ] ← solve primal and dual of Eq 6 over Q̂, Ĉ
    for d* ∈ D do
        q* ← argmin_{q∈Q : Q_d*q=1, 0 = Σ_{d2∈D} G_d2q[S_d*,d2 > mR]}  Γ_q + Σ_{d∈D} Q_dq λ_d + Σ_{c∈C} κ_c [2 ≤ Σ_{d∈c} Q_dq]
        if Γ_q* + Σ_{d∈D} Q_dq* λ_d + Σ_{c∈C} κ_c [2 ≤ Σ_{d∈c} Q_dq*] < 0 then
            Q̇ ← Q̇ ∪ {q*}
        end if
    end for
    c* ← argmax_{c∈C} Σ_{q∈Q} C_cq γ_q
    if Σ_{q∈Q} C_c*q γ_q > 1 then
        Ċ ← {c*}
    end if
    Q̂ ← [Q̂, Q̇], Ĉ ← [Ĉ, Ċ]
until Q̇ = {} and Ċ = {}
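To make the master-problem step of Alg 1 concrete, the following Python sketch solves the LP relaxation of Eq 6 restricted to the generated columns Q̂ and rows Ĉ and reads off the dual values λ, κ used for pricing. It is an illustration only: the PuLP modelling library with a CBC backend that reports duals is assumed, and the data layout (cells as Python sets of super-pixel indices with a parallel list of costs) is ours, not the authors' implementation.

import pulp

def solve_restricted_master(cells, cost, triples, num_superpixels):
    # cells:   list of sets of super-pixel indices (the generated columns Q-hat)
    # cost:    list of floats, cost[i] = Gamma of cells[i]
    # triples: list of 3-element frozensets of super-pixel indices (C-hat)
    prob = pulp.LpProblem("restricted_master", pulp.LpMinimize)
    gamma = [pulp.LpVariable("gamma_%d" % i, lowBound=0, upBound=1)
             for i in range(len(cells))]
    prob += pulp.lpSum(cost[i] * gamma[i] for i in range(len(cells)))

    sp_con, tri_con = {}, {}
    for d in range(num_superpixels):
        cover = [gamma[i] for i, q in enumerate(cells) if d in q]
        if cover:                                  # each super-pixel used at most once
            prob += pulp.lpSum(cover) <= 1, "sp_%d" % d
            sp_con[d] = prob.constraints["sp_%d" % d]
    for t, c in enumerate(triples):                # odd set inequalities of size three
        hit = [gamma[i] for i, q in enumerate(cells) if len(q & set(c)) >= 2]
        prob += pulp.lpSum(hit) <= 1, "tri_%d" % t
        tri_con[t] = prob.constraints["tri_%d" % t]

    prob.solve(pulp.PULP_CBC_CMD(msg=False))
    # duals of "<=" constraints are non-positive in a minimisation, so lambda and
    # kappa of Eq 6 are their negatives (the sign convention may differ by solver)
    lam = {d: -(con.pi or 0.0) for d, con in sp_con.items()}
    kappa = {t: -(con.pi or 0.0) for t, con in tri_con.items()}
    return [g.value() for g in gamma], lam, kappa

Super-pixels not covered by any generated cell have no constraint and their λ_d is simply treated as zero by the pricing step.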
3.2. Row generation
Finding the most violated row consists of the following optimization.
max_{c∈C} Σ_{q∈Q} C_cq γ_q    (7)
Enumerating C is unnecessary and we generate its rows as needed by considering only c = {d_c1, d_c2, d_c3} such that for each pair d_ci, d_cj there exists an index q such that γ_q > 0 and Q_diq = Q_djq = 1. Generating rows is done only when no (significantly) violated columns exist. Triples are only added to Ĉ if the corresponding constraint is violated. We can add one or more than one per iteration, depending on the schedule chosen.
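A brute-force sketch of this separation routine, under the same illustrative data layout as above (cells as Python sets, γ as a list; names are ours, not the authors'), restricts the search to triangles of the pair co-occurrence graph as described in the text:

from itertools import combinations

def most_violated_triple(cells, gamma, tol=1e-6):
    # candidate triples are those whose three pairs each co-occur in some fractional cell
    support = [i for i, g in enumerate(gamma) if g > tol]
    co_pairs = set()
    for i in support:
        co_pairs |= set(combinations(sorted(cells[i]), 2))
    neighbours = {}
    for a, b in co_pairs:
        neighbours.setdefault(a, set()).add(b)
        neighbours.setdefault(b, set()).add(a)

    best_c, best_lhs = None, 1.0
    for a, b in co_pairs:
        for d in neighbours[a] & neighbours[b]:     # triangle {a, b, d}
            c = frozenset((a, b, d))
            lhs = sum(gamma[i] for i in support if len(cells[i] & c) >= 2)
            if lhs > best_lhs + tol:
                best_c, best_lhs = c, lhs
    return best_c   # None means no violated triple of Eq 5 was found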
3.3. Generating columns
Violated constraints in the dual correspond to primal variables (cells) that
may improve the primal objective. To identify such primal variables we compute for each d∗ ∈ D the most violated dual constraint corresponding to a cell
such that d∗ is an anchor of that cell. The corresponding cell is described using indicator vector x∈{0, 1}|D| , where the corresponding column is defined as
Qdq ← xd , ∀d ∈ D. We write the pricing problem as an IP below.
min_{x∈{0,1}^|D|}  Σ_{d∈D} (θ_d + λ_d) x_d + Σ_{d1,d2∈D} φ_d1d2 x_d1 x_d2 + Σ_{c∈C} κ_c [2 ≤ Σ_{d∈c} x_d]
s.t.  x_d* = 1
      x_d = 0   ∀d ∈ D  s.t.  S_d,d* > mR
      Σ_{d∈D} V_d x_d ≤ mV    (8)
For our data sets of images crowded with several hundred cells, the maximum radius of a cell is relatively small and the number of super-pixels within the
radius of a given super-pixel is of the order of tens and often around ten. Therefore, solving Eq 8 is efficient and can be done in parallel for each d∗ ∈ D. We
tackle this by converting Eq 8 to an ILP and then solving it with an off-the-shelf
ILP solver.
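As an illustration of how Eq 8 can be handed to an off-the-shelf solver, the sketch below linearises the quadratic term with auxiliary binaries y and the triple indicator with binaries z (valid because κ_c ≥ 0). PuLP is again assumed, and all names and data structures are assumptions of this sketch rather than the authors' code.

import pulp

def price_column(anchor, neighbours, theta, lam, phi, kappa_triples,
                 volume, max_volume, omega):
    # neighbours: super-pixels within radius mR of the anchor (anchor included);
    #             every other super-pixel is implicitly fixed to zero.
    # theta, lam, volume: dicts keyed by super-pixel index
    # phi: dict keyed by (d1, d2) tuples; kappa_triples: dict frozenset -> kappa_c >= 0
    prob = pulp.LpProblem("pricing", pulp.LpMinimize)
    x = {d: pulp.LpVariable("x_%d" % d, cat="Binary") for d in neighbours}
    prob += x[anchor] == 1                     # the anchor must belong to the cell

    y = {}                                     # y approximates x_d1 * x_d2
    for (d1, d2) in phi:
        if d1 in x and d2 in x:
            y[(d1, d2)] = pulp.LpVariable("y_%d_%d" % (d1, d2), cat="Binary")
            prob += y[(d1, d2)] >= x[d1] + x[d2] - 1
            prob += y[(d1, d2)] <= x[d1]
            prob += y[(d1, d2)] <= x[d2]

    z = {}                                     # z_c = 1 if two or more members of c are chosen
    for t, (c, k) in enumerate(kappa_triples.items()):
        members = [d for d in c if d in x]
        if len(members) >= 2 and k > 0:
            z[c] = pulp.LpVariable("z_%d" % t, cat="Binary")
            prob += pulp.lpSum(x[d] for d in members) <= 1 + 2 * z[c]

    # reduced cost of the candidate cell; omega is the constant cost of instancing a cell
    prob += (pulp.lpSum((theta[d] + lam.get(d, 0.0)) * x[d] for d in x)
             + pulp.lpSum(phi[p] * y[p] for p in y)
             + pulp.lpSum(kappa_triples[c] * z[c] for c in z)
             + omega)
    prob += pulp.lpSum(volume[d] * x[d] for d in x) <= max_volume

    prob.solve(pulp.PULP_CBC_CMD(msg=False))
    cell = {d for d in x if x[d].value() > 0.5}
    return cell, pulp.value(prob.objective)

Because each neighbourhood contains only on the order of ten super-pixels, one such small IP per anchor is cheap, and the anchors can be priced in parallel.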
4. Anytime Integral Solutions
We now consider the anytime production of integral solutions in the master problem. While set packing is NP-hard in general, in practice the LP relaxations are integral at termination and generally integral after each step of optimization. For cases where the LP is loose, we find that solving the ILP given the primal variables generated takes little additional time beyond solving the LP. However, we can use rounding procedures (Wang et al., 2017a) when difficult
Algorithm 2 Upper Bound Rounding
while ∃ q ∈ Q s.t. γ_q ∉ {0, 1} do
    q* ← argmin_{q∈Q : 0 < γ_q < 1}  Γ_q γ_q − Σ_{q̂∈Q⊥q} Γ_q̂ γ_q̂
    γ_q̂ ← 0   ∀ q̂ ∈ Q⊥q*
    γ_q* ← 1
end while
RETURN γ
ILPs occur. Specifically, we tackle the rounding of a fractional γ with a greedy iterative approach. At each iteration, it selects the cell q with non-binary γ_q that minimizes Γ_q γ_q discounted by the fractional cost of any cells that share a super-pixel with q and hence could no longer be added to the segmentation once q is added. We write the rounding procedure in Alg 2, using the notation Q⊥q to indicate the set of cells in Q that intersect cell q (excluding q itself).
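A direct Python transcription of this greedy rounding (illustrative only; the cell and cost containers are assumptions of the sketch, not the authors' code):

def round_fractional(gamma, cells, cost, tol=1e-6):
    # gamma: fractional LP solution; cells[i]: set of super-pixel indices; cost[i]: Gamma
    gamma = list(gamma)
    while any(tol < g < 1 - tol for g in gamma):
        best_i, best_score = None, float("inf")
        for i, g in enumerate(gamma):
            if g <= tol or g >= 1 - tol:
                continue
            overlap = [j for j in range(len(cells))
                       if j != i and gamma[j] > tol and cells[j] & cells[i]]
            # cost of taking cell i, discounted by the fractional cost it excludes
            score = cost[i] * g - sum(cost[j] * gamma[j] for j in overlap)
            if score < best_score:
                best_i, best_score = i, score
        for j in range(len(cells)):                 # knock out everything intersecting q*
            if j != best_i and cells[j] & cells[best_i]:
                gamma[j] = 0.0
        gamma[best_i] = 1.0
    return [1.0 if g >= 1 - tol else 0.0 for g in gamma]

Each pass makes at least one fractional variable integral and never creates new fractional ones, so the loop terminates after at most |Q̂| iterations.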
5. Lower bounds
We now consider the production of anytime lower bounds on the optimal
integral solution. We first write the ILP for cell instance segmentation and then
introduce Lagrange multipliers.
min_{γ∈{0,1}^|Q|, Qγ≤1, Cγ≤1}  Γ^⊤γ  =  min_{γ∈{0,1}^|Q|, Qγ≤1}  max_{λ≥0, κ≥0}  Γ^⊤γ + (−λ^⊤1 + λ^⊤Qγ) + (−κ^⊤1 + κ^⊤Cγ)    (9)
We now relax the constraint in Eq 9 that the dual variables are optimal producing the following lower bound.
Eq 9 ≥ min_{γ∈{0,1}^|Q|, Qγ≤1}  Γ^⊤γ + (−λ^⊤1 + λ^⊤Qγ) + (−κ^⊤1 + κ^⊤Cγ)
     = −κ^⊤1 − λ^⊤1 + min_{γ∈{0,1}^|Q|, Qγ≤1}  (Γ + Q^⊤λ + C^⊤κ)^⊤γ    (10)
Recall that every cell is associated with at least one anchor. We denote the set
of anchors associated with a given cell q as Nq . We use Q:,q , C:,q to refer to the
column q of the matrices Q, C respectively. Given any fixed γ ∈ {0, 1}|Q| such
that Qγ ≤ 1 observe the following.
(Γ + Q^⊤λ + C^⊤κ)^⊤γ  ≥  Σ_{d∈D} min[0, min_{q∈Q: d∈N_q} γ_q (Γ_q + Q_:,q^⊤λ + C_:,q^⊤κ)]    (11)
We now use Eq 11 to produce the following lower bound on Eq 10.
Eq 10 ≥ −κ^⊤1 − λ^⊤1 + min_{γ∈{0,1}^|Q|, Qγ≤1}  Σ_{d∈D} min[0, min_{q∈Q: d∈N_q} γ_q (Γ_q + Q_:,q^⊤λ + C_:,q^⊤κ)]    (12)
We now relax the constraint in Eq 12 that Qγ ≤ 1 producing the following lower
bound.
Eq 12 ≥ −κ^⊤1 − λ^⊤1 + min_{γ∈{0,1}^|Q|}  Σ_{d∈D} min[0, min_{q∈Q: d∈N_q} γ_q (Γ_q + Q_:,q^⊤λ + C_:,q^⊤κ)]
      = −κ^⊤1 − λ^⊤1 + Σ_{d∈D} min[0, min_{q∈Q: d∈N_q} (Γ_q + Q_:,q^⊤λ + C_:,q^⊤κ)]    (13)
Observe that the term min_{q∈Q: d∈N_q} (Γ_q + Q_:,q^⊤λ + C_:,q^⊤κ) is identical to the optimization computed at every stage of column generation.
The technique described in this paper is applicable to images crowded with
cells which are mainly discernible by boundary cues. Such images can be acquired from different modalities and cell types. Here we evaluate our algorithm
on three datasets. Challenges of these datasets include: densely packed and
touching cells, out-of-focus artifacts, variations on shape and size, changing
boundaries even on the same cell, as well as other structures showing similar
boundaries.
6.1. Experiment settings
To ensure detecting cell boundaries with varying patterns, a trainable classifier seems to be the right choice. For each dataset, we choose to train a Random
Forest (RF) classifier from the open source software, ilastik (Sommer et al.,
13
2011), to discriminate: (1) boundaries of in-focus cells; (2) in-focus cells; (3)
out-of-focus cells; and (4) background. The posterior probabilities for class (1)
is used as the pairwise potentials. For training, we used < 1% pixels per dataset
with generic features e.g. Gaussian, Laplacian, Structured tensor. Subsequent
steps use the posterior probabilities to calculate parameters and require no more
training. The prediction from the class boundaries of in-focus cells is also used
to generate super-pixels. And those for classes (3) and (4) are combined and
inverted to create a foreground prediction. Here foreground corresponds to the
superpixels that are part of cells which are background otherwise. For each
super-pixel, the proportion of its foreground part defines the unary potential θ
which we then offset by a constant fixed for each dataset. A summary about
the parameters used in our experiments are shown in Table 3.
6.2. Evaluation
A visualization of the results can be seen in Fig 2. Quantitatively, we compare the performance of our algorithm with those reported in the state-of-the-art methods (Arteta et al., 2012, 2016; Funke et al., 2015; Hilsenbeck et al., 2017; Dimopoulos et al., 2014; Ronneberger et al., 2015; Zhang et al., 2014b), in terms of detection (precision, recall and F-score) and segmentation (Dice coefficient and Jaccard index). For detection, we establish candidate matches between found regions and ground truth (GT) regions based on overlap, and find a Hungarian matching using the centroid distance as the minimizer. Unmatched GT regions are false negatives (FN); unmatched segmentation regions are false positives (FP). The Jaccard index is computed between the area of a true positive (TP) detection region R_tpd and the area of its matched GT region R_gt: (R_tpd ∩ R_gt)/(R_tpd ∪ R_gt). The results are summarized in Tables 4 and 5. In general, our method achieves or exceeds state of the art performance. Additionally, our method requires very little training for the RF classifiers, as opposed to methods like (Arteta et al., 2012, 2016; Funke et al., 2015), which require fully labeled data for training. This is an advantage, as it relieves the need for human annotation when several hundred cells must be labeled per image.
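A sketch of this matching-based evaluation, assuming the predicted and ground-truth regions are available as boolean NumPy masks (this is our reading of the protocol, not the authors' evaluation code; min_overlap and the mask layout are assumptions):

import numpy as np
from scipy.optimize import linear_sum_assignment

def detection_and_jaccard(pred_masks, gt_masks, min_overlap=1):
    # pred_masks, gt_masks: lists of non-empty boolean arrays of identical shape
    def centroid(m):
        ys, xs = np.nonzero(m)
        return np.array([ys.mean(), xs.mean()])

    big = 1e9
    cost = np.full((len(pred_masks), len(gt_masks)), big)
    for i, p in enumerate(pred_masks):
        for j, g in enumerate(gt_masks):
            if np.logical_and(p, g).sum() >= min_overlap:      # candidate match by overlap
                cost[i, j] = np.linalg.norm(centroid(p) - centroid(g))
    rows, cols = linear_sum_assignment(cost)                    # Hungarian matching on centroids
    matches = [(i, j) for i, j in zip(rows, cols) if cost[i, j] < big]

    tp = len(matches)
    fp = len(pred_masks) - tp                                   # unmatched predictions
    fn = len(gt_masks) - tp                                     # unmatched GT regions
    precision = tp / max(tp + fp, 1)
    recall = tp / max(tp + fn, 1)
    fscore = 2 * precision * recall / max(precision + recall, 1e-9)
    jaccard = [np.logical_and(pred_masks[i], gt_masks[j]).sum()
               / np.logical_or(pred_masks[i], gt_masks[j]).sum()
               for i, j in matches]
    return precision, recall, fscore, jaccard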
[Figure 1: pipeline diagram omitted in this text version; its components are the cell image, the computation of θ and φ, the pricing subproblems Column(λ, κ, 1) ... Column(λ, κ, D), the LP solver over Q̂ and Ĉ producing λ, κ and γ, row generation Row(γ), and the rounding of the LP output.]
Figure 1: Overview of our system. We use colors to distinguish between the different parts of the system, which are defined as follows: user input (green), sub-problem solution (blue), triples (red), rounding the output of the LP solver (orange). We use Column(λ, κ, d) to refer to generating the column where d is an anchor.
Figure 2: Example cell segmentation results of Datasets 1-3 (left to right). Rows are (top
to bottom): original image, cell of interest boundary classifier prediction image, super-pixels,
color map of segmentation, and enlarged views of the inset (black square)
Table 3: Summary of experimental datasets on the number of cells, cell radius, image size, the number of super-pixels and region adjacency graph (RAG) edges.

Dataset                  | # cells | avg. cell radius | image size | # super-pixels | # RAG edges
1 (Zhang et al., 2014a)  | 1768    | 30               | 1024×1024  | 1225±242       | 3456±701
2 (Peng et al., 2013)    | 2340    | 50               | 1024×1024  | 3727±2450      | 10530±7010
3 (Arteta et al., 2012)  | 1073    | 20               | 400×400    | 1081±364       | 3035±1038
Table 4: Evaluation and comparison of detection for Datasets 1-3 (Fig. 2): precision (P), recall (R) and F-score (F) are reported for the proposed method, as well as those reported in the state-of-the-art methods. Here (Zhang et al., 2014b) uses the algorithms planar correlation clustering (PCC) and non-planar correlation clustering (NPCC).

                            | Dataset 1          | Dataset 2          | Dataset 3
                            | P     R     F      | P     R     F      | P     R     F
(Arteta et al., 2012)       | -     -     -      | -     -     -      | 0.89  0.86  0.87
(Arteta et al., 2016)       | -     -     -      | -     -     -      | 0.99  0.96  0.97
(Funke et al., 2015)        | 0.93  0.89  0.91   | 0.99  0.90  0.94   | 0.95  0.98  0.97
(Hilsenbeck et al., 2017)   | -     -     -      | -     -     -      | -     -     0.97
(Ronneberger et al., 2015)  | -     -     -      | -     -     -      | -     -     0.97
PCC (Zhang et al., 2014b)   | 0.95  0.86  0.90   | 0.80  0.75  0.76   | 0.92  0.92  0.92
NPCC (Zhang et al., 2014b)  | 0.71  0.96  0.82   | 0.75  0.83  0.78   | 0.85  0.97  0.90
Proposed                    | 0.99  0.97  0.98   | 1.00  0.94  0.97   | 1.00  0.97  0.99
Table 5: Evaluation and comparison of segmentation for Datasets 1-3 (Fig. 2): dice coefficient (D) and Jaccard index (J) are reported for the proposed method, as well as those reported in the state-of-the-art methods. Here (Zhang et al., 2014b) uses the algorithms planar correlation clustering (PCC) and non-planar correlation clustering (NPCC).

                            | Dataset 1     | Dataset 2     | Dataset 3
                            | D     J       | D     J       | D     J
(Funke et al., 2015)        | 0.90  0.82    | 0.90  0.83    | 0.84  0.73
(Hilsenbeck et al., 2017)   | -     -       | -     -       | -     0.75
(Dimopoulos et al., 2014)   | -     0.87    | -     -       | -     -
(Ronneberger et al., 2015)  | -     -       | -     -       | -     0.74
PCC (Zhang et al., 2014b)   | 0.87  0.84    | 0.91  0.85    | 0.79  0.72
NPCC (Zhang et al., 2014b)  | 0.86  0.89    | 0.91  0.84    | 0.80  0.70
Proposed                    | 0.91  0.90    | 0.90  0.83    | 0.82  0.71
[Figure 3: plot omitted in this text version; it shows a histogram titled "Histogram of Inference Time: Cells" with time (0 to 700 seconds) on the x-axis and counts on the y-axis, with two series: total time and time without upper bound.]
Figure 3: Histogram of inference time for Dataset 1.
Our method also handles large variations of cell shape and size well, even within the same image, as shown in Fig 2 for Dataset 2.
6.3. Timing and bounds
We now consider the performance of our approach with regard to the gap
between the upper and lower bounds produced by our algorithm. We normalize these gaps by
dividing by the absolute value of the lower bound. For our three datasets, the proportion of
problem instances that achieve normalized gaps under 0.1 is 99.28 %, 80 % and 100 % on
Datasets 1, 2 and 3, respectively. The peaks of the inference-time histograms are around 150,
500 and 100 seconds without parallelization. As an example, the histogram of inference time
for Dataset 1 is shown in Fig. 3. Our approach is approximately an order of magnitude faster
than that of (Zhang et al., 2014b).
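As an illustration only (not the authors' code; the function and variable names are ours), the normalized-gap statistic used above can be computed as follows:

    def normalized_gap(upper_bound, lower_bound):
        # Gap between the upper and lower bounds for one problem instance,
        # normalized by the absolute value of the lower bound.
        return (upper_bound - lower_bound) / abs(lower_bound)

    def fraction_under_threshold(bounds, threshold=0.1):
        # bounds: list of (upper_bound, lower_bound) pairs, one per instance.
        gaps = [normalized_gap(ub, lb) for ub, lb in bounds]
        return sum(g < threshold for g in gaps) / len(gaps)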
7. Conclusion
In this article we introduce a novel column generation strategy that efficiently
optimizes an ILP formulation of instance segmentation through clustering super-pixels.
We use our approach to detect and segment crowded clusters of cells in
distinct microscopy image datasets, achieving state-of-the-art or near state-of-the-art
performance.
We now consider some extensions of our approach. The use of odd set
inequalities may prove useful for traditional set cover formulations of vehicle
routing problems. In this context, for triples, the corresponding inequality is
defined as follows: for any set of three distinct depots, the number of routes that
pass through one or two of those depots, plus two times the number of routes
that pass through all three depots, is no less than two (a symbolic form is given
below). Dynamic programming formulations for pricing can be adapted to include
the corresponding Lagrange multipliers (Irnich and Desaulniers, 2005), in either
the elementary or non-elementary (Kallehauge et al., 2005) setting. The approach
in (Wang et al., 2017a), which employs dynamic programming in a branch and
bound context in the pricing problem (never the master problem), can also be
adapted.
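Written symbolically (our paraphrase of the sentence above, not notation taken from the paper), with x_r ∈ {0, 1} indicating that route r is selected and T any set of three distinct depots:

    Σ_{r : |r ∩ T| ∈ {1, 2}} x_r + 2 · Σ_{r : T ⊆ r} x_r ≥ 2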
Another extension considers multiple types of cells with a unique model for
each cell type and its own pricing problem. Such types can include rotations,
scalings, or other transformations of a common model, which may be useful for
cells that are highly non-circular in shape.
In future work one should apply dual feasible inequalities (Ben Amor et al.,
2006; Yarkony and Fowlkes, 2015). In this case one would create a separate
variable for each pair of a cell and a feasible anchor for that cell (where the
anchor is called the main anchor). Then the ILP would be framed as selecting a
set of cells such that (1) no super-pixel is included more than once and (2) no
main anchor is included in more than one cell. The Lagrange multipliers for (1)
can then be bounded from above by the increase in cost corresponding to removing
the super-pixel d from a cell. For a super-pixel d, one trivial such bound is minus
one times the sum of the non-positive cost terms involving d.
Acknowledgment
This work was partly supported by the Spanish Ministry of Economy and
Competitiveness under the Maria de Maeztu Units of Excellence Programme
(MDM-2015-0502).
References
Achanta, R., Shaji, A., Smith, K., Lucchi, A., Fua, P., Süsstrunk, S., 2012.
SLIC superpixels compared to state-of-the-art superpixel methods. IEEE
Transactions on Pattern Analysis and Machine Intelligence 34, 2274–2282.
Arbelaez, P., Maire, M., Fowlkes, C., Malik, J., 2011. Contour detection and
hierarchical image segmentation. IEEE Transactions on Pattern Analysis and
Machine Intelligence 33, 898–916.
Armacost, A.P., Barnhart, C., Ware, K.A., 2002. Composite variable formulations for express shipment service network design. Transportation Science 36,
1–20.
Arteta, C., Lempitsky, V., Noble, J., Zisserman, A., 2012. Learning to detect
cells using non-overlapping extremal regions, in: International Conference on
Medical Image Computing and Computer Assisted Intervention (MICCAI).
volume 7510 of Lecture Notes in Computer Science, pp. 348–356.
Arteta, C., Lempitsky, V., Noble, J., Zisserman, A., 2016. Detecting overlapping
instances in microscopy images using extremal region trees. Medical Image
Analysis 27, 3–16.
Barahona, F., 1982. On the computational complexity of Ising spin glass models.
Journal of Physics A: Mathematical and General 15, 3241–3253.
Barahona, F., 1991. On cuts and matchings in planar graphs. Mathematical
Programming 36, 53–68.
Barahona, F., Jensen, D., 1998. Plant location with minimum inventory. Mathematical Programming 83, 101–111.
Barahona, F., Mahjoub, A., 1986. On the cut polytope. Mathematical Programming 60, 157–173.
Barnhart, C., Hane, C.A., Vance, P.H., 2000. Using branch-and-price-and-cut
to solve origin-destination integer multicommodity flow problems. Operations
Research 48, 318–326.
Barnhart, C., Johnson, E.L., Nemhauser, G.L., Savelsbergh, M.W.P., Vance,
P.H., 1996. Branch-and-price: Column generation for solving huge integer
programs. Operations Research 46, 316–329.
Ben Amor, H., Desrosiers, J., Valério de Carvalho, J.M., 2006. Dual-optimal
inequalities for stabilized column generation. Operations Research 54, 454–
463.
Boros, E., Hammer, P., 2002. Pseudo-boolean optimization. Discrete Applied
Mathematics 123, 155–225.
Boykov, Y., Veksler, O., Zabih, R., 2001. Fast approximate energy minimization via graph cuts. IEEE Transactions on Pattern Analysis and Machine
Intelligence 23, 1222–1239.
Desaulniers, G., Desrosiers, J., Solomon, M.M., 2006. Column Generation.
volume 5. Springer Science & Business Media.
Dimopoulos, S., Mayer, C., Rudolf, F., Stelling, J., 2014. Accurate cell segmentation in microscopy images using membrane patterns. Bioinformatics 30,
2644–2651.
Fisher, M.E., 1966. On the dimer solution of planar Ising models. Journal of
Mathematical Physics 7, 1776–1781.
Funke, J., Hamprecht, F., Zhang, C., 2015. Learning to segment: Training
hierarchical segmentation under a topological loss, in: International Conference on Medical Image Computing and Computer Assisted Intervention
(MICCAI). volume 9351 of Lecture Notes in Computer Science, pp. 268–275.
Geoffrion, A.M., 2010. Lagrangian relaxation for integer programming, in: Jünger, M., et al.
(Eds.), 50 Years of Integer Programming 1958-2008. Springer, chapter 9,
pp. 243–281.
Gilmore, P., Gomory, R.E., 1965. Multistage cutting stock problems of two and
more dimensions. Operations research 13, 94–120.
Gilmore, P.C., Gomory, R.E., 1961. A linear programming approach to the
cutting-stock problem. Operations research 9, 849–859.
Heismann, O., Borndörfer, R., 2014. A generalization of odd set inequalities for
the set packing problem, in: Operations Research Proceedings 2013. Springer,
pp. 193–199.
Hilsenbeck, O., Schwarzfischer, M., Loeffler, D., Dimopoulos, S., Hastreiter, S.,
Marr, C., Theis, F., Schroeder, T., 2017. fastER : a user-friendly tool for ultrafast and robust cell segmentation in large-scale microscopy. Bioinformatics
33, 2020–2028.
Irnich, S., Desaulniers, G., 2005. Shortest path problems with resource constraints. Column Generation , 33–65.
Kallehauge, B., Larsen, J., Madsen, O.B., Solomon, M.M., 2005. Vehicle routing
problem with time windows. Column Generation , 67–98.
Karp, R.M., 1972. Reducibility among combinatorial problems, in: Complexity
of computer computations. Springer, pp. 85–103.
Kvarnström, M., Logg, K., Diez, A., Bodvard, K., Kall, M., 2008. Image analysis
algorithms for cell contour recognition in budding yeast. Optics Express 16,
1035–1042.
Land, A.H., Doig, A.G., 1960. An automatic method of solving discrete programming problems. Econometrica: Journal of the Econometric Society ,
497–520.
Lavoie, S., Minoux, M., Odier, E., 1988. A new approach for crew pairing
problems by column generation with an application to air transportation.
European Journal of Operational Research 35, 45–58.
Levinshtein, A., Stere, A., Kutulakos, K.N., Fleet, D.J., Dickinson, S.J., Siddiqi, K., 2009. Turbopixels: Fast superpixels using geometric flows. IEEE
Transactions on Pattern Analysis and Machine Intelligence 31, 2290–2297.
Liu, F., Xing, F., Yang, L., 2014. Robust muscle cell segmentation using region
selection with dynamic programming, in: IEEE International Symposium on
Biomedical Imaging (ISBI), pp. 1381–1384.
Mayer, C., Dimopoulos, S., Rudolf, F., Stelling, J., 2013. Using CellX to Quantify Intracellular Events. Current Protocols in Molecular Biology , 14.22.1–
14.22.20.
Meijering, E., 2012. Cell segmentation: 50 years down the road. IEEE Signal
Processing Magazine , 140–145.
Meijering, E., Carpenter, A., Peng, H., Hamprecht, F., Olivo-Marin, J.C., 2016.
Imagining the future of bioimage analysis. Nature Biotechnology 34, 1250–
1255.
Peng, J.Y., Chen, Y.J., Green, M.D., Sabatinos, S.A., Forsburg, S.L., Hsu, C.N.,
2013. PombeX: Robust cell segmentation for fission yeast transillumination
images. PLoS One 8, e81434.
Ronneberger, O., Fischer, P., Brox, T., 2015. U-Net: convolutional networks
for biomedical image segmentation, in: Frangi, A., et al. (Eds.), International
Conference on Medical Image Computing and Computer Assisted Intervention
(MICCAI). Springer. volume 9351 of Lecture Notes in Computer Science, pp.
234–241.
Ropke, S., Cordeau, J.F., 2009. Branch and cut and price for the pickup and
delivery problem with time windows. Transportation Science 43, 267–286.
Rother, C., Kolmogorov, V., Lempitsky, V., Szummer, M., 2007. Optimizing
binary mrfs via extended roof duality, in: 2007 IEEE Conference on Computer
Vision and Pattern Recognition (CVPR).
Shih, W.K., Wu, S., Kuo, Y., 1990. Unifying maximum cut and minimum cut
of a planar graph. IEEE Transactions on Computers 39, 694–697.
Sommer, C., Straehle, C., Koethe, U., Hamprecht, F.A., 2011. ilastik: Interactive Learning and Segmentation Toolkit, in: IEEE International Symposium
on Biomedical Imaging (ISBI).
Vance, P.H., 1998. Branch-and-price algorithms for the one-dimensional cutting
stock problem. Computational Optimization and Applications 9, 211–228.
Vance, P.H., Barnhart, C., Johnson, E.L., Nemhauser, G.L., 1997. Airline crew
scheduling: A new formulation and decomposition algorithm. Operations
Research 45, 188–200.
Wang, S., Wolf, S., Fowlkes, C., Yarkony, J., 2017a. Tracking objects with
higher order interactions using delayed column generation, in: International
Conference on Artificial Intelligence and Statistics (AISTATS).
Wang, S., Zhang, C., Gonzalez-Ballester, M.A., Ihler, A., Yarkony, J.,
2017b. Multi-person pose estimation via column generation. arXiv preprint
arXiv:1709.05982 .
Xing, F., Yang, L., 2016. Robust nucleus/cell detection and segmentation in
digital pathology and microscopy images: A comprehensive review. IEEE
Reviews in Biomedical Engineering 9, 234–263.
Yarkony, J., 2015. Next generation multicuts for semi-planar graphs. arXiv
preprint arXiv:1511.01994 .
Yarkony, J., Fowlkes, C., 2015. Planar ultrametrics for image segmentation, in:
Neural Information Processing Systems.
Yarkony, J., Ihler, A., Fowlkes, C., 2012. Fast planar correlation clustering for
image segmentation, in: Proceedings of the 12th European Conference on
Computer Vision (ECCV).
Yarkony, J., Zhang, C., Fowlkes, C., 2015. Hierarchical planar correlation clustering for cell segmentation, in: Energy Minimization Methods in Computer
Vision and Pattern Recognition (EMMCVPR), pp. 492–504.
Zhang, C., Huber, F., Knop, M., Hamprecht, F., 2014a. Yeast cell detection and
segmentation in bright field microscopy, in: IEEE International Symposium
on Biomedical Imaging (ISBI).
Zhang, C., Yarkony, J., Hamprecht, F., 2014b. Cell detection and segmentation
using correlation clustering, in: International Conference on Medical Image
Computing and Computer Assisted Intervention (MICCAI). volume 8673 of
Lecture Notes in Computer Science, pp. 9–16.
Zhang, Z., Schwing, A., Fidler, S., Urtasun, R., 2015. Monocular object instance
segmentation and depth ordering with CNNs, in: International Conference on
Computer Vision (ICCV).
Linear-time approximation schemes for planar minimum three-edge
connected and three-vertex connected spanning subgraphs
arXiv:1701.08315v1 [] 28 Jan 2017
Baigong Zheng
Oregon State University
[email protected]
Abstract
We present the first polynomial-time approximation schemes, i.e., (1 + ε)-approximation
algorithms for any constant ε > 0, for the minimum three-edge connected spanning subgraph
problem and the minimum three-vertex connected spanning subgraph problem in undirected
planar graphs. Both approximation schemes run in linear time.
This material is based upon work supported by the National Science Foundation under Grant No. CCF1252833.
1 Introduction
Given an undirected unweighted graph G, the minimum k-edge connected spanning subgraph problem (k-ECSS) asks for a spanning subgraph of G that is k-edge connected (remains connected
after removing any k − 1 edges) and has a minimum number of edges. The minimum k-vertex
connected spanning subgraph problem (k-VCSS) asks for a k-vertex connected (remains connected
after removing any k − 1 vertices) spanning subgraph of G with minimum number of edges. These
are fundamental problems in network design and have been well studied. When k = 1, the solution is simply a spanning tree for both problems. For k ≥ 2, both problems become
NP-hard [7, 12], so much effort has gone into designing polynomial-time approximation algorithms. Cheriyan and Thurimella [7] give algorithms with approximation ratios of 1 + 1/k for
k-VCSS and 1 + 2/(k + 1) for k-ECSS in simple graphs. Gabow and Gallagher [11] improve the
approximation ratio for k-ECSS to 1 + 1/(2k) + O(1/k^2) for simple graphs when k ≥ 7, and they
give a (1 + 21/(11k))-approximation algorithm for k-ECSS in multigraphs. Some researchers have
studied these two problems for the small connectivities k, especially k = 2 and k = 3, and obtained
better approximations. The best approximation ratio for 2-ECSS in general graphs is 5/4 of Jothi,
Raghavachari, and Varadarajan [17], while for 2-VCSS in general graphs, the best ratio is 9/7 of
Gubbala and Raghavachari [14]. Gubbala and Raghavachari [15] also give a 4/3-approximation
algorithm for 3-ECSS in general graphs.
A polynomial-time approximation scheme (PTAS) is an algorithm that, given an instance and
a positive number ε, finds a (1 + ε)-approximation for the problem and runs in polynomial time
for fixed ε. Neither k-ECSS nor k-VCSS has a PTAS even in graphs of bounded degree for k = 2
unless P = NP [10]. However, this hardness of approximation does not hold for special classes
of graphs and small values of k. For example, Czumaj et al. [8] show that there are PTASes for
both of 2-ECSS and 2-VCSS in planar graphs. Both problems are NP-hard in planar graphs (by
a reduction from Hamiltonian cycle). Later, Berger and Grigni improved the PTAS for 2-ECSS to
run in linear time [3].
Following their PTASes for 2-ECSS and 2-VCSS, Czumaj et al. [8] ask the following: can we
extend the PTAS for 2-ECSS to a PTAS for 3-ECSS in planar graphs? A PTAS for 3-VCSS in planar
graphs is additionally listed as an open problem in the Handbook of Approximation Algorithms and
Metaheuristics (Section 51.8.1) [13]. In this paper we answer these questions affirmatively by giving
the first PTASes for both 3-ECSS and 3-VCSS in planar graphs. Our main results are the following
theorems.
Theorem 1. For 3-ECSS, there is an algorithm that, for any ε > 0 and any undirected planar
graph G, finds a (1 + ε)-approximate solution in linear time.
Theorem 2. For 3-VCSS, there is an algorithm that, for any ε > 0 and any undirected planar
graph G, finds a (1 + ε)-approximate solution in linear time.
In the following, we assume there are no self-loops in the input graph for both 3-ECSS and
3-VCSS. For 3-ECSS, we allow parallel edges in G, but at most 3 parallel edges between any pair
of vertices are useful in a minimal solution. For 3-VCSS, parallel edges are unnecessary, so we
assume the input graph is simple. Since three-vertex connectivity (triconnectivity) and three-edge
connectivity can be verified in linear time [21, 24], we assume the input graph G contains a feasible
solution. W.l.o.g. we also assume ε < 1.
1.1 The approach
Our PTASes follow the general framework for planar PTASes that grew out of the PTAS for
TSP [18], and has been applied to obtain PTASes for other problems in planar graphs, including
minimum-weight 2-edge-connected subgraph [3], Steiner tree [4, 6], Steiner forest [2] and relaxed
minimum-weight subset two-edge connected subgraph [5]. The framework consists of the following
four steps.
Spanner Step. Find a subgraph G0 that contains a (1 + )-approximation and whose total weight
is bounded by a constant times of the weight of an optimal solution. Such a graph is usually
called a spanner since it often approximates the distance between vertices.
Slicing Step. Find a set of subgraphs, called slices, in G0 such that any two of them are face
disjoint and share only a set of edges with small weight and each of the subgraphs has
bounded branchwidth.
Dynamic-Programming Step. Find the optimal solution in each slice using dynamic programming. Since the branchwidth of each subgraph is bounded, the dynamic programming runs
in polynomial time.
Combining Step. Combine the solutions of all subgraphs obtained by dynamic programming and
some shared edges from the Slicing step to give the final approximate solution. (A schematic
sketch of the four steps follows below.)
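The following Python-style pseudocode is a minimal sketch of the four steps above; the helper names (build_spanner, decompose_into_slices, solve_slice_by_dp, combine) are hypothetical placeholders, not functions defined in this paper:

    def planar_ptas(G, eps):
        # Spanner step: a subgraph of weight O(1) times optimal that still
        # contains a (1 + eps)-approximate solution.
        spanner = build_spanner(G, eps)
        # Slicing step: face-disjoint slices of bounded branchwidth; adjacent
        # slices overlap only in a light set of shared edges.
        slices, shared_edges = decompose_into_slices(spanner, eps)
        # Dynamic-programming step: solve each slice exactly.
        partial_solutions = [solve_slice_by_dp(s) for s in slices]
        # Combining step: union of slice solutions and shared edges.
        return combine(partial_solutions, shared_edges)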
For most applications of the PTAS framework, the challenging step is to illustrate the existence
of a spanner subgraph. However, for 3-ECSS and 3-VCSS, we could simply obtain a spanner from
the input graph G by deleting additional parallel edges since by planarity there are at most O(n)
edges in G and the size of an optimal solution is at least n, where n = |V (G)|. So, unlike in those
previous applications, the real challenge for our problems is to carry out the slicing step
and the combining step. For the slicing step, we want to identify a set of slices that have two
properties: (1) three-edge-connectivity for 3-ECSS or triconnectivity for 3-VCSS, and (2) bounded
branchwidth. With these two properties, we can solve 3-ECSS or 3-VCSS on each slice efficiently.
For the combining step, we need to show that we can obtain a nearly optimal solution from the
optimal solutions of all slices found in the slicing step and the shared edges of slices; that is, the
solution must satisfy the connectivity requirement and its size must be at most (1 + ε) times the size
of an optimal solution for the original input graph.
To identify slices, we generalize a decomposition used by Baker [1]. Before sketching our method,
we briefly mention the difficulty in applying previous techniques. The PTAS for TSP [18] identifies
slices in a spanner G0 by doing a breadth-first search in its planar dual to decompose the edge set
into some levels, and any two adjacent slices could only share all edges of the same level, which form
a set of edge-disjoint simple cycles. This is enough to achieve simple connectivity or biconnectivity
between vertices of different slices. But our problems need stronger connectivity for which one cycle
is not enough. For example, we may need a non-trivial subgraph outside of slice H to maintain the
triconnectivity between two vertices in slice H. See Figure 1 (a).
For 3-ECSS, we construct a graph called 3EC slice. We contract each component of the spanner
that is not in the current 3EC slice. Since contraction can only increase edge-connectivity, this will
give us the 3EC slices that are three-edge connected (Lemma 14 in Section 3). However, if we
directly apply this contraction method in the slicing step of the PTAS for TSP, the branchwidth
of the 3EC slice may not be bounded. This is because there may be many edges that are not in
Figure 1: (a) The bold cycle encloses a slice H. To maintain the triconnectivity between two
vertices u and v in H, we need the dashed path outside of H. (b) The bold cycle encloses a 3EC
slice H. The dashed edges divide the outer face of H into distinct regions, which may contain
contracted nodes. (c) The bold cycle encloses a 3EC slice H. The dashed path P between w1 and
w2 will be contracted to obtain a node x. A solution for 3-ECSS on H may contain x but not
the dashed edge between u and v. Then the union S of feasible solutions on all 3EC slices is not
feasible if it does not contain path P .
the slice but have both endpoints in the slice, and such edges may divide the faces of the slice into
distinct regions, each of which may contain a contracted node. See Figure 1 (b). To avoid this
problem, we apply the contractions in the decomposition used by Baker [1], which define a slice
based on the levels of vertices instead of edges. We can prove that each 3EC slice has bounded
branchwidth in this decomposition (Lemma 12 in Section 2).
Although each 3EC slice is three-edge connected, the union S of their feasible solutions may not
be three-edge connected. Consider the following situation. In a solution for a slice, a contracted
node x is contained in all vertex-cuts for a pair of vertices u and v. But in S, the subgraph induced
by the vertex set X corresponding to x may not be connected. Therefore, u and v may not satisfy
the connectivity requirement in S. See Figure 1 (c).
In their paper, Czumaj et al. [8] proposed a structure called bicycle. A bicycle consists of two
nested cycles, and all in-between faces visible from one of the two cycles. This can be used to
maintain the three-edge-connectivity between those connecting endpoints in cycles. This motivates
our idea: we want to combine this structure with Baker’s shifting technique so that two adjacent
slices share a subgraph similar to a bicycle. In this way, we could maintain the strong connectivity
between adjacent slices by including all edges in this shared subgraph, whose size could be bounded
by the shifting technique. Specifically, we define a structure called double layer for each level i based
on our decomposition, which intuitively contains all the edges incident to vertices of level i and
all edges between vertices of level i + 1. Then we define a 3EC slice based on a maximal circuit
such that any pair of 3EC slices can only share edges in the double layer between them. In this
way, we can obtain a feasible solution for 3-ECSS by combining the optimal solutions for all the
slices and all the shared double layers (Lemma 16 in Section 3). By applying shifting technique on
double layers, we can prove that the total size of the shared double layers is a small fraction of the
size of an optimal solution. So we could add those shared double layers into our solution without
increasing its size by much, and this gives us a nearly optimal solution.
For 3-VCSS, we construct a 3VC slice based on a simple cycle instead of a circuit. The construction is similar to that of 3EC slices. However, contraction does not maintain vertex connectivity.
Figure 2: The three horizontal lines in the right figure show the three levels in the left figure. In
this example, V0 = {a, b, c, d, e}, V1 = {f, g, h, i, j} and V2 = {k, l, m, n}.
So we need to prove each 3VC slice is triconnected (Lemma 20 in Section 4). Then similar to
3-ECSS, we also need to prove that the union of the optimal solutions of all 3VC slices and all
shared double layers form a feasible solution (Lemma 22 in Section 4).
For the dynamic-programming step, we need to solve a minimum-weight 3-ECSS problem in each
3EC slice and a minimum-weight 3-VCSS problem in each 3VC slice. This is because we need to
carefully assign weights to edges in a slice so that we can bound the size of our solution. We provide
a dynamic program for the minimum-weight 3-ECSS problem in graphs of bounded branchwidth in
Section 5, which is similar to that in the works of Czumaj and Lingas [9, 10]. A dynamic program
for minimum-weight 3-VCSS can be obtained in a similar way. Then we have the following theorem.
Theorem 3. The minimum-weight 3-ECSS problem and the minimum-weight 3-VCSS problem can both be
solved on a graph G of bounded branchwidth in O(|E(G)|) time.
2 Preliminaries
Let G be an undirected planar graph with vertex set V (G) and edge set E(G). We denote by G[S]
the subgraph of G induced by S where S is a vertex subset or an edge subset. We simplify |E(G)|
to |G|. We assume we are given an embedding of G in the plane. We denote by ∂(G) the subgraph
induced by the edges on the outer boundary of G in this embedding. A circuit is a closed walk that
may contain repeated vertices but not repeated edges. A simple cycle is a circuit that contains no
repetition of vertices, other than the repetition of the starting and ending vertex. A simple cycle
bounds a finite region in the plane that is a topological disk. We say a simple cycle encloses a
vertex, an edge or a subgraph if the vertex, edge or subgraph is embedded in the topological disk
bounded by the cycle. We say a circuit encloses a vertex, edge or subgraph if the vertex, edge or
subgraph is enclosed by a simple cycle in the circuit.
The level of a vertex of G is defined as follows [1]: a vertex has level 0 if it is on the infinite face
of G; a vertex has level i if it is on the infinite face after deleting all the vertices of levels less than
i. Let Vi be the set of vertices of level i. Let Ei be the edge set of G in which each edge has both
endpoints in level i. Let Ei,i+1 be the edge set of G where each edge has one endpoint in level i and
one endpoint in level i + 1. See Figure 2 as an example. Then we have the following observations.
Observation 1. For any level i ≥ 0, the boundary of any non-trivial two-edge connected component
in G[∪j≥i Vj ] is a maximal circuit in ∂(G[Vi ]).
Figure 3: The horizontal lines represent edge set in the same level and slashes and counter slashes
represent the edge set between two adjacent levels. Left: there are three double layers: Di , Di+1
and Di+2 represented by the shaded regions. Right: Gi contains all edges in this figure, but Hi ,
represented by shaded region, does not contain Ef (i)−1 .
Observation 2. For any level i ≥ 0, the boundary of any non-trivial biconnected component in
G[∪j≥i Vj ] is a simple cycle in ∂(G[Vi ]).
For any i ≥ 0, we define the ith double layer
Di = Ei−1,i ∪ Ei ∪ Ei,i+1 ∪ Ei+1
as the set of edges in G[Vi−1 ∪ Vi ∪ Vi+1 ] \ Ei−1 . See Figure 3.
Let k be a constant that depends on ε. For j = 0, 1, . . . , k − 1, let Rj = ∪i mod k=j Di . Let
t = argminj |Rj | and R = Rt . Since Σ0≤j≤k−1 |Rj | ≤ 2|G|, we have the following upper bound for the
size of R:
|R| ≤ 2/k · |G|   (1)
Let f (i) = ik − k + t for integer i ≥ 0. If ik − k + t < 0 for any i, we let f (i) = 0. Let
Gi = G[∪f (i)−1≤j≤f (i+1)+1 Vj ] be the subgraph of G induced by vertices in level [f (i)−1, f (i+1)+1]
and Hi = Gi \Ef (i)−1 be a subgraph of Gi . See Figure 3. Note that Hi contains exactly the edges of
double layers Df (i) through Df (i+1) . Therefore, so long as k ≥ 2, we have Hi ∩ Hi+1 = Df (i+1) ⊆ R,
and Hi ∩ Hj = ∅ for any j ≠ i with |i − j| ≥ 2. So for any j ≠ i we have
(Hi \ R) ∩ (Hj \ R) = ∅.   (2)
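As a small illustration of the shifting step above (a sketch in Python with our own data layout, not code from the paper):

    from collections import defaultdict

    def choose_shift(double_layers, k):
        # double_layers: dict mapping level i to its double layer D_i (a set of edges).
        # R_j collects the double layers whose index is congruent to j mod k;
        # return the lightest class, i.e., t = argmin_j |R_j| and R = R_t.
        R = defaultdict(set)
        for i, D_i in double_layers.items():
            R[i % k] |= D_i
        t = min(range(k), key=lambda j: len(R[j]))
        return t, R[t]

Since every edge of G lies in at most two double layers, Σj |Rj | ≤ 2|G|, so the returned class has at most 2|G|/k edges, which is exactly inequality (1).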
For each i ≥ 0 and each maximal circuit Ca in ∂(G[Vf (i) ]), we construct a graph Hia , called a 3EC
slice, from G as follows. (See Figure 4.) Let U be the subset of vertices of Hi \ (Vf (i)−1 ∪ Vf (i+1)+1 )
that are enclosed by Ca . We contract each connected component of G \ U into a node. After all the
contractions, we delete self-loops and additional parallel edges if there are more than three parallel
edges between any pair of vertices. The resulting graph is the 3EC slice Hia . We call these contracted
vertices nodes to distinguish them from the original vertices of G. We call a contracted node inner
if it is obtained by contracting a component that is enclosed by Ca ; otherwise it is outer. Note that
Figure 4: Example for the construction of Hia . Left: a component of Gi . The bold cycles represent
maximal circuits in ∂(G[Vf (i) ]). Right: an example of Hia . The nodes represent the contracted
nodes. The cycles inside of Ca must belong to Ef (i+1) .
a 3EC slice is still planar, and two 3EC slices only share edges in a double layer: the common edges
of two 3EC slices Hia and Hi+1b must be in the set Ef (i+1)−1,f (i+1) ∪ Ef (i+1) ∪ Ef (i+1),f (i+1)+1 , while
the common edges of Hia and Hic must be in the set Ef (i) . In a similar way, we can construct a
simple graph Hia , called a 3VC slice, for each i ≥ 0 and each simple cycle Ca in ∂(G[Vf (i) ]).
Remark. There can be a 3EC slice Hia containing only two vertices in Vf (i) . Then the slice must
contain at least two parallel edges between the two vertices in Vf (i) . However, no 3EC slice Hia can
contain only one vertex in Vf (i) , since we define a 3EC slice based on a maximal circuit and we assume
there is no self-loop in G. Similarly, any 3VC slice Hia contains at least three vertices in Vf (i) .
Lemma 4. If G is two-edge connected (biconnected), then each 3EC (3VC) slice obtained from G
has at most one outer node.
Proof. We first prove the following claims, and then by these claims we prove the lemma.
Claim 5. For any l ≥ 0, subgraph G[∪0≤j≤l Vj ] is connected.
Proof. We prove by induction on l that subgraph G[∪0≤j≤l Vj ] is connected for any l ≥ 0. The
base case is l = 0. Since V0 is the set of vertices on the boundary of G, and since G is connected,
subgraph G[V0 ] is connected. Assume subgraph G[∪0≤j≤l Vj ] is connected for l ≥ 0. Then we claim
subgraph G[∪0≤j≤l+1 Vj ] is connected. This is because for each connected component X of G[Vl+1 ],
there exists at least one edge between X and G[Vl ], otherwise graph G cannot be connected. Since
subgraph G[∪0≤j≤l Vj ] is connected, we have G[∪0≤j≤l+1 Vj ] is connected.
Claim 6. If G is two-edge connected, then for any two distinct maximal circuits Ca and Cb in
∂(G[Vl ]), there is a path between Cb and G[Vl−1 ] that is vertex disjoint from Ca .
Proof. Note that Ca and Cb are vertex-disjoint, otherwise Ca is not maximal. We argue that there
cannot be two edge-disjoint paths between Ca and Cb in G[Vl ]. If there are such two edge-disjoint
paths, say P1 and P2 , then Ca cannot be a maximal circuit in ∂(G[Vl ]): if P1 and P2 have the
same endpoint in Ca , then Ca is not maximal; otherwise there is some edge of Ca that cannot be
in ∂(G[Vl ]). So we know that any vertex in Ca and any vertex in Cb cannot be two-edge connected
in G[Vl ]. Since G is two-edge connected and since G[Vl−1 ] must be outside of Ca and Cb , vertices
in Ca and those in Cb must be connected through G[Vl−1 ]. Therefore, there exists a path from Cb
to G[Vl−1 ] that does not contain any vertex in Ca .
Similarly, we can obtain the following claim.
Claim 7. If G is biconnected, then for any two distinct simple cycles Ca and Cb in ∂(G[Vl ]), there
is a path between Cb and G[Vl−1 ] that is vertex disjoint from Ca .
Now we prove the lemma. Let H be a 3EC slice based on some maximal circuit Ca in ∂(G[Vl ])
for some l ≥ 0. Let W = ∪0≤j<l Vj be the set of all vertices of G that have levels less than l, and
Q be a two-edge connected component in G[Vl ] disjoint from H. Then the boundary of Q is a
maximal circuit C in ∂(G[Vl ]). Note that Q could be trivial and then C is also trivial. Since G is
connected, each simple cycle must enclose a connected subgraph of G. So circuit C must enclose
a connected subgraph of G. By Claim 6, there is a path between C and G[Vl−1 ] disjoint from Ca .
Since G[∪0≤j<l Vj ] is connected by Claim 5, the set of vertices that are not enclosed by Ca induces
a connected subgraph of G, giving the lemma for H. For 3VC slice, we can obtain the lemma in
the same way by Claim 5 and Claim 7.
Using this lemma, we show how to construct all the 3EC slices in linear time. First we compute the levels of all vertices in linear time by using an appropriate representation of the planar
embedding such as that used by Lipton and Tarjan [20]. We construct all 3EC slices Hia from Gi
in O(|V (Gi )|) time. We first contract all the edges between vertices of level f (i + 1) + 1. Next,
we identify all two-edge connected components in G[Vf (i) ], which can be done in linear time by
finding all the edge cuts by the result of Tarjan [26]. Each such component contains a maximal
circuit in ∂(G[Vf (i) ]). Based on these two-edge connected components of G[Vf (i) ], we could identify
V (Hia ) \ {ria } for all 3EC slices Hia in O(|V (Gi )|) time, where ria is the outer contracted node for
Hia . This is because the inner contracted nodes of a 3EC slice Hia is the same as those contracted in
Gi if they are enclosed by Ca . Then for each 3EC slice Hia we add the outer node ria , and for each
vertex u ∈ V (Ca ) we add an edge between ria and u if there is an edge between u and some vertex
v that is not enclosed by Ca . To add those edges, we only need to traverse all the edges of subgraph
Gi [Vf (i)−1 ∪ Vf (i) ]. Since all these steps run in O(|V (Gi )|) time, and since Σi≥0 |V (Gi )| = O(|V (G)|),
we can obtain the following lemma.
Lemma 8. All 3EC slices can be constructed in O(|V (G)|) time.
Since we can compute all the biconnected components in G[Vf (i) ] in linear time based on depth-first search by the result of Hopcroft and Tarjan [16], we can obtain a similar lemma for 3VC slices.
Lemma 9. All 3VC slices can be constructed in O(|V (G)|) time.
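The central operation in both constructions is contracting every connected component of G \ U into a single node. A plain-Python sketch of that operation is given below (our own representation, not the paper's code; it omits the bookkeeping of parallel edges, which a 3EC slice keeps up to multiplicity three):

    from collections import defaultdict

    def contract_outside(adj, U):
        # adj: dict mapping each vertex to its set of neighbours; U: vertices kept as-is.
        rep = {}                          # vertex -> representative after contraction
        for v in adj:
            if v in U:
                rep[v] = v
            elif v not in rep:            # flood-fill one component of G \ U
                comp, stack, seen = [], [v], {v}
                while stack:
                    u = stack.pop()
                    comp.append(u)
                    for w in adj[u]:
                        if w not in U and w not in seen:
                            seen.add(w)
                            stack.append(w)
                node = ("contracted", comp[0])   # one new node per component
                for u in comp:
                    rep[u] = node
        contracted = defaultdict(set)     # build the quotient graph, dropping self-loops
        for v in adj:
            for w in adj[v]:
                if rep[v] != rep[w]:
                    contracted[rep[v]].add(rep[w])
        return contracted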
We review the definition of branchwidth given by Seymour and Thomas [25]. A branch decomposition of a graph G is a hierarchical clustering of its edge set. We represent this hierarchy by a
binary tree, called the decomposition tree, where the leaves are in bijection with the edges of the
original graph. If we delete an edge α of this decomposition tree, the edge set of the original graph
is partitioned into two parts Eα and E(G) \ Eα according to the leaves of the two subtrees. The
set of vertices in common between the two subgraphs induced by Eα and E(G) \ Eα is called the
separator corresponding to α in the decomposition. The width of the decomposition is the maximum size of the separator in that decomposition, and the branchwidth of G is the minimum width
of any branch decomposition of G. We borrow the following lemmas from Klein and Mozes [19],
which are helpful in bounding the branchwidth of our graphs.
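For concreteness, the separator induced by one tree edge can be computed directly from the corresponding edge bipartition (a tiny helper of ours, not taken from [19] or [25]):

    def separator(E_alpha, E_rest):
        # E_alpha and E_rest: the two sides of the edge bipartition induced by a
        # decomposition-tree edge; edges are given as pairs (u, v).
        side1 = {v for e in E_alpha for v in e}
        side2 = {v for e in E_rest for v in e}
        return side1 & side2

The width of the decomposition is then the maximum size of this set over all tree edges.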
Lemma 10. (Lemma 14.5.1 [19]) Deleting or contracting edges does not increase the branchwidth
of a graph.
Lemma 11. (Lemma 14.6.1 [19] rewritten) There is a linear-time algorithm that, given a planar
embedded graph G, returns a branch-decomposition whose width is at most twice of the depth of a
rooted spanning tree of G.
Lemma 12. If G is two-edge connected (biconnected), the branchwidth of any 3EC (3VC) slice is
O(k).
Proof. We prove this lemma for 3EC slices when G is two-edge connected; by the same proof we
can obtain the lemma for 3VC slices when G is biconnected. Let Hia be a 3EC slice. By Lemma 4,
there is at most one outer contracted node r for Hia . Let the level of r be f (i) − 1, and the level of
all inner contracted nodes be f (i + 1) + 1. Now we add edges to ensure that every vertex of level
l has a neighbor of level l − 1 for each f (i) ≤ l ≤ f (i + 1) + 1, while maintaining planarity. Call the
resulting graph Kia . Then the branchwidth of Hia is no more than that of Kia by Lemma 10. Now
we can find a breadth-first-tree of Kia rooted at r that has depth at most k + 3. By Lemma 11, the
branchwidth of Kia is O(k) and that of Hia is at most O(k).
3 PTAS for 3-ECSS
In this section, we prove Theorem 1. W.l.o.g. we assume G has at most three parallel edges between
any pair of vertices. Then G is our spanner. Let OPT(G) be an optimal solution for G. Since each
vertex in OPT(G) has degree at least three, we have
2|OPT(G)| ≥ 3|V (G)|.
(3)
If G is simple, then by planarity the number of edges is at most three times the number of
vertices. Since there are at most three parallel edges between any pair of vertices, we have
|G| ≤ 9|V (G)|.
(4)
Combining (3) and (4), we have |G| ≤ 6|OPT(G)|.
In this section, we only consider 3EC slices. So when we say slice, we mean 3EC slice in this
section. We construct all the slices from G. By (1), we have the following
|R| ≤ 2/k · |G| ≤ 12/k · |OPT(G)|.
(5)
We borrow the following lemma from Nagamochi and Ibaraki [22].
Lemma 13. (Lemma 4.1 (2) [22] rewritten) Let G be a k-edge connected graph with more than 2
vertices. Then after contracting any edge in G, the resulting graph is still k-edge connected.
Recall that our slices are obtained from G by contractions and deletions of self-loops. By the
above lemma, we have the following lemma.
Figure 5: The bold cycle encloses Yia . The dashed edge is in G but not in OPT(G). Its two
endpoints will be identified, since the dashed edge will be contracted to obtain Hia but it will not
be contracted when contracting connected components of OPT(G) \ Yia .
Lemma 14. If G is three-edge connected, then any slice is three-edge connected.
Since we can include all the edges in shared double layers, they are “free” to us. So we would
like to include as many of those edges as possible in the solution for each slice. This can be achieved
by defining an edge-weight function w for each slice Hia : assign weight 0 to edges in Df (i) ∪ Df (i+1)
and weight 1 to other edges. By Lemma 14, any slice is three-edge connected. We solve the
minimum-weight 3-ECSS problem on Hia in linear time by Theorem 3. Let Sol(Hia ) be a feasible
solution for the minimum-weight 3-ECSS problem on Hia . Then it is also a feasible solution for
3-ECSS on Hia . Let OPTw (Hia ) be an optimal solution for the minimum-weight 3-ECSS problem
on Hia . Then we have the following observation.
Observation 3. The weight of any solution Sol(Hia ) is the same as the number of its common
edges with Hia \ R, that is
w(Sol(Hia )) = |Sol(Hia ) ∩ (Hia \ R)|.
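In code form, the weighting is simply (a sketch with our own names, not the paper's):

    def slice_edge_weights(slice_edges, shared_double_layer_edges):
        # Edges of D_{f(i)} and D_{f(i+1)} are already paid for by R, so they get
        # weight 0; every other edge of the slice gets weight 1.
        return {e: 0 if e in shared_double_layer_edges else 1 for e in slice_edges}

so that, as stated in Observation 3, the weight of a solution counts exactly its edges outside R.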
Let Ci be the set of all maximal circuits in ∂(G[Vf (i) ]). Then we have the following lemmas.
Lemma 15. For any i ≥ 0, let Si = ∪Ca ∈Ci OPTw (Hia ). Then we can bound the number of edges
in Si by the following inequality:
|Si | ≤ |OPT(G) ∩ (Hi \ R)| + |Df (i) | + |Df (i+1) |.
Proof. We show that OPT(G) ∩ Hia is a feasible solution for the minimum-weight 3-ECSS problem
on Hia , and then we bound the size of Si . Let Yia be the set of vertices of Hia that are not
contracted nodes. We first contract connected components of OPT(G) \ Yia just as constructing
Hia from G. Then we need to identify any two contracted nodes, if their corresponding components
in OPT(G) are in the same connected component in G \ Yia . See Figure 5. Finally, we delete all
the self-loops and extra parallel edges if there are more than three parallel edges between any two
vertices. The resulting graph is a subgraph of OPT(G) ∩ Hia and spans V (Hia ). Since identifying
two nodes maintains edge-connectivity, and since contractions also maintain edge-connectivity by
Lemma 13, the resulting graph is three-edge connected. So OPT(G) ∩ Hia is a feasible solution
for the minimum-weight 3-ECSS problem on Hia . Then by the optimality of OPTw (Hia ), we have
w(OPTw (Hia )) ≤ w(OPT(G) ∩ Hia ). And by Observation 3, we have
|OPTw (Hia ) ∩ (Hia \ R)| ≤ |(OPT(G) ∩ Hia ) ∩ (Hia \ R)| = |OPT(G) ∩ (Hia \ R)|.
(6)
Figure 6: (a) A slice Hia : the bold cycle is a maximal circuit Ca in Ci and the nodes represent all
the contracted nodes. (b) The graph Mia obtained from Hia by uncontracting inner nodes of Hia .
(c) The subtree T (Hia ).
Note that for any slice Hia , we have E(Hia ) ⊆ E(Hi ) and (Hia ∩R) ⊆ (Hi ∩R) ⊆ (Df (i) ∪Df (i+1) ).
Since for distinct (vertex-disjoint) maximal circuits Ca and Cb in Ci , subgraphs Hia \ R and Hib \ R
are vertex-disjoint, we have the following equalities.
Hi \ R = ∪Ca ∈Ci (Hia \ R)   (7)
Si ∩ (Hi \ R) = ∪Ca ∈Ci (OPTw (Hia ) ∩ (Hia \ R))   (8)
Then
|Si ∩ (Hi \ R)| = |∪Ca ∈Ci (OPTw (Hia ) ∩ (Hia \ R))|   by (8)
≤ ΣCa ∈Ci |OPTw (Hia ) ∩ (Hia \ R)|
≤ ΣCa ∈Ci |OPT(G) ∩ (Hia \ R)|   by (6)
≤ |OPT(G) ∩ (Hi \ R)|.   by (7)
So we have |Si | = |Si ∩ (Hi \ R)| + |Si ∩ (Hi ∩ R)| ≤ |OPT(G) ∩ (Hi \ R)| + |Df (i) | + |Df (i+1) |.
Lemma 16. The union ∪i≥0,Ca ∈Ci Sol(Hia ) ∪ R is a feasible solution for G.
Proof. For any i ≥ 0 and any maximal circuit Ca ∈ Ci , let Mia be the graph obtained from Hia by
uncontracting all its inner contracted nodes. See Figure 6. By Lemma 4, there is at most one outer
node ria for each slice Hia .
Define a tree T based on all the slices: each slice is a node of T , and two nodes Hia and Hjb
are adjacent if they share any edge and |i − j| = 1. Root T at the slice H0a , which contains the
boundary of G. Let T (Hia ) be the subtree of T that roots at slice Hia . See Figure 6 as an example.
For each child Hi+1b of Hia , let Cb be the boundary of Hi+1b \ {ri+1b }. Then Cb is the maximal circuit
in Ci+1 that is shared by Hia and Hi+1b . Further, by the construction of Hia , graph Mi+1b \ {ri+1b } is
a subgraph of Mia .
We prove the lemma by induction on this tree T from leaves to root. Assume for each child Hi+1b
of Hia , there is a feasible solution S b for the graph Mi+1b such that S b = (∪H∈T (Hi+1b ) Sol(H)) ∪ (Mi+1b ∩ ∪j≥i+1 Df (j+1) ).
We prove that there is a feasible solution S a for Mia such that S a = (∪H∈T (Hia ) Sol(H)) ∪ (Mia ∩ ∪j≥i Df (j+1) ).
For the root H0a of T , we have M0a ∩ ∪j≥0 Df (j+1) ⊆ R, and then the lemma follows from the case i = 0.
The base case is that Hia is a leaf of T . When Hia is a leaf, there is no inner contracted node in
Hia and we have Mia = Hia . So Sol(Hia ) is a feasible solution for Mia .

Figure 7: The dashed subgraph is the boundary of G[X]. All the vertices in the dashed subgraph
are in level f (i + 1) + 1, and all its edges are in Ef (i+1)+1 .

Recall that Hia and Hi+1b only share edges of (Ef (i+1)−1,f (i+1) ∪ Ef (i+1) ∪ Ef (i+1),f (i+1)+1 ) ⊆
Df (i+1) and vertices of Vf (i+1) . Let x be any inner contracted node of Hia and X be the vertex set
of the connected component of G corresponding to x. We need the following claim.
Claim 17. If X ⊆ Mi+1b for some Hi+1b , then (S b ∪ Df (i+1) ) ∩ G[X] is connected.
Proof. By the construction of levels, all the vertices on the boundary of G[X] have level f (i +
1) + 1. See Figure 7. Then all the edges of ∂(G[X]) are in Ef (i+1)+1 ⊆ Df (i+1) . So subgraph
b
(S b ∪ Df (i+1) ) ∩ ∂(G[X]) is connected. Let u be any vertex in X and let v be any vertex in Mi+1
b , there exists
that has level f (i + 1). Then v is not in X. Since S b is a feasible solution for Mi+1
b
a path from u to v in S . This path must intersect ∂(G[X]) by planarity. So u and any vertex on
the boundary of G[X] are connected in (S b ∪ Df (i+1) ) ∩ G[X], giving the claim.
Let u and v be any two vertices of Mia . To prove the feasibility of S a , we prove u and v
are three-edge connected in S a . Let M = ∪Hi+1b is a child of Hia V (Mi+1b \ {ri+1b }) and Yia =
V (Hia ) \ {inner contracted nodes of Hia }. Then V (Mia ) = Yia ∪ M . Depending on the locations of
u and v, we have three cases.
Case 1: u, v ∈ Yia . Note that we could construct S a in the following way. Initially we have S ∗ =
Sol(Hia ) ∪ (Df (i+1) ∩ Hia ). For any inner contracted node x of Hia , let X be the vertex set
of its corresponding connected component in G. Then there exists a child Hi+1b of Hia such
that X ⊆ V (Mi+1b ), and we replace x with (S b ∪ Df (i+1) ) ∩ G[X] in S ∗ . We do this for all
inner contracted nodes of Hia . Finally we add some edges of Df (i+1) into the resulting graph
such that Df (i+1) ⊆ S ∗ . Then the resulting S ∗ is the same as S a by the definition of S a . We
prove that any pair of the remaining vertices in V (Hia ) are three-edge connected during the
construction. This includes the remaining inner contracted nodes of Hia during the process,
but after all the replacements, there are no such inner contracted nodes, proving the case.
By the definition of Sol(Hia ), any pair of vertices of Hia are three-edge connected in Sol(Hia ).
Assume after the first k replacements, any pair of the remaining vertices in V (Hia ) are three-edge connected in the resulting graph S ∗ . Let x be the next inner contracted node to be
replaced, X be the vertex set of its corresponding component and S 0 be the resulting graph
after replacing x. Let Hi+1b be the child of Hia such that X ⊆ V (Mi+1b ). Let C be the simple
cycle in ∂(Mi+1b \ {ri+1b }) that encloses X. Then all vertices of C have level f (i + 1) and are
shared by Hia and Hi+1b . Further, C ⊆ Hia ∩ Df (i+1) ⊆ S 0 . Let u and v be any two remaining
a
vertices of V (Hi ). There are three edge-disjoint u-to-x paths and three edge-disjoint v-to-x
paths in S ∗ , all of which must intersect C. So there exist three edge-disjoint u-to-X paths
and three edge-disjoint v-to-X paths in S 0 . Now we delete two edges in S 0 . If these two edges
are not both in C, then the vertices of C are still connected. Then one remaining u-to-C path
and one remaining v-to-C path together with the rest of C witness the connectivity between
u and v. If the two deleted edges are both in C, then there exist one u-to-X path and one
v-to-X path after the deletion. By Claim 17, subgraph (S b ∪ Df (i+1) ) ∩ G[X] is connected. So
all vertices of X are connected in S 0 . Then u and v are connected after the deletion. Finally,
after replacing all the inner contracted nodes, we only add edges of Df (i+1) into S ∗ , which
will not break three-edge-connectivity between any pair of vertices. This finishes the proof of
Case 1.
Case 2: u, v ∈ M. Let Mi+1b1 be the graph containing u and Mi+1b2 be the graph containing v. (The two
graphs could be identical.) Let Cu (resp. Cv ) be the simple cycle in ∂(Mi+1b1 \ {ri+1b1 }) (resp.
∂(Mi+1b2 \ {ri+1b2 })) that encloses u (resp. v). (The two cycles Cu and Cv could be identical.)
Since S b1 is three-edge connected, there are three edge-disjoint paths from u to some vertex
of Cu in S b1 . All these three paths must intersect Cu , so there are three edge-disjoint paths
from u to Cu in (S b1 \ {ri+1b1 }) ⊆ S a . Similarly, there are three edge-disjoint paths from v to
Cv in (S b2 \ {ri+1b2 }) ⊆ S a . Now we delete any two edges in S a . After the deletion, there exist
one u-to-w1 path and one v-to-w2 path where w1 ∈ Cu and w2 ∈ Cv . Since all vertices in
V (Cu ∪ Cv ) have level f (i + 1) and are in Yia , they are three-edge connected in S a by Case
1. This means there exists a path from w1 to w2 after the deletion. Therefore, u and v are
connected after deleting any two edges in S a , giving the three-edge-connectivity.
Case 3: u ∈ Yia and v ∈ M. Let Mi+1b be the graph containing v. Then there is a vertex w in
Yia ∩ (Mi+1b \ {ri+1b }). By Case 1, vertices u and w are three-edge connected, and by Case 2,
vertices v and w are three-edge connected. Then vertices u and v are three-edge connected
by the transitivity of three-edge-connectivity.
This completes the proof of Lemma 16.
Proof of Theorem 1. We first prove correctness of our algorithm, and then prove its running time.
By Lemma 16, S = ∪i≥0,Ca ∈Ci OPTw (Hia ) ∪ R is a feasible solution. Thus
|S| ≤ |∪i≥0,Ca ∈Ci OPTw (Hia )| + |R|
≤ Σi≥0 |∪Ca ∈Ci OPTw (Hia )| + |R|
≤ Σi≥0 (|OPT(G) ∩ (Hi \ R)| + |Df (i) | + |Df (i+1) |) + |R|   by Lemma 15
≤ Σi≥0 |OPT(G) ∩ (Hi \ R)| + |R| + |R| + |R|
≤ |OPT(G)| + 3|R|   by (2)
≤ |OPT(G)| + 36/k · |OPT(G)|   by (5)
≤ (1 + 36/k)|OPT(G)|.
Let k = 36/ε, and then we obtain |S| ≤ (1 + ε)|OPT(G)|.
Let n = |V (G)| be the number of vertices of graph G. We could find R and construct all slices
in O(n) time by Lemma 8. By Lemma 12, each slice has branchwidth O(k). So by Theorem 3,
we could solve the minimum-weight 3-ECSS on each slice in linear time for fixed k. Based on
those optimal solutions for all slices, we could construct our solution in O(n) time. Therefore, our
algorithm runs in O(n) time.
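The combining step used in this proof amounts to the following union (a schematic sketch with container types of our choosing, not the authors' implementation):

    def assemble_3ecss_solution(slice_solutions, R):
        # Union of the per-slice optima with all shared double-layer edges R;
        # Lemma 16 gives feasibility and the computation above bounds the size
        # by (1 + eps)|OPT(G)|.
        S = set(R)
        for solution in slice_solutions:
            S |= set(solution)
        return S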
4 PTAS for 3-VCSS
In this section, we prove Theorem 2. W.l.o.g. assume G is simple. Then G is our spanner. Let
OPT(G) be an optimal solution for G. Since G is simple and planar, we have |G| ≤ 3|V (G)|. Then
by (3) we have |G| ≤ 2|OPT(G)|. In this section, we only consider 3VC slices. So in the following,
we simplify 3VC slice to slice. We first construct slices from G. By (1), we have the following
|R| ≤ 2/k · |G| ≤ 4/k · |OPT(G)|.
(9)
Similar to 3-ECSS, we want to solve a minimum-weight 3-VCSS problem on each slice. But
before defining the weights for this problem on each slice, we first need to show that any slice is triconnected. The following lemma was proved by Vo [27]; we provide a proof for completeness.
Lemma 18. ([27]) Let C be a simple cycle of G that separates G \ C into two parts: A and B. Let
H be any connected component of A. If G is triconnected, then G/H, the graph obtained from G
by contracting H, is triconnected.
Proof. Let x be the contracted node of G/H. Then x and any other vertex of G/H are triconnected
since G is triconnected. Let u and v be any two vertices of G/H distinct from x. To prove the
lemma, we show u and v are triconnected. Since x and u are triconnected, there are three vertex-disjoint paths between u and x. Note that all the three paths must intersect cycle C since V (C)
form a cut for x and all the other vertices in G/H. Similarly, there are three vertex-disjoint paths
between v and x, all of which intersect cycle C. Now we delete any two vertices different from u
and v in G/H. If the two deleted vertices are both in C, then there exist one u-to-x path and one
v-to-x path after the deletion, which witness the connectivity between u and v. If the two deleted
vertices are not both in C, the remaining vertices in C are connected and then the remaining u-to-C
path and the remaining v-to-C path together with the rest of edges in C witness the connectivity
between u and v. So u and v are triconnected.
By the same proof, we can obtain the following lemma.
Lemma 19. Let C be a simple cycle of G. Let u and v be two vertices of G \ C whose neighbors
in G are all in C. Then if G is triconnected, the graph obtained from G by identifying u and v is
triconnected.
Let Ci be the set of all simple cycles in ∂(G[Vf (i) ]). Then we have the following lemma.
Lemma 20. For any i ≥ 0 and any simple cycle Ca ∈ Ci , the slice Hia is triconnected.
Proof. Let Yia be the set of vertices of Hia that are not contracted nodes. We could obtain Hia by
contracting each connected component of G \ Yia into a node. Each time we contract a connected
component H of G \ Yia , there is a simple cycle C that separates H and other vertices: if H is
outside of Ca , then C = Ca ; otherwise C is some simple cycle in Ci+1 that encloses H. Then
by Lemma 18 the resulting graph is still triconnected after each contraction. Therefore, the final
resulting graph Hia is triconnected.
Now we define the edge-weight function w on a slice Hia : we assign weight 0 to edges in
Hia ∩ (Df (i) ∪ Df (i+1) ) and weight 1 to other edges. Then we solve the minimum-weight 3-VCSS
problem on slice Hia by Theorem 3. Let Sol(Hia ) be a feasible solution for the minimum-weight
3-VCSS problem on Hia . Then it is also a feasible solution for 3-ECSS on Hia . Let OPTw (Hia ) be
an optimal solution for this problem on Hia . Then we can prove the following two lemmas, whose
proofs follow the same outlines of the proofs of Lemmas 15 and 16 respectively.
Lemma 21. For any i ≥ 0, let Si = ∪Ca ∈Ci OPTw (Hia ). Then we can bound the number of edges
in Si by the following inequality:
|Si | ≤ |OPT(G) ∩ (Hi \ R)| + |Df (i) | + |Df (i+1) |.
Proof. We first show OPT(G)∩Hia is a feasible solution for minimum-weighted 3-VCSS problem on
any slice Hia . Let Yia be the set of vertices of Hia that are not contracted nodes. We first contract
each component of OPT(G) \ Yia into a node. The resulting graph after each contraction is still
triconnected by Lemma 18. After all the contractions, we identify any two contracted nodes x1 and
x2 if their corresponding components in OPT(G) are connected in G \ Yia . This implies there exists
a simple cycle C in Ci or Ci+1 such that all neighbors of x1 and x2 are in C. So by Lemma 19 the
resulting graph after each identification is also triconnected. Finally we delete parallel edges and
self-loops if possible. After identifying all possible nodes, the resulting graph has the same vertex
set as Hia and is triconnected. Since the resulting graph is a subgraph of OPT(G) ∩ Hia , we know
OPT(G) ∩ Hia is a feasible solution for minimum-weighted 3-VCSS problem on Hia .
Note that for any slice Hia , we have (Hia ∩ R) ⊆ (Hi ∩ R) ⊆ (Df (i) ∪ Df (i+1) ). By the optimality
of OPTw (Hia ), we have w(OPTw (Hia )) ≤ w(OPT(G) ∩ Hia ). Since all the nonzero-weighted edges
are in Hia \ R, Observation 3 still holds. Then we have
|OPTw (Hia ) ∩ (Hia \ R)| ≤ |(OPT(G) ∩ Hia ) ∩ (Hia \ R)| = |OPT(G) ∩ (Hia \ R)|.
(10)
Since for distinct (edge-disjoint) simple cycles Ca and Cb in Ci , subgraphs Hia \ R and Hib \ R are
vertex-disjoint, we have the following equalities.
Hi \ R = ∪Ca ∈Ci (Hia \ R)   (11)
Si ∩ (Hi \ R) = ∪Ca ∈Ci (OPTw (Hia ) ∩ (Hia \ R))   (12)
Then
|Si ∩ (Hi \ R)| = |∪Ca ∈Ci (OPTw (Hia ) ∩ (Hia \ R))|   by (12)
≤ ΣCa ∈Ci |OPTw (Hia ) ∩ (Hia \ R)|
≤ ΣCa ∈Ci |OPT(G) ∩ (Hia \ R)|   by (10)
≤ |OPT(G) ∩ (Hi \ R)|.   by (11)
So we have |Si | = |Si ∩ (Hi \ R)| + |Si ∩ (Hi ∩ R)| ≤ |OPT(G) ∩ (Hi \ R)| + |Df (i) | + |Df (i+1) |.
Lemma 22. The union ∪i≥0,Ca ∈Ci Sol(Hia ) ∪ R is a feasible solution for G.
Proof. For any i ≥ 0 and any simple cycle Ca ∈ Ci , let Mia be the graph obtained from slice Hia
by uncontracting all the inner contracted nodes of Hia . By Lemma 4, there is at most one outer
contracted node ria for any slice Hia .
Define a tree T based on all the slices: each slice is a node of T , and two nodes Hia and Hjb
are adjacent if they share any edge and |i − j| = 1. Root T at the slice H0a , which contains the
boundary of G. Let T (Hia ) be the subtree of T that roots at slice Hia . For each child Hi+1b of Hia ,
let Cb be the simple cycle in Ci+1 that is shared by Hia and Hi+1b . Then Cb is the boundary of
Hi+1b \ {ri+1b }.
We prove the lemma by induction on this tree from leaves to root. Assume for each child Hi+1b
of Hia , there is a feasible solution S b for the graph Mi+1b such that S b = (∪H∈T (Hi+1b ) Sol(H)) ∪
(Mi+1b ∩ ∪j≥i+1 Df (j+1) ). We prove that there is a feasible solution S a for Mia such that
S a = (∪H∈T (Hia ) Sol(H)) ∪ (Mia ∩ ∪j≥i Df (j+1) ). For the root H0a of T , we have
M0a ∩ ∪j≥0 Df (j+1) ⊆ R, and then the lemma follows from the case i = 0.
The base case is that Hia is a leaf of T . When Hia is a leaf, there is no inner contracted node in
Hia and we have Mia = Hia . So Sol(Hia ) is a feasible solution for Mia .
Hi and we have Mia = Hia . So Sol(Hia ) is a feasible solution for Mia .
We first need a claim the same as Claim 17. Note that any inner contracted node of Hia is
enclosed by some cycle Cb . Let x be any inner contracted node of Hia that is enclosed by Cb , and X
be the vertex set of the connected component of G corresponding to x. Then we have the following
claim, whose proof is the same as that of Claim 17.
b
b , then (S b ∪ D
Claim 23. If X ⊆ Mi+1
for some Hi+1
f (i+1) ) ∩ G[X] is connected.
Now we ready to prove S a is a feasible solution for Mia . That is, we prove it is triconnected.
Let u and v be
any two vertices of Mia . Let Yia = V (Hia) \ {inner contracted nodes of Hia }. Since
S
b
b
V (Mia ) = Yia ∪
Hb
is a child of H a V (Mi+1 \ {ri+1 }) , we have four cases.
i+1
i
Case 1: u, v ∈ Yia .
node of Hia , by
For any contracted component X in G that corresponds to an inner contracted
b .
Claim 23 all vertices in X are connected in (S b ∪ Df (i+1) ) ∩ G[X] if X ⊆ Mi+1
a
b
a
Then all vertices of X are connected in S ∩ G[X], since for any child Hi+1 of Hi we have
(S b ∪ Df (i+1) ) ⊆ S a . By the triconnectivity of Sol(Hia ), there are three vertex-disjoint paths
between u and v in Sol(Hia ). Since each inner contracted node of Hia could be in only
one path witnessing connectivity, the three vertex-disjoint u-to-v paths in Sol(Hia ) could be
transferred into another three vertex-disjoint u-to-v paths in S a by replacing each contracted
15
inner contracted node x with a path in the corresponding component X. So u and v are
triconnected in S a .
Case 2: u, v ∈ Mbi+1 \ {rbi+1 }. Since V (Cb ) is a cut for vertices enclosed by Cb and those not
enclosed by Cb , by the triconnectivity of G we have |V (Cb )| ≥ 3. By inductive hypothesis,
b , so there are three vertex-disjoint u-to-r b
b
S b is a feasible solution for Mi+1
i+1 paths in S .
All these three paths must intersect Cb by planarity, so there are three vertex-disjoint u-tob }) ⊆ S a . Similarly, there are three vertex-disjoint v-to-C paths in
Cb paths in (S b \ {ri+1
b
b }) ⊆ S a . If we delete any two vertices in S a , then there exist at least one u-to-w
(S b \ {ri+1
1
path and one v-to-w2 path for some vertices w1 , w2 ∈ Cb . Since all vertices in Cb have level
f (i + 1), they are in Yia . Then by Case 1, vertices w1 and w2 are triconnected in S a , so they
are connected after deleting any two vertices. Therefore, u and v are also connected after the
deletion.
Case 3: $u \in Y_i^a$ and $v \in M_{i+1}^b \setminus \{r_{i+1}^b\}$. If one of $u$ and $v$ is in $C_b$, they are triconnected by Case 1 or 2. So w.l.o.g. we assume $u$ is not enclosed by $C_b$ and $v$ is strictly enclosed by $C_b$. Since $G$ is triconnected, we have $|V(C_b)| \ge 3$. After deleting any two vertices in $S^a$, there still exists at least one vertex $w$ in $C_b$. By Case 1, vertices $u$ and $w$ are connected after the deletion, and by Case 2, vertices $v$ and $w$ are connected after the deletion. So $u$ and $v$ are connected after the deletion.
Case 4: $u \in M_{i+1}^{b_1} \setminus \{r_{i+1}^{b_1}\}$ and $v \in M_{i+1}^{b_2} \setminus \{r_{i+1}^{b_2}\}$. W.l.o.g. assume $u$ is strictly enclosed by $C_{b_1}$ and $v$ is strictly enclosed by $C_{b_2}$; otherwise, by Case 3 they are triconnected. Since $G$ is triconnected, we have $|V(C_{b_1})| \ge 3$. After deleting any two vertices in $S^a$, there exists a vertex $w \in C_{b_1}$. By Case 2, vertices $u$ and $w$ are connected after the deletion, and by Case 3, vertices $v$ and $w$ are connected after the deletion. So $u$ and $v$ are connected after the deletion.
This completes the proof of Lemma 22.
Proof of Theorem 2. We first prove the correctness, and then prove the running time. Let the union $S = \bigcup_{i \ge 0,\, C_a \in \mathcal{C}_i} OPT_w(H_i^a) \cup R$ be our solution. By Lemma 22, the solution $S$ is feasible for $G$. Then we have
$$
\begin{aligned}
|S| &= \Big|\bigcup_{i \ge 0,\, C_a \in \mathcal{C}_i} OPT_w(H_i^a) \cup R\Big| \\
    &= \Big|\bigcup_{i \ge 0,\, C_a \in \mathcal{C}_i} OPT_w(H_i^a)\Big| + |R| \\
    &\le \sum_{i \ge 0} \Big|\bigcup_{C_a \in \mathcal{C}_i} OPT_w(H_i^a)\Big| + |R| \\
    &\le \sum_{i \ge 0} \big(|OPT(G) \cap (H_i \setminus R)| + |D_{f(i)}| + |D_{f(i+1)}|\big) + |R| && \text{by Lemma 21} \\
    &\le \sum_{i \ge 0} |OPT(G) \cap (H_i \setminus R)| + |R| + |R| + |R| \\
    &\le |OPT(G)| + 3|R| && \text{by (2)} \\
    &\le (1 + 12/k)\,|OPT(G)|. && \text{by (9)}
\end{aligned}
$$
We set $k = 12/\epsilon$ and then we have $|S| \le (1 + \epsilon)\,|OPT(G)|$.
Let $n = |V(G)|$ be the number of vertices in graph $G$. We could find the edge set $R$ in linear time. By Lemma 9 we could construct all slices in $O(n)$ time, so the slicing step runs in linear time. By Lemma 12, the branchwidth of each slice is $O(k) = O(1/\epsilon)$. Therefore, we could solve the minimum-weight 3-VCSS problem on each slice in linear time by Theorem 3. Based on the optimal solutions for all the slices, we could construct our final solution $S$ in linear time. So our algorithm runs in linear time.
5 Dynamic Programming for Minimum-Weight 3-ECSS on graphs with bounded branchwidth
In this section, we give a dynamic program to compute the optimal solution of the minimum-weight 3-ECSS problem on a graph $G$ with bounded branchwidth $w$. This will prove Theorem 3 for the minimum-weight 3-ECSS problem. Our algorithm is inspired by the work of Czumaj and Lingas [9, 10]. Note that $G$ need not be planar.
Given a branch decomposition of $G$, we root its decomposition tree $T$ at an arbitrary leaf. For any edge $\alpha$ in $T$, let $L_\alpha$ be the separator corresponding to it, and $E_\alpha$ be the subset of $E(G)$ mapped to the leaves in the subtree of $T \setminus \{\alpha\}$ that does not include the root of $T$. Let $H$ be a spanning subgraph of $G[E_\alpha]$. We adapt some definitions of Czumaj and Lingas [9, 10]. A separator completion of $\alpha$ is a multiset of edges between vertices of $L_\alpha$, each of which may appear up to 3 times.
Definition 1. A configuration of a vertex $v$ of $H$ for an edge $\alpha$ of $T$ is a pair $(A, B)$, where $A$ is a tuple $(a_1, a_2, \ldots, a_{|L_\alpha|})$ representing that there are $a_i$ edge-disjoint paths from $v$ to the $i$th vertex of $L_\alpha$ in $H$, and $B$ is a set of tuples $(x_i, y_i, b_i)$ representing that there are $b_i$ edge-disjoint paths between the vertices $x_i$ and $y_i$ of $L_\alpha$ in $H$. (We only need those configurations where $a_i \le 3$ and $b_i \le 3$ for all $i$.) All the $\sum_{i=1}^{|L_\alpha|} a_i + \sum_i b_i$ paths in a configuration should be mutually edge-disjoint in $H$.
Definition 2. For any pair of vertices $u$ and $v$ in $H$, let $Com_H(u, v)$ be the set of separator completions of $\alpha$ each of which augments $H$ to a graph where $u$ and $v$ are three-edge connected. For each vertex $v$ in $H$, let $Path_H(v)$ be a set of configurations of $v$ for $\alpha$. Let $Path_H$ be the set of all the non-empty $B$ in which all tuples can be satisfied in $H$. Let $C_H$ be the set consisting of one value in each $Com_H(u, v)$ for all pairs of vertices $u$ and $v$ in $H$, and $P_H$ be the set consisting of one value in each $Path_H(v)$ for all vertices $v$ in $H$. We call the tuple $(C_H, P_H, Path_H)$ the connectivity characteristic of $H$, and denote it by $Char(H)$.
Note that $|L_\alpha| \le w$ for any edge $\alpha$. Subgraph $H$ may correspond to multiple $C_H$ and $P_H$, so $H$ may have multiple connectivity characteristics. Further, each value in $P_H$ represents at least one vertex. For any edge $\alpha$, there are at most $4^{O(w^2)}$ distinct separator completions ($O(w^2)$ pairs of vertices, each of which can be connected by at most 3 parallel edges) and at most $2^{4^{O(w^2)}}$ distinct sets $C_H$ of separator completions. For any edge $\alpha$, there are at most $4^{O(w^2)}$ different configurations for any vertex in $H$, since the number of different sets $A$ is at most $4^w$ and the number of different sets $B$ is at most $4^{O(w^2)}$ (the same as the number of separator completions). So there are at most $2^{4^{O(w^2)}}$ different sets of configurations $P_H$, and at most $2^{4^{O(w^2)}}$ different sets $B$. Therefore, there are at most $2^{4^{O(w^2)}}$ distinct connectivity characteristics for any edge $\alpha$.
Definition 3. A configuration $(A, B)$ of vertex $v$ for $\alpha$ is connecting if the inequality $\sum_{i=1}^{|L_\alpha|} a_i \ge 3$ holds, where $a_i$ is the $i$th coordinate in $A$. That is, there are enough edge-disjoint paths from $v$ to the corresponding separator $L_\alpha$, which can connect $v$ and vertices outside $L_\alpha$. $Char(H)$ is connecting if all configurations in its $P_H$ set are connecting. Subgraph $H$ is connecting if at least one of its $Char(H)$ is connecting. In the following, we only consider connecting subgraphs and their connecting connectivity characteristics.
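To make the bookkeeping in Definitions 1-3 concrete, the following sketch shows one possible in-memory representation of a configuration and a connectivity characteristic; the class and field names are illustrative assumptions, not taken from the paper.

```python
# Hedged sketch of the DP state from Definitions 1-3 (names are assumptions).
from dataclasses import dataclass
from typing import FrozenSet, Tuple

@dataclass(frozen=True)
class Configuration:
    # A[i]: number of edge-disjoint paths from v to the i-th separator vertex (capped at 3).
    A: Tuple[int, ...]
    # B: tuples (x, y, b) meaning b edge-disjoint paths between separator vertices x and y.
    B: FrozenSet[Tuple[int, int, int]]

    def is_connecting(self) -> bool:
        # Definition 3: at least 3 edge-disjoint paths from v into the separator.
        return sum(self.A) >= 3

@dataclass(frozen=True)
class Characteristic:
    # C_H: one separator completion per pair of vertices of H; each completion is
    # represented here as a frozenset of (x, y, multiplicity) separator edges.
    C_H: FrozenSet[FrozenSet[Tuple[int, int, int]]]
    # P_H: one configuration per vertex of H.
    P_H: FrozenSet[Configuration]
    # Path_H: the sets B whose demands are simultaneously realizable in H.
    Path_H: FrozenSet[FrozenSet[Tuple[int, int, int]]]

    def is_connecting(self) -> bool:
        return all(cfg.is_connecting() for cfg in self.P_H)
```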
In the following, we need as a subroutine an algorithm to solve the following problem: given a set of demands $(x_i, y_i, b_i)$ and a multigraph, we want to decide whether there exist $b_i$ edge-disjoint paths between vertices $x_i$ and $y_i$ in the graph such that all the $\sum_i b_i$ paths are mutually edge-disjoint. Although we do not have a polynomial-time algorithm for this problem, we only need to solve it on graphs with $O(w)$ vertices, $O(w^2)$ edges and $O(w^2)$ demands. So even an exponential-time algorithm is acceptable for our purpose here. Let ALG be an algorithm for this problem, whose running time is bounded by a function $f(w)$, which may be exponential in $w$.
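Since an exponential-time subroutine suffices here, even a brute-force search works. The sketch below is one possible stand-in for ALG: it branches over simple paths for each unit demand and deletes their edges (for edge-disjoint path packings one may restrict attention to simple paths). The function name and the use of networkx are assumptions for illustration only.

```python
# Hedged brute-force version of the subroutine ALG described above.
# Exponential time, acceptable because the instances have only O(w) vertices.
import networkx as nx

def satisfiable(G, demands):
    """Return True if demands (x, y, b) admit mutually edge-disjoint paths in multigraph G."""
    # Expand each demand (x, y, b) into b unit demands (x, y).
    units = [(x, y) for (x, y, b) in demands for _ in range(b)]

    def search(graph, remaining):
        if not remaining:
            return True
        x, y = remaining[0]
        # Branch over simple x-to-y paths; remove their edges and recurse.
        for path in nx.all_simple_paths(graph, x, y):
            H = graph.copy()
            for u, v in zip(path, path[1:]):
                H.remove_edge(u, v)   # removes one parallel (u, v) edge
            if search(H, remaining[1:]):
                return True
        return False

    return search(G, units)

# Tiny usage example: a triangle with doubled edges supports two edge-disjoint
# 1-to-3 paths together with one 2-to-3 path.
G = nx.MultiGraph()
G.add_edges_from([(1, 2), (1, 2), (2, 3), (2, 3), (1, 3), (1, 3)])
print(satisfiable(G, [(1, 3, 2), (2, 3, 1)]))  # expected: True
```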
For an edge α in the decomposition tree T , let β and γ be its two child edges. Let H1 (H2 ) be
a spanning subgraph of G[Eβ ] (G[Eγ ]). Let H = H1 ∪ H2 . Then we have the following lemma.
Lemma 24. For any pair of $Char(H_1)$ and $Char(H_2)$, all the possible $Char(H)$ that could be obtained from $Char(H_1)$ and $Char(H_2)$ can be computed in $O(4^{w^2} f(w) + 4^{w^2 4^{w^2}})$ time.
Proof. We compute all the possible sets for the three components of Char(H).
Compute all possible $C_H$. Each $C_H$ contains two parts: the first part covers all pairs of vertices in the same $H_i$ for $i = 1, 2$, and the second part covers all pairs of vertices from distinct subgraphs.
For the first part, we generalize each value $C \in C_{H_i}$ for $i = 1, 2$ into a possible set $X_C$. Notice that each separator completion can be represented by a set of demands $(x, y, b)$ where $x$ and $y$ are in the separator. For a candidate separator completion $C'$ of $\alpha$, we combine $C'$ with each $B \in Path_{H_{3-i}}$ to construct a graph $H'$ and define the demand set the same as $C$. By running ALG on this instance, we can check if $C'$ is a legal generalization for $C$. This could be computed in $4^{O(w^2)} w^2 + 4^{O(w^2)} f(w)$ time for each $C$. All the legal generalizations for $C$ form $X_C$.
Now we compute the second part. Let $(A_1, B_1) \in P_{H_1}$ and $(A_2, B_2) \in P_{H_2}$ be the configurations for some pair of vertices $u \in H_1$ and $v \in H_2$, respectively. We will compute possible $Com_H(u, v)$. We first construct a graph $H'$ on $L_\beta \cup L_\gamma \cup \{u, v\}$ from the two configurations: add $i$ parallel edges between two vertices if there are $i$ paths between them represented in the configurations. Then we check for each candidate separator completion $C'$ if $u$ and $v$ are three-edge connected in $H' \cup C'$. We need $O(w^3)$ time for this checking if we use Orlin's max-flow algorithm [23]. All those $C'$ that are capable of providing three-edge-connectivity with $H'$ form $Com_H(u, v)$. This can be computed in $4^{O(w^2)} w^3$ time for each pair of configurations.
A possible $C_H$ consists of each value in $X_C$ for every $C \in C_{H_i}$ for $i = 1, 2$ and each value in $Com_H(u, v)$ for all pairs of configurations of $P_{H_1}$ and $P_{H_2}$. To compute all the sets, we need at most $4^{O(w^2)} w^3 + 4^{O(w^2)} f(w)$ time. There are at most $4^{O(w^2)}$ sets and each may contain at most $4^{O(w^2)}$ values. Therefore, to generate all the possible $C_H$ from those sets, we need at most $4^{w^2 4^{O(w^2)}}$ time.
Compute all possible $P_H$. We generalize each configuration $(A, B)$ of $v$ in $P_{H_i}$ ($i = 1, 2$) into a set $Y_v$ of possible configurations. For each set $B'$ in $Path_{H_{3-i}}$, we construct a graph $H'$ from $A$, $B$ and $B'$ on vertex set $L_\beta \cup L_\gamma \cup \{v\}$: if there are $b$ disjoint paths between a pair of vertices represented in $A$, $B$ or $B'$, we add $b$ parallel edges between the same pair of vertices in $H'$, taking $O(w^2)$ time. For a candidate value $(A^*, B^*)$ for $\alpha$, we define a set of demands according to $A^*$ and $B^*$ and run ALG on all the possible $H'$ we construct for sets in $Path_{H_{3-i}}$. If there exists one such graph that satisfies all the demands, then we add this candidate value into $Y_v$. We can therefore compute each set $Y_v$ in $4^{O(w^2)} w^2 + 4^{O(w^2)} f(w)$ time. A possible $P_H$ consists of each value in $Y_v$ for all $v \in V(H)$. There are at most $4^{O(w^2)}$ such sets and each may contain at most $4^{O(w^2)}$ values. So we can generate all possible $P_H$ from those sets in $4^{w^2 4^{O(w^2)}}$ time.
Compute $Path_H$. For each pair of $B_1 \in Path_{H_1}$ and $B_2 \in Path_{H_2}$, we construct a graph $H'$ on vertex set $L_\beta \cup L_\gamma$: if two vertices are connected by $b$ disjoint paths, we add $b$ parallel edges between those vertices in $H'$. Since each candidate $B'$ for $\alpha$ can be represented by a set of demands, we only need to run ALG on all possible $H'$ to check if $B'$ can be satisfied. We add all satisfied candidates $B'$ into $Path_H$. This can be computed in $4^{O(w^2)} w^2 + 4^{O(w^2)} f(w)$ time.
Therefore, the total running time is $O(4^{w^2} f(w) + 4^{w^2 4^{w^2}})$. For each component we enumerate all possible cases, and the correctness follows.
Our dynamic programming is guided by the decomposition tree T from leaves to root. For each
edge α, our dynamic programming table is indexed by all the possible connectivity characteristics. Each entry indexed by the connectivity characteristic Char in the table is the weight of the
minimum-weight spanning subgraph of G[Eα ] that has Char as its connectivity characteristic.
Base case. For each leaf edge $uv$ of $T$, the only subgraph $H$ is the edge $uv$ and the separator only contains the endpoints $u$ and $v$. $Com_H(u, v)$ contains the multiset in which the edge $uv$ appears twice. $Path_H(u)$ contains two configurations: $((3, 0), \{(u, v, 1)\})$ and $((3, 1), \emptyset)$, and $Path_H(v)$ contains two configurations: $((0, 3), \{(u, v, 1)\})$ and $((1, 3), \emptyset)$. $Path_H$ contains one set: $\{(u, v, 1)\}$.
For each non-leaf edge $\alpha$ in $T$, we combine every pair of connectivity characteristics from its two child edges to fill in the dynamic programming table for $\alpha$. The root can be seen as a base case, and we can combine it with the computed results. The final result will be the entry indexed by $(\emptyset, \emptyset, \emptyset)$ in the table of the root. Let $m = |E(G)|$. Then the size of the decomposition tree $T$ is $O(m)$. By Lemma 24, we need $O(4^{w^2} f(w) + 4^{w^2 4^{w^2}})$ time to combine each pair of connectivity characteristics. Since there are at most $2^{4^{O(w^2)}}$ connectivity characteristics for each node, the total running time will be $O(2^{4^{O(w^2)}} f(w)\, m + 4^{w^2 4^{w^2}} m)$. Since the branchwidth $w$ of $G$ is bounded, the running time will be $O(|E(G)|)$.
Correctness The separator completions guarantee the connectivity for the vertices in H, and
the connecting configurations enumerate all the possible ways to connect vertices in H and vertices
of V (G) \ V (H). So the connectivity requirement is satisfied. The correctness of the procedure
follows from Lemma 24.
Acknowledgements
We thank Glencora Borradaile and Hung Le for helpful discussions.
References
[1] B. Baker. Approximation algorithms for NP-complete problems on planar graphs. Journal of
the ACM, 41(1):153–180, 1994.
[2] M. Bateni, M. Hajiaghayi, and D. Marx. Approximation schemes for Steiner forest on planar
graphs and graphs of bounded treewidth. J. ACM, 58(5):21, 2011.
[3] A. Berger and M. Grigni. Minimum weight 2-edge-connected spanning subgraphs in planar
graphs. In Proceedings of the 34th International Colloquium on Automata, Languages and
Programming, volume 4596 of Lecture Notes in Computer Science, pages 90–101, 2007.
[4] G. Borradaile, C. Kenyon-Mathieu, and P. Klein. A polynomial-time approximation scheme
for Steiner tree in planar graphs. In Proceedings of the 18th Annual ACM-SIAM Symposium
on Discrete Algorithms, volume 7, pages 1285–1294, 2007.
[5] G. Borradaile and P. Klein. The two-edge connectivity survivable network problem in planar
graphs. In Proceedings of the 35th International Colloquium on Automata, Languages and
Programming, pages 485–501, 2008.
[6] G. Borradaile, P. Klein, and C. Mathieu. An O(n log n) approximation scheme for Steiner tree
in planar graphs. ACM Transactions on Algorithms, 5(3):1–31, 2009.
[7] J. Cheriyan and R. Thurimella. Approximating minimum-size k-connected spanning subgraphs
via matching. SIAM Journal on Computing, 30(2):528–560, 2000.
[8] A. Czumaj, M. Grigni, P. Sissokho, and H. Zhao. Approximation schemes for minimum 2-edge-connected and biconnected subgraphs in planar graphs. In Proceedings of the fifteenth
Annual ACM-SIAM Symposium on Discrete Algorithms, pages 496–505. Society for Industrial
and Applied Mathematics, 2004.
[9] A. Czumaj and A. Lingas. A polynomial time approximation scheme for Euclidean minimum
cost k-connectivity. In Automata, Languages and Programming, pages 682–694. Springer, 1998.
[10] A. Czumaj and A. Lingas. On approximability of the minimum cost k-connected spanning
subgraph problem. In Proceedings of the 10th Annual ACM-SIAM Symposium on Discrete
Algorithms, pages 281–290, 1999.
[11] H. N. Gabow and S. R. Gallagher. Iterated rounding algorithms for the smallest k-edge
connected spanning subgraph. SIAM Journal on Computing, 41(1):61–103, 2012.
[12] M. R. Garey and D. S. Johnson. Computers and Intractability: A Guide to the Theory of
NP-Completeness. WH Freeman & Co., 1979.
[13] T. F. Gonzalez. Handbook of Approximation Algorithms and Metaheuristics. CRC Press, 2007.
[14] P. Gubbala and B. Raghavachari. Approximation algorithms for the minimum cardinality
two-connected spanning subgraph problem. In Integer Programming and Combinatorial Optimization, pages 422–436. Springer, 2005.
[15] P. Gubbala and B. Raghavachari. A 4/3-approximation algorithm for minimum 3-edge-connectivity. In Algorithms and Data Structures, pages 39–51. Springer, 2007.
[16] J. Hopcroft and R. Tarjan. Algorithm 447: Efficient algorithms for graph manipulation.
Commun. ACM, 16(6):372–378, June 1973.
[17] R. Jothi, B. Raghavachari, and S. Varadarajan. A 5/4-approximation algorithm for minimum
2-edge-connectivity. In Proceedings of the 14th Annual ACM-SIAM Symposium on Discrete
Algorithms, pages 725–734, 2003.
[18] P. Klein. A linear-time approximation scheme for TSP in undirected planar graphs with
edge-weights. SIAM Journal on Computing, 37(6):1926–1952, 2008.
[19] P. Klein and S. Mozes. Optimization algorithms for planar graphs. In preparation, manuscript
at http://planarity.org.
[20] R. Lipton and R. Tarjan. A separator theorem for planar graphs. SIAM Journal on Applied
Mathematics, 36(2):177–189, 1979.
[21] K. Mehlhorn, A. Neumann, and J. M. Schmidt. Certifying 3-edge-connectivity. In Graph-Theoretic Concepts in Computer Science, pages 358–369. Springer, 2013.
[22] H. Nagamochi and T. Ibaraki. Computing edge-connectivity in multigraphs and capacitated
graphs. SIAM Journal on Discrete Mathematics, 5(1):54–66, 1992.
[23] J. B. Orlin. Max flows in O(nm) time, or better. In Proceedings of the forty-fifth Annual ACM
Symposium on Theory of Computing, pages 765–774. ACM, 2013.
[24] J. M. Schmidt. Contractions, removals, and certifying 3-connectivity in linear time. SIAM
Journal on Computing, 42(2):494–535, 2013.
[25] P. Seymour and R. Thomas. Call routing and the ratcatcher. Combinatorica, 14(2):217–241,
1994.
[26] R. Tarjan. A note on finding the bridges of a graph. Information Processing Letters, 1974.
[27] K.-P. Vo. Finding triconnected components of graphs. Linear and Multilinear Algebra, 13(2):143–165, 1983.
1 Guiding Designs of Self-Organizing Swarms: Interactive and Automated Approaches
arXiv:1308.3400v1 [] 14 Aug 2013
Hiroki Sayama
Collective Dynamics of Complex Systems Research Group
Binghamton University, State University of New York
Binghamton, NY 13902-6000, USA
[email protected]
Summary. Self-organization of heterogeneous particle swarms is rich in its dynamics but hard
to design in a traditional top-down manner, especially when many types of kinetically distinct
particles are involved. In this chapter, we discuss how we have been addressing this problem
by (1) utilizing and enhancing interactive evolutionary design methods and (2) realizing spontaneous evolution of self-organizing swarms within an artificial ecosystem. 1
1.1 Introduction
Engineering design has traditionally been a top-down process in which a designer
shapes, arranges and combines various components in a specific, precise, hierarchical
manner, to create an artifact that will behave deterministically in an intended way [Minai et al., 2006, Pahl et al., 2007]. However, this process does not apply to complex
systems that show self-organization, adaptation and emergence. Complex systems consist of a massive amount of simpler components that are coupled locally and loosely,
whose behaviors at macroscopic scales emerge partially stochastically in a bottom-up
way. Such emergent properties of complex systems are often very robust and dynamically adaptive to the surrounding environment, indicating that complex systems bear
great potential for engineering applications [Ottino, 2004].
In an attempt to design engineered complex systems, one of the most challenging
problems has been how to bridge the gap between macro and micro scales. Some mathematical techniques make it possible to analytically show such macro-micro relationships in complex systems (e.g., those developed in statistical mechanics and condensed
matter physics [Bar-Yam, 2003, Boccara, 2010]). However, those techniques are only
applicable to “simple” complex systems, in which: system components are reasonably
1
This chapter is based on our previous publications [Sayama, 2007, Sayama, 2009, Sayama
et al., 2009, Sayama, 2010, Bush and Sayama, 2011, Sayama, 2011, Sayama and Wong, 2011,
Sayama, 2012].
Fig. 1.1. Relationships of macroscopic and microscopic properties in complex systems and how engineering has been handling the gap between them. (Figure labels: microscopic properties, i.e., local structure/behavior of fundamental components; macroscopic properties, i.e., global structure/behavior of the whole system; emergence from micro to macro, with analytical treatments possible though rather limited; in engineering, prediction by experiments and embedding by evolutionary design.)

uniform and homogeneous, their interactions can be approximated without losing important dynamical properties, and/or the resulting emergent patterns are relatively regular so that they can be characterized by a small number of macroscopic order parameters [Bar-Yam, 2003, Doursat et al., 2012]. Unfortunately, such cases are exceptions in a vast, diverse, and rather messy compendium of complex systems dynamics [Camazine, 2003, Sole and Goodwin, 2008]. To date, the only generalizable methodology available for predicting macroscopic properties of a complex system from microscopic rules governing its fundamental components is to conduct experiments—either computational or physical—to let the system show its emergent properties by itself (Fig. 1.1, top).
top). [8,9]. To date, the only generalizable methodology available for predicting
macroscopic properties of a complex system from microscopic rules of its fundamental
More importantly, the other way of connecting the two scales—embedding macroscopic requirements the designer wants into microscopic rules that will collectively
2
achieve those requirements—is by far more difficult. This is because the mapping between micro and macro scales is highly nonlinear, and also the space of possible microscopic rules is huge and thus hard to explore. So far, the only generalizable methodology available for macro-to-micro embedding in this context is to acquire microscopic
rules by evolutionary means [Bentley, 1999] (Fig. 1.1, bottom). Instead of trying to
derive local rules analytically from global requirements, evolutionary methods let better rules spontaneously arise and adapt to meet the requirements, even though they
do not produce any understanding of the macro-micro relationships. The effectiveness
of such “blind” evolutionary search [Dawkins, 1996] for complex systems design is
empirically supported by the fact that it has been the primary mechanism that has produced astonishingly complex, sophisticated, highly emergent machinery in the history
of real biological systems.
The combination of these two methodologies—experiment and evolution—that
connect macro and micro scales in two opposite directions (the whole cycle in Fig.
1.1) is now a widely adopted approach for guiding systematic design of self-organizing
complex systems [Minai et al., 2006, Anderson, 2006]. Typical design steps are to (a)
create local rules randomly or using some heuristics, (b) conduct experiments using
those local rules, (c) observe what kind of macroscopic patterns emerge out of them,
(d) select and modify successful rules according to the observations, and (e) repeat
these steps iteratively to achieve evolutionary improvement of the microscopic rules
until the whole system meets the macroscopic requirements.
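As a rough illustration of steps (a)-(e), the sketch below shows a generic experiment-and-evolution loop. The function names (random_rules, run_experiment, evaluate, vary) are placeholders for whatever rule representation, simulator and fitness measure a particular study uses; they are assumptions, not part of any published code.

```python
import random

def evolutionary_design(random_rules, run_experiment, evaluate, vary,
                        pop_size=20, generations=100):
    """Generic experiment-and-evolution loop for steps (a)-(e) in the text."""
    population = [random_rules() for _ in range(pop_size)]           # (a) create local rules
    best = population[0]
    for _ in range(generations):                                     # (e) iterate
        scores = [evaluate(run_experiment(rules))                    # (b) run experiments,
                  for rules in population]                           # (c) observe macro behavior
        ranked = [r for _, r in sorted(zip(scores, population),
                                       key=lambda pair: pair[0], reverse=True)]
        best = ranked[0]
        parents = ranked[:pop_size // 2]                             # (d) select successful rules
        population = parents + [vary(random.choice(parents))         # (d) ...and modify them
                                for _ in range(pop_size - len(parents))]
    return best
```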
Such experiment-and-evolution-based design of complex systems is not free from
limitations, however. In typical evolutionary design methods, the designer needs to
explicitly define a performance metric, or “fitness”, of design candidates, i.e., how
good a particular design is. Such performance metrics are usually based on relatively
simple observables easily extractable from experimental results (e.g., the distance a
robot traveled, etc.). However, simple quantitative performance metrics may not be
suitable or useful in evolutionary design of more complex structures or behaviors,
such as those seen in real-world biological systems, where the key properties a system
should acquire could be very diverse and complex, more qualitative than quantitative,
and/or even unknown to the designer herself beforehand.
In this chapter, we present our efforts to address this problem, by (1) utilizing
and enhancing interactive evolutionary design methods and (2) realizing spontaneous
evolution of self-organizing swarms within an artificial ecosystem.
1.2 Model: Swarm Chemistry
We use Swarm Chemistry [Sayama, 2007, Sayama, 2009] as an example of self-organizing complex systems with which we demonstrate our design approaches.
Swarm Chemistry is an artificial chemistry [Dittrich et al., 2001] model for designing spatio-temporal patterns of kinetically interacting heterogeneous particle swarms
using evolutionary methods. A swarm population in Swarm Chemistry consists of a
number of simple particles that are assumed to be able to move in any direction at any
time in a two- or three-dimensional continuous space, perceive positions and velocities
of other particles within its local perception range, and change its velocity in discrete
time steps according to the following kinetic rules (adopted and modified from the
rules in Reynolds’ Boids [Reynolds, 1987]; see Fig. 1.2):
• If there are no other particles within its local perception range, steer randomly
(Straying).
• Otherwise:
  – Steer to move toward the average position of nearby particles (Cohesion, Fig. 1.2(a)).
  – Steer toward the average velocity of nearby particles (Alignment, Fig. 1.2(b)).
  – Steer to avoid collision with nearby particles (Separation, Fig. 1.2(c)).
  – Steer randomly with a given probability (Randomness).
• Approximate its speed to its own normal speed (Self-propulsion).

Table 1.1. Kinetic parameters involved in the simulation of particle behavior (from [Sayama, 2010]). Unique values are assigned to these parameters for each particle i as its own kinetic properties.

Name   Min  Max   Meaning                            Unit
R^i    0    300   Radius of local perception range   pixel
Vn^i   0    20    Normal speed                       pixel step^-1
Vm^i   0    40    Maximum speed                      pixel step^-1
c1^i   0    1     Strength of cohesive force         step^-2
c2^i   0    1     Strength of aligning force         step^-1
c3^i   0    100   Strength of separating force       pixel^2 step^-2
c4^i   0    0.5   Probability of random steering     (dimensionless)
c5^i   0    1     Tendency of self-propulsion        (dimensionless)
These rules are implemented in a simulation algorithm that uses the kinetic parameters listed and explained in Table 1.1 (see [Sayama, 2009, Sayama, 2010] for details of the algorithm). The kinetic interactions in our model use only one omni-directional perception range (R^i), which is much simpler than other typical swarm models that use multiple and/or directional perception ranges [Reynolds, 1987, Couzin et al., 2002, Kunz and Hemelrijk, 2003, Hemelrijk and Kunz, 2005, Cheng et al., 2005, Newman and Sayama, 2008]. Moreover, the information being shared by nearby particles is nothing more than kinetic information (i.e., relative position and velocity), which is externally observable and therefore can be shared without any specialized communication channels². These features make this system uniquely simple compared to other self-organizing swarm models.
Each particle is assigned with its own kinetic parameter settings that specify preferred speed, local perception range, and strength of each kinetic rule. Particles that
share the same set of kinetic parameter settings are considered of the same type. Particles do not have a capability to distinguish one type from another; all particles look
exactly the same to themselves.
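The following sketch is one plausible reading of the kinetic rules above and of the parameters in Table 1.1. It is not Sayama's published simulator; details such as the strength of the random steering and the exact order of the velocity updates are assumptions made for illustration.

```python
import numpy as np

def update_velocity(i, pos, vel, params, rng, dt=1.0):
    """One velocity-update step for particle i under the kinetic rules.

    pos, vel : (N, 2) arrays of positions and velocities.
    params[i]: dict with keys R, Vn, Vm, c1..c5 (see Table 1.1).
    rng      : e.g. np.random.default_rng().
    """
    p = params[i]
    diff = pos - pos[i]
    dist = np.linalg.norm(diff, axis=1)
    neighbors = np.where((dist > 0) & (dist < p["R"]))[0]

    if len(neighbors) == 0:
        a = rng.normal(0.0, 1.0, size=2)                 # Straying
    else:
        center = pos[neighbors].mean(axis=0)
        mean_v = vel[neighbors].mean(axis=0)
        a = p["c1"] * (center - pos[i])                  # Cohesion
        a += p["c2"] * (mean_v - vel[i])                 # Alignment
        for j in neighbors:                              # Separation (inversely related to distance)
            a += p["c3"] * (pos[i] - pos[j]) / (dist[j] ** 2)
        if rng.random() < p["c4"]:                       # Randomness
            a += rng.normal(0.0, 5.0, size=2)

    v = vel[i] + a * dt
    speed = np.linalg.norm(v)
    if speed > p["Vm"]:                                  # cap at maximum speed
        v *= p["Vm"] / speed
    if speed > 0:                                        # Self-propulsion toward normal speed Vn
        v = p["c5"] * (p["Vn"] / max(np.linalg.norm(v), 1e-9)) * v + (1 - p["c5"]) * v
    return v
```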
For a given swarm, specifications for its macroscopic properties are indirectly and
implicitly woven into a list of different kinetic parameter settings for each swarm component, called a recipe (Fig. 1.3) [Sayama, 2009]. It is quite difficult to manually design
a specific recipe that produces a desired structure and/or behavior using conventional
top-down design methods, because the self-organization of a swarm is driven by complex interactions among a number of kinetic parameters that are intertwined with each other in highly non-trivial, implicit ways.

² An exception is local information transmission during particle recruitment processes, which will be discussed later.

Fig. 1.2. Kinetic interactions between particles (from [Sayama, 2010]). Top: Particle i senses only positions and velocities of nearby particles within distance R^i. Bottom: (a) Cohesion. Particle i accelerates toward the center of mass of nearby particles. (b) Alignment. Particle i steers to align its orientation to the average orientation of nearby particles. (c) Separation. Particle i receives repulsion forces from each of the nearby particles whose strength is inversely related to distance.

97 * (226.76, 3.11, 9.61, 0.15, 0.88, 43.35, 0.44, 1.0)
38 * (57.47, 9.99, 35.18, 0.15, 0.37, 30.96, 0.05, 0.31)
56 * (15.25, 13.58, 3.82, 0.3, 0.8, 39.51, 0.43, 0.65)
31 * (113.21, 18.25, 38.21, 0.62, 0.46, 15.78, 0.49, 0.61)

Fig. 1.3. Example of a recipe, formatted as a list of kinetic parameter sets of different types within a swarm (from [Sayama, 2010]). Each row represents one type, which has a number of particles of that type at the beginning, followed by its parameter settings in the format of (R^i, Vn^i, Vm^i, c1^i, c2^i, c3^i, c4^i, c5^i).
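The recipe format of Fig. 1.3 is easy to read and write mechanically. The sketch below parses it into per-type parameter dictionaries; the field names follow Table 1.1, while the function itself is an illustrative assumption rather than part of the published code.

```python
import re

FIELDS = ("R", "Vn", "Vm", "c1", "c2", "c3", "c4", "c5")

def parse_recipe(text):
    """Parse lines of the form 'count * (R, Vn, Vm, c1, c2, c3, c4, c5)'."""
    types = []
    for line in text.strip().splitlines():
        count_str, params_str = line.split("*", 1)
        values = [float(x) for x in re.findall(r"-?\d+(?:\.\d+)?", params_str)]
        types.append({"count": int(count_str), **dict(zip(FIELDS, values))})
    return types

recipe = """\
97 * (226.76, 3.11, 9.61, 0.15, 0.88, 43.35, 0.44, 1.0)
38 * (57.47, 9.99, 35.18, 0.15, 0.37, 30.96, 0.05, 0.31)"""
print(parse_recipe(recipe)[0]["R"])   # 226.76
```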
In the following sections, we address this difficult design problem using evolutionary methods. Unlike in other typical evolutionary search or optimization tasks,
however, in our swarm design problem, there is no explicit function or algorithm readily available for assessing the quality (or fitness) of each individual design. To meet this unique challenge, we used two complementary approaches: the interactive
approach, where human users are actively involved in the evolutionary design process,
and the automated approach, where spontaneous evolutionary dynamics of artificial
ecosystems are utilized as the engine to produce creative self-organizing patterns.
Fig. 1.4. Examples of swarms designed using IEC methods ("swinger", "rotary" and "walker-follower"). Their recipes are available on the Swarm Chemistry website (http://bingweb.binghamton.edu/~sayama/SwarmChemistry/).
1.3 Interactive Approach
The first approach is based on interactive evolutionary computation (IEC) [Banzhaf,
2000,Takagi, 2001], a derivative class of evolutionary computation which incorporates
interaction with human users. Most IEC applications fall into a category known as
“narrowly defined IEC” (NIEC) [Takagi, 2001], which simply outsources the task of
fitness evaluation to human users. For example, a user may be presented with a visual
representation of the current generation of solutions and then prompted to provide
fitness information about some or all of the solutions. The computer in turn uses this
fitness information to produce the next generation of solutions through the application
of a predefined sequence of evolutionary operators.
Our initial work, Swarm Chemistry 1.1 [Sayama, 2007,Sayama, 2009], also used a
variation of NIEC, called Simulated Breeding [Unemi, 2003]. This NIEC-based application used discrete, non-overlapping generation changes. The user selects one or two
favorable swarms out of a fixed number of swarms displayed, and the next generation
is generated out of them, discarding all other unused swarms. Selecting one swarm creates the next generation using perturbation and mutation. Selecting two swarms creates
the next generation by mixing them together (similar to crossover, but this mixing is
not genetic but physical). Figure 1.4 shows some examples of self-organizing swarms
designed using Swarm Chemistry 1.1.
As a design tool, NIEC has some disadvantages. One set of disadvantages stems
from the confinement of the user to the role of selection operator (Fig. 1.5, left). Creative users who are accustomed to a more highly involved design process may find the
experience to be tedious, artificial, and frustrating. Earlier literature suggests that it is
important to instill in the user a strong sense of control over the entire evolutionary
process [Bentley and O’Reilly, 2001] and that the users should be the initiators of actions rather than simply responding to prompts from the system [Shneiderman et al.,
2009].
These lines of research suggest that enhancing the level of interaction and control
of IEC may help the user better guide the design process of self-organizing swarms.
Therefore, we developed the concept of hyperinteractive evolutionary computation
(HIEC) [Bush and Sayama, 2011], a novel form of IEC in which a human user actively
chooses when and how to apply each of the available evolutionary operators, playing
the central role in the control flow of evolutionary search processes (Fig. 1.5, right).
Fig. 1.5. Comparison of control flows between two interactive evolutionary computation (IEC) frameworks (from [Bush and Sayama, 2011]). Left: Narrowly defined IEC (NIEC), in which the user only provides fitness data and the computer then applies crossover, mutation and deletion to produce the next generation. Right: Hyper-interactive IEC (HIEC), in which the user chooses which evolutionary operator to apply (mutate, cross, copy, or delete selected individuals, or add a randomly generated individual) at each step.
In HIEC, the user directs the overall search process and initiates actions by choosing
when and how each evolutionary operator is applied. The user may add a new solution to the population through the crossover, mutate, duplicate, or random operators.
The user can also remove solutions with the delete operator. This naturally results in
dynamic variability of population size and continuous generation change (like steady-state strategies for genetic algorithms).
We developed Swarm Chemistry 1.2 [Sayama et al., 2009, Bush and Sayama,
2011], a redesigned HIEC-based application for designing swarms. This version uses
continuous generation changes, i.e., each evolutionary operator is applied only to part
of the population of swarms on a screen without causing discrete generation changes.
A mutated copy of an existing swarm can be generated by either selecting the “Mutate” option or double-clicking on a particular swarm. Mixing two existing swarms can
be done by single-clicking on two swarms, one after the other. The “Replicate” option
creates an exact copy of the selected swarm next to it. One can also remove a swarm
from the population by selecting the “Kill” option or simply closing the frame. More
details of HIEC and Swarm Chemistry 1.2 can be found elsewhere [Sayama et al.,
2009, Bush and Sayama, 2011].
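To make the contrast with NIEC concrete, here is a minimal sketch of one user-driven HIEC step over a population of recipes. The operator names mirror the description above, while mix(), mutate() and the command interface are assumptions about one possible implementation, not the actual Swarm Chemistry 1.2 code.

```python
import copy

def hiec_step(population, command, selection, new_random_recipe, mix, mutate):
    """Apply one user-chosen HIEC operator to the population (steady-state style).

    population : list of recipes; selection : indices chosen by the user;
    mix / mutate / new_random_recipe : domain-specific operators supplied by the caller.
    """
    if command == "mix":            # crossover-like physical mixing of two swarms
        a, b = selection
        population.append(mix(population[a], population[b]))
    elif command == "mutate":       # perturbed copy of one swarm
        population.append(mutate(copy.deepcopy(population[selection[0]])))
    elif command == "replicate":    # exact copy of one swarm
        population.append(copy.deepcopy(population[selection[0]]))
    elif command == "kill":         # remove a swarm; population size varies freely
        population.pop(selection[0])
    elif command == "random":       # inject a fresh randomly generated individual
        population.append(new_random_recipe())
    return population
```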
We conducted the following two human-subject experiments to see if HIEC would
produce a more controllable and positive user experience, and thereby better swarm
design outcomes, than those with NIEC.
Fig. 1.6. Comparison of rating distribution between the NIEC and HIEC applications across seven factors. Mean ratings are shown by diamonds, with error bars around them showing standard deviations. Significant differences are indicated with an asterisk and corresponding t-test p-values.

1.3.1 User experience

In the first experiment, individual subjects used the NIEC and HIEC applications mentioned above to evolve aesthetically pleasing self-organizing swarms. We quantified user experience outcomes using a questionnaire, in order to quantify potential differences in user experience between the two applications.

Twenty-one subjects were recruited from students and faculty/staff members at Binghamton University. Each subject was recruited and participated individually. The subject was told to spend five minutes using each of the two applications to design an "interesting and lifelike" swarm. Each of these two applications ran on their own dedicated computer station. After completing each session, which used either the NIEC or the HIEC application, the subject filled out a survey, rating each of the two platforms on the following factors: easiness of operation, controllability, intuitiveness, fun factor, fatigue level, final design quality, and overall satisfaction. Each factor was rated on a 5-point scale.

The results are shown in Fig. 1.6. Of the 7 factors measured, 3 showed a statistically significant difference between the two platforms: controllability, fun factor, and overall satisfaction. The higher controllability ratings for HIEC suggest that our intention to re-design an IEC framework to grant greater control to the user was successful. Our results also suggest that this increased control may be associated with a more positive user experience, as is indicated by the higher overall satisfaction and
fun ratings for HIEC. In the meantime, there was no significant difference detected in
terms of perceived final design quality. This issue is investigated in more detail in the
following second experiment.
Fig. 1.7. Comparison of normalized rating score distributions between swarms produced under
five experimental conditions (from [Sayama et al., 2009]). Average rating scores are shown by
diamonds, with error bars around them showing standard deviations.
1.3.2 Design quality
The goal of the second experiment was to quantify the difference between HIEC and
NIEC in terms of final design quality. In addition, the effects of mixing and mutation
operators on the final design quality were also studied. The key feature of this experiment was that design quality was rated not individually by the subjects who designed
them, but by an entire group of individual subjects. The increased amount of rating information yielded by this procedure allowed us to more effectively detect differences
in quality between designs created using NIEC and designs created using HIEC.
Twenty-one students were recruited for this experiment. Those subjects did not
have any overlap with the subjects of experiment 1. The subjects were randomly divided into groups of three and instructed to work together as a team to design an
“interesting” swarm design in ten minutes using either the NIEC or HIEC application,
the latter of which was further conditioned to have the mixing operator, the mutation
operator, or both, or none. The sessions were repeated so that five to seven swarm
designs were created under each condition. Once the sessions were over, all the designs created by the subjects were displayed on a large screen in the experiment room,
and each subject was told to evaluate how “cool” each design was on a 0-to-10 numerical scale. Details of the experimental procedure and data analysis can be found
elsewhere [Sayama et al., 2009, Bush and Sayama, 2011].
The result is shown in Fig. 1.7. There was a difference in the average rating scores
between designs created using NIEC and HIEC (conditions 0 and 4), and the rating
scores were higher when more evolutionary operators were made available. Several
final designs produced through the experiment are shown in Fig. 1.8 (three with the
highest scores and three with the lowest scores), which indicate that highly evaluated
swarms tended to maintain coherent, clear structures and motions without dispersal,
while those that received lower ratings tended to disperse, so that their behaviors were not appealing to the students.
Fig. 1.8. Samples of the final swarm designs created by subjects (from [Sayama et al., 2009]).
(a) Best three that received the highest rating scores. (b) Worst three that received the lowest
rating scores.
Table 1.2. Results of one-way ANOVA on the rating scores for five conditions obtained in experiment 2 (from [Bush and Sayama, 2011]). Significant difference is shown with an asterisk.

Source of variation   Degrees of freedom   Sum of squares   Mean square   F      F-test p-value
Between groups        4                    14.799           3.700         4.11   0.003*
Within groups         583                  525.201          0.901
Total                 587                  540
To detect statistical differences between experimental conditions, a one-way ANOVA was conducted. The result of the ANOVA is summarized in Table 1.2. Statistically significant variation was found between the conditions (p < 0.005). Tukey's and Bonferroni's post-hoc tests detected a significant difference between conditions 0 (NIEC) and 4 (HIEC), which supports our hypothesis that HIEC is more effective at producing final designs of higher quality than NIEC. The post-hoc tests also detected a significant difference between conditions 1 (HIEC without mixing or mutation operators) and 4 (HIEC). These results indicate that the more active role a designer plays in the interactive design process, and the more diverse evolutionary operators she has at her disposal, the more effectively she can guide the evolutionary design of self-organizing swarms.
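For reference, this kind of analysis can be reproduced with standard tools. The sketch below runs a one-way ANOVA on rating scores grouped by condition using SciPy; the rating lists are made-up placeholders, not the study's data.

```python
# Hedged sketch: one-way ANOVA across experimental conditions, as in Table 1.2.
from scipy import stats

ratings_by_condition = {
    0: [3.0, 4.0, 2.5, 3.5],   # NIEC
    1: [3.0, 3.5, 2.0, 3.0],   # HIEC, neither mixing nor mutation
    2: [3.5, 4.0, 3.0, 3.5],   # mixing only
    3: [3.5, 4.5, 3.0, 4.0],   # mutation only
    4: [4.5, 5.0, 4.0, 4.5],   # full-featured HIEC
}
f_stat, p_value = stats.f_oneway(*ratings_by_condition.values())
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")
```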
1.4 Automated Approach
The second approach we took was motivated by the following question: Do we really
need human users in order to guide designs of self-organizing swarms? This question
might sound almost paradoxical, because designing an artifact implies the existence
of a designer by definition. However, this argument is quite similar to the "watchmaker" argument made by the English theologian William Paley (as well as by many
other leading scientists in the past) [Dawkins, 1996]. Now that we know that the blind
evolutionary process did “design” quite complex, intricate structures and functions
of biological systems, it is reasonable to assume that it should be possible to create
automatic processes that can spontaneously produce various creative self-organizing
swarms without any human intervention.
In order to make the swarms capable of spontaneous evolution within a simulated
world, we implemented several major modifications to Swarm Chemistry [Sayama,
2010, Sayama, 2011, Sayama and Wong, 2011], as follows:
1. There are now two categories of particles, active (moving and interacting kinetically) and passive (remaining still and inactive). An active particle holds a recipe
of the swarm (a list of kinetic parameter sets) (Fig. 1.9(a)).
2. A recipe is transmitted from an active particle to a passive particle when they
collide, making the latter active (Fig. 1.9(b)).
3. The activated particle differentiates randomly into one of the multiple types specified in the recipe, with probabilities proportional to their ratio in it (Fig. 1.9(c)).
4. Active particles randomly and independently re-differentiate with small probability, r, at every time step (r = 0.005 for all simulations presented in this chapter).
5. A recipe is transmitted even between two active particles of different types when
they collide. The direction of recipe transmission is determined by a competition
function that picks one of the two colliding particles as a source (and the other as
a target) of transmission based on their properties (Fig. 1.9(d)).
6. The recipe can mutate when transmitted, as well as spontaneously at every time
step, with small probabilities, pt and ps , respectively (Fig. 1.9(e)). In a single
recipe mutation event, several mutation operators are applied, including duplication of a kinetic parameter set (5% per set), deletion of a kinetic parameter set
(5% per set), addition of a random kinetic parameter set (10% per event; increased
to 50% per event in later experiments), and a point mutation of kinetic parameter
values (10% per parameter).
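The mutation operators in item 6 can be written compactly. The sketch below applies duplication, deletion, addition and point mutation to a recipe represented as a list of (count, parameter-dict) pairs, using the per-operator probabilities quoted above; the representation, the added-type particle count, and the point-mutation magnitude are assumptions, and the parameter ranges follow Table 1.1.

```python
import copy, random

PARAM_RANGES = {"R": (0, 300), "Vn": (0, 20), "Vm": (0, 40), "c1": (0, 1),
                "c2": (0, 1), "c3": (0, 100), "c4": (0, 0.5), "c5": (0, 1)}

def random_param_set():
    return {k: random.uniform(lo, hi) for k, (lo, hi) in PARAM_RANGES.items()}

def mutate_recipe(recipe, p_add=0.1):
    """recipe: list of (count, params) pairs; returns a mutated copy."""
    out = []
    for count, params in recipe:
        if random.random() >= 0.05:                  # deletion: 5% per parameter set
            out.append((count, copy.deepcopy(params)))
        if random.random() < 0.05:                   # duplication: 5% per parameter set
            out.append((count, copy.deepcopy(params)))
    if random.random() < p_add:                      # addition: 10% (later 50%) per event
        out.append((random.randint(1, 100), random_param_set()))
    for _, params in out:                            # point mutation: 10% per parameter
        for key, (lo, hi) in PARAM_RANGES.items():
            if random.random() < 0.1:
                params[key] = min(hi, max(lo, params[key] + random.gauss(0, (hi - lo) * 0.1)))
    return out
```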
These extensions made the model capable of showing morphogenesis and self-repair [Sayama, 2010] and autonomous ecological/evolutionary behaviors of self-organized "super-organisms" made of a number of swarming particles [Sayama,
2011, Sayama and Wong, 2011]. We note here that there was a technical problem
in the original implementation of collision detection in an earlier version of evolutionary Swarm Chemistry [Sayama, 2011], which was fixed in the later implementation [Sayama and Wong, 2011].
Fig. 1.9. How particle interactions work in the revised Swarm Chemistry (from [Sayama, 2011]). (a) There are two categories of particles, active (blue) and passive (gray). An active particle holds a recipe of the swarm in it (shown in the call-out). Each row in the recipe represents one kinetic parameter set. The underline shows which kinetic parameter set the particle is currently using (i.e., which kinetic type it is differentiated into). (b) A recipe is transmitted from an active particle to a passive particle when they collide, making the latter active. (c) The activated particle differentiates randomly into a type specified by one of the kinetic parameter sets in the recipe given to it. (d) A recipe is transmitted between active particles of different types when they collide. The direction of recipe transmission is determined by a competition function that picks one of the two colliding particles as a source (and the other as a target) of transmission based on their properties. (e) The recipe can mutate when transmitted with small probability.

In addition, in order to make evolution occur, we needed to confine the particles in a finite environment in which different recipes compete against each other. We thus conducted all the simulations with 10,000 particles contained in a finite, 5,000 × 5,000 square space (in arbitrary units; for reference, the maximal perception radius of a particle was 300). A "pseudo"-periodic boundary condition was applied to the boundaries of the space. Namely, particles that cross a boundary reappear from the other side of the space just like in conventional periodic boundary conditions, but they do not interact across boundaries with other particles sitting near the other side of the space. In other words, the periodic boundary condition applies only to particle positions, but not to their interaction forces. This specific choice of boundary treatment was initially made because of its simplicity of implementation, but it proved to be a useful boundary condition that introduces a moderate amount of perturbations to swarms while maintaining their structural coherence and confining them in a finite area.
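The "pseudo"-periodic treatment is easy to express in code: positions wrap around, but distances used for kinetic interactions are computed without wrapping. A minimal sketch, assuming a square space of width 5,000 units:

```python
import numpy as np

WIDTH = 5000.0

def wrap_positions(pos):
    """Particles crossing a boundary reappear on the opposite side."""
    return np.mod(pos, WIDTH)

def interaction_distance(p, q):
    """Plain Euclidean distance: no wrap-around, so particles near opposite
    boundaries do not perceive each other (the 'pseudo'-periodic condition)."""
    return float(np.linalg.norm(p - q))
```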
In the simulations, two different initial conditions were used: a random initial condition made of 9,900 inactive particles and 100 active particles with randomly generated one-type recipes distributed over the space, and a designed initial condition
consisting of 9,999 inactive particles distributed over the space, with just one active
particle that holds a pre-designed recipe positioned in the center of the space. Specifically, recipes of “swinger”, “rotary” and “walker-follower” (shown in Fig. 1.4) patterns
were used.
1.4.1 Exploring experimental conditions
Using the evolutionary Swarm Chemistry model described above, we studied what
kind of experimental conditions (competition functions and mutation rates) would be
most successful in creating self-organizing complex patterns [Sayama, 2011].
The first experiment was to observe the basic evolutionary dynamics of the model
under low mutation rates (pt = 10^-3, ps = 10^-5). Random and designed ("swinger")
initial conditions were used. The following four basic competition functions were implemented and tested:
• faster: The faster particle wins.
• slower: The slower particle wins.
• behind: The particle that hit the other one from behind wins. Specifically, if a particle exists within a 90-degree angle opposite to the other particle’s velocity, the
former particle is considered a winner.
• majority: The particle surrounded by more of the same type wins. The local neighborhood radius used to count the number of particles of the same type was 30. The
absolute counts were used for comparison.
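A minimal sketch of these four basic competition functions is given below, assuming each particle object exposes its velocity, position, type and a method returning its neighbors within a given radius; the attribute and method names are placeholders, not the model's actual interface.

```python
import numpy as np

def faster(p, q):
    return p if np.linalg.norm(p.velocity) >= np.linalg.norm(q.velocity) else q

def slower(p, q):
    return p if np.linalg.norm(p.velocity) <= np.linalg.norm(q.velocity) else q

def behind(p, q):
    # p wins if it lies within a 90-degree cone opposite to q's velocity.
    to_p = p.position - q.position
    threshold = np.linalg.norm(to_p) * np.linalg.norm(q.velocity) * np.cos(np.pi / 4)
    return p if np.dot(to_p, -q.velocity) >= threshold else q

def majority(p, q, radius=30.0):
    # The particle surrounded by more particles of its own type (absolute counts) wins.
    count_p = sum(1 for n in p.neighbors(radius) if n.type_id == p.type_id)
    count_q = sum(1 for n in q.neighbors(radius) if n.type_id == q.type_id)
    return p if count_p >= count_q else q
```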
Results are shown in Fig. 1.10. The results with the “behind” competition function
were very similar to those with the “faster” competition function, and therefore omitted
from the figure. In general, growth and replication of macroscopic structures were observed at early stages of the simulations. The growth was accomplished by recruitment
of inactive particles through collisions. Once a cluster of active particles outgrew the maximal size beyond which it could not maintain a single coherent structure (typically
determined by their perception range), the cluster spontaneously split into multiple
smaller clusters, naturally resulting in the replication of those structures. These growth
and replication dynamics were particularly visible in simulations with designed initial
conditions. Once formed, the macroscopic structures began to show ecological interactions by themselves, such as chasing, predation and competition over finite resources
(i.e., particles), and eventually the whole system tended to settle down in a static or
dynamic state where only a small number of species were dominant. There were some
evolutionary adaptations also observed (e.g., in faster & designed (“swinger”); second
row in Fig. 1.10) even with the low mutation rates used.
It was also observed that the choice of competition functions had significant impacts on the system’s evolutionary dynamics. Both the “faster” and “behind” competition functions always resulted in an evolutionary convergence to a homogeneous
cloud of fast-moving, nearly independent particles. In contrast, the “slower” competition function tended to show very slow evolution, often leading to the emergence
of crystallized patterns. The “majority” competition function turned out to be most
successful in creating and maintaining dynamic behaviors of macroscopic coherent
structures over a long period of time, yet it was quite limited regarding the capability
of producing evolutionary innovations. This was because any potentially innovative
mutation appearing in a single particle would be lost in the presence of local majority
already established around it.
Based on the results of the previous experiment, the following five more competition functions were implemented and tested. The last three functions that took recipe
Fig. 1.10. Evolutionary processes observed in the evolutionary Swarm Chemistry model (from [Sayama, 2011]). Each image shows a snapshot of the space in a simulation, where dots with different colors represent particles of different types. Labels on the left indicate the competition function and the initial condition used in each case ("faster", "slower" and "majority", each with the random and the designed ("swinger") initial condition). Snapshots were taken at logarithmic time intervals.
Fig. 1.11. Comparison between several different competition functions (from [Sayama, 2011]). The nine cases on the left hand side started with random initial conditions, while the other nine on the right hand side started with designed initial conditions with the "swinger" recipe. Snapshots were taken at time = 20,000 for all cases.
length into account were implemented in the hope that they might promote evolution
of increasingly more complex recipes and therefore more complex patterns:
• majority (probabilistic): The particle surrounded by more of the same type wins. This is essentially the same function as the original “majority”, except that the winner is determined probabilistically using the particle counts as relative probabilities of winning.
• majority (relative): The particle that perceives the higher density of the same type within its own perception range wins. The density was calculated by dividing the number of particles of the same type by the total number of particles of any kind, both counted within the perception range. The range may be different and asymmetric between the two colliding particles.
• recipe length: The particle with a recipe that has more kinetic parameter sets wins.
• recipe length then majority: The particle with a recipe that has more kinetic parameter sets wins. If the recipe length is equal between the two colliding particles, the winner is selected based on the “majority” competition function.
• recipe length × majority: A numerical score is calculated for each particle by multiplying its recipe length by the number of particles of the same type within its local neighborhood (radius = 30). Then the particle with a greater score wins.
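To make these pairwise decision rules concrete, the following minimal Python sketch shows how one of them (“recipe length × majority”) might be scored for two colliding particles. The Particle fields, the neighborhood counting routine and the random tie-breaking are illustrative assumptions for this sketch, not the actual Swarm Chemistry implementation.

```python
# Hypothetical sketch of resolving a pairwise competition between two colliding
# particles with the "recipe length x majority" rule. Data structures are
# assumptions made for illustration only.
from dataclasses import dataclass
from typing import List
import math
import random

@dataclass
class Particle:
    x: float
    y: float
    type_id: int         # particles sharing a recipe share a type_id
    recipe_length: int    # number of kinetic parameter sets in the recipe

def count_same_type(p: Particle, swarm: List[Particle], radius: float = 30.0) -> int:
    """Count particles of the same type within the local neighborhood of p."""
    return sum(
        1 for q in swarm
        if q is not p
        and q.type_id == p.type_id
        and math.hypot(q.x - p.x, q.y - p.y) <= radius
    )

def recipe_length_times_majority(a: Particle, b: Particle,
                                 swarm: List[Particle]) -> Particle:
    """Score = recipe length * local same-type count; higher score wins,
    ties broken at random."""
    score_a = a.recipe_length * count_same_type(a, swarm)
    score_b = b.recipe_length * count_same_type(b, swarm)
    if score_a == score_b:
        return random.choice([a, b])
    return a if score_a > score_b else b
```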
Results are summarized in Fig. 1.11. As clearly seen in the figure, the majority-based rules are generally good at maintaining macroscopic coherent structures, regardless of minor variations in their implementations. This indicates that interaction
between particles, or “cooperation” among particles of the same type to support one
another, is the key to creating and maintaining macroscopic structures. Experimental
observation of a number of simulation runs gave an impression that the “majority (relative)” competition function would be the best in this regard, therefore this function
was used in all of the following experiments.
In the meantime, the “recipe length” and “recipe length then majority” competition
functions did not show any evolution toward more complex forms, despite the fact that
they would strongly promote evolution of longer recipes. What was occurring in these
conditions was an evolutionary accumulation of “garbage” kinetic parameter sets in a
recipe, which did not show any interesting macroscopic structure. This is qualitatively
similar to the well-known observation made in Tierra [Ray, 1992].
The results described above suggested the potential of evolutionary Swarm Chemistry for producing more creative, continuous evolutionary processes, but none of the
competition functions showed notable long-term evolutionary changes yet. We therefore increased the mutation rates to a level 100 times greater than those in the experiments above, and also introduced a few different types of exogenous perturbations to
create a dynamically changing environment (for more details, see [Sayama, 2011]).
This was informed by our earlier work on evolutionary cellular automata [Salzberg
et al., 2004, Salzberg and Sayama, 2004], which demonstrated that such dynamic environments may make evolutionary dynamics of a system more variation-driven and
thus promote long-term evolutionary changes.
With these additional changes, some simulation runs finally demonstrated continuous changes of dominant macroscopic structures over a long period of time (Fig. 1.12).
A fundamental difference between this and earlier experiments was that the perturbation introduced to the environment would often break the “status quo” established
in the swarm population, making room for further evolutionary innovations to take
place. A number of unexpected, creative swarm designs spontaneously emerged out of
these simulation runs, fulfilling our intention to create automated evolutionary design
processes. Videos of sample simulation runs can be found on our YouTube channel
(http://youtube.com/ComplexSystem).
1.4.2 Quantifying observed evolutionary dynamics
The experimental results described above were quite promising, but they were evaluated only by visual inspection with no objective measurements involved. To address
the lack of quantitative measurements, we developed and tested two simple measurements to quantify the degrees of evolutionary exploration and macroscopic structuredness of swarm populations [Sayama and Wong, 2011], assuming that the evolutionary
process of swarms would look interesting and creative to human eyes if it displayed
patterns that are clearly visible and continuously changing. These measurements were
developed so that they can be easily calculated a posteriori from a sequence of snapshots (bitmap images) taken in past simulation runs, without requiring genotypic or
genealogical information that was typically assumed available in other proposed metrics [Bedau and Packard, 1992, Bedau and Brown, 1999, Nehaniv, 2000].
Fig. 1.12. An example of long-term evolutionary behavior seen under dynamic environmental conditions with high mutation rates. Snapshots were taken at constant time intervals (2,500 steps) to show continuous evolutionary changes.
Table 1.3. Four conditions used for the final experiment to quantify evolutionary dynamics.
Name            Mutation rate   Environmental perturbation   Collision detection algorithm
original-low    low             off                          original
original-high   high            on                           original
revised-low     low             off                          revised
revised-high    high            on                           revised
Evolutionary exploration was quantified by counting the number of new RGB colors that appeared in a bitmap image of the simulation snapshot at a specific time point
for the first time during each simulation run (Fig. 1.13, right). Since different particle types are visualized with different colors in Swarm Chemistry, this measurement
roughly represents how many new particle types emerged during the last time segment. Macroscopic structuredness was quantified by measuring a Kullback-Leibler divergence [Kullback and Leibler, 1951] of a pairwise particle distance distribution from
that of a theoretical case where particles are randomly and homogeneously spread over
the entire space (Fig. 1.13, left). Specifically, each snapshot bitmap image was first analyzed and converted into a list of coordinates (each representing the position of a
particle, or a colored pixel), then a pair of coordinates were randomly sampled from
the list 100,000 times to generate an approximate pairwise particle distance distribution in the bitmap image. The Kullback-Leibler divergence of the approximate distance
distribution from the homogeneous case is larger when the swarm is distributed in a
less homogeneous manner, forming macroscopic structures.
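Both snapshot-based measurements can be computed with a few lines of array code. The sketch below is a rough illustration under simplifying assumptions (snapshots given as RGB arrays with a known background color, a uniform-random reference distribution drawn per call, a fixed bin count); it is not the code used in [Sayama and Wong, 2011].

```python
# Illustrative reimplementation of the two snapshot metrics:
# (i) evolutionary exploration = number of colors newly appearing in a snapshot,
# (ii) structuredness = KL divergence of the sampled pairwise-distance
#      distribution from that of uniformly spread points.
import numpy as np

def new_colors(snapshot, seen):
    """Count colors in this snapshot never seen before; update the 'seen' set."""
    colors = {tuple(c) for c in snapshot.reshape(-1, snapshot.shape[-1])}
    fresh = colors - seen
    seen |= fresh
    return len(fresh)

def structuredness(snapshot, background=(255, 255, 255),
                   n_pairs=100_000, n_bins=50, seed=0):
    """KL divergence of the pairwise particle-distance distribution from the
    distribution obtained for points spread uniformly over the same image."""
    rng = np.random.default_rng(seed)
    ys, xs = np.where(~np.all(snapshot == background, axis=-1))
    pts = np.column_stack([xs, ys]).astype(float)

    i = rng.integers(0, len(pts), n_pairs)
    j = rng.integers(0, len(pts), n_pairs)
    d = np.linalg.norm(pts[i] - pts[j], axis=1)

    h, w = snapshot.shape[:2]
    ref = np.column_stack([rng.uniform(0, w, len(pts)), rng.uniform(0, h, len(pts))])
    dr = np.linalg.norm(ref[rng.integers(0, len(ref), n_pairs)]
                        - ref[rng.integers(0, len(ref), n_pairs)], axis=1)

    bins = np.linspace(0, np.hypot(w, h), n_bins + 1)
    p, _ = np.histogram(d, bins=bins)
    q, _ = np.histogram(dr, bins=bins)
    p = (p + 1e-12) / p.sum()
    q = (q + 1e-12) / q.sum()
    return float(np.sum(p * np.log(p / q)))
```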
We applied these measurements to simulation runs obtained under each of the four
conditions shown in Table 1.3. Results are summarized in Figs. 1.14 and 1.15. Figure
1.14 clearly shows the high evolutionary exploration occurring under the conditions
with high mutation rates and environmental perturbations. In the meantime, Figure
1.15 shows that the “original-high” condition had a tendency to destroy macroscopic structures by allowing swarms to evolve toward simpler, homogeneous forms.
Fig. 1.13. Methods to quantify evolutionary exploration (right) and macroscopic structuredness (left) directly from a sequence of snapshots (bitmap images, center).
Fig. 1.14. Temporal changes of the evolutionary exploration measurement (i.e., number of new colors per 500 time steps) for four different experimental conditions, calculated from snapshots of simulation runs taken at 500 time step intervals (from [Sayama and Wong, 2011]). Each curve shows the average result over 12 simulation runs (3 independent runs × 4 different initial conditions given in [Sayama, 2011]). Sharp spikes seen in “high” conditions were due to dynamic exogenous perturbations.
Such
degradation of structuredness over time was, as mentioned earlier, due to a technical
problem in the previous implementation of collision detection [Sayama, 2011, Sayama
and Wong, 2011] that mistakenly depended on perception ranges of particles. The “revised” conditions used a fixed collision detection algorithm. This modification was
found to have the effect of maintaining macroscopic structures for a prolonged period of
time (Fig. 1.15). Combining these results together (Fig. 1.16), we were able to detect automatically that the “revised-high” condition was most successful in producing
interesting designs, maintaining macroscopic structures without losing evolutionary
exploration. This conclusion also matched subjective observations made by human
users.
1.5 Conclusions
In this chapter, we have reviewed our recent work on two complementary approaches
for guiding designs of self-organizing heterogeneous swarms. The common design
challenge addressed in both approaches was the lack of explicit criteria for what constitutes a “good” design to produce. In the first approach, this challenge was solved by
having a human user as an active initiator of evolutionary design processes.
Fig. 1.15. Temporal changes of the macroscopic structuredness measurement (i.e., Kullback-Leibler divergence of the pairwise particle distance distribution from that of a purely random case) for four different experimental conditions, calculated from snapshots of simulation runs taken at 500 time step intervals (from [Sayama and Wong, 2011]). Each curve shows the average result over 12 simulation runs (3 independent runs × 4 different initial conditions). The “original-high” condition loses macroscopic structures while other conditions successfully maintain them.
In the second approach, the criteria were replaced by low-level competition functions (similar
to laws of physics) that drive spontaneous evolution of swarms in a virtual ecosystem.
The core message arising from both approaches is the unique power of evolutionary processes for designing self-organizing complex systems. It is uniquely powerful
because evolution does not require any macroscopic plan, strategy or global direction
for the design to proceed. As long as the designer—this could be either an intelligent
entity or a simple unintelligent machinery—can make local decisions at microscopic
levels, the process drives itself to various novel designs through unprescribed evolutionary pathways. Designs made through such open-ended evolutionary processes may
have a potential to be more creative and innovative than those produced through optimization for explicit selection criteria.
We conclude this chapter with a famous quote by Richard Feynman. At the time
of his death, Feynman wrote on a blackboard, “What I cannot create, I do not understand.” This is a concise yet profound sentence that beautifully summarizes the
role and importance of constructive understanding (i.e., model building) in scientific
endeavors, which hits home particularly well for complex systems researchers. But research on evolutionary design of complex systems, including ours discussed here, has
illustrated that the logical converse of the above quote is not necessarily true. That is,
evolutionary approaches make this also possible—“What I do not understand, I can
still create.”
Fig. 1.16. Evolutionary exploration and macroscopic structuredness averaged over t = 10,000 to 30,000 for each independent simulation run (from [Sayama and Wong, 2011], with
slight modifications). Each marker represents a data point taken from a single simulation run.
It is clearly observed that the “revised-high” condition (shaded in light blue) most successfully
achieved high evolutionary exploration without losing macroscopic structuredness.
Acknowledgments
We thank the following collaborators and students for their contributions to the research presented in this chapter: Shelley Dionne, Craig Laramee, David Sloan Wilson,
J. David Schaffer, Francis Yammarino, Benjamin James Bush, Hadassah Head, Tom
Raway, and Chun Wong. This material is based upon work supported by the US National Science Foundation under Grants No. 0737313 and 0826711, and also by the
Binghamton University Evolutionary Studies (EvoS) Small Grant (FY 2011).
References
Anderson, C. (2006). Creation of desirable complexity: strategies for designing self-organized systems. In Complex Engineered Systems, pages 101–121. Springer.
Banzhaf, W. (2000). Interactive evolution. Evolutionary Computation, 1:228–236.
Bar-Yam, Y. (2003). Dynamics of complex systems. Westview Press.
Bedau, M. A. and Brown, C. T. (1999). Visualizing evolutionary activity of genotypes. Artificial Life, 5(1):17–35.
Bedau, M. A. and Packard, N. H. (1992). Measurement of evolutionary activity, teleology, and
life. In Artificial Life II, pages 431–461. Addison-Wesley.
Bentley, P. (1999). Evolutionary design by computers. Morgan Kaufmann.
Bentley, P. J. and O’Reilly, U.-M. (2001). Ten steps to make a perfect creative evolutionary
design system. In GECCO 2001 Workshop on Non-Routine Design with Evolutionary Systems.
Boccara, N. (2010). Modeling complex systems. Springer.
Bush, B. J. and Sayama, H. (2011). Hyperinteractive evolutionary computation. Evolutionary
Computation, IEEE Transactions on, 15(3):424–433.
Camazine, S. (2003). Self-organization in biological systems. Princeton University Press.
Cheng, J., Cheng, W., and Nagpal, R. (2005). Robust and self-repairing formation control for
swarms of mobile agents. In AAAI, volume 5, pages 59–64.
Couzin, I. D., Krause, J., James, R., Ruxton, G. D., and Franks, N. R. (2002). Collective
memory and spatial sorting in animal groups. Journal of theoretical biology, 218(1):1–11.
Dawkins, R. (1996). The blind watchmaker: Why the evidence of evolution reveals a universe
without design. WW Norton & Company.
Dittrich, P., Ziegler, J., and Banzhaf, W. (2001). Artificial chemistries - a review. Artificial Life, 7(3):225–275.
Doursat, R., Sayama, H., and Michel, O. (2012). Morphogenetic engineering: Reconciling
self-organization and architecture. In Morphogenetic Engineering, pages 1–24. Springer.
Hemelrijk, C. K. and Kunz, H. (2005). Density distribution and size sorting in fish schools: an
individual-based model. Behavioral Ecology, 16(1):178–187.
Kullback, S. and Leibler, R. A. (1951). On information and sufficiency. The Annals of Mathematical Statistics, 22(1):79–86.
Kunz, H. and Hemelrijk, C. K. (2003). Artificial fish schools: collective effects of school size,
body size, and body form. Artificial life, 9(3):237–253.
Minai, A. A., Braha, D., and Bar-Yam, Y. (2006). Complex engineered systems: A new
paradigm. Springer.
Nehaniv, C. L. (2000). Measuring evolvability as the rate of complexity increase. In Artificial
Life VII Workshop Proceedings, pages 55–57.
Newman, J. P. and Sayama, H. (2008). Effect of sensory blind zones on milling behavior in a
dynamic self-propelled particle model. Physical Review E, 78(1):011913.
Ottino, J. M. (2004). Engineering complex systems. Nature, 427(6973):399–399.
Pahl, G., Wallace, K., and Blessing, L. (2007). Engineering design: a systematic approach,
volume 157. Springer.
Ray, T. S. (1992). An approach to the synthesis of life. In Artificial Life II, pages 371–408.
Addison-Wesley.
Reynolds, C. W. (1987). Flocks, herds and schools: A distributed behavioral model. ACM
SIGGRAPH Computer Graphics, 21(4):25–34.
Salzberg, C., Antony, A., and Sayama, H. (2004). Evolutionary dynamics of cellular automata-based self-replicators in hostile environments. BioSystems, 78(1):119–134.
Salzberg, C. and Sayama, H. (2004). Complex genetic evolution of artificial self-replicators in
cellular automata. Complexity, 10(2):33–39.
Sayama, H. (2007). Decentralized control and interactive design methods for large-scale heterogeneous self-organizing swarms. In Advances in Artificial Life, pages 675–684. Springer.
Sayama, H. (2009). Swarm chemistry. Artificial Life, 15(1):105–114.
Sayama, H. (2010). Robust morphogenesis of robotic swarms. Computational Intelligence
Magazine, IEEE, 5(3):43–49.
Sayama, H. (2011). Seeking open-ended evolution in swarm chemistry. In Artificial Life
(ALIFE), 2011 IEEE Symposium on, pages 186–193. IEEE.
Sayama, H. (2012). Swarm-based morphogenetic artificial life. In Morphogenetic Engineering,
pages 191–208. Springer.
Sayama, H., Dionne, S., Laramee, C., and Wilson, D. S. (2009). Enhancing the architecture
of interactive evolutionary design for exploring heterogeneous particle swarm dynamics: An
in-class experiment. In Artificial Life, 2009. ALife’09. IEEE Symposium on, pages 85–91.
IEEE.
Sayama, H. and Wong, C. (2011). Quantifying evolutionary dynamics of swarm chemistry. In
Advances in Artificial Life, ECAL 2011: Proceedings of the Eleventh European Conference on
Artificial Life, pages 729–730.
Shneiderman, B., Plaisant, C., Cohen, M., and Jacobs, S. (2009). Designing the User Interface:
Strategies for Effective Human-Computer Interaction (5th Edition). Prentice Hall.
Sole, R. and Goodwin, B. (2008). Signs of life: How complexity pervades biology. Basic
books.
Takagi, H. (2001). Interactive evolutionary computation: Fusion of the capabilities of ec optimization and human evaluation. Proceedings of the IEEE, 89(9):1275–1296.
Unemi, T. (2003). Simulated breeding–a framework of breeding artifacts on the computer.
Kybernetes, 32(1/2):203–220.
| 9 |
Multiplexing Analysis of Millimeter-Wave Massive
MIMO Systems
arXiv:1801.02987v2 [] 21 Jan 2018
Dian-Wu Yue, Ha H. Nguyen and Shuai Xu
Abstract—This paper is concerned with spatial multiplexing
analysis for millimeter-wave (mmWave) massive MIMO systems.
For a single-user mmWave system employing distributed antenna
subarray architecture in which the transmitter and receiver
consist of Kt and Kr subarrays, respectively, an asymptotic
multiplexing gain formula is firstly derived when the numbers of
antennas at subarrays go to infinity. Specifically, assuming that
all subchannels have the same number of propagation paths L,
the formula states that by employing such a distributed antennasubarray architecture, an exact multiplexing gain of Ns can be
achieved, where Ns ≤ Kr Kt L is the number of data streams.
This result means that compared to the co-located antenna
architecture, using the distributed antenna-subarray architecture
can scale up the maximum multiplexing gain proportionally to
Kr Kt . In order to further reveal the relation between diversity
gain and multiplexing gain, a simple characterization of the
diversity-multiplexing tradeoff is also given. The multiplexing
gain analysis is then extended to the multiuser scenario. Moreover, simulation results obtained with the hybrid analog/digital
processing corroborate the analysis results.
Index Terms—Millimeter-wave communications, massive
MIMO, multiplexing gain, diversity gain, diversity-multiplexing
tradeoff, distributed antenna-subarrays, hybrid precoding.
I. I NTRODUCTION
Recently, millimeter-wave (mmWave) communication has
gained considerable attention as a candidate technology for
5G mobile communication systems and beyond [1]–[3]. The
main reason for this is the availability of vast spectrum in the
mmWave band (typically 30-300 GHz) that is very attractive
for high data rate communications. However, compared to
communication systems operating at lower microwave frequencies (such as those currently used for 4G mobile communications), propagation loss at mmWave frequencies is higher by orders of magnitude. Fortunately, given the
much smaller carrier wavelengths, mmWave communication
systems can make use of compact massive antenna arrays to
compensate for the increased propagation loss.
Nevertheless, the large-scale antenna arrays together with
high cost and large power consumption of the mixed analog/digital signal components makes it difficult to equip a
Dian-Wu Yue is with the College of Information Science and Technology, Dalian Maritime University, Dalian, Liaoning 116026, China (e-mail:
[email protected]), and also with the Department of Electrical and Computer Engineering, University of Saskatchewan, 57 Campus Drive, Saskatoon,
SK, Canada S7N 5A9.
Ha H. Nguyen is with the Department of Electrical and Computer Engineering, University of Saskatchewan, 57 Campus Drive, Saskatoon, SK, Canada
S7N 5A9 (e-mail: [email protected]).
Shuai Xu is with the College of Information Science and Technology, Dalian Maritime University, Dalian, Liaoning 116026, China (e-mail:
xu [email protected]).
separate radio-frequency (RF) chain for each antenna and
perform all the signal processing in the baseband. Therefore,
research on hybrid analog-digital processing of precoder and
combiner for mmWave communication systems has attracted
very strong interests from both academia and industry [4] −
[16]. In particular, a lot of work has been performed to
address challenges in using a limited number of RF chains. For
example, the authors in [4] considered single-user precoding
in mmWave massive MIMO systems and established the
optimality of beam steering for both single-stream and multistream transmission scenarios. In [10], the authors showed that
hybrid processing can realize any fully digital processing if the
number of RF chains is twice the number of data streams.
However, due to the fact that mmWave signal propagation
has an important feature of multipath sparsity in both the
temporal and spatial domains [17]–[20], it is expected that
the potentially available benefits of diversity and multiplexing
are indeed not large if the deployment of the antenna arrays is
co-located. In order to enlarge diversity/multiplexing gains in
mmWave massive MIMO communication systems, this paper considers the use of a more general array architecture, called the distributed antenna subarray architecture, which includes the co-located array architecture as a special case. It is pointed out
that distributed antenna systems have received strong interest
as a promising technique to satisfy such growing demands for
future wireless communication networks due to the increased
spectral efficiency and expanded coverage [21] − [25].
It is well known that diversity-multiplexing tradeoff (DMT)
is a compact and convenient framework to compare different MIMO systems in terms of the two main and related
system indicators: data rate and error performance [26]–[31].
This tradeoff was originally characterized by Zheng and Tse
[26] for MIMO communication systems operating over i.i.d.
Rayleigh fading channels. The framework has then ignited a
lot of interests in analyzing various communication systems
and under different channel models. For a mmWave massive
MIMO system, how to quantify the diversity and multiplexing
performance and further characterize its DMT is a fundamental
and open research problem. In particular, to the best of
our knowledge, until now there is no unified multiplexing
gain analysis for mmWave massive MIMO systems that is
applicable to both co-located and distributed antenna array
architectures.
To fill this gap, this paper investigates the multiplexing
performance of mmWave massive MIMO systems with the
proposed distributed subarray architecture. The focus is on
the asymptotical multiplexing gain analysis in order to find
out the potential multiplexing advantage provided by multiple
distributed antenna arrays. The obtained analysis can be used
conveniently to compare various mmWave massive MIMO
systems with different distributed antenna array structures.
The main contributions of this paper are summarized as
follows:
• For a single-user system with the proposed distributed
subarray architecture, a multiplexing gain expression is
obtained when the number of antennas at each subarray
increases without bound. This expression clearly indicates
that one can obtain a large multiplexing gain by employing the distributed subarray architecture.
• A simple DMT characterization is further given. It can
reveal the relation between diversity gain and multiplexing gain and let us obtain insights to understand the
overall resources provided by the distributed antenna
architecture.
• The multiplexing gain analysis is then extended to the
multiuser scenario with downlink and uplink transmission.
• Simulation results are provided to corroborate the analysis results and show that the distributed subarray architecture yields significantly better multiplexing performance
than the co-located single-array architecture.
The remainder of this paper is organized as follows. Section
II describes the massive MIMO system model and hybrid processing with the distributed subarray architecture in mmWave
fading channels. Section III and Section IV provides the
asymptotical achievable rate analysis and the multiplexing gain
analysis for the single-user mmWave system, respectively. In
Section V, the multiplexing gain analysis is extended to the
multiuser scenario. Section VI concludes the paper.
Throughout this paper, the following notations are used.
Boldface upper and lower case letters denote matrices and
column vectors, respectively. The superscripts (·)T and (·)H
stand for transpose and conjugate-transpose, respectively.
diag{a1, a2, . . . , aN} stands for a diagonal matrix with diagonal elements {a1, a2, . . . , aN}. The expectation operator is denoted by E(·). [A]ij gives the (i, j)th entry of matrix A. A ⊗ B is the Kronecker product of A and B. We write a
function a(x) of x as o(x) if limx→0 a(x)/x = 0. We use (x)+
to denote max{0, x}. Finally, CN (0, 1) denotes a circularly
symmetric complex Gaussian random variable with zero mean
and unit variance.
II. S YSTEM M ODEL
Consider a single-user mmWave massive MIMO system as
shown in Fig. 1. The transmitter is equipped with a distributed
antenna array to send Ns data streams to a receiver, which
is also equipped with a distributed antenna array. Here, a
distributed antenna array means an array consisting of several
remote antenna units (RAUs) (i.e., antenna subarrays) that are
distributively located, as depicted in Fig. 2. Specifically, the
antenna array at the transmitter consists of Kt RAUs, each of
which has Nt antennas and is connected to a baseband processing unit (BPU) by fiber. Likewise, the distributed antenna
array at the receiver consists of Kr RAUs, each having Nr
antennas and also being connected to a BPU by fibers. Such
a MIMO system shall be referred to as a (Kt , Nt , Kr , Nr )
distributed MIMO (D-MIMO) system. When Kt = Kr = 1,
the system reduces to a conventional co-located MIMO (CMIMO) system.
The transmitter accepts as its input Ns data streams and is equipped with N_t^{(rf)} RF chains, where N_s ≤ N_t^{(rf)} ≤ N_t K_t. Given N_t^{(rf)} transmit RF chains, the transmitter can apply a low-dimension N_t^{(rf)} × N_s baseband precoder, W_t, followed by a high-dimension K_t N_t × N_t^{(rf)} RF precoder, F_t. Note that amplitude and phase modifications are feasible for the baseband precoder W_t, while only phase changes can be made by the RF precoder F_t through the use of variable phase shifters and combiners. The transmitted signal vector can be written as

x = F_t W_t P_t^{1/2} s,   (1)

where P_t = [p_{ij}] is a diagonal power allocation matrix with \sum_{l=1}^{N_s} p_{ll} = 1 and s is the N_s × 1 symbol vector such that E[s s^H] = P I_{N_s}. Thus P represents the average total input power. Considering a narrowband block fading channel, the K_r N_r × 1 received signal vector is

y = H F_t W_t P_t^{1/2} s + n,   (2)

where H is the K_r N_r × K_t N_t channel matrix and n is a K_r N_r × 1 vector consisting of i.i.d. CN(0, 1) noise samples. Throughout this paper, H is assumed known to both the transmitter and receiver. Given that N_r^{(rf)} RF chains (where N_s ≤ N_r^{(rf)} ≤ N_r K_r) are used at the receiver to detect the N_s data streams, the processed signal is given by

z = W_r^H F_r^H H F_t W_t P_t^{1/2} s + W_r^H F_r^H n,   (3)

where F_r is the K_r N_r × N_r^{(rf)} RF combining matrix, and W_r is the N_r^{(rf)} × N_s baseband combining matrix. When Gaussian symbols are transmitted over the mmWave channel, the system achievable rate is expressed as

R = \log_2 \left| I_{N_s} + P R_n^{-1} W_r^H F_r^H H F_t W_t P_t W_t^H F_t^H H^H F_r W_r \right|,   (4)

where R_n = W_r^H F_r^H F_r W_r.
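As a concrete illustration of (1)-(4), the short Python sketch below draws random matrices of the stated dimensions, constrains the RF stages to unit-modulus (phase-only) entries, and evaluates the achievable rate. All dimension values and the equal power allocation are placeholder assumptions for illustration, not values taken from the paper.

```python
# Minimal numerical sketch of the hybrid transmit/receive model (1)-(4)
# with random matrices and phase-only RF stages.
import numpy as np

rng = np.random.default_rng(1)
Kt, Nt, Kr, Nr = 2, 8, 2, 8
Nrf_t, Nrf_r, Ns, P = 4, 4, 2, 1.0

def unit_modulus(shape):
    """Phase-only (constant-modulus) RF precoder/combiner entries."""
    return np.exp(1j * rng.uniform(0, 2 * np.pi, shape)) / np.sqrt(shape[0])

def crandn(shape):
    return (rng.standard_normal(shape) + 1j * rng.standard_normal(shape)) / np.sqrt(2)

H  = crandn((Kr * Nr, Kt * Nt))          # placeholder channel realization
Ft = unit_modulus((Kt * Nt, Nrf_t)); Wt = crandn((Nrf_t, Ns))
Fr = unit_modulus((Kr * Nr, Nrf_r)); Wr = crandn((Nrf_r, Ns))
Pt = np.eye(Ns) / Ns                     # equal power allocation, trace = 1

# Achievable rate (4)
A  = Wr.conj().T @ Fr.conj().T @ H @ Ft @ Wt
Rn = Wr.conj().T @ Fr.conj().T @ Fr @ Wr
R  = np.log2(np.linalg.det(np.eye(Ns) + P * np.linalg.solve(Rn, A @ Pt @ A.conj().T))).real
print(f"achievable rate for this realization: {R:.2f} bits/s/Hz")
```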
Furthermore, according to the architecture of RAUs at the
transmitting and receiving ends, H can be written as
H = \begin{bmatrix} \sqrt{g_{11}} H_{11} & \cdots & \sqrt{g_{1K_t}} H_{1K_t} \\ \vdots & \ddots & \vdots \\ \sqrt{g_{K_r 1}} H_{K_r 1} & \cdots & \sqrt{g_{K_r K_t}} H_{K_r K_t} \end{bmatrix}.   (5)
In the above expression, gij represents the large scale fading
effect between the ith RAU at the receiver and the jth RAU
at the transmitter, which is assumed to be constant over many
coherence-time intervals. The normalized subchannel matrix
Hij represents the MIMO channel between the jth RAU at
the transmitter and the ith RAU at the receiver. We assume
that all of the {Hij} are mutually independent.
Fig. 1. Block diagram of a mmWave massive MIMO system with distributed antenna arrays.
Fig. 2. Illustration of distributed antenna array deployment.
A clustered channel model based on the extended Saleh-Valenzuela model is often used in mmWave channel modeling and standardization [4] and it is also adopted in this paper. For simplicity of exposition, each scattering cluster is assumed to contribute a single propagation path.¹ Using this model, the subchannel matrix H_{ij} is given by

H_{ij} = \sqrt{\frac{N_t N_r}{L_{ij}}} \sum_{l=1}^{L_{ij}} \alpha_{ij}^l a_r(\phi_{ij}^{rl}, \theta_{ij}^{rl}) a_t(\phi_{ij}^{tl}, \theta_{ij}^{tl})^H,   (6)

where L_{ij} is the number of propagation paths, \alpha_{ij}^l is the complex gain of the lth ray, and \phi_{ij}^{rl} (\theta_{ij}^{rl}) and \phi_{ij}^{tl} (\theta_{ij}^{tl}) are its random azimuth (elevation) angles of arrival and departure, respectively. Without loss of generality, the complex gains \alpha_{ij}^l are assumed to be CN(0, 1).² The vectors a_r(\phi_{ij}^{rl}, \theta_{ij}^{rl}) and a_t(\phi_{ij}^{tl}, \theta_{ij}^{tl}) are the normalized receive/transmit array response vectors at the corresponding angles of arrival/departure. For an N-element uniform linear array (ULA), the array response vector is

a_{ULA}(\phi) = \frac{1}{\sqrt{N}} \left[ 1, e^{j 2\pi \frac{d_u}{\lambda} \sin(\phi)}, \ldots, e^{j 2\pi (N-1) \frac{d_u}{\lambda} \sin(\phi)} \right]^T,   (7)

where λ is the wavelength of the carrier and d_u is the inter-element spacing. It is pointed out that the angle θ is not included in the argument of a_{ULA} since the response for a ULA is independent of the elevation angle. In contrast, for a uniform planar array (UPA), which is composed of N_h and N_v antenna elements in the horizontal and vertical directions, respectively, the array response vector is represented by

a_{UPA}(\phi, \theta) = a_{ULA,h}(\phi) \otimes a_{ULA,v}(\theta),   (8)

where

a_{ULA,h}(\phi) = \frac{1}{\sqrt{N_h}} \left[ 1, e^{j 2\pi \frac{d_h}{\lambda} \sin(\phi)}, \ldots, e^{j 2\pi (N_h - 1) \frac{d_h}{\lambda} \sin(\phi)} \right]^T   (9)

and

a_{ULA,v}(\theta) = \frac{1}{\sqrt{N_v}} \left[ 1, e^{j 2\pi \frac{d_v}{\lambda} \sin(\theta)}, \ldots, e^{j 2\pi (N_v - 1) \frac{d_v}{\lambda} \sin(\theta)} \right]^T.   (10)

¹ This assumption can be relaxed to account for clusters with finite angular spreads and the results obtained in this paper can be readily extended for such a case.
² The different variances of \alpha_{ij}^l can be easily accounted for by absorbing them into the large-scale fading coefficients g_{ij}.
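For readers who want to experiment with this channel model, the following Python sketch builds the ULA response of (7), the UPA response of (8)-(10) as a Kronecker product, and a clustered subchannel per (6). Array sizes, half-wavelength spacing, and unit large-scale gains are placeholder assumptions.

```python
# Sketch of the clustered subchannel model (6) with ULA/UPA responses (7)-(10).
import numpy as np

def a_ula(n, phi, d_over_lambda=0.5):
    """N-element ULA response vector at azimuth phi, eq. (7)."""
    k = np.arange(n)
    return np.exp(1j * 2 * np.pi * d_over_lambda * k * np.sin(phi)) / np.sqrt(n)

def a_upa(nh, nv, phi, theta, d_over_lambda=0.5):
    """UPA response as a Kronecker product of two ULA responses, eqs. (8)-(10)."""
    return np.kron(a_ula(nh, phi, d_over_lambda), a_ula(nv, theta, d_over_lambda))

def subchannel(nt, nr, L, rng):
    """Normalized subchannel H_ij of eq. (6) with L single-path clusters."""
    H = np.zeros((nr, nt), dtype=complex)
    for _ in range(L):
        alpha = (rng.standard_normal() + 1j * rng.standard_normal()) / np.sqrt(2)
        phi_r, phi_t = rng.uniform(-np.pi / 2, np.pi / 2, 2)
        H += alpha * np.outer(a_ula(nr, phi_r), a_ula(nt, phi_t).conj())
    return np.sqrt(nt * nr / L) * H

rng = np.random.default_rng(0)
Hij = subchannel(nt=32, nr=32, L=3, rng=rng)
print(Hij.shape, np.linalg.matrix_rank(Hij))   # rank is at most L
```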
III. ASYMPTOTIC ACHIEVABLE RATE ANALYSIS
From the structure and definition of the channel matrix H in Section II, there is a total of L_s = \sum_{i=1}^{K_r} \sum_{j=1}^{K_t} L_{ij} propagation paths. Naturally, H can be decomposed into a sum of L_s rank-one matrices, each corresponding to one propagation path. Specifically, H can be rewritten as

H = \sum_{i=1}^{K_r} \sum_{j=1}^{K_t} \sum_{l=1}^{L_{ij}} \tilde{\alpha}_{ij}^l \tilde{a}_r(\phi_{ij}^{rl}, \theta_{ij}^{rl}) \tilde{a}_t(\phi_{ij}^{tl}, \theta_{ij}^{tl})^H,   (11)

where

\tilde{\alpha}_{ij}^l = \sqrt{g_{ij} \frac{N_t N_r}{L_{ij}}} \, \alpha_{ij}^l,   (12)

\tilde{a}_r(\phi_{ij}^{rl}, \theta_{ij}^{rl}) is a K_r N_r × 1 vector whose bth entry is defined as

[\tilde{a}_r(\phi_{ij}^{rl}, \theta_{ij}^{rl})]_b = [a_r(\phi_{ij}^{rl}, \theta_{ij}^{rl})]_{b-(i-1)N_r} for b ∈ Q_i^r, and 0 for b ∉ Q_i^r,   (13)

where Q_i^r = ((i-1)N_r, iN_r]. And \tilde{a}_t(\phi_{ij}^{tl}, \theta_{ij}^{tl}) is a K_t N_t × 1 vector whose bth entry is defined as

[\tilde{a}_t(\phi_{ij}^{tl}, \theta_{ij}^{tl})]_b = [a_t(\phi_{ij}^{tl}, \theta_{ij}^{tl})]_{b-(j-1)N_t} for b ∈ Q_j^t, and 0 for b ∉ Q_j^t,   (14)

where Q_j^t = ((j-1)N_t, jN_t]. Regarding {\tilde{a}_r(\phi_{ij}^{rl}, \theta_{ij}^{rl})} and {\tilde{a}_t(\phi_{ij}^{tl}, \theta_{ij}^{tl})}, we have the following lemma from [32].
Lemma 1: Suppose that the antenna configurations at all RAUs are either ULA or UPA. Then all L_s vectors {\tilde{a}_r(\phi_{ij}^{rl}, \theta_{ij}^{rl})} are orthogonal to each other when N_r → ∞. Likewise, all L_s vectors {\tilde{a}_t(\phi_{ij}^{tl}, \theta_{ij}^{tl})} are orthogonal to each other when N_t → ∞.
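Lemma 1 can be illustrated numerically: the sketch below builds a few block-sparse receive response vectors of the form (13) and shows that their mutual correlations shrink as the per-RAU array size N_r grows. The angles, RAU assignments, and sizes are arbitrary choices for illustration only, and this is a sanity check rather than a proof.

```python
# Numerical illustration of Lemma 1 for the block-sparse receive responses (13).
import numpy as np

def a_ula(n, phi):
    k = np.arange(n)
    return np.exp(1j * np.pi * k * np.sin(phi)) / np.sqrt(n)   # d_u = lambda/2 assumed

def a_tilde(Kr, Nr, i, phi):
    """Kr*Nr x 1 vector of eq. (13): the ULA response placed in the i-th block."""
    v = np.zeros(Kr * Nr, dtype=complex)
    v[i * Nr:(i + 1) * Nr] = a_ula(Nr, phi)
    return v

rng = np.random.default_rng(0)
Kr, n_paths = 2, 6
phis = rng.uniform(-np.pi / 2, np.pi / 2, n_paths)
raus = rng.integers(0, Kr, n_paths)
for Nr in (8, 64, 512):
    V = np.column_stack([a_tilde(Kr, Nr, i, p) for i, p in zip(raus, phis)])
    G = np.abs(V.conj().T @ V - np.eye(n_paths))   # off-diagonal correlations
    print(f"Nr = {Nr:4d}: max pairwise correlation = {G.max():.3f}")
```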
Mathematically, the distributed massive MIMO system can be considered as a co-located massive MIMO system with L_s paths that have complex gains {\tilde{\alpha}_{ij}^l}, receive array response vectors {\tilde{a}_r(\phi_{ij}^{rl}, \theta_{ij}^{rl})} and transmit response vectors {\tilde{a}_t(\phi_{ij}^{tl}, \theta_{ij}^{tl})}. Furthermore, order all paths in a decreasing order of the absolute values of the complex gains {\tilde{\alpha}_{ij}^l}. Then the channel matrix can be written as

H = \sum_{l=1}^{L_s} \tilde{\alpha}^l \tilde{a}_r(\phi^{rl}, \theta^{rl}) \tilde{a}_t(\phi^{tl}, \theta^{tl})^H,   (15)

where \tilde{\alpha}^1 ≥ \tilde{\alpha}^2 ≥ · · · ≥ \tilde{\alpha}^{L_s}. One can rewrite H in a matrix form as

H = A_r D A_t^H,   (16)

where D is a L_s × L_s diagonal matrix with [D]_{ll} = \tilde{\alpha}^l, and A_r and A_t are defined as follows:

A_r = [\tilde{a}_r(\phi^{r1}, \theta^{r1}), \ldots, \tilde{a}_r(\phi^{rL_s}, \theta^{rL_s})]   (17)

and

A_t = [\tilde{a}_t(\phi^{t1}, \theta^{t1}), \ldots, \tilde{a}_t(\phi^{tL_s}, \theta^{tL_s})].   (18)

Since both {\tilde{a}_r(\phi^{rl}, \theta^{rl})} and {\tilde{a}_t(\phi^{tl}, \theta^{tl})} are orthogonal vector sets when N_r → ∞ and N_t → ∞, A_r and A_t are asymptotically unitary matrices. Then one can form a singular value decomposition (SVD) of matrix H as

H = U \Sigma V^H = [A_r | A_r^\perp] \Sigma [\tilde{A}_t | \tilde{A}_t^\perp]^H,   (19)

where \Sigma is a diagonal matrix containing all singular values on its diagonal, i.e.,

[\Sigma]_{ll} = |\tilde{\alpha}^l| for 1 ≤ l ≤ L_s, and 0 for l > L_s,   (20)

and the submatrix \tilde{A}_t is defined as

\tilde{A}_t = [e^{-j\psi_1} \tilde{a}_t(\phi^{t1}, \theta^{t1}), \ldots, e^{-j\psi_{L_s}} \tilde{a}_t(\phi^{tL_s}, \theta^{tL_s})],   (21)

where \psi_l is the phase of the complex gain \tilde{\alpha}^l corresponding to the lth path. Based on (19), the optimal precoder and combiner are chosen, respectively, as

[F_t W_t]_{opt} = [e^{-j\psi_1} \tilde{a}_t(\phi^{t1}, \theta^{t1}), \ldots, e^{-j\psi_{N_s}} \tilde{a}_t(\phi^{tN_s}, \theta^{tN_s})]   (22)

and

[F_r W_r]_{opt} = [\tilde{a}_r(\phi^{r1}, \theta^{r1}), \ldots, \tilde{a}_r(\phi^{rN_s}, \theta^{rN_s})].   (23)

To summarize, when N_t and N_r are large enough, the massive MIMO system can employ the optimal precoder and combiner given in (22) and (23), respectively.
Now suppose that \tilde{\alpha}^l = \tilde{\alpha}_{ij}^{l'} = \sqrt{g_{ij} \frac{N_t N_r}{L_{ij}}} \alpha_{ij}^{l'} for a given l. We introduce two notations:

\tilde{\gamma}_l = P p_{ll} g_{ij} \frac{N_t N_r}{L_{ij}}   (24)

and

\tilde{\beta}_l = \alpha_{ij}^{l'}.   (25)

Then it follows from the above SVD analysis that the instantaneous SNR of the lth data stream is given by

SNR_l = P p_{ll} |\tilde{\alpha}^l|^2 = \tilde{\gamma}_l |\tilde{\beta}_l|^2, \quad l = 1, 2, \ldots, N_s.   (26)

So we obtain another lemma.
Lemma 2: Suppose that both sets {\tilde{a}_r(\phi_{ij}^{rl}, \theta_{ij}^{rl})} and {\tilde{a}_t(\phi_{ij}^{tl}, \theta_{ij}^{tl})} are orthogonal vector sets when N_r → ∞ and N_t → ∞. Let N_s ≤ L_s. Then, in the limit of large N_t and N_r, the system achievable rate is given by

R = \sum_{l=1}^{N_s} \log_2(1 + \tilde{\gamma}_l |\tilde{\beta}_l|^2).   (27)
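The decomposition (11) and the beam-steering interpretation of (22)-(27) can be checked numerically. The sketch below assembles H from its path components, steers one stream along each path, and accumulates the per-stream rates of (27). The specific geometry (two RAUs per side, three paths per subchannel, unit large-scale gains, equal power allocation) is an illustrative assumption, and the result is only approximate at finite array sizes.

```python
# Sketch tying together (11), (22)-(23) and (26)-(27) for one channel realization.
import numpy as np

rng = np.random.default_rng(0)
Kt = Kr = 2; Nt = Nr = 64; L = 3; P = 10.0

def a_ula(n, phi):
    k = np.arange(n)
    return np.exp(1j * np.pi * k * np.sin(phi)) / np.sqrt(n)

def embed(v, block, K, N):
    """Place a length-N response into the given RAU block, as in (13)-(14)."""
    out = np.zeros(K * N, dtype=complex)
    out[block * N:(block + 1) * N] = v
    return out

paths = []   # (gain, a_r tilde, a_t tilde) per path, eq. (11); g_ij = 1 assumed
for i in range(Kr):
    for j in range(Kt):
        for _ in range(L):
            alpha = (rng.standard_normal() + 1j * rng.standard_normal()) / np.sqrt(2)
            gain = np.sqrt(Nt * Nr / L) * alpha
            ar = embed(a_ula(Nr, rng.uniform(-np.pi / 2, np.pi / 2)), i, Kr, Nr)
            at = embed(a_ula(Nt, rng.uniform(-np.pi / 2, np.pi / 2)), j, Kt, Nt)
            paths.append((gain, ar, at))

H = sum(g * np.outer(ar, at.conj()) for g, ar, at in paths)

# Steer one stream along each of the Ns strongest paths (here Ns = Ls = Kr*Kt*L).
paths.sort(key=lambda p: -abs(p[0]))
Ns = len(paths)
p_ll = 1.0 / Ns                      # equal power allocation
rate = 0.0
for g, ar, at in paths[:Ns]:
    eff = ar.conj() @ H @ at         # close to g when Nt, Nr are large (Lemma 1)
    rate += np.log2(1 + P * p_ll * abs(eff) ** 2)   # per-stream term of (27)
print(f"approximate achievable rate (27): {rate:.1f} bits/s/Hz")
```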
Remark 1: (22) and (23) indicate that when N_t and N_r are large enough, the optimal precoder and combiner can be implemented fully in RF using phase shifters [4]. Furthermore, (13) and (14) imply that for each data stream only a couple of RAUs need the operation of phase shifters at each channel realization.
Remark 2: By using the optimal power allocation (i.e., the well-known waterfilling power allocation [33]), the system can achieve a maximum achievable rate, which is denoted as R_o. We use R_e(P/N_s) to denote the achievable rate obtained by using the equal power allocation, namely, p_{ll} = P/N_s, l = 1, 2, . . . , N_s. Then

R_e(P/N_s) ≤ R_o ≤ R_e((P N_s)/N_s) = R_e(P).   (28)

Taking the expectation of (28), it becomes

\bar{R}_e(P/N_s) ≤ \bar{R}_o ≤ \bar{R}_e(P).   (29)

In what follows, we derive an asymptotic expression of the ergodic achievable rate with the equal power allocation, \bar{R}_e(P/N_s) (or \bar{R}_e for simplicity). For this reason, we need to define an integral function

\Delta(x) = \frac{1}{x} \int_0^{+\infty} \log_2(1 + t) e^{-t/x} dt = \log_2(e) e^{1/x} E_1(1/x),   (30)

where E_1(·) is the first-order exponential integral function defined as [34], [35]

E_1(y) = \int_1^{+\infty} \frac{e^{-yt}}{t} dt = -E + \ln(y) - \sum_{k=1}^{\infty} \frac{(-y)^k}{k \cdot k!},   (31)

with E being the Euler constant.
Theorem 1: Suppose that both sets {\tilde{a}_r(\phi_{ij}^{rl}, \theta_{ij}^{rl})} and {\tilde{a}_t(\phi_{ij}^{tl}, \theta_{ij}^{tl})} are orthogonal vector sets when N_r → ∞ and N_t → ∞. Let N_s ≤ L_s and \tilde{\gamma}_1 = \tilde{\gamma}_2 = \ldots = \tilde{\gamma}_{L_s} = \tilde{\gamma}. Then in the limit of large N_t and N_r, the ergodic achievable rate with homogeneous coefficient set {\tilde{\gamma}_l}, denoted \bar{R}_{eh}, is given by

\bar{R}_{eh} = \sum_{l=1}^{N_s} \sum_{k=0}^{L_s - l} \frac{(-1)^{L_s - l - k} L_s!}{(L_s - l)!(l - 1)!} \binom{L_s - l}{k} \frac{\Delta(\tilde{\gamma}/(L_s - k))}{L_s - k}.   (32)

When N_s = L_s, \bar{R}_{eh} can be simplified to

\bar{R}_{eh} = L_s \Delta(\tilde{\gamma}).   (33)
Proof: Due to the assumptions that each complex gain \alpha_{ij}^l is CN(0, 1) and that the coefficient set {\tilde{\gamma}_l} is homogeneous, the instantaneous SNRs of the L_s available data streams are i.i.d. Let F(\gamma) and f(\gamma) denote the cumulative distribution function (CDF) and the probability density function (PDF) of the unordered instantaneous SNRs, respectively. Then \tilde{\gamma} is just the average receive SNR of each data stream. Furthermore, F(\gamma) and f(\gamma) can be written as

F(\gamma) = 1 - e^{-\gamma/\tilde{\gamma}}, \quad f(\gamma) = \frac{1}{\tilde{\gamma}} e^{-\gamma/\tilde{\gamma}}.   (34)

For the lth best data stream, based on the theory of order statistics [36], the PDF of the instantaneous receive SNR at the receiver, denoted \gamma_l, is given by

f_{l:L_s}(\gamma_l) = \frac{L_s!}{(L_s - l)!(l - 1)!} [F(\gamma_l)]^{L_s - l} [1 - F(\gamma_l)]^{l-1} f(\gamma_l).   (35)

Inserting (34) into (35), we have that

f_{l:L_s}(\gamma_l) = \sum_{k=0}^{L_s - l} \frac{L_s! (-1)^{L_s - l - k}}{(L_s - l)!(l - 1)!} \binom{L_s - l}{k} \frac{e^{-\gamma_l (L_s - k)/\tilde{\gamma}}}{\tilde{\gamma}}.   (36)

By the definition of the function \Delta(\cdot), the ergodic achievable rate for the lth data stream can therefore be written as

R_{eh}^{(l)} = \int_0^{+\infty} \log_2(1 + \gamma_l) f_{l:L_s}(\gamma_l) d\gamma_l = \sum_{k=0}^{L_s - l} \binom{L_s - l}{k} \frac{L_s! (-1)^{L_s - l - k}}{(L_s - l)!(l - 1)!} g_k(\tilde{\gamma}) = \sum_{k=0}^{L_s - l} \binom{L_s - l}{k} \frac{L_s! (-1)^{L_s - l - k}}{(L_s - l)!(l - 1)!} \frac{\Delta(\tilde{\gamma}/(L_s - k))}{L_s - k},   (37)

where

g_k(\tilde{\gamma}) = \int_0^{+\infty} \log_2(1 + \gamma_l) \frac{e^{-\gamma_l (L_s - k)/\tilde{\gamma}}}{\tilde{\gamma}} d\gamma_l.   (38)

So we can obtain the desired result (32). Finally, when N_s = L_s, we can readily prove (33) with the help of the knowledge of unordered statistics.
Remark 3: Now let N_s = L_s and assume that L_{ij} = L for any i and j (i.e., all subchannels H_{ij} have the same number of propagation paths). When N_r → ∞ and N_t → ∞, the ergodic achievable rate of the distributed MIMO system, \bar{R}_{eh}, can be rewritten as

\bar{R}_{eh}(K_t, K_r) = K_t K_r L \Delta(\tilde{\gamma}(K_t, K_r)).   (39)

Furthermore, consider a co-located MIMO system in which the numbers of transmit and receive antennas are equal to K_t N_t and K_r N_r, respectively. Then its asymptotic ergodic achievable rate can be expressed as

\bar{R}_{eh}(1, 1) = L \Delta(K_t K_r \tilde{\gamma}(K_t, K_r)).   (40)

Remark 4: Generally, the coefficient set {\tilde{\gamma}_l} is inhomogeneous. Let \tilde{\gamma}_{max} = max{\tilde{\gamma}_l} and \tilde{\gamma}_{min} = min{\tilde{\gamma}_l}. Then the ergodic achievable rate with inhomogeneous coefficient set {\tilde{\gamma}_l}, \bar{R}_e, has the following upper and lower bounds:

\bar{R}_{eh}(\tilde{\gamma}_{min}) ≤ \bar{R}_e({\tilde{\gamma}_l}) ≤ \bar{R}_{eh}(\tilde{\gamma}_{max}).   (41)

Fig. 3. Rate versus SNRa for different numbers of antennas.
Assuming that N_t^{(rf)} = N_r^{(rf)} = 2N_s [10] and N_r = N_t = N, Fig. 3 plots the ergodic achievable rate curves versus SNRa = \tilde{\gamma} for different numbers of antennas, N = 5, 10, 50. In Fig. 3, we set N_s = 6, K_r = K_t = 2, and L = 3. As expected, it can be seen that the rate performance is improved as N increases. Obviously, the rate curve with N = 10 is very close to the curve with limit results obtained based on (30), while the rate curve with N = 50 is almost the same as the curve with limit results. This verifies Theorem 1.
the curve with limit results. This verifies Theorem 1.
IV. M ULTIPLEXING G AIN A NALYSIS AND
D IVERSITY-M ULTIPLEXING T RADEOFF
A. Multiplexing Gain Analysis
P
Definition 1: Let γ̄ = L1s Ls
l=1 γ̃l . The distributed MIMO
system is said to achieve spatial multiplexing gain Gm if its
ergodic date rate with optimal power allocation satisfies
Gm (R̄o ) = lim
γ̄→∞
R̄o (γ̄)
.
log2 γ̄
(42)
rl
Theorem 2: Assume that both sets {ãr (φrl
ij , θij )} and
tl
tl
{ãt (φij , θij )} are orthogonal vector sets when Nr and Nt are
very large. Assume that Nr and Nt are always very large but
fixed and finite when γ̄ → ∞. Let Ns ≤ Ls . Then the spatial
multiplexing gain is given by
Gm (R̄o ) = Ns .
(43)
Proof: We first consider the simple homogeneous case with
γ̃1 = γ̃2 = . . . = γ̃Ls = γ̃ and derive the spatial multiplexing
gain with respect to R̄eh . In this case, γ̄ = γ̃. Obviously,
N
s
R̄eh (γ̄) X
R̄l (γ̄)
=
.
lim eh
γ̄→∞ log2 γ̄
γ̄→∞ log2 γ̄
Gm (R̄eh ) = lim
(44)
l=1
For the lth data stream, under the condition of very large Nt
and Nr , the individual ergodic rate can be written as
(l)
Reh (γ̄) = E log2 (1 + γ̄|β̃l |2 ).
(45)
6
B. Diversity-Multiplexing Tradeoff
18
16
14
Multiplexing Gain
The previous subsection shows how much the maximal
spatial multiplexing gain we can extract for a distributed
mmWave-massive MIMO system while our previous work in
[32] indicates how much the maximal spatial diversity gain
we can extract. However, maximizing one type of gain will
possibly result in minimizing the other. We need to bridge
between these two extremes in order to simultaneously obtain
both types of gains. We firstly give the precise definition of
diversity gain before we carry
Pon the analysis.
Definition 2: Let γ̄ = L1s Ls
l=1 γ̃l . The distributed MIMO
system is said to achieve spatial diversity gain Gd if its average
error probability satisfies
K=1: Ns=1
K=1: Ns=2
K=1: Ns=3
K=2: Ns=3
K=2: Ns=6
K=2: Ns=12
12
10
8
6
4
2
0
10
20
30
40
50
60
SNRa
70
80
90
100
Gd (P̄e ) = − lim
γ̄→∞
Fig. 4. Multiplexing gain versus SNRa for different numbers of data streams.
Gd (P̄out ) = − lim
γ̄→∞
(l)
(l)
(46)
and Reh (1) is a finite value, we can have that
(l)
Reh (γ̄)
γ̄→∞ log2 γ̄
log2 γ̄ + E log2 (|β̃l |2 )
= lim
γ̄→∞
log2 γ̄
= 1.
(47)
PNs (l)
Therefore, Gm (R̄eh ) = l=1 Gm (R̄eh ) = Ns .
Now we consider the general inhomogeneous case with
equal power allocation. Because cmin = γ̃min /γ̄ and cmax =
γ̃max /γ̄ be finite when γ̄ → ∞. Consequently, it readily
follows that both of the two systems with the achievable rates
R̄eh (γ̃min ) and R̄eh (γ̃max ) can achieve a multiplexing gain of
Gm (R̄eh ) = Ns . So we conclude from (41) that the distributed
MIMO system with the achievable rate R̄e (γ̄) can achieve a
multiplexing gain of Gm (R̄e ) = Ns .
Finally, it can be readily shown that the system with the
optimal achievable rate R̄o (γ̄) can only achieve multiplexing
gain Gm (R̄o ) = Ns since both of the equal power allocation
systems with the achievable rates R̄e (P/Ns ) and R̄e (P ) have
the same spatial multiplexing gain Ns .
Remark 5: Assume that Lij = L for any i and j. Theorem 2
implies that the distributed massive MIMO system can obtain
a maximum spatial multiplexing gain of Kr Kt L while the colocated massive MIMO system can only obtain a maximum
spatial multiplexing gain of L.
(rf)
(rf)
Now we let Nt = Nr = 2Ns [10] and Kr = Kt = K,
and set L = 3 and Nr = Nt = 50. We consider the
eh (γ̄)
. In order to
homogeneous case and define Ψ(γ̄) = R̄log
2 γ̄
verify Theorem 2, Fig.4 plots the curves of Ψ(γ̄) versus
SNRa = γ̄ for different numbers of data streams, namely,
Ns = 1, 2, 3 when K = 2 and Ns = 3, 6, 12 when K = 2. It
can be seen that for any given Ns , the function Ψ(γ̄) converges
to the limit value Ns as γ̄ grows large. This observation is
expected and agrees with Theorem 2.
G(l)
m (R̄eh ) =
lim
(48)
P̄out (γ̄)
.
log2 γ̄
(49)
or its outage probability satisfies
Noting that
E log2 (|β̃l |2 ) ≤ E log2 (1 + |β̃l |2 ) = Reh (1)
P̄e (γ̄)
.
log2 γ̄
With the help of a result of diversity analysis from [32], we
can derive the following DMT result.
rl
Theorem 3: Assume that both sets {ãr (φrl
ij , θij )} and
tl
tl
{ãt (φij , θij )} are orthogonal vector sets when Nr and Nt are
very large. Assume that Nr and Nt are always very large but
fixed and finite when γ̄ → ∞. Let Ns ≤ Ls . For a given
d ∈ [0, Ls ], by using optimal power allocation, the distributed
MIMO system with Ns data streams can reach the following
maximum spatial multiplexing gain at diversity gain Gd = d
Gm =
Ls
X
l=1
(1 −
d
)+ .
Ls − l + 1
(50)
Proof: We first consider the simple case where the distributed system is the one with equal power allocation and
the channel is the one with homogeneous large scale fading
coefficients. The distributed system has Ls available link paths
in all. For the lth best path, its individual maximum diversity
(l)
gain is equal to Gd = Ls − l + 1 [32]. Due to the fact that
(l)
each path can not obtain a multiplexing gain of Gm > 1
(l)
[31], we therefore design its target data rate R = rl log2 γ̄
with 0 ≤ rl ≤ 1. Then the individual outage probability is
expressed as
(l)
Pout
= P(log2 (1 + γ̄|β̃l |2 ) < rl log2 γ̄)
γ̄ rl − 1
).
= P(|β̃l |2 <
γ̄
(51)
From [37], [38], the PDF of the parameter µ = |β̃l |2 can be
written as
fµ = aµLs −l + o(µLs −l )
(52)
(l)
where a is a positive constant. So Pout can be rewritten as
(l)
Pout = (cγ̄)−(Ls −l+1)(1−rl ) + o((γ̄)−(Ls −l+1)(1−rl ) ) (53)
where c is a positive constant. This means that this path now
can obtain diversity gain
(l)
Gd = (Ls − l + 1)(1 − rl ).
(54)
7
Diversity−multiplexing Tradeoff
Diversity−multiplexing Tradeoff
14
(0, Ls)
Multiplexing Gain: Fractions
Multiplexing Gain: Integers
12
(0, 12)
10
Diversity Gain
Diversity Gain
(1/Ls, Ls−1)
(Gm(Ls−Ns+1), Ls−Ns+1)
8
6
4
(1, 7)
(2, 6)
(3, 4)
(5, 3)
2
(Ls, 0)
(6, 2)
0
(8, 1)
(12, 0)
−2
−2
Multiplexing Gain
Fig. 5. Diversity-multiplexing tradeoff Gm (d) for a general integer d.
(l)
or say
rl ≤ (1 −
1
)+ .
Ls − l + 1
Ls
X
l=1
rl =
Ls
X
l=1
(1 −
d
)+ .
Ls − l + 1
(56)
(57)
This proves that (50) holds under the special case. We
readily show that for a general case, the lth best path can also
reach a maximum diversity gain of Ls − l + 1. So applying
(41) and (29) leads to the desired result.
Remark 6: When d is an integer, Gm (d) can be expressed
simply. In particular, Gm (0) = Ls if d = 0; Gm (1) =
P
Ls −1 Ls −l
1
l=1 Ls −l+1 if d = 1; Gm (Ls − 1) = Ls if d = Ls − 1;
Gm (Ls ) = 0 if d = Ls . In general, if d = Ls − Ns + 1 for a
given integer Ns ≤ Ls , then
Gm (Ls − Ns + 1) =
NX
s −1
l=1
Ns − l
.
Ls − l + 1
4
6
8
Multiplexing Gain
10
12
14
V. M ULTIPLEXING G AIN A NALYSIS
S CENARIO
FOR THE
M ULTIUSER
(55)
To this end, under the condition that the diversity gain satisfies
Gd = d, the maximum spatial multiplexing gain of the
distributed system must be equal to
Gm (R̄eh ) =
2
Fig. 6. Diversity-multiplexing tradeoff when L = 3 and Kt = Kr = 2.
Since the distributed system requires the system diversity gain
Gd ≥ d, this implies that
Gd = (Ls − l + 1)(1 − rl ) ≥ d
0
(58)
The function Gm (d) is plottedPin Fig.5. Note that Gm (Ls −
Ns
1
Ns ) − Gm (Ls − Ns + 1) = l=1
Ls −l+1 . Generally, when
d ∈ [Ls − Ns , Ls − Ns + 1) , the multiplexing gain is given
by
Ns
X
d
.
(59)
Gm (d) = Ns −
Ls − l + 1
l=1
Example 1: We set that L = 3 and Kt = Kr = 2. So
Ls = 12. The DMT curve with fractional multiplexing gains
is shown in Fig.6. If the multiplexing gains be limited to
integers, the corresponding DMT curve is also plotted in Fig.6
for comparison.
This section considers the downlink communication in a
multiuser massive MIMO system as illustrated in Fig. 7. Here
the base station (BS) employs Kb RAUs with each having Nb
(rf)
antennas and Nb RF chains to transmit data streams to Ku
mobile stations. Each mobile station (MS) is equipped with
(rf)
Nu antennas and Nu RF chains to support the reception of
its own Ns data streams. This means that there is a total of
Ku Ns data streams transmitted by the BS. The numbers of
(rf)
data streams are constrained as Ku Ns ≤ Nb ≤ Kb Nb for
(rf)
the BS, and Ns ≤ Nu ≤ Nu for each MS.
(rf)
At the BS, denote by Fb the Kb Nb × Nb RF precoder
(rf)
and by Wb the Nb × Ns Ku baseband precoder. Then under
the narrowband flat fading channel model, the received signal
vector at the ith MS is given by
yi = Hi Fb Wb s + ni , i = 1, 2, . . . , Ku
(60)
where s is the signal vector for all Ku mobile stations,
which satisfies E[ssH ] = KuPNs IKu Ns and P is the average
transmit power. The Nu ×1 vector ni represents additive white
Gaussian noise, whereas the Nu × Kb Nb matrix Hi is the
channel matrix corresponding to the ith MS, whose entries
Hij are described as in Section II. Furthermore, the signal
vector after combining can be expressed as
H H
H H
zi = Wui
Fui Hi Fb Wb s + Wui
Fui ni , i = 1, 2, . . . , Ku
(61)
(rf)
where Fui is the Nu × Nu RF combining matrix and Wui
(rf)
is the Nu × Ns baseband combining matrix for the ith MS.
Theorem 4: Assume that all antenna array configurations
for the downlink
transmission are ULA. For the ith user, let
P b
(i)
(i)
Ls = K
L
and 0 ≤ d(i) ≤ Ls . In the limit of large
ij
j=1
Nb and Nu , the ith user can achieve the following maximum
spatial multiplexing gain when its individual diversity gain
8
Ku
N b(rf)
Ku N s
Kb
Fig. 7. Block diagram of a multiuser mmWave system with distributed antenna arrays.
(i)
satisfies Gd = d(i)
VI. CONCLUSION
i
(i)
Gm
=
Ls
X
l=1
d(i)
(1 −
(i)
Ls − l + 1
+
) .
(62)
Proof: For the downlink transmission in a massive MIMO
multiuser system, the overall equivalent multiuser basedband
channel can be written as
H
H1
Fu1 0
···
0
0
H2
FH
0
u2 · · ·
Heq = .
Fb . (63)
..
.
.
.
..
..
..
..
.
H
0
0
· · · FuKu
HKu
On the other hand, when both Nb and Nu are very large, both
rl
receive and transmit array response vector sets, {ãr (φrl
ij , θij )}
tl
tl
and {ãt (φij , θij )}, are orthogonal sets. Therefore the multiplexing gain for the ith user can depend only on the subchannel
matrix Hi and the choices of Fui and Fb . The subchannel
(i)
matrix Hi has a total of Ls propagation paths. Similar to the
proof of Theorem 2, by employing the optimal RF precoder
and combiner for the ith user, when its diversity gain satisfies
Gid = d(i) , the user can achieve a maximum multiplexing gain
of
Ls(i)
X
d(i)
(i)
(1 − (i)
Gm =
)+ .
(64)
L
−
l
+
1
s
l=1
So we obtain the desired result.
Remark 7: Consider the case that all antenna array configurations for the downlink transmission are ULA and Lij = L
for any i and j . Let 0 ≤ d ≤ Kb L. In the limit of large
Nb and Nu , the downlink transmission in the massive MIMO
multiuser system can achieve the following maximum spatial
multiplexing gain at diversity gain Gd = d
Gm =
Ku
X
G(i)
m = Ku
i=1
K
bL
X
l=1
(1 −
d
)+ .
Kb L − l + 1
(65)
Remark 8: In a similar fashion, it is easy to prove that the
uplink transmission in the massive MIMO multiuser system
can also achieve simultaneously a diversity gain of Gd = d
(0 ≤ d ≤ Kb L) and a spatial multiplexing gain of
Gm = Ku
Ls
X
l=1
(1 −
d
)+ .
Kb L − l + 1
(66)
This paper has investigated the distributed antenna subarray architecture for mmWave massive MIMO systems and
provided the asymptotical multiplexing analysis when the
number of antennas at each subarray goes to infinity. In
particular, this paper has derived the closed-form formulas
of the asymptotical available rate and spatial maximum multiplexing gain under the assumption which the subchannel
matrices between different antenna subarray pairs behave
independently. The spatial multiplexing gain formula shows
that mmWave systems with the distributed antenna architecture
can achieve potentially rather larger multiplexing gain than
the ones with the conventional co-located antenna architecture.
On the other hand, using the distributed antenna architecture
can also achieve potentially rather higher diversity gain. For a
given mmWave massive MIMO channel, both types of gains
can be simultaneously obtained. This paper has finally given
a simple DMT tradeoff solution, which provides insights for
designing a mmWave massive MIMO system.
R EFERENCES
[1] T. S. Rappaport et al., “Millimeter wave mobile communications for 5G
cellular: It will work!” IEEE Access, vol. 1, pp. 335-349, May 2013.
[2] A. L. Swindlehurst, E. Ayanoglu, P. Heydari, and F. Capolino,
“Millimeter-wave massive MIMO: the next wireless revolution?” IEEE
Commun. Mag., vol. 52, no. 9, pp. 56-62, Sep. 2014.
[3] W. Roh et al., “Millimeter-wave beamforming as an enabling technology
Fast Computation of Graph Edit Distance
Xiaoyang Chen†, Hongwei Huo∗†, Jun Huan‡, Jeffrey Scott Vitter§
arXiv:1709.10305v1, 29 Sep 2017
† Dept. of Computer Science, Xidian University, [email protected], [email protected]
‡ Dept. of Electrical Engineering and Computer Science, The University of Kansas, [email protected]
§ Dept. of Computer and Information Science, The University of Mississippi, [email protected]
Abstract—The graph edit distance (GED) is a well-established
distance measure widely used in many applications. However,
existing methods for the GED computation suffer from several
drawbacks including oversized search space, huge memory
consumption, and lots of expensive backtracking. In this paper,
we present BSS GED, a novel vertex-based mapping method for
the GED computation. First, we create a small search space
by reducing the number of invalid and redundant mappings
involved in the GED computation. Then, we utilize beam-stack
search combined with two heuristics to efficiently compute GED,
achieving a flexible trade-off between available memory and
expensive backtracking. Extensive experiments demonstrate that
BSS GED is highly efficient for the GED computation on sparse
as well as dense graphs and outperforms the state-of-the-art
GED methods. In addition, we also apply BSS GED to the graph
similarity search problem and the practical results confirm its
efficiency.
I. INTRODUCTION
Graphs are widely used to model various complex structured
data, including social networks, molecular structures, etc. Due
to extensive applications of graph models, there has been a
considerable effort in developing techniques for effective graph
data management and analysis, such as graph matching [3] and
graph similarity search [14], [17], [19].
Among these studies, similarity computation between two
graphs is a core and essential problem. In this paper, we focus
on the similarity measure based on graph edit distance (GED)
since it is applicable to virtually all types of data graphs and
can also precisely capture structural differences. Due to the
flexible and error-tolerant characteristics of GED, it has been
successfully applied in many applications, such as molecular
comparison in chemistry [8], object recognition in computer
vision [2] and graph clustering [9].
Given two graphs G and Q, the GED between them, denoted
by ged(G, Q), is defined as the minimum cost of an edit
path that transforms one graph to another. Unfortunately,
unlike the classical graph matching problem, such as subgraph
isomorphism [15], the fault tolerance of GED allows a vertex
of one graph to be mapped to any vertex of the other graph,
regardless of their labels and degrees. As a consequence, the
complexity of the GED computation is higher than that of
subgraph isomorphism, which has been proved to be an NP-hard problem [17].
The GED computation is usually carried out by means
of a tree search algorithm which explores the space of all
possible mappings of vertices and edges of comparing graphs.
The underlying search space can be organized as an ordered
search tree. Based on the way of generating successors of
nodes in the search tree, existing methods can be divided into
two broad categories: vertex-based and edge-based mapping
methods. When generating successors of a node, the former
extends unmapped vertices of comparing graphs, while the
latter extends unmapped edges. A*-GED [5], [7] and DF-GED [16] are two major vertex-based mapping methods. A*-GED adopts the best-first search paradigm A* [6], which picks
up a partial mapping with the minimum induced edit cost to
extend each time. The first found complete mapping induces
the GED of the comparing graphs. In contrast, DF-GED carries out a depth-first search, which quickly reaches a leaf node. The
edit cost of a leaf node in fact is an upper bound of GED
and hence can be used to prune nodes later to accelerate the
GED computation. Different from the above two methods,
CSI GED [4] is a novel edge-based mapping method based
on common substructure isomorphism, which works well for
the sparse and distant graphs. Similar to DF-GED, CSI GED
also adopts the depth-first search paradigm.
Even though existing methods have achieved promising
preliminary results, they still suffer from several drawbacks.
Both A*-GED and DF-GED enumerate all possible mappings between two graphs. However, among these mappings, some cannot possibly be optimal (we call them invalid mappings), or several of them induce the same edit cost (we call them redundant mappings).
For invalid mappings, we do not have to generate them, and
for redundant mappings, we only need to generate one of them
so as to avoid redundancy. The search space of A*-GED and
DF-GED becomes oversized as they generate plenty of invalid
and redundant mappings.
In addition, A*-GED needs to store an enormous number of partial mappings, resulting in huge memory consumption. In practice, A*-GED cannot compute the GED of graphs with more than 12 vertices. Though DF-GED, performing a depth-first search, is memory-efficient, it is easily trapped into a
local (i.e., suboptimal) solution and hence produces lots of
expensive backtracking. On the other hand, for CSI GED, it
adopts the depth-first search paradigm, and hence also faces
the expensive backtracking problem. Besides, the search space
of CSI GED is exponential in the number of edges of the comparing graphs, making it inherently unsuitable for dense graphs.
To solve the above issues, we propose a novel vertex-based
mapping method for the GED computation, named BSS GED,
based on beam-stack search [11] which has shown an excellent
performance in AI literature. Our contributions in this paper
are summarized below.
• We propose a novel method of generating successors of
nodes in the search tree, which reduces a large number
of invalid and redundant mappings involved in the GED
computation. As a result, we create a small search space.
Moreover, we also give a rigorous theoretical analysis of
the search space.
• Incorporating the beam-stack search paradigm into our method to compute GED, we achieve a flexible trade-off between available memory and the time overhead of backtracking, and gain better performance than the best-first and depth-first search paradigms.
• We propose two heuristics to prune the search space,
where the first heuristic produces a tighter lower bound and the second enables fast discovery of a tighter upper bound.
• We have conducted extensive experiments on both real
and synthetic datasets. The experimental results show that
BSS GED is highly efficient for the GED computation
on sparse as well as dense graphs, and outperforms the
state-of-the-art GED methods.
• In addition, we also extend BSS GED as a standard graph
similarity search query method and the practical results
confirm its efficiency.
The rest of this paper is organized as follows: In Section II,
we introduce the problem definition and then give an overview
of the vertex-based mapping method for the GED computation.
In Section III, we create a small search space by reducing the
number of invalid and redundant mappings involved in the
GED computation. In Section IV, we utilize the beam-stack
search paradigm to traverse the search space to compute GED.
In Section V, we propose two heuristics to prune the search
space. In Section VI, we extend BSS GED as a standard graph
similarity search query method. In Section VII, we report the
experimental results and our analysis. Finally, we investigate
research works related to this paper in Section VIII and then
make concluding remarks in Section IX.
II. PRELIMINARIES
In this section, we introduce basic notations. For simplicity
in exposition, we only focus on simple undirected graphs
without multi-edges or self-loops.
A. Problem Definition
Let Σ be a set of discrete-valued labels. A labeled graph
is a triplet G = (VG , EG , L), where VG is the set of vertices,
EG ⊆ VG × VG is the set of edges, L : VG ∪ EG → Σ is a
labeling function which assigns a label to a vertex or an edge.
For a vertex u, we use L(u) to denote its label. Similarly,
L(e(u, v)) is the label of an edge e(u, v). ΣVG = {L(u) :
u ∈ VG } and ΣEG = {L(e(u, v)) : e(u, v) ∈ EG } are the
label multisets of VG and EG , respectively. For a graph G,
S(G) = (VG , EG ) is its unlabeled version, i.e., its structure.
In this paper, we refer to |VG | as the size of graph G.
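For illustration only (this sketch is ours and not part of the paper's formalism; the names Graph, vlab and elab are hypothetical), such a labeled graph can be represented in C++ roughly as follows:

#include <algorithm>
#include <map>
#include <string>
#include <utility>
#include <vector>

// A simple undirected labeled graph: vlab[u] is the label of vertex u,
// elab[{u,v}] (stored with u < v) is the label of edge e(u, v).
struct Graph {
    int n = 0;                                        // |VG|
    std::vector<std::string> vlab;                    // vertex labels
    std::map<std::pair<int,int>, std::string> elab;   // edge labels

    void addVertex(const std::string& l) { vlab.push_back(l); ++n; }
    void addEdge(int u, int v, const std::string& l) {
        elab[{std::min(u, v), std::max(u, v)}] = l;
    }
    bool hasEdge(int u, int v) const {
        return elab.count({std::min(u, v), std::max(u, v)}) > 0;
    }
};

The later sketches in this section and in Sections V and VI reuse this minimal representation.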
Definition 1 (Subgraph Isomorphism [15]). Given two graphs
G and Q, G is subgraph isomorphic to Q, denoted by G ⊆ Q,
if there exists an injective function φ : VG → VQ , such that
(1) ∀u ∈ VG , φ(u) ∈ VQ and L(u) = L(φ(u)). (2) ∀e(u, v) ∈
EG , e(φ(u), φ(v)) ∈ EQ and L(e(u, v)) = L(e(φ(u), φ(v))).
If G ⊆ Q and Q ⊆ G, then G and Q are graph isomorphic
to each other, denoted by G ≅ Q.
There are six edit operations that can be used to transform one graph to another [12], [20]: insert/delete an isolated vertex, insert/delete an edge, and substitute the label of a vertex or an edge. Given two graphs G and Q, an edit path P = ⟨p1 , . . . , pk ⟩ is a sequence of edit operations that transforms one graph to another, i.e., G = G0 →^{p1} G1 → · · · →^{pk} Gk ≅ Q. The edit cost of P is defined as the sum of the edit costs of all operations in P, i.e., Σ_{i=1}^{k} c(pi ), where c(pi ) is the edit cost of the edit operation pi . In this paper, we focus on the uniform cost model, i.e., c(pi ) = 1 for all i; thus the edit cost of P is its length, denoted by |P |. We call P optimal if and only if it has the minimum length among all possible edit paths.
Definition 2 (Graph Edit Distance). Given two graphs G
and Q, the graph edit distance between them, denoted by
ged(G, Q), is the length of an optimal edit path that transforms G to Q (or vice versa).
Example 1. In Figure 1, we show an optimal edit path P that
transforms graph G to graph Q. The length of P is 4, where
we delete two edges e(u1 , u2 ) and e(u1 , u3 ), substitute the
label of vertex u1 with label A and insert one edge e(u1 , u4 )
with label a.
Fig. 1: An optimal edit path P between graphs G and Q.
B. Graph Mapping
In this part, we introduce the graph mapping between two
graphs, which can induce an edit path between them. In order
to match two unequal size graphs G and Q, we extend their
vertex sets as VG∗ and VQ∗ such that VG∗ = VG ∪ {un } and
VQ∗ = VQ ∪ {v n }, respectively, where un and v n are dummy
vertices labeled with ε, s.t., ε ∉ Σ. Then, we define graph
mapping as follows:
Definition 3 (Graph Mapping). A graph mapping from
graph G to graph Q is a bijection ψ : VG∗ → VQ∗ , such that
∀u ∈ VG∗ , ψ(u) ∈ VQ∗ , and at least one of u and ψ(u) is not
a dummy vertex.
Given a graph mapping ψ from G to Q, it induces an
unlabeled graph H = (VH , EH ), where VH = {u : u ∈
VG ∧ ψ(u) ∈ VQ } and EH = {e(u, v) : e(u, v) ∈ EG ∧
e(ψ(u), ψ(v)) ∈ EQ }, then H ⊆ S(G) and H ⊆ S(Q).
Let Gψ (resp. Qψ ) be the labeled version of H embedded
in G (resp. Q). Accordingly, we obtain an edit path Pψ :
G → Gψ → Qψ → Q. Let CD (ψ), CS (ψ) and CI (ψ) be
the respective edit cost of transforming G to Gψ , Gψ to Qψ ,
and Qψ to Q. As Gψ is a subgraph of G, we only need
to delete vertices and edges that do not belong to Gψ when
transforming G to Gψ . Thus, CD (ψ) = |VG | − |VH | + |EG | −
|EH |. Similarly, CI (ψ) = |VQ |−|VH |+|EQ |−|EH |. Since Gψ
and Qψ have the same structure H, we only need to substitute
the corresponding vertex and edge labels between G ψ and Q ψ ,
thus CS (ψ) = |{u : u ∈ VH ∧ L(u) 6= L(ψ(u))}| + |{e(u, v) :
e(u, v) ∈ EH ∧ L(e(u, v)) 6= L(e(ψ(u), ψ(v)))}|.
Theorem 1 ([4]). Given a graph mapping ψ from graph G to
graph Q. Let Pψ be the edit path induced by ψ, then |Pψ | =
CD (ψ) + CI (ψ) + CS (ψ).
Example 2. Consider graphs G and Q in Figure 1. Given a
graph mapping ψ : {u1 , u2 , u3 , u4 } → {v1 , v2 , v3 , v4 }, where
ψ(u1 ) = v1 , ψ(u2 ) = v2 , ψ(u3 ) = v3 , and ψ(u4 ) = v4 , we
have H = ({u1 , u2 , u3 , u4 }, {e(u2 , u4 ), e(u3 , u4 )}). Then ψ
induces an edit path Pψ : G → G ψ → Q ψ → Q shown in
Figure 1, where Gψ = G1 and Qψ = Q1 . By Theorem 1, we
compute that CD (ψ) = 2, CI (ψ) = 1 and CS (ψ) = 1, thus
|Pψ | = CD (ψ) + CI (ψ) + CS (ψ) = 4.
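The decomposition in Theorem 1 can be evaluated directly from a complete mapping. As an illustration only (this sketch is ours, not part of the paper; it assumes the minimal Graph type sketched in Section II-A and encodes a mapping as a vector psi with psi[u] equal to the image of u in Q, or −1 if u is mapped to a dummy vertex), the following C++ function computes |Pψ| = CD(ψ) + CI(ψ) + CS(ψ):

#include <algorithm>
#include <map>
#include <string>
#include <utility>
#include <vector>

struct Graph {                                        // minimal labeled graph (see Section II-A sketch)
    int n = 0;
    std::vector<std::string> vlab;
    std::map<std::pair<int,int>, std::string> elab;
};

// psi[u] = image of u in Q, or -1 if u is mapped to a dummy vertex.
int mappingCost(const Graph& G, const Graph& Q, const std::vector<int>& psi) {
    int VH = 0, EH = 0, CS = 0;
    for (int u = 0; u < G.n; ++u) {
        if (psi[u] < 0) continue;                     // u is deleted (mapped to a dummy)
        ++VH;                                         // u belongs to the common structure H
        if (G.vlab[u] != Q.vlab[psi[u]]) ++CS;        // vertex label substitution
    }
    for (const auto& e : G.elab) {
        int u = e.first.first, v = e.first.second;
        if (psi[u] < 0 || psi[v] < 0) continue;
        auto key = std::make_pair(std::min(psi[u], psi[v]), std::max(psi[u], psi[v]));
        auto it = Q.elab.find(key);
        if (it == Q.elab.end()) continue;             // edge not preserved by psi
        ++EH;                                         // edge belongs to H
        if (e.second != it->second) ++CS;             // edge label substitution
    }
    int CD = (G.n - VH) + ((int)G.elab.size() - EH);  // cost of G -> G_psi (deletions)
    int CI = (Q.n - VH) + ((int)Q.elab.size() - EH);  // cost of Q_psi -> Q (insertions)
    return CD + CI + CS;                              // |P_psi| by Theorem 1
}

For the mapping of Example 2 this evaluates to 2 + 1 + 1 = 4, matching the optimal edit path of Figure 1.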
Hereafter, for ease of presentation, we assume that G and Q
are the two comparing graphs, and VG = {u1 , . . . , u|VG | }
and VQ = {v1 , . . . , v|VQ | }. We call a graph mapping ψ from G to Q optimal only when its induced edit path Pψ is optimal. Next, we give an overview of the vertex-based
mapping method for computing ged(G, Q) by enumerating
all possible graph mappings from G to Q.
C. GED computation: Vertex-based Mapping Approach
Assuming that the vertices in VG∗ are processed in the order (ui1 , . . . , ui|VG | , u^n , . . . , u^n ), where i1 , . . . , i|VG | is a permutation of 1, . . . , |VG | detailed in Section V-B, we denote a graph mapping from G to Q as ψ = ⋃_{l=1}^{|VG∗|} {(uil → vjl )} in the following sections, such that (1) uil = u^n if l > |VG |; (2) vjl = v^n if jl > |VQ |; and (3) vjl = ψ(uil ) for 1 ≤ l ≤ |VG∗ |.
The GED computation is always achieved by means of an
ordered search tree, where inner nodes correspond to partial
graph mappings and leaf nodes correspond to complete graph
mappings. Such a search tree is created dynamically at runtime
by iteratively generating successors linked by edges to the
currently considered node. Let ψr = {(ui1 → vj1 ), . . ., (uil →
vjl )} be the (partial) mapping associated with a node r,
where vjk is the mapped vertex of uik for 1 ≤ k ≤ l , then
Algorithm 1 outlines the method of generating successors of r.
Algorithm 1 is easy to understand. First, we compute the sets of unmapped vertices C_G^r and C_Q^r in G and Q, respectively (line 2). Then, if |C_G^r| > 0, for the vertex uil+1 to be extended, we choose a vertex z from C_Q^r or {v^n } as its mapped vertex, and finally generate all possible successors of r (lines 4–8); otherwise, all vertices in G have been processed, so we insert all vertices in C_Q^r into G and obtain a unique successor leaf node (lines 10–11).
Starting from a dummy root node root such that ψroot = ∅, we can create the search tree layer-by-layer by iteratively generating successors. For a leaf node r, we compute the edit cost of its corresponding edit path Pψr by Theorem 1. Thus, when we have generated all leaf nodes, we must have found an optimal graph mapping and then obtain ged(G, Q).
Algorithm 1: BasicGenSuccr(r)
1: ψr ← {(ui1 → vj1 ), . . . , (uil → vjl )}, succ ← ∅;
2: C_G^r ← VG \ {ui1 , . . . , uil }, C_Q^r ← VQ \ {vj1 , . . . , vjl };
3: if |C_G^r| > 0 then
4:   foreach z ∈ C_Q^r do
5:     generate successor q, s.t., ψq ← ψr ∪ {(uil+1 → z)};
6:     succ ← succ ∪ {q};
7:   generate successor q, s.t., ψq ← ψr ∪ {(uil+1 → v^n )};
8:   succ ← succ ∪ {q};
9: else
10:   generate successor q, s.t., ψq ← ψr ∪ ⋃_{z∈C_Q^r} {(u^n → z)};
11:   succ ← succ ∪ {q};
12: return succ;
However, the above method BasicGenSuccr used in A*-GED [5] and DF-GED [16] generates all possible successors. As a result, both A*-GED and DF-GED enumerate all possible graph mappings from G to Q, and their search space size is O(|VQ |^|VG |) [4]. However, among these mappings, some certainly cannot be optimal (invalid mappings), or they induce the same edit cost (redundant mappings).
For invalid mappings, we do not have to generate them, and
for redundant mappings, we only need to generate one of
them. Next, we present how to create a small search space
by reducing the number of invalid and redundant mappings.
III. CREATING SMALL SEARCH SPACE
A. Invalid Mapping Identification
Let |ψ| be the length of a graph mapping ψ, i.e., |VG∗ |. We
give an estimation of |ψ| in Theorem 2, which can be used to
identify invalid mappings.
Theorem 2. Given an optimal graph mapping ψ from graph G
to graph Q, then |ψ| = max{|VG |, |VQ |}.
Proof: Suppose for the purpose of contradiction that
|ψ| > max{|VG |, |VQ |}. Then (x → v n ) and (un → y) must
be present simultaneously in ψ, where x ∈ VG and y ∈ VQ . We
construct another graph mapping ψ 0 = (ψ\{(x → v n ), (un →
y)}) ∪ {(x → y)}, and then prove |Pψ0 | < |Pψ | as follows:
Let H and H 0 be two unlabeled graphs induced by ψ and ψ 0 ,
respectively, then VH 0 = {u : u ∈ VG ∧ ψ 0 (u) ∈ VQ } = {u :
u ∈ VG ∧ ψ(u) ∈ VQ } ∪ {x : ψ 0 (x) ∈ VQ } = VH ∪ {x}. Let
Ax = {z : z ∈ VH ∧ e(x, z) ∈ EG ∧ e(y, ψ(z)) ∈ EQ }, then
EH 0 = {e(u, v) : e(u, v) ∈ EG ∧ e(ψ 0 (u), ψ 0 (v)) ∈ EQ } =
{e(u, v) : e(u, v) ∈ EG ∧ e(ψ(u), ψ(v)) ∈ EQ } ∪ {e(x, z) :
z ∈ VH 0 ∧e(x, z) ∈ EG ∧e(y, ψ(z)) ∈ EQ } = EH ∪{e(x, z) :
z ∈ Ax }. As x ∉ VH , e(x, z) ∉ EH for ∀z ∈ Ax . Thus,
|VH 0 | = |VH | + 1 and |EH 0 | = |EH | + |Ax |.
As CD (ψ) = |VG |−|VH |+|EG |−|EH | and CI (ψ) = |VQ |−
|VH |+|EQ |−|EH |, we have CD (ψ 0 ) = CD (ψ)−(1+|Ax |) and
CI (ψ 0 ) = CI (ψ) − (1 + |Ax |). Since CS (ψ) = |{u : u ∈ VH ∧
L(u) 6= L(ψ(u))}| + |{e(u, v) : e(u, v) ∈ EH ∧ L(e(u, v)) 6=
L(e(ψ(u), ψ(v)))}|, we have CS (ψ′) = CS (ψ) + c(x → y) + Σ_{z∈Ax} c(e(x, z) → e(y, ψ(z))), where c(·) gives the edit cost
of relabeling a vertex or an edge, such that c(a → b) = 0 if
L(a) = L(b), and c(a → b) = 1 otherwise, and a (resp. b) is
a vertex or an edge in G (resp. Q). Thus, CS (ψ 0 ) ≤ CS (ψ) +
1 + |Ax |. Therefore, |Pψ0 | = CD (ψ 0 ) + CI (ψ 0 ) + CS (ψ 0 ) ≤
CD (ψ)−(1+|Ax |)+CI (ψ)−(1+|Ax |)+CS (ψ)+1+|Ax | =
|Pψ |−(1+|Ax |) < |Pψ |. This would be a contradiction that Pψ
is optimal. Hence |ψ| = max{|VG |, |VQ |}.
Theorem 2 states that a graph mapping whose length
is greater than |V | must be an invalid mapping, where
|V | = max{|VG |, |VQ |}. For example, considering graphs G
and Q in Figure 1, and a graph mapping ψ = {(u1 →
v1 ), (u2 → v n ), (u3 → v3 ), (u4 → v4 ), (un → v2 )},
we know that ψ with an edit cost 7 must be invalid as
|ψ| = 5 > max{|VG |, |VQ |} = 4.
B. Redundant Mapping Identification
For a vertex u in VQ , its neighborhood information is defined as NQ (u) = {(v , L(e(u, v ))) : v ∈ VQ ∧ e(u, v ) ∈ EQ }.
Definition 4 (Vertex Isomorphism). Given two vertices
u, v ∈ VQ , u is isomorphic to v, denoted by u ∼ v, if and
only if L(u) = L(v) and NQ (u) = NQ (v).
By Definition 4, we know that the isomorphic relationship
between vertices is an equivalence relation. Thus, we can divide VQ into λQ equivalence classes V_Q^1 , . . . , V_Q^{λQ} of isomorphic vertices. Each vertex u is said to belong to class π(u) = i if u ∈ V_Q^i . Dummy vertices in {v^n } are isomorphic to each other, and we let π(v^n ) = λQ + 1.
Definition 5 (Canonical Code). Given a graph mapping
ψ = ⋃_{l=1}^{|ψ|} {(uil → vjl )}, where vjl = ψ(uil ) for 1 ≤ l ≤ |ψ|, the canonical code of ψ is defined as code(ψ) = ⟨π(vj1 ), . . . , π(vj|ψ| )⟩.
Given two graph mappings ψ and ψ 0 such that |ψ| = |ψ 0 |,
we say that code(ψ) = code(ψ 0 ) if and only if π(vjl ) =
π(vj0 l ), where vjl = ψ(uil ) and vj0 l = ψ 0 (uil ) for 1 ≤ l ≤ |ψ|.
Theorem 3. Given two graph mappings ψ and ψ 0 . Let Pψ
and Pψ0 be edit paths induced by ψ and ψ 0 , respectively. If
code(ψ) = code(ψ 0 ), then we have |Pψ | = |Pψ0 |.
Proof: As discussed in Section II-B, |Pψ | = CI (ψ) +
CD (ψ) + CS (ψ). In order to prove |Pψ | = |Pψ0 |, we first
prove CI (ψ) = CI (ψ 0 ) and CD (ψ) = CD (ψ 0 ), then prove
CS (ψ) = CS (ψ 0 ).
Let H and H 0 be two unlabeled graphs induced by ψ and ψ 0 ,
respectively. For a vertex u in VH , ψ(u) and ψ 0 (u) are the
mapped vertices of u, respectively. Since code(ψ) = code(ψ 0 ),
we have π(ψ(u)) = π(ψ 0 (u)) and hence obtain ψ(u) ∼ ψ 0 (u).
As ψ(u) 6= v n , we have ψ 0 (u) 6= v n by Definition 4. Thus
u ∈ VH 0 , and hence we obtain VH ⊆ VH 0 . Similarly, we also
obtain VH′ ⊆ VH . So, VH = VH′ . For an edge e(u, v) in EH , e(ψ(u), ψ(v)) is its mapped edge in EQ . As π(ψ(u)) = π(ψ′(u)), we have ψ(u) ∼ ψ′(u) and then obtain NQ (ψ(u)) = NQ (ψ′(u)). Thus, we have e(ψ′(u), ψ(v)) ∈ EQ . Similarly, since π(ψ(v)) = π(ψ′(v)), we obtain ψ(v) ∼ ψ′(v). Thus, there must exist edges between ψ(u) and ψ′(v), and between ψ′(u) and ψ′(v) (an illustration is shown in Fig. 2); hence e(ψ′(u), ψ′(v)) ∈ EQ and we obtain e(u, v) ∈ EH′ . Thus, EH ⊆ EH′ . Similarly, we also obtain EH′ ⊆ EH . So, EH = EH′ . Since VH = VH′ and EH = EH′ , we have H = H′ . Thus, CI (ψ) = CI (ψ′) and CD (ψ) = CD (ψ′). Next, we do not distinguish H and H′ anymore.
Fig. 2: Illustration of isomorphic vertices.
For any vertex u in VH , we have L(ψ(u)) = L(ψ 0 (u))
as ψ(u) ∼ ψ 0 (u). Thus, |{u : u ∈ VH ∧ L(u) 6=
L(ψ(u))}| = |{u : u ∈ VH ∧ L(u) 6= L(ψ 0 (u))}|. For
any edge e(u, v) in EH , since ψ(u) ∼ ψ 0 (u), we know
that L(e(ψ(u), ψ(v))) = L(e(ψ 0 (u), ψ(v))). Similarly, we
obtain L(e(ψ 0 (u), ψ(v))) = L(e(ψ 0 (u), ψ 0 (v))) as ψ(v) ∼
ψ 0 (v). Thus L(e(ψ(u), ψ(v))) = L(e(ψ 0 (u), ψ 0 (v))). Hence
|{e(u, v) : e(u, v) ∈ EH ∧L(e(u, v)) 6= L(e(ψ(u), ψ(v)))}| =
|{e(u, v) : e(u, v) ∈ EH ∧ L(e(u, v)) 6= L(e(ψ 0 (u), ψ 0 (v)))}|.
Therefore, CS (ψ) = CS (ψ 0 ). So, we have |Pψ | = |Pψ0 |.
Example 3. Consider graphs G and Q in Figure 1. For Q,
we know that L(v1 ) = L(v2 ) = L(v3 ) = A, and NQ (v1 ) =
NQ (v2 ) = NQ (v3 ) = {(v4 , a)}, thus v1 ∼ v2 ∼ v3 . So, we
divide VQ into equivalent classes VQ1 = {v1 , v2 , v3 }, VQ2 =
{v4 }. Given two graph mappings ψ = {(u1 → v1 ), (u2 →
v2 ), (u3 → v3 ), (u4 → v4 )} and ψ 0 = {(u1 → v2 ), (u2 →
v3 ), (u3 → v1 ), (u4 → v4 )}, we have code(ψ) = code(ψ′) = ⟨1, 1, 1, 2⟩, and then obtain |Pψ | = |Pψ′ | = 4.
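The equivalence classes of Definition 4, and hence the canonical codes, are cheap to precompute. The following C++ sketch is ours and purely illustrative (vertexClasses is a hypothetical name); it groups the vertices of Q by the pair (label, neighborhood multiset):

#include <algorithm>
#include <map>
#include <string>
#include <utility>
#include <vector>

struct Graph {                                        // minimal labeled graph (see Section II-A sketch)
    int n = 0;
    std::vector<std::string> vlab;
    std::map<std::pair<int,int>, std::string> elab;
};

// Returns pi with pi[u] in {1, ..., lambdaQ}: the isomorphism class of vertex u,
// where u ~ v iff L(u) = L(v) and N_Q(u) = N_Q(v) (Definition 4).
std::vector<int> vertexClasses(const Graph& Q) {
    std::vector<std::vector<std::pair<int, std::string>>> nbr(Q.n);  // N_Q(u)
    for (const auto& e : Q.elab) {
        nbr[e.first.first].push_back({e.first.second, e.second});
        nbr[e.first.second].push_back({e.first.first, e.second});
    }
    std::map<std::pair<std::string, std::vector<std::pair<int, std::string>>>, int> classOf;
    std::vector<int> pi(Q.n);
    for (int u = 0; u < Q.n; ++u) {
        std::sort(nbr[u].begin(), nbr[u].end());      // canonical form of N_Q(u)
        auto key = std::make_pair(Q.vlab[u], nbr[u]);
        auto it = classOf.find(key);
        if (it == classOf.end())
            it = classOf.insert({key, (int)classOf.size() + 1}).first;
        pi[u] = it->second;
    }
    return pi;
}

For the graph Q of Example 3, this puts v1, v2 and v3 in one class and v4 in another, so the canonical code of any mapping can be read off directly as ⟨π(ψ(ui1 )), . . . , π(ψ(ui|ψ| ))⟩.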
Theorem 3 states that graph mappings with the same
canonical code induce the same edit cost. Thus, among these
mappings, we only need to generate one of them. Next,
we apply Theorems 2 and 3 to the procedure GenSuccr for generating successors, which avoids generating the above invalid and redundant mappings.
C. Generating Successors
Consider a node r associated with a partial graph mapping ψr = {(ui1 → vj1 ), . . . , (uil → vjl )} in the search tree. Then, the sets of unmapped vertices in G and Q are C_G^r = VG \ {ui1 , . . . , uil } and C_Q^r = VQ \ {vj1 , . . . , vjl }, respectively. For the vertex uil+1 to be extended, let z ∈ C_Q^r ∪ {v^n } be a possible mapped vertex.
By Theorem 2, if |VG | ≤ |VQ |, we have |ψ| = |VQ |, which means that none of the vertices in VG is allowed to be mapped to a dummy vertex, i.e., (u → v^n ) ∉ ψ for ∀u ∈ VG .
Rule 1. If |C_G^r| ≤ |C_Q^r|, then z ∈ C_Q^r; otherwise z = v^n or z ∈ C_Q^r.
Applying Rule 1 in the process GenSuccr of generating successors of each node, we know that if |VG | ≤ |VQ | then none of the vertices in VG will be mapped to a dummy vertex; otherwise only |VG | − |VQ | vertices are. As a result, the obtained graph mapping ψ must satisfy |ψ| = max{|VG |, |VQ |}.
Fig. 3: Search tree created by GenSuccr.
Definition 6 (Canonical Code Partial Order). Let ψ and ψ′ be two graph mappings such that code(ψ) = code(ψ′). We define that ψ ⪯ ψ′ if ∃l, 1 ≤ l ≤ |ψ|, s.t., ψ(uik ) = ψ′(uik ) for 1 ≤ k < l and ψ(uil ) < ψ′(uil ).
By Theorem 3, we know that graph mappings with the same
canonical code induce the same edit cost, thus among these
mappings we only need to generate the smallest according
to the partial order defined in Definition 6. For uil+1 , we
only map it to the smallest unmapped vertex in VQm , for
1 ≤ m ≤ λQ . This will guarantee that the obtained graph
mapping is smallest among those mappings with the same
canonical code. Then we establish Rule 2 as follows:
Rule 2. z ∈ ⋃_{m=1}^{λQ} min{C_Q^r ∩ V_Q^m }.
Based on the above Rule 1 and Rule 2, we give the method
of generating successors of r in Algorithm 2, where lines 4–8 correspond to Rule 2 and lines 9–11 correspond to Rule 1.
Algorithm 2: GenSuccr(r)
1: ψr ← {(ui1 → vj1 ), . . . , (uil → vjl )}, succ ← ∅;
2: C_G^r ← VG \ {ui1 , . . . , uil }, C_Q^r ← VQ \ {vj1 , . . . , vjl };
3: if |C_G^r| > 0 then
4:   for m ← 1 to λQ do
5:     if C_Q^r ∩ V_Q^m ≠ ∅ then
6:       z ← min{C_Q^r ∩ V_Q^m };
7:       generate successor q, s.t., ψq ← ψr ∪ {(uil+1 → z)};
8:       succ ← succ ∪ {q};
9:   if |C_G^r| > |C_Q^r| then
10:     generate successor q, s.t., ψq ← ψr ∪ {(uil+1 → v^n )};
11:     succ ← succ ∪ {q};
12: else
13:   generate successor q, s.t., ψq ← ψr ∪ ⋃_{z∈C_Q^r} {(u^n → z)};
14:   succ ← succ ∪ {q};
15: return succ;
Example 4. Consider graphs G and Q in Figure 1. Figure 3 shows the entire search tree of G and Q created layer-by-layer by using GenSuccr, where vertices in G are processed in the order (u1 , u2 , u3 , u4 ). In a layer, the values inside the nodes are the possible mapped vertices (e.g., v1 and v4 in layer one are the possible mapped vertices of u1 ). The sequence of vertices on the path from root to each leaf node gives a complete graph mapping. In this example, we generate 4 graph mappings in total, and then easily compute that ged(G, Q) = 4.
Replacing BasicGenSuccr (see Alg. 1) with GenSuccr to generate successors, we reduce a large number of invalid and redundant mappings and thus create a small search tree. Next, we analyze the size of the search tree, i.e., the total number of nodes in the search tree.
D. Search Space Analysis
Nodes in the search tree are grouped into different layers based on their distances from the root node. Hence, the search tree is divided into layers, one for each depth. When all vertices in VG have been processed, any node in layer |VG | (counting from 0) generates a unique successor leaf node, so we regard this layer as the last layer. Namely, we only need to generate the first |VG | layers. For layer l, let Nl be the total number of nodes in this layer. The total number of nodes in the search tree can then be computed as SR = Σ_{l=0}^{|VG|} Nl .
For layer l, the set of vertices in G that have been processed is B_G^l = {ui1 , . . . , uil }; correspondingly, we must choose l vertices from VQ ∪ {v^n } as their mapped vertices. Let B_Q^l = {vj1 , . . . , vjl } be the l selected vertices; we use a vector x = [x1 , . . . , x_{λQ+1} ] to represent it, where xm is the number of vertices in B_Q^l that belong to V_Q^m , i.e., xm = |B_Q^l ∩ V_Q^m |, for 1 ≤ m ≤ λQ , and x_{λQ+1} is the number of dummy vertices in B_Q^l . Thus, we have

    Σ_{m=1}^{λQ+1} xm = l.      (1)

where 0 ≤ xm ≤ |V_Q^m | for 1 ≤ m ≤ λQ , and x_{λQ+1} ≥ 0.
For a solution x of equation (1), it corresponds to a unique B_Q^l . The reason is as follows: In Rule 2, each time we only select the smallest unmapped vertex in V_Q^m as the mapped vertex, for 1 ≤ m ≤ λQ . Thus, xm in x means that B_Q^l contains the first xm smallest vertices in V_Q^m . For example, consider the search tree in Figure 3. Let l = 3 and x = [2, 1, 0]; then B_Q^l contains the first 2 smallest vertices in V_Q^1 , i.e., v1 and v2 , and the smallest vertex in V_Q^2 , i.e., v4 , so B_Q^l = {v1 , v2 , v4 }.
Let Ψl be the set of solutions of equation (1); then it covers all possible B_Q^l . For a solution x, it produces l! / ∏_{m=1}^{λQ+1} xm ! different (partial) canonical codes. For example, for x = [2, 1, 0], it produces 3 partial canonical codes, i.e., ⟨1, 1, 2⟩, ⟨1, 2, 1⟩ and ⟨2, 1, 1⟩. As we know, each (partial) canonical code corresponds to a (partial) mapping from B_G^l to B_Q^l , which is associated with a node in layer l; thus

    Nl = Σ_{x∈Ψl} l! / ∏_{m=1}^{λQ+1} xm !.      (2)
In Rule 1, only when the number of unmapped vertices in G
is greater than that in Q, we select a dummy vertex v n as the
mapped vertex. As a result, if |VG | ≤ |VQ | then the number
|V |
of dummy vertices in BQ G is 0 otherwise is |VG | − |VQ |. Let
l = |VG | and we then discuss the following two cases:
Case 1. When |VG | > |VQ |. For any x, we have
xλQ +1 = |VG | − |VQ |. Then equation (1) is reduced
PλQ
PλQ
m
to
m=1 xm = |VQ |. Since
m=1 |VQ | = |VQ | and
m
0 ≤ xm ≤ |VQ |, for 1 ≤ m ≤ λQ , then equation (1) has
λ
a unique solution x = [|VQ1 |, . . . , |VQ Q |, |VG | − |VQ |], Thus,
|VG |!
. As N0 = 1 and N1 ≤ · · · ≤
N|VG | =
QλQ
m
•
•
to denote the interval of layer l. For a node r in layer l, its
successor n in next layer l + 1 is allowed to be expanded
only when f (n) is in the interval bs[l], i.e., bs[l].fmin ≤
f (n) < bs[l].fmax .
priority queues open[0], . . . , open[|VG |], where open[l]
(0 ≤ l ≤ |VG |) is used to store the expanded nodes in
layer l.
a table new, where new[H(r)] stores all successors of r
and H(r) is a hash function which assigns a unique ID
to r.
B. Algorithm
Algorithm 3 performs an iterative search to obtain a more
and more tight upper bound ub of GED until ub = ged(G, Q).
In an iteration, we perform the following two steps: (1) we
utilize beam search [10] to quickly reach to a leaf node whose
(|VG |−|VQ |)! m=1 |VQ |!
edit cost is an upper bound of GED, then we update ub (line 4).
|VG |!
N|VG | , we obtain SR ≤ |VG |
+ 1.
Q λQ
As beam search expands at most w nodes in each layer, some
m
(|VG |−|VQ |)! m=1 |VQ |!
Case 2. When |VG | ≤ |VQ |. For any x, we have xλQ +1 = 0. nodes are inadmissible pruned when the number of nodes in a
PλQ
Then equation (1) is reduced to
x = |VG |. As layer is greater than w, where w is the beam width; Thus, (2)
Pm=1 m
we backtrack and pop items from bs until a layer l such
|VG |!
|VG | ≤ |VQ |, we have N|VG | =
≤
x∈Ψ|VG | QλQ
that bs.top().fmax < ub (lines 5–6), and then shift the range
m=1 xm !
xλQ +1 =0
P
of bs.top() (line 9) to re-expand those inadmissible pruned
|VQ |!
|VQ |!
= QλQ
. Since N0 = 1 and
x∈Ψ|VQ | QλQ
m |!
xm !
|VQ
nodes in next iteration to search for tighter ub. If l = −1,
m=1
m=1
xλQ +1 =0
it means that we finish a complete search and then obtain
|VQ |!
N1 ≤ · · · ≤ N|VG | , we have SR ≤ |VG | QλQ
+ 1.
m
ub = ged(G, Q) (lines 7–8).
m=1 |VQ |!
However, this will overestimate SR when |VG | |VQ |.
In procedure Search, we perform a beam search starting
For the layer l, if we do not consider the isomorphic vertices from layer l to re-expand those inadmissible pruned nodes
l
l
l
. Since there to search for tighter ub, where P QL and P QLL are two
to BQ
, then there are l! mappings from BG
in BQ
|VQ |
|VQ |
l
,
we
have
N
≤
· l! = temporary priority queues used to record expanded nodes in
are at most
possible
B
l
Q
l
l
P|VG |
P|VG | |VQ |!
|VQ |!
two adjacent layers. Each time we pop a node r with the
l=0 Nl ≤
l=1 (|VQ |−l)! + 1
(|VQ |−l)! . So, SR =
smallest cost to expand (line 4). If r is a leaf node, then
P|VG |
|VQ |!
= (|VQ |−|VG |)! l=1 Q|VG |−l 1
+1
we
update ub and stop the search as g(z) ≥ g(r) holds for
(|V
|−|V
|+m)
Q
G
P|VG | m=1
|VQ |!
|VQ |!
1
∀z
∈ P QL (line 7); otherwise, we call ExpandNode to
≤ (|VQ |−|V
+
1
≤
2
+
1.
l=1 2|VG |−l
(|VQ |−|VG |)!
G |)!
generate all successors of r that are allowed to be expanded
|VG ||VG |!
);
In summary, if |VG | > |VQ |, SR = O(
QλQ
m |!
in next layer and then insert them into P QLL (lines 8–9). As
(|VG |−|VQ |)! m=1
|VQ
|V ||V |!
|VQ |!
at most w successors are allowed to be expanded, we only
otherwise, SR = O(min{ QλQG Qm , (|VQ |−|V
}).
G |)!
m=1 |VQ |!
keep the best w nodes (i.e., the smallest cost) in P QLL,
and the nodes left are inadmissible pruned (lines 11–13).
IV. GED COMPUTATION USING BEAM-STACK SEARCH
Correspondingly, line 12, we modify the right boundary of
The previous section shows that we create a small search bs.top() as the lowest cost among all inadmissible pruned
space. However, we still need an efficient search paradigm to nodes to ensure that the cost of the w successors currently
traverse the search space to seek for an optimal graph mapping expanded is in this interval.
to compute GED. In this section, based on the efficient search
In procedure ExpandNode, we generate all successors of r
paradigm, beam-stack search [11], we give our approach for that are allowed to be expanded. Note that, all nodes first
the GED computation.
generated are marked as false. If r has not been visited, i.e.,
r .visited = false, then we call GenSuccr (i.e., Alg. 2) to
A. Data Structures
generate all successors of r and mark r as visited (lines 3–4);
For a node r in the search tree, f (r ) = g(r ) + h(r ) is the otherwise, we directly read all successors of r from new
total edit cost assigned to r, where g(r) is the edit cost of the (line 6). For a successor n of r, if f (n) ≥ ub or n.visited =
partial path accumulated so far, and h(r) is the estimated edit true, then we safely prune it, see Lemma 1. Meanwhile,
cost from r to a leaf node, which is less than or equal to the we delete all successors of n from new (line 9); otherwise,
real cost. Before formally presenting the algorithm, we first if bs.top().fmin ≤ f (n) < bs.top().fmax , we expand n. If all
introduce the data structures used as follows:
successors of r are safely pruned, we safely prune r, and
• a beam stack bs, which is a generalized stack. Each item
delete r from open[l] and its successors from new, resin bs is a half-open interval [fmin , fmax ), and we use bs[l] pectively (line 13).
Lemma 1. In ExpandNode, if f (n) ≥ ub or n.visited =
true, i.e., line 8, we safely prune n.
Proof: For the case f (n) ≥ ub, it is trivial. Next we prove
it in the other case.
Consider bs in the last iteration. Assuming that in this
iteration we perform Search starting from layer k (i.e.,
backtracking to layer k in the last iteration, see lines 5–6
in Alg. 3), and node r and its successors n are in layers l
and l + 1, respectively, then k ≤ l and bs[m].fmax ≥ ub
for l + 1 ≤ m ≤ |VG |. If n.visited = true, then we must
have called ExpandNode to generate successors of n in the
last iteration. For a successor x of n in layer l + 2, if x is
inadmissible pruned, then f (x) ≥ bs[l + 1].fmax ≥ ub, thus
we safely prune x; otherwise, we consider a successor of x and
repeat this decision process until a leaf node z. Then, it must
satisfy that f (z) = g(z) ≥ ub. Thus, none of descendants of n
can produce tighter ub. So, we safely prune it.
Lemma 2. A node r is visited at most O(|VQ |) times.
Proof: For a node r in layer l, it generates at most
|VQ | + 1 successors by GenSuccr. In order to fully generate
all successors in layer l + 1, we backtrack to this layer at most
|open[l]|·(|VQ |+1)/w ≤ |VQ |+1 times as |open[l]| ≤ w. After
that, when we visit r once again, all successors of r are either
pruned or marked, thus we safely prune them by Lemma 1.
So, r cannot produce tighter ub in this iteration and we safely
prune it, i.e., lines 12–13 in ExpandNode. Plus the first time
when generating r, we totally visit r at most |VQ | + 3 times,
i.e., O(|VQ |).
Theorem 4. Given two graphs G and Q, BSS GED must
return ged(G, Q).
Proof: By Lemma 2, a node is visited at most O(|VQ |)
times, thus all nodes are totally visited at most O(|VQ |SR )
times (see SR in Section III-D), which is finite. So, BSS GED
always terminates. In Search, we always update ub =
min{ub, g(r)} each time. Thus, ub becomes more and more
tight. Next, we prove that ub converges to ged(G, Q) when
BSS GED terminates by contradiction.
Suppose that ub > ged(G, Q). Let r and n be the leaf nodes
whose edit cost is ub and ged(G, Q), respectively. Let x in
layer l be the common ancestor of r and n, which is farthest
from root. Let z in layer l + 1 be a successor of x, which is
the ancestor of n. Then f (z) ≤ f (n) = ged(G, Q) < ub.
For z, it is not in the path from root to r, thus it must be
pruned in an iteration, i.e., f (z) ≥ ub or z.visited = false,
line 8 in ExpandNode (if z has been inadmissible pruned, we
backtrack and shift the range of bs[l] to re-expand it until that z
is pruned or marked). For the case f (z) ≥ ub, it contradicts
that f (z) < ub, and for the other case, we conclude that
f (n) ≥ ub by using the same analysis in Lemma 1, which
contradicts that f (n) < ub. Thus, ub = ged(G, Q).
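To make the memory/backtracking trade-off concrete, the following stand-alone C++ fragment (ours, illustrative only; Node, truncateLayer and fmaxOut are hypothetical names) sketches the layer-truncation step of procedure Search (lines 10–13): it keeps the best w successors and records the smallest f-value among the inadmissibly pruned nodes, which becomes the new right boundary bs.top().fmax so that exactly those nodes can be re-expanded after backtracking:

#include <algorithm>
#include <cstddef>
#include <vector>

struct Node { int f; /* g(r) + h(r) */ int id; };

// Keeps the w nodes with smallest f in `layer`; if anything is pruned,
// records the smallest pruned f-value in fmaxOut so that the beam stack
// can later re-expand exactly the nodes with f >= fmaxOut.
void truncateLayer(std::vector<Node>& layer, std::size_t w, int ub, int& fmaxOut) {
    fmaxOut = ub;                       // default right boundary of the interval
    if (layer.size() <= w) return;      // nothing is inadmissibly pruned
    std::nth_element(layer.begin(), layer.begin() + w, layer.end(),
                     [](const Node& a, const Node& b) { return a.f < b.f; });
    int prunedMin = layer[w].f;         // smallest f among the pruned suffix
    for (std::size_t i = w + 1; i < layer.size(); ++i)
        prunedMin = std::min(prunedMin, layer[i].f);
    fmaxOut = prunedMin;                // shrink bs.top().fmax to this value
    layer.resize(w);                    // keep only the best w nodes
}

With w = 1 this degenerates to a depth-first strategy, which is the special case mentioned in Section VII-C.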
V. SEARCH SPACE PRUNING
In BSS GED, for a node r, if f (r) = g(r) + h(r) ≥ ub,
then we safely prune r. As g(r) is the irreversible edit cost,
Algorithm 3: BSS GED(G, Q, w)
1: ψroot ← ∅, bs ← ∅, open[] ← ∅, new[] ← ∅, l ← 0, ub ← ∞;
2: bs.push([0, ub)), open[0].push(root);
3: while bs ≠ ∅ do
4:   Search(l, ub, bs, open, new);
5:   while bs.top().fmax ≥ ub do
6:     bs.pop(), l ← l − 1;
7:     if l = −1 then
8:       return ub;
9:   bs.top().fmin ← bs.top().fmax , bs.top().fmax ← ub;
10: return ub;

procedure Search(l, ub, bs, open, new)
1: PQL ← open[l], PQLL ← ∅;
2: while PQL ≠ ∅ or PQLL ≠ ∅ do
3:   while PQL ≠ ∅ do
4:     r ← arg min_n {f(n) : n ∈ PQL};
5:     PQL ← PQL \ {r};
6:     if ψr is a complete graph mapping then
7:       ub ← min{ub, g(r)}, return;
8:     succ ← ExpandNode(r, l, ub, open, new);
9:     PQLL ← PQLL ∪ succ;
10:   if |PQLL| > w then
11:     keepNodes ← the best w nodes in PQLL;
12:     bs.top().fmax ← min{f(n) : n ∈ PQLL ∧ n ∉ keepNodes};
13:     PQLL ← keepNodes;
14:   open[l + 1] ← PQLL, PQL ← PQLL;
15:   PQLL ← ∅, l ← l + 1, bs.push([0, ub));

procedure ExpandNode(r, l, ub, open, new)
1: expand ← ∅;
2: if r.visited = false then
3:   succ ← GenSuccr(r);
4:   new[H(r)] ← succ, r.visited ← true;
5: else
6:   succ ← new[H(r)];
7: foreach n ∈ succ do
8:   if f(n) ≥ ub or n.visited = true then
9:     new[H(n)] ← ∅;
10:   else if bs.top().fmin ≤ f(n) < bs.top().fmax then
11:     expand ← expand ∪ {n};
12: if ∀n ∈ succ, f(n) ≥ ub or n.visited = true then
13:   open[l] ← open[l] \ {r}, new[H(r)] ← ∅;
14: return expand;
the upper bound ub and lower bound h(r) are the keys to
perform pruning. Here, we give two heuristics to prune the
search space as follows: (1) proposing an efficient heuristic
function to obtain a tighter h(r); (2) ordering the vertices in G to enable fast discovery of a tighter ub.
A. Estimating h(r)
Let P be an optimal edit path that transforms G to Q,
then it contains at least max{|VG |, |VQ |} − |ΣVG ∩ ΣVQ | edit
operations performed on vertices. Next, we only consider the
edit operations in P performed on edges. Assuming that we
first delete γ1 edges to obtain G1 , then insert γ2 edges to
obtain G2 , and finally change γ3 edge labels to obtain Q.
When transforming G to G1 by deleting γ1 edges, we
have ΣEG1 ⊆ ΣEG , thus |ΣEG ∩ ΣEQ | ≥ |ΣEG1 ∩ ΣEQ |.
When transforming G1 to G2 by inserting γ2 edges, for
each inserted edge, we no longer change its label, thus
|ΣEG2 ∩ ΣEQ | = |ΣEG1 ∩ ΣEQ | + γ2 . When transforming G2 to Q by changing γ3 edge labels, we need to
substitute at least |ΣEQ | − |ΣEG2 ∩ ΣEQ | edge labels, thus
γ3 ≥ |ΣEQ | − |ΣEG2 ∩ ΣEQ |. So, we have
|ΣEG ∩ ΣEQ | + γ2 + γ3 ≥ |EQ |.
(3)
Let lb(G, Q) = max{|VG |, |VQ |} − |ΣVG ∩ ΣVQ | + Σ_{i=1}^{3} γi ,
then ged(G, Q) ≥ lb(G, Q). Obviously, the lower bound
lb(G, Q) should be as tight as possible. In order to achieve
this goal, we utilize the degree sequence of a graph.
For a vertex u in G, its degree du is the number of edges
adjacent to u. The degree sequence δG = [δG [1], . . . , δG [|VG |]]
of G is a permutation of d1 , . . . , d|VG | such that δG [i ] ≥ δG [j ]
for i < j . For unequal size G and Q, we extend δG and δQ
as δ′G = [δG [1], . . . , δG [|VG |], 0_1 , . . . , 0_{|V |−|VG |} ] and δ′Q = [δQ [1], . . . , δQ [|VQ |], 0_1 , . . . , 0_{|V |−|VQ |} ], resp., where |V | = max{|VG |, |VQ |}. Let ∆1 (G, Q) = ⌈Σ_{δ′G [i]>δ′Q [i]} (δ′G [i] − δ′Q [i])/2⌉ and ∆2 (G, Q) = ⌈Σ_{δ′G [i]≤δ′Q [i]} (δ′Q [i] − δ′G [i])/2⌉, for 1 ≤ i ≤ |V |; then we give the respective lower bounds of γ1 and γ2 as follows:
Theorem 5 ([14]). Given two graphs G and Q, we have γ1 ≥
∆1 (G, Q) and γ2 ≥ ∆2 (G, Q).
Based on inequality (3) and Theorem 5, we then establish
the following lower bound of GED in Theorem 6.
Theorem 6. Given two graphs G and Q, we
have ged(G, Q) ≥ LB(G, Q), where LB(G, Q) =
max{|VG |, |VQ |} − |ΣVG ∩ ΣVQ | + max{∆1 (G, Q) +
∆2 (G, Q), ∆1 (G, Q) + |EQ | − |ΣEG ∩ ΣEQ |}.
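As an illustration only (ours; lowerBoundGED and multisetCommon are hypothetical names, and the minimal Graph type of the Section II-A sketch is assumed), LB(G, Q) can be computed directly from the label multisets and the zero-padded degree sequences:

#include <algorithm>
#include <iterator>
#include <map>
#include <string>
#include <utility>
#include <vector>

struct Graph {                                        // minimal labeled graph (see Section II-A sketch)
    int n = 0;
    std::vector<std::string> vlab;
    std::map<std::pair<int,int>, std::string> elab;
};

// Size of the multiset intersection of two label collections.
static int multisetCommon(std::vector<std::string> a, std::vector<std::string> b) {
    std::sort(a.begin(), a.end());
    std::sort(b.begin(), b.end());
    std::vector<std::string> c;
    std::set_intersection(a.begin(), a.end(), b.begin(), b.end(), std::back_inserter(c));
    return (int)c.size();
}

// Lower bound LB(G, Q) of Theorem 6.
int lowerBoundGED(const Graph& G, const Graph& Q) {
    // vertex part: max{|VG|, |VQ|} - |Sigma_VG intersect Sigma_VQ|
    int vPart = std::max(G.n, Q.n) - multisetCommon(G.vlab, Q.vlab);

    // degree sequences padded with zeros to the same length |V| and sorted decreasingly
    int V = std::max(G.n, Q.n);
    std::vector<int> dG(V, 0), dQ(V, 0);
    for (const auto& e : G.elab) { ++dG[e.first.first]; ++dG[e.first.second]; }
    for (const auto& e : Q.elab) { ++dQ[e.first.first]; ++dQ[e.first.second]; }
    std::sort(dG.rbegin(), dG.rend());
    std::sort(dQ.rbegin(), dQ.rend());
    int over = 0, under = 0;
    for (int i = 0; i < V; ++i) {
        if (dG[i] > dQ[i]) over += dG[i] - dQ[i];
        else               under += dQ[i] - dG[i];
    }
    int d1 = (over + 1) / 2, d2 = (under + 1) / 2;    // Delta_1 and Delta_2 (Theorem 5)

    // edge-label part: max{Delta_1 + Delta_2, Delta_1 + |EQ| - |Sigma_EG intersect Sigma_EQ|}
    std::vector<std::string> lG, lQ;
    for (const auto& e : G.elab) lG.push_back(e.second);
    for (const auto& e : Q.elab) lQ.push_back(e.second);
    int ePart = std::max(d1 + d2, d1 + (int)Q.elab.size() - multisetCommon(lG, lQ));

    return vPart + ePart;
}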
Next, we discuss how to estimate h(r) based on Theorem 6.
Let ψr = {(ui1 → vj1 ), . . . , (uil → vjl )} be the partial
mapping associated with r, then we divide G into two parts Gr1
and Gr2 , where Gr1 is the mapped part of G, s.t., VGr1 =
{ui1 , . . . , uil } and EGr1 = {e(u, v) : u, v ∈ VGr1 ∧ e(u, v) ∈
EG }, and Gr2 is the unmapped part, s.t., VGr2 = VG \VGr1 and
EGr2 = {e(u, v) : u, v ∈ VGr2 ∧ e(u, v) ∈ EG }. Similarly, we
also obtain Qr1 and Qr2 . For r, by Theorem 6 we know that LB(Gr2 , Qr2 ) is a lower bound of ged(Gr2 , Qr2 ) and hence we can adopt it as h(r). However, LB(Gr2 , Qr2 ) does not cover the potential edit cost on the edges between Gr1 (resp. Qr1 ) and Gr2 (resp. Qr2 ).
Definition 7 (Outer Edge Set). For a vertex u in Gr1 , we define
its outer edge set as Ou = {e(u, v) : v ∈ VGr2 ∧e(u, v) ∈ EG },
which consists of edges adjacent to u that neither belong to
EGr1 nor EGr2 .
Correspondingly, Oψ(u) is the outer edge set of ψ(u).
Note that, if ψ(u) = v n , then Oψ(u) = ∅. Thus, ΣOu =
{L(e(u, v)) : e(u, v) ∈ Ou } is the label multiset of Ou . In
order to make Ou and Oψ(u) have the same label multiset,
assuming that we first delete ξ1u and then insert ξ2u edges on u,
and finally substitute ξ3u labels on the outer edges adjacent to u.
Similar to the previous analysis used to obtain inequality (3), we have

    |Ou | − ξ1u + ξ2u = |Oψ(u) |,   |ΣOu ∩ ΣOψ(u) | + ξ2u + ξ3u ≥ |Oψ(u) |.      (4)

Thus, Σ_{i=1}^{3} ξiu ≥ |Oψ(u) | − |ΣOu ∩ ΣOψ(u) |. As |Ou | + ξ2u = |Oψ(u) | + ξ1u , we have Σ_{i=1}^{3} ξiu ≥ |Oψ(u) | − |ΣOu ∩ ΣOψ(u) | + ξ1u = |Ou | − |ΣOu ∩ ΣOψ(u) | + ξ2u ≥ |Ou | − |ΣOu ∩ ΣOψ(u) |. So, Σ_{i=1}^{3} ξiu ≥ max{|Ou |, |Oψ(u) |} − |ΣOu ∩ ΣOψ(u) |. Adding over all vertices in Gr1 , we obtain the lower bound LB1r as follows:

    LB1r = LB(Gr2 , Qr2 ) + Σ_{u∈VGr1} (max{|Ou |, |Oψ(u) |} − |ΣOu ∩ ΣOψ(u) |).      (5)
Definition 8 (Outer Vertex Set). For a vertex u in Gr1 , we
define its outer vertex set as Au = {v : v ∈ VGr2 ∧ e(u, v) ∈
EG }, which consists of vertices in Gr2 adjacent to u.
Correspondingly, Aψ(u) denotes the outer vertex set of ψ(u). Note that, if ψ(u) = v^n , then Aψ(u) = ∅. Thus, ArG = ⋃_{u∈VGr1} Au denotes the set of vertices in Gr2 adjacent to those outer edges between Gr1 and Gr2 . Similarly, we obtain ArQ = ⋃_{z∈VQr1} Az . If |ArG | ≤ |ArQ |, then we need to insert at least |ArQ | − |ArG | outer edges on some vertices in Gr1 , hence Σ_{u∈VGr1} ξ2u ≥ |ArQ | − |ArG |; otherwise, Σ_{u∈VGr1} ξ1u ≥ |ArG | − |ArQ |. Considering equation (4), for a vertex u in Gr1 , we have ξ2u + ξ3u ≥ |Oψ(u) | − |ΣOu ∩ ΣOψ(u) |. Thus, Σ_{u∈VGr1} (ξ1u + ξ2u + ξ3u ) ≥ Σ_{u∈VGr1} (|Oψ(u) | − |ΣOu ∩ ΣOψ(u) |) + max{0, |ArG | − |ArQ |}. As |Ou | + ξ2u = |Oψ(u) | + ξ1u , we have Σ_{u∈VGr1} (ξ1u + ξ2u + ξ3u ) ≥ Σ_{u∈VGr1} (|Ou | − |ΣOu ∩ ΣOψ(u) |) + max{0, |ArQ | − |ArG |}. So, we obtain the lower bounds LB2r and LB3r as follows:

    LB2r = LB(Gr2 , Qr2 ) + Σ_{u∈VGr1} (|Oψ(u) | − |ΣOu ∩ ΣOψ(u) |) + max{0, |ArG | − |ArQ |}.      (6)

    LB3r = LB(Gr2 , Qr2 ) + Σ_{u∈VGr1} (|Ou | − |ΣOu ∩ ΣOψ(u) |) + max{0, |ArQ | − |ArG |}.      (7)

Based on the above lower bounds LB1r , LB2r and LB3r , we adopt h(r) = max{LB1r , LB2r , LB3r } as the heuristic function to estimate the edit cost of a node r in BSS GED.
to estimate the edit cost of a node r in BSS GED.
Example 5. Consider graphs G and Q in Figure 4. For a node r
associated with a partial mapping ψ(r) = {(u1 → v1 ), (u2 →
v2 )}, then Gr2 = ({u3 , u4 , u5 }, {e(u3 , u5 ), e(u4 , u5 )}, L) and
Qr2 = ({v3 , v4 , v5 , v6 }, {e(v3 , v6 ), e(v4 , v6 ), e(v5 , v6 )}, L). By
Theorem 6, we compute LB(Gr2 , Qr2 ) = 2. Considering
vertices u1 and u2 that have been processed, we have Ou1 =
{e(u1 , u3 ), e(u1 , u4 )} and Ou2 = {e(u2 , u4 )}, and then obtain
ΣOu1 = {a, a} and ΣOu2 = {b}. Similarly, we have
LB1r ,
LB2r
|ArG |}.
ΣOv1 = {a, a} and ΣOv2 = {b}. Thus LB1r = LB(Gr2 , Qr2 ) +
P
u∈{u1 ,u2 } (max{|Ou |, |Oψ(u) |} − |ΣOu ∩ ΣOψ(u) |) = 2. As
ArG = {u3 , u4 } P
and ArQ = {v3 , v4 , v5 }, we obtain LB2r =
r
r
LB(G2 , Q2 ) + u∈{u1 ,u2 } (|Oψ(u) | − |ΣOu ∩ ΣOψ(u) |) +
max{0,
|ArG | − |ArQ |} = 2, and LB3r = LB(Gr2 , Qr2 ) +
P
r
r
u∈{u1 ,u2 } (|Ou | − |ΣOu ∩ ΣOψ(u) |) + max{0, |AQ | − |AG |}
r
r
r
= 3. So, h(r) = max{LB1 , LB2 , LB3 } = max{2, 2, 3} = 3.
u1
A
u3
b
A
A
a
u5
u2
B
a
a
u4 v3
b
C
G
v1
a
A
A
v2
b
B
a
v4
A
a a
v6 C
B
v5
b
Q
Fig. 4: Example of two comparing graphs G and Q.
B. Ordering Vertices in G
In BSS GED, we use GenSuccr to generate successors.
However, we need to determine the processing order of
vertices in G at first, i.e., (ui1 , . . . , ui|VG | ) (see Section II-C).
The most primitive way is to adopt the default vertex order
in G, i.e., (u1 , . . . , u|VG | ), which is used in A*-GED [5] and DF-GED [16]. However, this may be inefficient as it does not consider the structural relationship between vertices.
For vertices u and v in G such that e(u, v) ∈ EG , if u
has been processed, then in order to early obtain the edit cost
on e(u, v), we should process v as soon as possible. Hence,
our policy is to traverse G in a depth-first order to obtain
(ui1 , . . . , ui|VG | ). However, starting from different vertices to
traverse, we may obtain different orders.
In Section V-A, we have proposed the heuristic estimate
function h(r), where an important part is LB(Gr2 , Qr2 ) presented in Theorem 6. As we know, the more structure Gr2
and Qr2 keep, the tighter lower bound LB(Gr2 , Qr2 ) we may
obtain. As a result, we preferentially consider vertices with
small degrees. This is because, when we process those vertices first, the remaining unmapped parts Gr2 and Qr2 retain as much structure as possible.
Definition 9 (Vertex Partial Order). For two vertices u and v
in G, we define that u ≺ v if and only if du < dv or du =
dv ∧ u < v.
VI. EXTENSION OF BSS GED
In this section, we extend BSS GED to solve the GED
based graph similarity search problem: Given a graph database
G = {G1 , G2 , . . . }, a query graph Q and an edit distance
threshold τ , the problem aims to find all graphs Gi in G that satisfy ged(Gi , Q) ≤ τ . As computing GED is an NP-hard problem, most existing methods, such as [12], [14], [19], [20], use the filter-and-verify scheme, that is, first filtering some graphs in G to obtain candidate graphs, and then verifying
them.
Here, we also use this strategy. For each data graph Gi ,
we compute the lower bound LB(Gi , Q) by Theorem 6. If
LB(Gi , Q) > τ , then ged(Gi , Q) ≥ LB(Gi , Q) > τ and hence
we filter Gi ; otherwise, Gi becomes a candidate graph.
For a candidate graph Gi , we need to compute ged(Gi , Q)
to verify it. The standard method is that we first compute
ged(Gi , Q) and then determine whether Gi is a required graph by checking ged(Gi , Q) ≤ τ . Incorporating τ into BSS GED, we can further accelerate it as follows: First, we set the initial upper bound ub to τ + 1 (line 1 in Alg. 3). Then, during the execution of BSS GED, when we reach a leaf node r, if the cost of r (i.e., g(r)) satisfies g(r) ≤ τ , then Gi must be a required graph and we stop BSS GED. The reason
is that g(r) is an upper bound of GED and hence we know
that ged(Gi , Q) ≤ g(r) ≤ τ .
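A minimal C++ sketch of this filter-and-verify loop (ours, illustrative only; lowerBoundGED is the Theorem 6 bound sketched in Section V-A, and bssGedUpTo stands for a hypothetical wrapper around BSS GED that is started with ub = τ + 1 and may stop early once a leaf with cost at most τ is found) is:

#include <vector>

struct Graph;  // labeled graph type as in the Section II-A sketch

// Assumed available (hypothetical signatures):
int lowerBoundGED(const Graph& G, const Graph& Q);        // LB(G, Q), Theorem 6
int bssGedUpTo(const Graph& G, const Graph& Q, int tau);  // BSS GED run with initial ub = tau + 1

// Returns the indices of all database graphs within edit distance tau of Q.
std::vector<int> similaritySearch(const std::vector<Graph>& db, const Graph& Q, int tau) {
    std::vector<int> answers;
    for (int i = 0; i < (int)db.size(); ++i) {
        if (lowerBoundGED(db[i], Q) > tau) continue;   // filtered: ged >= LB > tau
        if (bssGedUpTo(db[i], Q, tau) <= tau)          // verification by BSS GED
            answers.push_back(i);
    }
    return answers;
}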
Algorithm 4: DetermineOrder(G)
1: F[1..|VG |] ← false, order[] ← ∅, count ← 1;
2: rank ← sort vertices in VG according to the partial order ≺;
3: for i ← 1 to |VG | do
4:   u ← rank[i];
5:   if F[u] = false then
6:     DFS(u, F, rank, order, count);
7: return order;

procedure DFS(u, F, rank, order, count)
1: order[count] ← u, count ← count + 1, F[u] ← true;
2: Nu ← {v : v ∈ VG ∧ e(u, v) ∈ EG };
3: while |Nu | > 0 do
4:   v ← arg min_j {rank[j] : j ∈ Nu };
5:   Nu ← Nu \ {v};
6:   if F[v] = false then
7:     DFS(v, F, rank, order, count);
In Algorithm 4, we give the method to compute the order
(ui1 , . . . , ui|VG | ). First, we sort vertices to obtain a global
order array rank based on the partial order ≺ (line 2). Then,
we call DFS to traverse G in a depth-first order (lines 3–6).
In DFS, we sequentially insert u into order and then mark u
as visited, i.e., set F [u] = true (line 1). Then, we obtain the
set Nu of vertices adjacent to u (line 2). Finally, we select a
smallest unvisited vertex v from Nu based on the partial order
≺, and then recursively call DFS to traverse the subtree rooted
at v (lines 3–7).
Example 6. For the graph G in Figure 4, we first compute rank = [u2 , u1 , u3 , u5 , u4 ]. Starting from u2 , we traverse G in a depth-first order, and finally obtain order = [u2 , u4 , u1 , u3 , u5 ]. Thus, we process vertices in G in the order (u2 , u4 , u1 , u3 , u5 ) in BSS GED.
VII. EXPERIMENTAL RESULTS
In this section, we perform comprehensive experiments and
then analyse the obtained results.
A. Datasets and Settings
We choose several real and synthetic datasets used in the
experiment, described as follows:
Fig. 5: Effect of GenSuccr on the performance of BSS GED
• AIDS1 . It is an antivirus screen compound dataset from
the Development and Therapeutics Program in NCI/NIH,
which contains 42687 chemical compounds. We generate
labeled graphs from these chemical compounds and omit
Hydrogen atom as did in [13], [14].
• PROTEIN2 . It is a protein database from the Protein
Data Bank, constituted of 600 protein structures. Vertices
represent secondary structure elements and are labeled
with their types (helix, sheet or loop). Edges are labeled
to indicate if two elements are neighbors or not.
• Synthetic. The synthetic dataset is generated by the synthetic graph data generator GraphGen3 . In the experiment,
we generate a density graph dataset S1K.E30.D30.L20,
which means that this dataset contains 1000 graphs; the
average number of edges in each graph is 30; the density4
of each graph is 30%; and the distinct vertex and edge
labels are 20 and 5, respectively.
Due to the hardness of computing GED, existing methods,
such as A*-GED [5], DF-GED [16] and CSI GED [4], cannot obtain the GED of large graphs within a reasonable time and memory budget. Therefore, for AIDS and PROTEIN, we exclude large graphs with more than 30 vertices as done in [1], and then
randomly select 10000 and 300 graphs to make up the datasets
AIDS-10K and PROTEIN-300. For S1K.E30.D30.L20, we use
the entire dataset.
As suggested in [1], [4], for each dataset, we randomly select 6 query groups, where each group consists of 3 data graphs
having three consecutive graph sizes. Specifically, the number
of vertices of each group is in the range: 6 ± 1, 9 ± 1, 12 ±
1, 15 ± 1, 18 ± 1 and 21 ± 1.
For the tested database D = {D1 , D2 , . . . } and query group
T = {T1 , T2 , . . . }, we need to perform |D| × |T | GED computations. For each GED computation, we set the
1 http://dtp.nci.nih.gov/docs/aids/aidsdata.html
2 http://www.fki.inf.unibe.ch/databases/iam-graph-database/download-theiam-graph-database
3 http://www.cse.ust.hk/graphgen/
4 the density of a graph G is defined as 2|EG | / (|VG |(|VG | − 1)).
available time and memory be 1000s and 24GB, respectively,
and then define the metric average solve ratio as follows:
    sr = (Σ_{i=1}^{|D|} Σ_{j=1}^{|T|} solve(Di , Tj )) / (|D| × |T |).      (8)

where solve(Di , Tj ) = 1 if we obtain ged(Di , Tj ) within both 1000s and 24GB, and solve(Di , Tj ) = 0 otherwise. Obviously, sr should be as large as possible.
We have conducted all experiments on a HP Z800 PC with
a 2.67GHz CPU and 24GB memory, running the Ubuntu 12.04
operating system. We implement our algorithm in C++, with
-O3 to compile and run. For BSS GED, we set the beam
width w = 15 for the sparse graphs in datasets AIDS-10K and
PROTEIN-300, and w = 50 for the dense graphs in dataset S1K.E30.D30.L20.
B. Evaluating GenSuccr
In this section, we evaluate the effect of GenSuccr on
the performance of BSS GED. To make a comparison, we
replace GenSuccr with BasicGenSuccr (i.e., Alg. 1) and
then obtain BSS GEDb , where BasicGenSuccr is the basic
method of generating successors used in A*-GED [5] and DF-GED [16]. In BSS GEDb , we also use the same heuristics
proposed in Section V. Figure 5 shows the average solve ratio
and running time.
As shown in Figure 5, the average solve ratio of BSS GED
is much higher than that of BSS GEDb , and the gap between them becomes larger as the query graph size increases. This indicates that GenSuccr provides more reduction on the search space for larger graphs. Regarding the
running time, BSS GED achieves the respective 1x–5x, 0.4x–
1.5x and 0.1x–4x speedup over BSS GEDb on AIDS-10K,
PROTEIN-300 and S1K.E30.D50.L20. Thus, we create a small
search space by GenSuccr.
C. Evaluating BSS GED
In this section, we evaluate the effect of beam-stack search
and heuristics on the performance of BSS GED. We fix
datasets AIDS-10K, PROTEIN-300 and S1K.E30.D30.L20 as
the tested datasets and select their corresponding groups 15±1
as the query groups, respectively.
(1). Effect of w
As we know, beam-stack search achieves a flexible tradeoff between available memory and expensive backtracking by
setting different w, thus we vary w to evaluate its effect on
the performance. Figure 6 shows the average solve ratio and
running time.
Fig. 6: Effect of w on the performance of BSS GED.
By Figure 6, we obtain that the average solve ratio first
increases and then decreases, and achieves maximum when
w = 15 on AIDS-10K and PROTEIN-300, and w = 50 on
S1K.E30.D30.L20. Several factors may contribute to this trend: (1) When w is too small, BSS GED may be
trapped into a local suboptimal solution and hence produces
lots of backtracking. (2) When w is too large, BSS GED
expands too many unnecessary nodes in each layer. Note that,
depth-first search is a special case of beam-stack search when
w = 1. Thus, beam-stack search performs better than depthfirst search. As previously demonstrated in [16], depth-first
search performs better than best-first search. Therefore, we
conclude that the beam-stack search paradigm outperforms
the best-first and depth-first search paradigms for the GED
computation.
[Figure 7 panels: average solve ratio (%) and average running time (s) of Basic, +h1 and +h2 on AIDS-10K, PROTEIN-300 and S1K.E30.D30.L20.]
Fig. 7: Effect of heuristics on the performance of BSS GED.
(2). Effect of Heuristics
In this part, we evaluate the effect of the two proposed heuristics by injecting them one by one into the base algorithm. We use Basic for the baseline algorithm without any heuristics, +h1 for the algorithm obtained from Basic by incorporating the first heuristic (Section V-A), and +h2 for the algorithm obtained from +h1 by incorporating the second heuristic (Section V-B). Figure 7 plots the average solve ratio and running time.
By Figure 7, the average solve ratio of Basic is only 15% of that of +h1, which means that the proposed heuristic function provides powerful pruning ability. Considering the running time, +h1 brings 50x, 2x and 9x speedup over Basic on AIDS-10K, PROTEIN-300 and S1K.E30.D30.L20, respectively. Moreover, compared with +h1, the running time of +h2 decreases by 21%, 30% and 41% on AIDS-10K, PROTEIN-300 and S1K.E30.D30.L20, respectively. Thus, the two proposed heuristics greatly boost the performance.
D. Comparing with Existing GED Methods
In this section, we compare BSS GED with the existing methods A*-GED [5], DF-GED [16] and CSI GED [4]. Figure 8 shows the average solve ratio and running time.
By Figure 8, BSS GED performs the best in terms of average solve ratio. A*-GED cannot obtain the GED of graphs with more than 12 vertices within 24GB, and DF-GED cannot finish the GED computation of graphs with more than 15 vertices in 1000s. Besides, for the dense graphs in S1K.E30.D30.L20, the average solve ratio of CSI GED drops sharply as the query graph size increases, which confirms that it is unsuitable for dense graphs.
Regarding the running time, BSS GED still performs the
best in most cases. DF-GED performs better than A*-GED,
which is consistent with the previous results in [16]. Compared
with DF-GED, BSS GED achieves 50x–500x, 20x–2000x
and 15x–1000x speedup on AIDS-10K, PROTEIN-300 and
S1K.E30.D30.L20, respectively. Though CSI GED performs
better than BSS GED on AIDS-10K when the query graph
size is less than 9, BSS GED achieves 2x–5x speedup over
CSI GED when the graph size is greater than 12. Besides,
for S1K.E30.D30.L20, BSS GED achieves 5x–95x speedup
over CSI GED. Thus, BSS GED is efficient for the GED
computation on sparse as well as dense graphs.
E. Performance Evaluation on Graph Similarity Search
In this part, we evaluate the performance of BSS GED as a
standard graph similarity search query method by comparing it
with CSI GED and GSimJoin [19]. For each dataset described
in Section VII-A, we use its entire dataset and randomly select
100 graphs from it as query graphs. Figure 9 shows the total
running time (i.e., the filtering time plus the verification time).
It is clear from Figure 9 that BSS GED has the best performance in most cases, especially for large τ . For GSimJoin, it
cannot finish when τ ≥ 8 in AIDS and PROTEIN because of
the huge memory consumption. Compared with GSimJoin for
τ values where it can finish, BSS GED achieves the respective
1.6x–15000x, 3.8x–800x and 2x–3000x speedup on AIDS,
PROTEIN and S1K.E30.D30.L20. Considering CSI GED, it
performs slightly better than BSS GED when τ ≤ 4 on AIDS.
However, BSS GED performs much better than CSI GED
when τ ≥ 6 and the gap between them becomes larger as τ
increases. Specifically, BSS GED achieves 2x–28x, 1.2x–
100000x and 1.1x–187x speedup over CSI GED on AIDS,
PROTEIN and S1K.E30.D30.L20, respectively. As previously
discussed in [4], CSI GED performs much better than the
state-of-the-art graph similarity search query methods. Thus,
we conclude that BSS GED can efficiently finish the graph
similarity search and runs much faster than existing methods.
[Figure 8 panels: average solve ratio (%) and average running time (s) of A*-GED, DF-GED, CSI_GED and BSS_GED versus query group on AIDS-10K, PROTEIN-300 and S1K.E30.D30.L20.]
Fig. 8: Performance comparison with existing state-of-the-art GED methods.
VIII. RELATED WORKS
Recently, the GED computation has received considerable attention. A*-GED [5], [7] and DF-GED [16] are two major vertex-based mapping methods, which utilize the best-first and depth-first search paradigms, respectively. Provided that the heuristic function estimates a lower bound of the GED of the unmapped parts, A*-GED guarantees that the first complete mapping found induces the GED of the comparing graphs, which seems very attractive. However, A*-GED stores numerous partial mappings, resulting in a huge memory consumption; as a result, it is only suitable for small graphs. To overcome this bottleneck, DF-GED performs a depth-first search, which only stores the partial mappings on a path from the root to a leaf node. However, it may easily be trapped in a locally suboptimal solution, leading to massive expensive backtracking. CSI GED [4] is an edge-based mapping method based on common substructure isomorphism enumeration, which has an excellent performance on sparse graphs. However, the edge-based search space of CSI GED is exponential in the number of edges of the comparing graphs, making it unsuitable for dense graphs. Note that CSI GED only works for the uniform cost model, and [1] generalized it to cover the non-uniform model.
Another line of work closely related to this paper is the GED-based graph similarity search problem. Due to the hardness of computing GED, existing graph similarity search query methods [12], [14], [17], [18], [19], [20] all adopt the filter-and-verify schema, that is, they first filter graphs to obtain a candidate set and then verify the candidate graphs. In the verification phase, most of the existing methods adopt A*-GED as their verifier. As discussed above, BSS GED greatly outperforms A*-GED, hence it can also be used as a standard verifier to accelerate those graph similarity search query methods.
IX. CONCLUSIONS AND FUTURE WORK
In this paper, we present a novel vertex-based mapping
method for the GED computation. First, we reduce the number
of invalid and redundant mappings involved in the GED
computation and then create a small search space. Then, we
utilize beam-stack search to efficiently traverse the search
space to compute GED, achieving a flexible trade-off between
available memory and expensive backtracking. In addition, we
also give two efficient heuristics to prune the search space.
However, it is still very hard to compute the GED of large graphs within a reasonable time. Thus, approximate algorithms that quickly compute suboptimal GED values are left as future work.
X. ACKNOWLEDGMENTS
The authors would like to thank Kaspar Riesen and Zeina
Abu-Aisheh for providing their source files, and thank Karam
Gouda and Xiang Zhao for providing their executable files.
This work is supported in part by China NSF grants 61173025
and 61373044, and US NSF grant CCF-1017623. Hongwei
Huo is the corresponding author.
REFERENCES
[1] D. B. Blumenthal and J. J. Gamper. Exact computation of graph edit
distance for uniform and non-uniform metric edit costs. In GbRPR,
pages 211–221, 2017.
[2] D. Conte, P. Foggia, C. Sansone, and M. Vento. Thirty years of graph
matching in pattern recognition. Int. J. Pattern Recogn., 18(03):265–298,
2004.
[3] K. Riesen and H. Bunke. Approximate graph edit distance computation by means of
bipartite graph matching. Image Vision Comput., 27(7):950–959, 2009.
[4] K. Gouda and M. Hassaan. CSI GED: An efficient approach for graph
edit similarity computation. In ICDE , pages 256–275, 2016.
[5] K. Riesen, S. Fankhauser, and H. Bunke. Speeding up graph edit distance
computation with a bipartite heuristic. In MLG, pages 21–24, 2007.
[6] P. E. Hart, N. J. Nilsson, and B. Raphael. A formal basis for the heuristic
determination of minimum cost paths. IEEE Trans.SSC., 4(2):100–107,
1968.
[7] K. Riesen, S. Emmenegger, and H. Bunke. A novel software toolkit for
graph edit distance computation. In GbRPR, pages 142–151, 2013.
[8] R. M. Marı́n, N. F. Aguirre, and E. E. Daza. Graph theoretical similarity
approach to compare molecular electrostatic potentials. J. Chem. Inf.
Model., 48(1):109–118, 2008.
[9] A. Robles-Kelly and R. H. Edwin. Graph edit distance from spectral
seriation. IEEE Trans. Pattern Anal Mach Intell., 27(3):365–378, 2005.
[Figure 9 panels: total response time (s) of GSimJoin, CSI_GED and BSS_GED versus GED threshold on AIDS, PROTEIN and S1K.E30.D30.L20.]
Fig. 9: Performance comparison with CSI GED and GSimJoin.
[10] S. Russell and P. Norvig. Artificial intelligence: a modern approach (2nd
ed.). Prentice-Hall, 2002.
[11] R. Zhou and E. A. Hansen. Beam-stack search: Integrating backtracking
with beam search. In ICAPS , pages 90–98, 2005.
[12] G. Wang, B. Wang, X. Yang, and G. Yu. Efficiently indexing large
sparse graphs for similarity search. IEEE Trans. Knowl Data Eng.,
24(3):440–451, 2012.
[13] D. W. Williams, J. Huan, and W. Wang. Graph database indexing using
structured graph decomposition. In ICDE , pages 976–985, 2007.
[14] X. Chen, H. Huo, J. Huan, and J. S. Vitter. MSQ-Index: A succinct
index for fast graph similarity search. arXiv preprint arXiv:1612.09155,
2016.
[15] X. Yan, P. S. Yu, and J. Han. Graph indexing: a frequent structure-based
approach. In SIGMOD, pages 335–346, 2004.
[16] Z. Abu-Aisheh, R. Raveaux, J. Y. Ramel, and P. Martineau. An exact
graph edit distance algorithm for solving pattern recognition problems.
In ICPRAM , pages 271–278, 2015.
[17] Z. Zeng, A. K. H. Tung, J. Wang, J. Feng, and L. Zhou. Comparing
stars: On approximating graph edit distance. PVLDB, 2(1):25–36, 2009.
[18] X. Zhao, C. Xiao, X. Lin, Q. Liu, and W. Zhang. A partition-based
approach to structure similarity search. PVLDB, 7(3):169–180, 2013.
[19] X. Zhao, C. Xiao, X. Lin, and W. Wang. Efficient graph similarity joins
with edit distance constraints. In ICDE , pages 834–845, 2012.
[20] W. Zheng, L. Zou, X. Lian, D. Wang, and D. Zhao. Efficient graph
similarity search over large graph databases. IEEE Trans. Knowl Data
Eng., 27(4):964–978, 2015.
Pure state ‘really’ informationally complete with
rank-1 POVM
Yu Wang1,2, Yun Shang1,3
arXiv:1711.07585v1 [quant-ph] 21 Nov 2017
1 Institute of Mathematics, AMSS, CAS, Beijing, 100190, China
2 University of Chinese Academy of Sciences, Beijing, 100049, China
3 NCMIS, AMSS, CAS, Beijing, 100190, China
[email protected]
Abstract. What is the minimal number of elements in a rank-1 positive operator-valued measure (POVM) which can uniquely determine any
pure state in d-dimensional Hilbert space Hd ? The known result is that
the number is no less than 3d − 2. We show that this lower bound is not
tight except for d = 2 or 4. Then we give an upper bound of 4d−3. For d =
2, many rank-1 POVMs with four elements can determine any pure states
in H2 . For d = 3, we show eight is the minimal number by construction.
For d = 4, the minimal number is in the set of {10, 11, 12, 13}. We show
that if this number is greater than 10, an unsettled open problem can be
solved that three orthonormal bases can not distinguish all pure states
in H4 . For any dimension d, we construct d + 2k − 2 adaptive rank-1
positive operators for the reconstruction of any unknown pure state in
Hd , where 1 ≤ k ≤ d.
Key words: Quantum state tomography, Pure state, Quantum measurement, Rank-1 operators.
1 Introduction
One of the central problems in quantum science and technology is the estimation
of an unknown quantum state, via the measurements on a large number of
copies of this state. Quantum state tomography is the process of determining an
arbitrary unknown quantum state with appropriate measurement strategies.
A quantum state ρ in d-dimensional Hilbert space Hd is described by a density matrix, namely a positive semi-definite, unit-trace d × d matrix; the set of such matrices is denoted Sd. A generalized measurement can be described by a positive operator-valued measure (POVM) [1]. The POVM elements Ek satisfy the completeness condition Σ_k Ek = I. Performing this measurement on a system in state ρ, the probability
of the k-th outcome is given from the Born rule, pk = tr(ρEk ). If the statistics
of the outcome probabilities are sufficient to uniquely determine the state, the
POVM is regarded as informationally complete (IC) [2].
The IC-POVM can give a unique identification of an unknown state, which
should distinguish any pair of different states from the statistics of probabilities.
For example, we consider a POVM, { (1/4)(|0i±|1i)(h0|±h1|), (1/4)(|0i±i|1i)(h0|∓ih1|)}.
It is not an IC-POVM, as the statistics of the outcome probabilities for states
{|0i, |1i} under this measurement are the same. For any different quantum states
ρ1 , ρ2 ∈ Sd , an IC-POVM should distinguish them from the statistics of the
outcome probabilities. That is to say, we have tr(ρ1 Ek ) 6= tr(ρ2 Ek ) for some
elements Ek .
We know that a quantum state ρ in Hd is specified by d^2 − 1 real parameters; the number is reduced by one because tr(ρ) = 1. Caves et al. constructed an IC-POVM which contains the minimal d^2 rank-1 elements [3], i.e., multiples of
projectors onto pure states. If d + 1 mutually unbiased bases (MUBs) exist in
Hd , we can construct an IC-POVM with d(d + 1) elements [4]. MUBs have the
property that all inner products between projectors of different bases labeled by
i and j are equal to 1/d. Another related topic is the symmetric informationally
complete positive operator-valued measure (SIC-POVM) [5]. It is comprised of
d2 rank-1 operators. The inner products of all different operators are equal. This
SIC-POVM appears to exist in many dimensions.
For a state ρ in an n-qubit system, d = 2^n. Thus the cost of measurement resources with these measurement strategies grows exponentially with the increase
of number n. It is important to design schemes with lower outcomes to uniquely
determine the state. This is possible when we consider a priori information about
the states to be characterized.
Denote the rank of a density matrix for state ρ as k, 1 ≤ k ≤ d. And make
a decomposition that Sd = ⊕dk=1 Sd,k , where Sd,k is the set of all the density
matrices with rank k. When k = 1, the state in Sd,1 is pure. A pure state is
specified by d complex numbers, which correspond to 2d real numbers. For the
reason of normalization condition and freedom of a global phase, there are 2d − 2
independent real numbers totally.
Flammia, Silberfarb, and Caves [6] showed that any POVM with less than
2d elements can not distinguish all pair of different states ρ1 , ρ2 in Sd,1 , not
even in a subset S̃d,1 , where Sd,1 \ S̃d,1 is a set of measure zero. They gave a
definition of pure-state informationally complete (PSI-complete) POVM, whose
outcome probabilities are sufficient to determine any pure states (up to a global
phase), except for a set of pure states that is dense only on a set of measure zero.
That is to say, if a pure state was selected at random, then with probability 1 it
would be located in S̃d,1 and be uniquely identified. A PSI-complete POVM with
2d elements is constructed, but not all the elements in this POVM are rank-1.
They constructed another PSI-complete POVM with 3d − 2 rank-1 elements and
conjectured that there exists a rank-1 PSI-complete POVM with 2d elements.
Finkelstein proved this by a precise construction [7]. Moreover, he gave a
strengthened definition of PSIR-completeness, which indicates that all pure
states are uniquely determined. For any pair of different pure states ρ1 , ρ2 ∈ Sd,1 ,
a PSIR-complete POVM should distinguish them. He showed that a rank-1
PSIR-complete POVM must have at least 3d−2 elements and wondered whether
we could reach the lower bound of 3d − 2.
There are a series of studies on the relevant topic. For any pair of different
states ρ1 , ρ2 ∈ Sd,1 , Heinosaari, Mazzarella, and Wolf gave the minimal number
of POVM elements to identify them [8]. The number is 4d − 3 − c(d)α(d), where
c(d) ∈ [1, 2] and α(d) is the number of ones appearing in the binary expansion of
d − 1; the results in papers [20, 21, 11] showed that four orthonormal bases, corresponding to four projective measurements, can distinguish all pure states. For
any pair of different states ρ1 ∈ Sd,1 , ρ2 ∈ Sd , Chen et al. showed that a POVM
must contain at least 5d − 7 elements to distinguish them [12]; Carmeli et al.
gave five orthonormal bases that are enough to distinguish them[13]. For a state
in Sd,k , it can be reconstructed with a high probability with rd log2 (d) outcomes
via compressed sensing techniques [14]. Goyeneche et al. [15] constructed five
orthonormal bases to determine all the coefficients of any unknown input pure
states. The first basis is fixed and used to determine a subset sd,1 ⊂ Sd,1 , where
the pure state belongs to. The other four bases are used to uniquely determine
all the states in sd,1 .
In this paper, we consider the pure-state version of informational completeness with rank-1 POVM. Firstly, we show that the lower bound of 3d − 2 is not
tight in most of the cases. It can be reached when d = 2 and possibly be reached
when d = 4. Then we show a result that there exist a large number of rank-1
PSIR-complete POVMs with 4d − 3 elements. Secondly, we make a discussion
about the rank-1 PSIR-complete POVMs when d = 2, 3, 4. For dimension d = 2
and d = 3, we construct the rank-1 PSIR-complete POVMs with the minimal
number of elements, which are 4 and 8 correspondingly. All the coefficients of
an unknown pure state in H2 and H3 can be calculated by these POVMs. For
dimension d = 4, the minimal number is in the range of {10, 11, 12, 13}. If it is
bigger than 10, an answer can be given to a related unsolved problem, i.e., three
orthonormal bases can not distinguish all pure states in H4 . Lastly, we construct
d + 2k − 2 rank-1 positive self-adjoint operators for the tomography of any input
pure states in Hd , here 1 ≤ k ≤ d. This is an adaptive strategy. For any input
pure state, we use d operators to determine a subset sd,1 ⊂ Sd,1 , where the pure
state belongs to. Together with the other 2k − 2 operators, we can uniquely determine all the pure states in sd,1 . Thus using this adaptive method, any input
pure states can be determined with at most 3d − 2 rank-1 operators.
2 The upper and lower bounds
In this section, we will give the upper and lower bounds of the minimal number
of elements in a rank-1 PSIR-complete POVM. Denote this minimal number as
m1 (d). It is in the range of [4d − 3 − c(d)α(d), 4d − 3].
2.1 Feasibility of 3d-2 for PSIR-complete
In this part, we show that a rank-1 PSIR-complete POVM with 3d − 2 elements
possibly exists when dimension d = 2 or 4. For the other dimensions, any rank-1
POVM with 3d − 2 elements cannot be PSIR-complete. Firstly, we introduce the
concept of PSIR-complete.
Definition 1 : (PSI really-completeness [7]). A pure-state informationally really complete POVM on a d-dimensional quantum system Hd is a POVM whose
outcome probabilities are sufficient to uniquely determine any pure state (up to
a global phase).
As we introduced above, the PSIR-complete POVM can distinguish any pair
of different states ρ1 , ρ2 ∈ Sd,1 . Neglecting the restriction of rank-1, we denote
m0 (d) to be the minimal number of elements in a PSIR-complete POVM. Certainly, a rank-1 PSIR-complete POVM is PSIR-complete. Thus m1 (d) ≥ m0 (d).
From the result in [8], m0 (d) = 4d − 3 − c(d)α(d), where c(d) ∈ [1, 2] and α(d)
is the number of ones appearing in the binary expansion of d − 1. From the conclusion by Finkelstein, m1 (d) ≥ 3d − 2. But it is not clear when they are equal
or whether a greater number than 3d − 2 might be required. Now we compare
the size of m0 (d) and 3d − 2.
Let f (d) = 4d − 3 − c(d)α(d) − (3d − 2). By the definition of α(d), we have
log d ≥ α(d). So f (d) > d − 1 − 2 log d. Define g(d) ≡ d − 1 − 2 log d. Then
g 0 (d) = 1 − 2/d. If d > 2, it holds that g 0 (d) > 0. And when d = 8, g(8) = 1 > 0.
So when d ∈ [8, +∞), m0 (d) > 3d − 2. When d ∈ [2, 7], the true value of m0 (d)
is given in [8]. We have m0 (2, 3, 4, 5, 6, 7) = (4, 8, 10, 16, 18, 23). We compare this
with the value of 3d − 2: (4, 7, 10, 13, 16, 19). As a result, only when d = 2 or 4,
m0 (d) can be 3d − 2. For the other dimensions, m1 (d) ≥ m0 (d) > 3d − 2.
2.2 The upper bound of 4d − 3
In this section, we show that 4d − 3 is the upper bound of m1 (d). This upper bound is given by constructing rank-1 POVMs from the minimal sets of
orthonormal bases which can determine all pure states in Hd .
Definition 2 : Let B0 = {|φk0 i},· · · ,Bm−1 = {|φkm−1 i} be m orthonormal bases
of Hd , k = 0, · · · , d − 1. For different pure states ρ1 , ρ2 ∈ Hd , they are distinguishable if
tr(ρ1 |φkj ihφkj |) 6= tr(ρ2 |φkj ihφkj |)
(1)
for some |φkj i. If any pair of different pure states is distinguishable by B0 , · · · , Bm−1 ,
the bases {Bj } can distinguish all pure states [11].
Obviously m bases correspond to m · d rank-1 projections E_j^k = |φ_j^k ihφ_j^k |, j = 0, · · · , m − 1, k = 0, · · · , d − 1. Since Σ_{k=0}^{d−1} E_j^k = I, we have tr(ρI) = 1 for every pure state ρ. One projection of each basis can be left out, as its probability can be expressed by the others. Thus m(d − 1) rank-1 self-adjoint operators can distinguish all pure states. Can these operators be transformed into a rank-1 PSIR-complete POVM?
From Proposition 3 in paper [8], we know that m(d − 1) self-adjoint operators can be used to construct a POVM with m(d − 1) + 1 elements: A_j^k ≡ (1/2 I + 1/2 ‖E_j^k‖^{−1} E_j^k)/[m(d − 1)], j = 0, · · · , m − 1, k = 0, · · · , d − 2. Then O ≤ A_j^k ≤ I/m(d − 1), and by setting the new element A ≡ I − Σ_{j,k} A_j^k we get a new POVM. This POVM has the same power as the self-adjoint operators {Ek}, as there exists a bijection between the outcome probabilities of both sides.
But not all of the elements are rank-1. The following conversion can keep the
elements of transformed POVM to be rank-1.
Rank-1 conversion: Given n rank-1 positive self-adjoint operators {Ek : k = 1, · · · , n} with G = Σ_{k=1}^{n} Ek > 0, a rank-1 POVM denoted by {Fk : k = 1, · · · , n} can be constructed as Fk = G^{−1/2} Ek G^{−1/2}, and Σ_{k=1}^{n} Fk = I.
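The following minimal NumPy sketch (ours, not from the paper) illustrates the rank-1 conversion; `rank1_conversion` is an assumed helper name, and G^{−1/2} is obtained from an eigendecomposition of the Hermitian matrix G. The toy check uses the four rank-1 projectors from three MUBs in d = 2 that appear later in Section 3.1.

```python
import numpy as np

def rank1_conversion(ops):
    """Turn rank-1 positive operators {E_k} with G = sum_k E_k > 0
    into a POVM {F_k} via F_k = G^{-1/2} E_k G^{-1/2}."""
    G = sum(ops)
    # G is positive definite, so G^{-1/2} exists; compute it from the
    # eigendecomposition of the Hermitian matrix G.
    w, V = np.linalg.eigh(G)
    G_inv_sqrt = V @ np.diag(1.0 / np.sqrt(w)) @ V.conj().T
    return [G_inv_sqrt @ E @ G_inv_sqrt for E in ops]

# Toy check in d = 2 with four rank-1 projectors from three MUBs.
ket = lambda v: np.array(v, dtype=complex).reshape(-1, 1)
proj = lambda v: ket(v) @ ket(v).conj().T
E = [proj([1, 0]), proj([0, 1]),
     proj([1, 1]) / 2, proj([1, 1j]) / 2]
F = rank1_conversion(E)
print(np.allclose(sum(F), np.eye(2)))  # True: {F_k} is a POVM
```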
From the discussion in [6, 7] , if positive operators {Ek } are informationally complete with respect to generic pure states (a set of measure zero can
be neglected), and they can determine all (normalized and unnormalized) pure
states in this set, {Fk } is a PSI-complete POVM. Furthermore, if positive operators {Ek } are informationally complete with respect to all pure states, can the
converted POVM {Fk } be PSIR-complete? Here we give a sufficient condition.
Theorem 1. Let {Ek } be a set of rank-1 positive self-adjoint operators, whose
outcome probabilities are sufficient to uniquely determine all pure states (up to
a global phase). Some of the elements satisfy the following condition:
Σ_{k∈B} Ek = I.    (2)
After the rank-1 conversion, we will get a rank-1 PSIR-complete POVM {Fk }.
Proof. Here we prove that any pair of different pure states is distinguishable by
this POVM.
Let ρ1 and ρ2 be an arbitrary pair of different pure states. Define qi = tr(G^{−1} ρi) for i = 1, 2. As G = I + Σ_{k∉B} Ek, we have det(G) ≠ 0. So G, G^{−1/2} and G^{−1} are of full rank, and qi = tr(G^{−1} ρi) ≠ 0.
Then define another pair of pure states σi = G^{−1/2} ρi G^{−1/2} / qi, i = 1, 2. For any k,
tr(Fk ρi) = qi tr(Ek σi).    (3)
When pure states σ1 and σ2 are the same, as ρ1 ≠ ρ2, the number q1 cannot be equal to q2. Thus tr(Fk ρ1) ≠ tr(Fk ρ2) for any k.
When pure states σ1 and σ2 are different, by the assumption on {Ek}, there exists some Ek satisfying tr(Ek σ1) ≠ tr(Ek σ2). If q1 = q2, then tr(Fk ρ1) ≠ tr(Fk ρ2). If not, we have Σ_{k∈B} tr(Ek σ1) = Σ_{k∈B} tr(Ek σ2) = 1, since by assumption Σ_{k∈B} Ek = I. Then Σ_{k∈B} tr(Fk ρ1) = q1 ≠ q2 = Σ_{k∈B} tr(Fk ρ2). Thus it can also be deduced that tr(Fk ρ1) ≠ tr(Fk ρ2) for some k ∈ B.
So the POVM {Fk } can distinguish the different pure states ρ1 , ρ2 ∈ Sd,1 .
This indicates that given a set of outcome probabilities {pk }, there is a unique
pure state ρ such that pk = tr(ρFk ) for all k. For any other different pure state
σ, we can always get tr(σFk ) 6= pk for some k. Thus {Fk } is enough to uniquely
determine any pure states from the other different pure states. With the prior
knowledge that the state is pure, it can be uniquely determined.
Remark: In this proof, we consider the case where {Ek} can distinguish all pure states. We can make an extension to this theorem: if {Ek} can distinguish all different states ρ1, ρ2 ∈ Hd, and equation (2) still holds, then the converted POVM {Fk} is informationally complete with respect to all quantum states, pure or mixed.
Theorem 2. Assume that m orthonormal bases can distinguish all pure states in
Hd , a large number of PSIR-complete POVMs with m(d − 1) + 1 rank-1 elements
can be constructed.
Proof. Denote these orthonormal bases as {Bj}, j = 0, · · · , m − 1. The elements in basis Bj are {|φ_j^k i}, k = 0, · · · , d − 1. Now we pick m(d − 1) + 1 elements from these bases. We can randomly choose one basis Bj and keep all the elements in it; the corresponding projectors satisfy Σ_{k=0}^{d−1} |φ_j^k ihφ_j^k | = I. Then we select d − 1 elements at random from each of the other bases. Thus we get a set of m(d − 1) + 1 elements. There are m · d^{m−1} collections in total.
Each collection corresponds to m(d − 1) + 1 rank-1 projectors, which can distinguish all pure states and satisfy the condition in Theorem 1. After the rank-1 conversion, we get a rank-1 PSIR-complete POVM with m(d − 1) + 1 elements. Moreover, we can construct a large number of PSIR-complete POVMs from each collection. Denote the projectors as {E1, · · · , Ed, Ed+1, · · · , Em(d−1)+1}, where Σ_{k=1}^{d} Ek = I. We can multiply Ej by an arbitrary non-negative number ej, where j = d + 1, · · · , m(d − 1) + 1. So a new set of operators is constructed, {E1, · · · , Ed, ed+1 · Ed+1, · · · , em(d−1)+1 · Em(d−1)+1}. They also satisfy the condition in Theorem 1. The proof is complete.
Various researches focus on the minimal number of orthonormal bases that
can distinguish all pure states [17, 18, 19, 20, 21]. This problem is almost solved.
The minimal number of orthonormal bases is summarized in [11]. Moreover, four
bases are constructed from a sequence of orthogonal polynomials. For dimension
d = 2, at least three orthonormal bases are needed to distinguish all pure quantum states. For d = 3 and d ≥ 5, the number is four. For d = 4, four bases are
enough but it is not clear whether three bases can also distinguish.
So we can give the upper bound of m1(d): when d = 2, m1(2) = 4, and when d ≥ 3, m1(d) ≤ 4d − 3.
3 Rank-1 PSIR-complete POVMs for H2 , H3 and H4
In this section, we will present some results about the rank-1 PSIR-complete
POVMs for lower dimensions d. In Figure 1, we show the relations between
different kinds of informationally complete POVM. An IC-POVM is a PSIRcomplete POVM.
3.1 d=2
For dimension d = 2, four is the minimal number of elements in a rank-1 PSIR-complete POVM. One example shown in [6] is the following:
E^c = a_c I + b_c n_c · σ, c = 1, 2, 3, 4,    (4)
with parameters a_c = b_c = 1/4, n_1 = (0, 0, 1), n_2 = (2√2/3, 0, −1/3), n_3 = (−√2/3, √(2/3), −1/3), n_4 = (−√2/3, −√(2/3), −1/3), and σ = (σx, σy, σz). This is
Fig. 1. The relations of different kinds of informationally complete POVM. The labels {1, 2, 3, 4} stand for SIC-POVM, IC-POVM, PSIR-complete POVM, PSI-complete
POVM respectively. For example, an IC-POVM is a PSIR-complete POVM.
also a SIC-POVM. It can distinguish all quantum states in H2, pure or mixed. There are two SIC-POVMs for d = 2 introduced in paper [5]. The other SIC-POVM is used in [22], which shows the efficiency of qubit tomography.
Now we can construct 12 rank-1 IC-POVMs with four elements. There are three mutually unbiased bases for d = 2:
B0 = {|0i, |1i}, B1 = {(|0i ± |1i)/√2}, B2 = {(|0i ± i|1i)/√2}.    (5)
These three mutually unbiased bases can distinguish all quantum states in H2. We can select four elements as introduced in Theorem 2; there are 12 collections in total. For example, the elements of one collection are |0i, |1i, (|0i + |1i)/√2 and (|0i + i|1i)/√2. The corresponding rank-1 projectors are |0ih0|, |1ih1|, (|0i + |1i)(h0| + h1|)/2 and (|0i + i|1i)(h0| − ih1|)/2. After the rank-1 conversion, we will get a rank-1 POVM with 4 elements. Interestingly, this POVM is the special case for d = 2 constructed by Caves et al. [3].
3.2 d=3
For dimension d = 3, there are four mutually unbiased bases. By Theorem 2, we have 4 × 3^3 collections with 9 elements, and we can construct a rank-1 IC-POVM with 9 elements from each collection. By a result of Heinosaari et al. [8], m0(3) = 8, so the minimal number of elements of a rank-1 PSIR-complete POVM is either 8 or 9. Now we show that this number is 8 by constructing 8 rank-1 operators satisfying Theorem 1. After the rank-1 conversion, we will get a PSIR-complete POVM with 8 elements. The operators are as follows:
E0 = |0ih0|, E1 = |1ih1|, E2 = |2ih2|, E3 = (|0i + |1i)(h0| + h1|), E4 =
(|0i+i|1i)(h0|−ih1|), E5 = (|0i+|2i)(h0|+h2|), E6 = (|0i+|1i+|2i)(h0|+h1|+h2|),
E7 = (|0i + |1i + i|2i)(h0| + h1| − ih2|).
Let an arbitrary unknown pure state in H3 be |φi = Σ_{k=0}^{2} ak e^{iθk} |ki. Let ak be non-negative real numbers for k = 0, 1, 2; as e^{iπ} = −1, we can modify the value of θk to guarantee ak ≥ 0. Let θk be in the range [0, 2π), as e^{iθk} = e^{i(θk+2tπ)} for any integer t. By the freedom of choosing a global phase, we let θ0 = 0.
The outcome probabilities can be calculated as follows:
tr(Ek |φihφ|) = ak^2, for k = 0, 1, 2,
tr(E3 |φihφ|) = a0^2 + a1^2 + 2 a0 a1 cos θ1,
tr(E4 |φihφ|) = a0^2 + a1^2 + 2 a0 a1 sin θ1,
tr(E5 |φihφ|) = a0^2 + a2^2 + 2 a0 a2 cos θ2,
tr(E6 |φihφ|) = a0^2 + a1^2 + a2^2 + 2 a0 a1 cos θ1 + 2 a0 a2 cos θ2 + 2 a1 a2 cos θ1 cos θ2 + 2 a1 a2 sin θ1 sin θ2,
tr(E7 |φihφ|) = a0^2 + a1^2 + a2^2 + 2 a0 a1 cos θ1 + 2 a0 a2 sin θ2 + 2 a1 a2 cos θ1 sin θ2 − 2 a1 a2 sin θ1 cos θ2.
The amplitudes ak can be calculated from Ek, k = 0, 1, 2: as the coefficient ak is non-negative, we have ak = √(tr(Ek |φihφ|)). The remaining task is to determine θk.
When only one element in {a0 , a1 , a2 } is nonzero, it is the trivial case. The
state can be |0i, |1i or |2i.
When two elements in {a0, a1, a2} are nonzero, the state can also be determined. For example, suppose a0 = 0 and a1, a2 ≠ 0. We can write the state as |φi = a1 |1i + a2 e^{iθ2} |2i, where the global phase θ1 has been extracted, so θ1 = 0. The remaining unknown coefficient θ2 can be calculated from the effect of E6 and E7. If a1 = 0 and a0, a2 ≠ 0, the state is |φi = a0 |0i + a2 e^{iθ2} |2i, and θ2 can be calculated from the effect of E5 and E7. If a2 = 0 and a0, a1 ≠ 0, the coefficient θ1 can be calculated from the effect of E3 and E4.
When all elements in {a0, a1, a2} are nonzero, we let θ0 = 0 and determine the remaining coefficients θ1 and θ2. From the effect of E3 and E4, cos θ1 and sin θ1 can be calculated, so θ1 is uniquely determined. Once ak and θ1 are known, we can calculate cos θ2 from the effect of E5. At the same time, sin θ2 can be calculated from the effect of E6 or E7, as cos θ1 and sin θ1 cannot both be zero. Then θ2 is uniquely determined.
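To illustrate the reconstruction just described, here is a hedged Python sketch (ours, not from the paper): it simulates the eight outcome probabilities tr(Ek|φihφ|) for a random pure state with all amplitudes nonzero and then inverts them as in the text. Variable names and the random-state generation are illustrative assumptions, and the E6-based branch assumes sin θ1 ≠ 0 (otherwise E7 would be used, as noted above).

```python
import numpy as np

rng = np.random.default_rng(0)
e = np.eye(3, dtype=complex)
k0, k1, k2 = e[:, [0]], e[:, [1]], e[:, [2]]
vecs = [k0, k1, k2, k0 + k1, k0 + 1j * k1, k0 + k2,
        k0 + k1 + k2, k0 + k1 + 1j * k2]
ops = [v @ v.conj().T for v in vecs]          # E_0, ..., E_7 as above

# Random pure state with all amplitudes nonzero and theta_0 = 0.
a = rng.uniform(0.2, 1.0, 3); a /= np.linalg.norm(a)
th = np.array([0.0, rng.uniform(0, 2 * np.pi), rng.uniform(0, 2 * np.pi)])
phi = (a * np.exp(1j * th)).reshape(-1, 1)
p = np.array([(phi.conj().T @ E @ phi).real.item() for E in ops])

# Invert as in the text (generic case a_0, a_1, a_2 > 0).
ar = np.sqrt(p[:3])
c1 = (p[3] - ar[0]**2 - ar[1]**2) / (2 * ar[0] * ar[1])
s1 = (p[4] - ar[0]**2 - ar[1]**2) / (2 * ar[0] * ar[1])
t1 = np.arctan2(s1, c1) % (2 * np.pi)
c2 = (p[5] - ar[0]**2 - ar[2]**2) / (2 * ar[0] * ar[2])
s2 = (p[6] - (p[3] + ar[2]**2 + 2 * ar[0] * ar[2] * c2
              + 2 * ar[1] * ar[2] * c1 * c2)) / (2 * ar[1] * ar[2] * s1)
t2 = np.arctan2(s2, c2) % (2 * np.pi)
print(np.allclose(ar, a), np.allclose([t1, t2], th[1:]))
```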
Thus any pure state in H3 can be uniquely determined by the eight rank-1 positive self-adjoint operators. These operators satisfy the condition in Theorem 1, so after the rank-1 conversion we get a PSIR-complete POVM with eight elements. By the result of Heinosaari et al. [8], this is the minimal possible number of elements.
3.3 d=4
For dimension d = 4, the known result is that m0 (4) = 10 [8]. There are five
mutually unbiased bases. Thus we can construct many rank-1 IC-POVMs with
16 elements. Four orthonormal bases can distinguish all pure states in H4 [11].
By Theorems 1 and 2, we can construct many PSIR-complete POVMs
with 13 elements. So the true value of m1 (4) is in the range of {10, 11, 12, 13}.
It is still not clear whether three bases can distinguish all pure states in H4. No known results show that three bases must fail, and none establish that they suffice. There are some partial answers to this question: three orthonormal bases consisting solely of product vectors are not enough; in fact, even four product bases are not enough [11]. Eleven is the minimum number of Pauli operators needed to uniquely determine any two-qubit pure state [23].
We can conclude that there is no gap between m0 (d) and m1 (d) when d =
2, 3. If a gap exists when d = 4, three orthonormal bases are not enough to
distinguish all pure states. Consider the contrapositive form. If three orthonormal
bases can distinguish all pure states in H4 , we can construct a PSIR-complete
POVM with 10 elements by Theorem 2.
4 Adaptive d + 2k − 2 rank-1 operators for any dimensions
Goyeneche et al. took an adaptive method to demonstrate that any input pure
state in Hd is unambiguously reconstructed by measuring five observables, i.e.,
projective measurements onto the states of five orthonormal bases [15]. Thus ∼
5d rank-1 operators are needed. The adaptive method is that the choice of some
measurements is dependent on the result of former ones. The fixed measurement
basis is the standard, B0 = {|0i, · · · , |d − 1i}. We measure the pure state with
this basis first. The results of this basis will determine a subset sd,1 ⊂ Sd,1 , where
the input pure state belongs to. They construct four bases {B1 , B2 , B3 , B4 } to
determine all pure states in sd,1 .
Let an arbitrary unknown input pure state in Hd be |φi = Σ_{s=0}^{d−1} as e^{iθs} |si, where as is a non-negative real number and θs ∈ [0, 2π) for s = 0, · · · , d − 1. We can extract the global phase to let one phase θs be 0.
Now we construct d + 2k − 2 adaptive rank-1 positive self-adjoint operators
to determine this pure state, where 1 ≤ k ≤ d. Thus at most 3d − 2 rank-1
elements are enough by adaptive strategy.
The first d operators to be measured with are
Es = |sihs|, s = 0, · · · , d − 1.    (6)
We can calculate the amplitudes as from the effect of Es: as = √(tr(Es |φihφ|)). Then we keep track of the sites {s} of nonzero amplitudes {as} to determine a subset sd,1. Let k be the number of nonzero amplitudes, 1 ≤ k ≤ d.
For example, the sites of nonzero amplitudes are {0, · · · , d − 1}. Then k = d.
The subset sd,1 is { Σ_{k=0}^{d−1} ak e^{iθk} |ki : ak ≠ 0 }. The remaining 2d − 2 rank-1 operators are as follows:
Fs = (|0i + |si)(h0| + hs|), Gs = (|0i + i|si)(h0| − ihs|), where s = 1, · · · , d − 1.    (7)
We extract the global phase to make θ0 = 0. We have the equations
tr(Fs |φihφ|) = a0^2 + as^2 + 2 a0 as cos θs,
tr(Gs |φihφ|) = a0^2 + as^2 + 2 a0 as sin θs.    (8)
From the assumption and measurement results of Es , all the amplitudes
as are nonzero and known. Then cos θs and sin θs can be calculated by the
effect of Fs and Gs . All the coefficients θs can be uniquely determined. Thus all
coefficients of the unknown pure state in Hd are calculated.
The operators Es and Gs appear in the construction of PSI-complete POVM
given by Finkelstein [7]. And operators Es , Fs /2 and Gs /2 are some part of d2
rank-1 elements in the IC-POVM constructed by Caves et al. [3]. In fact, Fs and
Gs can be the other types to calculate θs . For example, Fs = (|1i+|si)(h1|+hs|),
Gs = (|1i + i|si)(h1| − ihs|), s = 0, 2, · · · , d − 1.
Now consider the general case where the sites of nonzero amplitudes are {n0, · · · , nk−1}. The subset sd,1 is { Σ_{j=0}^{k−1} a_{nj} e^{iθ_{nj}} |nj i : a_{nj} ≠ 0 }. The remaining 2k − 2 projections are as follows:
Fs = (|n0 i + |ns i)(hn0 | + hns |), Gs = (|n0 i + i|ns i)(hn0 | − ihns |),    (9)
where s = 1, · · · , k − 1. Let the phase θ_{n0} = 0. With similar analysis, we can
uniquely calculate cos θj and sin θj by the effect of Fj and Gj . All the phases θs
and amplitudes as of |φi can be uniquely determined.
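A minimal Python sketch (ours, not from the paper) of this adaptive procedure: the first d projectors give the amplitudes, and for each nonzero site the pair Fs, Gs relative to the first nonzero site n0 gives cos θs and sin θs. The function name and the toy d = 4 input are assumptions; ideal (noise-free) outcome probabilities are computed directly from the state.

```python
import numpy as np

def reconstruct_pure_state(phi, tol=1e-12):
    """Adaptive sketch of the d + 2k - 2 operator scheme.

    phi: vector of an unknown pure state (used here only to simulate
    ideal measurement outcomes tr(E |phi><phi|)).
    Returns amplitudes a_s and phases theta_s (with theta_{n0} = 0).
    """
    phi = phi.reshape(-1, 1)
    d = phi.shape[0]
    prob = lambda v: (v.conj().T @ phi @ phi.conj().T @ v).real.item()
    e = np.eye(d, dtype=complex)
    # Step 1: E_s = |s><s| gives the amplitudes a_s.
    a = np.sqrt([prob(e[:, [s]]) for s in range(d)])
    sites = [s for s in range(d) if a[s] > tol]
    n0 = sites[0]
    # Step 2: F_s, G_s relative to site n0 give cos/sin of theta_s.
    theta = np.zeros(d)
    for s in sites[1:]:
        pf = prob(e[:, [n0]] + e[:, [s]])        # effect of F_s
        pg = prob(e[:, [n0]] + 1j * e[:, [s]])   # effect of G_s
        c = (pf - a[n0]**2 - a[s]**2) / (2 * a[n0] * a[s])
        si = (pg - a[n0]**2 - a[s]**2) / (2 * a[n0] * a[s])
        theta[s] = np.arctan2(si, c) % (2 * np.pi)
    return a, theta

# Toy check in d = 4 with a_2 = 0, i.e. k = 3 and d + 2k - 2 = 8 operators.
amp = np.array([0.5, 0.5, 0.0, np.sqrt(0.5)])
ang = np.array([0.0, 1.0, 0.0, 2.5])
a, theta = reconstruct_pure_state(amp * np.exp(1j * ang))
print(np.allclose(a, amp), np.allclose(theta, ang))
```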
5 Conclusions
We analyse the minimal number of elements in rank-1 PSIR-complete POVM.
The bound is in [4d − 3 − c(d)α(d), 4d − 3]. The lower bound of 3d − 2 is not
tight except for d = 2, 4. For d = 2, we construct many rank-1 POVMs with four
elements which can distinguish all quantum states. For d = 3, we show that eight
is the minimal number in a PSIR-complete POVM by construction. For d = 4,
if m1(4) > 10, we can give an answer to an unsolved problem: three orthonormal bases cannot distinguish all pure states in H4. Finally, we construct d + 2k − 2
adaptive rank-1 positive self-adjoint operators to determine any input states in
Hd , where 1 ≤ k ≤ d. Thus we can determine an arbitrary unknown pure state
in Hd with at most 3d − 2 rank-1 operators by adaptive strategy.
Acknowledgements
This work was partially supported by National Key Research and Development
Program of China under grant 2016YFB1000902, National Research Foundation
of China (Grant No.61472412), and Program for Creative Research Group of
National Natural Science Foundation of China (Grant No. 61621003).
References
1. Nielson, M.A., Chuang, I.L.: Quantum Computation and Quantum Information.
Cambridge University Press, Cambridge (2000)
2. Busch, P.: Informationally complete-sets of physical quantities. Int. J. Theor. Phys.
30, 1217 (1991)
3. Caves, C.M., Fuchs, C.A., Schack, R.: Unknown quantum states: the quantum de
Finetti representation. J. Math. Phys. 43, 4537 (2002)
4. Wootters, W.K., Fields, B.D.: Optimal state-determination by mutually unbiased
measurements. Ann. Phys. (N.Y.) 191, 363 (1989)
5. Renes, J.M., Blume-Kohout, R., Scott, A.J., Caves, C.M.: Symmetric informationally complete quantum measurements. J. Math. Phys. 45, 2171 (2004)
6. Flammia, S.T., Silberfarb, A., Caves, C.M.: Minimal informationally complete
measurements for pure states. Found. Phys. 35, 1985 (2005)
7. Finkelstein, J.: Pure-state informationally complete and ‘really’ complete measurements. Phys. Rev. A, 70, 052107 (2004)
8. Heinosaari, T., Mazzarella, L., Wolf, M.M.: Quantum tomography under prior
information Comm. Math. Phys. 318, 355 (2013)
9. Mondragon, D., Voroninski, V.: Determination of all pure quantum states from a
minimal number of observables. arXiv:1306.1214 [math-ph], (2013)
10. Jaming, P.: Uniqueness results in an extension of Paulis phase retrieval problem.
Appl. Comput. Harmon. Anal, 37, 413 (2014)
11. Carmeli, C., Heinosaari, T., Schultz, J., Toigo, A.: How many orthonormal bases
are needed to distinguish all pure quantum states? Eur. Phys. J. D 69, 179 (2015)
12. Chen, J., Dawkins, H., Ji, Z., Johnston, N., Kribs, D., Shultz, F., Zeng, B.: Uniqueness of quantum states compatible with given measurement results. Phys. Rev. A
88, 012109 (2013)
13. Carmeli, C., Heinosaari, T., Kech, Schultz, M.J., Toigo, A.: Efficient Pure State
Quantum Tomography from Five Orthonormal Bases. Europhys. Lett. 115, 30001,
(2016)
14. Gross, D., Liu, Y.K., Flammia, S.T., Becker, S., Eisert, J.: Quantum state tomography via compressed sensing. Phys. Rev. Lett. 105, 150401 (2010)
15. Goyeneche, D., Cãnas, G., Etcheverry, S., Gómez, E.S., Xavier, G.B., Lima, G.,
Delgado, A.: Five measurement bases determine pure quantum states on any dimension. Phys. Rev. Lett. 115, 090401 (2015)
16. Pauli, W.: Handbuch der physik. in Handbuch der Physik, Vol. 5 (Springer, Berlin,
1958)
17. Moroz, B.Z.: Reflections on quantum logic. Int. J. Theor. Phys. 22, 329 (1983)
18. Moroz. B.Z.: Erratum: Reflections on quantum logic. Int. J. Theor. Phys. 23, 498
(1984)
19. Moroz, B.Z., Perelomov, A.M.: On a problem posed by Pauli. Theor. Math. Phys.
101, 1200 (1994)
20. Mondragon, D., Voroninski, V.: Determination of all pure quantum states from a
minimal number of observables. arXiv:1306.1214 [math-ph], (2013)
21. Jaming, P.: Uniqueness results in an extension of Paulis phase retrieval problem.
Appl. Comput. Harmon. A. 37, 413 (2014)
22. Řeháček, J., Englert, B.-G., Kaszlikowski, D.: Minimal qubit tomography. Phys.
Rev. A. 70, 052321 (2004)
23. Ma, X., et al.: Pure-state tomography with the expectation value of Pauli operators.
Phys. Rev. A 93, 032140 (2016)
A Unified Form of EVENODD and RDP
Codes and Their Efficient Decoding
Hanxu Hou, Member, IEEE, Yunghsiang S. Han, Fellow, IEEE, Kenneth W.
Shum, Senior Member, IEEE and Hui Li Member, IEEE,
arXiv:1803.03508v1 [] 9 Mar 2018
Abstract
Array codes have been widely employed in storage systems, such as Redundant Arrays of Inexpensive Disks (RAID). The row-diagonal parity (RDP) codes and EVENODD codes are two popular double-parity array codes. As the capacity of hard disks increases, better fault tolerance by using array codes
with three or more parity disks is needed. Although many extensions of RDP codes and EVENODD
codes have been proposed, the high decoding complexity is the main drawback of them. In this paper, we
present a new construction for all families of EVENODD codes and RDP codes, and propose a unified
form of them. Under this unified form, RDP codes can be treated as shortened codes of EVENODD
codes. Moreover, an efficient decoding algorithm based on an LU factorization of Vandermonde matrix
is proposed when the number of continuous surviving parity columns is no less than the number of
erased information columns. The new decoding algorithm is faster than the existing algorithms when
more than three information columns fail. The proposed efficient decoding algorithm is also applicable
to other Vandermonde array codes. Thus the proposed MDS array code is practically very meaningful
for storage systems that need higher reliability.
Index Terms
RAID, array codes, EVENODD, RDP, efficient decoding, LU factorization.
I. INTRODUCTION
Array codes have been widely employed in storage systems, such as Redundant Arrays of
Inexpensive Disks (RAID) [1], [2], for the purpose of enhancing data reliability. In the current
Hanxu Hou is with the School of Electrical Engineering & Intelligentization, Dongguan University of Technology and with
the Shenzhen Key Lab of Information Theory & Future Internet Architecture, Peking University Shenzhen Graduate School (Email: [email protected]). Yunghsiang S. Han is with the School of Electrical Engineering & Intelligentization, Dongguan
University of Technology (E-mail: [email protected]). Kenneth W. Shum is with the Institute of Network Coding, The
Chinese University of Hong Kong (E-mail: [email protected]). Hui Li is with the Shenzhen Key Lab of Information
Theory & Future Internet Architecture, Future Network PKU Lab of National Major Research Infrastructure, Peking University
Shenzhen Graduate School(E-mail: [email protected]).
RAID-6 system, two disks are dedicated to the storage of parity-check bits, so that any two disk
failures can be tolerated. There are a lot of existing works on the design of array codes which
can recover any two disks failures, such as the EVENODD codes [3] and the row-diagonal parity
(RDP) codes [4].
As the capacities of hard disks are increasing at a much faster pace than bit error rates are decreasing, the protection offered by double parities will soon be inadequate [5]. The issue
of reliability is more pronounced in solid-state drives, which have significant wear-out rates
when the frequencies of disk writes are high. In order to tolerate three or more disk failures,
the EVENODD codes were extended in [6], and the RDP codes were extended in [7], [8]. All
of the above coding methods are binary array codes, whose codewords are m × n arrays with
each entry belonging to the binary field F2 , for some positive integers m and n. Binary array
codes enjoy the advantage that encoding and decoding can be done by Exclusive OR (XOR)
operations. The n disks are identified as n columns, and the m bits in each column are stored in
the corresponding disk. A binary array code is said to be systematic if, for some positive integer
r less than n, the right-most r columns store the parity bits, while the left-most k = n − r
columns store the uncoded data bits. If the array code can tolerate arbitrary r erasures, then it
is called a maximum-distance separable (MDS) array code. In other words, in an MDS array
code, the information bits can be recovered from any k columns.
A. Related Works
There are many follow-up studies on EVENODD codes [3] and RDP codes [4] along different
directions, such as the extensions of fault tolerance [6], [7], [9], the improvement of repair
problem [10], [11], [12], [13] and efficient decoding methods [14], [15], [16], [17] of their
extensions.
Huang and Xu [14] extended the EVENODD codes to be STAR codes with three parity
columns. The EVENODD codes were extended by Blaum, Bruck and Vardy [6], [9] for three
or more parity columns, with the additional assumption that the multiplicative order of 2 mod
p is equal to p − 1. A sufficient condition for the extended EVENODD codes to be MDS with
more than eight parity columns is given in [18]. Goel and Corbett [7] proposed the RTP codes
that extend the RDP codes to tolerate three disk failures. Blaum [8] generalized the RDP codes
that can correct more than three column erasures and showed that the extended EVENODD
codes and generalized RDP codes share the same MDS property condition. Blaum and Roth
[19] proposed Blaum-Roth codes, which are non-systematic MDS array codes constructed over
a Vandermonde matrix. Some efficient systematic encoding methods for Blaum-Roth codes are
given in [19], [20], [21]. We call the existing MDS array codes in [3], [4], [6], [7], [8], [9],
[14], [15], [16], [17], [19] as Vandermonde MDS array codes, as their constructions are based
on Vandermonde matrices.
Decoding complexity in this work is defined as the number of XORs required to recover the erased columns (at most r of them, including both information and parity erasures) from the surviving k columns. There are many decoding methods for extended EVENODD codes [15]
and generalized RDP codes; however, most of them focus on r = 3. Jiang et al. [15] proposed
a decoding algorithm for extended EVENODD codes with r = 3. To further reduce decoding
complexity of the extended EVENODD codes with r = 3, Huang and Xu [14] invented STAR
codes. One extension of RDP codes with three parity columns is RTP codes, whose decoding has
been improved by Huang et al. [17]. Two efficient interpolation-based encoding algorithms for
Blaum-Roth codes were proposed in [20], [21]. However, the efficient algorithms in [20], [21]
are not applicable to the decoding of the extended EVENODD codes and generalized RDP codes.
An efficient erasure decoding method that solves Vandermonde linear system over a polynomial
ring was given in [19] for Blaum-Roth codes, and the decoding method is also applicable to
the erasure decoding of extended EVENODD codes if the number of information erasures is no
larger than the number of continuous surviving parity columns. There is no efficient decoding
method for arbitrary erasures and one needs to employ the traditional decoding method such as
Cramer’s rule to recover the erased columns.
B. Contributions
In this paper, we present a unified form of EVENODD codes and RDP codes that include
the existing RDP codes and their extensions in [4], [8], along with the existing EVENODD
codes and their extensions in [3], [6], [9]. Under this unified form, these two families of codes
are shown having a close relationship between each other. Based on this unified form, we also
propose a fast method for the recovery of failed columns. This method is based on a factorization
of Vandermonde matrix into very sparse lower and upper triangular matrices. Similar to the
decoding method in [19], the proposed fast decoding method can recover up to r erasures such
that the number of information erasure is no larger than the number of continuous surviving
parity columns. We then illustrate the methodology by applying it to EVENODD codes and
RDP codes. We compare the decoding complexity of the proposed method with those presented
in [19] for the extended EVENODD codes and generalized RDP codes. The proposed method
has lower decoding complexity than that of the decoding algorithm given in [19], and is also
applicable to other Vandermonde MDS array codes.
II. UNIFIED FORM OF EVENODD CODES AND RDP CODES
In this section, we first present EVENODD codes and RDP codes. Then, we give a unified
form of them and illustrate that RDP codes are shortened EVENODD codes under this form.
The array codes considered in this paper contain p − 1 rows and k + r columns, where p is
an odd number. In the following, we let k and r be positive integers which are both no larger
than p. Let g(`) = (g(0), g(1), . . . , g(` − 1)) be an `-tuple consisting of ` distinct integers that
range from 0 to p − 1, where ` ≤ p. The i-th entry of column j are denoted as ai,j and bi,j for
EVENODD codes and RDP codes respectively. The subscripts are taken modulo p throughout
the paper, if it is not specified.
A. EVENODD Codes
For an odd p ≥ max{k, r}, we define the EVENODD code as follows. It is a (p − 1) × (k + r)
array code, with the first k columns storing the information bits, and the last r columns storing
the parity bits. For j = 0, 1, . . . , k − 1, column j is called information column that stores the
information bits a0,j , a1,j , . . . , ap−2,j , and for j = k, k + 1, . . . , k + r − 1, column j is called
parity column that stores the parity bits a0,j , a1,j , . . . , ap−2,j .
Given the (p − 1) × k information array [ai,j ] for i = 0, 1, . . . , p − 2 and j = 0, 1, . . . , k − 1,
we add an extra imaginary row ap−1,j = 0, for j = 0, 1, . . . , k − 1, to this information array. The
parity bits in column k are computed by
ai,k = Σ_{j=0}^{k−1} ai,j for 0 ≤ i ≤ p − 2,    (1)
and the parity bits stored in column k + `, ` = 1, 2, . . . , r − 1, are computed by
ai,k+` = ap−1,k+` + Σ_{j=0}^{k−1} ai−`g(j),j for 0 ≤ i ≤ p − 2,    (2)
where
ap−1,k+` = Σ_{j=0}^{k−1} ap−1−`g(j),j.    (3)
TABLE I: Encoding of EVENODD(5, 3, 3; (0, 1, 4)). Note that, by (3), a4,4 = a3,1 + a0,2 and
a4,5 = a2,1 + a1,2 .
a0,0 | a0,1 | a0,2 | a0,3 = a0,0 + a0,1 + a0,2 | a0,4 = a0,0 + a1,2 + a4,4 | a0,5 = a0,0 + a3,1 + a2,2 + a4,5
a1,0 | a1,1 | a1,2 | a1,3 = a1,0 + a1,1 + a1,2 | a1,4 = a1,0 + a0,1 + a2,2 + a4,4 | a1,5 = a1,0 + a3,2 + a4,5
a2,0 | a2,1 | a2,2 | a2,3 = a2,0 + a2,1 + a2,2 | a2,4 = a2,0 + a1,1 + a3,2 + a4,4 | a2,5 = a2,0 + a0,1 + a4,5
a3,0 | a3,1 | a3,2 | a3,3 = a3,0 + a3,1 + a3,2 | a3,4 = a3,0 + a2,1 + a4,4 | a3,5 = a3,0 + a1,1 + a0,2 + a4,5
We denote the EVENODD codes defined in the above equations as EVENODD(p, k, r; g(k)).
The default values in g(k) are (0, 1, . . . , k − 1), and we simply write EVENODD(p, k, r) if the
values in g(k) are default. An example of EVENODD(5, 3, 3; (0, 1, 4)) is given in Table I. Under
the above definition, the EVENODD code in [3] is EVENODD(p, p, 2) with g(k) = (0, 1, . . . , k−
1), and the extended EVENODD code in [6] is EVENODD(p, p, r) with g(k) = (0, 1, . . . , k −1).
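The following Python sketch (ours, not from the paper) implements the EVENODD encoding rules (1)–(3) over GF(2); the function name and the sample information bits are illustrative assumptions. With p = 5, r = 3 and g = (0, 1, 4) it instantiates, for a concrete assumed information array, the symbolic construction shown in Table I.

```python
import numpy as np

def evenodd_encode(info, p, r, g=None):
    """Encode a (p-1) x k information array into a (p-1) x (k+r)
    EVENODD(p, k, r; g) array over GF(2), following (1)-(3)."""
    info = np.asarray(info, dtype=np.uint8) % 2
    k = info.shape[1]
    g = list(range(k)) if g is None else list(g)
    a = np.zeros((p, k + r), dtype=np.uint8)    # imaginary row p-1 is all zero
    a[:p - 1, :k] = info
    a[:p - 1, k] = info.sum(axis=1) % 2         # row parity, eq. (1)
    for l in range(1, r):                       # diagonal parities, eqs. (2)-(3)
        s = np.zeros(p, dtype=np.uint8)
        for j in range(k):
            for i in range(p):
                s[i] ^= a[(i - l * g[j]) % p, j]
        a[:p - 1, k + l] = s[:p - 1] ^ s[p - 1]  # add the adjuster bit a_{p-1,k+l}
    return a[:p - 1]

# Toy usage with assumed data: EVENODD(5, 3, 3; (0, 1, 4)).
info = np.array([[1, 0, 0], [0, 1, 0], [0, 0, 1], [1, 1, 0]], dtype=np.uint8)
print(evenodd_encode(info, p=5, r=3, g=(0, 1, 4)))
```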
B. RDP Codes
RDP code is an array code of size (p − 1) × (k + r). Given the parameters k, r, p that satisfy
p ≥ max(k + 1, r), we add an extra imaginary row bp−1,0 = bp−1,1 = · · · = bp−1,k−1 = 0 to
the (p − 1) × k information array [bi,j ], for i = 0, 1, . . . , p − 2 and j = 0, 1, . . . , k − 1, as in
EVENODD(p, k, r). The parity bits of the RDP(p, k, r; g(k + 1)) are computed as follows:
bi,k = Σ_{j=0}^{k−1} bi,j for 0 ≤ i ≤ p − 2,    (4)
bi,k+` = Σ_{j=0}^{k} bi−`g(j),j for 0 ≤ i ≤ p − 2, 1 ≤ ` ≤ r − 1.    (5)
Like EVENODD(p, k, r), the default value of g(k + 1) are (0, 1, . . . , k). The first 4 rows in
Table II are the array of RDP(5, 3, 3; (0, 1, 4, 3)). The RDP code in [4] is RDP(p, p − 1, 2) with
g(p) = (0, 1, . . . , p − 1) and RDP(p, p − 1, r) is the extended RDP in [8].
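Similarly, here is a minimal Python sketch (ours, not from the paper) of the RDP encoding rules (4)–(5); the function name and the sample information bits are assumptions.

```python
import numpy as np

def rdp_encode(info, p, r, g=None):
    """Encode a (p-1) x k information array into a (p-1) x (k+r)
    RDP(p, k, r; g(k+1)) array over GF(2), following (4)-(5)."""
    info = np.asarray(info, dtype=np.uint8) % 2
    k = info.shape[1]
    g = list(range(k + 1)) if g is None else list(g)   # g has k+1 entries
    b = np.zeros((p, k + r), dtype=np.uint8)           # imaginary row p-1 is zero
    b[:p - 1, :k] = info
    b[:p - 1, k] = info.sum(axis=1) % 2                # row parity, eq. (4)
    for l in range(1, r):                              # diagonal parities, eq. (5)
        for i in range(p - 1):
            b[i, k + l] = np.bitwise_xor.reduce(
                [b[(i - l * g[j]) % p, j] for j in range(k + 1)])
    return b[:p - 1]

# Toy usage with assumed data: RDP(5, 3, 3; (0, 1, 4, 3)).
info = np.array([[1, 0, 0], [0, 1, 0], [0, 0, 1], [1, 1, 0]], dtype=np.uint8)
print(rdp_encode(info, p=5, r=3, g=(0, 1, 4, 3)))
```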
C. Unified Form
There is a close relationship between RDP(p, k, r; g(k + 1)) and EVENODD(p, k, r; g(k))
when both array codes have the same number of parity columns. The relationship can be seen
by augmenting the arrays as follows. For RDP codes, we define the corresponding augmented
TABLE II: The augmented array of RDP(5, 3, 3; (0, 1, 4, 3)).
b0,0 | b0,1 | b0,2 | b0,3 = b0,0 + b0,1 + b0,2 | b0,4 = b0,0 + b1,2 + b2,3 | b0,5 = b0,0 + b3,1 + b2,2
b1,0 | b1,1 | b1,2 | b1,3 = b1,0 + b1,1 + b1,2 | b1,4 = b1,0 + b0,1 + b2,2 + b3,3 | b1,5 = b1,0 + b3,2 + b0,3
b2,0 | b2,1 | b2,2 | b2,3 = b2,0 + b2,1 + b2,2 | b2,4 = b2,0 + b1,1 + b3,2 | b2,5 = b2,0 + b0,1 + b1,3
b3,0 | b3,1 | b3,2 | b3,3 = b3,0 + b3,1 + b3,2 | b3,4 = b3,0 + b2,1 + b0,3 | b3,5 = b3,0 + b1,1 + b0,2 + b2,3
0 | 0 | 0 | 0 | b4,4 = b3,1 + b0,2 + b1,3 | b4,5 = b2,1 + b1,2 + b3,3
array as a p × (k + r) array with the top p − 1 rows the same as in RDP(p, k, r), and the last
row defined by bp−1,j = 0 for 0 ≤ j ≤ k and
bp−1,k+` = Σ_{j=0}^{k} bp−1−`g(j),j for 1 ≤ ` ≤ r − 1.    (6)
Note that (6) is the extension of (5) when i = p − 1. The auxiliary row in the augmented array
is defined such that the column sums of columns k + 1, k + 2, . . . , k + r − 1 are equal to zero.
The above claim is proved as follows.
Lemma 1. For 1 ≤ ` ≤ r − 1, we have Σ_{i=0}^{p−1} bi,k+` = 0.
Proof. The summation of all bits in column k + ` of the augmented array is the summation of
all bits in columns 0 to k. Since the summation of all bits in column k is the summation of all
bits in columns 0 to k − 1, we have that the summation of all bits in column k + ` is equal to
0.
By the above lemma, we can compute bp−1,k+` for ` = 1, 2, . . . , r − 1 as
bp−1,k+` = b0,k+` + b1,k+` + · · · + bp−2,k+` .
An example of the augmented array code of RDP(5, 3, 3; (0, 1, 4, 3)) is given in Table II.
Similarly, for an EVENODD(p, k, r; g(k)), the augmented array is a p × (k + r) array [a0i,j ]
defined as follows. The first k + 1 columns are the same as those of EVENODD(p, k, r; g(k)),
i.e., for j = 0, 1, . . . , k and i = 0, 1, . . . , p − 1, a0i,j = ai,j . For ` = 1, 2, . . . , r − 1, we define the
parity bits in column k + ` as
a′i,k+` := Σ_{j=0}^{k−1} ai−`g(j),j for 0 ≤ i ≤ p − 1.    (7)
We note that a′p−1,k+` is the same as ap−1,k+` defined in (3). According to (2), the parity bits in column k + ` of EVENODD(p, k, r; g(k)) can be obtained from the augmented array by ai,k+` = a′i,k+` + a′p−1,k+`.
Lemma 2. The bits in column k + ` for ` = 1, 2, . . . , r − 1 of the augmented array can be
obtained from EVENODD(p, k, r, g(k)) by
a′p−1,k+` = Σ_{i=0}^{p−2} (ai,k + ai,k+`), and    (8)
a′i,k+` = ai,k+` + a′p−1,k+` for i = 0, 1, . . . , p − 2.    (9)
Proof. Note that
Σ_{i=0}^{p−2} (ai,k + ai,k+`)
= Σ_{i=0}^{p−2} ( Σ_{j=0}^{k−1} ai,j ) + Σ_{i=0}^{p−2} ( ap−1,k+` + Σ_{j=0}^{k−1} ai−`g(j),j )    (10)
= Σ_{i=0}^{p−2} ai,0 + · · · + Σ_{i=0}^{p−2} ai,k−1 + Σ_{i=0}^{p−2} ai−`g(0),0 + · · · + Σ_{i=0}^{p−2} ai−`g(k−1),k−1 + (ap−1,k+` + · · · + ap−1,k+`)  [p − 1 copies]
= Σ_{i=0}^{p−1} ai,0 + · · · + Σ_{i=0}^{p−1} ai,k−1 + Σ_{i=0}^{p−2} ai−`g(0),0 + · · · + Σ_{i=0}^{p−2} ai−`g(k−1),k−1    (11)
= Σ_{i=0}^{p−1} ai−`g(0),0 + · · · + Σ_{i=0}^{p−1} ai−`g(k−1),k−1 + Σ_{i=0}^{p−2} ai−`g(0),0 + · · · + Σ_{i=0}^{p−2} ai−`g(k−1),k−1    (12)
= ap−1−`g(0),0 + ap−1−`g(1),1 + · · · + ap−1−`g(k−1),k−1
= a′p−1,k+`,
where (10) comes from (1) and (2), (11) comes from that ap−1,j = 0 for j = 0, 1, . . . , k − 1, and
(12) comes from the fact that
{−`g(j), 1 − `g(j), . . . , p − 1 − `g(j)} = {0, 1, . . . , p − 1} mod p
for 1 ≤ ` ≤ r − 1, 0 ≤ g(j) ≤ p − 1. Therefore, we can obtain the bit a0p−1,k+` by (8) and the
other bits in parity column k + ` by (9).
The augmented array of EVENODD(5, 3, 3; (0, 1, 4)) is given in Table III.
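The following sketch (ours, not the authors' code) builds the augmented array of EVENODD(p, k, r; g(k)) from (7), assuming the row-parity definition (1) from earlier in the paper and the convention that the last information row is zero, recovers the bits of the original code as a_{i,k+ℓ} = a′_{i,k+ℓ} + a′_{p−1,k+ℓ}, and checks the identity (8); the parameters and random bits are illustrative.

```python
# Augmented EVENODD(p, k, r; g) array and the conversion back to the original code.
import random

p, k, r = 5, 3, 3
g = (0, 1, 4)

random.seed(2)
a = [[random.randint(0, 1) for _ in range(k)] for _ in range(p)]
for j in range(k):
    a[p - 1][j] = 0                           # imaginary last information row is zero

A = [row[:] + [0] * r for row in a]           # augmented p x (k+r) array
for i in range(p):
    A[i][k] = sum(A[i][j] for j in range(k)) % 2        # row parity, equation (1)
for l in range(1, r):
    for i in range(p):                                   # equation (7)
        A[i][k + l] = sum(A[(i - l * g[j]) % p][j] for j in range(k)) % 2

# bits of the original EVENODD code in column k+l, and a check of equation (8)
for l in range(1, r):
    orig = [(A[i][k + l] + A[p - 1][k + l]) % 2 for i in range(p - 1)]
    rhs = sum((A[i][k] + orig[i]) % 2 for i in range(p - 1)) % 2
    assert rhs == A[p - 1][k + l]
```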
The augmented array of RDP(p, k, r; g(k+1)) can be obtained from shortening the augmented
array of EVENODD(p, k + 1, r; g(k + 1)) and we summarize this fact in the following.
TABLE III: The augmented array of EVENODD(5, 3, 3; (0, 1, 4)).

| a0,0 | a0,1 | a0,2 | a0,3 = a0,0+a0,1+a0,2 | a0,4 = a0,0+a1,2 | a0,5 = a0,0+a3,1+a2,2 |
| a1,0 | a1,1 | a1,2 | a1,3 = a1,0+a1,1+a1,2 | a1,4 = a1,0+a0,1+a2,2 | a1,5 = a1,0+a3,2 |
| a2,0 | a2,1 | a2,2 | a2,3 = a2,0+a2,1+a2,2 | a2,4 = a2,0+a1,1+a3,2 | a2,5 = a2,0+a0,1 |
| a3,0 | a3,1 | a3,2 | a3,3 = a3,0+a3,1+a3,2 | a3,4 = a3,0+a2,1 | a3,5 = a3,0+a1,1+a0,2 |
| 0 | 0 | 0 | 0 | a4,4 = a3,1+a0,2 | a4,5 = a2,1+a1,2 |
Proposition 3. Let g(k+1) of RDP(p, k, r; g(k+1)) be the same as g(k+1) of EVENODD(p, k+
1, r; g(k + 1)). The augmented array of RDP(p, k, r; g(k + 1)) can be obtained from shortening
the augmented array of EVENODD(p, k + 1, r; g(k + 1)) as follows: (i) imposing the following
additional constraint on the information bits
a0i,k = a0i,0 + a0i,1 + · · · + a0i,k−1
(13)
for i = 0, 1, . . . , p − 1; (ii) removing column k + 1 of the augmented array of EVENODD(p, k +
1, r; g(k + 1)).
Proof. Consider the augmented array of EVENODD(p, k + 1, r; g(k + 1)) and assume that the
information bits of column k satisfy (13). By (1), the parity bits in column k + 1 are all zeros.
After deleting column k + 1 from the augmented array of EVENODD(p, k + 1, r; g(k + 1)) and
reindexing the columns after this deleted column by reducing all indices by one, we have a new
array with k + r columns of a shortened EVENODD(p, k, r; g(k)). Let the k information columns of the
augmented array of RDP(p, k, r; g(k + 1)) be the same as the first k information columns of the
augmented array of EVENODD(p, k + 1, r; g(k + 1)), so that these columns coincide with
those of the array of the shortened EVENODD(p, k, r; g(k)). Then column k of the
augmented array of RDP(p, k, r; g(k + 1)) is the same as column k of the array of the shortened
EVENODD(p, k, r; g(k)) according to (13) and (4). Recall that the bit bi,k+` in column k + `,
i = 0, 1, . . . , p − 1 and ` = 2, 3, . . . , r − 1, of the augmented array of RDP(p, k, r; g(k + 1)) is
computed by (5) (or (6)). Since a0i,j = ai,j = bi,j for i = 0, 1, . . . , p − 1 and j = 0, 1, . . . , k, bi,k+`
is the same as a0i,k+` in the array of the shortened EVENODD(p, k, r; g(k)) that is defined by
(7). Therefore, we can obtain the augmented RDP(p, k, r; g(k +1)) by shortening the augmented
EVENODD(p, k + 1, r; g(k + 1)) by imposing the condition (13) and removing column k + 1,
and this completes the proof.
By Proposition 3, the unified form of RDP(p, k, r; g(k + 1)) and EVENODD(p, k, r; g(k))
is the augmented array of EVENODD(p, k + 1, r; g(k + 1)). In the following, we focus on
EVENODD(p, k, r; g(k)), as the augmented array of RDP(p, k, r; g(k + 1)) can be viewed as
the shortened augmented array of EVENODD(p, k + 1, r; g(k + 1)).
III. ALGEBRAIC REPRESENTATION
Let F2[x] be the ring of polynomials over the binary field F2, and let Rp be the quotient ring F2[x]/(1 +
x^p). An element in Rp can be represented by a polynomial of degree strictly less than p with
coefficients in F2; we will refer to an element of Rp as a polynomial in the sequel. Note that
the multiplication of two polynomials in Rp is performed modulo 1 + x^p.
The ring Rp has been discussed in [22], [23] and has been used in designing regenerating
codes with low computational complexity. Let
Mp (x) := 1 + x + · · · + xp−1 .
Rp is isomorphic to a direct sum of the two finite fields F2[x]/(1 + x) and F2[x]/Mp(x)¹ if and only if
2 is a primitive element in Fp [24]. In [25], F2[x]/Mp(x) was used for performing computations
in F_{2^{p−1}}, when p is a prime such that 2 is a primitive element in Fp. In addition, Blaum et al. [6],
[9] discussed the rings F2 [x]/Mp (x) in detail.
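For readers who wish to experiment, here is one possible bit-list representation of Rp (our own sketch; the paper itself fixes no implementation): addition is a bitwise XOR, multiplication by x^a is a cyclic shift, and reduction modulo Mp(x) adds the degree-(p − 1) coefficient to all lower coefficients, as used later in this section.

```python
# Elements of R_p = F_2[x]/(1 + x^p) as length-p bit lists (coefficient of x^i at index i).
def add(f, g):
    return [fi ^ gi for fi, gi in zip(f, g)]

def shift(f, a):
    """Multiply f(x) by x^a in R_p (cyclic shift by a positions)."""
    p = len(f)
    return [f[(i - a) % p] for i in range(p)]

def reduce_mod_Mp(f):
    """Reduce f(x) modulo M_p(x); the result has a zero coefficient at degree p-1."""
    return [fi ^ f[-1] for fi in f[:-1]] + [0]

# example in R_5: (1 + x) * x^3 = x^3 + x^4, and x^3 + x^4 = 1 + x + x^2 (mod M_5(x))
f = [1, 1, 0, 0, 0]
print(shift(f, 3))                  # [0, 0, 0, 1, 1]
print(reduce_mod_Mp(shift(f, 3)))   # [1, 1, 1, 0, 0]
```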
We will represent each column in an augmented array of EVENODD(p, k, r; g(k)) by a polynomial in Rp, so that a p × n array is identified with an n-tuple
(a00 (x), a01 (x), · · · , a0k+r−1 (x))
(14)
in Rpk+r , where n = k+r. Under this representation, the augmented array of EVENODD(p, k, r; g(k))
can be defined in terms of a Vandermonde matrix.
In the p × (k + r) array, the p bits a00,j , a01,j , . . . , a0p−1,j in column j can be represented as a
polynomial
a0j (x) = a00,j + a01,j x + · · · + a0p−1,j xp−1
for j = 0, 1, . . . , k+r−1. The first k polynomials a00 (x), a01 (x), . . . , a0k−1 (x) are called information
polynomials, and the last r polynomials a0k (x), a0k+1 (x), . . . , a0k+r−1 (x) are the parity polynomials.
¹ When 2 is a primitive element in Fp, F2[x]/Mp(x) is a finite field.
The parity bits of the augmented array of EVENODD(p, k, r; g(k)) defined in (7) are equivalent to
the following equation over the ring Rp
[ a′_k(x)  · · ·  a′_{k+r−1}(x) ] = [ a′_0(x)  · · ·  a′_{k−1}(x) ] · V_{k×r}(g(k)),   (15)

where V_{k×r}(g(k)) is the k × r Vandermonde matrix

V_{k×r}(g(k)) :=
  [ 1  x^{g(0)}    · · ·  x^{(r−1)g(0)}   ]
  [ 1  x^{g(1)}    · · ·  x^{(r−1)g(1)}   ]
  [ ⋮      ⋮        ⋱          ⋮          ]
  [ 1  x^{g(k−1)}  · · ·  x^{(r−1)g(k−1)} ],   (16)
and additions and multiplications in the above calculations are performed in Rp . (15) can be
verified as follows:
a′_{k+ℓ}(x) = \sum_{i=0}^{p−1} a′_{i,k+ℓ} x^i = \sum_{j=0}^{k−1} a_j(x) x^{ℓg(j)} = \sum_{j=0}^{k−1} \sum_{i′=0}^{p−1} a_{i′,j} x^{i′+ℓg(j)} = \sum_{i′=0}^{p−1} \sum_{j=0}^{k−1} a_{i′,j} x^{i′+ℓg(j)}.   (17)

Letting i′ = i − ℓg(j), we have

a′_{i,k+ℓ} = \sum_{j=0}^{k−1} a_{i−ℓg(j),j},
which is the same as (7). In other words, each parity column in the augmented array of
EVENODD codes is obtained by adding some cyclically shifted version of the information
columns.
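As a small illustration (ours), the encoding map (15) can be realized purely with cyclic shifts and XORs; the parameters below and the helper names are illustrative assumptions, and the information polynomials have a zero coefficient at degree p − 1 as required for the augmented array.

```python
# Parity polynomials via equation (15): a'_{k+l}(x) = sum_j a'_j(x) x^{l g(j)}.
p, k, r = 5, 3, 3
g = (0, 1, 4)

def shift(f, a):                       # multiplication by x^a in R_p
    return [f[(i - a) % p] for i in range(p)]

def encode(info):                      # info: list of k length-p bit lists
    parity = []
    for l in range(r):                 # l = 0 gives the plain row-parity column k
        col = [0] * p
        for j in range(k):
            col = [c ^ s for c, s in zip(col, shift(info[j], l * g[j]))]
        parity.append(col)
    return parity

info = [[1, 0, 1, 0, 0], [0, 1, 1, 0, 0], [1, 1, 0, 1, 0]]
print(encode(info))
```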
Recall that a0j (x) is a polynomial over Rp for j = 0, 1, . . . , k + r − 1. When we reduce a
polynomial a0j (x) modulo Mp (x), it means that we replace the coefficient a0i,j with a0i,j + a0p−1,j
for i = 0, 1, . . . , p − 2. When j = 0, 1, . . . , k, we have that a0p−1,j = 0. If we reduce a0j (x)
modulo Mp (x), we obtain a0j (x) itself, of which the coefficients are the bits of column j of
EVENODD(p, k, r; g(k)), for j = 0, 1, . . . , k. Recall that the coefficients of a0k+` (x) for ` =
1, 2, . . . , r − 1 are computed by (7). If we reduce a0k+` (x) modulo Mp (x), i.e., replace the
coefficients a0i,k+` for i = 0, 1, . . . , p − 2 by a0i,k+` + a0p−1,k+` , which are ai,k+` that are bits in
column k + ` of EVENODD(p, k, r; g(k)). In fact, we have shown how to convert augmented
array of EVENODD(p, k, r; g(k)) into original array of EVENODD(p, k, r; g(k)).
By Proposition 3, we can obtain the augmented array of RDP(p, k, r; g(k +1)) by multiplying
(b_0(x), · · · , b_{k−1}(x), \sum_{j=0}^{k−1} b_j(x)) · [ I_{k+1}  V_{(k+1)×r}(g(k + 1)) ],
and removing the k + 2-th component, which is always equal to zero, in the resultant product.
If we arrange all coefficients in the polynomials with degree strictly less than p − 1, we get the
original (p − 1) × (k + r) array of RDP(p, k, r; g(k + 1)).
When 0 ≤ g(i) ≤ k − 1 for i = 0, 1, . . . , k − 1, the MDS property condition of EVENODD(p, k, r; g(k))
is the same as that of the extended EVENODD codes [6]; the MDS property condition for
r ≤ 8 and for r ≥ 9 was given in [6] and [18], respectively. Note that the MDS property condition
depends on 2 being a primitive element in Fp, which is the reason for the assumption of primitivity
of 2 in Fp. In the rest of the paper, we assume that 0 ≤ g(i) ≤ k − 1 with i = 0, 1, . . . , k − 1
for EVENODD(p, k, r; g(k)) and 0 ≤ g(i) ≤ k with i = 0, 1, . . . , k for RDP(p, k, r; g(k + 1)),
and that the proposed EVENODD(p, k, r; g(k)) and RDP(p, k, r; g(k + 1)) are MDS codes. We will
focus on the erasure decoding for these two codes.
When some columns of EVENODD(p, k, r; g(k)) are erased, we assume that the number of
erased information columns is no larger than the number of continuous surviving parity columns.
Note that one needs to recover the failure columns by downloading k surviving columns. First,
we represent the downloaded k columns by some information polynomials and continuous parity
polynomials. Then, we can subtract all the downloaded information polynomials from the parity
polynomials to obtain a Vandermonde linear system. Although EVENODD(p, k, r; g(k)) can be
described by the k ×r Vandermonde matrix given in (16) over F2 [x]/Mp (x) and we can solve the
Vandermonde linear system over F2 [x]/Mp (x) to recover the failure columns, it is more efficient
to solve the Vandermonde linear system over Rp. We will show in the next section that we
can first perform the calculation over Rp and then reduce the results modulo Mp(x) in the decoding
process. An efficient decoding algorithm that solves the Vandermonde linear system over Rp based on
an LU factorization of the Vandermonde matrix is then proposed in Section V.
IV. VANDERMONDE MATRIX OVER Rp
Before we focus on the efficient decoding method of EVENODD(p, k, r; g(k)) and RDP(p, k, r; g(k+
1)), we first present some properties of Vandermonde matrix. As the decoding algorithm hinges
on a quick method in solving a Vandermonde system of equations over Rp , we discuss some
properties of the linear system of Vandermonde matrix over Rp in this section.
Let V_{r×r}(a) be an r × r Vandermonde matrix

V_{r×r}(a) :=
  [ 1  x^{a_1}  · · ·  x^{(r−1)a_1} ]
  [ 1  x^{a_2}  · · ·  x^{(r−1)a_2} ]
  [ ⋮     ⋮      ⋱          ⋮       ]
  [ 1  x^{a_r}  · · ·  x^{(r−1)a_r} ],   (18)
where a1 , . . . , ar are distinct integers such that the difference of each pair of them is relatively
prime to p. The entries of Vr×r (a) are considered as polynomials in Rp . We investigate the
action of multiplication over Vr×r (a) by defining the function F : Rpr → Rpr :
F (u) := uVr×r (a)
for u = (u_1(x), . . . , u_r(x)) ∈ Rp^r. Obviously, F is a homomorphism of abelian groups and we
have F(u + u′) = F(u) + F(u′) for u, u′ ∈ Rp^r.
The function F is not surjective. If a vector v = (v1 (x), v2 (x), . . . , vr (x)) is equal to F (u)
for some u ∈ Rp , it is necessary that
v1 (1) = v2 (1) = · · · = vr (1).
(19)
This is due to the fact that each polynomial vj (x) is obtained by adding certain cyclically shifted
version of the u_i(x)'s. In other words, if v is in the image of F, then either there is an even number of
nonzero terms in every v_i(x), or there is an odd number of nonzero terms in every v_i(x), for 1 ≤ i ≤ r.
The function F is also not injective. We can see this by observing that if we add the polynomial
Mp (x) to a component of u, for example, adding Mp (x) to ui (x), then
F(u + (0, . . . , 0, Mp(x), 0, . . . , 0)) = F(u) + (Mp(x), . . . , Mp(x)),

where Mp(x) is in the i-th position, preceded by i − 1 zeros.
Hence, if we add Mp (x) to two distinct components of input vector u, then the value of F (u) does
not change. We need the following lemma before discussing the properties of the Vandermonde
linear system over Rp .
Lemma 4. [6, Lemma 2.1] Suppose that p is an odd number and d is relatively prime to p,
then 1 + xd and Mp (x) are coprime in F2 [x], and xi and Mp (x) are relatively prime in F2 [x]
for any positive integer i.
If the vector v satisfies (19), in the next theorem, we show that there are many vectors u such
that F (u) = v.
DRAFT
March 12, 2018
SUBMITTED PAPER
13
Theorem 5. Let a1 , a2 , . . . , ar be r integers such that the difference ai1 − ai2 is relatively prime
to p for all pair of distinct indices 1 ≤ i1 < i2 ≤ r. The image of F consists of all vectors
v ∈ Rp^r that satisfy the condition (19). All vectors u satisfying

uV_{r×r}(a) = v   mod 1 + x^p,   (20)

are congruent to each other modulo Mp(x).
Proof. Suppose that v1 (x), . . . , vr (x) are polynomials in Rp satisfying (19). We want to show
that the vector v = (v1 (x), . . . , vr (x)) is in the image of F . We first consider the case that
v1 (1) = v2 (1) = · · · = vr (1) = 0. Since 1 + x and Mp (x) are relatively prime polynomials, by
Chinese remainder theorem, we have an isomorphism
θ(f (x)) = (f (x) mod 1 + x, f (x) mod Mp (x))
defined for f (x) ∈ Rp . The inverse of θ is given by
θ−1 (a(x), b(x)) = Mp (x)a(x) + (1 + Mp (x))b(x) mod 1 + xp ,
where a(x) ∈ F2 [x]/(1 + x) and b(x) ∈ F2 [x]/Mp (x). We thus have a decomposition of the ring
Rp as a direct sum of F2 [x]/(1 + x) and F2 [x]/Mp (x). It suffices to investigate the action of
multiplication over Vr×r (a) by considering
uVr×r (a) = v
uVr×r (a) = v
mod 1 + x, and
(21)
mod Mp (x).
(22)
Note that (21) is equivalent to
(u mod (1 + x)) · (Vr×r (a) mod (1 + x)) = v mod (1 + x).
Also Vr×r (a) mod (1 + x) is an r × r all one matrix and v mod (1 + x) = 0 because vi (1) =
0, 1 ≤ i ≤ r. It is sufficient to find the r components of a solution u′ from the binary field such that
their summation is zero. Therefore, there are many solutions u′, and each solution has an even
number of ones among its components.
For (22), we need to show that the determinant of V_{r×r}(a) is invertible modulo Mp(x). Since
the determinant of V_{r×r}(a) is²

det(V_{r×r}(a)) = \prod_{i_1 < i_2} (x^{a_{i_1}} + x^{a_{i_2}}),

² Since −1 is the same as 1 in F2, we replace −1 with 1 in this work and addition is the same as subtraction.
we need to show that xai1 + xai2 and Mp (x) are relatively prime polynomials in F2 [x], for all
pairs of distinct (i1 , i2 ). We first factorize xai1 + xai2 in the form xai1 + xai2 = xai1 (1 + xd ), and
by assumption, d is an integer which is coprime with p. This problem now reduces to showing
that (i) 1 + xd and Mp (x) are relatively prime in F2 [x] whenever gcd(d, p) = 1, and (ii) x`
and Mp (x) are relatively prime in F2 [x] for all integer `. We can show this by Lemma 4. We
can thus solve the equation in (22), say, by Cramer’s rule, to obtain the unique solution u00 .
After obtaining the solutions u0i (x) ∈ F2 [x]/(1 + x) and u00i (x) ∈ F2 [x]/Mp (x) in (21) and (22)
respectively for all i, we can calculate the solution via the isomorphism θ−1
θ−1 (u0i (x), u00i (x)) = Mp (x)u0i (x) + (1 + Mp (x))u00i (x) mod 1 + xp .
Therefore, the solutions u ∈ Rp^r of (20) are

((1 + Mp(x))u″_1(x) + ε_1 Mp(x), · · · , (1 + Mp(x))u″_r(x) + ε_r Mp(x)),   (23)

where each ε_i is equal to 0 or 1 and the number of ones among the ε_i is even. That is to say,
there are many solutions of (20), and after reducing modulo Mp(x) they all coincide with the
unique solution of (22).
When v1 (1) = v2 (1) = · · · = vr (1) = 1, similar argument can be applied to find the solution.
The only difference between this solution and (23) is that the number of ones among all i in
this solution is odd. This completes the proof.
From the above theorem, whenever the vector v satisfies the condition (19), we can decode one
solution u of (20). Recall that, for the augmented array of EVENODD codes, every component
of u has a zero coefficient for the term of degree p − 1; hence each component of
the real solution has degree at most p − 2. By the theorem, reducing u modulo Mp(x) gives
us the final solution. Therefore, to solve for u in (22), we can first solve for u over Rp and then
reduce every component of u modulo Mp(x). This will be demonstrated in the next section.
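The decomposition used in the proof of Theorem 5 can also be checked numerically. The following sketch (ours) implements θ and θ^{-1} for R_5 as maps on bit lists and verifies that they are mutually inverse; the representation is the same illustrative one as before.

```python
# Chinese-remainder decomposition of R_p: theta(f) = (f mod (1+x), f mod M_p(x)).
p = 5

def pmul(f, g):                                   # multiplication in R_p
    h = [0] * p
    for i, fi in enumerate(f):
        if fi:
            for j, gj in enumerate(g):
                if gj:
                    h[(i + j) % p] ^= 1
    return h

Mp = [1] * p                                      # M_p(x) = 1 + x + ... + x^{p-1}
one_plus_Mp = [0] + [1] * (p - 1)                 # 1 + M_p(x)

def theta(f):
    parity = sum(f) % 2                           # f(1), i.e. f mod (1 + x)
    return parity, [x ^ f[-1] for x in f[:-1]] + [0]   # f mod M_p(x)

def theta_inv(parity, b):                         # M_p(x)a(x) + (1 + M_p(x))b(x)
    a = [parity] + [0] * (p - 1)
    return [x ^ y for x, y in zip(pmul(Mp, a), pmul(one_plus_Mp, b))]

f = [1, 0, 1, 1, 0]
assert theta_inv(*theta(f)) == f
```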
V. EFFICIENT DECODING OF VANDERMONDE SYSTEM OVER Rp
In this section, we will present an efficient decoding method of the Vandermonde system over
Rp based on LU factorization of the Vandermonde matrix.
A. Efficient Division by 1 + xd
We want to first present two decoding algorithms for performing division by 1 + xd which
will be used in the decoding process, where d is a positive integer that is coprime with p and
all operations are performed in the ring Rp = F2 [x]/(1 + xp ). Given the equation
(1 + xd )g(x) = f (x) mod 1 + xp ,
(24)
where d is a positive integer such that gcd(d, p) = 1 and f(x) has an even number of nonzero terms,
one method to compute g(x) is given in Lemma 8 in [26], which is summarized as follows.
Lemma 6. [26, Lemma 8] The coefficients of g(x) in (24) can be computed by

g_{p−1} = 0,  g_{p−d−1} = f_{p−1},  g_{d−1} = f_{d−1},
g_{p−(i+1)d−1} = f_{p−id−1} + g_{p−id−1}   for i = 1, . . . , p − 3,

where g(x) = \sum_{i=0}^{p−1} g_i x^i and f(x) = \sum_{i=0}^{p−1} f_i x^i.
Although computing the division in (24) by Lemma 6 only takes p − 3 XORs, we do not know
whether the resulting polynomial g(x) has an even number of nonzero terms or not. In solving the
Vandermonde linear system in the next subsection, we need to compute many divisions of the
form in (24). If we do not require that the solved polynomial g(x) have an even number
of nonzero terms, then we can employ Lemma 6 to solve the division; otherwise, Lemma 6 is
not applicable to such a division. Therefore, we need the following lemma, which computes the
division when g(x) is required to have an even number of nonzero terms.
Lemma 7. [23, Lemma 13] Given the equation in (24), we can compute the coefficient g0 by
g0 = f2d + f4d + · · · + f(p−1)d ,
(25)
and the other coefficients of g(x) can be iteratively computed by
gd` = fd` + gd(`−1) for ` = 1, 2, . . . , p − 1.
(26)
Note that the subscripts in Lemma 7 are taken modulo p. As gcd(d, p) = 1, we have that
{0, d, 2d, · · · , (p − 1)d} = {0, 1, 2, · · · , p − 1} mod p.
Therefore, we can compute all the coefficients of g(x) by Lemma 7. The result in Lemma 7 has
been observed in [23], [27]. We can check that the computed polynomial g(x) satisfies g(1) = 0,
by adding the equation in (25) and the equations in (26) for ` = 2, 4, . . . , p − 1. The number of
XORs required in computing the division by Lemma 7 is (3p − 5)/2.
For the same parameters, the g(x) computed by Lemma 6 is equal either to the g(x) computed
by Lemma 7 or to the summation of Mp(x) and the g(x) computed by Lemma 7, depending
on whether the g(x) computed by Lemma 6 has an even number of nonzero terms or not. When
solving a division in (24), we prefer the method in Lemma 6 if there is no requirement that
the resulting polynomial g(x) have an even number of nonzero terms, as the method in
Lemma 6 involves fewer XORs.
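A direct implementation may help to compare the two division routines. The following Python sketch (ours) realizes Lemma 6 and Lemma 7 on length-p bit lists and checks both quotients by multiplying back by 1 + x^d; the example values are illustrative.

```python
# Division by (1 + x^d) in R_p = F_2[x]/(1 + x^p), for gcd(d, p) = 1 and f of even weight.
def divide_lemma7(f, d):
    """Quotient with an even number of nonzero terms, (3p-5)/2 XORs (Lemma 7)."""
    p = len(f)
    g = [0] * p
    for m in range(2, p, 2):               # g_0 = f_{2d} + f_{4d} + ... + f_{(p-1)d}
        g[0] ^= f[(m * d) % p]
    for l in range(1, p):                  # g_{dl} = f_{dl} + g_{d(l-1)}, indices mod p
        g[(l * d) % p] = f[(l * d) % p] ^ g[((l - 1) * d) % p]
    return g

def divide_lemma6(f, d):
    """Quotient with zero coefficient at degree p-1, p-3 XORs (Lemma 6)."""
    p = len(f)
    g = [0] * p
    g[p - 1] = 0
    g[(p - 1 - d) % p] = f[p - 1]
    g[d - 1] = f[d - 1]
    for i in range(1, p - 2):              # g_{p-(i+1)d-1} = f_{p-id-1} + g_{p-id-1}
        g[(p - (i + 1) * d - 1) % p] = f[(p - i * d - 1) % p] ^ g[(p - i * d - 1) % p]
    return g

# self-check in R_5: f = 1 + x^2 divided by 1 + x gives 1 + x
p, d = 5, 1
f = [1, 0, 1, 0, 0]
for g in (divide_lemma6(f, d), divide_lemma7(f, d)):
    prod = [g[i] ^ g[(i - d) % p] for i in range(p)]   # multiply back by (1 + x^d)
    assert prod == f
```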
B. LU Method of Vandermonde Systems over Rp
The next theorem is the core of the fast LU method for solving Vandermonde system of
equations v = uVr×r (a) which is based on the LU decomposition of a Vandermonde matrix
given in [28].
Theorem 8. [28] For positive integer r, the square Vandermonde matrix V_{r×r}(a) defined in
(18) can be factorized into

V_{r×r}(a) = L_r^{(1)} L_r^{(2)} · · · L_r^{(r−1)} U_r^{(r−1)} U_r^{(r−2)} · · · U_r^{(1)},

where U_r^{(ℓ)} is the upper triangular matrix whose upper-left (r − ℓ − 1) × (r − ℓ − 1) block is the identity
I_{r−ℓ−1} and whose lower-right (ℓ + 1) × (ℓ + 1) block is the upper bidiagonal matrix

  [ 1  x^{a_1}                          ]
  [    1       x^{a_2}                  ]
  [        ⋱            ⋱               ]
  [                 1        x^{a_ℓ}    ]
  [                          1          ],

and L_r^{(ℓ)} is the lower triangular matrix whose upper-left (r − ℓ − 1) × (r − ℓ − 1) block is I_{r−ℓ−1}
and whose lower-right (ℓ + 1) × (ℓ + 1) block is the lower bidiagonal matrix

  [ 1                                             ]
  [ 1  x^{a_{r−ℓ+1}} + x^{a_{r−ℓ}}                ]
  [       ⋱                ⋱                      ]
  [            1    x^{a_{r−1}} + x^{a_{r−ℓ}}     ]
  [                 1    x^{a_r} + x^{a_{r−ℓ}}    ],

for ℓ = 1, 2, . . . , r − 1.
For example, the Vandermonde matrix V_{3×3}(1, x, x^4) can be factorized as

L_3^{(1)} L_3^{(2)} U_3^{(2)} U_3^{(1)} =
  [ 1  0  0       ] [ 1  0      0       ] [ 1  1  0 ] [ 1  0  0 ]
  [ 0  1  0       ] [ 1  x + 1  0       ] [ 0  1  x ] [ 0  1  1 ]
  [ 0  1  x^4 + x ] [ 0  1      x^4 + 1 ] [ 0  0  1 ] [ 0  0  1 ].
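The factorization in this example can be verified mechanically; the sketch below (ours) multiplies the four factors over R_5 and compares the product with V_{3×3}(1, x, x^4).

```python
# Verify L3^(1) L3^(2) U3^(2) U3^(1) = V_{3x3}(1, x, x^4) over R_5.
p = 5

def poly(*exps):                       # polynomial with the given exponents set to 1
    f = [0] * p
    for e in exps:
        f[e % p] ^= 1
    return f

def pmul(f, g):                        # multiplication in R_p
    h = [0] * p
    for i, fi in enumerate(f):
        if fi:
            for j, gj in enumerate(g):
                if gj:
                    h[(i + j) % p] ^= 1
    return h

def mmul(A, B):                        # matrix product with entries in R_p
    n = len(A)
    C = [[[0] * p for _ in range(n)] for _ in range(n)]
    for i in range(n):
        for j in range(n):
            for t in range(n):
                C[i][j] = [x ^ y for x, y in zip(C[i][j], pmul(A[i][t], B[t][j]))]
    return C

one, zero = poly(0), [0] * p
L1 = [[one, zero, zero], [zero, one, zero], [zero, one, poly(4, 1)]]
L2 = [[one, zero, zero], [one, poly(1, 0), zero], [zero, one, poly(4, 0)]]
U2 = [[one, poly(0), zero], [zero, one, poly(1)], [zero, zero, one]]
U1 = [[one, zero, zero], [zero, one, one], [zero, zero, one]]
V = [[one, poly(a), poly(2 * a)] for a in (0, 1, 4)]
assert mmul(mmul(L1, L2), mmul(U2, U1)) == V
```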
Based on the factorization in Theorem 8, we have a fast algorithm for solving a Vandermonde
system of linear equations. Given the matrix Vr×r (a) and a row vector v = (v1 (x), . . . , vr (x)),
we can solve the linear system uVr×r (a) = v in (20) by solving
u L_r^{(1)} L_r^{(2)} · · · L_r^{(r−1)} U_r^{(r−1)} U_r^{(r−2)} · · · U_r^{(1)} = v.   (27)
As the inversion of each of the upper and lower triangular matrices can be done efficiently, we
can solve for u by inverting 2(r − 1) triangular matrices.
Algorithm 1 Solving a Vandermonde linear system.
Inputs: positive integer r, odd integer p, integers a_1, a_2, . . . , a_r, and v = (v_1(x), v_2(x), . . . , v_r(x)) ∈ Rp^r.
Output: u = (u_1(x), . . . , u_r(x)) that satisfies uV_{r×r}(a) = v.
Require: v_1(1) = v_2(1) = · · · = v_r(1), and gcd(a_{i_1} − a_{i_2}, p) = 1 for all 1 ≤ i_1 < i_2 ≤ r.
 1:  u ← v.
 2:  for i from 1 to r − 1 do
 3:      for j from r − i + 1 to r do
 4:          u_j(x) ← u_j(x) + u_{j−1}(x) x^{a_{i+j−r}}
 5:  for i from r − 1 down to 1 do
 6:      Solve g(x) from (x^{a_r} + x^{a_{r−i}}) g(x) = u_r(x) by Lemma 7, or by Lemma 6 (only when i = 1); u_r(x) ← g(x)
 7:      for j from r − 1 down to r − i + 1 do
 8:          Solve g(x) from (x^{a_j} + x^{a_{r−i}}) g(x) = u_j(x) + u_{j+1}(x) by Lemma 7, or by Lemma 6 (only when i + j = r + 1); u_j(x) ← g(x)
 9:      u_{r−i}(x) ← u_{r−i}(x) + u_{r−i+1}(x)
10:  return u = (u_1(x), . . . , u_r(x))
The procedure of solving a Vandermonde system of linear equations is given in Algorithm 1. In
Algorithm 1, steps 2 to 4 are forward additions that require r(r − 1)/2 additions and r(r − 1)/2
multiplications. Steps 5 to 9 are backward additions, and require r(r − 1)/2 additions and
r(r−1)/2 divisions by factors of the form xaj +xar−i . Division by xaj +xar−i is done by invoking
the method in Lemma 6 or Lemma 7. We may compute all the divisions by Lemma 7. However,
some of the divisions can be computed by Lemma 6, which reduces the computational complexity.
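The following self-contained Python sketch (ours, not the authors' implementation) realizes Algorithm 1 on length-p bit lists. For simplicity it performs every division with the Lemma 7 routine; the paper instead uses Lemma 6 for the divisions named in Theorem 9 to save XORs, and by Theorem 5 the two choices return solutions that agree modulo Mp(x).

```python
# Solve u V_{rxr}(a) = v over R_p = F_2[x]/(1 + x^p) via the LU factorization of Theorem 8.
def solve_vandermonde(v, a, p):
    r = len(v)

    def shift(f, s):                       # multiply by x^s in R_p (s may be negative)
        return [f[(i - s) % p] for i in range(p)]

    def add(f, g):
        return [x ^ y for x, y in zip(f, g)]

    def divide(f, d):                      # g with (1 + x^d) g = f, by Lemma 7
        g = [0] * p
        for m in range(2, p, 2):
            g[0] ^= f[(m * d) % p]
        for l in range(1, p):
            g[(l * d) % p] = f[(l * d) % p] ^ g[((l - 1) * d) % p]
        return g

    u = [list(x) for x in v]
    for i in range(1, r):                  # steps 2-4: forward pass (inverting the U's)
        for j in range(r - i, r):          # 0-based j corresponds to paper's j = r-i+1..r
            u[j] = add(u[j], shift(u[j - 1], a[i + j - r]))
    for i in range(r - 1, 0, -1):          # steps 5-9: backward pass (inverting the L's)
        # (x^{a_r}+x^{a_{r-i}}) u_r = ...  i.e.  (1 + x^{a_r-a_{r-i}}) (x^{a_{r-i}} u_r) = ...
        d = (a[r - 1] - a[r - 1 - i]) % p
        u[r - 1] = divide(shift(u[r - 1], -a[r - 1 - i]), d)
        for j in range(r - 2, r - i - 1, -1):   # paper's j = r-1 .. r-i+1
            d = (a[j] - a[r - 1 - i]) % p
            u[j] = divide(shift(add(u[j], u[j + 1]), -a[r - 1 - i]), d)
        u[r - 1 - i] = add(u[r - 1 - i], u[r - i])   # step 9
    return u

# tiny usage check in R_5 with a = (0, 1, 4): pick u, form v = u V, and solve back
p, a = 5, [0, 1, 4]
u_true = [[1, 1, 0, 0, 0], [0, 1, 1, 0, 0], [1, 0, 1, 0, 0]]
v = [[0] * p for _ in range(3)]
for m in range(3):
    for i in range(3):
        s = [u_true[i][(t - (m * a[i]) % p) % p] for t in range(p)]
        v[m] = [x ^ y for x, y in zip(v[m], s)]
u = solve_vandermonde(v, a, p)
reduce_Mp = lambda f: [x ^ f[-1] for x in f[:-1]]   # solutions agree modulo M_p(x)
assert [reduce_Mp(x) for x in u] == [reduce_Mp(x) for x in u_true]
```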
Theorem 9. Algorithm 1 outputs a vector u that is one of the solutions to uV_{r×r}(a) = v over Rp.
Furthermore, the g(x) in step 6 when i = 1, and in step 8 when i + j = r + 1 and i ∈ {2, 3, . . . , r − 1},
can be computed by Lemma 6 to reduce the computational complexity.
Proof. First, we want to show that Algorithm 1 implements precisely the matrix multiplications
in (27). Consider the linear equations uU_r^{(i)} = v for i = 1, 2, . . . , r − 1. According to the upper
triangular matrix U_r^{(i)} in Theorem 8, we can obtain the relation between u and v as

u_j(x) = v_j(x)   for j = 1, . . . , r − i,
x^{a_{i+j−r}} u_{j−1}(x) + u_j(x) = v_j(x)   for j = r − i + 1, . . . , r.

We can observe from Algorithm 1 that steps 1 to 4 solve u from uU_r^{(r−1)} U_r^{(r−2)} · · · U_r^{(1)} = v,
and we denote the solved r polynomials by v′ = (v′_1(x), . . . , v′_r(x)). Consider the equations
uL_r^{(i)} = v′. According to the lower triangular matrix L_r^{(i)} in Theorem 8, the relation between u
and v′ is as follows:

u_j(x) = v′_j(x)   for j = 1, . . . , r − i − 1,
u_{r−i}(x) + u_{r−i+1}(x) = v′_{r−i}(x),   (28)
(x^{a_j} + x^{a_{r−i}}) u_j(x) + u_{j+1}(x) = v′_j(x)   for j = r − i + 1, . . . , r − 1,   (29)
(x^{a_r} + x^{a_{r−i}}) u_r(x) = v′_r(x).   (30)

It is easy to see that step 6, step 8 and step 9 solve u_r(x) from (30), u_j(x) from (29) and u_{r−i}(x)
from (28), respectively. We thus obtain that steps 5 to 9 solve u from uL_r^{(1)} L_r^{(2)} · · · L_r^{(r−1)} = v′.
Therefore, Algorithm 1 precisely computes u from the matrix multiplication in (27).
Note that we need to compute r(r−1)/2 divisions (solving g(x)) in steps 6 and 8 in Algorithm
1, and all divisions can be solved by Lemma 6 or Lemma 7. To solve all divisions by Lemma
6 or Lemma 7, all polynomials ur (x) and uj (x) + uj+1 (x) in steps 6 and 8 must have even
number of nonzero terms, which is a requirement in computing the divisions. That is ur (1) = 0
and uj (1) + uj+1 (1) = 0 when we calculate g(x) in these steps. Next we show this requirement
is ensured during the process of the algorithm.
Consider two cases: vi (1) = 0 for all i and vi (1) = 1 for all i. If vi (1) = 0 for all i, then we
have ui (1) = 0 for all i after the double for loops between steps 2 to 4. Hence ur (1) = 0 in step
6 when i = r − 1. In step 6, we need to ensure that all ur (x) produced satisfy ur (1) = 0. Since
all g(x), hence ur (x), generated by Lemma 7 have such property, we only need to consider the
case that g(x) is generated by Lemma 6, i.e., when i = 1. When i = 1, the ur (x) assigned by
g(x) will not be invoked in computing the division in steps 7 and 8 as r − i + 1 = r which is
larger than the initial value r − 1 of the loop in step 7. Hence, there is no need to run the for
loop in step 7 to step 8.
Similarly, we need to ensure that uj (1) − uj+1 (1) = 0 for j = r − 1, r − 2, . . . , r − i − 1. It
is sufficient to ensure that uj (1) = 0 and uj+1 (1) = 0. Since all g(x), hence uj (x), generated
by Lemma 7 already have had such property, in step 8, we only need to consider the case that
g(x) is generated by Lemma 6, i.e., when i + j = r + 1. Next, we prove that when i + j = r + 1,
the uj (x) assigned by g(x) in step 8 are not used again to solve the division in step 8. Let i = t
(i = r − 1, . . . , 2). Note that, when i + j = r + 1, the algorithm is in the final iteration of the for
loop in step 7. Hence, we need to prove that uj (x) = ur−t+1 (x) will not be used in the iterations
i < t. When i < t in step 7, the last iteration j = r − i + 1 > r − t + 1 such that ur−t+1 (x) will
not be used in the calculation involving uj (x) and uj+1 (x) in step 8.
If vi (1) = 1 for all i, then after the double for loops between steps 2 to 4, we have
u1 (1) = 1, and u2 (1) = u3 (1) = · · · = ur (1) = 0.
In this case, we only need to show that u1 (x) has never been used in the calculation of g(x) in
step 8. Note that, in the last iteration of step 7, j = r − i + 1 has never gone down to 1 since
i ≤ r − 1. Hence, u1 (x) has never been used in step 8.
The next theorem shows the computational complexity in Algorithm 1.
Theorem 10. The computation complexity in Algorithm 1 is at most
r(r − 1)p + (r − 1)(p − 3) + (r − 1)(r − 2)(3p − 5)/4.
(31)
Proof. In Algorithm 1, there are r(r − 1) additions that require r(r − 1)p XORs, r(r − 1)/2
multiplications that require no XORs (only cyclic shift applied) and r(r − 1)/2 divisions that
require (r − 1)(p − 3) + (r − 1)(r − 2)(3p − 5)/4 XORs (r − 1 divisions are computed by
Lemma 6 and the other divisions are computed by Lemma 7). Therefore, the total computation
of Algorithm 1 is at most (31).
According to Theorem 5, we can solve the Vandermonde system over F2 [x]/Mp (x) by first
solving the Vandermonde system over Rp and then reducing the r resulting polynomials by
modulo Mp (x). By Theorem 9, we can solve the Vandermonde system over Rp by using the
factorization method in Theorem 8.
Remark. Since in Algorithm 1, when i = 1, the output ur (x) is computed from the division
in step 6 which is solved by Lemma 6, we have that the last coefficient of ur (x), gp−1 , is zero.
For i = 1, 2, . . . , r − 1, the output ur−i (x) is a summation of ur−i (x) and ur−i+1 (x), where
ur−i+1 (x) is computed in the last iteration in step 7 for i = r − 1, r − 2, . . . , 2, and ur−i (x) is
computed in the last iteration in step 7 for i + 1 = r − 1, r − 2, . . . , 2. Note that the last iteration
for each i is solved by Lemma 6. Thus the last coefficient of ui (x) is zero for i = 2, 3, . . . , r − 1.
The output u1 (x) is the summation of u1 (x) and u2 (x) by step 9, where u1 (x) = v1 (x) and
u2 (x) is computed from the division in step 8 when i = r − 1, which is solved by Lemma 6.
Therefore, if the last coefficient of v1 (x) is zero, then the last coefficient of the output u1 (x)
is zero. Otherwise, the last coefficient of the output u1 (x) is one. We thus have that the last
coefficient of each of the last r − 1 resulting polynomials is zero. Therefore, it is not necessary
to reduce the last r − 1 resulting polynomials by modulo Mp (x) and we only need to reduce
the first resulting polynomial by modulo Mp (x). In the example of EVENODD(5, 3, 3; (0, 1, 4)),
the three components of the returned u are exactly equal to the three information polynomials
of EVENODD(5, 3, 3; (0, 1, 4)), as the last coefficient of v1 (x) is zero.
Although the LU decoding method for the r × r Vandermonde linear systems over Rp is also
discussed in Theorem 14 of [23], the complexity of the algorithm provided there is (7/4)r(r − 1)p, which
is larger than (31). The reason for the computation reduction in (31) is as follows. There are
r − 1 divisions in Algorithm 1 that are solved with p − 3 XORs each, while
all r(r − 1) divisions are solved with (3p − 5)/2 XORs each in [23].
VI. ERASURE DECODING OF EVENODD(p, k, r) AND RDP(p, k, r)
The efficient decoding method of the Vandermonde linear systems over Rp proposed in
Section V is applicable to the information column failure and some particular cases with both
information failure and parity failure of EVENODD(p, k, r; g(k)), RDP(p, k, r; g(k + 1)) and
DRAFT
March 12, 2018
SUBMITTED PAPER
21
Vandermonde array codes such as Blaum-Roth code [19]. We first consider the decoding method
for EVENODD(p, k, r; g(k)).
A. Erasure Decoding of EVENODD(p, k, r)
Suppose that γ information columns e1 , . . . , eγ and δ parity columns f1 , . . . , fδ are erased
with 0 ≤ e1 < . . . < eγ ≤ k − 1 and k + 1 ≤ f1 < . . . < fδ ≤ k + r − 1, where k ≥ γ ≥ 0,
r − 1 ≥ δ ≥ 0 and γ + δ = ρ ≤ r. Let f0 = k − 1 and fδ+1 = k + r − 1, we assume that there
exist λ ∈ {0, 1, . . . , δ} such that fλ+1 − fλ ≥ γ + 1. We have that the columns fλ + 1, . . . , fλ + γ
are not erased. Let
A := {0, 1, . . . , k − 1} \ {e1 , e2 , . . . , eγ }
be a set of indices of the available information columns. We want to first recover the lost
information columns by reading k − γ information columns with indices i1 , i2 , . . . , ik−γ ∈ A,
and γ parity columns with indices fλ + 1, fλ + 2, . . . , fλ + γ, and then recover the failure parity
column by re-encoding the failure parity bits according to (2) for ` = f1 − k, . . . , fδ − k.
First, we compute the bits of the γ parity columns fλ + 1, fλ + 2, . . . , fλ + γ of the augmented
array according to (8) and (9) in Lemma 2. This can be done since column k is not failed.
Then, we represent the k − γ information columns and γ parity columns by k − γ information
polynomials a0i (x) as
a0i (x) := a0,i + a1,i x + · · · + ap−2,i xp−2
(32)
for i ∈ A and γ parity polynomials a0fλ +j (x)
a0fλ +j (x) := a00,fλ +j + a01,fλ +j x + · · · + a0p−1,fλ +j xp−1
(33)
for j = 1, 2, . . . , γ. Then, we subtract k − γ information polynomials a0i (x) in (32), i ∈ A from
the γ parity polynomials a0fλ +1 (x), . . . , a0fλ +γ (x) in (33), to obtain γ syndrome polynomial āh (x)
over Rp as
ā_h(x) = a′_{f_λ+h}(x) + \sum_{i∈A} a′_i(x) x^{g(i)·(f_λ+h−k)},   (34)
for h = 1, 2, . . . , γ. Therefore, we can establish the relation between the syndrome polynomials
and the erased information polynomials as follows
[ ā_1(x)  · · ·  ā_γ(x) ] = [ a′_{e_1}(x)  · · ·  a′_{e_γ}(x) ] ·
  [ x^{g(e_1)(f_λ+1−k)}  x^{g(e_1)(f_λ+2−k)}  · · ·  x^{g(e_1)(f_λ+γ−k)} ]
  [ x^{g(e_2)(f_λ+1−k)}  x^{g(e_2)(f_λ+2−k)}  · · ·  x^{g(e_2)(f_λ+γ−k)} ]
  [          ⋮                    ⋮            ⋱             ⋮           ]
  [ x^{g(e_γ)(f_λ+1−k)}  x^{g(e_γ)(f_λ+2−k)}  · · ·  x^{g(e_γ)(f_λ+γ−k)} ].
The right-hand side of the above equations can be reformulated as
[ x^{g(e_1)(f_λ+1−k)} a′_{e_1}(x)  · · ·  x^{g(e_γ)(f_λ+1−k)} a′_{e_γ}(x) ] V_{γ×γ}(e),

where V_{γ×γ}(e) is a Vandermonde matrix

V_{γ×γ}(e) :=
  [ 1  x^{g(e_1)}  · · ·  x^{g(e_1)(γ−1)} ]
  [ 1  x^{g(e_2)}  · · ·  x^{g(e_2)(γ−1)} ]
  [ ⋮       ⋮       ⋱          ⋮          ]
  [ 1  x^{g(e_γ)}  · · ·  x^{g(e_γ)(γ−1)} ].

By (17), we have that a′_{f_λ+h}(1) = \sum_{j=0}^{k−1} a′_j(1), and we thus have

ā_h(1) = a′_{f_λ+h}(1) + \sum_{i∈A} a′_i(1) = \sum_{j=0}^{k−1} a′_j(1) + \sum_{i∈A} a′_i(1),
which is independent of h. Thus, we obtain that ā_1(1) = · · · = ā_γ(1). We can then obtain the
erased information polynomials by first solving the Vandermonde linear system over Rp by
Algorithm 1, cyclically left-shifting the solved polynomials x^{g(e_i)(f_λ+1−k)} a′_{e_i}(x) by g(e_i)(f_λ + 1 − k)
positions for i = 1, 2, . . . , γ, and then reducing a′_{e_1}(x) modulo Mp(x) when λ > 0, according to
the remark at the end of Section V-B. If λ = 0, then f_λ + 1 − k = 0 and the last coefficient
of ā_1(x) is zero, so we do not need to reduce a′_{e_1}(x) modulo Mp(x), according to the remark at
the end of Section V-B. The parity bits in the δ erased parity columns can be recovered by (2).
Note that column k is assumed to be non-failure, as column k is needed to compute the bits
of the augmented array by (8) and (9).
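As a toy end-to-end check (ours), the sketch below applies the syndrome equation (34) to a single erased information column, using the parity column k + 1 of the augmented array built directly from (7); in the actual decoder this column would first be obtained from the stored code via (8) and (9). With γ = 1 the Vandermonde system is trivial, so the erased column is recovered by undoing one cyclic shift. All names and parameters are illustrative.

```python
# Single information-column erasure recovery from the syndrome of equation (34).
import random

p, k, r = 5, 3, 3
g = (0, 1, 4)

random.seed(3)
A = [[random.randint(0, 1) for _ in range(k)] + [0] * r for _ in range(p)]
for j in range(k):
    A[p - 1][j] = 0
for i in range(p):
    A[i][k] = sum(A[i][j] for j in range(k)) % 2               # row parity
for l in range(1, r):
    for i in range(p):                                          # equation (7)
        A[i][k + l] = sum(A[(i - l * g[j]) % p][j] for j in range(k)) % 2

e = 1                                          # erase information column e
col = lambda j: [A[i][j] for i in range(p)]
syndrome = col(k + 1)
for j in range(k):
    if j != e:                                 # subtract the surviving shifted columns
        shifted = [col(j)[(i - g[j]) % p] for i in range(p)]
        syndrome = [s ^ t for s, t in zip(syndrome, shifted)]
recovered = [syndrome[(i + g[e]) % p] for i in range(p)]        # undo the shift x^{g(e)}
assert recovered == col(e)
```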
B. Erasure Decoding of RDP(p, k, r)
Similar to the decoding for EVENODD(p, k, r), we assume that γ information columns
indexed by e1 , . . . , eγ and δ parity columns f1 , . . . , fδ of RDP(p, k, r) are erased with 0 ≤
e1 < . . . < eγ ≤ k − 1 and k + 1 ≤ f1 < . . . < fδ ≤ k + r − 1, where k ≥ γ ≥ 0,
r − 1 ≥ δ ≥ 0 and γ + δ = ρ ≤ r. Let f0 = k − 1 and fδ+1 = k + r − 1 and assume that there
exist λ ∈ {0, 1, . . . , δ} such that fλ+1 − fλ ≥ γ + 1. The decoding procedure can be divided into
two cases: λ ≥ 1 and λ = 0.
(i) λ ≥ 1. First, we formulate k − γ surviving information polynomials bi (x) for i ∈ A as
bi (x) := b0,i + b1,i x + · · · + bp−2,i xp−2 ,
and γ + 1 parity polynomials as
bk (x) := b0,k + b1,k x + · · · + bp−2,k xp−2 ,
b_{f_λ+j}(x) := b_{0,f_λ+j} + · · · + b_{p−2,f_λ+j} x^{p−2} + ( \sum_{i=0}^{p−2} b_{i,f_λ+j} ) x^{p−1},
where j = 1, 2, . . . , γ. Then, we compute γ syndrome polynomials b̄1 (x), b̄2 (x), . . . , b̄γ (x) by
b̄_h(x) = b_{f_λ+h}(x) + b_k(x) x^{g(k)(f_λ+h−k)} + \sum_{i∈A} b_i(x) x^{g(i)(f_λ+h−k)},
for h = 1, 2, . . . , γ. It is easy to check that b̄1 (1) = · · · = b̄γ (1). By the remark at the end of
Section V-B, the erased information polynomials can be computed by first solving the following
Vandermonde system of linear equations
h
i h
b̄1 (x) · · · b̄γ (x) = xg(e1 )(fλ +1−k) be1 (x) · · ·
x
g(eγ )(fλ +1−k)
i
beγ (x) Vr×r (e),
over Rp by Algorithm 1, cyclic-left-shifting the resultant xg(ei )(fλ +1−k) bei (x) by g(ei )(fλ + 1 − k)
positions for i = 1, 2, . . . , γ, and then reducing be1 (x) modulo Mp (x).
(ii) λ = 0. We have that columns k, k + 1, . . . , k + γ are not erased. We can obtain γ syndrome
polynomials b̄1 (x), b̄2 (x), . . . , b̄γ (x) by
b̄_1(x) = b_k(x) + \sum_{i∈A} b_i(x),

and

b̄_h(x) = b_{k+h}(x) + b_k(x) x^{g(k)·(h−1)} + \sum_{i∈A} b_i(x) x^{g(i)·(h−1)},
for h = 2, 3, . . . , γ. The erased information polynomials can be computed by solving the
following Vandermonde linear system
h
i h
b̄1 (x) · · · b̄γ (x) = be1 (x) · · ·
i
beγ (x) · Vr×r (e).
We do not need to reduce be1 (x) modulo Mp (x) according to the remark at the end of Section
V-B, as the last coefficient of b̄1 (x) is zero. Lastly, we can recover the δ parity columns by (5)
for i = 0, 1, . . . , p − 2 and ` = f1 − k, . . . , fδ − k.
Note that we need column k to obtain the syndrome polynomials in the decoding procedure,
so column k is assumed to be non-failure column.
Remark. In the erasure decoding, the assumption that there exist λ ∈ {0, 1, . . . , δ} such that
fλ+1 − fλ ≥ γ + 1 is necessary; otherwise, we cannot obtain the Vandermonde linear system and
Algorithm 1 is not applicable. The traditional decoding method, such as Cramer’s rule, can be
used to recover the failures if the assumption is not satisfied. In the next section, we consider
the decoding complexity for two codes when the assumption is satisfied.
VII. DECODING COMPLEXITY
In this section, we evaluate the decoding complexity for EVENODD(p, k, r) and RDP(p, k, r).
We determine the normalized decoding complexity as the ratio of the decoding complexity to
the number of information bits.
When r = 3, some specific decoding methods [7], [14], [15], [16], [17] are proposed to
optimize the decoding complexity of three information erasures, such as the decoding method
for STAR [14] and the decoding method for Triple-Star [16]. However, all those decoding
methods [7], [14], [15], [16], [17] only focus on the specific codes with r = 3 and cannot be
generalized for r ≥ 4. In the following, we evaluate the decoding complexity for more than
three information erasures.
Theorem 11. Suppose that γ information columns and δ parity columns f1 , . . . , fδ are erased.
Let f0 = k − 1 and fδ+1 = k + r − 1, we assume that there exist λ ∈ {0, 1, . . . , δ} such
that fλ+1 − fλ ≥ γ + 1. We employ Algorithm 1 to recover the γ information erasures and
recover the failure parity columns by re-encoding the parity bits. The decoding complexity of
EVENODD(p, k, r) is
p(γk + 3γ²/4 − γ/4 + 5/2) − γk − γ²/4 − 5γ/4 − 5/2 + δ(kp − k − 1)   when λ > 0,   (35)

p(γk + 3γ²/4 − γ/4 − 1/2) − γk − γ²/4 − 5γ/4 + 1/2 + δ(kp − k − 1)   when λ = 0.   (36)

The decoding complexity of RDP(p, k, r) is

p(γk + 3γ²/4 − γ/4 + 3/2) − γk − γ²/4 − 9γ/4 − 1/2 + δk(p − 2)   when λ > 0,   (37)

p(γk + 3γ²/4 − γ/4 − 3/2) − γk − γ²/4 − 9γ/4 + 7/2 + δk(p − 2)   when λ = 0.   (38)
Proof. Consider the decoding process of EVENODD(p, k, r). When λ > 0, we compute the
bits a′_{i,k+ℓ} of the augmented array from EVENODD(p, k, r) by (8) and (9) for ℓ = f_λ + 1 −
k, . . . , f_λ + γ − k. We first compute \sum_{i=0}^{p−2} a_{i,k}, and then compute a′_{p−1,k+ℓ} by (8) and a′_{i,k+ℓ} by
(9). Thus, the total number of XORs involved in computing the bits a0i,k+` is 2(p − 1)γ + (p − 2).
We now obtain k − γ information polynomials in (32) and γ parity polynomials in (33). Next,
we subtract k − γ surviving information polynomials from the γ parity polynomials to obtain γ
syndrome polynomials by (34) that takes γ(k − γ)(p − 1) XORs. The γ information polynomials
are obtained by solving the Vandermonde system of equations by using Algorithm 1, of which
the computational complexity is
γ(γ − 1)p + (γ − 1)(p − 3) + (γ − 1)(γ − 2)(3p − 5)/4
according to Theorem 10. Since the last γ − 1 output polynomials of Algorithm 1 are exactly
the last γ − 1 information polynomials of EVENODD(p, k, r), we only need to reduce the first
polynomial modulo Mp (x), which takes at most p − 1 XORs. The erased δ parity columns can
be recovered by (2) and the complexity is δ(kp − k − 1). Therefore, the decoding complexity
of EVENODD(p, k, r) is (35) when λ > 0.
When λ = 0, there are two differences compared with the case of λ > 0. First, we only
need to compute the bits of the augmented array for γ − 1 parity columns and the complexity
is 2(p − 1)(γ − 1) + (p − 2), as the bits in the first parity column of the augmented array are
the same as those of the first parity column of EVENODD(p, k, r). Second, we do not need
to reduce the first polynomial modulo Mp (x) after solving the Vandermonde system. Therefore,
the decoding complexity of λ = 0 has 3p − 3 XORs reduction and results in (36).
In RDP(p, k, r), computing r syndrome polynomials takes
γ(p − 2) + γ(k − γ + 1)(p − 1)
XORs when λ ≥ 1 and
(γ − 1)(p − 2) + (k − γ)(p − 1) + (γ − 1)(k − γ + 1)(p − 1)
XORs when λ = 0. Similar to EVENODD(p, k, r), the Vandermonde linear system can be
solved by Algorithm 1 with complexity
γ(γ − 1)p + (γ − 1)(p − 3) + (γ − 1)(γ − 2)(3p − 5)/4
XORs. Reducing one polynomial modulo Mp (x) takes at most p − 1 XORs when λ > 0. The δ
parity columns are recovered by (5) and its complexity is δk(p − 2). Therefore, the total number
of XORs involved in the decoding process results in (37) for λ ≥ 1 and (38) for λ = 0.
The Blaum-Roth decoding method [19], proposed for decoding Blaum-Roth codes, is also applicable
to the decoding of EVENODD(p, k, r). Suppose that γ information columns and δ parity columns
f1 , . . . , fδ are erased with the assumption that there exist λ ∈ {0, 1, . . . , δ} such that fλ+1 − fλ ≥
γ + 1. If one employs the Blaum-Roth decoding method to recover the information erasures and
recover the failure parity columns by re-encoding the parity bits, the decoding complexity of
EVENODD(p, k, r) is [29]
γ(k + γ)p + (3γ 2 + 0.5γ)p + γ 2 − 0.5γ + δ(kp − k − 1).
The Blaum-Roth decoding method cannot be directly employed on the erasure decoding for
RDP(p, k, r). However, one can first transform λ parity columns of RDP(p, k, r) into the form
of EVENODD(p, k, r) and then recover the erased information columns by the decoding method
of EVENODD(p, k, r). Let ai,j = bi,j for i = 0, 1, . . . , p − 2 and j = 0, 1, . . . , k − 1. That is, the
information bits of RDP(p, k, r) and EVENODD(p, k, r) are the same. We then have ai,k = bi,k
by (1) and (4) and
\sum_{i=0}^{p−2} b_{i,k+ℓ} + b_{p−1−ℓg(k),k} = \sum_{i=0}^{p−2} \sum_{j=0}^{k} b_{i−ℓg(j),j} + b_{p−1−ℓg(k),k}   (39)

= \sum_{j=0}^{k} b_{p−1−ℓg(j),j} + b_{p−1−ℓg(k),k}   (40)

= \sum_{j=0}^{k−1} b_{p−1−ℓg(j),j} = \sum_{j=0}^{k−1} a_{p−1−ℓg(j),j} = a_{p−1,k+ℓ},
where (39) comes from (5), (40) comes from (4) and
{−`g(j), 1 − `g(j), · · · , p − 1 − `g(j)} = {0, 1, · · · , p − 1} mod p.
Therefore, when λ > 0, we can transform λ parity columns of RDP(p, k, r) into the form of
EVENODD(p, k, r) by
a_{p−1,k+ℓ} = \sum_{i=0}^{p−2} b_{i,k+ℓ} + b_{p−1−ℓg(k),k}

and

a_{i,k+ℓ} = b_{i,k+ℓ} + b_{i−ℓg(k),k} + a_{p−1,k+ℓ}
for ` = fλ +1−k, . . . , fλ +γ −k and i = 0, 1, . . . , p−2. When λ = 0, we only need to transform
the bits for ` = 1, . . . , γ − 1 and i = 0, 1, . . . , p − 2, as column k of EVENODD(p, k, r) is the
same as column k of RDP(p, k, r). Then we employ the Blaum-Roth decoding method to obtain
the erased γ information columns of EVENODD(p, k, r). Lastly, we recover δ parity columns
by (5). The decoding complexity is then
γ(k + γ)p + (3γ 2 + 3.5γ)p + γ 2 − 0.5γ + δ(kp − 2k) − 3 for λ > 0,
γ(k + γ)p + (3γ 2 + 3.5γ − 3)p + γ 2 − 3.5γ + δ(kp − 2k) + 3 for λ = 0.
Note that we can recover the erased parity columns by encoding the parity bits according
to the definition for both EVENODD(p, k, r) and RDP(p, k, r) after recovering all the erased
information bits. Therefore, the main difference of the decoding complexity between the proposed
LU decoding method and the Blaum-Roth decoding method lies in the complexity of decoding the
Vandermonde linear system, i.e., the erasure decoding of information failures. In the following,
we consider a special case where δ = 0. We evaluate the decoding complexity of γ information
erasures for the proposed LU decoding method and the Blaum-Roth decoding method.

[Fig. 1: The normalized decoding complexity (number of XORs per information bit versus the parameter p) of γ = r information erasures of EVENODD(p, p, r) and RDP(p, p − 1, r) by Algorithm 1 and by the Blaum-Roth decoding method. (a) r = 4 and p ranges from 5 to 59; (b) r = 5 and p ranges from 5 to 59.]
For a fair comparison, we let k = p for EVENODD(p, k, r) and k = p − 1 for RDP(p, k, r). According to Theorem 11, the decoding complexity of γ information erasures of EVENODD(p, p, r)
and RDP(p, p − 1, r) by Algorithm 1 is
p(γp + 3γ²/4 − 5γ/4 − 1/2) − γ²/4 − 5γ/4 + 1/2,   and

p(γ(p − 1) + 3γ²/4 − 5γ/4 − 3/2) − γ²/4 − 5γ/4 + 7/2,
respectively.
When r = 4 and 5, the normalized decoding complexity of γ = r information erasures of
EVENODD(p, p, r) and RDP(p, p−1, r) by Algorithm 1 and by Blaum-Roth decoding method is
shown in Fig. 1. One can observe that EVENODD(p, p, r) and RDP(p, p − 1, r) decoded by LU
decoding method is more efficient than by the Blaum-Roth decoding method. When r = 4 and
p ranges from 5 to 59, the decoding complexity of EVENODD(p, p, 4) and RDP(p, p − 1, 4)
by Algorithm 1 has 20.1% to 71.3% and 22.7% to 77.7% reduction over that by the Blaum-Roth decoding method, respectively. When r = 5, the complexity reduction is 20.4% to 68.5%
and 25.7% to 78.5% for EVENODD(p, p, 5) and RDP(p, p − 1, 5), respectively. The reduction
increases when p is small and r is large. For example, RDP(p, p − 1, r) decoded by Algorithm 1
has 78.5% less decoding complexity than that by the Blaum-Roth decoding method when p = 5
and r = 5.
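The quoted figures can be reproduced directly from the complexity formulas. The snippet below (ours) evaluates the specialized LU expression above and the Blaum-Roth expression for RDP (the λ = 0, δ = 0 case) at p = 5, r = γ = 5.

```python
# Reproduce the quoted complexity reduction for RDP(p, p-1, r) at p = 5, r = 5.
p, gamma = 5, 5
lu = p * (gamma * (p - 1) + 3 * gamma**2 / 4 - 5 * gamma / 4 - 3 / 2) \
     - gamma**2 / 4 - 5 * gamma / 4 + 7 / 2
br = gamma * ((p - 1) + gamma) * p + (3 * gamma**2 + 3.5 * gamma - 3) * p \
     + gamma**2 - 3.5 * gamma + 3
print(lu, br, 1 - lu / br)      # 146.0 683.0 ~0.786, i.e. roughly the 78.5% quoted above
```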
The reasons that the decoding complexity of EVENODD(p, p, r) and RDP(p, p − 1, r) by the
LU decoding method is less than that by the Blaum-Roth decoding method are summarized as
follows. First, the Blaum-Roth decoding method operates over the ring F2[x]/Mp(x); however,
we show that the Vandermonde linear systems over F2[x]/Mp(x) can be computed by first solving
the Vandermonde linear systems over Rp and then reducing the results modulo Mp(x).³ The
operation of multiplication and division over Rp is more efficient than that over F2 [x]/Mp (x).
Second, the proposed LU decoding method is more efficient than the Blaum-Roth decoding
method.
VIII. DISCUSSION AND CONCLUSIONS
In this paper, we present a unified construction of EVENODD codes and RDP codes, which
can be viewed as a generalization of extended EVENODD codes and generalized RDP codes.
Moreover, an efficient LU decoding method is proposed for EVENODD codes and RDP codes,
and we show that the LU decoding method requires fewer XOR operations than the existing
algorithms when more than three information columns fail.
In most existing Vandermonde array codes, the parity bits are computed along some straight
lines in the array, while the parity bits of the proposed EVENODD codes and RDP codes are
³ When there are only information erasures, reduction modulo Mp(x) is not needed in the decoding procedure.
computed along some polygonal lines in the array. By this generalization, EVENODD codes
and RDP codes may have more design space for decoding algorithm when there is a failure
column. For example, assume that the first column of EVENODD(5, 3, 3; (0, 1, 4)) in Table I
is erased, we want to recover the erased column by downloading some bits from other four
surviving columns. We can recover bits a0,0 , a2,0 by
a0,0 = a0,1 + a0,2 + a0,3 , a2,0 = a2,1 + a2,2 + a2,3 ,
and bits a1,0 , a3,0 by
a1,0 = a0,1 + a2,2 + a4,4 + a1,4 , a3,0 = a2,1 + a4,4 + a3,4 ,
where a4,4 can be computed as a4,4 = a3,1 + a0,2 . In total, 9 bits are downloaded to recover
the first column. For the original EVENODD codes, an erased information column is recovered by
downloading at least 0.75(p−1) bits from each of the helped k +1 columns [10]. Hence, the total
number of bits to be downloaded to recover the first column of original EVENODD codes is at
least 12. Therefore, one may design a decoding algorithm for an information failure such that
the number of bits downloaded is less than that of the original EVENODD codes. Designing an
algorithm to recover a failure column for general parameters is then an interesting future work.
REFERENCES
[1] D. A. Patterson, P. Chen, G. Gibson, and R. H. Katz, “Introduction to Redundant Arrays of Inexpensive Disks (RAID),”
in Proc. IEEE COMPCON, vol. 89, 1989, pp. 112–117.
[2] P. M. Chen, E. K. Lee, G. A. Gibson, R. H. Katz, and D. A. Patterson, “RAID: high-performance, reliable secondary
storage,” University of California at Berkeley, Berkeley, Tech. Rep. CSD 03-778, 1993.
[3] M. Blaum, J. Brady, J. Bruck, and J. Menon, “EVENODD: An efficient scheme for tolerating double disk failures in RAID
architectures,” IEEE Trans. Computers, vol. 44, no. 2, pp. 192–202, 1995.
[4] P. Corbett, B. English, A. Goel, T. Grcanac, S. Kleiman, J. Leong, and S. Sankar, “Row-diagonal parity for double disk
failure correction,” in Proc. of the 3rd USENIX Conf. on File and Storage Technologies (FAST), 2004, pp. 1–14.
[5] A. H. Leventhal, “Triple-parity RAID and beyond,” Comm. of the ACM, vol. 53, no. 1, pp. 58–63, January 2010.
[6] M. Blaum, J. Bruck, and A. Vardy, “MDS array codes with independent parity symbols,” IEEE Trans. Information Theory,
vol. 42, no. 2, pp. 529–542, 1996.
[7] A. Goel and P. Corbett, “RAID triple parity,” in ACM SIGOPS Operating Systems Review, vol. 36, no. 3, December 2012,
pp. 41–49.
[8] M. Blaum, “A family of MDS array codes with minimal number of encoding operations,” in IEEE Int. Symp. on Inf.
Theory, 2006, pp. 2784–2788.
[9] M. Blaum, J. Brady, J. Bruck, J. Menon, and A. Vardy, “The EVENODD code and its generalization: An efficient scheme for
tolerating multiple disk failures in RAID architectures,” in High Performance Mass Storage and Parallel I/O. Wiley-IEEE
Press, 2002, ch. 8, pp. 187–208.
[10] Z. Wang, A. G. Dimakis, and J. Bruck, “Rebuilding for array codes in distributed storage systems,” in IEEE GLOBECOM
Workshops (GC Wkshps), 2010, pp. 1905–1909.
[11] L. Xiang, Y. Xu, J. Lui, and Q. Chang, “Optimal recovery of single disk failure in RDP code storage systems,” in ACM
SIGMETRICS Performance Evaluation Rev., vol. 38, no. 1.
ACM, 2010, pp. 119–130.
[12] L. Xiang, Y. Xu, J. C. S. Lui, Q. Chang, Y. Pan, and R. Li, “A hybrid approach of failed disk recovery using RAID-6
codes: Algorithms and performance evaluation,” ACM Trans. on Storage, vol. 7, no. 3, pp. 1–34, October 2011.
[13] Y. Zhu, P. P. C. Lee, Y. Xu, Y. Hu, and L. Xiang, “On the speedup of recovery in large-scale erasure-coded storage
systems,” IEEE Transactions on Parallel & Distributed Systems, vol. 25, no. 7, pp. 1830–1840, 2014.
[14] C. Huang and L. Xu, “STAR: An efficient coding scheme for correcting triple storage node failures,” IEEE Trans.
Computers, vol. 57, no. 7, pp. 889–901, 2008.
[15] H. Jiang, M. Fan, Y. Xiao, X. Wang, and Y. Wu, “Improved decoding algorithm for the generalized EVENODD array
code,” in International Conference on Computer Science and Network Technology, 2013, pp. 2216–2219.
[16] Y. Wang, G. Li, and X. Zhong, “Triple-Star: A coding scheme with optimal encoding complexity for tolerating triple disk
failures in RAID,” International Journal of innovative Computing, Information and Control, vol. 3, pp. 1731–1472, 2012.
[17] Z. Huang, H. Jiang, and K. Zhou, “An improved decoding algorithm for generalized RDP codes,” IEEE Communications
Letters, vol. 20, no. 4, pp. 632–635, 2016.
[18] H. Hou, K. W. Shum, and H. Li, “On the MDS condition of Blaum-Bruck-Vardy codes with large number parity columns,”
IEEE Communications Letters, vol. 20, no. 4, pp. 644–647, 2016.
[19] M. Blaum and R. M. Roth, “New array codes for multiple phased burst correction,” IEEE Trans. Information Theory,
vol. 39, no. 1, pp. 66–77, January 1993.
[20] Q. Guo and H. Kan, “On systematic encoding for Blaum-Roth codes,” Development, vol. 42, no. 4, pp. 2353–2357, 2011.
[21] ——, “An efficient interpolation-based systematic encoder for low-rate Blaum-Roth codes,” in IEEE Int. Symp. on Inf.
Theory, 2013, pp. 2384–2388.
[22] K. W. Shum, H. Hou, M. Chen, H. Xu, and H. Li, “Regenerating codes over a binary cyclic code,” in Proc. IEEE Int.
Symp. Inf. Theory, Honolulu, July 2014, pp. 1046–1050.
[23] H. Hou, K. W. Shum, M. Chen, and H. Li, “BASIC codes: Low-complexity regenerating codes for distributed storage
systems,” IEEE Transactions on Information Theory, vol. 62, no. 6, pp. 3053–3069, 2016.
[24] S. T. J. Fenn, M. G. Parker, M. Benaissa, and D. Taylor, “Bit-serial multiplication in GF(2m ) using irreducible all-one
polynomials,” IEEE Proceedings on Computers and Digital Techniques, vol. 144, no. 6, pp. 391–393, 1997.
[25] J. H. Silverman, “Fast multiplication in finite fields GF(2n ),” in International Workshop on Cryptographic Hardware and
Embedded Systems, 1999, pp. 122–134.
[26] H. Hou and Y. S. Han, “A new construction and an efficient decoding method for Rabin-Like codes,” IEEE Transactions
on Communications, pp. 1–1, 2017.
[27] H. Hou, K. W. Shum, M. Chen, and H. Li, “New MDS array code correcting multiple disk failures,” in Global
Communications Conference, 2014, pp. 2369–2374.
[28] S.-L. Yang, “On the LU factorization of the Vandermonde matrix,” Discrete applied mathematics, vol. 146, no. 1, pp.
102–105, 2005.
[29] P. Subedi and X. He, “A comprehensive analysis of XOR-based erasure codes tolerating 3 or more concurrent failures,”
in IEEE 27th International Parallel and Distributed Processing Symposium Workshops & PhD Forum (IPDPSW), 2013,
pp. 1528–1537.
arXiv:1404.1901v4 [] 23 Oct 2017
INVERTIBLE IDEALS AND GAUSSIAN SEMIRINGS
SHABAN GHALANDARZADEH, PEYMAN NASEHPOUR, AND RAFIEH RAZAVI
Abstract. In the first section, we introduce the notions of fractional and
invertible ideals of semirings and characterize invertible ideals of a semidomain.
In section two, we define Prüfer semirings and characterize them in terms of
valuation semirings. In this section, we also characterize Prüfer semirings in
terms of some identities over its ideals such as (I + J)(I ∩ J) = IJ for all ideals
I, J of S. In the third section, we give a semiring version for the Gilmer-Tsang
Theorem, which states that for a suitable family of semirings, the concepts of
Prüfer and Gaussian semirings are equivalent. At last, we end this paper by
giving a plenty of examples for proper Gaussian and Prüfer semirings.
0. Introduction
Vandiver introduced the term “semi-ring” and its structure in 1934 [27], though
the early examples of semirings had appeared in the works of Dedekind in 1894,
when he had been working on the algebra of the ideals of commutative rings [4]. Despite the great efforts of some mathematicians on semiring theory in the 1940s, 1950s,
and early 1960s, they were apparently not successful in drawing the attention of the mathematical community to semiring theory as a serious line of mathematical
research. Actually, it was in the late 1960s that semiring theory was considered
a more important topic for research when real applications were found for semirings. Eilenberg and a couple of other mathematicians started developing formal
languages and automata theory systematically [6], which have strong connections
to semirings. Since then, because of the wonderful applications of semirings in engineering, many mathematicians and computer scientists have broadened the theory
of semirings and related structures [10] and [14]. As stated in [12, p. 6], multiplicative ideal theoretic methods in ring theory are certainly one of the major sources
of inspiration and problems for semiring theory. In the present paper, we develop
some ring theoretic methods of multiplicative ideal theory for semirings as follows:
Let, for the moment, R be a commutative ring with a nonzero identity. The
Dedekind-Mertens lemma in ring theory states that if f and g are two elements
of the polynomial ring R[X], then there exists a natural number n such that
c(f )n−1 c(f g) = c(f )n c(g), where by the content c(f ) of an arbitrary polynomial
f ∈ R[X], it is meant the R-ideal generated by the coefficients of f . From this, it
is clear that if R is a Prüfer domain, then R is Gaussian, i.e. c(f g) = c(f )c(g) for
all f, g ∈ R[X].
Gilmer in [9] and Tsang in [26], independently, proved that the inverse of the
above statement is also correct in this sense that if R is a Gaussian domain, then
R is a Prüfer domain.
2010 Mathematics Subject Classification. 16Y60, 13B25, 13F25, 06D75.
Key words and phrases. Semiring, Semiring polynomials, Gaussian semiring, Cancellation
ideals, Invertible ideals.
1
2
SHABAN GHALANDARZADEH, PEYMAN NASEHPOUR, AND RAFIEH RAZAVI
Since Gaussian semirings were introduced in Definition 7 in [21] and the DedekindMertens lemma was proved for subtractive semirings in Theorem 3 in the same paper, our motivation for this work was to see how one could define invertible ideals
for semirings to use them in Dedekind-Mertens lemma and discover another family
of Gaussian semirings. We do emphasize that the definition of Gaussian semiring
used in our paper is different from the one investigated in [5] and [13]. We also
asked ourselves if some kind of a Gilmer-Tsang Theorem held for polynomial semirings. Therefore, we were not surprised to see while investigating these questions,
we needed to borrow some definitions and techniques - for example Prüfer domains
and a couple of other concepts mentioned in [18] and [8] - from multiplicative ideal
theory for rings . In most cases, we also constructed examples of proper semirings
- semirings that are not rings - satisfying the conditions of those definitions and
results to show that what we bring in this paper are really generalizations of their
ring version ones. Since different authors have used the term “semiring” with some
different meanings, it is essential, from the beginning, to clarify what we mean by
a semiring.
In this paper, by a semiring, we understand an algebraic structure, consisting of
a nonempty set S with two operations of addition and multiplication such that the
following conditions are satisfied:
(1) (S, +) is a commutative monoid with identity element 0;
(2) (S, ·) is a commutative monoid with identity element 1 ≠ 0;
(3) Multiplication distributes over addition, i.e. a(b + c) = ab + ac for all
a, b, c ∈ S;
(4) The element 0 is the absorbing element of the multiplication, i.e. s · 0 = 0
for all s ∈ S.
From the above definition, it is clear for the reader that the semirings are fairly
interesting generalizations of the two important and broadly studied algebraic structures, i.e. rings and bounded distributive lattices.
A nonempty subset I of a semiring S is defined to be an ideal of S if a, b ∈ I
and s ∈ S implies that a + b, sa ∈ I [3]. An ideal I of a semiring S is said to be
subtractive, if a + b ∈ I and a ∈ I implies b ∈ I for all a, b ∈ S. A semiring S is
said to be subtractive if every ideal of the semiring S is subtractive. An ideal P of
S is called a prime ideal of S if P ≠ S and ab ∈ P implies that a ∈ P or b ∈ P for
all a, b ∈ S.
In §1, we define fractional and invertible ideals and show that any invertible ideal
of a local semidomain is principal (See Definitions 1.1 and 1.2 and Proposition 1.5).
Note that a semiring S is called a semidomain if for any nonzero element s of
S, sb = sc implies that b = c. A semiring is said to be local if it has only one
maximal ideal. We also prove that any invertible ideal of a weak Gaussian semilocal semidomain is principal (See Theorem 1.6). Note that a semiring is defined
to be a weak Gaussian semiring if each prime ideal of the semiring is subtractive
[21, Definition 18] and a semiring is said to be semi-local if the set of its maximal
ideals is finite. Also, note that localization of semirings has been introduced and
investigated in [15]. It is good to mention that an equivalent definition for the
localization of semirings has been given in [12, §11].
At last, in Theorem 1.8, we show that if I is a nonzero finitely generated ideal
of a semidomain S, then I is invertible if and only if Im is a principal ideal of Sm
for each maximal ideal m of S.
In §2, we observe that if S is a semiring, then every nonzero finitely generated
ideal of S is an invertible ideal of S if and only if every nonzero principal and every
nonzero 2-generated ideal of S is an invertible ideal of S (Check Theorem 2.1). This
result, and a nice example of a proper semiring having this property, motivate us to define Prüfer semirings, i.e. semirings each of whose nonzero finitely generated ideals is invertible (see Definition 2.3). After that, in Theorem 2.9, we prove that
a semidomain S is a Prüfer semiring if and only if one of the following equivalent
statements holds:
(1) I(J ∩ K) = IJ ∩ IK for all ideals I, J, and K of S,
(2) (I + J)(I ∩ J) = IJ for all ideals I and J of S,
(3) [(I + J) : K] = [I : K] + [J : K] for all ideals I, J, and K of S with K
finitely generated,
(4) [I : J] + [J : I] = S for all finitely generated ideals I and J of S,
(5) [K : I ∩ J] = [K : I] + [K : J] for all ideals I, J, and K of S with I and J
finitely generated.
Note that, in the above, it is defined that [I : J] = {s ∈ S : sJ ⊆ I}. Also, note
that this theorem is the semiring version of Theorem 6.6 in [18], though we give
partly an alternative proof for the semiring generalization of its ring version.
In §2, we also characterize Prüfer semirings in terms of valuation semirings. Let
us recall that a semidomain is valuation if its ideals are totally ordered by inclusion
[22, Theorem 2.4]. In fact, in Theorem 2.11, we prove that a semiring S is Prüfer
if and only if one of the following statements holds:
(1) For any prime ideal p of S, Sp is a valuation semidomain.
(2) For any maximal ideal m of S, Sm is a valuation semidomain.
A nonzero ideal I of a semiring S is called a cancellation ideal, if IJ = IK
implies J = K for all ideals J and K of S [17]. Let f ∈ S[X] be a polynomial over
the semiring S. The content of f , denoted by c(f ), is defined to be the S-ideal
generated by the coefficients of f . It is, then, easy to see that c(f g) ⊆ c(f )c(g)
for all f, g ∈ S[X]. Finally, a semiring S is defined to be a Gaussian semiring if
c(f g) = c(f )c(g) for all f, g ∈ S[X] [21, Definition 8].
In §3, we discuss Gaussian semirings and prove a semiring version of the Gilmer-Tsang Theorem with the following statement (see Theorem 3.5):
Let S be a subtractive semiring such that every nonzero principal ideal of S
is invertible and ab ∈ (a2 , b2 ) for all a, b ∈ S. Then the following statements are
equivalent:
(1) S is a Prüfer semiring,
(2) Each nonzero finitely generated ideal of S is cancellation,
(3) [IJ : I] = J for all ideals I and J of S,
(4) S is a Gaussian semiring.
At last, we end this paper by giving plenty of examples of proper Gaussian and Prüfer semirings in Theorem 3.7 and Corollary 3.8. In fact, we prove that if S is a Prüfer semiring (for example, a Prüfer domain), then FId(S) is a Prüfer semiring, where by FId(S) we mean the semiring of finitely generated ideals of S.
In this paper, all semirings are assumed to be commutative with a nonzero
identity. Unless otherwise stated, our terminology and notation will follow as closely
as possible that of [8].
1. Fractional and invertible ideals of semirings
In this section, we introduce fractional and invertible ideals for semirings and
prove a couple of interesting results for them. Note that whenever we feel it is
necessary, we recall concepts related to semiring theory to make the paper as self-contained as possible.
Let us recall that a nonempty subset I of a semiring S is defined to be an ideal
of S if a, b ∈ I and s ∈ S implies that a + b, sa ∈ I [3]. Also, T ⊆ S is said to
be a multiplicatively closed set of S provided that if a, b ∈ T , then ab ∈ T . The
localization of S at T is defined in the following way:
First, define the equivalence relation ∼ on S × T by (a, b) ∼ (c, d) if tad = tbc for some t ∈ T. Then let ST denote the set of all equivalence classes of S × T and define addition and multiplication on ST respectively by [a, b] + [c, d] = [ad + bc, bd] and [a, b] · [c, d] = [ac, bd], where by [a, b], also denoted by a/b, we mean the equivalence class of (a, b). It is then easy to see that ST with these operations of addition and multiplication is a semiring [15].
Also, note that an element s of a semiring S is said to be multiplicatively cancellable (abbreviated as MC), if sb = sc implies b = c for all b, c ∈ S. For
more on MC elements of a semiring, refer to [7]. We denote the set of all MC
elements of S by MC(S). It is clear that MC(S) is a multiplicatively closed set of
S. Similar to ring theory, total quotient semiring Q(S) of the semiring S is defined
as the localization of S at MC(S). Note that Q(S) is also an S-semimodule. For
a definition and a general discussion of semimodules, refer to [12, §14]. Now, we
define fractional ideals of a semiring as follows:
Definition 1.1. Fractional ideal. We define a fractional ideal of a semiring S to
be a subset I of the total quotient semiring Q(S) of S such that:
(1) I is an S-subsemimodule of Q(S), that is, if a, b ∈ I and s ∈ S, then a+b ∈ I
and sa ∈ I.
(2) There exists an MC element d ∈ S such that dI ⊆ S.
Let us denote the set of all nonzero fractional ideals of S by Frac(S). It is easy to
check that Frac(S) equipped with the following multiplication of fractional ideals
is a commutative monoid:
I · J = {a1 b1 + · · · + an bn : ai ∈ I, bi ∈ J}.
Definition 1.2. Invertible ideal. We define a fractional ideal I of a semiring S
to be invertible if there exists a fractional ideal J of S such that IJ = S.
Note that if a fractional ideal I of a semiring S is invertible and IJ = S, for
some fractional ideal J of S, then J is unique and we denote that by I −1 . It is
clear that the set of invertible ideals of a semiring equipped with the multiplication
of fractional ideals is an Abelian group.
Theorem 1.3. Let S be a semiring with its total quotient semiring Q(S).
(1) If I ∈ Frac(S) is invertible, then I is a finitely generated S-subsemimodule
of Q(S).
(2) If I, J ∈ Frac(S) and I ⊆ J and J is invertible, then there is an ideal K of S such that I = JK.
(3) If I ∈ Frac(S), then I is invertible if and only if there is a fractional ideal J of S such that IJ is principal and generated by an MC element of Q(S).
Proof. The proof mimics that of the ring version in [18, Proposition 6.3].
Let us recall that a semiring S is defined to be a semidomain, if each nonzero
element of the semiring S is an MC element of S.
Proposition 1.4. Let S be a semiring and a ∈ S. Then the following statements
hold:
(1) The principal ideal (a) is invertible if and only if a is an MC element of S.
(2) The semiring S is a semidomain if and only if each nonzero principal ideal
of S is an invertible ideal of S.
Proof. Straightforward.
Prime and maximal ideals of a semiring are defined similar to rings ([12, §7]).
Note that the set of the unit elements of a semiring S is denoted by U (S). Also
note that when S is a semidomain, MC(S) = S − {0} and the localization of S at
MC(S) is called the semifield of fractions of the semidomain S and usually denoted
by F(S) [11, p. 22].
Proposition 1.5. Any invertible ideal of a local semidomain is principal.
Proof. Let I be an invertible ideal of a local semidomain (S, m). Since II−1 = S, there are s1, . . . , sn ∈ I and t1, . . . , tn ∈ I−1 such that s1 t1 + · · · + sn tn = 1. This implies that at least one of the elements si ti is a unit, since if all of them were non-units, their sum would lie in m and could not be equal to 1. Assume that s1 t1 ∈ U(S). Now we have S = (s1)(t1) ⊆ I(t1) ⊆ II−1 = S, which obviously implies that I = (s1), and the proof is complete.
Let us recall that an ideal I of a semiring S is said to be subtractive, if a + b ∈ I
and a ∈ I implies b ∈ I for all a, b ∈ S. Now we prove a similar statement for
weak Gaussian semirings introduced in [21]. Note that any prime ideal of a weak
Gaussian semiring is subtractive ([21, Theorem 19]). Using this property, we prove
the following theorem:
Theorem 1.6. Any invertible ideal of a weak Gaussian semi-local semidomain is
principal.
Proof. Let S be a weak Gaussian semi-local semidomain and Max(S) = {m1 , . . . , mn }
and II −1 = S. Similar to the proof of Proposition 1.5, for each 1 ≤ i ≤ n, there
exist ai ∈ I and bi ∈ I−1 such that ai bi ∉ mi. Since by [12, Corollary 7.13] any
maximal ideal of a semiring is prime, one can easily check that any mi cannot contain the intersection of the remaining maximal ideals of S. So for any 1 ≤ i ≤ n,
one can find some ui , where ui is not in mi , while it is in all the other maximal
ideals of S. Put v = u1 b1 + · · · + un bn . It is obvious that v ∈ I −1 , which causes vI
to be an ideal of S. Our claim is that vI is not a subset of any maximal ideal of
S. On the contrary, assume that vI is a subset of a maximal ideal, say m1. This implies
that va1 ∈ m1 . But
va1 = (u1 b1 + · · · + un bn )a1 .
Also note that ui bi ai ∈ m1 for any i ≥ 2. Since m1 is subtractive, u1 b1 a1 ∈ m1 ,
a contradiction. It follows that vI = S and finally I = (v−1), as
required.
The proof of the following lemma is straightforward, but we state the lemma for the sake of reference.
Lemma 1.7. Let I be an invertible ideal in a semidomain S and T a multiplicatively
closed set. Then IT is an invertible ideal of ST .
Proof. Straightforward.
Let us recall that if m is a maximal ideal of S, then S − m is a multiplicatively
closed set of S and the localization of S at S − m is simply denoted by Sm [15].
Now, we prove the following theorem:
Theorem 1.8. Let I be a nonzero finitely generated ideal of a semidomain S. Then
I is invertible if and only if Im is a principal ideal of Sm for each maximal ideal m
of S.
Proof. Let S be a semidomain and I a nonzero finitely generated ideal of S.
(→) : If I is invertible, then by Lemma 1.7, Im is invertible and therefore, by
Proposition 1.5, is principal.
(←) : Assume that Im is a principal ideal of Sm for each maximal ideal m of S.
For the ideal I, define J := {x ∈ F(S) : xI ⊆ S}. It is easy to check that J is a
fractional ideal of S and IJ ⊆ S is an ideal of S. Our claim is that IJ = S. On
the contrary, suppose that IJ ≠ S. Then IJ is contained in a maximal ideal m of S. By
hypothesis Im is principal. We can choose a generator for Im to be an element z ∈ I.
Now let a1 , . . . , an be generators of I in S. It is, then, clear that for any ai , one
can find an si ∈ S − m such that ai si ∈ (z). Set s = s1 · · · sn . Since (sz −1 )ai ∈ S,
by definition of J, we have sz−1 ∈ J. But now s = (sz−1)z ∈ IJ ⊆ m, which contradicts the fact that s = s1 · · · sn ∉ m, since m is prime and each si ∈ S − m. The proof is complete.
Now the question arises whether there is any proper semiring each of whose nonzero finitely generated ideals is invertible. The answer is affirmative, and the next section is devoted to such semirings.
2. Prüfer semirings
The purpose of this section is to introduce the concept of Prüfer semirings and
investigate some of their properties. We start by proving the following important
theorem, which in its ring version can be found in [18, Theorem 6.6].
Theorem 2.1. Let S be a semiring. Then the following statements are equivalent:
(1) Each nonzero finitely generated ideal of S is an invertible ideal of S,
(2) The semiring S is a semidomain and every nonzero 2-generated ideal of S
is an invertible ideal of S.
Proof. Obviously the first assertion implies the second one. We prove that the
second assertion implies the first one. The proof is by induction. Let n > 2
be a natural number and suppose that all nonzero ideals of S generated by less
than n generators are invertible ideals and L = (a1 , a2 , . . . , an−1 , an ) be an ideal
of S. If we put I = (a1), J = (a2, . . . , an−1) and K = (an), then by the induction hypothesis the ideals I + J, J + K and K + I are all invertible. On the
other hand, a simple calculation shows that the identity (I + J)(J + K)(K + I) =
(I + J + K)(IJ + JK + KI) holds. Also, since a product of fractional ideals of S is invertible if and only if every factor of the product is invertible, the ideal I + J + K = L is invertible and the proof is complete.
A ring R is said to be a Prüfer domain if every nonzero finitely generated ideal
of R is invertible. It is now natural to ask whether there is a proper semiring S with the property that every nonzero finitely generated ideal of S is invertible. In the
following remark, we give such an example.
Remark 2.2. An example of a proper semiring with the property that every nonzero finitely generated ideal is invertible: obviously (Id(Z), +, ·) is a semidomain,
since any element of Id(Z) is of the form (n) such that n is a nonnegative integer
and (a)(b) = (ab), for any a, b ≥ 0. Let I be an arbitrary ideal of Id(Z). Define AI
to be the set of all positive integers n such that (n) ∈ I and put m = min AI . Our
claim is that I is the principal ideal of Id(Z), generated by (m), i.e. I = ((m)).
To see this, let (d) be an element of I. But then (gcd(d, m)) = (d) + (m) and
therefore, (gcd(d, m)) ∈ I. This means that m ≤ gcd(d, m), since m = min AI ,
while gcd(d, m) ≤ m and this implies that gcd(d, m) = m and so m divides d and
therefore, there exists a natural number r such that d = rm. Hence, (d) = (r)(m)
and the proof of our claim is finished. It follows that each ideal of the semiring Id(Z) is principal and, therefore, invertible, while Id(Z) is obviously not a ring.
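To make the structure of Id(Z) more tangible, the following small Python sketch (our illustration, not part of the original argument) models its elements: the ideal (n) is represented by the nonnegative integer n, the semiring addition (ideal sum) is the gcd, and the multiplication is the ordinary product. It also spot-checks, in Id(Z), one of the identities that will appear among the equivalent conditions of Theorem 2.9, namely (I + J)(I ∩ J) = IJ, using the elementary fact that (a) ∩ (b) = (lcm(a, b)) in Z.

```python
# Illustrative model of the semiring Id(Z): the ideal (n) of Z is represented by
# the nonnegative integer n; ideal sum is gcd and ideal product is the ordinary product.
from math import gcd
from functools import reduce

def ideal_sum(*gens):
    """(g1) + ... + (gk) = (gcd(g1, ..., gk)) in Id(Z)."""
    return reduce(gcd, gens)

def ideal_product(a, b):
    """(a)(b) = (ab) in Id(Z)."""
    return a * b

def ideal_intersection(a, b):
    """(a) ∩ (b) = (lcm(a, b)) in Z."""
    return a * b // gcd(a, b)

# The ideal of Z generated by several integers is principal, generated by their gcd;
# this is the same gcd argument used in Remark 2.2.
assert ideal_sum(12, 18, 30) == 6

# Spot-check of the Pruefer-type identity (I + J)(I ∩ J) = IJ in Id(Z).
for a in range(1, 40):
    for b in range(1, 40):
        assert ideal_product(ideal_sum(a, b), ideal_intersection(a, b)) == ideal_product(a, b)
print("all checks passed")
```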
By Theorem 2.1 and the example given in Remark 2.2, we are inspired to give
the following definition:
Definition 2.3. We define a semiring S to be a Prüfer semiring if every nonzero
finitely generated ideal of S is invertible.
First we prove the following interesting results:
Lemma 2.4. Let S be a Prüfer semiring. Then I ∩ (J + K) = I ∩ J + I ∩ K for
all ideals I, J, and K of S.
Proof. Let s ∈ I ∩(J +K). So there are s1 ∈ J and s2 ∈ K such that s = s1 +s2 ∈ I.
If we put L = (s1 , s2 ), by definition, we have LL−1 = S. Consequently, there are
t1 , t2 ∈ L−1 such that s1 t1 + s2 t2 = 1. So s = ss1 t1 + ss2 t2 . But st1 , st2 ∈ S, since
s = s1 + s2 ∈ L. Therefore, ss1 t1 ∈ J and ss2 t2 ∈ K. Moreover s1 t1 , s2 t2 ∈ S
and therefore, ss1 t1 , ss2 t2 ∈ I. This implies that ss1 t1 ∈ I ∩ J, ss2 t2 ∈ I ∩ K, and
s ∈ I ∩ J + I ∩ K, which means that I ∩ (J + K) ⊆ I ∩ J + I ∩ K. Since the reverse
inclusion is always true, I ∩ (J + K) = I ∩ J + I ∩ K and this finishes the proof.
Lemma 2.5. Let S be a Prüfer semiring. Then the following statements hold:
(1) If I and K are ideals of S, with K finitely generated, and if I ⊆ K, then
there is an ideal J of S such that I = JK.
(2) If IJ = IK, where I, J and K are ideals of S and I is finitely generated
and nonzero, then J = K.
Proof. By considering Theorem 1.3, the assertion (1) holds. The assertion (2) is
straightforward.
Note that the second property in Lemma 2.5 is the concept of cancellation ideal
for semirings, introduced in [17]:
Definition 2.6. A nonzero ideal I of a semiring S is called a cancellation ideal, if
IJ = IK implies J = K for all ideals J and K of S.
Remark 2.7. It is clear that each invertible ideal of a semiring is cancellation.
Also, each finitely generated nonzero ideal of a Prüfer semiring is cancellation. For
a general discussion on cancellation ideals in rings, refer to [8] and for generalizations
of this concept in module and ring theory, refer to [19] and [20].
While the topic of cancellation ideals is interesting in itself, we do not pursue it deeply here. In fact, in this section we only prove the following result for cancellation ideals of semirings, since we need it in the proof of Theorem 3.5. Note that, similar to ring theory, for any ideals I and J of a semiring S, it is defined that
[I : J] = {s ∈ S : sJ ⊆ I}.
Also, we point out that this result is the semiring version of an assertion mentioned
in [8, Exercise 4, p. 66]:
Proposition 2.8. Let S be a semiring and I be a nonzero ideal of S. Then the
following statements are equivalent:
(1) I is a cancellation ideal of S,
(2) [IJ : I] = J for any ideal J of S,
(3) IJ ⊆ IK implies J ⊆ K for all ideals J, K of S.
Proof. Considering that the equality [IJ : I]I = IJ holds for all ideals I, J of S, it is easy to see that (1) implies (2). The rest of the proof is straightforward.
Now we prove an important theorem that is essentially the semiring version of Theorem 6.6 in [18]. While some parts of our proof are similar to those of Theorem 6.6 in [18] and Proposition 4 in [25], other parts of the proof are apparently original.
Theorem 2.9. Let S be a semidomain. Then the following statements are equivalent:
(1) The semiring S is a Prüfer semiring,
(2) I(J ∩ K) = IJ ∩ IK for all ideals I, J, and K of S,
(3) (I + J)(I ∩ J) = IJ for all ideals I and J of S,
(4) [(I + J) : K] = [I : K] + [J : K] for all ideals I, J, and K of S with K
finitely generated,
(5) [I : J] + [J : I] = S for all finitely generated ideals I and J of S,
(6) [K : I ∩ J] = [K : I] + [K : J] for all ideals I, J, and K of S with I and J
finitely generated.
Proof. (1) → (2): It is clear that I(J ∩ K) ⊆ IJ ∩ IK. Let s ∈ IJ ∩ IK. So we can write s = t1 z1 + · · · + tm zm = t′1 z′1 + · · · + t′n z′n, where ti, t′j ∈ I, zi ∈ J, and z′j ∈ K for all 1 ≤ i ≤ m and 1 ≤ j ≤ n. Put I1 = (t1, . . . , tm), I2 = (t′1, . . . , t′n), J′ = (z1, . . . , zm), K′ = (z′1, . . . , z′n), and I3 = I1 + I2. Then I1 J′ ∩ I2 K′ ⊆ I3 J′ ∩ I3 K′ ⊆ I3. Since I3 is a finitely generated ideal of S, by Lemma 2.5, there exists an ideal L of S such that I3 J′ ∩ I3 K′ = I3 L. Note that L = I3−1(I3 J′ ∩ I3 K′) ⊆ I3−1(I3 J′) = J′. Moreover L = I3−1(I3 J′ ∩ I3 K′) ⊆ I3−1(I3 K′) = K′. Therefore, L ⊆ J′ ∩ K′. Thus s ∈ I3 J′ ∩ I3 K′ = I3 L ⊆ I3(J′ ∩ K′) ⊆ I(J ∩ K).
(2) → (3): Let I, J ⊆ S. Then (I + J)(I ∩ J) = (I + J)I ∩ (I + J)J ⊇ IJ. Since
the reverse inclusion always holds, (I + J)(I ∩ J) = IJ.
(3) → (1): By hypothesis, every two generated ideal I = (s1 , s2 ) is a factor of
the invertible ideal (s1 s2 ) and therefore, it is itself invertible. Now by considering
Theorem 2.1, it is clear that the semiring S is a Prüfer semiring.
(1) → (4): Let s ∈ S such that sK ⊆ I + J. So sK ⊆ (I + J) ∩ K. By Lemma 2.4, sK ⊆ I ∩ K + J ∩ K. Therefore, s ∈ (I ∩ K)K−1 + (J ∩ K)K−1. Thus s = t1 z1 + · · · + tm zm + t′1 z′1 + · · · + t′n z′n, where zi, z′j ∈ K−1, ti ∈ I ∩ K, and t′j ∈ J ∩ K for all 1 ≤ i ≤ m and 1 ≤ j ≤ n. Let x ∈ K and 1 ≤ i ≤ m. Then zi x, zi ti ∈ S and so ti zi x ∈ I ∩ K. Therefore, (t1 z1 + · · · + tm zm)K ⊆ I ∩ K ⊆ I. In a similar way, (t′1 z′1 + · · · + t′n z′n)K ⊆ J ∩ K ⊆ J. Thus s ∈ [I : K] + [J : K]. Therefore, [(I + J) : K] ⊆ [I : K] + [J : K]. Since the reverse inclusion is always true, [(I + J) : K] = [I : K] + [J : K].
(4) → (5): Let I and J be finitely generated ideals of S. Then,
S = [I + J : I + J] = [I : I + J] + [J : I + J] ⊆ [I : J] + [J : I] ⊆ S.
(5) → (6): ([25, Proposition 4]) It is clear that [K : I] + [K : J] ⊆ [K : I ∩ J].
Let s ∈ S such that s(I ∩ J) ⊆ K. By hypothesis, S = [I : J] + [J : I]. So
there exist t1 ∈ [I : J] and t2 ∈ [J : I] such that 1 = t1 + t2 . This implies
that s = st1 + st2 . Let x ∈ I. Then t2 x ∈ J. Therefore, t2 x ∈ I ∩ J. Since
s(I ∩ J) ⊆ K, st2 x ∈ K. Thus st2 ∈ [K : I]. Now let y ∈ J. Then t1 y ∈ I.
Therefore, t1 y ∈ I ∩ J and so st1 y ∈ K. Thus st1 ∈ [K : J]. So finally we have s ∈ [K : I] + [K : J].
Therefore, [K : I ∩ J] ⊆ [K : I] + [K : J].
(6) → (1): The proof mimics that of [18, Theorem 6.6] and is therefore omitted.
We end this section by characterizing Prüfer semirings in terms of valuation
semidomains. Note that valuation semirings have been introduced and investigated
in [22]. Let us recall that a semiring is called a Bézout semiring if each of its finitely generated ideals is principal.
Proposition 2.10. A local semidomain is a valuation semidomain if and only if
it is a Bézout semidomain.
Proof. (→) : Straightforward.
(←) : Let S be a local semidomain. Take x, y ∈ S such that both of them are
nonzero. Assume that (x, y) = (d) for some nonzero d ∈ S. Define x′ = x/d and
y ′ = y/d. It is clear that there are a, b ∈ S such that ax′ + by ′ = 1. Since S is
local, one of ax′ and by ′ must be unit, say ax′ . So x′ is also unit and therefore,
(y′) ⊆ S = (x′). Now multiplying both sides of the inclusion by d gives (y) ⊆ (x), and by Theorem 2.4 in [22], the proof is complete.
Now we get the following nice result:
Theorem 2.11. For a semidomain S, the following statements are equivalent:
(1) S is Prüfer.
(2) For any prime ideal p, Sp is a valuation semidomain.
(3) For any maximal ideal m, Sm is a valuation semidomain.
Proof. (1) → (2) :
Let J be a finitely generated nonzero ideal in Sp , generated by s1 /u1 , . . . , sn /un ,
where si ∈ S and ui ∈ S − p. It is clear that J = Ip , where I = (s1 , . . . , sn ). By
hypothesis, I is invertible. So by Theorem 1.8, J is principal. This means that Sp
is a Bézout semidomain and since it is local, by Proposition 2.10, Sp is a valuation
semidomain.
(2) → (3) : Trivial.
(3) → (1) : Let I be a nonzero finitely generated ideal of S. Then for any
maximal ideal m of S, Im is a nonzero principal ideal of Sm and by Theorem 1.8,
I is invertible. So we have proved that the semiring S is Prüfer and the proof is
complete.
Now we pass to the next section that is on Gaussian semirings.
3. Gaussian Semirings
In this section, we discuss Gaussian semirings. For doing so, we need to recall
the concept of the content of a polynomial in semirings. Let us recall that for a
polynomial f ∈ S[X], the content of f , denoted by c(f ), is defined to be the finitely
generated ideal of S generated by the coefficients of f . In [21, Theorem 3], the
semiring version of the Dedekind-Mertens lemma (Cf. [24, p. 24] and [1]) has been
proved. We state it below for the convenience of the reader:
Theorem 3.1 (Dedekind-Mertens Lemma for Semirings). Let S be a semiring.
Then the following statements are equivalent:
(1) The semiring S is subtractive, i.e. each ideal of S is subtractive,
(2) If f, g ∈ S[X] and deg(g) = m, then c(f )m+1 c(g) = c(f )m c(f g).
Now, we recall the definition of Gaussian semirings:
Definition 3.2. A semiring S is said to be Gaussian if c(f g) = c(f )c(g) for all
polynomials f, g ∈ S[X] [21, Definition 7].
Note that this is the semiring version of the concept of Gaussian rings defined
in [26]. For more on Gaussian rings, one may refer to [2] also.
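As a concrete (and purely illustrative) instance of this definition, recall that the ring Z is a principal ideal domain, hence a Prüfer and therefore a Gaussian domain: the content of an integer polynomial is generated by the gcd of its coefficients, and c(f g) = c(f )c(g). The short Python sketch below, which is ours and not part of the paper, spot-checks this multiplicativity on random integer polynomials; in general only the containment c(f g) ⊆ c(f )c(g) is guaranteed.

```python
# Our illustrative sketch: over Z, the content of a polynomial is generated by the gcd
# of its coefficients, so the Gaussian property c(fg) = c(f)c(g) amounts to
# gcd(coeffs(f*g)) == gcd(coeffs(f)) * gcd(coeffs(g)).
import random
from math import gcd
from functools import reduce

def poly_mul(f, g):
    """Multiply polynomials given as coefficient lists (lowest degree first)."""
    out = [0] * (len(f) + len(g) - 1)
    for i, a in enumerate(f):
        for j, b in enumerate(g):
            out[i + j] += a * b
    return out

def content(f):
    """Generator of the content ideal c(f) in Z (gcd of the coefficients)."""
    return reduce(gcd, (abs(c) for c in f))

random.seed(0)
for _ in range(1000):
    f = [random.randint(-20, 20) for _ in range(random.randint(1, 6))]
    g = [random.randint(-20, 20) for _ in range(random.randint(1, 6))]
    if content(f) == 0 or content(g) == 0:
        continue  # skip the zero polynomial
    assert content(poly_mul(f, g)) == content(f) * content(g)
print("Gaussian property verified on random samples over Z")
```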
Remark 3.3. There is a point for the notion of Gaussian semirings that we need
to clarify here. An Abelian semigroup G with identity, satisfying the cancellation
law, is called a Gaussian semigroup if each of its elements g, which is not a unit,
can be factorized into the product of irreducible elements, where any two such
factorizations of the element g are associated with each other [16, §8 p. 71]. In the
papers [5] and [13] on Euclidean semirings, a semiring S is called Gaussian if its semigroup of nonzero elements is Gaussian, which is a different notion from ours.
Finally, we emphasize that by Theorem 3.1, each ideal of a Gaussian semiring
needs to be subtractive. Such semirings are called subtractive. Note that the
boolean semiring B = {0, 1} is a subtractive semiring, but the semiring N0 is not,
since its ideal N0 − {1} is not subtractive. As a matter of fact, all subtractive ideals
of the semiring N0 are of the form kN0 for some k ∈ N0 [23, Proposition 6].
With this background, it is now easy to see that if every nonzero finitely generated ideal of a subtractive semiring S is invertible, then S is Gaussian. Also note that an important theorem in commutative ring theory, known as the Gilmer-Tsang Theorem (cf. [9] and [26]), states that a domain D is a Prüfer domain if and only if D is a Gaussian domain. The question arises whether a semiring version of the Gilmer-Tsang Theorem can be proved. This is what we are going to do in the rest of the paper.
First we prove the following interesting theorem:
Theorem 3.4. Let S be a semiring. Then the following statements are equivalent:
(1) S is a Gaussian semidomain and ab ∈ (a2 , b2 ) for all a, b ∈ S,
(2) S is a subtractive and Prüfer semiring.
Proof. (1) → (2): Since ab ∈ (a2, b2), there exist r, s ∈ S such that ab = ra2 + sb2.
Now define f, g ∈ S[X] by f = a + bX and g = sb + raX. It is easy to check that
f g = sab+abX +rabX 2 . Since S is Gaussian, S is subtractive by Theorem 3.1, and
we have c(f g) = c(f )c(g), i.e. (ab) = (a, b)(sb, ra). But (ab) = (a)(b) is invertible
and therefore, (a, b) is also invertible and by Theorem 2.1, S is a Prüfer semiring.
(2) → (1): Since S is a subtractive and Prüfer semiring, by Theorem 3.1, S is a
Gaussian semiring. On the other hand, one can verify that (ab)(a, b) ⊆ (a2 , b2 )(a, b)
for any a, b ∈ S. If a = b = 0, then there is nothing to be proved. Otherwise, since
(a, b) is an invertible ideal of S, we have ab ∈ (a2 , b2 ) and this completes the
proof.
Theorem 3.5 (Gilmer-Tsang Theorem for Semirings). Let S be a subtractive
semidomain such that ab ∈ (a2 , b2 ) for all a, b ∈ S. Then the following statements
are equivalent:
(1) S is a Prüfer semiring,
(2) Each nonzero finitely generated ideal of S is cancellation,
(3) [IJ : I] = J for all finitely generated ideals I and J of S,
(4) S is a Gaussian semiring.
Proof. Obviously (1) → (2) and (2) → (3) hold by Lemma 2.5 and Proposition 2.8, respectively.
(3) → (4): Let f, g ∈ S[X]. By Theorem 3.1, we have c(f )c(g)c(f )m =
c(f g)c(f )m . So [c(f )c(g)c(f )m : c(f )m ] = [c(f g)c(f )m : c(f )m ]. This means that
c(f )c(g) = c(f g) and S is Gaussian.
Finally, the implication (4) → (1) holds by Theorem 3.4 and this finishes the
proof.
Remark 3.6. In [21, Theorem 9], it has been proved that every bounded distributive lattice is a Gaussian semiring. Also, note that if L is a bounded distributive lattice with more than two elements, then it is neither a ring nor a semidomain: if it were a ring, the idempotency of addition would force L = {0}, and if it were a semidomain, the idempotency of multiplication would force L = B = {0, 1}. With the help of the following theorem, we give plenty of examples of proper Gaussian and Prüfer semirings. Let us recall that if S is a semiring, then by FId(S) we mean the semiring of finitely generated ideals of S.
Theorem 3.7. Let S be a Prüfer semiring. Then the following statements hold for
the semiring FId(S):
(1) FId(S) is a Gaussian semiring.
(2) FId(S) is a subtractive semiring.
(3) FId(S) is an additively idempotent semidomain and for all finitely generated
ideals I and J of S, we have IJ ∈ (I 2 , J 2 ).
(4) FId(S) is a Prüfer semiring.
Proof. (1): Let I, J ∈ FId(S). Since S is a Prüfer semiring and I ⊆ I + J, by
Theorem 1.3, there exists an ideal K of S such that I = K(I + J). On the other
hand, since I is invertible, K is also invertible. This means that K is finitely
generated and therefore, K ∈ FId(S) and I ∈ (I + J). Similarly, it can be proved
that J ∈ (I + J). So, we have (I, J) = (I + J) and by [21, Theorem 8], FId(S) is a
Gaussian semiring.
(2): By Theorem 3.1, every Gaussian semiring is subtractive. But by (1), FId(S)
is a Gaussian semiring. Therefore, FId(S) is a subtractive semiring.
(3): Obviously FId(S) is additively-idempotent and since S is a Prüfer semiring,
FId(S) is an additively idempotent semidomain. By [12, Proposition 4.43],
we have (I + J)2 = I 2 + J 2 and so (I + J)2 ∈ (I 2 , J 2 ). But (I + J)2 = I 2 + J 2 + IJ
and by (2), FId(S) is subtractive. So, IJ ∈ (I 2 , J 2 ), for all I, J ∈ FId(S).
(4): Since FId(S) is a Gaussian semidomain such that IJ ∈ (I 2 , J 2 ) for all
I, J ∈ FId(S), by Theorem 3.4, FId(S) is a Prüfer semiring and this is what we
wanted to prove.
Corollary 3.8. If D is a Prüfer domain, then FId(D) is a Gaussian and Prüfer
semiring.
Acknowledgments
The authors are very grateful to the anonymous referee for her/his useful advice, which helped them to improve the paper. The first named author is supported by the Faculty of Mathematics at the K. N. Toosi University of Technology. The second named author is supported by the Department of Engineering Science at the University of Tehran and the Department of Engineering Science at Golpayegan University of Technology, and his special thanks go to both departments for providing all necessary facilities for successfully conducting this research.
References
[1] J. T. Arnold and R. Gilmer, On the content of polynomials, Proc. Amer. Math. Soc. 40
(1970), 556–562.
[2] S. Bazzoni and S. Glaz, Gaussian properties of total rings of quotients, J. Algebra 310
(2007), no. 1, 180–193.
[3] S. Bourne, The Jacobson radical of a semiring, Proc. Nat. Acad. Sci. 37 (1951), 163–170.
[4] R. Dedekind, Über die Theorie der ganzen algebraischen Zahlen, Supplement XI to P. G. Lejeune Dirichlet: Vorlesungen über Zahlentheorie, 4. Aufl., Druck und Verlag, Braunschweig, 1894.
[5] L. Dale and J. D. Pitts, Euclidean and Gaussian semirings, Kyungpook Math. J. 18 (1978),
17–22.
[6] S. Eilenberg, Automata, Languages, and Machines, Vol. A., Academic Press, New York,
1974.
[7] R. El Bashir, J. Hurt, A. Jančařı́k, and T. Kepka, Simple commutative semirings, J. Algebra
236 (2001), 277–306.
[8] R. Gilmer, Multiplicative Ideal Theory, Marcel Dekker, New York, 1972.
[9] R. Gilmer, Some applications of the Hilfsatz von Dedekind-Mertens, Math. Scand., 20
(1967), 240–244.
[10] K. Glazek, A guide to the literature on semirings and their applications in mathematics and
information sciences, Kluwer Academic Publishers, Dordrecht, 2002.
[11] J. S. Golan, Power Algebras over Semirings: with Applications in Mathematics and Computer Science, Vol. 488, Springer, 1999.
[12] J. S. Golan, Semirings and Their Applications, Kluwer Academic Publishers, Dordrecht,
1999.
[13] U. Hebisch and H. J. Weinert, On Euclidean semirings, Kyungpook Math. J., 27 (1987),
61–88.
[14] U. Hebisch and H. J. Weinert, Semirings - Algebraic Theory and Applications in Computer
Science, World Scientific, Singapore, (1998).
INVERTIBLE IDEALS AND GAUSSIAN SEMIRINGS
13
[15] C. B. Kim, A Note on the Localization in Semirings, Journal of Scientific Institute at Kookmin Univ., 3 (1985), 13–19.
[16] A. G. Kurosh, Lectures in General Algebra, translated by A. Swinfen, Pergamon Press,
Oxford, 1965.
[17] S. LaGrassa, Semirings: Ideals and Polynomials, PhD Thesis, University of Iowa, 1995.
[18] M. D. Larsen and P. J. McCarthy, Multiplicative Theory of Ideals, Academic Press, New
York, 1971.
[19] A. G. Naoum and A. S. Mijbass, Weak cancellation modules, Kyungpook Math. J., 37
(1997), 73–82.
[20] P. Nasehpour and S. Yassemi, M -cancellation ideals, Kyungpook Math. J., 40 (2000), 259–
263.
[21] P. Nasehpour, On the content of polynomials over semirings and its applications, J. Algebra
Appl., 15, No. 5 (2016), 1650088 (32 pages).
[22] P. Nasehpour, Valuation semirings, J. Algebra Appl., Vol. 16, No. 11 (2018) 1850073 (23
pages) arXiv:1509.03354.
[23] M. L. Noronha Galvão, Ideals in the semiring N, Portugal. Math. 37 (1978), 231–235.
[24] H. Prüfer, Untersuchungen über Teilbarkeitseigenschaften in Körpern, J. Reine Angew.
Math. 168 (1932), 1–36.
[25] F. Smith, Some remarks on multiplication modules, Arch. der Math. 50 (1988), 223–235.
[26] H. Tsang, Gauss’ Lemma, dissertation, University of Chicago, 1965.
[27] H.S. Vandiver, Note on a simple type of algebra in which cancellation law of addition does
not hold, Bull. Amer. Math. Soc. Vol. 40 (1934), 914–920.
Shaban Ghalandarzadeh, Faculty of Mathematics, K. N. Toosi University of Technology, Tehran, Iran
E-mail address: [email protected]
Peyman Nasehpour, Department of Engineering Science, Golpayegan University of
Technology, Golpayegan, Iran
Peyman Nasehpour, Department of Engineering Science, Faculty of Engineering,
University of Tehran, Tehran, Iran
E-mail address: [email protected], [email protected]
Rafieh Razavi, Faculty of Mathematics, K. N. Toosi University of Technology,
Tehran, Iran
E-mail address: [email protected]
| 0 |
A Differential Evolution Markov Chain Monte Carlo algorithm for Bayesian Model Updating
M. Sherri a, I. Boulkaibet b, T. Marwala b, M. I. Friswell c
a Department of Mechanical Engineering Science, University of Johannesburg, PO Box 524, Auckland Park 2006, South Africa.
b Institute of Intelligent Systems, University of Johannesburg, PO Box 524, Auckland Park 2006, South Africa.
c College of Engineering, Swansea University, Bay Campus, Swansea SA1 8EN, United Kingdom.
Abstract:
The use of Bayesian tools in system identification and model updating paradigms has increased in the last ten years. Usually, Bayesian techniques can be implemented to incorporate the uncertainties associated with measurements, as well as the prediction made by the finite element model (FEM), into the FEM updating procedure. In this case, the posterior distribution function describes the uncertainty in the FE model prediction and the experimental data. Due to the complexity of the modelled systems, the analytical solution for the posterior distribution function may not exist. This leads to the use of numerical methods, such as Markov Chain Monte Carlo techniques, to obtain approximate solutions for the posterior distribution function. In this paper, a Differential Evolution Markov Chain Monte Carlo (DE-MC) method is used to approximate the posterior function and update FEMs. The main idea of the DE-MC approach is to combine Differential Evolution, which is an effective global optimization algorithm over real parameter spaces, with Markov Chain Monte Carlo (MCMC) techniques to generate samples from the posterior distribution function. In this paper, the DE-MC method is discussed in detail while the performance and the accuracy of this algorithm are investigated by updating two structural examples.
Keywords: Bayesian model updating; Markov Chain Monte Carlo; differential evolution; finite element model; posterior
distribution function.
1. Introduction
During the last thirty years, the application of the finite element method (FEM) [1-3] has increased exponentially, and this numerical technique has become one of the most popular engineering tools in systems modelling and prediction.
In the domain of structural dynamics, the FEM tools are widely applied to model complex systems where this technique
can produce results with high accuracy, especially when the modelled system is simple. However, the results attained by
the FEM can be relatively inaccurate and the mismatches between the FEM results and the results attained from
experimental studies are relatively significant. This is due to the errors associated with the modelling process as well as
the complexity of modelled structure, which may reduce the accuracy of the modelling process. Consequently, the model
obtained by an FEM needs to be updated to reduce the errors between the experimental and modelled outputs. The
procedure of minimizing the differences between the numerical results and the measured data is known as the finite
element model updating (FEMU) [4, 5], where the FEMU methods can be divided into two main classes. In the first class,
which comprises the direct methods, the experimental data are equated directly to the FEM outputs, resulting in a procedure that
constrains the updating to the FE system matrices (mass, stiffness) only. This kind of approach may produce non-realistic
results where the resulting updating parameters may not have physical meaning. In the second class, which is also known
as the iterative (or indirect) approaches, the FEM outputs are not directly equated to the experimental data, but instead, an
objective function is introduced and iteratively minimised to reduce the errors between the analytical and experiential
results. Thus, we vary the system matrices and the model output during the minimisation process, and realistic results are
often expected at the end of the updating process.
Generally, several sources of uncertainty are associated with the modelling process, such as the mathematical
simplifications made during the modelling, where this kind of uncertainty may affect the accuracy of the modelling
process. Moreover, the noise that contaminates the experimental results may also have a significant impact on the updating
process. To deal with such uncertainty problems, another class of methods called the uncertainty quantification methods
accomplishes the updating process. The most common uncertainty quantification method is known as the Bayesian
approach in which the unknown parameters and their uncertainty are identified by defining each unknown parameter with
a probability density function (PDF). Recently, the use of the Bayesian methodology has increased massively in the
domain of system identification and uncertainty quantification. In this approach, the uncertainties associated with the
modelled structure are expressed in terms of probability distributions where the unknown parameters are defined as a
random vector with a multi-variable probability density function, and the resulting function is known as the posterior PDF.
Solving the posterior PDF helps in identifying the unknown parameters and their uncertainties. Unfortunately, the posterior
PDF cannot be solved in an analytical way for sufficiently complex problems which is the case for the FEMU problems
since the search space is usually nonlinear and high dimensional. In this case, sampling techniques are employed to identify
these uncertain parameters. The most recognised sampling methods are these related to Markov chain Monte Carlo
(MCMC) methods.
Generally, the MCMC methods are very useful tools that can efficiently cope with large search spaces and generate
samples from complex distributions. These methods draw samples with an element of randomness while being guided by
the values of the posterior distribution function. Then, the drawn samples are accepted or rejected according to the
Metropolis criterion. Unfortunately, updated models with relatively large complexities may have multiple optimal (or near-optimal) solutions, which simple MCMC algorithms cannot easily identify. In this paper, another version of the
MCMC algorithms, known as the Differential Evolution Markov Chain (DE-MC) [6, 7] algorithm, is used to update FEMs
of structural systems. The DE-MC algorithm combines the abilities of the differential evolution algorithm [8, 9], which
is one of the genetic algorithms for global optimization, with the Metropolis-Hastings algorithm. In this algorithm, multiple
chains are run in parallel, and the exploration and exploitation of the search space by the current chain are achieved by taking the difference of two randomly selected chains, multiplying this difference by a preselected factor, and adding the result to the value of the current chain. The proposed value of the current chain is then accepted or rejected according
to the Metropolis criterion. In this paper, the efficiency, reliability and the limitations of the DE-MC algorithm are
investigated when the Bayesian approach is applied for FEMU. This paper is organized as follows: in the next section, the
Bayesian formulations are introduced. Section 3 describes the DE-MC algorithm while section 4 presents the results when
a simple mass-spring structure is updated. Section 5 presents the updating results of an unsymmetrical H-shaped Structure.
The paper is concluded in section 6.
2. Bayesian formulations
In this paper, the Bayesian approach is adopted to compute the posterior distribution function in order to update the
FEMs. The posterior function can be represented by Bayes rule [10-14]:
𝑃(𝜽|𝒟, ℳ) ∝ 𝑃(𝒟|𝜽, ℳ) 𝑃(𝜽|ℳ)
(2.1)
where ℳ describes the model class for the target system where each model class ℳ is defined by certain updating
parameters 𝜽 ∈ 𝚯 ⊂ ℛ 𝑑 . The experimental data 𝒟 of the structural system, which is represented by the natural frequencies
𝑓𝑖𝑚 and mode shapes 𝝓𝑚
𝑖 , are used to improve the FEM results. 𝑃(𝜽|ℳ) is the prior probability distribution function
(PDF) that represents the initial knowledge of the uncertain parameters given a specific model ℳ, and in the absence of
the measured data 𝒟. The function 𝑃(𝒟|𝜽, ℳ) is known as the likelihood function and represents the difference between
the experimental data and the FEM results. Finally, the probability distribution function 𝑃(𝜽|𝒟, ℳ) is the posterior
function of the unknown parameters given a model class ℳ and the measured data 𝒟. The model class ℳ is used only
when several classes are investigated for both model updating and model selection. In this paper, only one model class is
considered, and therefore, the term ℳ is omitted in order to simplify the Bayesian formulations.
In this paper, the likelihood function is given by:
P(\mathcal{D}\mid\boldsymbol{\theta}) = \frac{1}{\left(2\pi/\beta_c\right)^{N_m/2}\,\prod_{i=1}^{N_m} f_i^m}\;\exp\!\left(-\frac{\beta_c}{2}\sum_{i}^{N_m}\left(\frac{f_i^m - f_i}{f_i^m}\right)^{2}\right) \qquad (2.2)
where 𝑁𝑚 is the number of measured modes, 𝛽𝑐 is a constant, 𝑓𝑖𝑚 and 𝑓𝑖 are the 𝑖th analytical and measured natural
frequencies. The initial knowledge of the updating parameters 𝜽, which is defined by a prior PDF, is given by the following
Gaussian distribution:
P(\boldsymbol{\theta}) = \frac{1}{(2\pi)^{Q/2}\,\prod_{i=1}^{Q}\frac{1}{\sqrt{\alpha_i}}}\;\exp\!\left(-\sum_{i}^{Q}\frac{\alpha_i}{2}\left\|\theta^{i}-\theta_0^{i}\right\|^{2}\right) = \frac{1}{(2\pi)^{Q/2}\,\prod_{i=1}^{Q}\frac{1}{\sqrt{\alpha_i}}}\;\exp\!\left(-\frac{1}{2}(\boldsymbol{\theta}-\boldsymbol{\theta}_0)^{T}\Sigma^{-1}(\boldsymbol{\theta}-\boldsymbol{\theta}_0)\right) \qquad (2.3)
where 𝑄 is the number of the uncertain parameters, 𝜽0 represents the mean value of the updating parameters, 𝛼𝑖 , 𝑖 =
1, … , 𝑄 are the coefficients of the updating parameters and the Euclidean norm is given by the notation: ‖∗‖.
After substituting Eqs. (2.2) and (2.3) into the Bayesian inference defined by Eq. (2.1), the posterior 𝑃(𝜽|𝒟) of the
unknown parameters 𝜽 given the experimental data 𝒟 is characterized by:
P(\boldsymbol{\theta}\mid\mathcal{D}) \propto \frac{1}{Z_s(\alpha,\beta_c)}\,\exp\!\left(-\frac{\beta_c}{2}\sum_{i}^{N_m}\left(\frac{f_i^m - f_i}{f_i^m}\right)^{2} - \sum_{i}^{Q}\frac{\alpha_i}{2}\left\|\theta^{i}-\theta_0^{i}\right\|^{2}\right) \qquad (2.4)
where
Z_s(\alpha,\beta_c) = \left(\frac{2\pi}{\beta_c}\right)^{N_m/2}\prod_{i=1}^{N_m} f_i^m \;(2\pi)^{Q/2}\prod_{i=1}^{Q}\frac{1}{\sqrt{\alpha_i}} \qquad (2.5)
Generally, the complexity of the posterior PDF, which depends on the modal parameters of the analytical model, is
related to the complexity of the analytical model, and for certain relatively complex structural models the analytical results
for the posterior distribution are difficult to obtain due to the high dimensionality of the search space. In this case, sampling
techniques [5, 10, 11, 13, 14] are the only practical approaches in order to approximate the posterior PDF. The main idea
of sampling techniques is to generate a sequence of 𝑁𝑠 vectors {𝜽1 , 𝜽2 , … , 𝜽𝑁𝑠 } and use these samples to approximate the
future response of the unknown parameters at different time instances. The most recognized sampling techniques are
Markov Chain Monte Carlo (MCMC) methods [5, 13-18]. In this paper, the combination of one of the basic MCMC
algorithms, known as the Metropolis-Hastings algorithm, with one of the genetic algorithms, known as differential
evolution (DE), is used to generate samples from the posterior PDF in order to update structural models.
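To make the formulation above concrete, the following Python sketch (our illustration; the function and variable names are hypothetical, and the routine returning the analytical natural frequencies is a placeholder for the FEM) evaluates the logarithm of the un-normalized posterior of Eq. (2.4), i.e. the sum of the log-likelihood of Eq. (2.2) and the log-prior of Eq. (2.3). Working on the log scale avoids numerical underflow and is all that the MCMC acceptance ratios require.

```python
import numpy as np

def log_posterior(theta, f_measured, model_frequencies, theta0, alpha, beta_c):
    """Un-normalized log-posterior of Eq. (2.4).

    theta             : current vector of updating parameters (length Q)
    f_measured        : measured natural frequencies f_i^m (length N_m)
    model_frequencies : callable returning the analytical frequencies f_i(theta)
    theta0            : prior mean vector of the updating parameters
    alpha             : prior precision coefficients alpha_i (1 / variance)
    beta_c            : likelihood weighting constant
    """
    f_analytical = model_frequencies(theta)
    # Log-likelihood term of Eq. (2.2): weighted squared relative frequency errors.
    residual = (f_measured - f_analytical) / f_measured
    log_like = -0.5 * beta_c * np.sum(residual ** 2)
    # Log-prior term of Eq. (2.3): independent Gaussians centred at theta0.
    log_prior = -0.5 * np.sum(alpha * (theta - theta0) ** 2)
    return log_like + log_prior  # normalizing constant Z_s omitted (it cancels in MCMC ratios)
```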
3. The Differential Evolution Markov Chain Monte Carlo (DE-MC) method
In this paper, the DE and MCMC methods, which are extremely popular methods in several scientific domains, are
combined to improve the convergence of the sampling procedure. In this approach, multiple chains are run in parallel in
order to improve the accuracy of the updating parameters, while these chains learn from each other instead of running all
the chains independently. This may improve the efficiency of the searching procedure and avoid sampling in the vicinity
of a local minimum. The new chains are then accepted or rejected according to the Metropolis-Hastings criterion.
The Metropolis-Hastings (M-H) [18, 19, 20] algorithm is one of the common MCMC methods that can be used to
draw samples from multivariate probability distributions. To sample from the posterior PDF 𝑃(𝜃|𝐷), where 𝜽 =
{𝜃1 , 𝜃2 , … , 𝜃𝑑 } is a 𝑑-dimensional parameters vector, a proposal density distribution 𝑞(𝜽|𝜽𝑡−1 ) is used to generate a
proposed random vector 𝜽∗ given the value at the previous accepted vector 𝜽𝑡−1 at the iteration 𝑡 − 1 of the algorithm.
Next, the Metropolis criterion is used to accept or reject the proposed sample 𝜽∗ as follows:
\alpha(\boldsymbol{\theta}^{*}\mid\boldsymbol{\theta}^{t-1}) = \min\left\{1,\;\frac{P(\boldsymbol{\theta}^{*}\mid D)\,q(\boldsymbol{\theta}^{t-1}\mid\boldsymbol{\theta}^{*})}{P(\boldsymbol{\theta}^{t-1}\mid D)\,q(\boldsymbol{\theta}^{*}\mid\boldsymbol{\theta}^{t-1})}\right\} \qquad (3.1)
On the other hand, Differential Evolution (DE) [8] is a very effective genetic algorithm for solving various real-world global optimization problems. As a genetic algorithm, DE begins by randomly initialising the population within a certain search area, and these initial values are then evolved over the generations in order to find the global minimum. This is achieved using genetic operators such as mutation, selection, and crossover.
By integrating the Metropolis-Hastings criterion with the search abilities of the DE algorithm, the resulting MCMC method can be more efficient, since other chains can be employed to create the new candidates for the current chain. In the DE-MC algorithm, the new value of the chain is obtained by a simple mutation operation, where the difference between two randomly selected chains (different from the current chain) is added to the current chain. Thus,
the proposal for each chain depends on a weighted combination of other chains which can be easily defined as [6, 7]:
𝜽∗ = 𝜽𝑖 + 𝛾(𝜽𝑎 − 𝜽𝑏 ) + 𝜺
(3.2)
where 𝜽∗ represents the new proposed vector, 𝜽𝑖 is the current state of the 𝑖-th chain, 𝜽𝑎 and 𝜽𝑏 are randomly selected
chains, and 𝛾 is a tuning factor that always takes a positive value and can be set to vary between [0.4, 1]. Note that the vectors:
𝜽𝑖 ≠ 𝜽𝑎 ≠ 𝜽𝑏 . Finally, the noise 𝜺, which is defined as a Gaussian distribution 𝜺~𝑁𝑝 (𝟎, 𝝈2 ) with a very small variance
vector 𝝈2 , is added to the proposed vector to avoid degeneracy problems. The factor 𝛾 can be seen as the magnitude that
controls the jumping distribution. The main idea of the DE-MC algorithm can be illustrated in Figure 3.1b.
Figure 3.1: Proposed vector generation in (a) the M-H method and (b) the DE-MC method
Figure 3.1 explains the way to generate proposed vectors for the M-H method (Figure 3.1a) and for the DE-MC
method (Figure 3.1b). As illustrated, the difference vector between the two randomly selected chains 𝜽𝑎 and 𝜽𝑏 represents
the direction of the new proposed vector, where this difference is multiplied by the factor 𝛾 to define the moving distance.
The moving distance is then added to the current chain 𝜽𝑖 to create the proposed vector. Note that the DE-MC method has only one tuning factor 𝛾, in contrast to other versions of evolutionary MCMC methods. Finally, the new proposal 𝜽∗ of the 𝑖-th chain is accepted or rejected according to the Metropolis criterion, which is given as:
r = \min\left\{1,\;\frac{P(\boldsymbol{\theta}^{*}\mid D)}{P(\boldsymbol{\theta}^{i}\mid D)}\right\} \qquad (3.3)
The steps to update FEMs using the DE-MC algorithm are summarized as follows:
1- Initialize the population 𝜽𝑖,0 , 𝑖 ∈ {1, 2, … , 𝑁}.
2- Set the tuning factor 𝛾. In this paper, 𝛾 = 2.38/√(2𝑑), where 𝑑 is the dimension of the updating parameters.
3- Calculate the posterior PDF for all chains.
4- For all chains 𝑖 ∈ {1, 2, … , 𝑁}:
4.1 Sample uniformly two random vectors 𝜽𝑎 , 𝜽𝑏 where 𝜽𝑎 ≠ 𝜽𝑏 ≠ 𝜽𝑖 .
4.2 Sample the random value 𝜺 with small variance, 𝜺 ~ 𝑁𝑝 (𝟎, 𝝈2 ).
4.3 Calculate the proposed vector 𝜽∗ = 𝜽𝑖 + 𝛾(𝜽𝑎 − 𝜽𝑏 ) + 𝜺.
4.4 Calculate the posterior PDF for the vector 𝜽∗ .
4.5 Calculate the Metropolis ratio 𝑟 = min{1, 𝑃(𝜽∗ |𝐷)/𝑃(𝜽𝑖 |𝐷)}.
4.6 Accept the proposed vector (𝜽𝑖 ← 𝜽∗ ) with probability min(1, 𝑟); otherwise 𝜽𝑖 is unchanged.
5- Repeat steps 4.1 to 4.6 until the required number of samples is achieved.
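A minimal NumPy sketch of this procedure is given below; it is our own illustration rather than the authors' code, and it assumes a log_posterior function such as the one sketched in Section 2 together with user-supplied parameter bounds. Each generation builds the proposal of Eq. (3.2) for every chain and applies the Metropolis rule of Eq. (3.3) on the log scale.

```python
import numpy as np

def de_mc_sampler(log_posterior, theta_min, theta_max, n_chains=10,
                  n_generations=10000, sigma=1e-6, rng=None):
    """Differential Evolution Markov Chain (DE-MC) sampler, following steps 1-5 above."""
    rng = np.random.default_rng(rng)
    d = len(theta_min)
    gamma = 2.38 / np.sqrt(2 * d)                        # step 2: tuning factor
    # Step 1: initialize the population uniformly inside the bounds.
    chains = rng.uniform(theta_min, theta_max, size=(n_chains, d))
    log_post = np.array([log_posterior(c) for c in chains])   # step 3
    samples = np.empty((n_generations, n_chains, d))
    for t in range(n_generations):                       # steps 4-5
        for i in range(n_chains):
            # Step 4.1: pick two distinct chains a, b different from i.
            a, b = rng.choice([j for j in range(n_chains) if j != i], size=2, replace=False)
            # Steps 4.2-4.3: mutation plus small Gaussian jitter to avoid degeneracy.
            eps = rng.normal(0.0, sigma, size=d)
            proposal = chains[i] + gamma * (chains[a] - chains[b]) + eps
            # Steps 4.4-4.6: Metropolis acceptance on the log scale.
            log_post_prop = log_posterior(proposal)
            if np.log(rng.uniform()) < log_post_prop - log_post[i]:
                chains[i], log_post[i] = proposal, log_post_prop
        samples[t] = chains
    return samples
```

Because only posterior ratios appear in the acceptance step, the normalizing constant Z_s of Eq. (2.5) never needs to be evaluated.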
In the next two sections, the performance of the DE-MC algorithm is highlighted by updating two structural examples.
4. Application 1: Simple Mass-Spring system
In this section, a five degree-of-freedom mass-spring linear system, as presented in Figure 4.1, is updated using the DE-MC algorithm.
Figure 4.1: The five degrees of freedom mass-spring system
The system contains 5 masses connected to each other using 10 springs (see Figure 4.1). The deterministic values of
the masses are: 𝑚1 = 2.7 kg, 𝑚2 = 1.7 kg, 𝑚3 = 6.1 kg, 𝑚4 = 5.3 kg and 𝑚5 = 2.9 kg. The stiffnesses of the springs are:
𝑘3 = 3200 N/m, 𝑘5 = 1840 N/m, 𝑘7 = 2200 N/m, 𝑘9 = 2800 N/m and 𝑘10 = 2000 N/m. The spring stiffnesses
𝑘1 , 𝑘2 , 𝑘4 , 𝑘6 , and 𝑘8 are considered as the uncertain parameters where the updating vector is: 𝜽 = {𝜃1 , 𝜃2 , 𝜃3 , 𝜃4 , 𝜃5 } =
{𝑘1 , 𝑘2 , 𝑘4 , 𝑘6 , 𝑘8 }.
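For reference, the analytical natural frequencies that enter the likelihood follow from the generalized eigenvalue problem Kφ = ω²Mφ of the mass-spring model. The fragment below is only a sketch of this standard computation: the connectivity of the ten springs is not specified here, so the assembly of the stiffness matrix K from the spring constants is left as a hypothetical placeholder.

```python
import numpy as np
from scipy.linalg import eigh

def natural_frequencies(M, K):
    """Natural frequencies in Hz from the generalized eigenproblem K*phi = w^2 * M*phi."""
    eigvals = eigh(K, M, eigvals_only=True)       # eigenvalues are w^2 in (rad/s)^2
    return np.sqrt(np.abs(eigvals)) / (2.0 * np.pi)

# Placeholder assembly: M is diagonal with the five masses; K must be assembled from
# the ten spring stiffnesses according to the connectivity shown in Figure 4.1
# (not reproduced here), e.g. as a sum of elementary spring contributions.
M = np.diag([2.7, 1.7, 6.1, 5.3, 2.9])
# K = assemble_stiffness(k)   # hypothetical helper, depends on the spring layout
```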
Since the DE-MC method is used for the updating procedure, the population used by the algorithm is selected to be
𝑁 = 10. The updating vectors are bounded by 𝜽𝑚𝑎𝑥 and 𝜽𝑚𝑖𝑛 which are set to {4800, 2600,2670, 3400, 2750} and
{3200, 1800,1600, 1800, 2050}, respectively. The tuning factor is set to 𝛾 = 2.38/√2𝑑 while 𝑑 = 5, the initial vector
of 𝜽 is set to 𝜽0 = {4600, 2580, 1680, 3100, 2350} and the number of generations (number of samples) is set to 𝑁𝑠 =
10000. The obtained samples are illustrated in Figure 4.2 while the updating parameters, as well as the initial and updated
natural frequencies, are shown in Tables 4.1 and 4.2, respectively.
Figure 4.2: The scatter plots of the samples using the DE-MC algorithm
Figure 4.2 shows the scatter plots of the uncertain parameters using the DE-MC algorithm. The confidence ellipses
(error ellipse) of the samples are also shown in the same figures (in red) where these ellipses visualize the regions that
contain 95% of the obtained samples. As expected, the figure shows that the DE-MC algorithm has found the high
probability area after only a few iterations. Table 4.1 contains the initial values, the nominal values and the updated values
of the uncertain parameters. The coefficient of variation (c.o.v) values, which are estimated by dividing the standard
deviations 𝜎𝑖 by the updated vectors 𝜽𝑖 (or 𝝁𝑖 ), are also presented in Table 4.1 and used to measure the errors in the
updating. It is clear that the obtained values of the c.o.v when the DE-MC algorithm is used to update the structure are
small and less than 2.5% which means that the DE-MC algorithm performed well and was able to identify the areas with
high probability. This can also be verified from the same table, where the updating parameters are close to the nominal values.
Table 4.1: The updating parameters obtained using the DE-MC technique

Unknown parameters (N/m) | Initial | Nominal values | Error (%) | DE-MC (μi) | Error (%) | c.o.v σi/μi (%)
θ1 | 4600 | 4010 | 14.71 | 4004.4 | 0.14 | 1.03
θ2 | 2580 | 2210 | 16.74 | 2197.6 | 0.56 | 1.71
θ3 | 1680 | 2130 | 21.13 | 2109.4 | 0.97 | 2.07
θ4 | 3100 | 2595 | 19.46 | 2600.9 | 0.23 | 2.32
θ5 | 2350 | 2398 | 2.00 | 2410.4 | 0.52 | 1.77
Table 4.2 contains the initial, nominal and updated natural frequencies. Furthermore, the absolute errors, which are estimated by |f_i^m − f_i| / f_i^m, the total average error (TAE), which is computed by TAE = (1/N_m) ∑_{i=1}^{N_m} |f_i^m − f_i| / f_i^m with N_m = 5, and the c.o.v values are also displayed. Obviously, the updated frequencies obtained by the DE-MC are better than the initial frequencies, and almost equal to the nominal frequencies.
Table 4.2: The updated natural frequencies and the errors obtained using the DE-MC

Modes | Nominal Frequency (Hz) | Initial Frequency (Hz) | Error (%) | Frequency DE-MC (Hz) | c.o.v (%) | Error (%)
1 | 3.507 | 3.577 | 1.97 | 3.507 | 0.118 | 0.00
2 | 5.149 | 5.371 | 4.30 | 5.149 | 0.126 | 0.00
3 | 7.083 | 7.239 | 2.21 | 7.082 | 0.119 | 0.02
4 | 8.892 | 9.030 | 1.56 | 8.894 | 0.140 | 0.03
5 | 9.426 | 9.412 | 0.16 | 9.426 | 0.117 | 0.00
TAE | — | — | 1.98 | — | — | 0.012
The total average error of the FEM output was reduced from 1.98% to 0.012%. On the other hand, the values of the
c.o.v for all updated frequencies are smaller than 0.15% which indicates that the DE-MC technique efficiently updated the
structural system. Figure 4.3 shows the evaluation of the total average error at each iteration. The TAE in Figure 4.3 is obtained as follows: first, the mean value of the samples at each iteration is computed as θ̂ = E(θ) = (1/Ñ_s) ∑_{j=1}^{i} θ^j, where i is the current iteration. Next, the mean value is used to compute the analytical frequencies of the FEM, and then the total average error is calculated as TAE(i) = (1/N_m) ∑_{j=1}^{N_m} |f_j^m − f_j| / f_j^m. As a result, it is clear that the DE-MC algorithm converges efficiently after the first 2000 iterations.
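One way to reproduce such a convergence curve from the stored chains is sketched below (our illustration, reusing the hypothetical samples array and model function of the earlier snippets): the running posterior mean is computed first, and the TAE is then evaluated at that mean.

```python
import numpy as np

def running_tae(samples, f_measured, model_frequencies):
    """One way to compute TAE(i): running mean of all samples drawn so far,
    then the average relative frequency error of the FEM at that mean."""
    per_generation_mean = samples.mean(axis=1)             # average over the parallel chains
    counts = np.arange(1, per_generation_mean.shape[0] + 1)[:, None]
    running_mean = np.cumsum(per_generation_mean, axis=0) / counts
    tae = np.empty(running_mean.shape[0])
    for i, theta_hat in enumerate(running_mean):
        f = model_frequencies(theta_hat)
        tae[i] = np.mean(np.abs(f_measured - f) / f_measured)
    return tae
```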
Figure 4.3: The evaluation of the TAE using the DE-MC method (total average error, log scale, versus iteration)
Figure 4.4 illustrates the correlation between the updating parameters where all parameters are correlated (the values
are different from zero). Moreover, the majority of these parameters are weakly correlated (small values <0.3) except the
pairs (𝜃1 , 𝜃2 ) and (𝜃4 , 𝜃5 ) which are highly correlated (values >0.7).
Figure 4.4: The correlation between the updating parameters.
In the next section, the DE-MC method is used to update an unsymmetrical H-shaped aluminum structure with real
experimental data.
5. Application 2: The FEMU of the unsymmetrical H-Shaped structural system
In this section, the performance of the DE-MC algorithm is examined by updating an unsymmetrical H-shaped aluminum structure with real measured data. The FEM model of the H-shaped structure is presented in Figure 5.1, where
the structure is divided into 12 elements, and each element is modelled as an Euler-Bernoulli beam. The location displayed
by a double arrow at the middle beam indicates the position of excitation which is produced by an electromagnetic shaker.
An accelerometer was used to measure the set of frequency-response functions. The initial analytical natural frequencies
are 53.9, 117.3, 208.4, 254.0 and 445.0 Hz. In this example, the moments of inertia 𝐼𝑥𝑥 and the cross-sectional areas 𝐴𝑥𝑥
of the left, middle and right subsections of the H-shaped beam are selected to be updated in order to improve the analytical
natural frequencies. Thus, the updating parameters are: 𝜽 = {𝐼𝑥1 , 𝐼𝑥2 , 𝐼𝑥3 , 𝐴𝑥1 , 𝐴𝑥2, 𝐴𝑥3, }.
Figure 5.1: The Unsymmetrical H-Shaped Structure
The rest of the H-shaped structure parameters are given as follows. The Young's modulus is set to 7.2 × 10^10 N/m² and the density is set to 2785 kg/m³. The updating parameters 𝜽 are bounded by maximum and minimum vectors given by [4.73 × 10^−8, 4.73 × 10^−8, 4.73 × 10^−8, 5.16 × 10^−4, 5.16 × 10^−4, 5.16 × 10^−4] and [0.73 × 10^−8, 0.73 × 10^−8, 0.73 × 10^−8, 1.16 × 10^−4, 1.16 × 10^−4, 1.16 × 10^−4], respectively. These bounds ensure that the updating parameters remain physically realistic. The number of samples is set to N_s = 5000, the factor β_c of the likelihood function is set to 10, and the coefficients α_i of the prior PDF are set to 1/σ_i², where σ_i² is the variance of the i-th uncertain parameter, with σ = [5 × 10^−8, 5 × 10^−8, 5 × 10^−8, 5 × 10^−4, 5 × 10^−4, 5 × 10^−4].
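For concreteness, a minimal sketch (not code from the paper) of how this configuration might be laid out before running the sampler; the variable names and the Gaussian form of the prior are our assumptions based on the description above.

```python
import numpy as np

# Updating parameters: theta = [I_x1, I_x2, I_x3, A_x1, A_x2, A_x3]
theta_init = np.array([2.7265e-8] * 3 + [3.1556e-4] * 3)
theta_max  = np.array([4.73e-8] * 3 + [5.16e-4] * 3)
theta_min  = np.array([0.73e-8] * 3 + [1.16e-4] * 3)

N_s    = 5000                      # number of samples
beta_c = 10.0                      # likelihood factor
sigma  = np.array([5e-8] * 3 + [5e-4] * 3)
alpha  = 1.0 / sigma**2            # prior coefficients alpha_i = 1 / sigma_i^2

def log_prior(theta):
    """Assumed Gaussian-style prior centred on the initial values with coefficients
    alpha_i, truncated to the physically realistic box [theta_min, theta_max]."""
    if np.any(theta < theta_min) or np.any(theta > theta_max):
        return -np.inf
    return -0.5 * np.sum(alpha * (theta - theta_init) ** 2)
```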
Figure 5.2 illustrates the scatter plots of the updating parameters. The confidence ellipses that contain 95% of the samples are also included in the figure. The updating parameters were normalized by dividing them by 𝒌 = [10^−8, 10^−8, 10^−8, 10^−4, 10^−4, 10^−4]. As expected, the DE-MC algorithm was able to find the area of high probability after a few iterations. The updated parameters are shown in Table 5.1, together with the initial values, the c.o.v values and the updated parameters obtained by the M-H algorithm.
[Figure 5.2 here: scatter plots of the normalized samples (e.g., θ3/k3 against θ1/k1 and θ4/k4 against θ3/k3) with 95% confidence ellipses.]
Figure 5.2: The scatter plots of the samples using the DE-MC algorithm
Table 5.1: The initial parameters, the c.o.v values and the updating parameters using the DE-MC and M-H algorithms

Parameter | Initial μ_i | DE-MC μ_i | DE-MC c.o.v σ_i (%) | M-H μ_i | M-H c.o.v σ_i (%)
θ1 | 2.7265 × 10^−8 | 2.8965 × 10^−8 | 5.76 | 2.31 × 10^−8 | 22.59
θ2 | 2.7265 × 10^−8 | 2.9739 × 10^−8 | 1.91 | 2.68 × 10^−8 | 15.25
θ3 | 2.7265 × 10^−8 | 1.7676 × 10^−8 | 1.67 | 2.17 × 10^−8 | 13.96
θ4 | 3.1556 × 10^−4 | 3.8966 × 10^−4 | 0.65 | 2.85 × 10^−4 | 14.36
θ5 | 3.1556 × 10^−4 | 2.1584 × 10^−4 | 2.93 | 2.83 × 10^−4 | 14.36
θ6 | 3.1556 × 10^−4 | 2.9553 × 10^−4 | 0.026 | 2.77 × 10^−4 | 13.08
The results in Table 5.1 indicate that the updated parameters obtained by the DE-MC and M-H algorithms differ from the initial values, which means that the uncertain parameters have been successfully updated. Furthermore, the c.o.v values of the updating parameters obtained by the DE-MC algorithm are relatively small (below 3% for all but θ1), which verifies that the algorithm was able to identify the areas of high probability in a reasonable amount of time; the c.o.v values obtained by the M-H algorithm, however, are relatively high (≥13.08%), which means that the M-H algorithm does not match the efficiency of the DE-MC algorithm.
Table 5.2 lists the updated frequencies obtained with the DE-MC and M-H algorithms, together with the errors and the c.o.v values. As expected, the analytical frequencies obtained by the DE-MC algorithm are better than the initial frequencies as well as the frequencies obtained by the M-H algorithm. The DE-MC method improved all natural frequencies and reduced the total average error (TAE) from 5.37% to 1.53%. Also, the c.o.v values obtained by the DE-MC method are relatively small (<0.65%).
Table 5.2: Natural frequencies, c.o.v values and errors when DE-MC and M-H techniques are used for FEMU

Mode | Measured (Hz) | Initial (Hz) | Error (%) | DE-MC (Hz) | c.o.v (%) | Error (%) | M-H (Hz) | c.o.v (%) | Error (%)
1 | 53.90 | 51.04 | 5.31 | 52.56 | 0.30 | 2.49 | 53.92 | 3.96 | 0.04
2 | 117.30 | 115.79 | 1.29 | 119.42 | 0.35 | 1.81 | 122.05 | 4.28 | 4.05
3 | 208.40 | 199.88 | 4.09 | 210.46 | 0.54 | 0.99 | 210.93 | 4.95 | 1.22
4 | 254.00 | 245.76 | 3.25 | 253.37 | 0.41 | 0.25 | 258.94 | 4.81 | 1.94
5 | 445.00 | 387.53 | 12.92 | 435.71 | 0.63 | 2.09 | 410.33 | 4.74 | 7.79
TAE | — | — | 5.37 | — | — | 1.53 | — | — | 3.01
[Figure 5.3 here: correlation matrix of the updating parameters θ1–θ6, colour scale from −1 to 1.]
Figure 5.3: The correlation between the updating parameters
Figure 5.3 shows the correlation between the updating parameters obtained with the DE-MC samples. The majority of the parameters are weakly correlated, except the pairs (θ2, θ5) and (θ4, θ5), for which the correlation is relatively high (>0.7).
The evaluation of the total average error after each accepted (or rejected) sample is illustrated in Figure 5.4. The result indicates that the DE-MC method has a fast convergence rate and was able to converge after 500 iterations.
[Figure 5.4 here: total average error (log scale) plotted against iteration number, 0–5000 iterations.]
Figure 5.4: The evaluation of the TAE using the DE-MC method
6. Conclusion
In this paper, the Differential Evolution Markov Chain (DE-MC) algorithm is used to approximate the Bayesian formulation in order to perform a finite element model updating procedure. In the DE-MC method, multiple chains are run in parallel, which allows the chains to learn from each other and thereby improve the sampling process; the jumping step of each chain depends on the difference between two randomly selected chains. The method is investigated by updating two structural systems: the first is a five-DOF mass-spring linear system and the second is the unsymmetrical H-shaped aluminum structure. In the first case, the total average error was reduced from 1.98% to 0.012%, while in the second case, the FEM updating of the unsymmetrical H-shaped structure, the total average error was reduced from 5.37% to 1.53%. The DE-MC algorithm also produced better results than the M-H algorithm when the unsymmetrical H-shaped structure was updated. In future work, the DE-MC algorithm will be modified and improved to include additional steps, such as crossover and exchange moves between the parallel chains; a sketch of the basic jumping step is given below. These changes may further improve the sampling procedure.
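As a rough illustration only (not the authors' code), the differential-evolution jumping step described above can be sketched as follows; the log-posterior function and tuning constants are placeholders.

```python
import numpy as np

def demc_step(chains, log_post, gamma=None, eps=1e-6, rng=np.random.default_rng()):
    """One DE-MC sweep: each chain proposes a jump along the difference of two
    other randomly chosen chains, then accepts or rejects with a Metropolis test."""
    n_chains, dim = chains.shape
    if gamma is None:
        gamma = 2.38 / np.sqrt(2 * dim)   # common default scaling (ter Braak, 2006)
    for i in range(n_chains):
        r1, r2 = rng.choice([j for j in range(n_chains) if j != i], size=2, replace=False)
        proposal = chains[i] + gamma * (chains[r1] - chains[r2]) + eps * rng.standard_normal(dim)
        if np.log(rng.uniform()) < log_post(proposal) - log_post(chains[i]):
            chains[i] = proposal
    return chains

# toy usage: 10 chains sampling a standard normal in two dimensions
chains = np.random.default_rng(0).normal(size=(10, 2))
for _ in range(1000):
    chains = demc_step(chains, lambda th: -0.5 * np.sum(th**2))
```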
arXiv:1410.6843v2 [] 22 Apr 2016
Posteriors, conjugacy, and exponential families
for completely random measures
Tamara Broderick
Ashia C. Wilson
Michael I. Jordan
April 25, 2016
Abstract
We demonstrate how to calculate posteriors for general Bayesian nonparametric priors and likelihoods based on completely random measures
(CRMs). We further show how to represent Bayesian nonparametric priors as a sequence of finite draws using a size-biasing approach—and how
to represent full Bayesian nonparametric models via finite marginals. Motivated by conjugate priors based on exponential family representations
of likelihoods, we introduce a notion of exponential families for CRMs,
which we call exponential CRMs. This construction allows us to specify
automatic Bayesian nonparametric conjugate priors for exponential CRM
likelihoods. We demonstrate that our exponential CRMs allow particularly straightforward recipes for size-biased and marginal representations
of Bayesian nonparametric models. Along the way, we prove that the
gamma process is a conjugate prior for the Poisson likelihood process and
the beta prime process is a conjugate prior for a process we call the odds
Bernoulli process. We deliver a size-biased representation of the gamma
process and a marginal representation of the gamma process coupled with
a Poisson likelihood process.
1 Introduction
An important milestone in Bayesian analysis was the development of a general
strategy for obtaining conjugate priors based on exponential family representations of likelihoods [DeGroot, 1970]. While slavish adherence to exponential-family conjugacy can be criticized, conjugacy continues to occupy an important
place in Bayesian analysis, for its computational tractability in high-dimensional
problems and for its role in inspiring investigations into broader classes of priors (e.g., via mixtures, limits, or augmentations). The exponential family is,
however, a parametric class of models, and it is of interest to consider whether
similar general notions of conjugacy can be developed for Bayesian nonparametric models. Indeed, the nonparametric literature is replete with nomenclature
that suggests the exponential family, including familiar names such as “Dirichlet,” “beta,” “gamma,” and “Poisson.” These names refer to aspects of the
random measures underlying Bayesian nonparametrics, either the Lévy measure used in constructing certain classes of random measures or properties of
marginals obtained from random measures. In some cases, conjugacy results
have been established that parallel results from classical exponential families; in
particular, the Dirichlet process is known to be conjugate to a multinomial process likelihood [Ferguson, 1973], the beta process is conjugate to a Bernoulli
process [Kim, 1999, Thibaux and Jordan, 2007] and to a negative binomial
process [Broderick et al., 2015]. Moreover, various useful representations for
marginal distributions, including stick-breaking and size-biased representations,
have been obtained by making use of properties that derive from exponential
families. It is striking, however, that these results have been obtained separately,
and with significant effort; a general formalism that encompasses these individual results has not yet emerged. In this paper, we provide the single, holistic
framework so strongly suggested by the nomenclature. Within this single framework, we show that it is straightforward to calculate posteriors and establish
conjugacy. Our framework includes the specification of a Bayesian nonparametric analog of the finite exponential family, which allows us to provide automatic
and constructive nonparametric conjugate priors given a likelihood specification
as well as general recipes for marginal and size-biased representations.
A broad class of Bayesian nonparametric priors—including those built on the
Dirichlet process [Ferguson, 1973], the beta process [Hjort, 1990], the gamma
process [Ferguson, 1973, Lo, 1982, Titsias, 2008], and the negative binomial
process [Zhou et al., 2012, Broderick et al., 2015]—can be viewed as models
for the allocation of data points to traits. These processes give us pairs of
traits together with rates or frequencies with which the traits occur in some
population. Corresponding likelihoods assign each data point in the population
to some finite subset of traits conditioned on the trait frequencies. What makes
these models nonparametric is that the number of traits in the prior is countably
infinite. Then the (typically random) number of traits to which any individual
data point is allocated is unbounded, but also there are always new traits to
which as-yet-unseen data points may be allocated. That is, such a model allows
the number of traits in any data set to grow with the size of that data set.
A principal challenge of working with such models arises in posterior inference. There is a countable infinity of trait frequencies in the prior which we
must integrate over to calculate the posterior of trait frequencies given allocations of data points to traits. Bayesian nonparametric models sidestep the full
infinite-dimensional integration in three principal ways: conjugacy, size-biased
representations, and marginalization.
In its most general form, conjugacy simply asserts that the prior is in the
same family of distributions as the posterior. When the prior and likelihood are
in finite-dimensional conjugate exponential families, conjugacy can turn posterior calculation into, effectively, vector addition. As a simple example, consider a
model with beta-distributed prior, θ ∼ Beta(θ|α, β), for some fixed hyperparameters α and β. For the likelihood, let each observation x_n with n ∈ {1, . . . , N} be iid Bernoulli-distributed conditional on parameter θ: x_n ∼ Bern(x|θ). Then the posterior is simply another beta distribution, Beta(θ|α_post, β_post), with parameters updated via addition: α_post := α + Σ_{n=1}^{N} x_n and β_post := β + N − Σ_{n=1}^{N} x_n.
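A minimal numerical sketch of this finite-dimensional update (illustrative only; the variable names are ours):

```python
import numpy as np

rng = np.random.default_rng(1)
alpha, beta = 2.0, 3.0                     # fixed hyperparameters of Beta(alpha, beta)
theta_true = 0.7
x = rng.binomial(1, theta_true, size=50)   # iid Bernoulli observations

# conjugate update: the posterior is Beta(alpha_post, beta_post)
alpha_post = alpha + x.sum()
beta_post = beta + len(x) - x.sum()
print(alpha_post, beta_post, alpha_post / (alpha_post + beta_post))  # posterior mean
```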
While conjugacy is certainly useful and popular in the case of finite parameter cardinality, there is arguably a stronger computational imperative for its
use in the infinite-parameter case. Indeed, the core prior-likelihood pairs of
Bayesian nonparametrics are generally proven [Hjort, 1990, Kim, 1999, Lo, 1982,
Thibaux and Jordan, 2007, Broderick et al., 2015], or assumed to be [Titsias,
2008, Thibaux, 2008], conjugate. When such proofs exist, though, thus far
they have been specialized to specific pairs of processes. In what follows, we
demonstrate a general way to calculate posteriors for a class of distributions that
includes all of these classical Bayesian nonparametric models. We also define
a notion of exponential family representation for the infinite-dimensional case
and show that, given a Bayesian nonparametric exponential family likelihood,
we can readily construct a Bayesian nonparametric conjugate prior.
Size-biased sampling provides a finite-dimensional distribution for each of
the individual prior trait frequencies [Thibaux and Jordan, 2007, Paisley et al.,
2010]. Such a representation has played an important role in Bayesian nonparametrics in recent years, allowing for either exact inference via slice sampling [Damien et al., 1999, Neal, 2003]—as demonstrated by Teh et al. [2007],
Broderick et al. [2015]—or approximate inference via truncation [Doshi et al.,
2009, Paisley et al., 2011]. This representation is particularly useful for building
hierarchical models [Thibaux and Jordan, 2007]. We show that our framework
yields such representations in general, and we show that our construction is
especially straightforward to use in the exponential family framework that we
develop.
Marginal processes avoid directly representing the infinite-dimensional prior
and posterior altogether by integrating out the trait frequencies. Since the
trait allocations are finite for each data point, the marginal processes are finite for any finite set of data points. Again, thus far, such processes have
been shown to exist separately in special cases; for example, the Indian buffet
process [Griffiths and Ghahramani, 2006] is the marginal process for the beta
process prior paired with a Bernoulli process likelihood [Thibaux and Jordan,
2007]. We show that the integration that generates the marginal process from
the full Bayesian model can be generally applied in Bayesian nonparametrics
and takes a particularly straightforward form when using conjugate exponential
family priors and likelihoods. We further demonstrate that, in this case, a basic, constructive recipe exists for the general marginal process in terms of only
finite-dimensional distributions.
Our results are built on the general class of stochastic processes known as
completely random measures (CRMs) [Kingman, 1967]. We review CRMs in
Section 2.1 and we discuss what assumptions are needed to form a full Bayesian
nonparametric model from CRMs in Section 2.3. Given a general Bayesian
nonparametric prior and likelihood (Section 2.2), we demonstrate in Section 3
how to calculate the posterior. Although the development up to this point is
more general, we next introduce a concept of exponential families for CRMs
3
(Section 4.1) and call such models exponential CRMs. We show that we can
generate automatic conjugate priors given exponential CRM likelihoods in Section 4.2. Finally, we show how we can generate recipes for size-biased representations (Section 5) and marginal processes (Section 6), which are particularly
straightforward in the exponential CRM case (Corollary 5.2 in Section 5 and
Corollary 6.2 in Section 6). We illustrate our results on a number of examples
and derive new conjugacy results, size-biased representations, and marginal processes along the way.
We note that some similar results have been obtained by Orbanz [2010] and
James [2014]. In the present work, we focus on creating representations that
allow tractable inference.
2 Bayesian models based on completely random measures
As we have discussed, we view Bayesian nonparametric models as being composed of two parts: (1) a collection of pairs of traits together with their frequencies or rates and (2) for each data point, an allocation to different traits. Both
parts can be expressed as random measures. Recall that a random measure is
a random element whose values are measures.
We represent each trait by a point ψ in some space Ψ of traits. Further,
let θk be the frequency, or rate, of the trait represented by ψk , where k indexes
the countably many traits. In particular, θk ∈ R+ . Then (θk , ψk ) is a tuple
consisting of the frequency of the kth trait together with its trait descriptor.
We can represent the full collection of pairs of traits with their frequencies by
the discrete measure on Ψ that places weight θk at location ψk :
Θ = Σ_{k=1}^{K} θ_k δ_{ψ_k},   (1)
where the cardinality K may be finite or infinity.
Next, we form data point Xn for the nth individual. The data point Xn is
viewed as a discrete measure. Each atom of Xn represents a pair consisting of
(1) a trait to which the nth individual is allocated and (2) a degree to which
the nth individual is allocated to this particular trait. That is,
X_n = Σ_{k=1}^{K_n} x_{n,k} δ_{ψ_{n,k}},   (2)
where again ψn,k ∈ Ψ represents a trait and now xn,k ∈ R+ represents the degree
to which the nth data point belongs to trait ψn,k . Kn is the total number of
traits to which the nth data point belongs.
Here and in what follows, we treat X1:N = {Xn : n ∈ [N ]} as our observed
data points for [N ] := {1, 2, 3, . . . , N }. In practice X1:N is often incorporated
into a more complex Bayesian hierarchical model. For instance, in topic modeling, ψk represents a topic; that is, ψk is a distribution over words in a vocabulary [Blei et al., 2003, Teh et al., 2006]. θk might represent the frequency with
which the topic ψk occurs in a corpus of documents. xn,k might be a positive
integer and represent the number of words in topic ψn,k that occur in the nth
document. So the nth document has a total length of Σ_{k=1}^{K_n} x_{n,k} words. In this
case, the actual observation consists of the words in each document, and the
topics are latent. Not only are the results concerning posteriors, conjugacy, and
exponential family representations that we develop below useful for inference in
such models, but in fact our results are especially useful in such models—where
the traits and any ordering on the traits are not known in advance.
Next, we want to specify a full Bayesian model for our data points X1:N .
To do so, we must first define a prior distribution for the random measure Θ
as well as a likelihood for each random measure Xn conditioned on Θ. We let
ΣΨ be a σ-algebra of subsets of Ψ, where we assume all singletons are in ΣΨ .
Then we consider random measures Θ and Xn whose values are measures on Ψ.
Note that for any random measure Θ and any measurable set A ∈ ΣΨ , Θ(A) is
a random variable.
2.1 Completely random measures
We can see from Eqs. (1) and (2) that we desire a distribution on random
measures that yields discrete measures almost surely. A particularly simple
form of random measure called a completely random measure can be used to
generate a.s. discrete random measures [Kingman, 1967].
A completely random measure Θ is defined as a random measure that satisfies one additional property; for any disjoint, measurable sets A1 , A2 , . . . , AK ∈
ΣΨ , we require that Θ(A1 ), Θ(A2 ), . . . , Θ(AK ) be independent random variables. Kingman [1967] showed that a completely random measure can always
be decomposed into a sum of three independent parts:
Θ = Θdet + Θf ix + Θord .
(3)
Here, Θdet is the deterministic component, Θf ix is the fixed-location component,
and Θord is the ordinary component. In particular, Θdet is any deterministic
measure. We define the remaining two parts next.
The fixed-location component is called the “fixed component” by Kingman
[1967], but we expand the name slightly here to emphasize that Θf ix is defined
to be constructed from a set of random weights at fixed (i.e., deterministic)
locations. That is,
Θ_fix = Σ_{k=1}^{K_fix} θ_{fix,k} δ_{ψ_{fix,k}},   (4)
where the number of fixed-location atoms, Kf ix , may be either finite or infinity;
ψf ix,k is deterministic, and θf ix,k is a non-negative, real-valued random variable
(since Θ_fix is a measure). Without loss of generality, we assume that the locations
ψf ix,k are all distinct. Then, by the independence assumption of CRMs, we must
have that θf ix,k are independent random variables across k. Although the fixed-location atoms are often ignored in the Bayesian nonparametrics literature, we
will see that the fixed-location component has a key role to play in establishing
Bayesian nonparametric conjugacy and in the CRM representations we present.
The third and final component is the ordinary component. Let #(A) denote
the cardinality of some countable set A. Let µ be any σ-finite, deterministic
measure on R+ × Ψ, where R+ is equipped with the Borel σ-algebra and ΣR+ ×Ψ
is the resulting product σ-algebra given ΣΨ . Recall that a Poisson point process
with rate measure µ on R+ × Ψ is a random countable subset Π of R+ × Ψ such
that two properties hold [Kingman, 1992]:
1. For any A ∈ ΣR+ ×Ψ , #(Π ∩ A) ∼ Poisson(µ(A)).
2. For any disjoint A1 , A2 , . . . , AK ∈ ΣR+ ×Ψ , #(Π∩A1 ), #(Π∩A2 ), · · · , #(Π∩
AK ) are independent random variables.
To generate an ordinary component, start with a Poisson point process on R+ ×
Ψ, characterized by its rate measure µ(dθ×dψ). This process yields Π, a random
and countable set of points: Π = {(θ_ord,k, ψ_ord,k)}_{k=1}^{K_ord}, where K_ord may be finite
or infinity. Form the ordinary component measure by letting θ_ord,k be the weight
of the atom located at ψ_ord,k:

Θ_ord = Σ_{k=1}^{K_ord} θ_{ord,k} δ_{ψ_{ord,k}}.   (5)
Recall that we stated at the start of Section 2.1 that CRMs may be used to
produce a.s. discrete random measures. To check this assertion, note that Θf ix
is a.s. discrete by construction (Eq. (4)) and Θord is a.s. discrete by construction
(Eq. (5)). Θdet is the one component that may not be a.s. atomic. Thus the
prevailing norm in using models based on CRMs is to set Θdet ≡ 0; in what
follows, we adopt this norm. If the reader is concerned about missing any atoms
in Θdet , note that it is straightforward to adapt the treatment of Θf ix to include
the case where the atom weights are deterministic. When we set Θdet ≡ 0, we
are left with Θ = Θf ix + Θord by Eq. (3). So Θ is also discrete, as desired.
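To make the construction concrete, here is a small illustrative simulation (ours, not the authors') of an ordinary component: atoms with weight at least ε are kept, so their number is Poisson with mean ν([ε, 1]), their weights are drawn from the normalized restriction of ν, and their locations are drawn iid from G (taken to be Uniform(0,1) purely for illustration). The beta-process-like rate measure below is a hypothetical example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical infinite-activity weight rate measure on (0, 1]:
# nu(dtheta) = gamma * theta^(-1) * (1 - theta)^(c - 1) dtheta
gamma_, c, eps = 2.0, 1.0, 1e-3

def nu_density(theta):
    return gamma_ * theta ** (-1.0) * (1.0 - theta) ** (c - 1.0)

# Riemann-sum approximation of nu([eps, 1]), the mean number of atoms with weight >= eps
grid = np.linspace(eps, 1.0, 20_000)
dens = nu_density(grid)
mass = dens.sum() * (grid[1] - grid[0])

K = rng.poisson(mass)                                       # number of atoms kept in the truncation
weights = rng.choice(grid, size=K, p=dens / dens.sum())     # weights ~ normalized nu on [eps, 1]
locations = rng.uniform(0.0, 1.0, size=K)                   # locations ~ G (here Uniform(0, 1))

print(f"{K} atoms with weight >= {eps}; total mass of truncated measure: {weights.sum():.3f}")
```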
2.2 Prior and likelihood
The prior that we place on Θ will be a fully general CRM (minus any deterministic component) with one additional assumption on the rate measure of the
ordinary component. Before incorporating the additional assumption, we say
that Θ has a fixed-location component with Kf ix atoms, where the kth atom
has arbitrary distribution F_fix,k: θ_fix,k ∼ F_fix,k(dθ), independently across k. K_fix may be finite
or infinity, and Θ has an ordinary component characterized by rate measure
µ(dθ × dψ). The additional assumption we make is that the distribution on
the weights in the ordinary component is assumed to be decoupled from the
distribution on the locations. That is, the rate measure decomposes as
µ(dθ × dψ) = ν(dθ) · G(dψ),
(6)
where ν is any σ-finite, deterministic measure on R+ and G is any proper
distribution on Ψ. While the distribution over locations has been discussed
extensively elsewhere [Neal, 2000, Wang and Blei, 2013], it is the weights that
affect the allocation of data points to traits.
Given the factorization of µ in Eq. (6), the ordinary component of Θ can
be generated by letting {θ_ord,k}_{k=1}^{K_ord} be the points of a Poisson point process
generated on R+ with rate ν.¹ We then draw the locations {ψ_ord,k}_{k=1}^{K_ord} iid
according to G(dψ): ψ_ord,k ∼ G(dψ). Finally, for each k, θ_ord,k δ_{ψ_ord,k} is an
atom in Θord. This factorization will allow us to focus our attention on the
trait frequencies, and not the trait locations, in what follows. Moreover, going
forward, we will assume G is diffuse (i.e., G has no atoms) so that the ordinary
component atoms are all at a.s. distinct locations, which are further a.s. distinct
from the fixed locations.
Since we have seen that Θ is an a.s. discrete random measure, we can write
it as
Θ = Σ_{k=1}^{K} θ_k δ_{ψ_k},   (7)
where K := Kf ix + Kord may be finite or infinity, and every ψk is a.s. unique.
That is, we will sometimes find it helpful notationally to use Eq. (7) instead of
separating the fixed and ordinary components. At this point, we have specified
the prior for Θ in our general model.
Next, we specify the likelihood; i.e., we specify how to generate the data
points Xn given Θ. We will assume each Xn is generated iid given Θ across the
data indices n. We will let Xn be a CRM with only a fixed-location component
given Θ. In particular, the atoms of Xn will be located at the atom locations
of Θ, which are fixed when we condition on Θ:
X_n := Σ_{k=1}^{K} x_{n,k} δ_{ψ_k}.
Here, xn,k is drawn according to some distribution H that may take θk , the
weight of Θ at location ψk , as a parameter; i.e.,
x_{n,k} ∼ H(dx|θ_k) independently across n and k.   (8)
Note that while every atom of Xn is located at an atom of Θ, it is not
necessarily the case that every atom of Θ has a corresponding atom in Xn . In
particular, if xn,k is zero for any k, there is no atom in Xn at ψk .
¹ Recall that K_ord may be finite or infinity depending on ν and is random when taking finite values.
We highlight that the model above stands in contrast to Bayesian nonparametric partition models, for which there is a large literature. In partition models
(or clustering models), Θ is a random probability measure [Ferguson, 1974]; in
this case, the probability constraint precludes Θ from being a completely random measure, but it is often chosen to be a normalized completely random
measure [James et al., 2009, Lijoi and Prünster, 2010]. The choice of Dirichlet process (a normalized gamma process) for Θ is particularly popular due
to a number of useful properties that coincide in this single choice [Doksum,
1974, Escobar, 1994, Escobar and West, 1995, 1998, Ferguson, 1973, Lo, 1984,
MacEachern, 1994, Perman et al., 1992, Pitman, 1996a,b, Sethuraman, 1994,
West and Escobar, 1994]. In partition models, Xn is a draw from the probability distribution described by Θ. If we think of such Xn as a random measure,
it is a.s. a single unit mass at a point ψ with strictly positive probability in Θ.
One potential connection between these two types of models is provided
by combinatorial clustering [Broderick et al., 2015]. In partition models, we
might suppose that we have a number of data sets, all of which we would like
to partition. For instance, in a document modeling scenario, each document
might be a data set; in particular each data point is a word in the document.
And we might wish to partition the words in each document. An alternative
perspective is to suppose that there is a single data set, where each data point
is a document. Then the document exhibits traits with multiplicities, where the
multiplicities might be the number of words from each trait; typically a trait
in this application would be a topic. In this case, there are a number of other
names besides feature or trait model that may be applied to the overarching
model—such as admixture model or mixed membership model [Airoldi et al.,
2014].
2.3 Bayesian nonparametrics
So far we have described a prior and likelihood that may be used to form a
Bayesian model. We have already stated above that forming a Bayesian nonparametric model imposes some restrictions on the prior and likelihood. We
formalize these restrictions in Assumptions A0, A1, and A2 below.
Recall that the premise of Bayesian nonparametrics is that the number of
traits represented in a collection of data can grow with the number of data
points. More explicitly, we achieve the desideratum that the number of traits is
unbounded, and may always grow as new data points are collected, by modeling
a countable infinity of traits. This assumption requires that the prior have
a countable infinity of atoms. These must either be fixed-location atoms or
ordinary component atoms. Fixed-location atoms represent known traits in
some sense since we must know the fixed locations of the atoms in advance.
Conversely, ordinary component atoms represent unknown traits, as yet to be
discovered, since both their locations and associated rates are unknown a priori.
Since we cannot know (or represent) a countable infinity of traits a priori, we
cannot start with a countable infinity of fixed-location atoms.
A0. The number of fixed-location atoms in Θ is finite.
Since we require a countable infinity of traits in total and they cannot come
from the fixed-location atoms by Assumption A0, the ordinary component must
contain a countable infinity of atoms. This assumption will be true if and only
if the rate measure on the trait frequencies has infinite mass.
A1. ν(R+ ) = ∞.
Finally, an implicit part of the starting premise is that each data point be
allocated to only a finite number of traits; we do not expect to glean an infinite
amount of information from finitely represented data. Thus, we require that
the number of atoms in every Xn be finite. By Assumption A0, the number
of atoms in Xn that correspond to fixed-location atoms in Θ is finite. But by
Assumption A1, the number of atoms in Θ from the ordinary component is
infinite. So there must be some restriction on the distribution of values of X at
the atoms of Θ (that is, some restriction on H in Eq. (8)) such that only finitely
many of these values are nonzero.
In particular, note that if H(dx|θ) does not contain an atom at zero for any
θ, then a.s. every one of the countable infinity of atoms of X will be nonzero.
Conversely, it follows that, for our desiderata to hold, we must have that H(dx|θ)
exhibits an atom at zero. One consequence of this observation is that H(dx|θ)
cannot be purely continuous for all θ. Though this line of reasoning does not
necessarily preclude a mixed continuous and discrete H, we henceforth assume
that H(dx|θ) is discrete, with support Z∗ = {0, 1, 2, . . .}, for all θ.
In what follows, we write h(x|θ) for the probability mass function of x given
θ. So our requirement that each data point be allocated to only a finite number of traits translates into a requirement that the number of atoms of Xn
with values in Z+ = {1, 2, . . .} be finite. Note that, by construction, the pairs
{(θ_ord,k, x_ord,k)}_{k=1}^{K_ord} form a marked Poisson point process with rate measure
µmark (dθ × dx) := ν(dθ)h(x|θ). And the pairs with xord,k equal to any particular value x ∈ Z+ further form a thinned Poisson point process with rate
measure νx (dθ) := ν(dθ)h(x|θ). In particular, the number of atoms of X with
weight x is Poisson-distributed with mean νx (R+ ). So the number of atoms of
X is finite if and only if the following assumption holds.2
A2. Σ_{x=1}^{∞} ν_x(R+) < ∞ for ν_x := ν(dθ)h(x|θ).
Thus Assumptions A0, A1, and A2 capture our Bayesian nonparametric
desiderata. We illustrate the development so far with an example.
Example 2.1. The beta process [Hjort, 1990] provides an example distribution
for Θ. In its most general form, sometimes called the three-parameter beta
² When we have the more general case of a mixed continuous and discrete H, Assumption A2 becomes A2b: ∫_{x>0} ∫_{θ∈R+} ν(dθ)H(dx|θ) < ∞.
process [Teh and Görür, 2009, Broderick et al., 2012], the beta process has an
ordinary component whose weight rate measure has a beta distribution kernel,
ν(dθ) = γ θ^(−α−1) (1 − θ)^(c+α−1) dθ,   (9)
with support on (0, 1]. Here, the three fixed hyperparameters are γ, the mass
parameter ; c, the concentration parameter ; and α, the discount parameter.3
Moreover, each of its Kf ix fixed-location atoms, θk δψk , has a beta-distributed
weight [Broderick et al., 2015]:
θf ix,k ∼ Beta(θ|ρf ix,k , σf ix,k ),
(10)
where ρf ix,k , σf ix,k > 0 are fixed hyperparameters of the model.
By Assumption A0, Kf ix is finite. By Assumption A1, ν(R+ ) = ∞. To
achieve this infinite-mass restriction, the beta kernel in Eq. (9) must be improper; i.e., either −α ≤ 0 or c + α ≤ 0. Also, note that we must have γ > 0
since ν is a measure (and the case γ = 0 would be trivial).
Often the beta process is used as a prior paired with a Bernoulli process
likelihood [Thibaux and Jordan, 2007]. The Bernoulli process specifies that, given Θ = Σ_{k=1}^{∞} θ_k δ_{ψ_k}, we draw

x_{n,k} ∼ Bern(x|θ_k), independently across n and k,

which is well-defined since every atom weight θ_k of Θ is in (0, 1] by the beta process construction. Thus,

X_n = Σ_{k=1}^{∞} x_{n,k} δ_{ψ_k}.
The marginal distribution of the X1:N in this case is often called an Indian buffet process [Griffiths and Ghahramani, 2006, Thibaux and Jordan, 2007]. The
locations of atoms in Xn are thought of as the dishes sampled by the nth customer.
We take a moment to highlight the fact that continuous distributions for
H(dx|θ) are precluded based on the Bayesian nonparametric desiderata by considering an alternative likelihood. Consider instead if H(dx|θ) were continuous
here. Then X1 would have atoms at every atom of Θ. In the Indian buffet process analogy, any customer would sample an infinite number of dishes,
which contradicts our assumption that our data are finite. Indeed, any customer would sample all of the dishes at once. It is quite often the case in
practical applications, though, that the Xn are merely latent variables, with
the observed variables chosen according to a (potentially continuous) distribution given Xn [Griffiths and Ghahramani, 2006, Thibaux and Jordan, 2007];
3 In [Teh and Görür, 2009, Broderick et al., 2012], the ordinary component features the
beta distribution kernel in Eq. (9) multiplied not only by γ but also by a more complex, positive, real-valued expression in c and α. Since all of γ, c, and α are fixed hyperparameters, and
γ is an arbitrary positive real value, any other constant factors containing the hyperparameters
can be absorbed into γ, as in the main text here.
consider, e.g., mixture and admixture models. These cases are not precluded
by our development.
Finally, then, we may apply Assumption A2, which specifies that the number
of atoms in each observation Xn is finite; in this case, the assumption means
Σ_{x=1}^{∞} ∫_{θ∈R+} ν(dθ) · h(x|θ) = ∫_{θ∈(0,1]} ν(dθ) · h(1|θ)

since θ is supported on (0, 1] and x is supported on {0, 1}

= ∫_{θ∈(0,1]} γ θ^(−α−1) (1 − θ)^(c+α−1) dθ · θ = γ ∫_{θ∈(0,1]} θ^(1−α−1) (1 − θ)^(c+α−1) dθ < ∞.
The integral here is finite if and only if 1 − α and c + α are the parameters of a
proper beta distribution: i.e., if and only if α < 1 and c > −α. Together with
the restrictions above, these restrictions imply the following allowable parameter
ranges for the beta process fixed hyperparameters:
γ > 0,   α ∈ [0, 1),   c > −α,   ρ_fix,k, σ_fix,k > 0 for all k ∈ [K_fix].   (11)
These correspond to the hyperparameter ranges previously found in [Teh and Görür,
2009, Broderick et al., 2012].
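As an informal illustration (not from the paper), one can draw an approximate beta process paired with Bernoulli process likelihoods by truncating the ordinary component to weights above a small ε; the parameter names follow Example 2.1, but the truncation scheme is our own simplification.

```python
import numpy as np

rng = np.random.default_rng(3)
gamma_, c, alpha, eps = 3.0, 1.0, 0.0, 1e-4   # mass, concentration, discount, truncation level
N = 5                                          # number of data points

# truncated beta process ordinary component:
# nu(dtheta) = gamma * theta^(-alpha-1) * (1 - theta)^(c+alpha-1) dtheta on (eps, 1]
grid = np.linspace(eps, 1.0, 50_000)
dens = gamma_ * grid ** (-alpha - 1.0) * (1.0 - grid) ** (c + alpha - 1.0)
mass = dens.sum() * (grid[1] - grid[0])
K = rng.poisson(mass)
theta = rng.choice(grid, size=K, p=dens / dens.sum())   # atom weights in (eps, 1]

# Bernoulli process likelihood: x_{n,k} ~ Bern(theta_k), independently across n and k
X = rng.binomial(1, theta, size=(N, K))
print("atoms kept:", K, "| dishes per customer:", X.sum(axis=1))
```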
3 Posteriors
In Section 2, we defined a full Bayesian model consisting of a CRM prior for Θ
and a CRM likelihood for an observation X conditional on Θ. Now we would
like to calculate the posterior distribution of Θ|X.
Theorem 3.1 (Bayesian nonparametric posteriors). Let Θ be a completely random measure that satisfies Assumptions A0 and A1; that is, Θ is a CRM with
Kf ix fixed atoms such that Kf ix < ∞ and such that the kth atom can be written
θf ix,k δψf ix,k with
θ_fix,k ∼ F_fix,k(dθ), independently across k,
for proper distribution Ff ix,k and deterministic ψf ix,k . Let the ordinary component of Θ have rate measure
µ(dθ × dψ) = ν(dθ) · G(dψ),
where G is a proper distribution and ν(R+) = ∞. Write Θ = Σ_{k=1}^{∞} θ_k δ_{ψ_k}, and let X be generated conditional on Θ according to X = Σ_{k=1}^{∞} x_k δ_{ψ_k} with x_k ∼ h(x|θ_k), independently across k, for proper, discrete probability mass function h. And suppose X and Θ jointly satisfy Assumption A2 so that

Σ_{x=1}^{∞} ∫_{θ∈R+} ν(dθ)h(x|θ) < ∞.
Then let Θpost be a random measure with the distribution of Θ|X. Θpost is
a completely random measure with three parts.
1. For each k ∈ [Kf ix ], Θpost has a fixed-location atom at ψf ix,k with weight
θpost,f ix,k distributed according to the finite-dimensional posterior Fpost,f ix,k (dθ)
that comes from prior Ff ix,k , likelihood h, and observation X({ψf ix,k }).
2. Let {xnew,k δψnew,k : k ∈ [Knew ]} be the atoms of X that are not at fixed
locations in the prior of Θ. Knew is finite by Assumption A2. Then Θpost
has a fixed-location atom at ψnew,k with random weight θpost,new,k , whose
distribution Fpost,new,k (dθ) is proportional to
ν(dθ)h(xnew,k |θ).
3. The ordinary component of Θpost has rate measure
νpost (dθ) := ν(dθ)h(0|θ).
Proof. To prove the theorem, we consider in turn each of the two parts of
the prior: the fixed-location component and the ordinary component. First,
consider any fixed-location atom, θf ix,k δψf ix,k , in the prior. All of the other
fixed-location atoms in the prior, as well as the prior ordinary component, are
independent of the random weight θf ix,k . So it follows that all of X except
xf ix,k := X({ψf ix,k }) is independent of θf ix,k . Thus the posterior has a fixed
atom located at ψf ix,k whose weight, which we denote θpost,f ix,k , has distribution
Fpost,f ix,k (dθ) ∝ Ff ix,k (dθ)h(xf ix,k |θ),
which follows from the usual finite Bayes Theorem.
Next, consider the ordinary component in the prior. Let
Ψf ix = {ψf ix,1 , . . . , ψf ix,Kf ix }
be the set of fixed-location atoms in the prior. Recall that Ψf ix is deterministic,
and since G is continuous, all of the fixed-location atoms and ordinary component atoms of Θ are at a.s. distinct locations. So the measure Xf ix defined
by
Xf ix (A) := X(A ∩ Ψf ix )
can be derived purely from X, without knowledge of Θ. It follows that the
measure Xord defined by
Xord (A) := X(A ∩ (Ψ\Ψf ix ))
can be derived purely from X without knowledge of Θ. Xord is the same as
the observed data measure X but with atoms only at atoms of the ordinary
component of Θ and not at the fixed-location atoms of Θ.
Now for any value x ∈ Z+ , let
{ψnew,x,1 , . . . , ψnew,x,Knew,x }
be all of the locations of atoms of size x in Xord. By Assumption A2, the number
of such atoms, Knew,x , is finite. Further let θnew,x,k := Θ({ψnew,x,k }). Then
the values {θ_new,x,k}_{k=1}^{K_new,x} are generated from a thinned Poisson point process with rate measure

ν_x(dθ) := ν(dθ)h(x|θ).   (12)
And since νx (R+ ) < ∞ by assumption, each θnew,x,k has distribution equal to
the normalized rate measure in Eq. (12). Note that θnew,x,k δψnew,x,k is a fixed-location atom in the posterior now that its location is known from the observed
Xord .
By contrast, if a likelihood draw at an ordinary component atom in the prior
returns a zero, that atom is not observed in Xord . Such atom weights in Θpost
thus form a marked Poisson point process with rate measure
ν(dθ)h(0|θ),
as was to be shown.
In Theorem 3.1, we consider generating Θ and then a single data point
X conditional on Θ. Now suppose we generate Θ and then N data points,
X1 , . . . , XN , iid conditional on Θ. In this case, Theorem 3.1 may be iterated to
find the posterior Θ|X1:N . In particular, Theorem 3.1 gives the ordinary component and fixed atoms of the random measure Θ1 := Θ|X1 . Then, using Θ1 as
the prior measure and X2 as the data point, another application of Theorem 3.1
gives Θ2 := Θ|X1:2 . We continue recursively using Θ|X1:n for n between 1 and
N − 1 as the prior measure until we find Θ|X1:N . The result is made explicit in
the following corollary.
Corollary 3.2 (Bayesian nonparametric posteriors given multiple data points).
Let Θ be a completely random measure that satisfies Assumptions A0 and A1;
that is, Θ is a CRM with Kf ix fixed atoms such that Kf ix < ∞ and such that
the kth atom can be written θf ix,k δψf ix,k with
θ_fix,k ∼ F_fix,k(dθ), independently across k,
for proper distribution Ff ix,k and deterministic ψf ix,k . Let the ordinary component of Θ have rate measure
µ(dθ × dψ) = ν(dθ) · G(dψ),
where G is a proper distribution and ν(R+) = ∞. Write Θ = Σ_{k=1}^{∞} θ_k δ_{ψ_k}, and let X_1, . . . , X_N be generated conditional on Θ according to X_n = Σ_{k=1}^{∞} x_{n,k} δ_{ψ_k} with x_{n,k} ∼ h(x|θ_k), independently across n and k, for proper, discrete probability mass function h. And suppose X_1 and Θ jointly satisfy Assumption A2 so that

Σ_{x=1}^{∞} ∫_{θ∈R+} ν(dθ)h(x|θ) < ∞.
It is enough to make the assumption for X1 since the Xn are iid conditional on
Θ.
Then let Θpost be a random measure with the distribution of Θ|X1:N . Θpost
is a completely random measure with three parts.
1. For each k ∈ [Kf ix ], Θpost has a fixed-location atom at ψf ix,k with weight
θpost,f ix,k distributed according to the finite-dimensional posterior Fpost,f ix,k (dθ)
that comes from prior Ff ix,k , likelihood h, and observation X({ψf ix,k }).
2. Let {ψnew,k : k ∈ [Knew ]} be the union of atom locations across X1 , X2 , . . . , XN
minus the fixed locations in the prior of Θ. Knew is finite. Let xnew,n,k
be the weight of the atom in Xn located at ψnew,k . Note that at least one
of xnew,n,k across n must be non-zero, but in general xnew,n,k may equal
zero. Then Θpost has a fixed-location atom at ψnew,k with random weight
θpost,new,k , whose distribution Fpost,new,k (dθ) is proportional to
ν(dθ) Π_{n=1}^{N} h(x_new,n,k|θ).
3. The ordinary component of Θpost has rate measure
ν_post,n(dθ) := ν(dθ) [h(0|θ)]^n .
Proof. Corollary 3.2 follows from recursive application of Theorem 3.1. In order
to recursively apply Theorem 3.1, we need to verify that Assumptions A0, A1,
and A2 hold for the posterior Θ|X1:(n+1) when they hold for the prior Θ|X1:n .
Note that the number of fixed atoms in the posterior is the number of fixed atoms
in the prior plus the number of new atoms in the posterior. By Theorem 3.1,
these counts are both finite as long as Θ|X1:n satisfies Assumptions A0 and A2,
which both hold for n = 0 by assumption and n > 0 by the recursive assumption.
So Assumption A0 holds for Θ|X1:(n+1) .
Next we notice that since Assumption A1 implies that there is an infinite
number of ordinary component atoms in Θ|X1:n and only finitely many become
fixed atoms in the posterior by Assumption A2, it must be that Θ|X1:(n+1)
has infinitely many ordinary component atoms. So Assumption A1 holds for
Θ|X1:(n+1) .
Finally, we note that
Σ_{x=1}^{∞} ∫_{θ∈R+} ν_post,n(dθ)h(x|θ) = Σ_{x=1}^{∞} ∫_{θ∈R+} ν(dθ) [h(0|θ)]^n h(x|θ) ≤ Σ_{x=1}^{∞} ∫_{θ∈R+} ν(dθ)h(x|θ) < ∞,
where the penultimate inequality follows since h(0|θ) ∈ [0, 1] and where the final inequality follows by Assumption A2 on the original Θ (conditioned on no data).
So Assumption A2 holds for Θ|X1:(n+1) .
We now illustrate the results of the theorem with an example.
Example 3.3. Suppose we again start with a beta process prior for Θ as
in Example 2.1. This time we consider a negative binomial process likelihood
[Zhou et al., 2012, Broderick et al., 2015]. The negative binomial process specifies that, given Θ = Σ_{k=1}^{∞} θ_k δ_{ψ_k}, we draw X = Σ_{k=1}^{∞} x_k δ_{ψ_k} with

x_k ∼ NegBin(x|r, θ_k), independently across k,

for some fixed hyperparameter r > 0. So

X_n = Σ_{k=1}^{∞} x_{n,k} δ_{ψ_k}.
In this case, Assumption A2 translates into the following restriction.
Σ_{x=1}^{∞} ∫_{θ∈R+} ν(dθ) · h(x|θ) = ∫_{θ∈R+} ν(dθ) · [1 − h(0|θ)] = ∫_{θ∈(0,1]} γ θ^(−α−1) (1 − θ)^(c+α−1) dθ · [1 − (1 − θ)^r] < ∞,
where the penultimate equality follows since the support of ν(dθ) is (0, 1].
By a Taylor expansion, we have 1 − (1 − θ)^r = rθ + o(θ) as θ → 0, so we require

∫_{θ∈(0,1]} θ^(1−α−1) (1 − θ)^(c+α−1) dθ < ∞,
which is satisfied if and only if 1 − α and c + α are the parameters of a proper
beta distribution. Thus, we have the same parameter restrictions as in Eq. (11).
Now we calculate the posterior given the beta process prior on Θ and the
negative binomial process likelihood for X conditional on Θ. In particular,
the posterior has the distribution of Θpost , a CRM with three parts given by
Theorem 3.1.
First, at each fixed atom ψf ix,k of the prior with weight θf ix,k given by
Eq. (10), there is a fixed atom in the posterior with weight θpost,f ix,k . Let
xpost,f ix,k := X({ψf ix,k }). Then θpost,f ix,k has distribution
F_post,fix,k(dθ) ∝ F_fix,k(dθ) · h(x_post,fix,k|θ)
= Beta(θ|ρ_fix,k, σ_fix,k) dθ · NegBin(x_post,fix,k|r, θ)
∝ θ^(ρ_fix,k−1) (1 − θ)^(σ_fix,k−1) dθ · θ^(x_post,fix,k) (1 − θ)^r
∝ Beta(θ|ρ_fix,k + x_post,fix,k, σ_fix,k + r) dθ.   (13)
Second, for any atom xnew,k δψnew,k in X that is not at a fixed location in the
prior, Θpost has a fixed atom at ψnew,k whose weight θpost,new,k has distribution
F_post,new,k(dθ) ∝ ν(dθ) · h(x_new,k|θ)
= ν(dθ) · NegBin(x_new,k|r, θ)
∝ θ^(−α−1) (1 − θ)^(c+α−1) dθ · θ^(x_new,k) (1 − θ)^r
∝ Beta(θ|−α + x_new,k, c + α + r) dθ,   (14)
which is a proper distribution since we have the following restrictions on its
parameters. For one, by assumption, xnew,k ≥ 1. And further, by Eq. (11), we
have α ∈ [0, 1) as well as c + α > 0 and r > 0.
Third, the ordinary component of Θpost has rate measure
ν(dθ)h(0|θ) = γ θ^(−α−1) (1 − θ)^(c+α−1) dθ · (1 − θ)^r = γ θ^(−α−1) (1 − θ)^(c+r+α−1) dθ.
Not only have we found the posterior distribution Θpost above, but now
we can note that the posterior is in the same form as the prior with updated
ordinary component hyperparameters:
γpost = γ,
αpost = α,
cpost = c + r.
The posterior also has old and new beta-distributed fixed atoms with beta distribution hyperparameters given in Eq. (13) and Eq. (14), respectively. Thus,
we have proven that the beta process is, in fact, conjugate to the negative binomial process. An alternative proof was first given by Broderick et al. [2015].
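As a small illustrative sketch (ours, not the authors'), the conjugate bookkeeping of Example 3.3 reduces to updating a handful of numbers: the ordinary-component hyperparameters and one beta distribution per observed atom.

```python
def beta_process_negbin_posterior(gamma, alpha, c, r, fixed_atoms, new_atom_counts):
    """Posterior hyperparameters for a beta process prior with a negative binomial
    process likelihood (Example 3.3). `fixed_atoms` pairs a prior fixed atom's
    (rho, sigma) with its observed count x; `new_atom_counts` lists counts x >= 1 at
    atoms of X that are not fixed in the prior."""
    ordinary = {"gamma": gamma, "alpha": alpha, "c": c + r}                  # gamma_post, alpha_post, c_post
    fixed_post = [(rho + x, sigma + r) for (rho, sigma), x in fixed_atoms]   # Eq. (13): Beta(rho + x, sigma + r)
    new_post = [(-alpha + x, c + alpha + r) for x in new_atom_counts]        # Eq. (14): Beta(-alpha + x, c + alpha + r)
    return ordinary, fixed_post, new_post

# toy usage
print(beta_process_negbin_posterior(
    gamma=1.0, alpha=0.3, c=2.0, r=5,
    fixed_atoms=[((1.0, 1.0), 3), ((2.0, 0.5), 0)],
    new_atom_counts=[2, 7]))
```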
As in Example 3.3, we can use Theorem 3.1 not only to calculate posteriors but also, once those posteriors are calculated, to check for conjugacy. This
approach unifies existing disparate approaches to Bayesian nonparametric conjugacy. However, it still requires the practitioner to guess the right conjugate
prior for a given likelihood. In the next section, we define a notion of exponential families for CRMs, and we show how to automatically construct a conjugate
prior for any exponential family likelihood.
4 Exponential families
Exponential families are what typically make conjugacy so powerful in the finite
case. For one, when a finite likelihood belongs to an exponential family, then existing results give an automatic conjugate, exponential family prior for that likelihood. In this section, we review finite exponential families, define exponential
CRMs, and show that analogous automatic conjugacy results can be obtained
for exponential CRMs. Our development of exponential CRMs will also allow
particularly straightforward results for size-biased representations (Corollary 5.2
in Section 5) and marginal processes (Corollary 6.2 in Section 6).
In the finite-dimensional case, suppose we have some (random) parameter θ
and some (random) observation x whose distribution is conditioned on θ. We
say the distribution Hexp,like of x conditional on θ is in an exponential family if
H_exp,like(dx|θ) = h_exp,like(x|θ) µ(dx) = κ(x) exp{⟨η(θ), φ(x)⟩ − A(θ)} µ(dx),   (15)
where η(θ) is the natural parameter, φ(x) is the sufficient statistic, κ(x) is the
base density, and A(θ) is the log partition function. We denote the density
of Hexp,like here, which exists by definition, by hexp,like . The measure µ—
with respect to which the density hexp,like exists—is typically Lebesgue measure
when Hexp,like is diffuse or counting measure when Hexp,like is atomic. A(θ) is
determined by the condition that Hexp,like (dx|θ) have unit total mass on its
support.
It is a classic result [Diaconis and Ylvisaker, 1979] that the following distribution for θ ∈ RD constitutes a conjugate prior:
F_exp,prior(dθ) = f_exp,prior(θ) dθ = exp{⟨ξ, η(θ)⟩ + λ[−A(θ)] − B(ξ, λ)} dθ.   (16)
Fexp,prior is another exponential family distribution, now with natural parameter (ξ ′ , λ)′ , sufficient statistic (η(θ)′ , −A(θ))′ , and log partition function B(ξ, λ).
Note that the logarithms of the densities in both Eq. (15) and Eq. (16) are linear
in η(θ) and −A(θ). So, by Bayes Theorem, the posterior Fexp,post also has these
quantities as sufficient statistics in θ, and we can see Fexp,post must have the
following form.
Fexp,post (dθ|x) = fexp,post (θ|x) dθ
= exp{⟨ξ + φ(x), η(θ)⟩ + (λ + 1)[−A(θ)] − B(ξ + φ(x), λ + 1)} dθ.   (17)
Thus we see that Fexp,post belongs to the same exponential family as Fexp,prior
in Eq. (16), and hence Fexp,prior is a conjugate prior for Hexp,like in Eq. (15).
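A compact sketch (ours) of the natural-parameter bookkeeping in Eqs. (15)–(17), using the Bernoulli/beta pair from Section 3 as the concrete instance; here φ(x) = x and each observation adds 1 to λ.

```python
import numpy as np

def exp_family_posterior(xi, lam, xs, phi=lambda x: x):
    """Conjugate update of Eq. (17): xi <- xi + sum_n phi(x_n), lam <- lam + N."""
    return xi + sum(phi(x) for x in xs), lam + len(xs)

# Bernoulli likelihood with natural parameter eta(theta) = log(theta / (1 - theta))
# and -A(theta) = log(1 - theta); the conjugate prior (16) is then a beta distribution
# with shape parameters (xi + 1, lam - xi + 1).
xi, lam = 1.0, 4.0
xs = np.random.default_rng(2).binomial(1, 0.7, size=30)
print(exp_family_posterior(xi, lam, xs))
```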
4.1 Exponential families for completely random measures
In the finite-dimensional case, we saw that for any exponential family likelihood,
as in Eq. (15), we can always construct a conjugate exponential family prior,
given by Eq. (16).
In order to prove a similar result for CRMs, we start by defining a notion of
exponential families for CRMs.
Definition 4.1. We say that a CRM Θ is an exponential CRM if it has the
following two parts. First, let Θ have Kf ix fixed-location atoms, where Kf ix
may be finite or infinite. The kth fixed-location atom is located at any ψf ix,k ,
unique from the other fixed locations, and has random weight θf ix,k , whose
distribution has density ff ix,k :
f_fix,k(θ) = κ(θ) exp{⟨η(ζ_k), φ(θ)⟩ − A(ζ_k)},
for some base density κ, natural parameter function η, sufficient statistic φ,
and log partition function A shared across atoms. Here, ζk is an atom-specific
parameter.
Second, let Θ have an ordinary component with rate measure µ(dθ × dψ) =
ν(dθ) · G(dψ) for some proper distribution G and weight rate measure ν of the
form
ν(dθ) = γ exp{⟨η(ζ), φ(θ)⟩}.
In particular, η and φ are shared with the fixed-location atoms, and fixed hyperparameters γ and ζ are unique to the ordinary component.
17
4.2 Automatic conjugacy for completely random measures
With Definition 4.1 in hand, we can specify an automatic Bayesian nonparametric conjugate prior for an exponential CRM likelihood.
Theorem 4.2 (Automatic conjugacy). Let Θ = Σ_{k=1}^{∞} θ_k δ_{ψ_k}, in accordance with Assumption A1. Let X be generated conditional on Θ according to an exponential CRM with fixed-location atoms at {ψ_k}_{k=1}^{∞} and no ordinary component. In particular, the distribution of the weight x_k at ψ_k of X has the following density conditional on the weight θ_k at ψ_k of Θ:

h(x|θ_k) = κ(x) exp{⟨η(θ_k), φ(x)⟩ − A(θ_k)}.
Then a conjugate prior for Θ is the following exponential CRM distribution.
First, let Θ have Kprior,f ix fixed-location atoms, in accordance with Assumption A0. The kth such atom has random weight θf ix,k with proper density
f_prior,fix,k(θ) = exp{⟨ξ_fix,k, η(θ)⟩ + λ_fix,k[−A(θ)] − B(ξ_fix,k, λ_fix,k)},
where (η ′ , −A)′ here is the sufficient statistic and B is the log partition function.
ξf ix,k and λf ix,k are fixed hyperparameters for this atom weight.
Second, let Θ have ordinary component characterized by any proper distribution G and weight rate measure
ν(dθ) = γ exp{⟨ξ, η(θ)⟩ + λ[−A(θ)]},
where γ, ξ, and λ are fixed hyperparameters of the weight rate measure chosen
to satisfy Assumptions A1 and A2.
Proof. To prove the conjugacy of the prior for Θ with the likelihood for X, we
calculate the posterior distribution of Θ|X using Theorem 3.1. Let Θpost be
a CRM with the distribution of Θ|X. Then, by Theorem 3.1, Θpost has the
following three parts.
First, at any fixed location ψf ix,k in the prior, let xf ix,k be the value of X
at that location. Then Θpost has a fixed-location atom at ψf ix,k , and its weight
θpost,f ix,k has distribution
F_post,fix,k(dθ) ∝ f_prior,fix,k(θ) dθ · h(x_fix,k|θ)
∝ exp{⟨ξ_fix,k, η(θ)⟩ + λ_fix,k[−A(θ)]} · exp{⟨η(θ), φ(x_fix,k)⟩ − A(θ)} dθ
= exp{⟨ξ_fix,k + φ(x_fix,k), η(θ)⟩ + (λ_fix,k + 1)[−A(θ)]} dθ.
It follows, from putting in the normalizing constant, that the distribution of
θpost,f ix,k has density
f_post,fix,k(θ) = exp{⟨ξ_fix,k + φ(x_fix,k), η(θ)⟩ + (λ_fix,k + 1)[−A(θ)] − B(ξ_fix,k + φ(x_fix,k), λ_fix,k + 1)}.
Second, for any atom xnew,k δψnew,k in X that is not at a fixed location in the
prior, Θpost has a fixed atom at ψnew,k whose weight θpost,new,k has distribution
F_post,new,k(dθ) ∝ ν(dθ) · h(x_new,k|θ)
∝ exp{⟨ξ, η(θ)⟩ + λ[−A(θ)]} · exp{⟨η(θ), φ(x_new,k)⟩ − A(θ)} dθ
= exp{⟨ξ + φ(x_new,k), η(θ)⟩ + (λ + 1)[−A(θ)]} dθ
and hence density
f_post,new,k(θ) = exp{⟨ξ + φ(x_new,k), η(θ)⟩ + (λ + 1)[−A(θ)] − B(ξ + φ(x_new,k), λ + 1)}.
Third, the ordinary component of Θpost has weight rate measure
ν(dθ) · h(0|θ)
= γ exp{⟨ξ, η(θ)⟩ + λ[−A(θ)]} · κ(0) exp{⟨η(θ), φ(0)⟩ − A(θ)}
= γ κ(0) · exp{⟨ξ + φ(0), η(θ)⟩ + (λ + 1)[−A(θ)]}.
Thus, the posterior rate measure is in the same exponential CRM form as
the prior rate measure with updated hyperparameters:
γpost = γκ(0),
ξpost = ξ + φ(0),
λpost = λ + 1.
Since we see that the posterior fixed-location atoms are likewise in the same
exponential CRM form as the prior, we have shown that conjugacy holds, as
desired.
We next use Theorem 4.2 to give proofs of conjugacy in cases where conjugacy has not previously been established in the Bayesian nonparametrics literature.
Example 4.3. Let X be generated according to a Poisson likelihood process conditional on Θ. (We use the term “Poisson likelihood process” to distinguish this specific Bayesian nonparametric likelihood from the Poisson point process.) That is, X = Σ_{k=1}^{∞} x_k δ_{ψ_k} conditional on Θ = Σ_{k=1}^{∞} θ_k δ_{ψ_k} has an exponential CRM distribution with only a fixed-location component. The weight x_k at location ψ_k has support on Z∗ and has a Poisson density with parameter θ_k ∈ R+:

h(x|θ_k) = (1/x!) θ_k^x e^(−θ_k) = (1/x!) exp{x log(θ_k) − θ_k}.   (18)
The final line is rewritten to emphasize the exponential family form of this
density, with
κ(x) = 1/x!,   φ(x) = x,   η(θ) = log(θ),   A(θ) = θ.
By Theorem 4.2, this Poisson likelihood process has a Bayesian nonparametric
conjugate prior for Θ with two parts.
First, Θ has a set of Kprior,f ix fixed-location atoms, where Kprior,f ix < ∞
by Assumption A0. The kth such atom has random weight θf ix,k with density
f_prior,fix,k(θ) = exp{⟨ξ_fix,k, η(θ)⟩ + λ_fix,k[−A(θ)] − B(ξ_fix,k, λ_fix,k)}
= θ^(ξ_fix,k) e^(−λ_fix,k θ) exp{−B(ξ_fix,k, λ_fix,k)}
= Gamma(θ|ξ_fix,k + 1, λ_fix,k),   (19)
where Gamma(θ|a, b) denotes the gamma density with shape parameter a > 0
and rate parameter b > 0. So we must have fixed hyperparameters ξf ix,k > −1
and λf ix,k > 0. Further,
exp{−B(ξ_fix,k, λ_fix,k)} = λ_fix,k^(ξ_fix,k + 1) / Γ(ξ_fix,k + 1)
to ensure normalization.
Second, Θ has an ordinary component characterized by any proper distribution G and weight rate measure
ν(dθ) = γ exp{⟨ξ, η(θ)⟩ + λ[−A(θ)]} dθ = γ θ^ξ e^(−λθ) dθ.   (20)
Note that Theorem 4.2 guarantees that the weight rate measure will have the
same distributional kernel in θ as the fixed-location atoms.
Finally, we need to choose the allowable hyperparameter ranges for γ, ξ, and λ. First, γ > 0 to ensure ν is a measure. By Assumption A1, we must have ν(R_+) = ∞, so ν must represent an improper gamma distribution. As such, we require either ξ + 1 ≤ 0 or λ ≤ 0. By Assumption A2, we must have

Σ_{x=1}^∞ ∫_{θ∈R_+} ν(dθ) · h(x|θ) = ∫_{θ∈R_+} ν(dθ) · [1 − h(0|θ)] = ∫_{θ∈R_+} γ θ^ξ e^{−λθ} (1 − e^{−θ}) dθ < ∞.

To ensure the integral over [1, ∞) is finite, we must have λ > 0. To ensure the integral over (0, 1) is finite, we note that 1 − e^{−θ} = θ + o(θ) as θ → 0. So we require

∫_{θ∈(0,1)} γ θ^{ξ+1} e^{−λθ} dθ < ∞,

which is satisfied if and only if ξ + 2 > 0.
Finally, then, the hyperparameter restrictions can be summarized as:

γ > 0,   ξ ∈ (−2, −1],   λ > 0;   ξ_{fix,k} > −1 and λ_{fix,k} > 0 for all k ∈ [K_{prior,fix}].
The ordinary component of the conjugate prior for Θ discovered in this
example is typically called a gamma process. Here, we have for the first time
specified the distribution of the fixed-location atoms of the gamma process and,
also for the first time, proved that the gamma process is conjugate to the Poisson
likelihood process. We highlight this result as a corollary to Theorem 4.2.
Corollary 4.4. Let the Poisson likelihood process be a CRM with fixed-location
atom weight distributions as in Eq. (18). Let the gamma process be a CRM with
fixed-location atom weight distributions as in Eq. (19) and ordinary component
weight measure as in Eq. (20). Then the gamma process is a conjugate Bayesian
nonparametric prior for the Poisson likelihood process.
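At a single fixed-location atom, Corollary 4.4 reduces to the familiar finite-dimensional gamma–Poisson conjugacy: a Gamma(ξ_fix + 1, λ_fix) weight observed through a Poisson count x has posterior Gamma(ξ_fix + x + 1, λ_fix + 1). The following sketch is a small sanity check of that reduction; the hyperparameter values are arbitrary illustrations of ours.

```python
import numpy as np
from scipy import stats

# Gamma-Poisson conjugacy at one fixed-location atom (illustrative values).
xi_fix, lam_fix = 0.5, 2.0            # prior weight ~ Gamma(shape=xi_fix+1, rate=lam_fix)
rng = np.random.default_rng(0)

theta = rng.gamma(shape=xi_fix + 1.0, scale=1.0 / lam_fix)   # draw a prior weight
x = rng.poisson(theta)                                       # observed weight at the atom

# Posterior from the corollary: Gamma(shape=xi_fix + x + 1, rate=lam_fix + 1).
post = stats.gamma(a=xi_fix + x + 1.0, scale=1.0 / (lam_fix + 1.0))
print("observed x =", x, "; posterior mean of theta =", post.mean())
```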
Example 4.5. Next, let X be generated according to a new process we call
an odds Bernoulli process. We have previously seen a typical Bernoulli process
likelihood in Example 2.1. In the odds Bernoulli process, we say that X, conditional on Θ, has an exponential CRM distribution. In this case, the weight of
the kth atom, xk , conditional on θk has support on {0, 1} and has a Bernoulli
density with odds parameter θk ∈ R+ :
h(x|θ_k) = θ_k^x (1 + θ_k)^{−1} = exp{x log(θ_k) − log(1 + θ_k)}.   (21)
That is, if ρ is the probability of a successful Bernoulli draw, then θ = ρ/(1 − ρ)
represents the odds ratio of the probability of success over the probability of
failure.
The final line of Eq. (21) is written to emphasize the exponential family form
of this density, with
κ(x) = 1,
φ(x) = x,
η(θ) = log(θ),
A(θ) = log(1 + θ).
By Theorem 4.2, the likelihood for X has a Bayesian nonparametric conjugate
prior for Θ. This conjugate prior has two parts.
First, Θ has a set of K_{prior,fix} fixed-location atoms. The kth such atom has random weight θ_{fix,k} with density

f_{prior,fix,k}(θ) = exp{⟨ξ_{fix,k}, η(θ)⟩ + λ_{fix,k}[−A(θ)] − B(ξ_{fix,k}, λ_{fix,k})}
= θ^{ξ_{fix,k}} (1 + θ)^{−λ_{fix,k}} exp{−B(ξ_{fix,k}, λ_{fix,k})}
= BetaPrime(θ | ξ_{fix,k} + 1, λ_{fix,k} − ξ_{fix,k} − 1),   (22)

where BetaPrime(θ|a, b) denotes the beta prime density with shape parameters a > 0 and b > 0. Further,

exp{−B(ξ_{fix,k}, λ_{fix,k})} = Γ(λ_{fix,k}) / [Γ(ξ_{fix,k} + 1) Γ(λ_{fix,k} − ξ_{fix,k} − 1)]

to ensure normalization.
Second, Θ has an ordinary component characterized by any proper distribution G and weight rate measure

ν(dθ) = γ exp{⟨ξ, η(θ)⟩ + λ[−A(θ)]} dθ = γ θ^ξ (1 + θ)^{−λ} dθ.   (23)
We need to choose the allowable hyperparameter ranges for γ, ξ, and λ. First, γ > 0 to ensure ν is a measure. By Assumption A1, we must have ν(R_+) = ∞, so ν must represent an improper beta prime distribution. As such, we require either ξ + 1 ≤ 0 or λ − ξ − 1 ≤ 0. By Assumption A2, we must have

Σ_{x=1}^∞ ∫_{θ∈R_+} ν(dθ) · h(x|θ) = ∫_{θ∈R_+} ν(dθ) · h(1|θ)

since the support of x is {0, 1}

= ∫_{θ∈R_+} γ θ^ξ (1 + θ)^{−λ} · θ (1 + θ)^{−1} dθ = γ ∫_{θ∈R_+} θ^{ξ+1} (1 + θ)^{−λ−1} dθ < ∞.

Since the integrand is the kernel of a beta prime distribution, we simply require that this distribution be proper; i.e., ξ + 2 > 0 and λ − ξ − 1 > 0.
The hyperparameter restrictions can be summarized as:
γ > 0,   ξ ∈ (−2, −1],   λ > ξ + 1;   ξ_{fix,k} > −1 and λ_{fix,k} > ξ_{fix,k} + 1 for all k ∈ [K_{prior,fix}].
We call the distribution for Θ described in this example the beta prime process. Its ordinary component has previously been defined by Broderick et al.
[2015]. But this result represents the first time the beta prime process is described in full, including parameter restrictions and fixed-location atoms, as well
as the first proof of its conjugacy with the odds Bernoulli process. We highlight
the latter result as a corollary to Theorem 4.2 below.
Corollary 4.6. Let the odds Bernoulli process be a CRM with fixed-location atom weight distributions as in Eq. (21). Let the beta prime process be a CRM with fixed-location atom weight distributions as in Eq. (22) and ordinary component weight measure as in Eq. (23). Then the beta prime process is a conjugate Bayesian nonparametric prior for the odds Bernoulli process.
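For intuition, an odds Bernoulli draw at a single atom with odds parameter θ is a success with probability θ/(1 + θ), and a beta prime weight can be simulated as a ratio of independent gamma variates. The sketch below is illustrative only; the hyperparameter values are ours and must respect ξ_fix > −1, λ_fix > ξ_fix + 1.

```python
import numpy as np

rng = np.random.default_rng(1)
xi_fix, lam_fix = 0.5, 3.0           # illustrative; requires xi_fix > -1, lam_fix > xi_fix + 1

# BetaPrime(a, b) draw as a ratio of unit-rate gammas.
a, b = xi_fix + 1.0, lam_fix - xi_fix - 1.0
theta = rng.gamma(a) / rng.gamma(b)

# Odds Bernoulli draw at this atom: success probability theta / (1 + theta).
x = rng.binomial(1, theta / (1.0 + theta))
print("theta =", theta, ", x =", x)
```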
5 Size-biased representations
We have shown in Section 4.2 that our exponential CRM (Definition 4.1) is
useful in that we can find an automatic Bayesian nonparametric conjugate prior
given an exponential CRM likelihood. We will see in this section and the next
that exponential CRMs allow us to build representations that allow tractable
inference despite the infinite-dimensional nature of the models we are using.
The best-known size-biased representation of a random measure in Bayesian
nonparametrics is the stick-breaking representation of the Dirichlet process ΘDP
[Sethuraman, 1994]:
Θ_DP = Σ_{k=1}^∞ θ_{DP,k} δ_{ψ_k};   for k ∈ Z_∗, θ_{DP,k} = β_k Π_{j=1}^{k−1} (1 − β_j),   β_k ∼ Beta(1, c) iid,   ψ_k ∼ G iid,   (24)
where c is a fixed hyperparameter satisfying c > 0.
The name “stick-breaking” originates from thinking of the unit interval as
a stick of length one. At each round k, only some of the stick remains; βk
describes the proportion of the remaining stick that is broken off in round k,
and θDP,k describes the total amount of remaining stick that is broken off in
round k. By construction, not only is each θDP,k ∈ (0, 1) but in fact the θDP,k
add to one (the total stick length) and thus describe a distribution.
Eq. (24) is called a size-biased representation for the following reason. Since the weights {θ_{DP,k}}_{k=1}^∞ describe a distribution, we can make draws from this distribution; each such draw is sometimes thought of as a multinomial draw with a single trial. In that vein, typically we imagine that our data points X_{mult,n} are described as iid draws conditioned on Θ_DP, where X_{mult,n} is a random measure with just a single atom:

X_{mult,n} = δ_{ψ_{mult,n}};   ψ_{mult,n} = ψ_k with probability θ_{DP,k}.   (25)
Then the limiting proportion of data points Xmult,n with an atom at ψmult,1
(the first atom location chosen) is θDP,1 . The limiting proportion of data points
with an atom at the next unique atom location chosen will have size θDP,2 , and
so on [Broderick et al., 2013].
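A truncated version of the stick-breaking construction in Eq. (24) is straightforward to simulate. The sketch below is illustrative only: the truncation level K, the concentration c, and the choice of G as a standard normal are ours, made purely for concreteness.

```python
import numpy as np

rng = np.random.default_rng(2)
K, c = 50, 1.0                        # truncation level and concentration (illustrative)

beta = rng.beta(1.0, c, size=K)       # stick-breaking proportions beta_k
remaining = np.concatenate(([1.0], np.cumprod(1.0 - beta)[:-1]))
theta = beta * remaining              # weights theta_{DP,k}, k = 1..K
psi = rng.normal(size=K)              # atom locations drawn iid from G (here N(0,1))

print("total mass of the first K weights:", theta.sum())
```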
The representation in Eq. (24) is so useful because there is a familiar, finite-dimensional distribution for each of the atom weights θ_{DP,k} of the random measure Θ_DP. This representation allows approximate inference via truncation
[Ishwaran and James, 2001] or exact inference via slice sampling [Walker, 2007,
Kalli et al., 2011].
Since the weights {θ_{DP,k}}_{k=1}^∞ are constrained to sum to one, the Dirichlet
process is not a CRM.5 Indeed, there has been much work on size-biased representations for more general normalized random measures, which include the
Dirichlet process as just one example [Perman et al., 1992, Pitman, 1996a,b,
2003].
By contrast, we here wish to explore size-biasing for non-normalized CRMs.
In the normalized CRM case, we considered which atom of a random discrete
probability measure was drawn first and what is the distribution of that atom’s
size. In the non-normalized CRM case considered in the present work, when
drawing X conditional on Θ, there may be multiple atoms (or one atom or no
atoms) of Θ that correspond to non-zero atoms in X. The number will always
be finite though by Assumption A2. In this non-normalized CRM case, we
wish to consider the sizes of all such atoms in Θ. Size-biased representations
have been developed in the past for particular CRM examples, notably the beta
process [Paisley et al., 2010, Broderick et al., 2012]. And even though there is
typically no interpretation of these representations in terms of a single stick
representing a unit probability mass, they are sometimes referred to as stick-breaking representations as a nod to the popularity of Dirichlet process stick-breaking.
In the beta process case, such size-biased representations have already been
shown to allow approximate inference via truncation [Doshi et al., 2009, Paisley et al.,
2011] or exact inference via slice sampling [Teh et al., 2007, Broderick et al.,
2015]. Here we provide general recipes for the creation of these representations and illustrate our recipes by discovering previously unknown size-biased representations.

⁵ In fact, the Dirichlet process is a normalized gamma process (cf. Example 4.3) [Ferguson, 1973].
We have seen that a general CRM Θ takes the form of an a.s. discrete random measure:

Σ_{k=1}^∞ θ_k δ_{ψ_k}.   (26)
The fixed-location atoms are straightforward to simulate; there are finitely many
by Assumption A0, their locations are fixed, and their weights are assumed to
come from finite-dimensional distributions. The infinite-dimensionality of the
Bayesian nonparametric CRM comes from the ordinary component (cf. Section 2.3 and Assumption A1). So far the only description we have of the ordinary component is its generation from the countable infinity of points in a
Poisson point process. The next result constructively demonstrates that we
can represent the distributions of the CRM weights {θ_k}_{k=1}^∞ in Eq. (26) as a
sequence of finite-dimensional distributions, much as in the familiar Dirichlet
process case.
Theorem 5.1 (Size-biased representations). Let Θ be a completely random measure that satisfies Assumptions A0 and A1; that is, Θ is a CRM with K_{fix} fixed atoms such that K_{fix} < ∞ and such that the kth atom can be written θ_{fix,k} δ_{ψ_{fix,k}}. The ordinary component of Θ has rate measure

μ(dθ × dψ) = ν(dθ) · G(dψ),

where G is a proper distribution and ν(R_+) = ∞. Write Θ = Σ_{k=1}^∞ θ_k δ_{ψ_k}, and let X_n be generated iid given Θ according to X_n = Σ_{k=1}^∞ x_{n,k} δ_{ψ_k} with x_{n,k} ∼ h(x|θ_k) independently across k, for proper, discrete probability mass function h. And suppose X_n and Θ jointly satisfy Assumption A2 so that

Σ_{x=1}^∞ ∫_{θ∈R_+} ν(dθ) h(x|θ) < ∞.

Then we can write

Θ = Σ_{m=1}^∞ Σ_{x=1}^∞ Σ_{j=1}^{ρ_{m,x}} θ_{m,x,j} δ_{ψ_{m,x,j}},   (27)

where

ψ_{m,x,j} ∼ G, iid across m, x, j;
ρ_{m,x} ∼ Poisson( ρ | ∫_θ ν(dθ) h(0|θ)^{m−1} h(x|θ) ), independently across m, x;
θ_{m,x,j} ∼ F_{size,m,x}(dθ) ∝ ν(dθ) h(0|θ)^{m−1} h(x|θ), iid across j and independently across m, x.
Proof. By construction, Θ is an a.s. discrete random measure with a countable
infinity of atoms. Without loss of generality, suppose that for every (non-zero)
value of an atom weight θ, there is a non-zero probability of generating an atom
with non-zero weight x in the likelihood. Now suppose we generate X1 , X2 , . . ..
Then, for every atom θδψ of Θ, there exists some finite n with an atom at ψ.
Therefore, we can enumerate all of the atoms of Θ by enumerating
• Each atom θδψ such that there is an atom in X1 at ψ.
• Each atom θδψ such that there is an atom in X2 at ψ but there is not an
atom in X1 at ψ.
..
.
• Each atom θδψ such that there is an atom in Xm at ψ but there is not an
atom in any of X1 , X2 , . . . , Xm−1 at ψ.
..
.
Moreover, on the mth round of this enumeration, we can further break down
the enumeration by the value of the observation Xm at the atom location:
• Each atom θδψ such that there is an atom in Xm of weight 1 at ψ but
there is not an atom in any of X1 , X2 , . . . , Xm−1 at ψ.
• Each atom θδψ such that there is an atom in Xm of weight 2 at ψ but
there is not an atom in any of X1 , X2 , . . . , Xm−1 at ψ.
..
.
• Each atom θδψ such that there is an atom in Xm of weight x at ψ but
there is not an atom in any of X1 , X2 , . . . , Xm−1 at ψ.
..
.
Recall that the values θ_k that form the weights of Θ are generated according to a Poisson point process with rate measure ν(dθ). So, on the first round, the values of θ_k such that x_{1,k} = x also holds are generated according to a thinned Poisson point process with rate measure

ν(dθ) h(x|θ).

In particular, since the rate measure has finite total mass by Assumption A2, we can define

M_{1,x} := ∫_θ ν(dθ) h(x|θ),

which will be finite. Then the number of atoms θ_k for which x_{1,k} = x is

ρ_{1,x} ∼ Poisson(ρ | M_{1,x}).

And each such θ_k has weight with distribution

F_{size,1,x}(dθ) ∝ ν(dθ) h(x|θ).

Finally, note from Theorem 3.1 that the posterior Θ|X_1 has weight rate measure

ν_1(dθ) := ν(dθ) h(0|θ).
Now take any m > 1. Suppose, inductively, that the ordinary component of the posterior Θ|X_1, . . . , X_{m−1} has weight rate measure

ν_{m−1}(dθ) := ν(dθ) h(0|θ)^{m−1}.

The atoms in this ordinary component have been selected precisely because they have not appeared in any of X_1, . . . , X_{m−1}. As for m = 1, we have that the atoms θ_k in this ordinary component with corresponding weight in X_m equal to x are formed by a thinned Poisson point process, with rate measure

ν_{m−1}(dθ) h(x|θ) = ν(dθ) h(0|θ)^{m−1} h(x|θ).

Since the rate measure has finite total mass by Assumption A2, we can define

M_{m,x} := ∫_θ ν(dθ) h(0|θ)^{m−1} h(x|θ),

which will be finite. Then the number of atoms θ_k for which x_{m,k} = x is

ρ_{m,x} ∼ Poisson(ρ | M_{m,x}).

And each such θ_k has weight distribution

F_{size,m,x}(dθ) ∝ ν(dθ) h(0|θ)^{m−1} h(x|θ).

Finally, note from Theorem 3.1 that the posterior Θ|X_{1:m}, which can be thought of as generated by prior Θ|X_{1:(m−1)} and likelihood X_m|Θ, has weight rate measure

ν(dθ) h(0|θ)^{m−1} h(0|θ) = ν_m(dθ),

confirming the inductive hypothesis.
Recall that every atom of Θ is found in exactly one of these rounds and that x ∈ Z_+. Also recall that the atom locations may be generated independently and identically across atoms, and independently from all the weights, according to proper distribution G (Section 2.2). To summarize, we have then

Θ = Σ_{m=1}^∞ Σ_{x=1}^∞ Σ_{j=1}^{ρ_{m,x}} θ_{m,x,j} δ_{ψ_{m,x,j}},

where

ψ_{m,x,j} ∼ G, iid across m, x, j;
M_{m,x} = ∫_θ ν(dθ) h(0|θ)^{m−1} h(x|θ), across m, x;
ρ_{m,x} ∼ Poisson(ρ | M_{m,x}), independently across m, x;
F_{size,m,x}(dθ) ∝ ν(dθ) h(0|θ)^{m−1} h(x|θ), across m, x;
θ_{m,x,j} ∼ F_{size,m,x}(dθ), iid across j and independently across m, x,

as was to be shown.
The following corollary gives a more detailed recipe for the calculations in
Theorem 5.1 when the prior is in a conjugate exponential CRM to the likelihood.
Corollary 5.2 (Exponential CRM size-biased representations). Let Θ be an exponential CRM with no fixed-location atoms (thereby trivially satisfying Assumption A0) such that Assumption A1 holds.
Let X be generated conditional on Θ according to an exponential CRM with fixed-location atoms at {ψ_k}_{k=1}^∞ and no ordinary component. Let the distribution of the weight x_{n,k} at ψ_k have probability mass function

h(x|θ_k) = κ(x) exp{⟨η(θ_k), φ(x)⟩ − A(θ_k)}.

Suppose that Θ and X jointly satisfy Assumption A2. And let Θ be conjugate to X as in Theorem 4.2. Then we can write

Θ = Σ_{m=1}^∞ Σ_{x=1}^∞ Σ_{j=1}^{ρ_{m,x}} θ_{m,x,j} δ_{ψ_{m,x,j}},   (28)

where

ψ_{m,x,j} ∼ G, iid across m, x, j;
M_{m,x} = γ · κ(0)^{m−1} κ(x) · exp{B(ξ + (m − 1)φ(0) + φ(x), λ + m)};
ρ_{m,x} ∼ Poisson(ρ | M_{m,x}), independently across m, x;
θ_{m,x,j} ∼ f_{size,m,x}(θ) dθ = exp{⟨ξ + (m − 1)φ(0) + φ(x), η(θ)⟩ + (λ + m)[−A(θ)] − B(ξ + (m − 1)φ(0) + φ(x), λ + m)} dθ,
iid across j and independently across m, x.
Proof. The corollary follows from Theorem 5.1 by plugging in the particular forms for ν(dθ) and h(x|θ). In particular,

M_{m,x} = ∫_{θ∈R_+} ν(dθ) h(0|θ)^{m−1} h(x|θ)
= ∫_{θ∈R_+} γ exp{⟨ξ, η(θ)⟩ + λ[−A(θ)]} · [κ(0) exp{⟨η(θ), φ(0)⟩ − A(θ)}]^{m−1} · κ(x) exp{⟨η(θ), φ(x)⟩ − A(θ)} dθ
= γ κ(0)^{m−1} κ(x) exp{B(ξ + (m − 1)φ(0) + φ(x), λ + m)}.
Corollary 5.2 can be used to find the known size-biased representation of
the beta process [Thibaux and Jordan, 2007]; we demonstrate this derivation in
detail in Example B.1 in Appendix B. Here we use Corollary 5.2 to discover a
new size-biased representation of the gamma process.
Example 5.3. Let Θ be a gamma process, and let X_n be iid Poisson likelihood processes conditioned on Θ for each n as in Example 4.3. That is, we have

ν(dθ) = γ θ^ξ e^{−λθ} dθ

and

h(x|θ_k) = (1/x!) θ_k^x e^{−θ_k}

with

γ > 0,   ξ ∈ (−2, −1],   λ > 0;   ξ_{fix,k} > −1 and λ_{fix,k} > 0 for all k ∈ [K_{prior,fix}]

by Example 4.3.
We can pick out the following components of h:

κ(x) = 1/x!,   φ(x) = x,   η(θ) = log(θ),   A(θ) = θ.

Thus, by Corollary 5.2, we have

f_{size,m,x}(θ) ∝ θ^{ξ+x} e^{−(λ+m)θ} ∝ Gamma(θ | ξ + x + 1, λ + m).
We summarize the representation that follows from Corollary 5.2 in the following
result.
Corollary 5.4. Let the gamma process be a CRM Θ with fixed-location atom weight distributions as in Eq. (19) and ordinary component weight measure as in Eq. (20). Then we may write

Θ = Σ_{m=1}^∞ Σ_{x=1}^∞ Σ_{j=1}^{ρ_{m,x}} θ_{m,x,j} δ_{ψ_{m,x,j}},

where

ψ_{m,x,j} ∼ G, iid across m, x, j;
M_{m,x} = γ · (1/x!) · Γ(ξ + x + 1) · (λ + m)^{−(ξ+x+1)}, across m, x;
ρ_{m,x} ∼ Poisson(ρ | M_{m,x}), independently across m, x;
θ_{m,x,j} ∼ Gamma(θ | ξ + x + 1, λ + m), iid across j and independently across m, x.
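Corollary 5.4 becomes a direct simulation recipe once the sums over m and x are truncated. The sketch below is illustrative only: the truncation levels, the hyperparameter values, and the choice of G as a standard normal are ours.

```python
import numpy as np
from scipy.special import gammaln

rng = np.random.default_rng(3)
gam, xi, lam = 2.0, -1.0, 1.0         # illustrative hyperparameters: xi in (-2,-1], lam > 0
M_rounds, X_max = 20, 30              # truncation of the sums over m and x

atoms = []                            # list of (weight, location) pairs
for m in range(1, M_rounds + 1):
    for x in range(1, X_max + 1):
        # M_{m,x} = gam * (1/x!) * Gamma(xi+x+1) * (lam+m)^{-(xi+x+1)}
        logM = (np.log(gam) - gammaln(x + 1) + gammaln(xi + x + 1)
                - (xi + x + 1) * np.log(lam + m))
        for _ in range(rng.poisson(np.exp(logM))):
            theta = rng.gamma(shape=xi + x + 1, scale=1.0 / (lam + m))
            psi = rng.normal()        # location from G, taken to be N(0,1) here
            atoms.append((theta, psi))

print(len(atoms), "atoms drawn; total mass =", sum(t for t, _ in atoms))
```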
6 Marginal processes
In Section 5, although we conceptually made use of the observations {X1 , X2 , . . .},
we focused on a representation of the prior Θ: cf. Eqs. (27) and (28). In this
section, we provide a representation of the marginal of X1:N , with Θ integrated
out.
The canonical example of a marginal process again comes from the Dirichlet
process (DP). In this case, the full model consists of the DP-distributed prior
on ΘDP (as in Eq. (24)) together with the likelihood for Xmult,n conditional on
ΘDP (iid across n) described by Eq. (25). Then the marginal distribution of
Xmult,1:N is described by the Chinese restaurant process. This marginal takes
the following form.
For each n = 1, 2, . . . , N,

1. Let {ψ_k}_{k=1}^{K_{n−1}} be the union of atom locations in X_{mult,1}, . . . , X_{mult,n−1}. Then X_{mult,n} | X_{mult,1}, . . . , X_{mult,n−1} has a single atom at ψ, where

ψ = ψ_k with probability ∝ Σ_{m=1}^{n−1} X_{mult,m}({ψ_k}),
ψ = ψ_new with probability ∝ c,   ψ_new ∼ G.
In the case of CRMs, the canonical example of a marginal process is the Indian buffet process [Griffiths and Ghahramani, 2006]. Both the Chinese restaurant process and Indian buffet process have proven popular for inference since
the underlying infinite-dimensional prior is integrated out in these processes
and only the finite-dimensional marginal remains. By Assumption A2, we know
that the marginal will generally be finite-dimensional for our CRM Bayesian
models. And thus we have the following general marginal representations for
such models.
Theorem 6.1 (Marginal representations). Let Θ be a completely random measure that satisfies Assumptions A0 and A1; that is, Θ is a CRM with K_{fix} fixed atoms such that K_{fix} < ∞ and such that the kth atom can be written θ_{fix,k} δ_{ψ_{fix,k}}. The ordinary component of Θ has rate measure

μ(dθ × dψ) = ν(dθ) · G(dψ),

where G is a proper distribution and ν(R_+) = ∞. Write Θ = Σ_{k=1}^∞ θ_k δ_{ψ_k}, and let X_n be generated iid given Θ according to X_n = Σ_{k=1}^∞ x_{n,k} δ_{ψ_k} with x_{n,k} ∼ h(x|θ_k) independently across k, for proper, discrete probability mass function h. And suppose X_n and Θ jointly satisfy Assumption A2 so that

Σ_{x=1}^∞ ∫_{θ∈R_+} ν(dθ) h(x|θ) < ∞.

Then the marginal distribution of X_{1:N} is the same as that provided by the following construction.
For each n = 1, 2, . . . , N,

1. Let {ψ_k}_{k=1}^{K_{n−1}} be the union of atom locations in X_1, . . . , X_{n−1}. Let x_{m,k} := X_m({ψ_k}). Let x_{n,k} denote the weight of X_n | X_1, . . . , X_{n−1} at ψ_k. Then x_{n,k} has distribution described by the following probability mass function:

h_cond(x_{n,k} = x | x_{1:(n−1),k}) = [∫_{θ∈R_+} ν(dθ) h(x|θ) Π_{m=1}^{n−1} h(x_{m,k}|θ)] / [∫_{θ∈R_+} ν(dθ) Π_{m=1}^{n−1} h(x_{m,k}|θ)].

2. For each x = 1, 2, . . .

• X_n has ρ_{n,x} new atoms. That is, X_n has atoms at locations {ψ_{n,x,j}}_{j=1}^{ρ_{n,x}}, where

{ψ_{n,x,j}}_{j=1}^{ρ_{n,x}} ∩ {ψ_k}_{k=1}^{K_{n−1}} = ∅ a.s.

Moreover,

ρ_{n,x} ∼ Poisson( ρ | ∫_θ ν(dθ) h(0|θ)^{n−1} h(x|θ) ), independently across n, x;
ψ_{n,x,j} ∼ G(dψ), iid across n, x, j.
Proof. We saw in the proof of Theorem 5.1 that the marginal for X_1 can be expressed as follows. For each x ∈ Z_+, there are ρ_{1,x} atoms of X_1 with weight x, where

ρ_{1,x} ∼ Poisson( ∫_θ ν(dθ) h(x|θ) ), independently across x.

These atoms have locations {ψ_{1,x,j}}_{j=1}^{ρ_{1,x}}, where

ψ_{1,x,j} ∼ G(dψ), iid across x, j.

For the upcoming induction, let K_1 := Σ_{x=1}^∞ ρ_{1,x}. And let {ψ_k}_{k=1}^{K_1} be the (a.s. disjoint by assumption) union of the sets {ψ_{1,x,j}}_{j=1}^{ρ_{1,x}} across x. Note that K_1 is finite by Assumption A2.
We will also find it useful in the upcoming induction to let Θ_{post,1} have the distribution of Θ|X_1. Let θ_{post,1,x,j} = Θ_{post,1}({ψ_{1,x,j}}). By Theorem 3.1 or the proof of Theorem 5.1, we have that

θ_{post,1,x,j} ∼ F_{post,1,x,j}(dθ) ∝ ν(dθ) h(x|θ),

independently across x and iid across j.
Now take any n > 1. Inductively, we assume {ψ_{n−1,k}}_{k=1}^{K_{n−1}} is the union of all the atom locations of X_1, . . . , X_{n−1}. Further assume K_{n−1} is finite. Let Θ_{post,n−1} have the distribution of Θ|X_1, . . . , X_{n−1}. Let θ_{n−1,k} be the weight of Θ_{post,n−1} at ψ_{n−1,k}. And, for any m ∈ [n − 1], let x_{m,k} be the weight of X_m at ψ_{n−1,k}. We inductively assume that

θ_{n−1,k} ∼ F_{n−1,k}(dθ) ∝ ν(dθ) Π_{m=1}^{n−1} h(x_{m,k}|θ),   (29)

independently across k.
Now let ψ_{n,k} equal ψ_{n−1,k} for k ∈ [K_{n−1}]. Let x_{n,k} denote the weight of X_n at ψ_{n,k} for k ∈ [K_{n−1}]. Conditional on the atom weight of Θ at ψ_{n,k}, the atom weights of X_1, . . . , X_{n−1}, X_n are independent. Since the atom weights of Θ are independent as well, we have that x_{n,k}|X_1, . . . , X_{n−1} has the same distribution as x_{n,k}|x_{1,k}, . . . , x_{n−1,k}. We can write the probability mass function of this distribution as follows.

h_cond(x_{n,k} = x | x_{1,k}, . . . , x_{n−1,k}) = ∫_{θ∈R_+} F_{n−1,k}(dθ) h(x|θ)
= [∫_{θ∈R_+} ν(dθ) Π_{m=1}^{n−1} h(x_{m,k}|θ) · h(x|θ)] / [∫_{θ∈R_+} ν(dθ) Π_{m=1}^{n−1} h(x_{m,k}|θ)],

where the last line follows from Eq. (29).
We next show the inductive hypothesis in Eq. (29) holds for n and k ∈ [K_{n−1}]. Let x_{n,k} denote the weight of X_n at ψ_{n,k} for k ∈ [K_{n−1}]. Let F_{n,k}(dθ) denote the distribution of θ_{n,k} and note that

F_{n,k}(dθ) ∝ F_{n−1,k}(dθ) · h(x_{n,k}|θ) = ν(dθ) Π_{m=1}^n h(x_{m,k}|θ),

which agrees with Eq. (29) for n when we assume the result for n − 1.
The previous development covers atoms that are present in at least one of X_1, . . . , X_{n−1}. Next we consider new atoms in X_n; that is, we consider atoms in X_n for which there are no atoms at the same location in any of X_1, . . . , X_{n−1}.
We saw in the proof of Theorem 5.1 that, for each x ∈ Z_+, there are ρ_{n,x} new atoms of X_n with weight x such that

ρ_{n,x} ∼ Poisson( ρ | ∫_θ ν(dθ) h(0|θ)^{n−1} h(x|θ) ), independently across x.

These new atoms have locations {ψ_{n,x,j}}_{j=1}^{ρ_{n,x}} with

ψ_{n,x,j} ∼ G(dψ), iid across x, j.

By Assumption A2, Σ_{x=1}^∞ ρ_{n,x} < ∞. So

K_n := K_{n−1} + Σ_{x=1}^∞ ρ_{n,x}

remains finite. Let ψ_{n,k} for k ∈ {K_{n−1} + 1, . . . , K_n} index these new locations. Let θ_{n,k} be the weight of Θ_{post,n} at ψ_{n,k} for k ∈ {K_{n−1} + 1, . . . , K_n}. And let x_{n,k} be the value of X_n at ψ_{n,k}.
We check that the inductive hypothesis holds. By repeated application of Theorem 3.1, the ordinary component of Θ|X_1, . . . , X_{n−1} has rate measure

ν(dθ) h(0|θ)^{n−1}.

So, again by Theorem 3.1, we have that

θ_{n,k} ∼ F_{n,k}(dθ) ∝ ν(dθ) h(0|θ)^{n−1} h(x_{n,k}|θ).

Since X_m has value 0 at ψ_{n,k} for m ∈ {1, . . . , n − 1} by construction, we have that the inductive hypothesis holds.
As in the case of size-biased representations (Section 5 and Corollary 5.2),
we can find a more detailed recipe when the prior is in a conjugate exponential
CRM to the likelihood.
Corollary 6.2 (Exponential CRM marginal representations). Let Θ be an exponential CRM with no fixed-location atoms (thereby trivially satisfying Assumption A0) such that Assumption A1 holds.
Let X be generated conditional on Θ according to an exponential CRM with fixed-location atoms at {ψ_k}_{k=1}^∞ and no ordinary component. Let the distribution of the weight x_{n,k} at ψ_k have probability mass function

h(x|θ_k) = κ(x) exp{⟨η(θ_k), φ(x)⟩ − A(θ_k)}.

Suppose that Θ and X jointly satisfy Assumption A2. And let Θ be conjugate to X as in Theorem 4.2. Then the marginal distribution of X_{1:N} is the same as that provided by the following construction.
For each n = 1, 2, . . . , N,

1. Let {ψ_k}_{k=1}^{K_{n−1}} be the union of atom locations in X_1, . . . , X_{n−1}. Let x_{m,k} := X_m({ψ_k}). Let x_{n,k} denote the weight of X_n | X_1, . . . , X_{n−1} at ψ_k. Then x_{n,k} has distribution described by the following probability mass function:

h_cond(x_{n,k} = x | x_{1:(n−1),k}) = κ(x) exp{ −B(ξ + Σ_{m=1}^{n−1} x_m, λ + n − 1) + B(ξ + Σ_{m=1}^{n−1} x_m + x, λ + n) }.
2. For each x = 1, 2, . . .

• X_n has ρ_{n,x} new atoms. That is, X_n has atoms at locations {ψ_{n,x,j}}_{j=1}^{ρ_{n,x}}, where

{ψ_{n,x,j}}_{j=1}^{ρ_{n,x}} ∩ {ψ_k}_{k=1}^{K_{n−1}} = ∅ a.s.

Moreover,

M_{n,x} := γ · κ(0)^{n−1} κ(x) · exp{B(ξ + (n − 1)φ(0) + φ(x), λ + n)}, across n, x;
ρ_{n,x} ∼ Poisson(ρ | M_{n,x}), independently across n, x;
ψ_{n,x,j} ∼ G(dψ), iid across n, x, j.
Proof. The corollary follows from Theorem 6.1 by plugging in the forms for ν(dθ) and h(x|θ). In particular,

∫_{θ∈R_+} ν(dθ) Π_{m=1}^n h(x_{m,k}|θ)
= ∫_{θ∈R_+} γ exp{⟨ξ, η(θ)⟩ + λ[−A(θ)]} · Π_{m=1}^n [κ(x_{m,k}) exp{⟨η(θ), φ(x_{m,k})⟩ − A(θ)}] dθ
= γ [Π_{m=1}^n κ(x_{m,k})] exp{ B(ξ + Σ_{m=1}^n φ(x_{m,k}), λ + n) }.

So

h_cond(x_{n,k} = x | x_{1:(n−1),k}) = [∫_{θ∈R_+} ν(dθ) h(x|θ) Π_{m=1}^{n−1} h(x_{m,k}|θ)] / [∫_{θ∈R_+} ν(dθ) Π_{m=1}^{n−1} h(x_{m,k}|θ)]
= κ(x) exp{ −B(ξ + Σ_{m=1}^{n−1} x_m, λ + n − 1) + B(ξ + Σ_{m=1}^{n−1} x_m + x, λ + n) }.
In Example C.1 in Appendix C we show that Corollary 6.2 can be used to
recover the Indian buffet process marginal from a beta process prior together
with a Bernoulli process likelihood. In the following example, we discover a new
marginal for the Poisson likelihood process with gamma process prior.
Example 6.3. Let Θ be a gamma process, and let X_n be iid Poisson likelihood processes conditioned on Θ for each n as in Example 4.3. That is, we have

ν(dθ) = γ θ^ξ e^{−λθ} dθ

and

h(x|θ_k) = (1/x!) θ_k^x e^{−θ_k}

with

γ > 0,   ξ ∈ (−2, −1],   λ > 0;   ξ_{fix,k} > −1 and λ_{fix,k} > 0 for all k ∈ [K_{prior,fix}]

by Example 4.3.
We can pick out the following components of h:

κ(x) = 1/x!,   φ(x) = x,   η(θ) = log(θ),   A(θ) = θ.

And we calculate

exp{B(ξ, λ)} = ∫_{θ∈R_+} exp{⟨ξ, η(θ)⟩ + λ[−A(θ)]} dθ = ∫_{θ∈R_+} θ^ξ e^{−λθ} dθ = Γ(ξ + 1) λ^{−(ξ+1)}.
So, for x ∈ Z_∗, we have

P(x_n = x) = κ(x) exp{ −B(ξ + Σ_{m=1}^{n−1} x_m, λ + n − 1) + B(ξ + Σ_{m=1}^{n−1} x_m + x, λ + n) }
= (1/x!) · [(λ + n − 1)^{ξ + Σ_{m=1}^{n−1} x_m + 1} / Γ(ξ + Σ_{m=1}^{n−1} x_m + 1)] · [Γ(ξ + Σ_{m=1}^{n−1} x_m + x + 1) / (λ + n)^{ξ + Σ_{m=1}^{n−1} x_m + x + 1}]
= [Γ(ξ + Σ_{m=1}^{n−1} x_m + x + 1) / (Γ(x + 1) Γ(ξ + Σ_{m=1}^{n−1} x_m + 1))] · ((λ + n − 1)/(λ + n))^{ξ + Σ_{m=1}^{n−1} x_m + 1} · (1/(λ + n))^x
= NegBin( x | ξ + Σ_{m=1}^{n−1} x_m + 1, (λ + n)^{−1} ).

And

M_{n,x} := γ · κ(0)^{n−1} κ(x) · exp{B(ξ + (n − 1)φ(0) + φ(x), λ + n)} = γ · (1/x!) · Γ(ξ + x + 1) (λ + n)^{−(ξ+x+1)}.
We summarize the marginal distribution representation of X1:N that follows
from Corollary 6.2 in the following result.
Corollary 6.4. Let Θ be a gamma process with fixed-location atom weight distributions as in Eq. (19) and ordinary component weight measure as in Eq. (20). Let X_n be drawn, iid across n, conditional on Θ according to a Poisson likelihood process with fixed-location atom weight distributions as in Eq. (18). Then X_{1:N} has the same distribution as the following construction.
For each n = 1, 2, . . . , N,

1. Let {ψ_k}_{k=1}^{K_{n−1}} be the union of atom locations in X_1, . . . , X_{n−1}. Let x_{m,k} := X_m({ψ_k}). Let x_{n,k} denote the weight of X_n | X_1, . . . , X_{n−1} at ψ_k. Then x_{n,k} has distribution described by the following probability mass function:

h_cond(x_{n,k} = x | x_{1:(n−1),k}) = NegBin( x | ξ + Σ_{m=1}^{n−1} x_{m,k} + 1, (λ + n)^{−1} ).

2. For each x = 1, 2, . . .

• X_n has ρ_{n,x} new atoms. That is, X_n has atoms at locations {ψ_{n,x,j}}_{j=1}^{ρ_{n,x}}, where

{ψ_{n,x,j}}_{j=1}^{ρ_{n,x}} ∩ {ψ_k}_{k=1}^{K_{n−1}} = ∅ a.s.

Moreover,

M_{n,x} := γ · (1/x!) · Γ(ξ + x + 1) / (λ + n)^{ξ+x+1}, across n, x;
ρ_{n,x} ∼ Poisson(ρ | M_{n,x}), independently across n, x;
ψ_{n,x,j} ∼ G(dψ), independently across n, x and iid across j.
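Corollary 6.4 can be simulated sequentially with Θ integrated out, much like an Indian buffet process for counts. The sketch below is illustrative only: the hyperparameters, the truncation over new-atom weights, and the choice of G as a standard normal are ours; note the comment mapping the NegBin parameterization above onto numpy's convention.

```python
import numpy as np
from scipy.special import gammaln

rng = np.random.default_rng(4)
gam, xi, lam = 2.0, -1.0, 1.0
N, X_max = 10, 30                     # observations; truncation over new-atom weights

locations, counts = [], []            # counts[k] = weights x_{m,k} seen so far at atom k
for n in range(1, N + 1):
    # 1. Existing atoms: NegBin(r, p) with r = xi + sum_m x_{m,k} + 1, p = 1/(lam+n),
    #    where p multiplies the counted value x; numpy counts failures, so pass 1-p.
    for hist in counts:
        r = xi + sum(hist) + 1.0
        p = 1.0 / (lam + n)
        hist.append(rng.negative_binomial(r, 1.0 - p))
    # 2. New atoms: for each weight x, Poisson(M_{n,x}) of them.
    for x in range(1, X_max + 1):
        logM = (np.log(gam) - gammaln(x + 1) + gammaln(xi + x + 1)
                - (xi + x + 1) * np.log(lam + n))
        for _ in range(rng.poisson(np.exp(logM))):
            locations.append(rng.normal())
            counts.append([0] * (n - 1) + [x])

print("total atoms after N rounds:", len(locations))
```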
7 Discussion
In the preceding sections, we have shown how to calculate posteriors for general
CRM-based priors and likelihoods for Bayesian nonparametric models. We have
also shown how to represent Bayesian nonparametric priors as a sequence of finite draws, and full Bayesian nonparametric models via finite marginals. We
have introduced a notion of exponential families for CRMs, which we call exponential CRMs, that has allowed us to specify automatic Bayesian nonparametric
conjugate priors for exponential CRM likelihoods. And we have demonstrated
that our exponential CRMs allow particularly straightforward recipes for size-biased and marginal representations of Bayesian nonparametric models. Along
the way, we have proved that the gamma process is a conjugate prior for the
Poisson likelihood process and the beta prime process is a conjugate prior for
the odds Bernoulli process. We have discovered a size-biased representation of
the gamma process and a marginal representation of the gamma process coupled
with a Poisson likelihood process.
All of this work has relied heavily on the description of Bayesian nonparametric models in terms of completely random measures. As such, we have worked
very particularly with pairings of real values—the CRM atom weights, which we
have interpreted as trait frequencies or rates—together with trait descriptors—
the CRM atom locations. However, all of our proofs broke into essentially two
parts: the fixed-location atom part and the ordinary component part. The fixed-location atom development essentially translated into the usual finite version of
Bayes Theorem and could easily be extended to full Bayesian models where the
prior describes a random element that need not be real-valued. Moreover, the
ordinary component development relied entirely on its generation as a Poisson
point process over a product space. It seems reasonable to expect that our development might carry through when the first element in this tuple need not be
real-valued. And thus we believe our results are suggestive of broader results
over more general spaces.
Acknowledgements
Support for this project was provided by ONR under the Multidisciplinary University Research Initiative (MURI) program (N00014-11-1-0688). T. Broderick
was supported by a Berkeley Fellowship. A. C. Wilson was supported by an
NSF Graduate Research Fellowship.
A Further automatic conjugate priors
We use Theorem 4.2 to calculate automatic conjugate priors for further exponential CRMs.
Example A.1. Let X be generated according to a Bernoulli process as in Example 2.1. That is, X has an exponential CRM distribution with K_{like,fix} fixed-location atoms, where K_{like,fix} < ∞ in accordance with Assumption A0:

X = Σ_{k=1}^{K_{like,fix}} x_{like,k} δ_{ψ_{like,k}}.

The weight of the kth atom, x_{like,k}, has support on {0, 1} and has a Bernoulli density with parameter θ_k ∈ (0, 1]:

h(x|θ_k) = θ_k^x (1 − θ_k)^{1−x} = exp{x log(θ_k/(1 − θ_k)) + log(1 − θ_k)}.

The final line is rewritten to emphasize the exponential family form of this density, with

κ(x) = 1,   φ(x) = x,   η(θ) = log(θ/(1 − θ)),   A(θ) = − log(1 − θ).
Then, by Theorem 4.2, X has a Bayesian nonparametric conjugate prior for

Θ := Σ_{k=1}^{K_{like,fix}} θ_k δ_{ψ_k}.

This conjugate prior has two parts.
First, Θ has a set of K_{prior,fix} fixed-location atoms at some subset of the K_{like,fix} fixed locations of X. The kth such atom has random weight θ_{fix,k} with density

f_{prior,fix,k}(θ) = exp{⟨ξ_{fix,k}, η(θ)⟩ + λ_{fix,k}[−A(θ)] − B(ξ_{fix,k}, λ_{fix,k})}
= θ^{ξ_{fix,k}} (1 − θ)^{λ_{fix,k} − ξ_{fix,k}} exp{−B(ξ_{fix,k}, λ_{fix,k})}
= Beta(θ | ξ_{fix,k} + 1, λ_{fix,k} − ξ_{fix,k} + 1),

where Beta(θ|a, b) denotes the beta density with shape parameters a > 0 and b > 0. So we must have fixed hyperparameters ξ_{fix,k} > −1 and λ_{fix,k} > ξ_{fix,k} − 1. Further,

exp{−B(ξ_{fix,k}, λ_{fix,k})} = Γ(λ_{fix,k} + 2) / [Γ(ξ_{fix,k} + 1) Γ(λ_{fix,k} − ξ_{fix,k} + 1)]

to ensure normalization.
Second, Θ has an ordinary component characterized by any proper distribution G and weight rate measure

ν(dθ) = γ exp{⟨ξ, η(θ)⟩ + λ[−A(θ)]} dθ = γ θ^ξ (1 − θ)^{λ−ξ} dθ.
Finally, we need to choose the allowable hyperparameter ranges for γ, ξ, and λ. γ > 0 ensures ν is a measure. By Assumption A1, we must have ν(R_+) = ∞, so ν must represent an improper beta distribution. As such, we require either ξ + 1 ≤ 0 or λ − ξ ≤ 0. By Assumption A2, we must have

Σ_{x=1}^∞ ∫_{θ∈R_+} ν(dθ) · h(x|θ) = ∫_{θ∈(0,1]} ν(dθ) h(1|θ)

since the support of x is {0, 1} and the support of θ is (0, 1]

= γ ∫_{θ∈(0,1]} θ^ξ (1 − θ)^{λ−ξ} · θ dθ < ∞.

Since the integrand is the kernel of a beta distribution, the integral is finite if and only if ξ + 2 > 0 and λ − ξ + 1 > 0.
Finally, then, the hyperparameter restrictions can be summarized as:

γ > 0,   ξ ∈ (−2, −1],   λ > ξ − 1;   ξ_{fix,k} > −1 and λ_{fix,k} > ξ_{fix,k} − 1 for all k ∈ [K_{prior,fix}].
By setting α = ξ + 1, c = λ + 2, ρ_{fix,k} = ξ_{fix,k} + 1, and σ_{fix,k} = λ_{fix,k} − ξ_{fix,k} + 1,
we recover the hyperparameters of Eq. (11) in Example 2.1. Here, by contrast to
Example 2.1, we found the conjugate prior and its hyperparameter settings given
just the Bernoulli process likelihood. Henceforth, we use the parameterization
of the beta process above.
B Further size-biased representations
Example B.1. Let Θ be a beta process, and let X_n be iid Bernoulli processes conditioned on Θ for each n as in Example A.1. That is, we have

ν(dθ) = γ θ^ξ (1 − θ)^{λ−ξ} dθ

and

h(x|θ_k) = θ_k^x (1 − θ_k)^{1−x}

with

γ > 0,   ξ ∈ (−2, −1],   λ > ξ − 1;   ξ_{fix,k} > −1 and λ_{fix,k} > ξ_{fix,k} − 1 for all k ∈ [K_{prior,fix}]

by Example A.1.
We can pick out the following components of h:

κ(x) = 1,   φ(x) = x,   η(θ) = log(θ/(1 − θ)),   A(θ) = − log(1 − θ).
Thus, by Corollary 5.2,

Θ = Σ_{m=1}^∞ Σ_{x=1}^∞ Σ_{j=1}^{ρ_{m,x}} θ_{m,x,j} δ_{ψ_{m,x,j}},

where

ψ_{m,x,j} ∼ G, iid across m, x, j;
θ_{m,x,j} ∼ f_{size,m,x}(θ) dθ ∝ θ^{ξ+x} (1 − θ)^{λ+m−ξ−x} dθ ∝ Beta(θ | ξ + x + 1, λ − ξ + m − x + 1) dθ, iid across j and independently across m, x;
M_{m,x} := γ · Γ(ξ + x + 1) Γ(λ − ξ + m − x + 1) / Γ(λ + m + 2);
ρ_{m,x} ∼ Poisson(M_{m,x}), independently across m, x.
Broderick et al. [2012] and Paisley et al. [2012] have previously noted that
this size-biased representation of the beta process arises from the Poisson point
process.
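As with the gamma process, this representation can be simulated directly after truncating the number of rounds m; only x = 1 contributes because the Bernoulli weights take values in {0, 1}. The sketch below is illustrative: the hyperparameter values are ours, and the beta weight distribution used is the one implied by the kernel θ^{ξ+x}(1 − θ)^{λ+m−ξ−x} above.

```python
import numpy as np
from scipy.special import gammaln

rng = np.random.default_rng(5)
gam, xi, lam = 1.0, -1.0, 0.5         # illustrative; xi in (-2,-1], lam > xi - 1
M_rounds = 100

weights = []
for m in range(1, M_rounds + 1):
    x = 1                             # Bernoulli weights are 0/1, so only x = 1 appears
    logM = (np.log(gam) + gammaln(xi + x + 1) + gammaln(lam - xi + m - x + 1)
            - gammaln(lam + m + 2))
    rho = rng.poisson(np.exp(logM))
    weights.extend(rng.beta(xi + x + 1, lam - xi + m - x + 1, size=rho))

print(len(weights), "beta process atoms; total mass =", sum(weights))
```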
C Further marginals
Example C.1. Let Θ be a beta process, and let X_n be iid Bernoulli processes conditioned on Θ for each n as in Examples A.1 and B.1.
We calculate the main components of Corollary 6.2 for this pair of processes. In particular, we have

P(x_n = 1) = κ(1) exp{ −B(ξ + Σ_{m=1}^{n−1} x_m, λ + n − 1) + B(ξ + Σ_{m=1}^{n−1} x_m + 1, λ + n) }
= [Γ(λ + n − 1 + 2) / (Γ(ξ + Σ_{m=1}^{n−1} x_m + 1) Γ(λ + n − 1 − ξ − Σ_{m=1}^{n−1} x_m + 1))] · [Γ(ξ + Σ_{m=1}^{n−1} x_m + 1 + 1) Γ(λ + n − ξ − Σ_{m=1}^{n−1} x_m − 1 + 1) / Γ(λ + n + 2)]
= (ξ + Σ_{m=1}^{n−1} x_m + 1) / (λ + n + 1).

And

M_{n,1} := γ · κ(0)^{n−1} κ(1) · exp{B(ξ + (n − 1)φ(0) + φ(1), λ + n)} = γ · Γ(ξ + 1 + 1) Γ(λ + n − ξ − 1 + 1) / Γ(λ + n + 2).
Thus, the marginal distribution of X_{1:N} is the same as that provided by the following construction.
For each n = 1, 2, . . . , N,

1. At any location ψ for which there is some atom in X_1, . . . , X_{n−1}, let x_m be the weight of X_m at ψ for m ∈ [n − 1]. Then we have that X_n | X_1, . . . , X_{n−1} has weight x_n at ψ, where

P(dx_n) = Bern( x_n | (ξ + Σ_{m=1}^{n−1} x_m + 1) / (λ + n + 1) ).

2. X_n has ρ_{n,1} atoms at locations {ψ_{n,1,j}} with j ∈ [ρ_{n,1}] where there have not yet been atoms in any of X_1, . . . , X_{n−1}. Moreover,

M_{n,1} := γ · Γ(ξ + 1 + 1) Γ(λ + n − ξ − 1 + 1) / Γ(λ + n + 2), across n;
ρ_{n,1} ∼ Poisson(M_{n,1}), independently across n;
ψ_{n,1,j} ∼ G(dψ), iid across n, j.
Here, we have recovered the three-parameter extension of the Indian buffet
process [Teh and Görür, 2009, Broderick et al., 2013].
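This construction is also simple to simulate. The sketch below is illustrative only (the hyperparameter values and the implicit base measure are ours); it generates binary feature allocations in the manner of the three-parameter Indian buffet process recovered above.

```python
import numpy as np
from scipy.special import gammaln

rng = np.random.default_rng(6)
gam, xi, lam = 3.0, -1.0, 0.5         # illustrative hyperparameters
N = 10

dish_counts = []                      # per atom: list of 0/1 weights observed so far
for n in range(1, N + 1):
    # Existing atoms: Bernoulli((xi + sum_m x_m + 1) / (lam + n + 1)).
    for hist in dish_counts:
        p = (xi + sum(hist) + 1.0) / (lam + n + 1.0)
        hist.append(rng.binomial(1, p))
    # New atoms: Poisson(M_{n,1}) of them, each with weight 1 in X_n.
    logM = (np.log(gam) + gammaln(xi + 2.0) + gammaln(lam + n - xi)
            - gammaln(lam + n + 2.0))
    for _ in range(rng.poisson(np.exp(logM))):
        dish_counts.append([0] * (n - 1) + [1])

print("number of features after N rounds:", len(dish_counts))
```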
References
E. M. Airoldi, D. Blei, E. A. Erosheva, and S. E. Fienberg. Handbook of Mixed
Membership Models and Their Applications. CRC Press, 2014.
D. M. Blei, A. Y. Ng, and M. I. Jordan. Latent Dirichlet allocation. Journal of
Machine Learning Research, 3:993–1022, 2003.
T. Broderick, M. I. Jordan, and J. Pitman. Beta processes, stick-breaking, and
power laws. Bayesian Analysis, 7(2):439–476, 2012.
T. Broderick, M. I. Jordan, and J. Pitman. Cluster and feature modeling from
combinatorial stochastic processes. Statistical Science, 2013.
T. Broderick, L. Mackey, J. Paisley, and M. I. Jordan. Combinatorial clustering
and the beta negative binomial process. IEEE TPAMI, 2015.
P. Damien, J. Wakefield, and S. Walker. Gibbs sampling for Bayesian nonconjugate and hierarchical models by using auxiliary variables. Journal of the
Royal Statistical Society: Series B, 61(2):331–344, 1999.
M. H. DeGroot. Optimal Statistical Decisions. John Wiley & Sons, Inc, 1970.
P. Diaconis and D. Ylvisaker. Conjugate priors for exponential families. The
Annals of Statistics, 7(2):269–281, 1979.
K. Doksum. Tailfree and neutral random probabilities and their posterior distributions. The Annals of Probability, pages 183–201, 1974.
F. Doshi, K. T. Miller, J. Van Gael, and Y. W. Teh. Variational inference for
the Indian buffet process. In AISTATS, 2009.
M. D. Escobar. Estimating normal means with a Dirichlet process prior. Journal
of the American Statistical Association, 89(425):268–277, 1994.
M. D. Escobar and M. West. Bayesian density estimation and inference using
mixtures. Journal of the American Statistical Association, 90(430):577–588,
1995.
M. D. Escobar and M. West. Computing nonparametric hierarchical models. In
Practical nonparametric and semiparametric Bayesian statistics, pages 1–22.
Springer, 1998.
T. S. Ferguson. A Bayesian analysis of some nonparametric problems. The
Annals of Statistics, pages 209–230, 1973.
T. S. Ferguson. Prior distributions on spaces of probability measures. The
Annals of Statistics, pages 615–629, 1974.
T. Griffiths and Z. Ghahramani. Infinite latent feature models and the Indian
buffet process. In NIPS, 2006.
N. L. Hjort. Nonparametric Bayes estimators based on beta processes in models
for life history data. The Annals of Statistics, pages 1259–1294, 1990.
H. Ishwaran and L. F. James. Gibbs sampling methods for stick-breaking priors.
Journal of the American Statistical Association, 96(453), 2001.
L. F. James. Poisson latent feature calculus for generalized Indian buffet processes. arXiv preprint arXiv:1411.2936, 2014.
L. F. James, A. Lijoi, and I. Prünster. Posterior analysis for normalized random
measures with independent increments. Scandinavian Journal of Statistics,
36(1):76–97, 2009.
M. Kalli, J. E. Griffin, and S. G. Walker. Slice sampling mixture models. Statistics and Computing, 21(1):93–105, 2011.
Y. Kim. Nonparametric Bayesian estimators for counting processes. Annals of
Statistics, pages 562–588, 1999.
J. F. C. Kingman. Completely random measures. Pacific Journal of Mathematics, 21(1):59–78, 1967.
J. F. C. Kingman. Poisson Processes, volume 3. Oxford University Press, 1992.
A. Lijoi and I. Prünster. Models beyond the Dirichlet process. In N. L. Hjort,
C. Holmes, P. Müller, and S. G. Walker, editors, Bayesian Nonparametrics.
Cambridge Series in Statistical and Probabilistic Mathematics, 2010.
A. Y. Lo. Bayesian nonparametric statistical inference for Poisson point processes. Zeitschrift für Wahrscheinlichkeitstheorie und verwandte Gebiete, 59
(1):55–66, 1982.
A. Y. Lo. On a class of Bayesian nonparametric estimates: I. Density estimates.
Annals of Statistics, 12(1):351–357, 1984.
S. N. MacEachern. Estimating normal means with a conjugate style Dirichlet
process prior. Communications in Statistics-Simulation and Computation, 23
(3):727–741, 1994.
R. M. Neal. Markov chain sampling methods for Dirichlet process mixture
models. Journal of Computational and Graphical Statistics, 9(2):249–265,
2000.
R. M. Neal. Slice sampling. Annals of Statistics, pages 705–741, 2003.
P. Orbanz. Conjugate projective limits. arXiv preprint arXiv:1012.0363, 2010.
J. W. Paisley, A. K. Zaas, C. W. Woods, G. S. Ginsburg, and L. Carin. A
stick-breaking construction of the beta process. In ICML, pages 847–854,
2010.
J. W. Paisley, L. Carin, and D. M. Blei. Variational inference for stick-breaking
beta process priors. In ICML, pages 889–896, 2011.
J. W. Paisley, D. M. Blei, and M. I. Jordan. Stick-breaking beta processes and
the Poisson process. In AISTATS, pages 850–858, 2012.
M. Perman, J. Pitman, and M. Yor. Size-biased sampling of poisson point
processes and excursions. Probability Theory and Related Fields, 92(1):21–39,
1992.
J. Pitman. Random discrete distributions invariant under size-biased permutation. Advances in Applied Probability, pages 525–539, 1996a.
J. Pitman. Some developments of the Blackwell-MacQueen urn scheme. Lecture
Notes-Monograph Series, pages 245–267, 1996b.
J. Pitman. Poisson-Kingman partitions. Lecture Notes-Monograph Series, pages
1–34, 2003.
J. Sethuraman. A constructive definition of Dirichlet priors. Statistica Sinica,
4:639–650, 1994.
Y. W. Teh and D. Görür. Indian buffet processes with power-law behavior. In
NIPS, pages 1838–1846, 2009.
Y. W. Teh, M. I. Jordan, M. J. Beal, and D. M. Blei. Hierarchical Dirichlet
processes. Journal of the American Statistical Association, 101(476), 2006.
Y. W. Teh, D. Görür, and Z. Ghahramani. Stick-breaking construction for the
Indian buffet process. In AISTATS, pages 556–563, 2007.
R. Thibaux and M. I. Jordan. Hierarchical beta processes and the Indian buffet
process. In AISTATS, pages 564–571, 2007.
R. J. Thibaux. Nonparametric Bayesian Models for Machine Learning. PhD
thesis, UC Berkeley, 2008.
M. K. Titsias. The infinite gamma-Poisson feature model. In NIPS, pages
1513–1520, 2008.
S. G. Walker. Sampling the Dirichlet mixture model with slices. Communications in Statistics—Simulation and Computation, 36(1):45–54, 2007.
C. Wang and D. M. Blei. Variational inference in nonconjugate models. The
Journal of Machine Learning Research, 14(1):1005–1031, 2013.
M. West and M. D. Escobar. Hierarchical priors and mixture models, with application in regression and density estimation. In P. R. Freeman and A. F. M.
Smith, editors, Aspects of Uncertainty: A Tribute to D. V. Lindley. Institute
of Statistics and Decision Sciences, Duke University, 1994.
M. Zhou, L. Hannah, D. Dunson, and L. Carin. Beta-negative binomial process
and Poisson factor analysis. AISTATS, 2012.
Ergodicity and Accuracy of Optimal Particle Filters
for Bayesian Data Assimilation
arXiv:1611.08761v1 [math.PR] 26 Nov 2016
March 7, 2018
D. Kelly⋆ and A.M. Stuart†
⋆ New York University, Email: [email protected]
† California Institute of Technology, Email: [email protected]
Abstract
Data assimilation refers to the methodology of combining dynamical models and observed data with the objective
of improving state estimation. Most data assimilation algorithms are viewed as approximations of the Bayesian
posterior (filtering distribution) on the signal given the observations. Some of these approximations are controlled,
such as particle filters which may be refined to produce the true filtering distribution in the large particle number
limit, and some are uncontrolled, such as ensemble Kalman filter methods which do not recover the true filtering
distribution in the large ensemble limit. Other data assimilation algorithms, such as cycled 3DVAR methods, may
be thought of as approximating the mean of the posterior, but are also uncontrolled in general. For particle filters
and ensemble Kalman filters it is of practical importance to understand how and why data assimilation methods
can be effective when used with a fixed small number of particles, since for many large-scale applications it
is not practical to deploy algorithms close to the large particle limit asymptotic. In this paper we address this
question for particle filters and, in particular, study their accuracy (in the small noise limit) and ergodicity (for
noisy signal and observation) without appealing to the large particle number limit. We first prove the accuracy
and ergodicity properties for the true filtering distribution, working in the setting of conditional Gaussianity for
the dynamics-observation model. We then show that these properties are inherited by optimal particle filters for
any fixed number of particles. For completeness we also prove large particle number consistency results for the
optimal particle filters, by writing the update equations for the underlying distributions as recursions. In addition to
looking at the optimal particle filter with standard resampling, we derive all the above results for the Gaussianized
optimal particle filter and show that the theoretical properties are favorable when compared to the standard optimal
particle filter.
1 Introduction
1.1 Background and Literature Review
Data assimilation describes the blending of dynamical models with data, with the objective of improving state
estimation and forecasts. The use of data assimilation originated in the geophysical sciences, but is now ubiquitous in engineering and the applied sciences. In numerical weather prediction, large scale ocean-atmosphere
models are assimilated with massive data sets, comprising observational data from satellites, ground based
weather stations and underwater sensors for example [3]. Data assimilation is prevalent in robotics; the SLAM
problem seeks to use sensory data made by robots in an unknown environment to create a map of that environment and locate the robot within it [41]. It is used in modelling of traffic flow [48]. And data assimilation
is being used in bio-medical applications such as glucose-insulin systems [37] and the sleep cycle [38]. These
examples serve to illustrate the growth in the use of the methodology, its breadth of applicability and the very
different levels of fidelity present in the models and the data in these many applications.
Although typical data assimilation problems can be understood from a Bayesian perspective, for non-linear
and potentially high dimensional models it is often infeasible to make useful computations with the posterior. To
circumvent this problem, practitioners have developed assimilation methods that approximate the true posterior,
but for which computations are more feasible. In the engineering communities, particle filters have been developed for this purpose, providing empirical approximations of non-Gaussian posteriors [11, 10]. In geoscience
communities, methods are typically built on Kalman filtering theory, after making suitable Gaussian approximations [25]. These include variational methods like 3DVAR and 4DVAR [14, 28], the extended Kalman filter
(ExKF) [12] and the ensemble Kalman filter (EnKF) [5, 13] which relies on a Gaussian ansatz rendering it, in
general, invalid as an approximation of the true filtering distribution [26].
Despite their enormous success, many of these algorithms remain mysterious from a theoretical perspective.
At the heart of the mystery is the fact that data assimilation methods are frequently and successfully implemented in regimes where the approximate filter is not provably valid; it is not known which features of the
posterior (the true filtering distribution) are reflected in the approximation and which are not. For example,
the ensemble Kalman filter is often implemented with ensemble size several orders of magnitude smaller than
needed to reproduce large sample size behaviour, and is applied to problems for which the Gaussian ansatz may
not be valid; it nonetheless can still exhibit skillful state estimates, with high correlations between the estimate
and true trajectories [16, 29]. Indeed, the success of the methods in this non-asymptotic regime is the crux of
their success; the methods would often be computationally intractable at large ensemble sizes.
This lack of theory has motivated recent efforts to better understand the properties of data assimilation methods in the practical, non-asymptotic regimes. The 3DVAR algorithm has been investigated in the context of toy
models for numerical weather prediction, including the Lorenz-63 [24], Lorenz-96 [23] and 2d Navier-Stokes
equations [22]; see also [33]. These works focus primarily on the question of accuracy - how well does the state
estimate track the true underlying signal. Accuracy for the EnKF with fixed ensemble size was first investigated
in [19]. Accuracy for EnKF has been further developed in [30] using linear models with random coefficients, but
much more realistic (practical) assumptions on observations than [19], and moreover focussing on covariance
consistency through the Mahalanobis norm. The articles [42, 43] were the first to investigate the stability of
EnKF with fixed ensemble size, by formulating the filter as a Markov chain and applying coupling techniques.
This line of research has been continued in [8] by framing the EnKF as a McKean-Vlasov system. The limitations of the non-practical regimes have also been investigated; in [17] the authors construct simple dissipative
dynamical models for which the EnKF is shown to be highly unstable with respect to initial perturbations. This
was the first theoretical insight into the frequently observed effect of catastrophic filter divergence [15].
For the nonlinear filtering distribution itself, there has been a great deal of research over the last several
decades, particularly on the question of stability. Conditional ergodicity for the filtering distribution for general
nonlinear hidden Markov models has been investigated in [21] and later refined in [45]. Ergodicity for nonlinear
filters has been discussed in [20, 9, 7] and exponential convergence results were first obtained in [2, 4].
1.2 Our Contributions
For particle filters, much of the theoretical literature focuses on the question of consistency in the large ensemble
limit, that is, does the empirical approximation converge to the true posterior as the number of particles in the
ensemble N approaches infinity. In many high dimensional applications, particle filters are implemented in the
non-asymptotic regime, notably in robotics [41] and ocean-atmosphere forecasting [46]. In geosciences, new
filtering algorithms have been proposed to beat the curse of dimensionality and are implemented with ensemble
sizes many orders of magnitude smaller than the state dimension [47]. In this article, we continue the trend
of analyzing algorithms in practical regimes by focusing on the success of particle filters for fixed ensemble
sizes. In this regime, we ask the question of whether particle filters inherit important long-time properties from
the true posterior distributions. In particular, we address the following questions: if it is known that the true
posterior distribution is accurate and conditionally ergodic, can the same be proved of the approximate filter.
We focus our attention on the optimal particle filter (OPF) [1, 49, 27]. The OPF is a sequential importance
sampling procedure whereby particle updates are proposed using an optimal convex combination of the model
prediction and the observational data at the next time step. For details on the OPF, including the justification for
calling it optimal, see [11]. There are two main reasons that we focus our attentions on the OPF. First, the optimal particle filter is known to compare favorably to the standard particle filter, particularly from the perspective
of weight degeneracy in high dimensions [39, 40]. Indeed the optimal particle filter can be considered a special
case of more complicated filters that have been proposed to beat the curse of dimensionality [6, 47]. Second,
under natural assumptions on the dynamics-observation model, the optimal particle filters can be formulated as
a random dynamical system, which is very similar to the 3DVAR algorithm. This means that techniques for
proving accuracy for the 3DVAR filter in earlier literature [35] can be leveraged for the OPF.
Throughout the article, we make the assumption of conditional Gaussianity for the dynamics-observation
model. This framework is frequently employed in practice, particularly in geoscience data assimilation problems. Under this assumption, we show that the true posterior, the filtering distribution, satisfies the long-time
properties of stability and accuracy. The stability result is a type of conditional ergodicity: two copies of the
posterior distribution, initialized at different points but conditioned on the same fixed data, will converge to each
other in the long time limit, with exponential rate. The accuracy result states that, if sufficiently many variables
are observed, the posterior will concentrate around the true trajectory in the long time limit. Related conditional
ergodicity results are obtained under quite general assumptions in [44, 45]. However we believe that the approach we adopt in this article is of considerable value, not only here but also in other potential applications,
due to the simplicity of the underlying coupling argument, the formulation through random dynamical systems
and the explicit link to 3DVAR type algorithms.
We then show that, under the same model-observation assumptions, the OPF also exhibits the long-time
properties of stability and accuracy for any fixed ensemble size. For the conditional ergodicity result, we show
that the two copies of the particle ensembles, initialized differently, but updated with the same observational
data, will converge to each other in the long term limit, in a distributional sense. Both results use very similar
arguments to those employed for the posterior.
In addition, we also establish large ensemble consistency results for the OPF. Here we employ a technique
exposed very clearly in [34], which finds a recursion that is approximately satisfied by the bootstrap particle
filter, and leverages this fact to obtain an estimate on the distance between the true posterior and the empirical
approximation. We show that the same idea can be applied to not only the OPF, but a very large class of
sequential importance sampling procedures. We would like to comment that large particle consistency results
for particle filters should not be considered practical results for high dimensional data assimilation problems,
as in practice particle filters are never implemented in this regime. The consistency results are included here
as they are practically informative for low dimensional data assimilation problems and moreover as the results
are natural consequences of the random dynamical system formulation that has been adopted for accuracy
and ergodicity results. For high dimensional data assimilation problems, it may be more practical to look at
covariance consistency, as done in [30].
In addition to obtaining the results of stability, accuracy and consistency for the OPF, for which we perform resampling at the end of each assimilation cycle, we also prove the corresponding results for the so called
Gaussianized OPF. The Gaussianized OPF was introduced in [18] and differs from the OPF only in the implementation of the resampling. Nevertheless, it was shown numerically in [18] that the GOPF compares favorably
to the OPF, particularly when applied to high dimensional models. The analysis in this article lends theoretical
weight to the advantages of the GOPF over the OPF. In particular we find that the upper bound on the convergence rate for conditional ergodicity for the GOPF has favourable dependence on dimension when compared
with the OPF.
1.3 Structure of Article and Notation
The remainder of the article is structured as follows. At the end of this section we introduce some notation
and terminology that will be useful in the sequel. In Section 2, we formulate the Bayesian problem of data
assimilation, introduce the model-observation assumptions under which we work, and prove the stability and
accuracy results for the true posterior. In Section 3, we introduce the bootstrap particle filter, optimal particle
filter and Gaussianized optimal particle filter. In Section 4, we prove the conditional ergodicity results for the
optimal particle filters. In Section 5, we prove the accuracy results for the optimal particle filters. Finally, in
Section 6, we prove the consistency results for the optimal particle filters.
Throughout we let X denote the finite dimensional Euclidean state space and we let Y denote the finite
dimensional Euclidean observation space. We write M(X ) for the set of probability measures on X . We
denote the Euclidean norm on X by |·| and, for a symmetric positive definite matrix A ∈ L(X, X), we define |·|_A = |A^{−1/2}·|. We define S^N : M(X) → M(X) to be the sampling operator S^N μ = (1/N) Σ_{n=1}^N δ_{u^{(n)}}, where u^{(n)} ∼ μ are i.i.d. random variables.
2 Bayesian Data Assimilation
We describe the set-up which encompasses all the work in this paper, and then study the conditional ergodicity
and accuracy of the true filtering distribution.
2.1 Set-Up
The state model is taken to be a discrete time Markov chain {uk }k≥0 taking values in the state space X . We
assume that the initial condition u0 of the chain is distributed according to µ0 , where µ0 ∈ M(X ). The
transition kernel for the Markov chain is given by P (uk+1 |uk ). For each k ≥ 1, we make an observation of the
Markov chain
y_{k+1} = h(u_{k+1}) + η_{k+1},   (2.1)
where h : X → Y maps the state space to the observation space, and ηk ∼ N (0, Γ) are centred i.i.d. random
variables representing observational noise. We denote by Yk = (y1 , . . . , yk ) the accumulated observational data
up to time k. We are interested in studying and approximating the filtering distribution µk (·) = P(uk ∈ ·|Yk )
for all k ≥ 1. We will denote the density of µk by P (uk |Yk ).
The focus of this article is analysis of variants of the optimal particle filter, and in this setting we will always
require the following assumptions on the dynamics-observation model:
Assumption 2.1. The dynamics-observation model is given by
$$u_{k+1} = \psi(u_k) + \xi_k, \qquad (2.2a)$$
$$y_{k+1} = H u_{k+1} + \eta_{k+1}, \qquad (2.2b)$$
where $u_0 \sim \mu_0$, $\xi_k \sim N(0,\sigma^2\Sigma_0)$ i.i.d., $\eta_k \sim N(0,\gamma^2\Gamma_0)$ i.i.d., and $\sigma,\gamma \neq 0$. We require that $\Sigma_0$ and $\Gamma_0$ are strictly positive-definite, and that the function $\psi(\cdot)$ is bounded.
In order to facilitate comparison with the standard bootstrap filter, for which the theory we describe is
sometimes derived under different conditions, we retain the general setting of a dynamics given by a general
Markov chain with observations (2.1).
For most of the results in this article, we will be interested in properties of conditional distributions P (uk |Yk )
when the observational data Yk is generated by a fixed realization of the model. For this reason, we introduce
the following notation to emphasize that we are considering a fixed realization of the data, generated by a fixed
trajectory of the underlying dynamical system.
Assumption 2.2. Fix $u_0^\dagger \in \mathcal{X}$ and positive semi-definite matrices $\Sigma_*$ and $\Gamma_*$ on $\mathcal{X}$ and $\mathcal{Y}$ respectively. Let $\{u_k^\dagger\}$ be a realization of the dynamics satisfying
$$u_{k+1}^\dagger = \psi(u_k^\dagger) + r\gamma\,\xi_k^\dagger,$$
where $u_0^\dagger \in \mathcal{X}$ is fixed and $\xi_k^\dagger \sim N(0,\Sigma_*)$ i.i.d., and similarly define $\{y_k^\dagger\}$ by
$$y_{k+1}^\dagger = H u_{k+1}^\dagger + \gamma\,\eta_{k+1}^\dagger, \qquad (2.3)$$
where $\eta_{k+1}^\dagger \sim N(0,\Gamma_*)$ i.i.d. We will refer to $\{u_k^\dagger\}_{k\ge0}$ as the true signal and $\{y_k^\dagger\}_{k\ge1}$ as the given fixed data. As above, we use the shorthand $Y_k^\dagger = \{y_i^\dagger\}_{i=1}^{k}$.
Note that this data is not necessarily generated from the same statistical model used to define the filtering
distribution since Σ∗ and Γ∗ may differ from Σ and Γ, and the initial condition is fixed. Note also that we have
set σ = rγ and that, when studying accuracy, we will consider families of data sets and truths parameterized by
γ → 0; in this setting it is natural to think of r and the noise sequences {ξk† }k≥0 and {ηk† }k≥0 as fixed, whilst
the truth and data sequences will depend on the value of γ.
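To make the scalings concrete, the following Python sketch generates one realization of the true signal and the fixed data in the spirit of Assumption 2.2. The particular drift $\psi$, observation matrix $H$, covariances and the values of $r$, $\gamma$ are assumptions chosen purely for illustration, not quantities prescribed by the text.

```python
import numpy as np

def generate_truth_and_data(psi, H, Sigma_star, Gamma_star, u0, r, gamma, K, rng):
    """Generate {u_k^dag}_{k=0..K} and {y_k^dag}_{k=1..K} as in Assumption 2.2.

    The truth evolves as u_{k+1} = psi(u_k) + r*gamma*xi_k with xi_k ~ N(0, Sigma_*),
    and the data is y_{k+1} = H u_{k+1} + gamma*eta_{k+1} with eta_{k+1} ~ N(0, Gamma_*).
    """
    d, p = H.shape[1], H.shape[0]
    Ls, Lg = np.linalg.cholesky(Sigma_star), np.linalg.cholesky(Gamma_star)
    u = np.zeros((K + 1, d)); u[0] = u0
    y = np.zeros((K, p))
    for k in range(K):
        xi = Ls @ rng.standard_normal(d)       # xi_k^dag ~ N(0, Sigma_*)
        eta = Lg @ rng.standard_normal(p)      # eta_{k+1}^dag ~ N(0, Gamma_*)
        u[k + 1] = psi(u[k]) + r * gamma * xi  # signal noise has size sigma = r * gamma
        y[k] = H @ u[k + 1] + gamma * eta      # observation noise has size gamma
    return u, y

# Illustrative usage (all choices are assumptions): a bounded drift, as required by Assumption 2.1.
rng = np.random.default_rng(0)
psi = lambda u: np.tanh(u)
H = np.eye(2)
u_true, y_data = generate_truth_and_data(psi, H, np.eye(2), np.eye(2),
                                         u0=np.ones(2), r=0.5, gamma=0.1, K=50, rng=rng)
print(u_true.shape, y_data.shape)
```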
Assumption 2.1 renders the conditional distribution $P(u_k|Y_k)$ tractable as it has a conditionally Gaussian structure described by the inhomogeneous Markov process with transition kernel
$$p_{k+1}(u_k, du_{k+1}) \propto \exp\Big( -\tfrac{1}{2}|y_{k+1} - H u_{k+1}|_\Gamma^2 - \tfrac{1}{2}|u_{k+1} - \psi(u_k)|_\Sigma^2 \Big)\, du_{k+1} \qquad (2.4)$$
and initialized at the measure $\mu_0$. A simple completion of the square yields an alternative representation for the transition kernel, namely
$$p_{k+1}(u_k, du_{k+1}) \propto \exp\Big( -\tfrac{1}{2}|y_{k+1} - H\psi(u_k)|_S^2 - \tfrac{1}{2}|u_{k+1} - m_{k+1}|_C^2 \Big)\, du_{k+1}, \qquad (2.5)$$
where
$$C^{-1} = \Sigma^{-1} + H^*\Gamma^{-1}H, \qquad S = H\Sigma H^* + \Gamma, \qquad m_{k+1} = C\big( \Sigma^{-1}\psi(u_k) + H^*\Gamma^{-1}y_{k+1} \big). \qquad (2.6)$$
The conditional mean $m_{k+1}$ is often given in Kalman filter form
$$m_{k+1} = (I - KH)\psi(u_k) + K y_{k+1}, \qquad (2.7)$$
where $K$ is the Kalman gain matrix
$$K = \Sigma H^* S^{-1}. \qquad (2.8)$$
The expression (2.4) is an application of Bayes’ formula in the form
pk+1 (uk , duk+1 ) ∝ P (yk+1 |uk+1 )P (uk+1 |uk )duk+1
whilst (2.5) follows from a second application of Bayes’ formula to derive the identity
P (yk+1 |uk+1 )P (uk+1 |uk )duk+1 = P (yk+1 |uk )P (uk+1 |uk , yk+1 )duk+1 .
We note that
$$P(y_{k+1}|u_k) = Z_S^{-1}\exp\Big( -\tfrac{1}{2}|y_{k+1} - H\psi(u_k)|_S^2 \Big), \qquad P(u_{k+1}|u_k, y_{k+1}) = Z_C^{-1}\exp\Big( -\tfrac{1}{2}|u_{k+1} - m_{k+1}|_C^2 \Big). \qquad (2.9)$$
These formulae are prevalent in the data assimilation literature; in particular (2.7) describes the evolution of
the mean state estimate in the cycled 3DVAR algorithm, setting uk+1 = mk+1 [25]. We will make use of the
formulae in section 3 when describing optimal particle filters as random dynamical systems.
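As a concrete illustration of formulae (2.6)–(2.8), the following hedged Python sketch computes $C$, $S$, $K$ and the Kalman-form conditional mean for given $\Sigma$, $\Gamma$, $H$; the particular matrices and vectors used in the example are assumptions made only for demonstration.

```python
import numpy as np

def gaussian_update_matrices(Sigma, Gamma, H):
    """Return (C, S, K) from (2.6) and (2.8):
    C^{-1} = Sigma^{-1} + H^T Gamma^{-1} H,  S = H Sigma H^T + Gamma,  K = Sigma H^T S^{-1}."""
    S = H @ Sigma @ H.T + Gamma
    K = Sigma @ H.T @ np.linalg.inv(S)
    C = np.linalg.inv(np.linalg.inv(Sigma) + H.T @ np.linalg.inv(Gamma) @ H)
    return C, S, K

def conditional_mean(psi_u, y_next, K, H):
    """Kalman-form conditional mean (2.7): m = (I - K H) psi(u) + K y."""
    I = np.eye(K.shape[0])
    return (I - K @ H) @ psi_u + K @ y_next

# Illustrative example (assumed values).
Sigma, Gamma, H = 0.5 * np.eye(2), 0.1 * np.eye(2), np.eye(2)
C, S, K = gaussian_update_matrices(Sigma, Gamma, H)
m = conditional_mean(np.array([1.0, -1.0]), np.array([0.9, -1.1]), K, H)
print(C, S, K, m, sep="\n")
```

One can check numerically that this value of $m$ agrees with the completed-square form $C(\Sigma^{-1}\psi(u) + H^*\Gamma^{-1}y)$ in (2.6).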
We now state two theorems concerning the ergodicity of the filtering distribution itself. The first shows that,
when initialized at two different points, the filtering distributions converge towards one another at a geometric
rate. The second shows that, on average over all instances of randomness in the signal process and the observations process, the mean under the filtering distribution is, asymptotically for large k, O(γ) from the truth. We
state and prove these two results in this section. The remainder of the paper is devoted to establishing analogous
results for various particle filters.
2.2 Conditional Ergodicity
The ergodicity result uses a metric on probability measures to quantify convergence of differently initialized
posteriors in the long time limit. To this end, we define the total variation metric on $\mathcal{M}(\mathcal{X})$ by
$$d_{TV}(\mu,\nu) = \frac{1}{2}\sup_{|h|\le1}|\mu(h) - \nu(h)|,$$
where the supremum is taken over all bounded functions $h : \mathcal{X} \to \mathbb{R}$ with $|h| \le 1$, and where we define $\mu(h) := \int_{\mathcal{X}} h(x)\,\mu(dx)$ for any probability measure $\mu \in \mathcal{M}(\mathcal{X})$ and any real-valued test function $h$ bounded by $1$ on $\mathcal{X}$.
Theorem 2.3. Consider the filtering distributions $\mu_k, \mu'_k$, defined by Assumption 2.1, and initialized at two different Dirac masses $\mu_0 = \delta_{z_0}$ and $\mu'_0 = \delta_{z'_0}$ respectively. Assume moreover that the data appearing in each filtering distribution is the same, and given by $\{y_k^\dagger\}_{k\ge1}$ defined in Assumption 2.2. Then there exists $z \in (0,1)$ such that, almost surely with respect to the randomness generating the data $\{y_k^\dagger\}_{k\ge1}$,
$$\lim_{k\to\infty} d_{TV}\big(\mu_k, \mu'_k\big)^{1/k} = z. \qquad (2.10)$$
Proof. The proof uses a standard coupling procedure, as can be found for instance in [32]; we follow the
exposition of the methodology in [31]. We divide the proof into three steps; Step A is where we recast the
problem as a coupling problem, Step B is where we obtain a minorization condition and Step C is where we
apply the law of large numbers to obtain a rate. Subsequent proofs will use the same structure and it will be
useful to refer to specific steps in the proof later on.
Step A: We first introduce two Markov chains $z_k, z'_k$ whose laws are given by $\mu_k, \mu'_k$ respectively. To this end, define $z_k$ to be the inhomogeneous Markov chain, with initialization $z_0$ coinciding with the center of the Dirac mass defined above, and with inhomogeneous transition kernel $p_k$ defined by (2.4). If we define recursively the kernels $p^k(z,\cdot)$ by
$$p^{k+1}(z,\cdot) = \int_{\mathcal{X}} p_{k+1}(v,\cdot)\, p^k(z,dv) \qquad (2.11)$$
with $p^0(z,\cdot) = \delta_z(\cdot)$, then it is easy to see that $\mu_k = p^k(z_0,\cdot)$ and $\mu'_k = p^k(z'_0,\cdot)$ for all $k \ge 0$. Since $p^k(z,\cdot)$ is the law of $z_k$, we see that $z_k$ is indeed the required Markov chain, and similarly for $z'_k$.
The main step is to prove a minorization condition for the transition kernel $p_{k+1}$. That is, we seek a measure $Q \in \mathcal{M}(\mathcal{X})$ and a sequence of constants $\epsilon_k > 0$ satisfying
$$p_{k+1}(u,A) \ge \epsilon_k Q(A) \qquad (2.12)$$
for all $u \in \mathcal{X}$ and all measurable sets $A \subset \mathcal{X}$. Given a minorization condition, we obtain the result via the following coupling argument. The minorization condition allows us to define a new Markov kernel
$$\tilde{p}_{k+1}(x,A) = (1-\epsilon_k)^{-1}\big( p_{k+1}(x,A) - \epsilon_k Q(A) \big).$$
We then define a Markov chain $\tilde{z}_k$ by $\tilde{z}_0 = z_0$ and
$$\tilde{z}_{k+1} \sim \tilde{p}_{k+1}(\tilde{z}_k,\cdot) \quad \text{w.p. } (1-\epsilon_k), \qquad (2.13a)$$
$$\tilde{z}_{k+1} \sim Q(\cdot) \quad \text{w.p. } \epsilon_k, \qquad (2.13b)$$
and similarly for $\tilde{z}'_k$. By evaluating $\mathbb{E}\varphi(\tilde{z}_k)$ for a suitable class of test functions $\varphi$, it is easy to verify that the Markov chain $\tilde{z}_k$ is equivalent in law to $z_k$.
We derive the minorization condition which allows this coupling below, in Step B. Here in Step A we couple
the two Markov chains z̃k , z̃k′ in such a way that z̃k = z̃k′ for all k ≥ τ , where τ is the first coupling time, that
is, the smallest n such that z̃n = z̃n′ . Importantly the two chains z̃k , z̃k′ share the same random variables and
live on the same probability space; in particular, once a draw from Q(·) is made, the chains can be coupled
and remain identical thereafter. Let Ak be the event that, in (2.13), the first (state dependent) Markov kernel is
picked at times j = 0, · · · , k − 1. Then since pk (z0 , ·) = µk is the law of z̃k , and similarly for z̃k′ , we have that
$$d_{TV}\big(p^k(z_0,\cdot), p^k(z'_0,\cdot)\big) = \frac{1}{2}\sup_{|f|_\infty\le1}\big|\mathbb{E}\big(f(\tilde{z}_k) - f(\tilde{z}'_k)\big)\big| = \frac{1}{2}\sup_{|f|_\infty\le1}\Big|\mathbb{E}\Big( \big(f(\tilde{z}_k) - f(\tilde{z}'_k)\big)\mathbb{I}_{A_k} + \big(f(\tilde{z}_k) - f(\tilde{z}'_k)\big)\mathbb{I}_{A_k^c} \Big)\Big|.$$
Note that for this coupling the second term vanishes, as in the event $A_k^c$ the two chains will have coupled for $\tau < k$. It follows that
$$d_{TV}\big(p^k(z_0,\cdot), p^k(z'_0,\cdot)\big) \le \mathbb{E}(\mathbb{I}_{A_k}) = \mathbb{P}(A_k) = \prod_{j=1}^{k}(1-\epsilon_j).$$
To obtain the result (2.10), we need to understand the limiting behaviour of the constants ǫj appearing in the
minorization condition (2.12). Hence we turn our attention toward obtaining the minorization condition.
Step B: If we recall (2.4) and let $\rho_{k,0}^\dagger = \sigma H\xi_k^\dagger + \gamma\eta_k^\dagger$, then we obtain, via $Z \le 1$, the identity
$$p_{k+1}(u,dv) \ge \exp\Big( -\tfrac{1}{2}|H\psi(u_k^\dagger) + \rho_{k,0}^\dagger - Hv|_\Gamma^2 - \tfrac{1}{2}|v - \psi(u)|_\Sigma^2 \Big)\,dv \qquad (2.14)$$
$$\ge \exp\Big( -2|H\psi(u_k^\dagger)|_\Gamma^2 - |\psi(u)|_\Sigma^2 - 2|\rho_{k,0}^\dagger|_\Gamma^2 - |Hv|_\Gamma^2 - |v|_\Sigma^2 \Big)\,dv \qquad (2.15)$$
$$\ge \exp\Big( -\lambda^2 - 2|\rho_{k,0}^\dagger|_\Gamma^2 \Big)\exp\Big( -\tfrac{1}{2}|v|_D^2 \Big)\,dv \qquad (2.16)$$
where
$$\lambda^2 = \sup_{u,v}\Big( 2|H\psi(v)|_\Gamma^2 + |\psi(u)|_\Sigma^2 \Big) \qquad (2.17)$$
and
$$D = \frac{1}{2}\big( \Sigma^{-1} + H^*\Gamma^{-1}H \big)^{-1}.$$
Thus we have a minorization condition of the form
$$p_{k+1}(u,A) \ge \epsilon_k Q(A)$$
where $Q(\cdot)$ is the Gaussian $N(0,D)$ and $\epsilon_k = \exp\big( -\lambda^2 - 2|\rho_{k,0}^\dagger|_\Gamma^2 \big)$.
Step C: By the argument above, it follows that
$$d_{TV}\big(p^k(z_0,\cdot), p^k(z'_0,\cdot)\big)^{1/k} \le z_k,$$
where $z_k = \big(\prod_{j=1}^{k}(1-\epsilon_j)\big)^{1/k}$. Since the $\epsilon_k$ are i.i.d. and integrable, by the law of large numbers, almost surely with respect to the randomness generating the true signal and the data, we have
$$\ln z_k = \frac{1}{k}\sum_{j=1}^{k}\ln(1-\epsilon_j) \to \mathbb{E}\ln(1-\epsilon_1) = -\mathbb{E}\sum_{n=1}^{\infty}\frac{1}{n}\epsilon_1^n.$$
But $\epsilon_1 = c\exp(-2|\rho_{1,0}^\dagger|_\Gamma^2)$ for some $c \in (0,1]$. Since $\rho_{1,0}^\dagger$ is Gaussian, it follows that the $n$th moment of $\epsilon_1$ scales like $n^{-1/2}$, so that the limit of $\ln z_k$ is negative and finite; the result follows.
2.3 Accuracy
We now discuss accuracy of the posterior filtering distribution in the small noise limit γ ≪ 1.
Assumption 2.4. Let $r = \sigma/\gamma$ and assume that there is $r_c > 0$ such that, for all $r \in [0,r_c)$, the function $(I-KH)\psi(\cdot)$, with $K$ defined through (2.6) and (2.8), is globally Lipschitz on $\mathcal{X}$ with respect to the norm $\|\cdot\|$ and with constant $\alpha = \alpha(r) < 1$.

Theorem 2.5. Suppose Assumption 2.4 holds for some $r_c > 0$. Then for all $r \in [0,r_c)$ we have
$$\limsup_{k\to\infty}\,\mathbb{E}\|u_k - \mathbb{E}^{\mu_k}u_k\|^2 \le c\gamma^2,$$
where $\mathbb{E}^{\mu_k}$ denotes expectation over the posterior $\mu_k$ defined through Assumption 2.1 and $\mathbb{E}$ denotes expectation over the dynamical model and the observational data.
Proof. This follows similarly to Corollary 4.3 in [36], using the fact that the mean of the filtering distribution is optimal in the sense that
$$\mathbb{E}\|u_k - \mathbb{E}^{\mu_k}u_k\|^2 \le \mathbb{E}\|u_k - m_k\|^2$$
for any $Y_k$-measurable sequence $\{m_k\}$. We use for $m_k$ the 3DVAR filter
$$m_{k+1} = (I-KH)\psi(m_k) + K y_{k+1}.$$
Let $e_k = u_k - m_k$. Following closely Theorem 4.10 of [25] we obtain
$$\mathbb{E}\|e_{k+1}\|^2 \le \alpha^2\,\mathbb{E}\|e_k\|^2 + O(\gamma^2).$$
Application of the Gronwall lemma, plus use of the optimality property, gives the required bound.
3 Particle Filters With Resampling
In this section we introduce the bootstrap particle filter, and the optimal particle filters, in all three cases with
resampling at every step. Assumption 2.1 ensures that the three particle filters have an elegant interpretation
as a random dynamical system (RDS) which, in addition, is useful for our analyses. We thus introduce the
filters in this way before giving the algorithmic definition which is more commonly found in the literature. The
bootstrap particle filter will not be the focus of subsequent theory, but does serve as an important motivation for
the optimal particle filters, and in particular for the consistency results in Section 6.
For each of the three particle filters, we will make frequent use of a resampling operator, which draws a sample $u_k^{(n)}$ from $\{\hat{u}_k^{(m)}\}_{m=1}^N$ with weights $\{w_k^{(m)}\}_{m=1}^N$ which sum to one. To define this operator, we define the intervals $I_k^{(m)} = [\alpha_k^{(m)}, \alpha_k^{(m+1)})$, where $\alpha_k^{(m+1)} = \alpha_k^{(m)} + w_k^{(m)}$, and then set
$$u_k^{(n)} = \sum_{m=1}^{N}\mathbb{I}_{I_k^{(m)}}\big(r_k^{(n)}\big)\,\hat{u}_k^{(m)} \qquad (3.1)$$
where $r_k^{(n)} \sim U(0,1)$ i.i.d. Since the weights sum to one, $r_k^{(n)}$ will lie in exactly one of the intervals $I_k^{(i_*)}$ and we will have $u_k^{(n)} = \hat{u}_k^{(i_*)}$. We also notice that
$$\frac{1}{N}\sum_{m=1}^{N}\delta_{u_k^{(m)}} = S^N\sum_{m=1}^{N} w_k^{(m)}\delta_{\hat{u}_k^{(m)}},$$
where $S^N$ is the sampling operator defined previously.
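The interval construction in (3.1) is exactly multinomial resampling; the following Python sketch is one possible implementation, with the inverse-CDF lookup done via `numpy.searchsorted` (an implementation choice, not something prescribed by the text).

```python
import numpy as np

def resample(particles, weights, rng):
    """Draw N samples from {particles[m]} with probabilities {weights[m]}, as in (3.1).

    Each uniform r^(n) falls in exactly one interval I^(m) = [alpha^(m), alpha^(m+1)),
    where alpha^(m+1) = alpha^(m) + w^(m); searchsorted locates that interval.
    """
    N = len(weights)
    alpha = np.cumsum(weights)                       # right end-points of the intervals
    r = rng.uniform(size=N)                          # r^(n) ~ U(0,1), i.i.d.
    idx = np.minimum(np.searchsorted(alpha, r), N - 1)
    return particles[idx]

# Tiny usage example (assumed values): three weighted particles in R^1.
rng = np.random.default_rng(1)
parts = np.array([[0.0], [1.0], [2.0]])
w = np.array([0.1, 0.6, 0.3])
print(resample(parts, w, rng))
```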
3.1 The Bootstrap Particle Filter
The bootstrap particle filter (BPF) approximates the filtering distribution $\mu_k$ with an empirical measure
$$\rho_k^N = \frac{1}{N}\sum_{n=1}^{N}\delta_{u_k^{(n)}}. \qquad (3.2)$$
The particle positions $\{u_k^{(n)}\}_{n=1}^N$ are defined as follows:
$$\hat{u}_{k+1}^{(n)} = \psi(u_k^{(n)}) + \xi_k^{(n)}, \qquad \xi_k^{(n)} \sim N(0,\Sigma) \text{ i.i.d.}, \qquad (3.3)$$
$$u_{k+1}^{(n)} = \sum_{m=1}^{N}\mathbb{I}_{I_{k+1}^{(m)}}\big(r_{k+1}^{(n)}\big)\,\hat{u}_{k+1}^{(m)},$$
where the second equation uses the resampling operator defined in (3.1) with weights computed according to
$$w_{k+1}^{(n),*} = \exp\Big( -\tfrac{1}{2}|y_{k+1} - H\hat{u}_{k+1}^{(n)}|_\Gamma^2 \Big), \qquad w_{k+1}^{(n)} = \frac{w_{k+1}^{(n),*}}{\sum_{j=1}^{N} w_{k+1}^{(j),*}}. \qquad (3.4)$$
Thus, in the RDS each particle is propagated forward using the dynamical model and then re-sampled from the weighted particles to account for the observation likelihood.
As with most particle filters, the motivation for the bootstrap particle filter stems from an importance sampling scheme applied to a particular decomposition of the filtering distribution. By Bayes formula, we have
$$\begin{aligned}
P(u_{k+1}|Y_{k+1}) &= P(u_{k+1}|Y_k, y_{k+1}) \\
&= \frac{1}{P(y_{k+1}|Y_k)}\,P(y_{k+1}|u_{k+1}, Y_k)\,P(u_{k+1}|Y_k) \\
&= \frac{1}{P(y_{k+1}|Y_k)}\int P(y_{k+1}|u_{k+1})\,P(u_{k+1}|u_k, Y_k)\,P(u_k|Y_k)\,du_k \\
&= \frac{1}{P(y_{k+1}|Y_k)}\int P(y_{k+1}|u_{k+1})\,P(u_{k+1}|u_k)\,P(u_k|Y_k)\,du_k \\
&= \frac{1}{P(y_{k+1}|Y_k)}\,P(y_{k+1}|u_{k+1})\int P(u_{k+1}|u_k)\,P(u_k|Y_k)\,du_k.
\end{aligned} \qquad (3.5)$$
The bootstrap particle filter approximates the posterior via a sequential application of importance sampling, using $P(u_{k+1}|Y_k) = \int P(u_{k+1}|u_k)\,P(u_k|Y_k)\,du_k$ as the proposal and re-weighting according to the likelihood $P(y_{k+1}|u_{k+1})$. Thus the method is typically described by the following algorithm for updating the particle positions. The particles are initialized with $u_0^{(n)} \sim \mu_0$ and then updated by
1. Draw $\hat{u}_{k+1}^{(n)} \sim P(u_{k+1}|u_k^{(n)})$.
2. Define the weights $w_{k+1}^{(n)}$ for $n = 1,\dots,N$ by
$$w_{k+1}^{(n),*} = P(y_{k+1}|\hat{u}_{k+1}^{(n)}), \qquad w_{k+1}^{(n)} = \frac{w_{k+1}^{(n),*}}{\sum_{m=1}^{N} w_{k+1}^{(m),*}}.$$
3. Draw $u_{k+1}^{(n)}$ from $\{\hat{u}_{k+1}^{(n)}\}_{n=1}^N$ with weights $\{w_{k+1}^{(n)}\}_{n=1}^N$.
Under Assumption 2.1, it is clear that the sampling and re-weighting procedures are consistent with (3.3). Note
that the normalization factor P (yk+1 |Yk ) is not required in the algorithm and is instead approximated via the
normalization procedure in the second step.
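A minimal Python sketch of one BPF assimilation cycle under Assumption 2.1 (linear observation operator $H$, Gaussian noise) is given below; it combines the propagation (3.3), the Gaussian weights (3.4) and multinomial resampling. All interface names and the use of `numpy.random.Generator.choice` (equivalent in law to the interval construction (3.1)) are choices made for this example.

```python
import numpy as np

def bpf_step(particles, y_next, psi, H, Sigma, Gamma, rng):
    """One bootstrap particle filter step: propagate, weight by the likelihood, resample."""
    N, d = particles.shape
    L_sigma = np.linalg.cholesky(Sigma)
    # 1. Propagate with the dynamical model: u_hat = psi(u) + xi, xi ~ N(0, Sigma).
    u_hat = np.array([psi(u) for u in particles]) + rng.standard_normal((N, d)) @ L_sigma.T
    # 2. Weight by the likelihood exp(-0.5 |y - H u_hat|^2_Gamma), then normalize.
    Gamma_inv = np.linalg.inv(Gamma)
    innov = y_next - u_hat @ H.T
    logw = -0.5 * np.einsum("ni,ij,nj->n", innov, Gamma_inv, innov)
    w = np.exp(logw - logw.max()); w /= w.sum()
    # 3. Resample (multinomial draw with the normalized weights).
    return u_hat[rng.choice(N, size=N, p=w)]
```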
It is useful to define the related measure
$$\hat{\rho}_k^N = \sum_{n=1}^{N} w_k^{(n)}\delta_{\hat{u}_k^{(n)}}, \qquad (3.6)$$
with $\hat{\rho}_0^N = \mu_0$, which is related to the bootstrap particle filter by $\rho_k^N = S^N\hat{\rho}_k^N$. As we shall see in Section 6, the advantage of $\hat{\rho}_k^N$ is that it has a recursive definition which allows for elegant proofs of consistency results [34].
3.2 Optimal Particle Filter
The optimal particle filter with resampling can also be defined as an RDS. We once again approximate the filtering distribution $\mu_k$ with an empirical distribution
$$\mu_k^N = \frac{1}{N}\sum_{n=1}^{N}\delta_{u_k^{(n)}}. \qquad (3.7)$$
Under Assumption 2.1 the particles in this approximation are defined as follows. The particle positions are initialized with $u_0^{(n)} \sim \mu_0$ and then evolve according to the RDS
$$\hat{u}_{k+1}^{(n)} = (I-KH)\psi(u_k^{(n)}) + K y_{k+1} + \zeta_k^{(n)}, \qquad \zeta_k^{(n)} \sim N(0,C) \text{ i.i.d.}, \qquad (3.8)$$
$$u_{k+1}^{(n)} = \sum_{m=1}^{N}\mathbb{I}_{I_{k+1}^{(m)}}\big(r_{k+1}^{(n)}\big)\,\hat{u}_{k+1}^{(m)},$$
where $C, S, K$ are defined in (2.6)–(2.8) and, as with the BPF, the second equation uses the resampling operator defined in (3.1), but now using weights computed by
$$w_{k+1}^{(n),*} = \exp\Big( -\tfrac{1}{2}|y_{k+1} - H\psi(u_k^{(n)})|_S^2 \Big), \qquad w_{k+1}^{(n)} = \frac{w_{k+1}^{(n),*}}{\sum_{m=1}^{N} w_{k+1}^{(m),*}}. \qquad (3.9)$$
In light of the formulae given in (2.9), which are derived under Assumption 2.1, we see that the optimal particle filter is updating the particle positions by sampling from $P(u_{k+1}|u_k^{(n)}, y_{k+1})$ and then re-sampling to account for the likelihood factor $P(y_{k+1}|u_k^{(n)})$. In particular, without necessarily making Assumption 2.1, the optimal particle filter is a sequential importance sampling scheme applied to the following decomposition of the filtering distribution
$$\begin{aligned}
P(u_{k+1}|Y_{k+1}) &= \int_{\mathcal{X}} P(u_{k+1}, u_k|Y_{k+1})\,du_k \\
&= \int_{\mathcal{X}} P(u_{k+1}|u_k, y_{k+1})\,P(u_k|Y_{k+1})\,du_k \\
&= \int_{\mathcal{X}} \frac{P(y_{k+1}|u_k)}{P(y_{k+1}|Y_k)}\,P(u_{k+1}|u_k, y_{k+1})\,P(u_k|Y_k)\,du_k.
\end{aligned} \qquad (3.10)$$
In the algorithmic setting, the filter is initialized with $u_0^{(n)} \sim \mu_0$, then for $k \ge 0$
1. Draw $\hat{u}_{k+1}^{(n)}$ from $P(u_{k+1}|u_k^{(n)}, y_{k+1})$.
2. Define the weights $w_{k+1}^{(n)}$ for $n = 1,\dots,N$ by
$$w_{k+1}^{(n),*} = P(y_{k+1}|u_k^{(n)}), \qquad w_{k+1}^{(n)} = \frac{w_{k+1}^{(n),*}}{\sum_{m=1}^{N} w_{k+1}^{(m),*}}.$$
3. Draw $u_{k+1}^{(n)}$ from $\{\hat{u}_{k+1}^{(m)}\}_{m=1}^N$ with weights $\{w_{k+1}^{(m)}\}_{m=1}^N$.
It is important to note that, although the OPF is well defined in this general setting for any choice of dynamics-observation model, it is only implementable under stringent assumptions on the forward and observation model, such as those given in Assumption 2.1; under this assumption step 1 may be implemented using the formulae given in (2.9) and exploited in the derivation of (3.8). However, we emphasize that models satisfying Assumption 2.1 do arise frequently in practice.
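For concreteness, the following Python sketch implements one OPF step under Assumption 2.1, combining the RDS (3.8) with the weights (3.9). It is a minimal illustration under the stated Gaussian/linear-observation assumptions; the function names and the multinomial resampling call are choices made for this example.

```python
import numpy as np

def opf_step(particles, y_next, psi, H, Sigma, Gamma, rng):
    """One optimal particle filter step (3.8)-(3.9): propagate, weight, resample."""
    N, d = particles.shape
    S = H @ Sigma @ H.T + Gamma
    K = Sigma @ H.T @ np.linalg.inv(S)
    C = np.linalg.inv(np.linalg.inv(Sigma) + H.T @ np.linalg.inv(Gamma) @ H)
    L_C, S_inv = np.linalg.cholesky(C), np.linalg.inv(S)
    psi_u = np.array([psi(u) for u in particles])
    # Propagate from P(u_{k+1}|u_k, y_{k+1}) = N((I - K H) psi(u_k) + K y_{k+1}, C).
    mean = psi_u - psi_u @ (K @ H).T + K @ y_next
    u_hat = mean + rng.standard_normal((N, d)) @ L_C.T
    # Weights (3.9): based on the previous positions u_k, not on the proposed u_hat.
    innov = y_next - psi_u @ H.T
    logw = -0.5 * np.einsum("ni,ij,nj->n", innov, S_inv, innov)
    w = np.exp(logw - logw.max()); w /= w.sum()
    # Resample (multinomial draw, equivalent in law to the interval construction (3.1)).
    return u_hat[rng.choice(N, size=N, p=w)]
```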
As with the BPF, it is beneficial to consider the related particle filter given by
$$\hat{\mu}_k^N = \sum_{n=1}^{N} w_k^{(n)}\delta_{\hat{u}_k^{(n)}} \qquad (3.11)$$
for $k \ge 1$ and with $\hat{\mu}_0^N = \mu_0$. Clearly, we have that $\mu_k^N = S^N\hat{\mu}_k^N$.
3.3 Gaussianized Optimal Particle Filter
In [18], an alternative implementation of the OPF is investigated and found to have superior performance on
a range of test problems, particularly with respect to the curse of dimensionality. We refer to this filter as
the Gaussianized optimal particle filter (GOPF). Once again, we approximate the filtering distribution with an
empirical measure
$$\nu_k^N = \frac{1}{N}\sum_{n=1}^{N}\delta_{v_k^{(n)}}. \qquad (3.12)$$
As in the previous subsection, we first describe the filter under Assumption 2.1. The filter is initialized with $v_0^{(n)} \sim \mu_0$, with subsequent iterates generated by the RDS
$$\tilde{v}_k^{(n)} = \sum_{m=1}^{N}\mathbb{I}_{I_{k+1}^{(m)}}\big(r_{k+1}^{(n)}\big)\,v_k^{(m)}, \qquad (3.13)$$
$$v_{k+1}^{(n)} = (I-KH)\psi(\tilde{v}_k^{(n)}) + K y_{k+1} + \zeta_k^{(n)}, \qquad \zeta_k^{(n)} \sim N(0,C) \text{ i.i.d.},$$
and the weights appearing in the resampling operator are given by
$$w_{k+1}^{(n),*} = \exp\Big( -\tfrac{1}{2}|y_{k+1} - H\psi(v_k^{(n)})|_S^2 \Big), \qquad w_{k+1}^{(n)} = \frac{w_{k+1}^{(n),*}}{\sum_{m=1}^{N} w_{k+1}^{(m),*}}. \qquad (3.14)$$
Thus, the update procedure for GOPF is weight-resample-propagate, as opposed to propagate-weight-resample
for the OPF. Hence the only difference between the OPF and GOPF is the implementation of the resampling
step.
In our analysis it is sometimes useful to consider the equivalent RDS
$$\hat{v}_{k+1}^{(m,n)} = (I-KH)\psi(v_k^{(m)}) + K y_{k+1} + \zeta_k^{(m,n)}, \qquad \zeta_k^{(m,n)} \sim N(0,C) \text{ i.i.d.}, \qquad (3.15)$$
$$v_{k+1}^{(n)} = \sum_{m=1}^{N}\mathbb{I}_{I_{k+1}^{(m)}}\big(r_{k+1}^{(n)}\big)\,\hat{v}_{k+1}^{(m,n)}.$$
The sequences $v_k^{(n)}$ defined in (3.13) and (3.15) agree because for every $n$ there is exactly one $m = m_*(n)$ such that $\hat{v}_{k+1}^{(m_*(n),n)}$ survives the resampling step. Writing it this way allows certain parts of our subsequent analysis to be performed very similarly for both the OPF and GOPF.
For a general dynamics-observation model, the GOPF is described by the algorithm
1. Define the weights $w_{k+1}^{(n)}$ for $n = 1,\dots,N$ by
$$w_{k+1}^{(n),*} = P(y_{k+1}|v_k^{(n)}), \qquad w_{k+1}^{(n)} = \frac{w_{k+1}^{(n),*}}{\sum_{m=1}^{N} w_{k+1}^{(m),*}}.$$
2. Draw $\tilde{v}_k^{(n)}$ from $\{v_k^{(m)}\}_{m=1}^N$ with weights $\{w_{k+1}^{(m)}\}_{m=1}^N$.
3. Draw $v_{k+1}^{(n)}$ from $P(u_{k+1}|\tilde{v}_k^{(n)}, y_{k+1})$.
Unlike for the previous filters, there is no need to define an associated ‘hatted’ measure, as the GOPF can be
shown to satisfy a very natural recursion. This will be discussed in Section 6.
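The only change relative to the OPF sketch above is the order of the operations: weight and resample the current particles first, then propagate. A hedged Python sketch under Assumption 2.1 follows; as before, all names and the multinomial resampling call are illustrative choices.

```python
import numpy as np

def gopf_step(particles, y_next, psi, H, Sigma, Gamma, rng):
    """One Gaussianized OPF step (3.13)-(3.14): weight, resample, then propagate."""
    N, d = particles.shape
    S = H @ Sigma @ H.T + Gamma
    K = Sigma @ H.T @ np.linalg.inv(S)
    C = np.linalg.inv(np.linalg.inv(Sigma) + H.T @ np.linalg.inv(Gamma) @ H)
    L_C, S_inv = np.linalg.cholesky(C), np.linalg.inv(S)
    # 1. Weight the current particles with P(y_{k+1}|v_k), as in (3.14).
    psi_v = np.array([psi(v) for v in particles])
    innov = y_next - psi_v @ H.T
    logw = -0.5 * np.einsum("ni,ij,nj->n", innov, S_inv, innov)
    w = np.exp(logw - logw.max()); w /= w.sum()
    # 2. Resample the current particles (before propagation).
    v_tilde = particles[rng.choice(N, size=N, p=w)]
    # 3. Propagate each resampled particle with the kernel N((I - K H) psi(v) + K y, C).
    psi_vt = np.array([psi(v) for v in v_tilde])
    mean = psi_vt - psi_vt @ (K @ H).T + K @ y_next
    return mean + rng.standard_normal((N, d)) @ L_C.T
```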
4 Ergodicity for Optimal Particle Filters
In this section we study the conditional ergodicity of the two optimal particle filters. The proofs are structurally
very similar to that for the filtering distribution itself.
4.1 Optimal Filter
Before stating the conditional ergodicity result, we first need some notation. Define $u_k = (u_k^{(1)},\dots,u_k^{(N)})$ to be particle positions defined by the RDS (3.8) with $\mu_0 = \delta_{z_0}$, and similarly $u'_k = (u_k^{(1)\prime},\dots,u_k^{(N)\prime})$ with $\mu_0 = \delta_{z'_0}$. Then $u_k$ is a Markov chain taking values in $\mathcal{X}^N$, whose Markov kernel we denote by $q_k(z,\cdot)$. As in the proof of Theorem 2.3, the law of $u_k$ is given by $q^k(z_0,\cdot)$, defined recursively as in (2.11), and similarly the law of $u'_k$ is given by $q^k(z'_0,\cdot)$. The conditional ergodicity result states that if the two filters $u_k, u'_k$ are driven by the same observational data, then the law of $u_k$ will converge to the law of $u'_k$ exponentially as $k \to \infty$.¹

¹ We abuse notation in this subsection by using $u_k \in \mathcal{X}^N$ to denote the $N$ particles comprising the optimal particle filter; this differs from the notation $u_k \in \mathcal{X}$ used in the remainder of the paper to denote the underlying dynamical model.
Theorem 4.1. Consider the OPF particles $u_k, u'_k$ defined above. Assume moreover that the observational data used to define each filter is the same, and given by $\{y_k^\dagger\}_{k\ge1}$ from Assumption 2.2. Then there exists $z_N \in (0,1)$ such that, almost surely with respect to the randomness generating $\{y_k^\dagger\}_{k\ge1}$,
$$\lim_{k\to\infty} d_{TV}\big(q^k(z_0,\cdot), q^k(z'_0,\cdot)\big)^{1/k} = z_N. \qquad (4.1)$$
Proof. We will follow the proof of Theorem 2.3 closely, via Steps A, B and C. Step A proceeds exactly as in the proof of Theorem 2.3. In particular, if we assume a minorization condition
$$q_{k+1}(u,\cdot) \ge \epsilon_k Q(\cdot), \qquad (4.2)$$
and repeat the coupling argument from Step A, with $u_k, u'_k$ in place of $z_k, z'_k$, then we obtain the bound
$$d_{TV}\big(q^k(z_0,\cdot), q^k(z'_0,\cdot)\big) \le \prod_{j=1}^{k}(1-\epsilon_j).$$
This is indeed how the proof proceeds, with the caveat that the minorization constants $\epsilon_k$ depend on the number of particles $N$. In Step B, we obtain the minorization condition (4.2). Let $\hat{u}_k = (\hat{u}_k^{(1)},\dots,\hat{u}_k^{(N)})$ as defined in the RDS (3.8). Before proceeding, we introduce some preliminaries. Using the fact that
$$y_{k+1}^\dagger = H\psi(u_k^\dagger) + \gamma\big(rH\xi_k^\dagger + \eta_{k+1}^\dagger\big)$$
and defining
$$a_k = \Big( (I-KH)\psi(u_k^{(n)}) + KH\psi(u_k^\dagger) \Big)_{n=1}^{N}, \qquad \zeta_k = \big(\zeta_k^{(n)}\big)_{n=1}^{N},$$
$$\rho_{k,0}^\dagger = \gamma\big(rH\xi_k^\dagger + \eta_{k+1}^\dagger\big), \qquad \rho_k^\dagger = \Big( \gamma K\big(rH\xi_k^\dagger + \eta_{k+1}^\dagger\big) \Big)_{n=1}^{N},$$
we see that
$$\hat{u}_{k+1} = a_k + \rho_k^\dagger + \zeta_k.$$
The next element of the sequence, $u_{k+1}$, is then defined by the second identity in (3.8). We are interested in the conditional ergodicity of $\{u_k\}_{k=1}^\infty$ with the sequence $\{\rho_k^\dagger\}_{k=1}^\infty$ fixed. By Assumption 2.1, $a_k$ is bounded uniformly in $k$. We define the covariance operator $\mathcal{C} \in L(\mathcal{X}^N,\mathcal{X}^N)$ to be a block diagonal covariance with each diagonal entry equal to $C$, and then
$$R^2 = \sup_{(u,v)}|(I-KH)\psi(u) + KH\psi(v)|_C^2,$$
which is finite by Assumption 2.1.
Now, let $E_0$ be the event that, upon resampling, every particle survives the resampling. There are $N!$ such permutations. We will do the calculation in the case of a trivial permutation, that is, where each particle is mapped to itself under the resampling. However the bounds which follow work for any such permutation, because we do not use any information about the location of the mean; we simply use bounds on the drift $\psi$. If each particle is mapped to itself, then $u_{k+1}^{(n)} = \hat{u}_{k+1}^{(n)}$ for all $n = 1,\dots,N$. It follows that
$$q_{k+1}(u,A) = \mathbb{P}(u_{k+1} \in A\,|\,u_k = u) \ge \mathbb{P}(u_{k+1} \in A\,|\,u_k = u, E_0)\,\mathbb{P}(E_0) = \mathbb{P}(\hat{u}_{k+1} \in A\,|\,u_k = u)\,\mathbb{P}(E_0).$$
We first note that
$$\begin{aligned}
\mathbb{P}(\hat{u}_{k+1} \in A\,|\,u_k = u) &= \frac{1}{\sqrt{(2\pi)^{dN}\det\mathcal{C}}}\int_A \exp\Big( -\tfrac{1}{2}|x - a_k - \rho_k^\dagger|_{\mathcal{C}}^2 \Big)\,dx \\
&\ge \frac{\exp\big( -|a_k + \rho_k^\dagger|_{\mathcal{C}}^2 \big)}{\sqrt{(2\pi)^{dN}\det\mathcal{C}}}\int_A \exp\big( -|x|_{\mathcal{C}}^2 \big)\,dx \\
&\ge 2^{-dN/2}\exp\big( -2|a_k|_{\mathcal{C}}^2 \big)\exp\big( -2|\rho_k^\dagger|_{\mathcal{C}}^2 \big)\,Q_{\mathcal{C}}(A) \\
&\ge 2^{-dN/2}\exp\big( -2NR^2 \big)\exp\big( -2|\rho_k^\dagger|_{\mathcal{C}}^2 \big)\,Q_{\mathcal{C}}(A),
\end{aligned}$$
where $Q_{\mathcal{C}}(A)$ is the Gaussian measure $N(0,\tfrac{1}{2}\mathcal{C})$. Thus we have shown that
$$\mathbb{P}(\hat{u}_{k+1} \in A\,|\,u_k = u) \ge \delta_k Q_{\mathcal{C}}(A), \qquad (4.3)$$
where
$$\delta_k = 2^{-dN/2}\exp\big( -2NR^2 \big)\exp\big( -2|\rho_k^\dagger|_{\mathcal{C}}^2 \big).$$
Moreover, we have that
$$\mathbb{P}(E_0) = N!\,\prod_{n=1}^{N} w_{k+1}^{(n)}.$$
Note that we have the bound $w_{k+1}^{(n)} \ge w_{k+1}^{(n),*}/N$ for each $n = 1,\dots,N$, because each $w_{k+1}^{(m),*}$ is bounded by $1$. But we have
$$\begin{aligned}
w_{k+1}^{(n),*} &= \exp\Big( -\tfrac{1}{2}|y_{k+1} - H\psi(u_k^{(n)})|_S^2 \Big) \\
&= \exp\Big( -\tfrac{1}{2}|H\psi(u_k^\dagger) - H\psi(u_k^{(n)}) + \rho_{k,0}^\dagger|_S^2 \Big) \\
&\ge \exp\Big( -r^2 - |\rho_{k,0}^\dagger|_S^2 \Big),
\end{aligned}$$
where
$$r^2 = \sup_{u,v}|H\psi(u) - H\psi(v)|_S^2,$$
which is finite by Assumption 2.1. From this we see that
$$\mathbb{P}(E_0) \ge N!\,\frac{1}{N^N}\exp\Big( -Nr^2 - N|\rho_{k,0}^\dagger|_S^2 \Big).$$
Thus we obtain the minorization condition (4.2) where
$$\epsilon_k = N!\,\frac{1}{N^N}\exp\Big( -Nr^2 - N|\rho_{k,0}^\dagger|_S^2 \Big)\,\delta_k, \qquad Q = Q_{\mathcal{C}}.$$
Finally, Step C follows identically to the proof of Theorem 2.3 and the proof is complete.
4.2 Gaussianized Optimal Filter
As in the last section, we define $v_k = (v_k^{(1)},\dots,v_k^{(N)})$ and similarly for $v'_k$, using the RDS but now for the GOPF (3.13) (or alternatively (3.15)), with distinct initializations $\mu_0 = \delta_{z_0}$ and $\mu_0 = \delta_{z'_0}$. As with Theorem 4.1, we let $q^k(z_0,\cdot)$ denote the law of $v_k$.

Theorem 4.2. Consider the GOPF particles $v_k, v'_k$ defined above. Assume moreover that the observational data used to define each filter is the same, and given by $\{y_k^\dagger\}_{k\ge1}$ from Assumption 2.2. Then there exists $z_N \in (0,1)$ such that, almost surely with respect to the randomness generating $\{y_k^\dagger\}_{k\ge1}$,
$$\lim_{k\to\infty} d_{TV}\big(q^k(z_0,\cdot), q^k(z'_0,\cdot)\big)^{1/k} = z_N. \qquad (4.4)$$
Proof. The proof follows similarly to that of Theorem 4.1; in particular it suffices to obtain a minorization condition for $q_{k+1}(v,\cdot)$. We will use the RDS representation (3.15), which we now recall:
$$\hat{v}_{k+1}^{(m,n)} = (I-KH)\psi(v_k^{(m)}) + K y_{k+1} + \zeta_k^{(m,n)}, \qquad \zeta_k^{(m,n)} \sim N(0,C) \text{ i.i.d.}, \qquad (4.5)$$
$$v_{k+1}^{(n)} = \sum_{m=1}^{N}\mathbb{I}_{I_{k+1}^{(m)}}\big(r_{k+1}^{(n)}\big)\,\hat{v}_{k+1}^{(m,n)}.$$
In this formulation, note that for each $n$ there is one and only one $m = m_*(n)$ such that $\mathbb{I}_{I_k^{(m)}}(r_k^{(n)}) = 1$. We see that
$$v_k := \big(v_k^{(n)}\big)_{n=1}^{N} = \big(\hat{v}_k^{(m_*(n),n)}\big)_{n=1}^{N}.$$
Using the fact that
$$y_{k+1}^\dagger = H\psi(u_k^\dagger) + \gamma\big(rH\xi_k^\dagger + \eta_{k+1}^\dagger\big)$$
and defining
$$a_k = \Big( (I-KH)\psi(v_k^{(m_*(n))}) + KH\psi(u_k^\dagger) \Big)_{n=1}^{N}, \qquad \zeta_k = \big(\zeta_k^{(m_*(n),n)}\big)_{n=1}^{N},$$
$$\rho_{k,0}^\dagger = \gamma\big(rH\xi_k^\dagger + \eta_{k+1}^\dagger\big), \qquad \rho_k^\dagger = \Big( \gamma K\big(rH\xi_k^\dagger + \eta_{k+1}^\dagger\big) \Big)_{n=1}^{N},$$
we see that
$$v_{k+1} = a_k + \rho_k^\dagger + \zeta_k.$$
Now notice that
$$\begin{aligned}
q_{k+1}(v,A) = \mathbb{P}(v_{k+1} \in A\,|\,v_k = v) &= \mathbb{P}\Big( \big(\hat{v}_{k+1}^{(m_*(n),n)}\big)_{n=1}^{N} \in A \,\Big|\, v_k = v \Big) \\
&= \frac{1}{\sqrt{(2\pi)^{dN}\det\mathcal{C}}}\int_A \exp\Big( -\tfrac{1}{2}|x - a_k - \rho_k^\dagger|_{\mathcal{C}}^2 \Big)\,dx \\
&\ge \frac{\exp\big( -|a_k + \rho_k^\dagger|_{\mathcal{C}}^2 \big)}{\sqrt{(2\pi)^{dN}\det\mathcal{C}}}\int_A \exp\big( -|x|_{\mathcal{C}}^2 \big)\,dx \\
&\ge 2^{-dN/2}\exp\big( -2|a_k|_{\mathcal{C}}^2 \big)\exp\big( -2|\rho_k^\dagger|_{\mathcal{C}}^2 \big)\,Q_{\mathcal{C}}(A) \\
&\ge 2^{-dN/2}\exp\big( -2NR^2 \big)\exp\big( -2|\rho_k^\dagger|_{\mathcal{C}}^2 \big)\,Q_{\mathcal{C}}(A),
\end{aligned}$$
where $Q_{\mathcal{C}}$ is the Gaussian measure $N(0,\tfrac{1}{2}\mathcal{C})$. Thus we have shown that
$$q_{k+1}(v,A) \ge \delta_k Q_{\mathcal{C}}(A), \qquad (4.6)$$
where
$$\delta_k = 2^{-dN/2}\exp\big( -2NR^2 \big)\exp\big( -2|\rho_k^\dagger|_{\mathcal{C}}^2 \big).$$
The remainder of the proof (Step C) follows identically to Theorems 2.3 and 4.1.
Remark 4.3. We can compare our (upper bounds on the) rates of convergence for the true filtering distribution,
and the two optimal filters, using the minorization constants. From the proof of Theorem 2.3, we see that the
minorization constants determine the rate of convergence in the statement of conditional ergodicity. In particular
for $k$ large we have that
$$d_{TV}(\mu_k,\mu'_k) \lesssim z^k,$$
where
$$\log z = \mathbb{E}\log(1-\epsilon_1).$$
The corresponding statements for the law of the particles in each optimal particle filter hold similarly, but $\epsilon_1$ depends on the number of particles. For the OPF we have
$$\epsilon_1 = N!\,\frac{1}{N^N}\exp\Big( -Nr^2 - N|\rho_{1,0}^\dagger|_S^2 \Big)\,\delta_1, \qquad \text{where} \qquad \delta_1 = 2^{-dN/2}\exp(-2NR^2)\exp\big(-2|\rho_1^\dagger|_{\mathcal{C}}^2\big);$$
for the GOPF we simply have ǫ1 = δ1 . The extra N dependence in the OPF clearly leads to a slower (upper
bound on the) rate of convergence for the OPF. Thus, by this simple argument, we obtain a better convergence
rate for the GOPF than for the OPF. This suggests that the GOPF may have a better rate of convergence for fixed
ensemble sizes; further analysis or experimental study of this point would be of interest.
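The scaling highlighted in the remark can be visualised numerically. The sketch below evaluates the logarithms of the two minorization constants as functions of $N$ for illustrative values of $d$, $r$, $R$ and of the noise norms; all numerical values are assumptions made purely to exhibit the extra $N$-dependence of the OPF constant, not quantities taken from the text.

```python
import numpy as np
from math import lgamma

def log_eps_gopf(N, d, R, rho_norm_C):
    """log delta_1 = -(dN/2) log 2 - 2 N R^2 - 2 |rho_1|_C^2 (GOPF minorization constant)."""
    return -(d * N / 2) * np.log(2.0) - 2 * N * R**2 - 2 * rho_norm_C**2

def log_eps_opf(N, d, R, r, rho_norm_C, rho_norm_S):
    """log eps_1 for the OPF: log(N!/N^N) - N r^2 - N |rho_{1,0}|_S^2 + log delta_1."""
    log_fact_term = lgamma(N + 1) - N * np.log(N)   # log(N!) - N log N
    return (log_fact_term - N * r**2 - N * rho_norm_S**2
            + log_eps_gopf(N, d, R, rho_norm_C))

# Illustrative (assumed) constants.
d, R, r, rho_C, rho_S = 2, 0.5, 0.5, 0.3, 0.3
for N in (5, 10, 20, 40):
    print(N, log_eps_opf(N, d, R, r, rho_C, rho_S), log_eps_gopf(N, d, R, rho_C))
```

The OPF constant decays markedly faster in $N$, reflecting the slower upper bound on its convergence rate discussed above.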
5 Accuracy for Optimal Particle Filters
In this section we study the accuracy of the optimal particle filters in the small noise limit γ → 0. The expectation appearing in the theorem statements is with respect to the noise generating the data, and with respect to the randomness within the particle filter itself. Note that this situation differs from that in the accuracy result for the true filter, which uses data generated by the statistical model itself; Assumption 2.2 relaxes this assumption.
5.1 Optimal Particle Filter
Theorem 5.1. Let Assumption 2.4 hold and consider the OPF with particles $\{u_k^{(n)}\}_{n=1}^{N}$ defined by (3.8) with data $\{y_k^\dagger\}$ from Assumption 2.2. It follows that there is a constant $c$ such that
$$\limsup_{k\to\infty}\,\mathbb{E}\max_n\|u_k^{(n)} - u_k^\dagger\|^2 \le c\gamma^2.$$

Proof. First recall the notation $\Sigma = \sigma^2\Sigma_0$, $\Gamma = \gamma^2\Gamma_0$ and $r = \sigma/\gamma$. Now define
$$S_0 = r^2 H\Sigma_0 H^* + \Gamma_0, \qquad C_0 = r^2(I-KH)\Sigma_0$$
and note that
$$S = \gamma^2 S_0, \qquad C = \gamma^2 C_0, \qquad K = r^2\Sigma_0 H^* S_0^{-1}.$$
We will use the RDS representation
$$\hat{u}_{k+1}^{(n)} = (I-KH)\psi(u_k^{(n)}) + K y_{k+1}^\dagger + \gamma\zeta_{0,k}^{(n)}, \qquad u_{k+1}^{(n)} = \sum_{m=1}^{N}\mathbb{I}_{I_{k+1}^{(m)}}\big(r_{k+1}^{(n)}\big)\,\hat{u}_{k+1}^{(m)}, \qquad (5.1)$$
where $\zeta_{0,k}^{(n)} \sim N(0,C_0)$ i.i.d. Hence we have
$$u_{k+1}^\dagger = (I-KH)\psi(u_k^\dagger) + KH\psi(u_k^\dagger) + r\gamma\xi_k^\dagger,$$
$$\hat{u}_{k+1}^{(n)} = (I-KH)\psi(u_k^{(n)}) + K\big(H\psi(u_k^\dagger) + \gamma\eta_{k+1}^\dagger\big) + \gamma\zeta_{0,k}^{(n)}.$$
Subtracting, we obtain
$$\hat{u}_{k+1}^{(n)} - u_{k+1}^\dagger = (I-KH)\big(\psi(u_k^{(n)}) - \psi(u_k^\dagger)\big) + \gamma\iota_k^{(n)}, \qquad (5.2)$$
where $\iota_k^{(n)} := \big(K\eta_{k+1}^\dagger + \zeta_{0,k}^{(n)} - r\xi_k^\dagger\big)$. Moreover we have the identity
$$u_{k+1}^\dagger = \sum_{m=1}^{N}\mathbb{I}_{I_{k+1}^{(m)}}\big(r_{k+1}^{(n)}\big)\,u_{k+1}^\dagger. \qquad (5.3)$$
Thus, defining
$$e_k^{(n)} = u_k^{(n)} - u_k^\dagger, \qquad \hat{e}_k^{(n)} = \hat{u}_k^{(n)} - u_k^\dagger,$$
we have from (5.1) and (5.3)
$$e_{k+1}^{(n)} = \sum_{m=1}^{N}\mathbb{I}_{I_{k+1}^{(m)}}\big(r_{k+1}^{(n)}\big)\,\hat{e}_{k+1}^{(m)}.$$
Thus
$$\max_n\|e_{k+1}^{(n)}\|^2 \le \max_m\|\hat{e}_{k+1}^{(m)}\|^2,$$
where the norm is the one in which we have a contraction. Using (5.2), the Lipschitz property of $(I-KH)\psi(\cdot)$, taking expectations and using independence yields
$$\mathbb{E}\max_n\|u_{k+1}^{(n)} - u_{k+1}^\dagger\|^2 \le \alpha^2\,\mathbb{E}\max_n\|u_k^{(n)} - u_k^\dagger\|^2 + O(\gamma^2)$$
and the result follows by Gronwall.
5.2 Gaussianized Optimal Filter
Theorem 5.2. Let Assumption 2.4 hold and consider the GOPF with particles $\{v_k^{(n)}\}_{n=1}^{N}$ defined by (3.13) (or (3.15)) with data $\{y_k^\dagger\}$ from Assumption 2.2. It follows that there is a constant $c$ such that
$$\limsup_{k\to\infty}\,\mathbb{E}\max_n\|v_k^{(n)} - u_k^\dagger\|^2 \le c\gamma^2.$$

Proof. Recall the notation defined at the beginning of the proof of Theorem 5.1. Recall also the RDS representation of the GOPF (3.15),
$$\hat{v}_{k+1}^{(m,n)} = (I-KH)\psi(v_k^{(m)}) + K y_{k+1} + \gamma\zeta_{0,k}^{(m,n)}, \qquad v_{k+1}^{(n)} = \sum_{m=1}^{N}\mathbb{I}_{I_{k+1}^{(m)}}\big(r_{k+1}^{(n)}\big)\,\hat{v}_{k+1}^{(m,n)}, \qquad (5.4)$$
where we now have $\zeta_{0,k}^{(m,n)} \sim N(0,C_0)$ i.i.d., recalling that $C = \gamma^2 C_0$. We also have the identity
$$u_{k+1}^\dagger = (I-KH)\psi(u_k^\dagger) + KH\psi(u_k^\dagger) + r\gamma\xi_k^\dagger. \qquad (5.5)$$
Subtracting, we obtain
$$\hat{v}_{k+1}^{(m,n)} - u_{k+1}^\dagger = (I-KH)\big(\psi(v_k^{(m)}) - \psi(u_k^\dagger)\big) + \gamma\iota_k^{(n)}, \qquad (5.6)$$
where $\iota_k^{(n)} := \big(K\eta_{k+1}^\dagger + \zeta_{0,k}^{(m,n)} - r\xi_k^\dagger\big)$. Note that
$$u_{k+1}^\dagger = \sum_{m=1}^{N}\mathbb{I}_{I_{k+1}^{(m)}}\big(r_{k+1}^{(n)}\big)\,u_{k+1}^\dagger, \qquad (5.7)$$
so that, defining
$$e_k^{(n)} = v_k^{(n)} - u_k^\dagger, \qquad \hat{e}_k^{(m,n)} = \hat{v}_k^{(m,n)} - u_k^\dagger,$$
we have from (5.4), (5.6) and (5.7)
$$e_{k+1}^{(n)} = \sum_{m=1}^{N}\mathbb{I}_{I_{k+1}^{(m)}}\big(r_{k+1}^{(n)}\big)\,\hat{e}_{k+1}^{(m,n)}.$$
Thus
$$\max_n\|e_{k+1}^{(n)}\|^2 \le \max_{m,n}\|\hat{e}_{k+1}^{(m,n)}\|^2,$$
where the norm is the one in which we have a contraction. Using (5.6), the Lipschitz property of $(I-KH)\psi(\cdot)$, taking expectations and using independence gives
$$\mathbb{E}\max_n\|v_{k+1}^{(n)} - u_{k+1}^\dagger\|^2 \le \alpha^2\,\mathbb{E}\max_n\|v_k^{(n)} - u_k^\dagger\|^2 + O(\gamma^2).$$
The result follows by Gronwall.
The result follows by Gronwall.
6 Consistency in the Large Particle Limit
In this section we state and prove consistency results for the BPF, OPF and GOPF introduced in Section 3. For
the BPF the result is well known but we reproduce it here as it serves as an ideological template for the more
complicated proofs to follow; furthermore we present the clean proof given in [34] (see also [25, Chapter 4]) as
this particular approach to the result generalizes naturally to the OPF and GOPF.
6.1 Bootstrap Particle Filter
In the following, we let fk+1 : X → R be any function with fk+1 (uk+1 ) ∝ P (yk+1 |uk+1 ); any proportionality
constant will suffice, but the normalization constant is of course natural. As in previous sections, we let µk
denote the filtering distribution. The following theorem is stated and then proved through a sequence of lemmas
in the remainder of the subsection.
Theorem 6.1. Let $\hat{\rho}_K^N, \rho_K^N$ be the BPFs defined by (3.6), (3.2) respectively, and suppose that there exists a constant $\kappa \in (0,1]$ such that
$$\kappa \le f_{k+1}(u_{k+1}) \le \kappa^{-1} \qquad (6.1)$$
for all $u_{k+1} \in \mathcal{X}$, $y_{k+1} \in \mathcal{Y}$ and $k \in \{0,\dots,K-1\}$. Then we have
$$d(\hat{\rho}_K^N, \mu_K) \le \sum_{k=1}^{K}(2\kappa^{-2})^k N^{-1/2} \qquad (6.2)$$
and
$$d(\rho_K^N, \mu_K) \le \sum_{k=0}^{K}(2\kappa^{-2})^k N^{-1/2} \qquad (6.3)$$
for all $K, N \ge 1$.
Remark 6.2. Note that the constant κ−2 appearing in the estimates above arises as the ratio of the upper and
lower bounds in (6.1). In particular, we cannot optimize κ by choosing a different proportionality constant for
fk+1 .
It is straightforward to check that the filtering distribution satisfies the recursion
$$\mu_{k+1} = L_{k+1} P \mu_k \qquad (6.4)$$
as this is nothing more than a statement of the Bayes formula calculation given in (3.5). Here $P : \mathcal{M}(\mathcal{X}) \to \mathcal{M}(\mathcal{X})$ is the Markov semigroup
$$P\nu(A) = \int_A P(u_{k+1}|u_k)\,\nu(du_k),$$
which gives the prior, and $L_{k+1} : \mathcal{M}(\mathcal{X}) \to \mathcal{M}(\mathcal{X})$ is the likelihood operator
$$L_{k+1}\nu(A) = \int_A Z^{-1}P(y_{k+1}|u_{k+1})\,\nu(du_{k+1})$$
for each measurable $A \subset \mathcal{X}$, where $Z$ is a normalization constant.
In terms of understanding the approximation properties of the BPF, the key observation is that the measures $\{\hat{\rho}_k^N\}_{k\ge0}$ satisfy the recursion
$$\hat{\rho}_{k+1}^N = L_{k+1} S^N P \hat{\rho}_k^N, \qquad \hat{\rho}_0^N = \mu_0, \qquad (6.5)$$
where $P : \mathcal{M}(\mathcal{X}) \to \mathcal{M}(\mathcal{X})$ is the Markov semigroup and, as defined in Section 1.3, $S^N : \mathcal{M}(\mathcal{X}) \to \mathcal{M}(\mathcal{X})$ is the sampling operator. The convergence of the measures is quantified by the metric on random elements of $\mathcal{M}(\mathcal{X})$ defined by
$$d(\mu,\nu) = \sup_{|f|_\infty\le1}\sqrt{\mathbb{E}^\omega|\mu(f) - \nu(f)|^2},$$
where, in our setting, $\mathbb{E}^\omega$ will always denote expectation with respect to the randomness in the sampling operator $S^N$; this metric reduces to twice the total variation metric, used in studying ergodicity, when the measures are not random. The main ingredients for the proof are the following three estimates for the operators $P$, $S^N$ and $L_{k+1}$ with respect to the metric $d$.
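The sampling-operator bound of Lemma 6.3 below can be checked numerically for a fixed test function: for any $|f|_\infty \le 1$, one has $\mathbb{E}^\omega|S^N\nu(f) - \nu(f)|^2 \le 1/N$. The following sketch estimates this quantity by Monte Carlo for an assumed choice of $\nu$ and $f$; the distributions and sample sizes are illustrative assumptions only.

```python
import numpy as np

def sampling_operator_error(nu_sampler, f, nu_f, N, trials, rng):
    """Estimate E | S^N nu(f) - nu(f) |^2 by averaging over independent samplings."""
    errs = np.empty(trials)
    for t in range(trials):
        x = nu_sampler(N, rng)                 # N i.i.d. draws from nu
        errs[t] = (np.mean(f(x)) - nu_f) ** 2  # |S^N nu(f) - nu(f)|^2 for this realization
    return errs.mean()

# Assumed example: nu = N(0,1), f = tanh (bounded by 1), nu(f) = 0 by symmetry.
rng = np.random.default_rng(2)
for N in (10, 100, 1000):
    mse = sampling_operator_error(lambda n, r: r.standard_normal(n), np.tanh, 0.0, N, 2000, rng)
    print(N, mse, 1.0 / N)   # the estimate should sit below the bound 1/N
```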
Lemma 6.3. We have the following
1. supν∈M(X ) d(S N ν, ν) ≤ N −1/2 .
2. d(P µ, P ν) ≤ d(µ, ν) for all µ, ν ∈ M(X ).
Proof. See [25, Lemma 4.7, Lemma 4.8].
We state the following Lemma in a slightly more general form than necessary for the BPF, as it will be
applied in different contexts for the optimal particle filters.
Lemma 6.4. Let Z be a finite dimensional Euclidean space. Suppose that gk+1 : Z → [0, ∞) is bounded and
that there exists κ ∈ (0, 1] such that
κ ≤ gk+1 (u) ≤ κ−1
(6.6)
for all u ∈ Z and define Gk+1 : M(Z) → M(Z) by Gk+1 (ν)(ϕ) = ν(gk+1 ϕ)/ν(gk+1 ). Then
d(Gk+1 µ, Gk+1 ν) ≤ (2κ−2 )d(µ, ν)
for all µ, ν ∈ M(Z).
Proof. See [25, Lemma 4.9].
We can now prove the consistency result.
Proof of Theorem 6.1. First note that, taking Z = X and gk+1 = fk+1 in Lemma 6.4, we obtain Gk+1 ν =
Lk+1 ν. Thus, by (6.1), it follows that d(Lk+1 µ, Lk+1 ν) ≤ (2κ−2 )d(µ, ν) for all µ, ν ∈ M(X ). Combining
this fact with the recursions given in (6.5), (6.4) and the estimates given in Lemma 6.3, we have
$$\begin{aligned}
d(\hat{\rho}_{k+1}^N, \mu_{k+1}) &= d(L_{k+1} S^N P\hat{\rho}_k^N,\; L_{k+1} P\mu_k) \\
&\le 2\kappa^{-2}\, d(S^N P\hat{\rho}_k^N,\; P\mu_k) \\
&\le 2\kappa^{-2}\Big( d(S^N P\hat{\rho}_k^N,\; P\hat{\rho}_k^N) + d(P\hat{\rho}_k^N,\; P\mu_k) \Big) \\
&\le 2\kappa^{-2}N^{-1/2} + 2\kappa^{-2}\, d(\hat{\rho}_k^N, \mu_k).
\end{aligned}$$
Since $\hat{\rho}_0^N = \mu_0$, we obtain (6.2) by induction. Moreover, since $\rho_k^N = S^N\hat{\rho}_k^N$,
$$d(\rho_k^N, \mu_k) = d(S^N\hat{\rho}_k^N, \mu_k) \le d(S^N\hat{\rho}_k^N, \hat{\rho}_k^N) + d(\hat{\rho}_k^N, \mu_k),$$
and (6.3) follows.
6.2 Sequential Importance Resampler
In this section we will apply the above strategy to prove the corresponding consistency result for the OPF.
Instead of restricting to the OPF, we will obtain results for the sequential importance resampler (SIR), for which
the OPF is a special case. See [11, sections II, III] for background in sequential importance sampling, and on
the use of resampling. As with the OPF, the SIR is an empirical measure
$$\mu_k^N = \frac{1}{N}\sum_{n=1}^{N}\delta_{u_k^{(n)}}. \qquad (6.7)$$
We will abuse notation slightly by keeping the same notation for the OPF and the SIR. The particle positions are drawn from a proposal distribution $\pi(u_{k+1}|u_k, y_{k+1})$ and re-weighted accordingly. As usual, the positions are initialized with $u_0^{(n)} \sim \mu_0$ and updated by
1. Draw $\hat{u}_{k+1}^{(n)}$ from $\pi(u_{k+1}|u_k^{(n)}, y_{k+1})$.
2. Define the weights $w_{k+1}^{(n)}$ for $n = 1,\dots,N$ by
$$w_{k+1}^{(n),*} = \frac{P(y_{k+1}|\hat{u}_{k+1}^{(n)})\,P(\hat{u}_{k+1}^{(n)}|u_k^{(n)})}{\pi(\hat{u}_{k+1}^{(n)}|u_k^{(n)}, y_{k+1})}, \qquad w_{k+1}^{(n)} = \frac{w_{k+1}^{(n),*}}{\sum_{m=1}^{N} w_{k+1}^{(m),*}}.$$
3. Draw $u_{k+1}^{(n)}$ from $\{\hat{u}_{k+1}^{(m)}\}_{m=1}^N$ with weights $\{w_{k+1}^{(m)}\}_{m=1}^N$.
Thus, if we take the proposal to be π(uk+1 |uk , yk+1 ) = P (uk+1 |uk , yk+1 ) then we obtain the OPF (3.7).
Without being more specific about the proposal π, it is not possible to represent the SIR as a random dynamical
system in general.
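For concreteness, the sketch below implements one step of the SIR with a user-supplied proposal sampler and density; the interfaces and names are assumptions made for the example. Taking the proposal to be $P(u_{k+1}|u_k, y_{k+1})$ and its log-density for `log_proposal` recovers, up to normalization, the OPF weights $P(y_{k+1}|u_k)$.

```python
import numpy as np

def sir_step(particles, y_next, propose, log_proposal, log_lik, log_trans, rng):
    """One SIR step with proposal pi(. | u_k, y_{k+1}).

    propose(u, y, rng)     -> one draw u_hat from pi(. | u, y)
    log_proposal(v, u, y)  -> log pi(v | u, y)
    log_lik(y, v)          -> log P(y | v)
    log_trans(v, u)        -> log P(v | u)
    """
    N = len(particles)
    u_hat = np.array([propose(u, y_next, rng) for u in particles])
    # Importance weights: P(y|u_hat) P(u_hat|u) / pi(u_hat|u, y), then normalize.
    logw = np.array([log_lik(y_next, v) + log_trans(v, u) - log_proposal(v, u, y_next)
                     for v, u in zip(u_hat, particles)])
    w = np.exp(logw - logw.max()); w /= w.sum()
    # Resample with the normalized weights.
    return u_hat[rng.choice(N, size=N, p=w)]
```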
Precisely as with the OPF, for the SIR we define the related filter
$$\hat{\mu}_k^N = \sum_{n=1}^{N} w_k^{(n)}\delta_{\hat{u}_k^{(n)}} \qquad (6.8)$$
with $\hat{\mu}_0^N = \mu_0$, and note the important identity $\mu_k^N = S^N\hat{\mu}_k^N$. The following theorem, and corollary, are proved in the remainder of the subsection, through a sequence of lemmas.

Theorem 6.5. Let $\hat{\mu}^N, \mu^N$ be the SIR filters defined by (6.8), (6.7) respectively, with proposal distribution $\pi$. Suppose that there exists $f_{k+1} : \mathcal{X}\times\mathcal{X} \to \mathbb{R}$ with
$$f_{k+1}(u_{k+1}, u_k) \propto \frac{P(y_{k+1}|u_{k+1})\,P(u_{k+1}|u_k)}{\pi(u_{k+1}|u_k, y_{k+1})} \qquad (6.9)$$
and satisfying
$$\kappa \le f_{k+1}(u_{k+1}, u_k) \le \kappa^{-1} \qquad (6.10)$$
for all $u_{k+1}, u_k \in \mathcal{X}$, $k \in \{0,\dots,K-1\}$ and some $\kappa \in (0,1]$. Then we have
$$d(\hat{\mu}_K^N, \mu_K) \le \sum_{k=1}^{K}(2\kappa^{-2})^k N^{-1/2} \qquad (6.11)$$
and
$$d(\mu_K^N, \mu_K) \le \sum_{k=0}^{K}(2\kappa^{-2})^k N^{-1/2} \qquad (6.12)$$
for all $K, N \ge 1$.
Remark 6.6. As for the bootstrap particle filter, the appearance of $\kappa^{-2}$ reflects the ratio of the upper and lower bounds in (6.9); hence there is nothing to be gained from optimizing over the constant of proportionality. If we let
$$f_{k+1}(u_{k+1}, u_k) = \frac{P(y_{k+1}|u_{k+1})\,P(u_{k+1}|u_k)}{\pi(u_{k+1}|u_k, y_{k+1})},$$
then the estimate (6.10) is equivalent to
$$\kappa\,\pi(u_{k+1}|u_k, y_{k+1}) \le P(y_{k+1}|u_{k+1})\,P(u_{k+1}|u_k) \le \kappa^{-1}\pi(u_{k+1}|u_k, y_{k+1}).$$
This can thus be interpreted as a quantification of equivalence between the proposal $\pi$ and the optimal proposal $P(y_{k+1}|u_{k+1})P(u_{k+1}|u_k)$.
Remark 6.7. It is important to note that Assumption 2.1 on the dynamics-observation model is not required
by Theorem 6.5. For the consistency results, Assumption 2.1 is only used to ensure that (6.10) holds. This
observation leads to the following corollary.
Corollary 6.8. Let $\hat{\mu}^N, \mu^N$ be the OPFs defined in (3.11), (3.7) respectively and satisfying Assumption 2.1. Then there is $\kappa = \kappa(Y_K)$ such that we have
$$d(\hat{\mu}_K^N, \mu_K) \le \sum_{k=1}^{K}(2\kappa^{-2})^k N^{-1/2} \qquad (6.13)$$
and
$$d(\mu_K^N, \mu_K) \le \sum_{k=0}^{K}(2\kappa^{-2})^k N^{-1/2} \qquad (6.14)$$
for all $K, N \ge 1$, where $\kappa^{-1} = \exp\big( \max_{0\le j\le K-1}|y_{j+1}|^2 + \sup_v|H\psi(v)|_S^2 \big)$.
Although similar to the argument for the BPF, the recursion argument for the SIR is necessarily more complicated, as the weights $w_{k+1}^{(n)}$ can potentially depend on both $u_{k+1}^{(n)}$ and $u_k^{(n)}$. This suggests that we must build a recursion which updates measures on a joint space $(u_{k+1}, u_k) \in \mathcal{X}\times\mathcal{X}$. This would also be necessary if we restricted our attention to the OPF, as the weights are defined using $u_k^{(n)}$ and not the particle positions $\hat{u}_{k+1}^{(n)}$ after the proposal.
The recursion is defined using the following three operators.
1. First, $P_{k+1}^\pi$ maps probability measures on $\mathcal{X}$ to probability measures on $\mathcal{X}\times\mathcal{X}$ by
$$P_{k+1}^\pi\mu(A) = \iint_A \pi(u_{k+1}|u_k, y_{k+1})\,\mu(du_k)\,du_{k+1},$$
where $A$ is a measurable subset of $\mathcal{X}\times\mathcal{X}$.
2. The reweighting operator $L_{k+1}^\pi$ maps probability measures on $\mathcal{X}\times\mathcal{X}$ to probability measures on $\mathcal{X}\times\mathcal{X}$ and is defined by
$$L_{k+1}^\pi Q(A) = Z^{-1}\iint_A w_{k+1}(u_{k+1}, u_k)\,Q(du_{k+1}, du_k),$$
where $Z$ is the normalization constant of the resulting measure. The weight function is given by
$$w_{k+1}(u_{k+1}, u_k) = \frac{P(y_{k+1}|u_{k+1})\,P(u_{k+1}|u_k)}{\pi(u_{k+1}|u_k, y_{k+1})}.$$
3. Finally, $M$ maps probability measures on $\mathcal{X}\times\mathcal{X}$ into probability measures on $\mathcal{X}$ via marginalization onto the first component:
$$MQ(B) = \iint_{B\times\mathcal{X}} Q(du_{k+1}, du_k).$$
It is easy to see that the posterior µk satisfies a natural recursion in terms of these operators.
Lemma 6.9. $\mu_{k+1} = M L_{k+1}^\pi P_{k+1}^\pi \mu_k$.

Proof. Let $P(u_k|Y_k)$ denote the density of $\mu_k$; then $P_{k+1}^\pi\mu_k$ is a measure on $\mathcal{X}\times\mathcal{X}$ with density
$$\pi(u_{k+1}|u_k, y_{k+1})\,P(u_k|Y_k).$$
And $L_{k+1}^\pi P_{k+1}^\pi\mu_k$ is a measure on $\mathcal{X}\times\mathcal{X}$ with density
$$Z^{-1}w_{k+1}(u_{k+1}, u_k)\,\pi(u_{k+1}|u_k, y_{k+1})\,P(u_k|Y_k) = Z^{-1}P(y_{k+1}|u_{k+1})\,P(u_{k+1}|u_k)\,P(u_k|Y_k).$$
Finally, $M L_{k+1}^\pi P_{k+1}^\pi\mu_k$ is a measure on $\mathcal{X}$ with density
$$Z^{-1}\int_{\mathcal{X}} P(y_{k+1}|u_{k+1})\,P(u_{k+1}|u_k)\,P(u_k|Y_k)\,du_k = Z^{-1}P(y_{k+1}|u_{k+1})\,P(u_{k+1}|Y_k). \qquad (6.15)$$
Similarly, for the normalization factor, we have
$$Z = \iint_{\mathcal{X}\times\mathcal{X}} P(y_{k+1}|u_{k+1})\,P(u_{k+1}|u_k)\,P(u_k|Y_k)\,du_k\,du_{k+1} = \int_{\mathcal{X}} P(y_{k+1}|u_{k+1})\,P(u_{k+1}|Y_k)\,du_{k+1},$$
and thus by Bayes' formula (6.15) is equal to $P(u_{k+1}|Y_{k+1})$ as required.
We now show that an associated recursion is satisfied by the SIR filter $\hat{\mu}^N$.

Lemma 6.10. Let $\hat{\mu}^N$ be the SIR filter given by (6.8); then
$$\hat{\mu}_{k+1}^N = M L_{k+1}^\pi S^N P_{k+1}^\pi \hat{\mu}_k^N$$
for all $k \ge 0$ and $N \ge 1$, where $S^N$ denotes the sampling operator acting on $\mathcal{M}(\mathcal{X}\times\mathcal{X})$.

Proof. By definition, $\hat{\mu}_k^N = \sum_{n=1}^{N} w_k^{(n)}\delta_{\hat{u}_k^{(n)}}$, so that $P_{k+1}^\pi\hat{\mu}_k^N \in \mathcal{M}(\mathcal{X}\times\mathcal{X})$ with density
$$\sum_{n=1}^{N} w_k^{(n)}\,\pi(u_{k+1}|\hat{u}_k^{(n)}, y_{k+1})\,\delta(u_k - \hat{u}_k^{(n)}).$$
Note that a sample $U \sim P_{k+1}^\pi\hat{\mu}_k^N$ is a pair $(\hat{u}_{k+1}, u_k)$ obtained as follows: first draw a sample $u_k^{(n)}$ from $\{\hat{u}_k^{(n)}\}_{n=1}^N$ with weights $\{w_k^{(n)}\}_{n=1}^N$, and then draw a sample $\hat{u}_{k+1}^{(n)}$ from $\pi(u_{k+1}|u_k^{(n)}, y_{k+1})$. Thus, by definition of the $\hat{u}_{k+1}$ sequence, we see that $S^N P_{k+1}^\pi\hat{\mu}_k^N$ has density
$$\frac{1}{N}\sum_{n=1}^{N}\delta(u_{k+1} - \hat{u}_{k+1}^{(n)})\,\delta(u_k - u_k^{(n)}).$$
It follows that $L_{k+1}^\pi S^N P_{k+1}^\pi\mu_k^N$ has density
$$\sum_{n=1}^{N} Z^{-1}w_{k+1}(\hat{u}_{k+1}^{(n)}, u_k^{(n)})\,\delta(u_{k+1} - \hat{u}_{k+1}^{(n)})\,\delta(u_k - u_k^{(n)}),$$
and $M L_{k+1}^\pi S^N P_{k+1}^\pi\mu_k^N$ has density
$$\sum_{n=1}^{N} Z^{-1}w_{k+1}(\hat{u}_{k+1}^{(n)}, u_k^{(n)})\,\delta(u_{k+1} - \hat{u}_{k+1}^{(n)}).$$
Lastly, the normalization factor is given by
$$Z = \iint_{\mathcal{X}\times\mathcal{X}}\sum_{n=1}^{N} w_{k+1}(\hat{u}_{k+1}^{(n)}, u_k^{(n)})\,\delta(u_{k+1} - \hat{u}_{k+1}^{(n)})\,\delta(u_k - u_k^{(n)})\,du_k\,du_{k+1} = \sum_{n=1}^{N} w_{k+1}(\hat{u}_{k+1}^{(n)}, u_k^{(n)}),$$
so that $Z^{-1}w_{k+1}(\hat{u}_{k+1}^{(n)}, u_k^{(n)}) = w_{k+1}^{(n)}$ and we obtain the result.
In the final step before proving Theorem 6.5, we state some simple properties for the operators appearing in
the recursions. Note that these are similar but not (all) immediately implied by the corresponding results for the
BPF, Lemma 6.3.
Lemma 6.11. We have the following simple estimates:
1. $d(M\nu, M\mu) \le d(\nu,\mu)$.
2. $d(P_{k+1}^\pi\nu, P_{k+1}^\pi\mu) \le d(\nu,\mu)$.
3. $\sup_{\nu\in\mathcal{M}(\mathcal{X}\times\mathcal{X})} d(S^N\nu, \nu) \le N^{-1/2}$.

Proof. Let $\tilde{f}(x,y) = f(x)$ and let $g(x,y)$ denote an arbitrary function. Then
$$M\nu(f) - M\mu(f) = \nu(\tilde{f}) - \mu(\tilde{f}).$$
The first inequality follows immediately from taking the supremum over all $|f| \le 1$, which is necessarily smaller than the supremum of $\nu(g) - \mu(g)$ over all $|g| \le 1$.
We also have
$$P_{k+1}^\pi\nu(g) - P_{k+1}^\pi\mu(g) = \nu(g^\pi) - \mu(g^\pi),$$
where $g^\pi(u_k) = \int g(u_{k+1}, u_k)\,\pi(u_{k+1}|u_k, y_{k+1})\,du_{k+1}$. And since $|g^\pi|_\infty \le 1$, the second inequality follows. The third inequality is proven in [25, Lemma 4.7], simply replacing $\mathcal{X}$ with $\mathcal{X}\times\mathcal{X}$.
We can now proceed with the main result.
Proof of Theorem 6.5. In the context of Lemma 6.4, taking $Z = \mathcal{X}\times\mathcal{X}$ and $g_{k+1} = f_{k+1}$, it follows that $G_{k+1}\nu = L_{k+1}^\pi\nu$. Indeed, for any $\varphi : \mathcal{X}\times\mathcal{X} \to \mathbb{R}$ and with $g_{k+1} = Z^{-1}w_{k+1}$ we have
$$G_{k+1}\nu(\varphi) = \frac{\nu(g_{k+1}\varphi)}{\nu(g_{k+1})} = \frac{\nu(w_{k+1}\varphi)}{\nu(w_{k+1})} = L_{k+1}^\pi\nu(\varphi).$$
By (6.10), we therefore obtain from Lemma 6.4 that
$$d(L_{k+1}^\pi\mu, L_{k+1}^\pi\nu) \le (2\kappa^{-2})\,d(\mu,\nu)$$
for all $\mu,\nu \in \mathcal{M}(\mathcal{X}\times\mathcal{X})$.
Thus, using the recursions given in Lemmas 6.9 and 6.10 and the estimates given in Lemma 6.11, we obtain
$$\begin{aligned}
d(\hat{\mu}_{k+1}^N, \mu_{k+1}) &= d(M L_{k+1}^\pi S^N P_{k+1}^\pi\hat{\mu}_k^N,\; M L_{k+1}^\pi P_{k+1}^\pi\mu_k) \\
&\le d(L_{k+1}^\pi S^N P_{k+1}^\pi\hat{\mu}_k^N,\; L_{k+1}^\pi P_{k+1}^\pi\mu_k) \\
&\le 2\kappa^{-2}\, d(S^N P_{k+1}^\pi\hat{\mu}_k^N,\; P_{k+1}^\pi\mu_k) \\
&\le 2\kappa^{-2}\Big( d(S^N P_{k+1}^\pi\hat{\mu}_k^N,\; P_{k+1}^\pi\hat{\mu}_k^N) + d(P_{k+1}^\pi\hat{\mu}_k^N,\; P_{k+1}^\pi\mu_k) \Big) \\
&\le 2\kappa^{-2}N^{-1/2} + 2\kappa^{-2}\, d(\hat{\mu}_k^N, \mu_k),
\end{aligned}$$
and since $\hat{\mu}_0^N = \mu_0$, we obtain (6.13) by induction. Moreover, since $\mu_k^N = S^N\hat{\mu}_k^N$,
$$d(\mu_k^N, \mu_k) = d(S^N\hat{\mu}_k^N, \mu_k) \le d(S^N\hat{\mu}_k^N, \hat{\mu}_k^N) + d(\hat{\mu}_k^N, \mu_k),$$
and (6.14) follows.
The corollary follows immediately.
Proof of Corollary 6.8. For the OPF we have
$$\pi(u_{k+1}|u_k, y_{k+1}) = P(u_{k+1}|u_k, y_{k+1}) = \frac{P(y_{k+1}|u_{k+1})\,P(u_{k+1}|u_k)}{P(y_{k+1}|u_k)},$$
where we have applied Bayes' formula in the final equality. But under Assumption 2.1 we have that
$$P(y_{k+1}|u_k) = Z_S^{-1}\exp\Big( -\tfrac{1}{2}|y_{k+1} - H\psi(u_k)|_S^2 \Big).$$
Thus we define $f_{k+1}$ by
$$f_{k+1}(u_{k+1}, u_k) = Z_S\,\frac{P(y_{k+1}|u_{k+1})\,P(u_{k+1}|u_k)}{\pi(u_{k+1}|u_k, y_{k+1})} = \exp\Big( -\tfrac{1}{2}|y_{k+1} - H\psi(u_k)|_S^2 \Big),$$
and hence (6.10) holds with $\kappa^{-1} = \exp\big( \max_{0\le j\le K-1}|y_{j+1}|^2 + \sup_v|H\psi(v)|_S^2 \big)$, which, for each $Y_K$, is finite by Assumption 2.1. The result follows from Theorem 6.5.
6.3 Gaussianized Optimal Particle Filter
In this section we derive the consistency result for the GOPF.
Theorem 6.12. Let $\nu^N$ be the GOPF defined by (3.12) and let Assumption 2.1 hold. Then there is $\kappa = \kappa(Y_K)$ such that
$$d(\nu_K^N, \mu_K) \le \sum_{k=0}^{K}(2\kappa^{-2})^k N^{-1/2}$$
for all $K, N \ge 1$, where $\kappa^{-1} = \exp\big( \max_{0\le j\le K-1}|y_{j+1}|^2 + \sup_v|H\psi(v)|_S^2 \big)$.

For the GOPF, the consistency proof uses the same strategy, but turns out to be much more straightforward. First note that the decomposition of the filtering distribution given in (3.10) gives the recursion formula
$$\mu_{k+1} = Q_{k+1} K_{k+1} \mu_k \qquad (6.16)$$
where $K_{k+1} : \mathcal{M}(\mathcal{X}) \to \mathcal{M}(\mathcal{X})$ is defined by
$$K_{k+1}\mu(A) = Z^{-1}\int_A P(y_{k+1}|u_k)\,\mu(du_k)$$
for all measurable $A \subset \mathcal{X}$, where $Z$ is the normalization constant, and $Q_{k+1} : \mathcal{M}(\mathcal{X}) \to \mathcal{M}(\mathcal{X})$ is the Markov semigroup with kernel $P(u_{k+1}|u_k, y_{k+1})$.
Moreover, we have the following recursion for the GOPF.
Lemma 6.13. The GOPF $\nu_k^N$ satisfies the recursion
$$\nu_{k+1}^N = S^N Q_{k+1} K_{k+1} \nu_k^N \qquad (6.17)$$
with $\nu_0^N = S^N\mu_0$.

Proof. Let $\nu_k^N = \frac{1}{N}\sum_{n=1}^{N}\delta_{v_k^{(n)}}$; then $K_{k+1}\nu_k^N \in \mathcal{M}(\mathcal{X})$ with density
$$\sum_{n=1}^{N} Z^{-1}P(y_{k+1}|v_k^{(n)})\,\delta(v_k - v_k^{(n)}).$$
The normalization constant is given by
$$Z = \int_{\mathcal{X}}\sum_{n=1}^{N} P(y_{k+1}|v_k^{(n)})\,\delta(v_k - v_k^{(n)})\,dv_k = \sum_{n=1}^{N} P(y_{k+1}|v_k^{(n)}),$$
and thus $Z^{-1}P(y_{k+1}|v_k^{(n)}) = w_{k+1}^{(n)}$. We then have $Q_{k+1}K_{k+1}\nu_k^N \in \mathcal{M}(\mathcal{X})$ with density
$$\sum_{n=1}^{N} w_{k+1}^{(n)}\,P(v_{k+1}|v_k^{(n)}, y_{k+1}).$$
To draw a sample $v_{k+1}^{(n)}$ from this mixture model, we draw $\tilde{v}_k^{(n)}$ from $\{v_k^{(m)}\}_{m=1}^N$ with weights $\{w_{k+1}^{(m)}\}_{m=1}^N$ and then draw $v_{k+1}^{(n)}$ from $P(v_{k+1}|\tilde{v}_k^{(n)}, y_{k+1})$. It follows that $S^N Q_{k+1}K_{k+1}\nu_k^N = \nu_{k+1}^N$.
Proof of Theorem 6.12. If we let
$$g_{k+1}(v_k) := Z_S\,P(y_{k+1}|v_k) = \exp\Big( -\tfrac{1}{2}|y_{k+1} - H\psi(v_k)|_S^2 \Big),$$
then $g_{k+1}$ satisfies the assumptions of Lemma 6.4 with
$$\kappa^{-1} = \exp\Big( \max_{0\le j\le K-1}|y_{j+1}|^2 + \sup_v|H\psi(v)|_S^2 \Big).$$
In particular, since $G_{k+1}\nu = K_{k+1}\nu$, it follows from Lemma 6.4 that
$$d(K_{k+1}\mu, K_{k+1}\nu) \le (2\kappa^{-2})\,d(\mu,\nu)$$
for all $\mu,\nu \in \mathcal{M}(\mathcal{X})$.
Using the recursions (6.16), (6.17) and the estimates from Lemma 6.3, we see that
$$\begin{aligned}
d(\nu_{k+1}^N, \mu_{k+1}) &= d(S^N Q_{k+1}K_{k+1}\nu_k^N,\; Q_{k+1}K_{k+1}\mu_k) \\
&\le d(S^N Q_{k+1}K_{k+1}\nu_k^N,\; Q_{k+1}K_{k+1}\nu_k^N) + d(Q_{k+1}K_{k+1}\nu_k^N,\; Q_{k+1}K_{k+1}\mu_k) \\
&\le N^{-1/2} + d(K_{k+1}\nu_k^N,\; K_{k+1}\mu_k) \\
&\le N^{-1/2} + 2\kappa^{-2}\,d(\nu_k^N, \mu_k).
\end{aligned}$$
Thus, by induction, we obtain
$$d(\nu_{k+1}^N, \mu_{k+1}) \le \sum_{j=0}^{k}(2\kappa^{-2})^j N^{-1/2} + (2\kappa^{-2})^{k+1}\,d(\nu_0^N, \mu_0),$$
and the result follows from the fact that $d(\nu_0^N, \mu_0) = d(S^N\mu_0, \mu_0) \le N^{-1/2}$.
Acknowledgements. DK is supported as a Courant instructor. The work of AMS is supported by EPSRC,
DARPA and ONR.
References
[1] Hajime Akashi and Hiromitsu Kumamoto. Random sampling approach to state estimation in switching
environments. Automatica, 13(4):429–434, 1977.
[2] Rami Atar and Ofer Zeitouni. Exponential stability for nonlinear filtering. In Annales de l’IHP Probabilités
et statistiques, volume 33, pages 697–725, 1997.
[3] Peter Bauer, Alan Thorpe, and Gilbert Brunet. The quiet revolution of numerical weather prediction.
Nature, 525(7567):47–55, 2015.
[4] Amarjit Budhiraja and D Ocone. Exponential stability of discrete-time filters for bounded observation
noise. Systems & Control Letters, 30(4):185–193, 1997.
[5] Gerrit Burgers, Peter Jan van Leeuwen, and Geir Evensen. Analysis scheme in the ensemble Kalman filter.
Monthly weather review, 126(6):1719–1724, 1998.
[6] Alexandre J Chorin and Xuemin Tu. Implicit sampling for particle filters. Proceedings of the National
Academy of Sciences, 106(41):17249–17254, 2009.
[7] Dan Crisan and Kari Heine. Stability of the discrete time filter in terms of the tails of noise distributions.
Journal of the London Mathematical Society, 78(2):441–458, 2008.
[8] Pierre Del Moral, Aline Kurtzmann, and Julian Tugaut. On the stability and the uniform propagation of
chaos of extended ensemble Kalman-Bucy filters. arXiv preprint arXiv:1606.08256, 2016.
[9] Randal Douc, Gersende Fort, Eric Moulines, and Pierre Priouret. Forgetting the initial distribution for
hidden markov models. Stochastic processes and their applications, 119(4):1235–1256, 2009.
[10] Arnaud Doucet, Nando De Freitas, and Neil Gordon. An introduction to sequential monte carlo methods.
In Sequential Monte Carlo methods in practice, pages 3–14. Springer, 2001.
[11] Arnaud Doucet, Simon Godsill, and Christophe Andrieu. On sequential monte carlo sampling methods
for Bayesian filtering. Statistics and computing, 10(3):197–208, 2000.
[12] Geir Evensen. Using the extended Kalman filter with a multilayer quasi-geostrophic ocean model. Journal
of Geophysical Research: Oceans, 97(C11):17905–17924, 1992.
[13] Geir Evensen. The ensemble Kalman filter: Theoretical formulation and practical implementation. Ocean
dynamics, 53(4):343–367, 2003.
[14] Michael Ghil and Paola Malanotte-Rizzoli. Data assimilation in meteorology and oceanography. Advances
in geophysics, 33:141–266, 1991.
[15] John Harlim, Andrew J Majda, et al. Catastrophic filter divergence in filtering nonlinear dissipative systems. Communications in Mathematical Sciences, 8(1):27–43, 2010.
[16] Peter L Houtekamer and Herschel L Mitchell. Ensemble Kalman filtering. Quarterly Journal of the Royal Meteorological Society, 131(613):3269–3289, 2005.
[17] David Kelly, Andrew J Majda, and Xin T Tong. Concrete ensemble Kalman filters with rigorous catastrophic filter divergence. Proceedings of the National Academy of Sciences, 112(34):10589–10594, 2015.
[18] David Kelly, Eric Vanden-Eijnden, and Jonathan Weare. Implicit resampling in the optimal particle filter.
2016.
[19] DTB Kelly, KJH Law, and Andrew M Stuart. Well-posedness and accuracy of the ensemble Kalman filter
in discrete and continuous time. Nonlinearity, 27(10):2579, 2014.
[20] ML Kleptsyna and A Yu Veretennikov. On discrete time ergodic filters with wrong initial data. Probability
Theory and Related Fields, 141(3-4):411–444, 2008.
[21] Hiroshi Kunita. Asymptotic behavior of the nonlinear filtering errors of markov processes. Journal of
Multivariate Analysis, 1(4):365–393, 1971.
[22] K Law, AM Stuart, KC Zygalakis, et al. Accuracy and stability of the continuous-time 3dvar filter for the
Navier-Stokes equation. Nonlinearity, 26(8):2193, 2013.
[23] KJH Law, D Sanz-Alonso, A Shukla, and AM Stuart. Filter accuracy for the lorenz 96 model: fixed versus
adaptive observation operators. Physica D: Nonlinear Phenomena, 325:1–13, 2016.
[24] KJH Law, Abhishek Shukla, and AM Stuart. Analysis of the 3DVAR filter for the partially observed Lorenz '63 model. Discrete and Continuous Dynamical Systems A, 34:1061–1078, 2014.
[25] Kody Law, Andrew Stuart, and Kostas Zygalakis. Data Assimilation. Springer, 2015.
[26] Kody JH Law, Hamidou Tembine, and Raul Tempone. Deterministic mean-field ensemble kalman filtering. arXiv preprint arXiv:1409.0628, 2014.
[27] Jun S Liu and Rong Chen. Blind deconvolution via sequential imputations. Journal of the American Statistical Association, 90(430):567–576, 1995.
[28] Andrew C Lorenc. Analysis methods for numerical weather prediction. Quarterly Journal of the Royal
Meteorological Society, 112(474):1177–1194, 1986.
[29] Andrew J Majda and John Harlim. Filtering complex turbulent systems. Cambridge University Press,
2012.
[30] Andrew J Majda and Xin T Tong. Robustness and accuracy of finite ensemble Kalman filters in large dimensions. arXiv preprint arXiv:1606.09321, 2016.
[31] Jonathan C Mattingly, Andrew M Stuart, and Desmond J Higham. Ergodicity for SDEs and approximations: locally Lipschitz vector fields and degenerate noise. Stochastic Processes and their Applications, 101(2):185–232, 2002.
[32] Sean P Meyn and Richard L Tweedie. Markov Chains and Stochastic Stability. Springer Science &
Business Media, 2012.
[33] Alexander JF Moodey, Amos S Lawless, Roland WE Potthast, and Peter Jan Van Leeuwen. Nonlinear
error dynamics for cycled data assimilation methods. Inverse Problems, 29(2):025002, 2013.
[34] Patrick Rebeschini, Ramon Van Handel, et al. Can local particle filters beat the curse of dimensionality?
The Annals of Applied Probability, 25(5):2809–2866, 2015.
[35] Daniel Sanz-Alonso and Andrew M Stuart. Long-time asymptotics of the filtering distribution for partially
observed chaotic dynamical systems. SIAM/ASA Journal on Uncertainty Quantification, 3(1):1200–1220,
2015.
[36] Daniel Sanz-Alonso and Andrew M Stuart. Long-time asymptotics of the filtering distribution for partially
observed chaotic dynamical systems. SIAM/ASA Journal on Uncertainty Quantification, 3(1):1200–1220,
2015.
[37] Madineh Sedigh-Sarvestani, David J Albers, and Bruce J Gluckman. Data assimilation of glucose dynamics for use in the intensive care unit. In 2012 Annual International Conference of the IEEE Engineering in
Medicine and Biology Society, pages 5437–5440. IEEE, 2012.
[38] Madineh Sedigh-Sarvestani, Steven J Schiff, and Bruce J Gluckman. Reconstructing mammalian sleep
dynamics with data assimilation. PLoS Comput Biol, 8(11):e1002788, 2012.
[39] Chris Snyder. Particle filters, the “optimal” proposal and high-dimensional systems. In Proceedings of the
ECMWF Seminar on Data Assimilation for atmosphere and ocean, 2011.
[40] Chris Snyder, Thomas Bengtsson, and Mathias Morzfeld. Performance bounds for particle filters using
the optimal proposal. Monthly Weather Review, 143(11):4750–4761, 2015.
[41] Sebastian Thrun. Particle filters in robotics. In Proceedings of the Eighteenth conference on Uncertainty
in artificial intelligence, pages 511–518. Morgan Kaufmann Publishers Inc., 2002.
[42] Xin T Tong, Andrew J Majda, and David Kelly. Nonlinear stability and ergodicity of ensemble based
Kalman filters. Nonlinearity, 29(2):657, 2016.
[43] Xin T Tong, Andrew J Majda, and David Kelly. Nonlinear stability of the ensemble Kalman filter with
adaptive covariance inflation. Communications in Mathematical Sciences, 14(5), 2016.
[44] Xin Thomson Tong and Ramon Van Handel. Ergodicity and stability of the conditional distributions of
nondegenerate Markov chains. The Annals of Applied Probability, pages 1495–1540, 2012.
[45] Ramon Van Handel. The stability of conditional Markov processes and Markov chains in random environments. The Annals of Probability, pages 1876–1925, 2009.
[46] Peter Jan Van Leeuwen. Particle filtering in geophysical systems. Monthly Weather Review, 137(12):4089–
4114, 2009.
[47] Peter Jan van Leeuwen. Nonlinear data assimilation in geosciences: an extremely efficient particle filter.
Quarterly Journal of the Royal Meteorological Society, 136(653):1991–1999, 2010.
[48] Daniel B Work, Sébastien Blandin, Olli-Pekka Tossavainen, Benedetto Piccoli, and Alexandre M Bayen.
A traffic model for velocity data assimilation. Applied Mathematics Research eXpress, 2010(1):1–35,
2010.
[49] VS Zaritskii, VB Svetnik, and LI Šimelevič. Monte-Carlo technique in problems of optimal information processing. Avtomatika i Telemekhanika, (12):95–103, 1975.
An Approximate ML Detector for MIMO
Channels Corrupted by Phase Noise
Richard Combes and Sheng Yang
Abstract
We consider the multiple-input multiple-output (MIMO) communication channel impaired by phase
noises at both the transmitter and receiver. We focus on the maximum likelihood (ML) detection problem
for uncoded single-carrier transmission. We derive an approximation of the likelihood function, based
on which we propose an efficient detection algorithm. The proposed algorithm, named self-interference
whitening (SIW), consists in 1) estimating the self-interference caused by the phase noise perturbation,
then 2) whitening the said interference, and finally 3) detecting the transmitted vector. While the exact
ML solution is computationally intractable, we construct a simulation-based lower bound on the error
probability of ML detection. Leveraging this lower bound, we perform extensive numerical experiments
demonstrating that SIW is, in most cases of interest, very close to optimal with moderate phase noise.
More importantly and perhaps surprisingly, such near-ML performance can be achieved by applying the nearest neighbor detection algorithm only twice. In this sense, our results reveal a striking fact: near-ML detection of phase noise corrupted MIMO channels can be done as efficiently as for conventional
MIMO channels without phase noise.
Index Terms
MIMO systems, phase noise, maximum likelihood detection, probability of error.
I. INTRODUCTION
We consider the signal detection problem for the following discrete-time multiple-input multiple-output (MIMO) channel
$$y = \mathrm{diag}\big(e^{j\theta_{r,1}},\dots,e^{j\theta_{r,n_r}}\big)\,H\,\mathrm{diag}\big(e^{j\theta_{t,1}},\dots,e^{j\theta_{t,n_t}}\big)\,x + z, \qquad (1)$$
The authors are with the Laboratory of Signals and Systems (L2S), CentraleSupélec, 3 rue Joliot-Curie, 91190 Gif-sur-Yvette,
France. (e-mail: richard.combes,[email protected])
where $H \in \mathbb{C}^{n_r\times n_t}$ is the channel matrix known to the receiver; $z \in \mathbb{C}^{n_r\times1}$ represents a realization of the additive noise, whereas $\theta_{t,l}$ and $\theta_{r,k}$ are the phase noises at the $l$th transmit antenna and the $k$th receive antenna, respectively; the input vector $x \in \mathbb{C}^{n_t\times1}$ is assumed to be carved from a quadrature amplitude modulation (QAM) constellation. The goal is to estimate $x$ from the observation $y \in \mathbb{C}^{n_r\times1}$, with only statistical knowledge of the additive noise and the phase noises.
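To fix ideas, the following Python sketch draws one realization of the channel output in (1) for an assumed 16-QAM input, i.i.d. Gaussian channel entries and Gaussian phase noises; the specific distributions and parameter values are assumptions chosen for illustration only and are not prescribed by the model statement.

```python
import numpy as np

def phase_noise_mimo_output(H, x, sigma_t, sigma_r, noise_var, rng):
    """Sample y = diag(e^{j theta_r}) H diag(e^{j theta_t}) x + z, as in (1)."""
    nr, nt = H.shape
    theta_t = sigma_t * rng.standard_normal(nt)   # transmit phase noises (assumed Gaussian)
    theta_r = sigma_r * rng.standard_normal(nr)   # receive phase noises (assumed Gaussian)
    z = np.sqrt(noise_var / 2) * (rng.standard_normal(nr) + 1j * rng.standard_normal(nr))
    return np.exp(1j * theta_r) * (H @ (np.exp(1j * theta_t) * x)) + z

# Assumed example: 4x4 i.i.d. Rayleigh channel and a 16-QAM input vector.
rng = np.random.default_rng(3)
nt = nr = 4
H = (rng.standard_normal((nr, nt)) + 1j * rng.standard_normal((nr, nt))) / np.sqrt(2)
qam = np.array([a + 1j * b for a in (-3, -1, 1, 3) for b in (-3, -1, 1, 3)])
x = qam[rng.integers(0, 16, size=nt)]
y = phase_noise_mimo_output(H, x, sigma_t=0.05, sigma_r=0.05, noise_var=0.1, rng=rng)
print(y)
```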
In the case where the phase noise is absent, the problem is well understood, and the maximum likelihood (ML) solution can be found using any nearest neighbor detection (NND)
algorithm (see [1] and the references therein). In particular, the sphere decoder [2] has been
shown to be very efficient [3], so that its expected complexity (averaged over channel realizations)
is polynomial in the problem dimension nt . Furthermore, there exist approximate NND algorithms (e.g., based on lattice reduction) that achieve near-ML performance when applied for
MIMO detection [4].
The presence of phase noise in (1) is both a practical and long-standing problem in communication. In their seminal paper [5] back in the 70’s, Foschini et al. used this model to
capture the residual phase jitter at the phase-locked loop of the receiver side, and investigated
both the performance of decoding algorithms as well as constellation design in the scalar
case (nt = nr = 1). As a matter of fact, in most wireless communication systems, phase noise
is present due to the phase and frequency instabilities in the radio frequency oscillators used at
both the transmitter and the receiver [6]. The channel (1) can be seen as a valid mathematical
model when the phase noise varies slowly as compared to the symbol duration.1 While phase
noise can be practically ignored in conventional MIMO systems, its impact becomes prominent
at higher carrier frequencies since it can be shown that phase noise power increases quadratically
with carrier frequency [6], [8]. The performance degradation due to phase noise becomes even
more severe with the use of higher order modulations for which the angular separation between
constellation points can be small. At medium to high SNR, phase noise dominates additive noise,
becoming the capacity bottleneck [9], [10]. As for signal detection, finding the ML solution for
the MIMO phase noise channel (1) is hard in general. Indeed, unlike for conventional MIMO
channels, the likelihood function of the transmitted signal cannot be obtained in closed form.
¹As pointed out in [7] and the references therein, an effective discrete-time channel is usually obtained from a waveform phase noise channel after filtering. When the continuous-time phase noise varies rapidly during the symbol period, the filtered output also suffers from amplitude perturbation. More discussion is provided in Section VI-D.
Our Contribution. In this work, we propose an efficient MIMO detection algorithm which
finds an approximate ML solution in the presence of phase noise. The main contributions of this
work are summarized as follows.
(i) We derive a tractable approximation of the likelihood function of the transmitted signal.
While the exact likelihood does not have a closed-form expression, the proposed approximation has a simple form and turns out to be accurate for weak to medium phase noises.
(ii) Since maximizing the approximate likelihood function over a discrete signal set is still
hard, we propose a heuristic method that finds an approximate solution by applying twice
the nearest neighbor detection algorithm. The proposed algorithm, called self-interference
whitening (SIW), has a simple geometric interpretation. Intuitively, the phase noise perturbation generates self-interference that depends on the transmitted signal through the
covariance matrix. The main idea is to first estimate the covariance of the self-interference
with a potentially inaccurate initial signal solution, then perform the whitening with the
estimated covariance, followed by a second detection. From the optimization point of view,
our algorithm can be seen as a (well-chosen) concave approximation to a non-concave
objective function.
(iii) We assess the performance of SIW and competing algorithms in different communication
scenarios. Since the error probability of ML decoding is unknown, we propose a simulation-based lower bound which we use as a benchmark. Simulation results show that SIW
achieves near ML performance in most scenarios. In this sense, our work reveals that
near optimal MIMO detection with phase noise can be done as efficiently as without phase
noise. Although the likelihood approximation is derived using the assumption that the phase
noise has a Gaussian distribution, our numerical experiments show that SIW works well
even with non-Gaussian phase noises.
Related Work. Receiver design with phase noise mitigation has been extensively investigated
in the past years (see [11], [12] and references therein). More complex channel models, including
multi-carrier systems (e.g., OFDM) and time-correlated phase noises (e.g., the Wiener process)
have also been considered. In particular, joint data detection and phase noise estimation algorithms have been proposed in [13], [11]. A phase noise estimation based scheme to improve the
system performance for smaller alphabets has been proposed in [12]. However, the challenging
problem of signal detection in MIMO phase noise channels using higher order modulation, where
performance is extremely sensitive to phase noise, has not been addressed adequately before.
The remainder of the paper is organized as follows. We start with a formal description of
the problem in the next section. The approximation of the likelihood function is derived in
Section III-A, followed by the proposed algorithm described in Section III-B. The hardness of
finding the exact ML solution is investigated in Section IV. We present the numerical experiments
in Section V. Further discussion on the proposed algorithm and relevance of the considered
channel model is provided in Section VI. Section VII concludes the paper.
II. ASSUMPTIONS AND PROBLEM FORMULATION
Notation
Throughout the paper, we use the following notation. For random quantities, we use upper
case letters, e.g., X, for scalars, upper case letters with bold and non-italic fonts, e.g., V , for
vectors, and upper case letter with bold and sans serif fonts, e.g., M , for matrices. Deterministic
quantities are denoted in a rather conventional way with italic letters, e.g., a scalar x, a vector
v, and a matrix M. The Euclidean norm of a vector v is denoted by ‖v‖. The transpose and conjugate transpose of M are M^T and M^H, respectively.
A. System model
In the following, we describe formally the channel model (1) presented in the previous section.
We assume a MIMO channel with n_t transmit and n_r receive antennas. Let H denote the channel matrix, where the (k, l)-th element of H, denoted as h_{k,l}, represents the channel gain between the l-th transmit antenna and the k-th receive antenna. The transmitted vector is denoted by x = [x_1, ..., x_{n_t}]^T, where x_l ∈ X, l = 1, ..., n_t, X being typically a QAM constellation with normalized average energy, i.e., (1/|X|) Σ_{x∈X} |x|² = 1. For a given transmitted vector x, the received vector in base-band can be written as the following random vector
$$Y = \Lambda_R H \Lambda_T x + Z, \qquad (2)$$
where the diagonal matrices Λ_R := diag(e^{jΘ_{r,1}}, ..., e^{jΘ_{r,n_r}}) and Λ_T := diag(e^{jΘ_{t,1}}, ..., e^{jΘ_{t,n_t}}) capture the phase perturbation at the receiver and transmitter, respectively; Z is the additive white Gaussian noise (AWGN) vector with Z ∼ CN(0, γ^{-1} I), where γ is the nominal signal-to-noise ratio (SNR). We assume that the phase noise Θ := [Θ_{t,1} ··· Θ_{t,n_t} Θ_{r,1} ··· Θ_{r,n_r}]^T is jointly Gaussian with Θ ∼ N(0, Q_θ), where the covariance matrix Q_θ can be arbitrary. Note
that this model includes as a special case the uplink channel in which nt is the number of
single-antenna users. In such a case, the transmit phase noises are independent. For simplicity,
we consider uncoded transmission in which each symbol xl can take any value from X with
equal probability.
Further, we assume that the channel matrix can be random but is perfectly known at the
receiver, whereas such knowledge at the transmitter side is irrelevant in uncoded transmission.
We also define H_Θ := Λ_R H Λ_T and accordingly H_θ for some realization of Θ = θ. By definition, we have H_0 = H. Finally, we ignore the temporal correlation of the phase noise process and
the channel process, and focus on the spatial aspect of the signal detection problem.
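For concreteness, the short numerical sketch below (our own construction, not code from the paper) draws one realization of the discrete-time model (2), assuming i.i.d. CN(0, 1) Rayleigh fading for H and i.i.d. Gaussian phase noises; the function and variable names are our own.

```python
import numpy as np

def simulate_channel(x, gamma_db=30.0, sigma_t_deg=3.0, sigma_r_deg=3.0, rng=None):
    """Draw one realization of Y = Lambda_R H Lambda_T x + Z as in model (2).

    Assumptions (ours, for illustration): square system, i.i.d. CN(0,1) entries
    of H, independent Gaussian phase noises, AWGN of variance 1/gamma.
    """
    rng = np.random.default_rng() if rng is None else rng
    nt = x.size
    nr = nt
    gamma = 10.0 ** (gamma_db / 10.0)
    H = (rng.standard_normal((nr, nt)) + 1j * rng.standard_normal((nr, nt))) / np.sqrt(2)
    theta_t = np.deg2rad(sigma_t_deg) * rng.standard_normal(nt)
    theta_r = np.deg2rad(sigma_r_deg) * rng.standard_normal(nr)
    z = (rng.standard_normal(nr) + 1j * rng.standard_normal(nr)) * np.sqrt(1.0 / (2.0 * gamma))
    # diag(e^{j theta_r}) H diag(e^{j theta_t}) x  ==  e^{j theta_r} * (H @ (e^{j theta_t} * x))
    y = np.exp(1j * theta_r) * (H @ (np.exp(1j * theta_t) * x)) + z
    return y, H, theta_t, theta_r
```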
B. Problem formulation
With AWGN, we have the following conditional probability density function (pdf)
$$p(y \mid x, \theta, H) = \frac{\gamma^{n_r}}{\pi^{n_r}}\, e^{-\gamma \|y - H_\theta x\|^2}, \qquad (3)$$
from which we obtain the likelihood function by integrating over Θ
$$p(y \mid x, H) = \mathbb{E}_\Theta\big[p(y \mid x, \Theta, H)\big] \qquad (4)$$
$$\ln p(y \mid x, H) = \ln \mathbb{E}_\Theta\Big[e^{-\gamma \|y - H_\Theta x\|^2}\Big] + \ln \frac{\gamma^{n_r}}{\pi^{n_r}}. \qquad (5)$$
The ML detector finds an input vector from the alphabet X^{n_t} such that the likelihood function is maximized. In practice, it is often more convenient to use the log-likelihood function as the objective function, i.e., after removing a constant term,
$$f(x, y, H, \gamma, Q_\theta) := \ln \mathbb{E}_\Theta\Big[e^{-\gamma \|y - H_\Theta x\|^2}\Big], \qquad (6)$$
where the arguments γ and Q_θ can be omitted whenever confusion is not likely. Thus,
$$\hat{x}_{\mathrm{ML}}(y, H) := \arg\max_{x \in \mathcal{X}^{n_t}} f(x, y, H). \qquad (7)$$
From (7) we see two main challenges to compute the optimal solution:
1) The expectation in (7) cannot be obtained in closed form. A numerical implementation is
equivalent to finding the numerical integral in nt + nr dimensions. This can be extremely
hard in high dimensions.
2) The size of the optimization space, |X |nt , can be prohibitively large when the modulation
size |X | and the input dimension nt become large.
In Section IV, we examine in more detail why both of these issues are indeed challenging.
In a conventional MIMO channel without phase noise, finding the ML solution is reduced to
solving the following problem
$$\hat{x}^0_{\mathrm{ML}}(y, H) := \arg\min_{x \in \mathcal{X}^{n_t}} \|y - H_0 x\|^2, \qquad (8)$$
which is also called the minimum Euclidean distance detection or nearest neighbor detection (NND). Although the search space in (8) remains large, the expectation is gone. Furthermore,
since the objective function is the Euclidean distance, efficient algorithms (e.g., sphere decoder [2]
or lattice decoder [1]) exploiting the geometric structure of the problem can be applied without
searching over the whole space X nt . It is shown in [3] that the sphere decoder has a polynomial
average complexity with respect to the input dimension nt when the channel matrix is drawn
i.i.d. from a Rayleigh distribution.
In practice, one may simply ignore the existence of phase noise and still apply (8) to obtain
x̂⁰_ML, which we refer to as the naive ML solution in our work. While this can work relatively
well when the phase noise is close to 0, it becomes highly suboptimal with stronger phase noise
which is usually the case in high frequency bands with imperfect oscillators. In this paper, we
provide a near ML solution by circumventing the two challenges mentioned earlier. We first
propose an approximation of the likelihood function. Then we propose an algorithm to solve
approximately the optimization problem (7).
III. PROPOSED SCHEME
A. Proposed Approximation of the Likelihood Function
In this section, we propose to approximate the likelihood function with the implicit assumption
that the phase noise is not large. Indeed, in practice, the standard deviation of the phase noise
is typically smaller than 10 degrees ≈ 0.174 rad. For stronger phase noises, it is no longer
reasonable to use QAM and the problem should be addressed differently.
The likelihood function (3) depends on the Euclidean norm ‖y − Λ_R H Λ_T x‖ = ‖Λ_R^H y − H Λ_T x‖, in which the difference vector Λ_R^H y − H Λ_T x can be rewritten and approximated as follows
$$\Lambda_R^H y - H\Lambda_T x = \begin{bmatrix} -HD_x & D_y \end{bmatrix}\begin{bmatrix} e^{j\theta_t} \\ e^{-j\theta_r} \end{bmatrix} \qquad (9)$$
$$\approx (y - Hx) - j\begin{bmatrix} HD_x & D_y \end{bmatrix}\theta, \qquad (10)$$
where we define D_x := diag(x_1, ..., x_{n_t}), D_y := diag(y_1, ..., y_{n_r}), and recall that θ := [θ_t^T  θ_r^T]^T; (10) is from the linear approximation e^{jθ} = 1 + jθ + o(θ), where, with a slight abuse of notation, e^{jθ} denotes the vector obtained from the element-wise complex exponential operation and the little-o Landau notation o(θ) is element-wise. Thus the Euclidean norm has the corresponding real approximation:
$$\|y - \Lambda_R H \Lambda_T x\|^2 \approx \|A\theta + b\|^2, \qquad (11)$$
where A ∈ R^{2n_r×(n_t+n_r)} and b ∈ R^{2n_r×1} are defined as
$$A := \begin{bmatrix} \mathrm{Im}(HD_x) & \mathrm{Im}(D_y) \\ -\mathrm{Re}(HD_x) & -\mathrm{Re}(D_y) \end{bmatrix}, \qquad b := \begin{bmatrix} \mathrm{Re}(y - Hx) \\ \mathrm{Im}(y - Hx) \end{bmatrix}. \qquad (12)$$
With the above approximation, we can derive the approximation of the log-likelihood function.

Proposition 1. Let A and b be defined as in (12). Then we have the following approximation of the log-likelihood function, ln E_Θ[e^{-γ‖y − H_θ x‖²}] ≈ f̂(x, y, H, γ, Q_θ), with
$$\hat{f}(x, y, H) := -\gamma\, b^T W_x^{-1} b - \frac{1}{2}\ln\det(W_x), \qquad (13)$$
where W_x is defined as
$$W_x := I + 2\gamma\, A Q_\theta A^T. \qquad (14)$$
Hence, the proposed approximate ML (aML) solution is
$$\hat{x}_{\mathrm{aML}}(y, H) := \arg\min_{x \in \mathcal{X}^{n_t}} \gamma\, b^T W_x^{-1} b + \frac{1}{2}\ln\det(W_x). \qquad (15)$$
Proof. Since we assume that Θ ∼ N(0, Q_θ), we have
$$\mathbb{E}_\Theta\Big[e^{-\gamma\|A\Theta+b\|^2}\Big] = \frac{1}{\sqrt{\det(2\pi Q_\theta)}}\int d\theta\, \exp\Big(-\theta^T\underbrace{\big(\gamma A^T A + \tfrac{1}{2}Q_\theta^{-1}\big)}_{\frac{1}{2}Q^{-1}}\theta - 2\gamma\, b^T A\theta - \gamma\|b\|^2\Big) \qquad (16)$$
$$= \sqrt{\frac{\det(Q)}{\det(Q_\theta)}}\exp\Big(-\gamma\|b\|^2 + \gamma^2 b^T A(2Q)A^T b\Big)\cdot\int d\theta\, \frac{1}{\sqrt{\det(2\pi Q)}}\exp\Big(-\tfrac{1}{2}\big(\theta + \gamma(2Q)A^T b\big)^T Q^{-1}\big(\theta + \gamma(2Q)A^T b\big)\Big) \qquad (17)$$
$$= \sqrt{\frac{\det(Q)}{\det(Q_\theta)}}\exp\Big(-\gamma\|b\|^2 + \gamma^2 b^T A(2Q)A^T b\Big) \qquad (18)$$
$$= \sqrt{\frac{\det(Q)}{\det(Q_\theta)}}\exp\Big(-\gamma\, b^T\big(I - \gamma A\big((2Q_\theta)^{-1} + \gamma A^T A\big)^{-1}A^T\big)b\Big) \qquad (19)$$
$$= \sqrt{\frac{\det(Q)}{\det(Q_\theta)}}\exp\Big(-\gamma\, b^T\big(I + \gamma A(2Q_\theta)A^T\big)^{-1}b\Big) \qquad (20)$$
$$= \frac{1}{\sqrt{\det(I + 2\gamma A Q_\theta A^T)}}\exp\Big(-\gamma\, b^T\big(I + 2\gamma A Q_\theta A^T\big)^{-1}b\Big), \qquad (21)$$
where (18) holds since the integrand in (17) is a pdf with respect to θ; (20) is from the Woodbury matrix identity (I + UCV)^{-1} = I − U(C^{-1} + VU)^{-1}V. Taking the logarithm, we obtain the approximated log-likelihood function (13).

Fig. 1: The proposed approximation of the likelihood function in the scalar case. Solid line is the actual likelihood level set, dashed line is the approximation. Here γ = 30 dB and the phase noise has standard deviation 3° at the transmitter and at the receiver. (a) 4-QAM, f = f̂ = −10; (b) 16-QAM, f = f̂ = −4; (c) 64-QAM, f = f̂ = −1.6.
In Figure 1, we illustrate the proposed approximation for 4-, 16-, and 64-QAM. In all three cases, we plot for each constellation point a level set of the likelihood function with respect to y in solid line. The level sets of the approximated likelihood function are plotted similarly in dashed line. While the likelihood function is evaluated using numerical integration, the approximation is in closed form given by (13). In this figure, we observe that the approximation is quite accurate, especially for signal points with smaller amplitude. Further, the resemblance of the level sets for the approximate likelihood to ellipsoids suggests that the main contribution in the right-hand side of (13) comes from the first term −γ b^T W_x^{-1} b. We shall exploit this feature later on to construct the proposed algorithm.
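As an illustration of how (12)-(14) translate into code, the minimal NumPy sketch below evaluates the approximate log-likelihood f̂ for a candidate x. It is our own rendering under our own naming conventions, assuming Q_theta is the (n_t + n_r) × (n_t + n_r) phase-noise covariance; no claim is made that this matches the authors' implementation.

```python
import numpy as np

def approx_loglik(x, y, H, gamma, Q_theta):
    """Approximate log-likelihood f_hat(x, y, H) of (13), built from A and b
    of (12) and W_x of (14). A sketch only, with no numerical optimizations."""
    HDx = H * x[np.newaxis, :]                      # H @ diag(x)
    Dy = np.diag(y)
    A = np.block([[np.imag(HDx), np.imag(Dy)],
                  [-np.real(HDx), -np.real(Dy)]])   # eq. (12)
    r = y - H @ x
    b = np.concatenate([np.real(r), np.imag(r)])    # eq. (12)
    W = np.eye(A.shape[0]) + 2.0 * gamma * A @ Q_theta @ A.T   # eq. (14)
    sign, logdet = np.linalg.slogdet(W)
    return -gamma * b @ np.linalg.solve(W, b) - 0.5 * logdet   # eq. (13)
```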
Remark 1. We can check that when γQ_θ → 0, i.e., when the phase noise vanishes faster than the AWGN does, the above solution minimizes ‖b‖² = ‖y − Hx‖², which corresponds to the optimal NND solution in the conventional MIMO case. Nevertheless, when the phase noise does not vanish, the approximate log-likelihood function (13) depends on x in a rather complex way, due to the presence of the matrix W_x. Focusing on the term b^T W_x^{-1} b, we can think of W_x as the covariance matrix of some equivalent noise. Indeed, if we approximate the multiplicative phase noise as an additive perturbation, then the perturbation is a self-interference that depends on the input vector x. This perturbation is neither isotropic nor circularly symmetric, and can be captured by the covariance matrix W_x. Some more discussions in this regard will be given in the following subsection.
While the proposed approximation simplifies significantly the objective function, the optimization problem (15) remains hard when the search space is large. For instance, with 64-QAM and 4 × 4 MIMO, the number of points in X^{n_t} is more than 10^7! Therefore, we need further simplification by exploiting the structure of the problem.
B. The Self-Interference Whitening Algorithm
As mentioned above, the difficulty of the optimization (15) is mainly due to the presence of the matrix W_x that depends on x. Let us first assume that the W_x corresponding to the optimal solution x̂_aML were somehow known, and is denoted by W_x̂. Then the optimization problem (15) would be equivalent to
$$\hat{x}_{\mathrm{aML}}(y, H) = \arg\min_{x \in \mathcal{X}^{n_t}} \gamma\, b^T W_{\hat{x}}^{-1} b + \frac{1}{2}\ln\det(W_{\hat{x}}) \qquad (22)$$
$$= \arg\min_{x \in \mathcal{X}^{n_t}} b^T W_{\hat{x}}^{-1} b \qquad (23)$$
$$= \arg\min_{x \in \mathcal{X}^{n_t}} \big\|W_{\hat{x}}^{-\frac{1}{2}}\,\tilde{y} - W_{\hat{x}}^{-\frac{1}{2}}\,\tilde{H}\tilde{x}\big\|^2, \qquad (24)$$
where W_x̂^{-1/2} is any matrix such that W_x̂^{-H/2} W_x̂^{-1/2} = W_x̂^{-1}; x̃, ỹ, and H̃ are defined by
$$\tilde{x} := \begin{bmatrix} \mathrm{Re}(x) \\ \mathrm{Im}(x) \end{bmatrix}, \qquad \tilde{y} := \begin{bmatrix} \mathrm{Re}(y) \\ \mathrm{Im}(y) \end{bmatrix}, \qquad \tilde{H} := \begin{bmatrix} \mathrm{Re}(H) & -\mathrm{Im}(H) \\ \mathrm{Im}(H) & \mathrm{Re}(H) \end{bmatrix}. \qquad (25)$$
Note that for a given W_x̂, (24) can be solved efficiently with any NND algorithm. Unfortunately, without knowing the optimal solution x̂_aML, the exact W_x̂ cannot be found. Therefore, the idea is to first estimate the matrix W_x̂ with some suboptimal solution x̂, and then solve the optimization problem (24) with a NND. We call this two-step procedure self-interference whitening (SIW).

For instance, we can use the naive ML solution x̂⁰_ML as the initial estimate to obtain W_x̂, and have
$$\hat{x}^0_{\mathrm{aML}}(y, H) = \arg\min_{x \in \mathcal{X}^{n_t}} \big\|W_{\hat{x}^0_{\mathrm{ML}}}^{-\frac{1}{2}}\,\tilde{y} - W_{\hat{x}^0_{\mathrm{ML}}}^{-\frac{1}{2}}\,\tilde{H}\tilde{x}\big\|^2. \qquad (26)$$
Remark 2. The intuition behind the SIW scheme is as follows. From the definition of W_x in (14) and A in (12), we see that W_x depends on x only through HD_x. First, the column space of HD_x does not vary with x since D_x is diagonal. Second, a small perturbation of x does not perturb W_x too much in the Euclidean space. Since the naive ML point x̂⁰_ML is close to the actual point x in the column space of H, it provides an accurate estimate of W_x. This can also be observed on Figure 1c, where we see that the ellipsoid-like dashed lines have similar sizes and orientations for constellation points that are close to each other.
Remark 3. Another possible initial estimate is the naive linear minimum mean square error (LMMSE) solution. As the naive ML, the naive LMMSE ignores the phase noise and returns
$$\hat{x}^0_{\mathrm{LMMSE}}(y, H) := \arg\min_{x \in \mathcal{X}^{n_t}} \big\|H^H(\gamma^{-1}I + HH^H)^{-1}y - x\big\|^2. \qquad (27)$$
It is worth mentioning that in the presence of phase noise the naive LMMSE is not necessarily dominated by the naive ML solution, as will be shown in the numerical experiments of Section V.
The main algorithm of this work is described in Algorithm 1. In the algorithm, the complex function NND(y, H, X) finds among the points from the alphabet X the closest one to y in the column space of H; the function realNND(ỹ, H̃, X̃) is the real counterpart of NND. The function complex(x̃⁰) embeds the real vector x̃⁰ into the complex space by taking the upper half as the real part and the lower half as the imaginary part. It is worth noting that the newly obtained point is accepted only when it has a higher approximate likelihood value than the naive ML point does.

An example of the scalar case is provided in Figure 2 where 256-QAM is used. The transmitted point is x and the received point is y. The solid line is the level set of the likelihood function. If the likelihood function were computed for each point in the constellation, then one would recover x from y successfully. But this would be hard computationally. With the Euclidean detection, x̂ that is closer to y than x is would be found instead, which would cause an erroneous detection. The SIW algorithm can "correct" the error as follows. First, to estimate the unknown matrix W_x, we compute the matrix W_x̂ which is represented by the red dashed ellipse around x̂. We can see that the estimate W_x̂ is very close to the correct value W_x, given by the actual x (blue dashed line). Then, we generate the coordinate system with W_x̂ and search for the closest constellation point to y in this coordinate system. In this example, x can be recovered successfully. More importantly, computationally efficient NND algorithms can be used to perform the search.

Algorithm 1 Self-interference whitening
Input: y, H, γ, Q_θ
  Find x̂⁰_LMMSE from (27)
  Find x̂⁰_ML ← NND(y, H, X)
  if f̂(x̂⁰_LMMSE, y, H, γ, Q_θ) > f̂(x̂⁰_ML, y, H, γ, Q_θ) then
      x̂ ← x̂⁰_LMMSE
  else
      x̂ ← x̂⁰_ML
  end if
  Generate W_x̂ from x̂ using (12) and (14)
  Find W_x̂^{1/2} using the Cholesky decomposition
  Generate ỹ and H̃ according to (25)
  x̃⁰ ← realNND(W_x̂^{-1/2} ỹ, W_x̂^{-1/2} H̃, X̃)
  x̂⁰ ← complex(x̃⁰)
  if f̂(x̂⁰, y, H, γ, Q_θ) > f̂(x̂, y, H, γ, Q_θ) then
      x̂⁰_aML ← x̂⁰
  else
      x̂⁰_aML ← x̂
  end if
Output: x̂⁰_aML
Remark 4. The complexity of the SIW algorithm is essentially twice that of the NND algorithm
used, since the other operations including the LMMSE detection have at most cubic complexity
with respect to the dimension of the channel. The complexity of the NND algorithm depends directly on the conditioning of the given matrix. If the columns are close to orthogonal, then channel
inversion is almost optimal. However, in the worst case, when the matrix is ill-conditioned, the
NND algorithm can be slow and its complexity is exponential in the problem dimension. As mentioned earlier, there exist approximate NND algorithms, e.g., based on lattice reduction, that can achieve near optimal performance with much lower complexity.

Fig. 2: Illustration of the proposed detection in the scalar case. An example with 256-QAM, PN 2°. The dashed lines represent the ellipse defined by the matrix W_x̂ (in red) and W_x (in blue).
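To make the two-step procedure of Algorithm 1 concrete, the following compact NumPy sketch implements SIW. It is our own illustration, not the authors' code: the nearest-neighbour searches are supplied as callables nnd and real_nnd (a sphere or lattice decoder in practice, or brute force for tiny alphabets), and approx_loglik is the f̂ sketch given after Figure 1.

```python
import numpy as np

def siw_detect(y, H, gamma, Q_theta, alphabet, nnd, real_nnd, approx_loglik):
    """Self-interference whitening (a sketch of Algorithm 1).

    nnd(y, H, alphabet)        -> complex vector in alphabet^{n_t} closest to y in col(H)
    real_nnd(yw, Hw, alphabet) -> real 2*n_t vector solving the whitened problem (24)
    """
    nt, nr = H.shape[1], H.shape[0]
    # Initial candidates: naive ML and (component-wise quantized) naive LMMSE of (27).
    x_ml = nnd(y, H, alphabet)
    soft = H.conj().T @ np.linalg.solve(np.eye(nr) / gamma + H @ H.conj().T, y)
    x_lmmse = np.array([alphabet[np.argmin(np.abs(alphabet - s))] for s in soft])
    x_hat = max([x_lmmse, x_ml], key=lambda x: approx_loglik(x, y, H, gamma, Q_theta))

    # Step 1: estimate the self-interference covariance W_{x_hat} from (12), (14).
    HDx = H * x_hat[np.newaxis, :]
    A = np.block([[np.imag(HDx), np.imag(np.diag(y))],
                  [-np.real(HDx), -np.real(np.diag(y))]])
    W = np.eye(2 * nr) + 2.0 * gamma * A @ Q_theta @ A.T
    L = np.linalg.cholesky(W)            # W = L L^T, so W^{-1/2} = L^{-1}

    # Step 2: whiten and re-detect on the real model (25)-(26).
    y_t = np.concatenate([np.real(y), np.imag(y)])
    H_t = np.block([[np.real(H), -np.imag(H)], [np.imag(H), np.real(H)]])
    x_t = real_nnd(np.linalg.solve(L, y_t), np.linalg.solve(L, H_t), alphabet)
    x_new = x_t[:nt] + 1j * x_t[nt:]

    # Accept the new point only if it improves the approximate likelihood.
    if approx_loglik(x_new, y, H, gamma, Q_theta) > approx_loglik(x_hat, y, H, gamma, Q_theta):
        return x_new
    return x_hat
```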
IV. HARDNESS OF ML DECODING
In this section we explore how ML decoding may be implemented. While in most cases
of interest the SIW algorithm gives near-optimal performance, and ML decoding is too costly
computationally, it is useful to simply provide lower bounds on the performance of any decoding
algorithm. We comment on the difficulty of implementing ML decoding, in particular when the
dimensions nr , nt are large. For simplicity, we assume that {Θt,k }k=1,...,nt are i.i.d. N (0, σt2 ) and
{Θr,k }k=1,...,nr are i.i.d. N (0, σr2 ).
A. Hardness of computing the likelihood
To compute the likelihood one needs to compute E_Θ[e^{-γ‖y − H_Θ x‖²}]. For large dimensions, this seems impossible to do in closed form or using numerical integration. However a natural alternative is to use the Monte-Carlo method, using the following estimate:
$$f(x, y, H, \gamma, Q_\theta) \approx \hat{F}_s := \ln\left(\frac{1}{s}\sum_{t=1}^{s} e^{-\gamma\|y - H_{\Theta^{(t)}} x\|^2}\right), \qquad (28)$$
where Θ^{(1)}, ..., Θ^{(s)} are s i.i.d. copies of Θ. Using the delta method [14] and the central limit theorem, the asymptotic variance of this estimator is
$$\mathrm{Var}(\hat{F}_s) \sim \frac{1}{s}\,\frac{\mathrm{Var}\big(e^{-\gamma\|y - H_\Theta x\|^2}\big)}{\mathbb{E}\big[e^{-\gamma\|y - H_\Theta x\|^2}\big]^2}, \qquad s \to \infty. \qquad (29)$$
Let us compute this quantity in the (arguably easiest) case where H is the identity matrix with n_t = n_r = n:
$$\|y - H_\Theta x\|^2 = \sum_{k=1}^{n} \big|y_k - x_k e^{j(\Theta_{r,k} + \Theta_{t,k})}\big|^2. \qquad (30)$$
Define the one-dimensional likelihood
$$g(x, y, \gamma, \sigma) := \mathbb{E}\Big[e^{-\gamma|y - x e^{j(\Theta_r + \Theta_t)}|^2}\Big], \qquad (31)$$
and using independence we calculate the moments:
$$\mathbb{E}\Big[e^{-\gamma\|y - H_\Theta x\|^2}\Big] = \prod_{k=1}^{n} g(x_k, y_k, \gamma, \sigma), \qquad (32)$$
$$\mathrm{Var}\big(e^{-\gamma\|y - H_\Theta x\|^2}\big) = \prod_{k=1}^{n} g(x_k, y_k, 2\gamma, \sigma) - \prod_{k=1}^{n} g(x_k, y_k, \gamma, \sigma)^2. \qquad (33)$$
From (29), the asymptotic variance is hence:
$$\mathrm{Var}(\hat{F}_s) = \frac{1}{s}\left(\prod_{k=1}^{n} v(x_k, y_k, \gamma, \sigma) - 1\right), \qquad (34)$$
where v(x, y, γ, σ) := g(x, y, 2γ, σ)/g(x, y, γ, σ)². It is noted that, unless σ_r² + σ_t² = 0, x = 0, or y = 0, the random variable e^{-γ|y − x e^{j(Θ_r + Θ_t)}|²} cannot be a constant. Thus, we have v(x_k, y_k, γ, σ) > 1 for k = 1, ..., n. As a result, the asymptotic error (34) grows exponentially with the dimension n, so that the Monte-Carlo method is infeasible in high dimensions, since the number of samples must also scale exponentially with n to maintain a constant error.
For instance, consider x_k = y_k = 1 for all k, γ = 40 dB, n_r = n_t = 20, and σ_r = σ_t = 3 degrees. Then v ≈ 7, so that, to obtain an error smaller than 0.1, one would require s ≈ 10^18 samples, which is clearly not feasible in practice.
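The naive Monte-Carlo estimate (28) is easy to write down; the sketch below (our own code, not from the paper) implements it with a log-sum-exp step for numerical stability. Running it with the parameters quoted above makes the variance blow-up easy to reproduce empirically.

```python
import numpy as np

def mc_loglik(x, y, H, gamma, Q_theta, num_samples=10000, rng=None):
    """Monte-Carlo estimate F_hat_s of the log-likelihood, eq. (28).
    Reliable only when the per-sample weights do not vary over many orders of
    magnitude, which fails in high dimensions as argued in the text."""
    rng = np.random.default_rng() if rng is None else rng
    nr, nt = H.shape
    thetas = rng.multivariate_normal(np.zeros(nt + nr), Q_theta, size=num_samples)
    log_w = np.empty(num_samples)
    for t in range(num_samples):
        th_t, th_r = thetas[t, :nt], thetas[t, nt:]
        H_theta = np.exp(1j * th_r)[:, None] * H * np.exp(1j * th_t)[None, :]
        log_w[t] = -gamma * np.linalg.norm(y - H_theta @ x) ** 2
    m = log_w.max()
    return m + np.log(np.mean(np.exp(log_w - m)))   # log-mean-exp of the weights
```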
B. Hardness of maximum likelihood search
Assume that one is able to estimate the value of the likelihood with high accuracy (as seen above this is typically hard), and denote by f̄ this value. We may then consider the following algorithm. Given the received symbol y, and a radius ρ, one first computes the set of points S := {x ∈ X^{n_t} : ‖y − Hx‖² ≤ ρ²}, then one computes the value of f̄ for each of those points and returns the point maximizing f̄. In the large system limit, the following concentration
phenomenon occurs.
Proposition 2. Assume that H has i.i.d. entries with distribution CN(0, 1) and X is chosen uniformly at random from X^{n_t}. Define the radius R² := ‖Y − HX‖². Then we have:
$$\mathbb{E}[R^2] = 2 n_t n_r \Big(1 - e^{-\frac{\sigma_r^2+\sigma_t^2}{2}}\Big) + \gamma^{-1} n_r, \qquad (35)$$
and for any η > 0 we have:
$$P\Big\{(1-\eta)\,\mathbb{E}[R^2] \le R^2 \le (1+\eta)\,\mathbb{E}[R^2]\Big\} \to 1, \qquad n_t, n_r \to \infty. \qquad (36)$$
Note that the above result is general and does not impose that the number of antennas nr , nt
scale at the same speed. We draw two conclusions from this result: (i) Any such algorithm applied with radius (1 + η)√(E[R²]) for any η > 0 is guaranteed to inspect the optimal point with high probability. (ii) Any such algorithm which has a large success probability needs to inspect every point in a sphere of radius $O\Big(\sqrt{n_t n_r \big(1 - \exp\big(-\tfrac{\sigma_r^2+\sigma_t^2}{2}\big)\big)}\Big)$, and therefore typically has a very high complexity.
From the above analysis we see that the second difficulty of the decoding problem for phase
noise channels, even when the likelihood can be computed, lies in the number of points to be
inspected which is exponentially large in nr , nt . The problem is that computing the likelihood at
any given point does not give us any information about the structure of f , and does not help in
maximizing f efficiently.
C. Non-concavity in the high SNR regime
Indeed maximizing f is difficult, and it seems that even performing zero-forcing, i.e., maximizing f over x ∈ Cnt rather than over x ∈ X nt , is difficult since f is non-concave, at least
in the high SNR regime. We first show that, in the high SNR regime, the log-likelihood can be
approximated by a function of the minimal value of kyy − Hθ xk2 , where the minimum is taken
over all possible phases θ .
Lemma 1. For all x, y, H, Q_θ, we have the following high SNR behavior
$$\lim_{\gamma\to\infty} -\frac{1}{\gamma} f(x, y, H, \gamma, Q_\theta) = m(x, y, H) := \min_{\theta \in \mathbb{R}^{n_t+n_r}} \|y - H_\theta x\|^2. \qquad (37)$$
Proof. Consider ε > 0; we have that
$$e^{-\gamma(m(x,y,H)+\varepsilon)}\,\mathbb{1}\big\{\|y - H_\Theta x\|^2 \le m(x,y,H)+\varepsilon\big\} \le e^{-\gamma\|y - H_\Theta x\|^2} \le e^{-\gamma m(x,y,H)}. \qquad (38)$$
Taking expectations and then logarithms:
$$-(m(x,y,H)+\varepsilon) + \frac{1}{\gamma}\ln P\big\{\|y - H_\Theta x\|^2 \le m(x,y,H)+\varepsilon\big\} \le \frac{1}{\gamma} f(x, y, H, \gamma, Q_\theta) \le -m(x,y,H).$$
Since the mapping θ ↦ ‖y − H_θ x‖² is continuous, for any given ε > 0, the probability P{‖y − H_Θ x‖² ≤ m(x, y, H) + ε} is strictly positive. Letting γ → ∞, we have
$$-(m(x,y,H)+\varepsilon) \le \liminf_{\gamma\to\infty} \frac{1}{\gamma} f(x, y, H, \gamma, Q_\theta) \le \limsup_{\gamma\to\infty} \frac{1}{\gamma} f(x, y, H, \gamma, Q_\theta) \le -m(x,y,H).$$
Since the above holds for all ε > 0, we have lim_{γ→∞} −(1/γ) f(x, y, H, γ, Q_θ) = m(x, y, H).
We can now show that in general the log-likelihood is not concave, hence maximizing it is
not straightforward.
Proposition 3. For γ large enough and H ≠ 0, there exists y such that x ↦ f(x, y, H, γ, Q_θ) is a non-concave function.
Proof. Assume that f is concave; then for all x we must have:
$$\frac{1}{2}\Big(f(x, y, H, \gamma, Q_\theta) + f(x^*, y, H, \gamma, Q_\theta)\Big) \le f\Big(\frac{x + x^*}{2}, y, H, \gamma, Q_\theta\Big). \qquad (39)$$
From Lemma 1, the above inequality implies that
$$\frac{1}{2}\Big(m(x, y, H) + m(x^*, y, H)\Big) \ge m\Big(\frac{x + x^*}{2}, y, H\Big). \qquad (40)$$
We shall construct an example to show that the above does not hold in general. Consider any z such that Hz ≠ 0, and let y = Hz and x = j(|z_1|, ..., |z_{n_t}|), so that x* = −x. Then x and z are equal up to a phase transformation, so m(x, y, H) = m(z, y, H) = 0. Similarly m(x*, y, H) = 0. By definition m((x + x*)/2, y, H) = m(0, y, H) = ‖y‖². In this example, (40) would imply 0 ≥ ‖y‖², which is clearly a contradiction since y ≠ 0. Hence f cannot be concave for γ large enough.
The fact that the log-likelihood is in general non-concave gives another important insight: SIW can in fact be seen as a (well-chosen) concave approximation of a non-concave function. To circumvent the problem of non-concavity of the log-likelihood, SIW approximates it by a function which, when W_x is fixed, is concave. As non-concavity appears mainly for high SNRs,
the discrepancy between the performance of ML and SIW (if any) should be more visible in the
high SNR regime, and this will be confirmed by our numerical experiments in the next section.
V. NUMERICAL EXPERIMENTS
In this section, we look at different communication scenarios in which we compare the
proposed scheme to some baseline schemes, including two schemes that ignore the phase noise:
• The naive LMMSE solution given by (27)
• The naive ML solution (8)
and a scheme that takes the phase noise into account:
• Selection between naive LMMSE and ML: the receiver first finds the naive LMMSE and naive ML solutions, then computes the proposed approximate likelihood function and selects the one with higher value.
Note that we focus on the vector detection error rate as our performance metric: the detection is considered successful only when all the symbols in x are recovered correctly; otherwise an error is declared.
A. Simulation-based lower bounds
In order to appreciate the performance of the proposed algorithm, we need to compare it
not only with the existing schemes, but also to the fundamental limit given by ML detection,
which is optimal. Let us recall that the proposed SIW algorithm may suffer from two levels of
suboptimality. First, the approximate likelihood function (13) may be inaccurate in some cases.
Second, even if (13) is accurate, the SIW algorithm is not guaranteed to find the optimal solution
(15). Therefore, in order to identify the source of the potential suboptimality, it would be useful
to compare the performance of the SIW scheme with the performance given by (13) and with
that given by (7).
Unfortunately, as pointed out earlier, finding (13) requires an exhaustive search with complexity
growing as |X |nt . Finding (7) is even harder because of the numerical multi-dimensional integration. As such, we resort to lower bounds on the detection error for (13) and (7), respectively,
which are enough for our purpose of benchmarking. To that end, we write
$$P_e^{\mathrm{aML}} \ge P\Big\{\hat{f}(X, Y, H) < \max_{x \in \mathcal{X}^{n_t}} \hat{f}(x, Y, H)\Big\} \qquad (41)$$
$$\ge P\Big\{\hat{f}(X, Y, H) < \max_{x \in \mathcal{L}} \hat{f}(x, Y, H)\Big\}, \qquad \forall\, \mathcal{L} \subseteq \mathcal{X}^{n_t}, \qquad (42)$$
where the first lower bound is from the definition of the detection criterion, namely, error occurs
if there exists at least one input vector that has a strictly higher approximate likelihood value.
Note that although the second inequality is valid for all L, it becomes equality if L contains all
the points in X nt that have a higher approximate likelihood value than X does. In this work, we
only take a large set around X to obtain the lower bound (42), without any theoretical guarantee
of tightness of (42). Similarly, for ML detection, we have
$$P_e^{\mathrm{ML}} \ge P\Big\{f(X, Y, H) < \max_{x \in \mathcal{X}^{n_t}} f(x, Y, H)\Big\} \qquad (43)$$
$$\ge P\big\{f(X, Y, H) < f(X', Y, H)\big\} \qquad (44)$$
$$= P\big\{X \ne X',\; f(X, Y, H) < f(X', Y, H)\big\}, \qquad \forall\, X' \in \mathcal{X}^{n_t}, \qquad (45)$$
where the first lower bound is from the definition of the ML detection criterion, namely, error occurs if there exists at least one input vector that has a strictly higher likelihood value; the equality (45) holds since X ≠ X' is a consequence of f(X, Y, H) < f(X', Y, H). Note that the second lower bound (44) holds for any vector X' from the alphabet X^{n_t} and with equality when X' is the exact ML solution. Since the ML solution is unknown, we can use any suboptimal solution instead and still obtain a valid lower bound. Now we can see that (45) is much easier to evaluate than (43) is since there is no need to perform the maximization over X^{n_t}. Intuitively, if X' is a near ML solution, then the lower bound should be tight enough. We shall have some
more discussions on this with the upcoming numerical examples. The lower bound (45) can be
obtained by simulation:
1) For a given observation y and channel H, find a suboptimal solution x'.
2) If x' ≠ x, compute f(x, y, H) and f(x', y, H), and count an ML error only when f(x, y, H) < f(x', y, H); otherwise the counter remains unchanged.
With the proposed method, we need to perform twice the numerical integration (e.g., Monte-Carlo integration) only when x' ≠ x. If the latter event happens with small probability, then the average complexity to evaluate (45) is low. In other words, using an x' from a better reference scheme not only makes the lower bound tighter but also makes it easier to evaluate.
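A possible way to wire this counting rule into a simulation loop is sketched below. This is our own code: channel, detect, and loglik are placeholders supplied by the user (for instance the Monte-Carlo routine of Section IV-A standing in for the exact likelihood, and SIW as the reference detector).

```python
import numpy as np

def ml_error_lower_bound(trials, channel, detect, loglik):
    """Simulation-based lower bound on the ML error probability, eqs. (43)-(45).

    channel()        -> (x_true, y, H): one random transmission
    detect(y, H)     -> x_ref: any (suboptimal) reference detector
    loglik(x, y, H)  -> float: (estimate of) the log-likelihood f
    An error is counted only when x_ref != x_true AND f(x_true) < f(x_ref).
    """
    errors = 0
    for _ in range(trials):
        x_true, y, H = channel()
        x_ref = detect(y, H)
        if not np.array_equal(x_ref, x_true):
            if loglik(x_true, y, H) < loglik(x_ref, y, H):
                errors += 1
    return errors / trials
```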
Fig. 3: SISO Rayleigh fading with i.i.d. phase noise. (a) 64-QAM, PN 3°; (b) 256-QAM, PN 2°; (c) 1024-QAM, PN 1°.
B. Scenario 1: Point-to-point SISO channel
The first scenario focuses on the point-to-point Rayleigh fading single-antenna channels, also
known as single-input single-output (SISO) channels. We consider three different modulation
orders (64, 256, and 1024) with correspondingly three values of phase noise strength (3◦ , 2◦ ,
and 1◦ ) in terms of the standard deviation at both the transmitter and receiver sides. The idea is
to assess the performance of the proposed algorithm in different phase noise limited regimes. In
the SISO case, we compare the proposed scheme with the naive ML scheme which consists in a
simple threshold detection for the real and imaginary parts. Several remarks are in order. First,
we see that ignoring the existence of phase noise incurs a significant performance loss. Second,
if exhaustive search is done with the proposed likelihood approximation, then it achieves the ML
performance. This can be seen from the fact that the proposed simulation-based lower bound
overlaps with the curve with exhaustive search. This confirms the accuracy of the closed-form
approximation (13) at least in the SISO case. It is worth mentioning that the exact likelihood
in this case can also be derived via the Tikhonov distribution as shown in [5], [15] where the
analytic expression involving the Bessel function has been provided. Finally, more remarkably,
the SIW algorithm almost achieves the ML performance without exhaustive search.
C. Scenario 2: Point-to-point LoS-MIMO channel
The second scenario is the point-to-point line-of-sight (LoS) MIMO system commonly deployed as microwave backhaul links [16], [9], [17]. We assume that the channel is constant over
time but each antenna is driven by its own oscillator. This is the worst-case assumption but
also often motivated by the fact that the communication distance is large and thus the distance
between antenna elements is increased accordingly to make sure that the channel matrix is well conditioned [16], [17].

Fig. 4: 4 × 4 LoS MIMO. Each antenna has i.i.d. phase noise of standard deviation 1°. (a) 0.33 Opt. distance, 64-QAM; (b) 0.7 Opt. distance, 256-QAM; (c) Opt. distance, 1024-QAM.

Here, we adopt the model with two transmit and two receive antennas
each one with dual polarizations. This is effectively a 4×4 MIMO channel. The optimal distance
between the antenna elements at each side can be derived as a function of the communication
distance [16]. However, it may not always be possible to install the antennas with the optimal
spacing due to practical constraints. The condition number of the channel matrix increases when
the antenna spacing decreases away from the optimal distance. In Figure 4, we consider three
configurations with distances 0.33, 0.7, and 1 of the optimal value, generated using the model
from [17]. Accordingly, we use 64-, 256-, and 1024-QAM. For simplicity, we do not consider any
precoding although it may further improve the performance as shown in [17]. We assume that
the phase noises are i.i.d. with standard deviation of 1◦ . We make the following observations.
First, as in the SISO case, phase noise mitigation substantially improves performance. Also,
the proposed likelihood approximation remains accurate as shown by the comparison between
the exhaustive search (15) and the lower bound on ML detection. Further, the proposed SIW
algorithm follows closely the exhaustive search curve and hence achieves near ML performance.
Finally, although in the considered scenario the naive LMMSE is outperformed by the naive ML
scheme, the selection between them can provide a non-negligible gain as shown in Figure 4b.
D. Scenario 3: Uplink SIMO channel with centralized receiver oscillator
The third scenario is the uplink cellular communication channel with four single-antenna
users and one multi-antenna base station receiver. It is assumed that the phase noises at the
users’ side are i.i.d., whereas there is no phase noise at the receiver side. This is a reasonable
assumption since the oscillators at the base station are usually of higher quality than those used by mobile devices.

Fig. 5: Uplink SIMO channel. Four single-antenna users with i.i.d. phase noise, multi-antenna receiver without phase noise. (a) 4 × 4, 64-QAM, PN 4°; (b) 4 × 4, 256-QAM, PN 2°; (c) 4 × 10, 256-QAM, PN 2°.

We assume i.i.d. Rayleigh fading in this scenario where three configurations
are considered, as shown in Figure 5. Unlike in the previous scenarios, the naive ML is dominated
by the naive LMMSE at high SNR. This somewhat counter-intuitive observation can be explained
as follows. Without receiver phase noise, the channel can be inverted and we obtain a spatial
parallel channel. Although channel inversion incurs some power loss when the channel is not
orthogonal, each of the resulting parallel subchannels sees an independent phase noise. Therefore,
the demodulation only suffers from a scalar self-interference. On the other hand, with naive ML
the receiver tries to find the closest vector in the image space of H to the received vector y .
Since the linear map H mixes the perturbation of the different transmit phase noises, the naive
ML detection, ignoring the presence of phase noise, suffers from the aggregated perturbation
from all the phase noises. That is why the naive LMMSE can be better than the naive ML
scheme in the high SNR regime where phase noise dominates the additive noise. In the case
when both the transmitter and the receiver have comparable phase noises, such a phenomenon
is rarely observed since the channel inversion also increases the perturbation with the presence
of receiver phase noises.
From Figure 5, we remark that as before the proposed SIW scheme is superior to the other
schemes. With a relatively strong phase noise of 4◦ , the error rate of SIW is 3 to 4 times lower
than that of the naive schemes and is at most twice that of the ML lower bound. With a smaller
phase noise of 2◦ , the SIW scheme can support 256-QAM with four receive antennas, achieving
an error rate 5 times lower than that of the naive schemes. In Figure 5c, we increase the number
of receive antennas to 10, we see that the gap between the naive schemes is decreased due to the increased orthogonality of the channel. Nevertheless, the gap between the naive schemes and the proposed scheme does not decrease since the orthogonality between the users does not reduce the impact of the phase noise from the transmitter side. We could expect the same observation even with massive MIMO. Nevertheless, with massive MIMO uplink, the receiver phase noise can be mitigated substantially due to the asymptotic orthogonality [18].

Fig. 6: Uniform phase noise at both transmitter and receiver, Rayleigh fading.
VI. FURTHER DISCUSSION AND EXPERIMENTS
A. Robustness to the phase noise distribution
One of the main assumptions of our work is that the phase noise follows a Gaussian distribution. Indeed, our derivation of the closed-form approximation (13) depends on this assumption.
We have seen that this approximation is very accurate in various practical scenarios with Gaussian
phase noise. In practice, however, phase noises may not be Gaussian, which leads to the following
natural question on the robustness: Does the proposed algorithm still work when the phase noise
is not Gaussian? The answer turns out to be positive when we let the phase noise be uniformly
distributed. In Figure 6, we consider three previous configurations (shown in Figure 3c, 5a, and
5b, respectively) but with uniform phase noises. From the results, we see that the proposed
algorithm works well as in the Gaussian phase noise, especially when the phase noise is small.
In fact, we believe that the phase noise distribution with certain regularity (e.g., continuous and
bounded density) should not have great impact on the performance of the proposed algorithm when the phase noise is not too strong.

Fig. 7: Impact of iterations.
B. Further improvement with more iterations
Although the proposed scheme achieves near ML performance in many cases, there is still
room for improvement in some situations. As shown in Figure 5 for the 4 × 4 channel with 64-QAM, the error floor of the proposed scheme is three times higher than that of the exhaustive
search. In order to reduce the error floor, we propose to introduce a list of candidate solutions
by iteration. We recall that in the SIW scheme we start from the best solution between naive ZF
and naive ML, and we use this solution as a starting point to estimate the matrix W x followed
by a nearest neighbor detection. The SIW algorithm replaces the starting point with the newly
found one if the latter has a higher approximated likelihood. We can extend the SIW algorithm
with more iterations. Specifically, we can fix a maximum number of iterations. As long as the
newly found point is not inside the list, it should be added to the list and we continue to iterate.
The procedure stops either when we hit the maximum number of iterations or we find a point
already inside the list. At the end, we select the point with the highest likelihood value from the
list.
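One possible rendering of this list-based extension is sketched below; it simply re-applies a single SIW pass (the siw_detect sketch given in Section III, or any equivalent routine) from each newly found point until a point repeats or a fixed iteration budget is reached. The function names and the assumed "start" keyword of siw_step are ours.

```python
def siw_iterative(y, H, gamma, Q_theta, siw_step, approx_loglik, max_iters=4):
    """List-based SIW: iterate the whitening/re-detection step, collect distinct
    candidates, and return the one with the highest approximate likelihood.
    siw_step(y, H, gamma, Q_theta, start=x) is assumed to run one SIW pass,
    using its default initialization when start is None."""
    candidates = []
    x = None
    for _ in range(max_iters):
        x = siw_step(y, H, gamma, Q_theta, start=x)
        if any((x == c).all() for c in candidates):   # stop when a point repeats
            break
        candidates.append(x)
    return max(candidates, key=lambda c: approx_loglik(c, y, H, gamma, Q_theta))
```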
C. Potential issues in the very high SNR regime
At very high SNR, the proposed approximation (13) may become less accurate and the performance loss increases with SNR. It can be seen from (6) that the approximation error of ‖y − H_Θ x‖ has greater impact as γ grows. Numerical experiments show that such performance loss may
be apparent for SNR higher than 40dB depending on the constellation size. A straightforward
workaround is to impose a ceiling value of the SNR γmax for the decoder. In other words, for
SNR higher than the threshold γmax , we let the decoder work as if it were at γmax . Intuitively,
the probability of error cannot be larger than the one at γmax , since we are feeding less noisy
observations to the decoder than what they are supposed to be. With the decoding function, the
observation space can be partitioned into |X |nt regions, each one of which corresponds to a
vector in X nt . The probability of decoding error is the probability that the observation is outside
of the decoding region corresponding to the actual input. With a smaller variance of AWGN, the
observation has a higher probability to be inside the region, hence a lower probability of error.
Nevertheless, the formal proof of this argument is not trivial, and is outside of the scope of the
current paper.4
Another issue at very high SNR is that the likelihood may be too small as compared to the
finite numerical precision. Therefore, it becomes impossible to obtain the simulation-based lower
bound in a reliable way. Furthermore, the number of Monte-Carlo samples required to reach any
given accuracy grows with the SNR.
D. On the practical validity of the adopted discrete-time model
The discrete-time channel model (1) that we adopt in this work is a simplification of the
waveform phase noise channel. Indeed, the discrete-time output sequence is obtained from
filtering followed by sampling in most communication systems. Filtering a waveform corrupted
by phase noise results in not only phase perturbation but also amplitude variation [19]. In the
following, we shall show that the amplitude variation is negligible in the practical regime of
interest. For simplicity, we focus on the single-antenna case with a rectangular filter (i.e., an
integrator). We adhere to the commonly accepted Wiener model for the continuous-time phase
⁴We can always add an artificial noise to the observation in order to reduce the SNR to γ_max before the detection. In this way we have an error floor for SNR beyond γ_max.
noise process {Θ(t)}. In particular, Θ(t) ∼ N (0, βt). The equivalent filtered channel gain for
the k th symbol interval of duration T is
$$\frac{1}{T}\int_{kT}^{(k+1)T} e^{j\Theta(t)}\,dt \;\overset{d}{=}\; e^{j\Theta(kT)}\,\frac{1}{T}\int_{0}^{T} e^{j\Theta(t)}\,dt \qquad (46)$$
from the property of a Wiener process, where $\overset{d}{=}$ means equality in distribution. Assuming that we can somehow track the past state Θ(kT) perfectly, we now focus on the following random variable due to the residual phase noise corresponding to the innovation part:
$$B(\beta, T) := \frac{1}{T}\int_{0}^{T} e^{j\Theta(t)}\,dt \qquad (47)$$
$$\overset{d}{=} \frac{1}{\beta T}\int_{0}^{\beta T} e^{j\tilde{\Theta}(t)}\,dt \qquad (48)$$
$$\overset{d}{=} B(1, \beta T), \qquad (49)$$
where (48) follows from Θ(t) being equal in distribution to Θ̃(βt) for some normalized Wiener process {Θ̃(t)} with Θ̃(t) ∼ N(0, t). We notice that the random variable B(β, T) depends on the parameters β
and T only through the product βT . The distribution of B(β, T ) has been characterized both
approximately [19] and exactly [20]. In particular, an approximate moment-generating function
has been derived in [19] for small phase noise. We can use the results therein to obtain the
following characterization.
Proposition 4. Let us consider the polar representation B(β, T) = G e^{jΦ} with G ≥ 0 and Φ ∈ [−π, π). When S := βT is small, we have
$$\mathrm{Var}(\Phi) \approx \frac{S}{3}, \qquad \mathrm{Var}(G) \approx \frac{S^2}{180}, \qquad (50)$$
and thus,
$$\mathrm{Var}(G) \approx \frac{1}{20}\big(\mathrm{Var}(\Phi)\big)^2. \qquad (51)$$
In Figure 8, we compare the approximation given by (51) to the correct value obtained
numerically. The approximation (51) is surprisingly accurate even for a standard deviation of
20◦ for the phase. More importantly, such results show that for a small phase perturbation up
to 5 degrees – this is the regime of interest in the present work – the amplitude variation is
less than −55 dB. Therefore, the interference caused by the amplitude variation is dominated
by the AWGN and can be treated as noise without any performance loss. In other words, the
discrete-time model adopted in this work – ignoring the amplitude variation – is indeed valid
for phase noise with moderate variance.
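The variance relations (50)-(51) can also be checked by simulation; the snippet below (our own, using a simple Euler discretisation of the Wiener phase noise) estimates Var(G) and Var(Φ) of the filtered gain B(1, S) and can be compared against the paper's S²/180 and S/3.

```python
import numpy as np

def filtered_gain_variances(S, n_steps=2000, n_trials=20000, rng=None):
    """Monte-Carlo estimate of Var(G) and Var(Phi) for B(1, S) in (47)-(49)."""
    rng = np.random.default_rng() if rng is None else rng
    dt = S / n_steps
    # Wiener phase noise paths: Theta(t) ~ N(0, t) for t in [0, S]
    increments = rng.standard_normal((n_trials, n_steps)) * np.sqrt(dt)
    theta = np.cumsum(increments, axis=1)
    B = np.mean(np.exp(1j * theta), axis=1)   # approximates (1/S) * int_0^S e^{j Theta(t)} dt
    G, Phi = np.abs(B), np.angle(B)
    # Compare the returned values with the paper's approximations S**2/180 and S/3.
    return G.var(), Phi.var()
```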
Fig. 8: Amplitude vs phase perturbation for filtered channel gain.
E. Constellation design
The purpose of the current work is to design a detector that takes into account the phase
perturbation for existing systems in which typical QAM signaling is used. As we observe from the
numerical experiments, in some of the scenarios, even the lower bound on the ML detection error
exhibits an error floor. The error floor is however not a fundamental limit of either the detector or
the channel. With a carefully designed signaling scheme, the probability of ML detection error
can be arbitrarily small with an increasing SNR. For instance, if we use amplitude modulation,
then when the SNR grows, the amplitude ambiguity decreases and the detection error vanishes.
The cost of such an extreme scheme is a reduced spectral efficiency. Algorithms to design a
constellation based on the statistics of both the additive and phase noises have been proposed in
the literature (see [5], [21] and the references therein). To the best of the authors’ knowledge,
only SISO has been considered so far. In the MIMO case, such constellations can still be used
and should provide improvements over QAM. The difficulty with non-QAM constellation lies
in the MIMO detection part, since efficient NND algorithms cannot be applied directly. The
constellation design problem for MIMO phase noise channels is a challenging and interesting
problem in its own right, which is however out of the scope of the current work.
VII. CONCLUSIONS
In this work, we have studied the ML detection problem for uncoded MIMO phase noise
channels. We have proposed an approximation of the likelihood function that has been shown
to be accurate in the regimes of practical interest. More importantly, thanks to the geometric
interpretation of the approximate likelihood function, we have designed a simple algorithm that
can solve approximately the optimization problem with only two nearest neighbor detections.
Numerical experiments show that the proposed algorithm can greatly mitigate the impact of
phase noises in different communication scenarios.
REFERENCES
[1] E. Agrell, T. Eriksson, A. Vardy, and K. Zeger, “Closest point search in lattices,” IEEE Trans. Inf. Theory, vol. 48, no. 8,
pp. 2201–2214, Aug. 2002.
[2] E. Viterbo and J. Boutros, “A universal lattice code decoder for fading channel,” IEEE Trans. Inf. Theory, vol. 45, no. 10,
pp. 1639–1642, Jul. 1999.
[3] B. Hassibi and H. Vikalo, “On the sphere-decoding algorithm I. Expected complexity,” IEEE Trans. Signal Process.,
vol. 53, no. 8, pp. 2806– 2818, Aug. 2005.
[4] J. Jaldén and P. Elia, “DMT optimality of LR-aided linear decoders for a general class of channels, lattice designs, and
system models,” IEEE Trans. Inf. Theory, vol. 56, no. 10, pp. 4765–4780, Oct. 2010.
[5] G. J. Foschini, R. D. Gitlin, and S. B. Weinstein, “On the selection of a two-dimensional signal constellation in the presence
of phase jitter and gaussian noise,” Bell Labs Technical Journal, vol. 52, no. 6, pp. 927–965, 1973.
[6] A. Hajimiri and T. H. Lee, “A general theory of phase noise in electrical oscillators,” IEEE Journal of Solid-State Circuits,
vol. 33, no. 2, pp. 179–194, Feb. 1998.
[7] H. Ghozlan and G. Kramer, “Models and information rates for wiener phase noise channels,” IEEE Trans. Inf. Theory,
vol. 63, no. 4, pp. 2376–2393, Apr. 2017.
[8] A. Demir, A. Mehrotra, and J. Roychowdhury, “Phase noise in oscillators: A unifying theory and numerical methods for
characterization,” IEEE Transactions on Circuits and Systems I: Fundamental Theory and Applications, vol. 47, no. 5, pp.
655–674, May 2000.
[9] G. Durisi, A. Tarable, C. Camarda, R. Devassy, and G. Montorsi, “Capacity bounds for MIMO microwave backhaul links
affected by phase noise,” IEEE Trans. Commun., vol. 62, no. 3, pp. 920–929, Mar. 2014.
[10] S. Yang and S. Shamai (Shitz), “On the multiplexing gain of discrete-time MIMO phase noise channels,” IEEE Trans. Inf.
Theory, vol. 63, no. 4, pp. 2394–2408, Apr. 2017.
[11] R. Krishnan, G. Colavolpe, A. G. i Amat, and T. Eriksson, “Algorithms for joint phase estimation and decoding for MIMO
systems in the presence of phase noise and quasi-static fading channels,” IEEE Trans. Signal Process., vol. 63, no. 13, pp.
3360–3375, July 2015.
[12] T. Datta and S. Yang, “Improving MIMO detection performance in presence of phase noise using norm difference criterion,”
in 53rd Annual Allerton Conference on Communication, Control, and Computing (Allerton), Sep. 2015, pp. 286–292.
[13] H. Mehrpouyan, A. A. Nasir, S. D. Blostein, T. Eriksson, G. K. Karagiannidis, and T. Svensson, “Joint estimation of
channel and oscillator phase noise in MIMO systems,” IEEE Trans. Signal Process., vol. 60, no. 9, pp. 4790–4807, Sep.
2012.
[14] G. W. Oehlert, “A note on the delta method,” The American Statistician, vol. 46, no. 1, pp. 27–29, 1992.
[15] G. Colavolpe, A. Barbieri, and G. Caire, “Algorithms for iterative decoding in the presence of strong phase noise,” IEEE
J. Select. Areas Commun., vol. 23, no. 9, pp. 1748–1757, Sep. 2005.
[16] F. Bøhagen, P. Orten, and G. E. Øien, “Design of optimal high-rank line-of-sight MIMO channels,” IEEE Trans. Wireless
Commun., vol. 6, no. 4, pp. 1420–1425, Apr. 2007.
[17] P. Ferrand and S. Yang, “Blind precoding in line-of-sight MIMO channels,” in IEEE SPAWC, Edinburgh, UK, 2016.
[18] E. Björnson, J. Hoydis, M. Kountouris, and M. Debbah, “Massive MIMO systems with non-ideal hardware: Energy
efficiency, estimation, and capacity limits,” IEEE Trans. Inf. Theory, vol. 60, no. 11, pp. 7112–7139, Nov. 2014.
[19] G. J. Foschini and G. Vannucci, “Characterizing filtered light waves corrupted by phase noise,” IEEE Trans. Inf. Theory,
vol. 34, no. 6, pp. 1437–1448, Jun. 1988.
[20] Y. Wang, Y. Zhou, D. K. Maslen, and G. S. Chirikjian, “Solving phase-noise Fokker-Planck equations using the motion-group Fourier transform,” IEEE Trans. Commun., vol. 54, no. 5, pp. 868–877, May 2006.
[21] R. Krishnan, A. G. i Amat, T. Eriksson, and G. Colavolpe, “Constellation optimization in the presence of strong phase
noise,” IEEE Trans. Commun., vol. 61, no. 12, pp. 5056–5066, Dec. 2013.
APPENDIX
A. Proof of Proposition 2
Define the vector V := (Λ_R H Λ_T − H)X so that R² = ‖V + Z‖². We have:
$$R^2 = \|V\|^2 + \|Z\|^2 + Z^H V + V^H Z, \qquad (52)$$
$$R^4 = (\|V\|^2 + \|Z\|^2)^2 + (Z^H V + V^H Z)^2 + 2(\|V\|^2 + \|Z\|^2)(Z^H V + V^H Z). \qquad (53)$$
Using the fact that V and Z are independent and Z has i.i.d. CN(0, γ^{-1}) entries, we verify that
$$\mathbb{E}[R^2] = \mathbb{E}[\|V\|^2] + \mathbb{E}[\|Z\|^2], \qquad (54)$$
$$\mathbb{E}[R^4] = \mathbb{E}[\|V\|^4] + \mathbb{E}[\|Z\|^4] + 2\,\mathbb{E}[\|Z\|^2]\,\mathbb{E}[\|V\|^2] + 2\gamma^{-1}\mathbb{E}[\|V\|^2], \qquad (55)$$
$$\mathrm{Var}(R^2) = \mathrm{Var}(\|V\|^2) + \mathrm{Var}(\|Z\|^2) + 2\gamma^{-1}\mathbb{E}[\|V\|^2]. \qquad (56)$$
Since E[‖Z‖²] = n_r γ^{-1} and Var(‖Z‖²) = 2 n_r γ^{-2}, it follows that
$$\mathbb{E}[R^2] = \mathbb{E}[\|V\|^2] + \gamma^{-1} n_r, \qquad (57)$$
$$\mathrm{Var}(R^2) = \mathrm{Var}(\|V\|^2) + 2\gamma^{-2} n_r + 2\gamma^{-1}\mathbb{E}[\|V\|^2]. \qquad (58)$$
We now calculate the moments of ‖V‖².
Expectation over H: Let us define A_{k,l} := X_l (e^{j(Θ_{t,l}+Θ_{r,k})} − 1) and write, conditional on A,
$$\|V\|^2 = \sum_{k=1}^{n_r}\Big|\sum_{l=1}^{n_t} H_{k,l}\, A_{k,l}\Big|^2 \;\overset{d}{=}\; \sum_{k=1}^{n_r} E_k, \qquad (59)$$
where E_k ∼ Exp(1/‖A_k‖²) with A_k := [A_{k,l}]_{l=1,...,n_t}, k = 1, ..., n_r; we used the fact that H has i.i.d. CN(0, 1) entries. Then we can calculate the first and second moments of ‖V‖² for a given A = [A_{k,l}]_{k,l}:
$$\mathbb{E}_{H|A}[\|V\|^2] = \sum_{k=1}^{n_r}\mathbb{E}_{H|A}[E_k] = \sum_{k=1}^{n_r}\sum_{l=1}^{n_t}|A_{k,l}|^2, \qquad (60)$$
$$\mathbb{E}_{H|A}[\|V\|^4] = \mathbb{E}_{H|A}\Big[\sum_{k=1}^{n_r} E_k^2\Big] + \mathbb{E}_{H|A}\Big[\sum_{k=1}^{n_r}\sum_{\substack{k'=1\\k'\ne k}}^{n_r} E_k E_{k'}\Big] \qquad (61)$$
$$= \sum_{k=1}^{n_r} 2\|A_k\|^4 + \sum_{k=1}^{n_r}\sum_{\substack{k'=1\\k'\ne k}}^{n_r} \|A_k\|^2\|A_{k'}\|^2 \qquad (62)$$
$$= \sum_{k'=1}^{n_r}\sum_{k=1}^{n_r} \big(1 + \mathbb{1}\{k = k'\}\big)\,\|A_k\|^2\|A_{k'}\|^2. \qquad (63)$$
Moments of A: We recall that |A_{k,l}|² = 2|X_l|²(1 − cos(Θ_{r,k} + Θ_{t,l})). We have that E[|X_l|²] = 1 and define E[|X_l|⁴] = P̄², for l = 1, ..., n_t. Using the independence and the identity E[cos(Θ)] = exp(−Var(Θ)/2) for zero-mean Gaussian Θ, we obtain
$$\mathbb{E}[|A_{k,l}|^2] = 2\,\mathbb{E}[|X_l|^2]\,\mathbb{E}[1 - \cos(\Theta_{t,l}+\Theta_{r,k})] = 2\Big(1 - e^{-\frac{\sigma_r^2+\sigma_t^2}{2}}\Big). \qquad (64)$$
We now calculate the correlation between the entries of A:
$$\mathbb{E}\big[|A_{k,l}|^2 |A_{k',l'}|^2\big] = 4\,\mathbb{E}\big[|X_l|^2 |X_{l'}|^2\big]\,\rho_{k,k',l,l'} \qquad (65)$$
with
$$\rho_{k,k',l,l'} := \mathbb{E}\big[(1 - \cos(\Theta_{t,l}+\Theta_{r,k}))(1 - \cos(\Theta_{t,l'}+\Theta_{r,k'}))\big] \qquad (66)$$
$$= 1 - 2 e^{-\frac{\sigma_r^2+\sigma_t^2}{2}} + e^{-\sigma_r^2-\sigma_t^2}\cosh\big(\sigma_r^2\,\mathbb{1}\{k = k'\} + \sigma_t^2\,\mathbb{1}\{l = l'\}\big), \qquad (67)$$
where to obtain the last equality we use the trigonometric identities and again apply the identity E[cos(Θ)] = exp(−Var(Θ)/2); we recall that cosh(x) = (e^x + e^{-x})/2.
Moments of ‖V‖²: From (60) and (64), we have the first moment
$$\mathbb{E}[\|V\|^2] = \sum_{k=1}^{n_r}\sum_{l=1}^{n_t}\mathbb{E}[|A_{k,l}|^2] = 2 n_t n_r\Big(1 - e^{-\frac{\sigma_r^2+\sigma_t^2}{2}}\Big). \qquad (68)$$
For the variance, we apply (63) and (68):
$$\mathrm{Var}(\|V\|^2) = \mathbb{E}[\|V\|^4] - \big(\mathbb{E}[\|V\|^2]\big)^2 \qquad (69)$$
$$= \sum_{k'=1}^{n_r}\sum_{k=1}^{n_r}\sum_{l'=1}^{n_t}\sum_{l=1}^{n_t}\Big(\mathbb{E}\big[|A_{k,l}|^2|A_{k',l'}|^2\big]\big(1 + \mathbb{1}\{k=k'\}\big) - \mathbb{E}\big[|A_{k,l}|^2\big]\,\mathbb{E}\big[|A_{k',l'}|^2\big]\Big). \qquad (70)$$
Noting that E[|A_{k,l}|²|A_{k',l'}|²] = E[|A_{k,l}|²] E[|A_{k',l'}|²] if k ≠ k' and l ≠ l', we obtain the variance
$$\mathrm{Var}(\|V\|^2) = 4 n_t n_r\big(w_1 + w_2(n_t - 1) + w_3(n_r - 1)\big), \qquad (71)$$
where w_1, w_2, w_3 > 0 do not depend on n_t, n_r and correspond to the cases (k = k', l = l'), (k = k', l ≠ l'), (k ≠ k', l = l'), respectively, in the summation (70):
$$w_1 := 2\bar{P}^2\big(1 - 2e^{-\sigma^2} + e^{-2\sigma^2}\cosh(2\sigma^2)\big) - \big(1 - e^{-\sigma^2}\big)^2, \qquad (72)$$
$$w_2 := 2\big(1 - 2e^{-\sigma^2} + e^{-2\sigma^2}\cosh(\sigma_r^2)\big) - \big(1 - e^{-\sigma^2}\big)^2, \qquad (73)$$
$$w_3 := \bar{P}^2\big(1 - 2e^{-\sigma^2} + e^{-2\sigma^2}\cosh(\sigma_t^2)\big) - \big(1 - e^{-\sigma^2}\big)^2, \qquad (74)$$
where we define σ² := (σ_t² + σ_r²)/2.
Putting it together: From (54) and (68), we have
$$\mathbb{E}[R^2] = \mathbb{E}[\|V\|^2] + \gamma^{-1} n_r = 2 n_t n_r\Big(1 - e^{-\frac{\sigma_r^2+\sigma_t^2}{2}}\Big) + \gamma^{-1} n_r, \qquad (75)$$
which yields the first result. Note that E[R²] ≥ c₁ n_t n_r for some constant c₁ > 0 with respect to (n_t, n_r). From (56) and (71), we have proven that there exists a constant c₂ > 0 such that
$$\mathrm{Var}(R^2) = \mathrm{Var}(\|V\|^2) + 2\gamma^{-2} n_r + 2\gamma^{-1}\mathbb{E}[\|V\|^2] \le c_2\, n_r n_t (n_t + n_r). \qquad (76)$$
Hence
$$\frac{\mathrm{Var}(R^2)}{\big(\mathbb{E}[R^2]\big)^2} \le \frac{c_2}{c_1^2}\Big(\frac{1}{n_r} + \frac{1}{n_t}\Big) \to 0, \qquad n_t, n_r \to \infty, \qquad (77)$$
and applying Chebyshev's inequality yields the second result:
$$P\Big\{(1-\eta)\,\mathbb{E}[R^2] \le R^2 \le (1+\eta)\,\mathbb{E}[R^2]\Big\} \to 1, \qquad n_t, n_r \to \infty. \qquad (78)$$
B. Proof of Proposition 4
From [19, eq.6]$^5$, we can derive the moment-generating function (MGF) of $(G, \Phi)$:

$M_{G,\Phi}(\xi, \eta) = e^{\xi}\,\mathrm{sinhc}^{-\frac{1}{2}}\big(\sqrt{S\xi}\big)\cdot \exp\Big[\tfrac{S\eta^2}{8}\Big(\mathrm{cothc}\big(\sqrt{\tfrac{S\xi}{4}}\big) - \big(\tfrac{S\xi}{4}\big)^{-1} + \mathrm{tanhc}\big(\sqrt{\tfrac{S\xi}{4}}\big)\Big)\Big],$    (79)

where we define $\mathrm{sinhc}(x) := \frac{\sinh(x)}{x}$, $\mathrm{cothc}(x) := \frac{\coth(x)}{x}$, $\mathrm{tanhc}(x) := \frac{\tanh(x)}{x}$, and recall that $S := \beta T$. It follows that the MGFs of $G$ and $\Phi$ are

$M_G(\xi) = M_{G,\Phi}(\xi, 0) = e^{\xi}\,\mathrm{sinhc}^{-\frac{1}{2}}\big(\sqrt{S\xi}\big),$    (80)
$M_{\Phi}(\eta) = M_{G,\Phi}(0, \eta) = \exp\big(\tfrac{1}{6} S\eta^2\big),$    (81)

where we used the fact that, when $x \to 0$,

$\mathrm{sinhc}(x) = 1 + O(x^2), \quad \mathrm{tanhc}(x) = 1 + O(x^2), \quad \mathrm{cothc}(x) - \tfrac{1}{x^2} = \tfrac{1}{3} + O(x^2).$    (82)

After finding the first and second derivatives of both MGFs (80) and (81), with some elementary manipulations we obtain the desired variances

$\mathrm{Var}(G) = E\big[G^2\big] - \big(E[G]\big)^2 = M_G''(0) - \big(M_G'(0)\big)^2 = \tfrac{S^2}{180},$    (83)
$\mathrm{Var}(\Phi) = E\big[\Phi^2\big] - \big(E[\Phi]\big)^2 = M_{\Phi}''(0) - \big(M_{\Phi}'(0)\big)^2 = \tfrac{S}{3}.$    (84)
Note that we use approximate equality in (50) and (51) since the MGF derived in [19] is itself an approximation under the assumption of small $S$.

$^5$ Note that the random variable $B$ in (47) differs from the one in [19, eq.1] by a normalization factor $T$. The MGF has been scaled accordingly.
| 7 |
arXiv:1504.00941v2 [] 7 Apr 2015
A Simple Way to Initialize Recurrent Networks of
Rectified Linear Units
Quoc V. Le, Navdeep Jaitly, Geoffrey E. Hinton
Google
Abstract
Learning long term dependencies in recurrent networks is difficult due to vanishing and exploding gradients. To overcome this difficulty, researchers have developed sophisticated optimization techniques and network architectures. In this
paper, we propose a simpler solution that uses recurrent neural networks composed
of rectified linear units. Key to our solution is the use of the identity matrix or its
scaled version to initialize the recurrent weight matrix. We find that our solution is
comparable to a standard implementation of LSTMs on our four benchmarks: two
toy problems involving long-range temporal structures, a large language modeling
problem and a benchmark speech recognition problem.
1 Introduction
Recurrent neural networks (RNNs) are very powerful dynamical systems and they are the natural
way of using neural networks to map an input sequence to an output sequence, as in speech recognition and machine translation, or to predict the next term in a sequence, as in language modeling.
However, training RNNs by using back-propagation through time [30] to compute error-derivatives
can be difficult. Early attempts suffered from vanishing and exploding gradients [15] and this meant
that they had great difficulty learning long-term dependencies. Many different methods have been
proposed for overcoming this difficulty.
A method that has produced some impressive results [23, 24] is to abandon stochastic gradient
descent in favor of a much more sophisticated Hessian-Free (HF) optimization method. HF operates
on large mini-batches and is able to detect promising directions in the weight-space that have very
small gradients but even smaller curvature. Subsequent work, however, suggested that similar results
could be achieved by using stochastic gradient descent with momentum provided the weights were
initialized carefully [34] and large gradients were clipped [28]. Further developments of the HF
approach look promising [35, 25] but are much harder to implement than popular simple methods
such as stochastic gradient descent with momentum [34] or adaptive learning rates for each weight
that depend on the history of its gradients [5, 14].
The most successful technique to date is the Long Short Term Memory (LSTM) Recurrent Neural
Network which uses stochastic gradient descent, but changes the hidden units in such a way that
the backpropagated gradients are much better behaved [16]. LSTM replaces logistic or tanh hidden
units with “memory cells” that can store an analog value. Each memory cell has its own input and
output gates that control when inputs are allowed to add to the stored analog value and when this
value is allowed to influence the output. These gates are logistic units with their own learned weights
on connections coming from the input and also the memory cells at the previous time-step. There is
also a forget gate with learned weights that controls the rate at which the analog value stored in the
memory cell decays. For periods when the input and output gates are off and the forget gate is not
causing decay, a memory cell simply holds its value over time so the gradient of the error w.r.t. its
stored value stays constant when backpropagated over those periods.
The first major success of LSTMs was for the task of unconstrained handwriting recognition [12].
Since then, they have achieved impressive results on many other tasks including speech recognition [13, 10], handwriting generation [8], sequence to sequence mapping [36], machine translation [22, 1], image captioning [38, 18], parsing [37] and predicting the outputs of simple computer
programs [39].
The impressive results achieved using LSTMs make it important to discover which aspects of the
rather complicated architecture are crucial for its success and which are mere passengers. It seems
unlikely that Hochreiter and Schmidhuber’s [16] initial design combined with the subsequent introduction of forget gates [6, 7] is the optimal design: at the time, the important issue was to find any
scheme that could learn long-range dependencies rather than to find the minimal or optimal scheme.
One aim of this paper is to cast light on what aspects of the design are responsible for the success of
LSTMs.
Recent research on deep feedforward networks has also produced some impressive results [19, 3]
and there is now a consensus that for deep networks, rectified linear units (ReLUs) are easier to train
than the logistic or tanh units that were used for many years [27, 40]. At first sight, ReLUs seem
inappropriate for RNNs because they can have very large outputs so they might be expected to be far
more likely to explode than units that have bounded values. A second aim of this paper is to explore
whether ReLUs can be made to work well in RNNs and whether the ease of optimizing them in
feedforward nets transfers to RNNs.
2 The initialization trick
In this paper, we demonstrate that, with the right initialization of the weights, RNNs composed
of rectified linear units are relatively easy to train and are good at modeling long-range dependencies. The RNNs are trained by using backpropagation through time to get error-derivatives for the
weights and by updating the weights after each small mini-batch of sequences. Their performance
on test data is comparable with LSTMs, both for toy problems involving very long-range temporal
structures and for real tasks like predicting the next word in a very large corpus of text.
We initialize the recurrent weight matrix to be the identity matrix and biases to be zero. This means
that each new hidden state vector is obtained by simply copying the previous hidden vector then
adding on the effect of the current inputs and replacing all negative states by zero. In the absence
of input, an RNN that is composed of ReLUs and initialized with the identity matrix (which we call
an IRNN) just stays in the same state indefinitely. The identity initialization has the very desirable
property that when the error derivatives for the hidden units are backpropagated through time they
remain constant provided no extra error-derivatives are added. This is the same behavior as LSTMs
when their forget gates are set so that there is no decay and it makes it easy to learn very long-range
temporal dependencies.
We also find that for tasks that exhibit less long range dependencies, scaling the identity matrix by
a small scalar is an effective mechanism to forget long range effects. This is the same behavior as
LSTMs when their forget gates are set so that the memory decays fast.
Our initialization scheme bears some resemblance to the idea of Mikolov et al. [26], where a part of
the weight matrix is fixed to identity or approximate identity. The main difference of their work to
ours is the fact that our network uses the rectified linear units and the identity matrix is only used for
initialization. The scaled identity initialization was also proposed in Socher et al. [32] in the context
of tree-structured networks but without the use of ReLUs. Our work is also related to the work of
Saxe et al. [31], who study the use of orthogonal matrices as initialization in deep networks.
3 Overview of the experiments
Consider a recurrent net with two input units. At each time step, the first input unit has a real value
and the second input unit has a value of 0 or 1 as shown in figure 1. The task is to report the sum of
the two real values that are marked by having a 1 as the second input [16, 15, 24]. IRNNs can learn
to handle sequences with a length of 300, which is a challenging regime for other algorithms.
Another challenging toy problem is to learn to classify the MNIST digits when the 784 pixels are
presented sequentially to the recurrent net. Again, the IRNN was better than the LSTM, having been
able to achieve 3% test set error compared to 34% for LSTM.
While it is possible that a better tuned LSTM (with a different architecture or the size of the hidden
state) would outperform the IRNN for the above two tasks, the fact that the IRNN performs as well
as it does, with so little tuning, is very encouraging, especially given how much simpler the model
is, compared to the LSTM.
We also compared IRNNs with LSTMs on a large language modeling task. Each memory cell of an
LSTM is considerably more complicated than a rectified linear unit and has many more parameters,
so it is not entirely obvious what to compare. We tried to balance for both the number of parameters
and the complexity of the architecture by comparing an LSTM with N memory cells with an IRNN
with four layers of N hidden units, and an IRNN with one layer and 2N hidden units. Here we find
that the IRNN gives results comparable to the equivalent LSTM.
Finally, we benchmarked IRNNs and LSTMs on an acoustic modeling task on TIMIT. As the tasks
only require a short term memory of the inputs, we used the identity matrix scaled by 0.01 as
initialization for the recurrent matrix. Results show that our method is also comparable to LSTMs,
despite being a lot simpler to implement.
4 Experiments
In the following experiments, we compare IRNNs against LSTMs, RNNs that use tanh units and
RNNs that use ReLUs with random Gaussian initialization.
For IRNNs, in addition to the recurrent weights being initialized at identity, the non-recurrent
weights are initialized with a random matrix, whose entries are sampled from a Gaussian distribution with mean of zero and standard deviation of 0.001.
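For illustration, a minimal sketch of this initialization in Python (not from the paper; the function names and the recurrence step are our own, and only the identity/Gaussian/zero-bias choices described above are taken from the text):

```python
import numpy as np

def init_irnn(n_input, n_hidden, scale=1.0, seed=0):
    """IRNN-style initialization: scale=1.0 gives the plain IRNN, a small scale
    (e.g. 0.01) gives the scaled variant used later for TIMIT."""
    rng = np.random.default_rng(seed)
    W_hh = scale * np.eye(n_hidden)                      # recurrent weights: (scaled) identity
    W_xh = rng.normal(0.0, 0.001, (n_hidden, n_input))   # non-recurrent weights: Gaussian, std 0.001
    b_h = np.zeros(n_hidden)                             # biases: zero
    return W_xh, W_hh, b_h

def irnn_step(h_prev, x, W_xh, W_hh, b_h):
    # One ReLU recurrence step: copy the previous state, add the input effect,
    # and replace all negative states by zero.
    return np.maximum(0.0, W_hh @ h_prev + W_xh @ x + b_h)
```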
Our implementation of the LSTMs is rather standard and includes the forget gate. It is observed that
setting a higher initial forget gate bias for LSTMs can give better results for long term dependency
problems. We therefore also performed a grid search for the initial forget gate bias in LSTMs from
the set {1.0, 4.0, 10.0, 20.0}. Other than that we did not tune the LSTMs much and it is possible that
the results of LSTMs in the experiments can be improved.
In addition to LSTMs, two other candidates for comparison are RNNs that use the tanh activation
function and RNNs that use ReLUs with standard random Gaussian initialization. We experimented
with several values of standard deviation for the random initialization Gaussian matrix and found
that values suggested in [33] work well.
To train these models, we use stochastic gradient descent with a fixed learning rate and gradient
clipping. To ensure that good hyperparameters are used, we performed a grid search over several
learning rates α = {10−9 , 10−8 , ..., 10−1 } and gradient clipping values gc = {1, 10, 100, 1000} [9,
36]. The reported result is the best result over the grid search. We also use the same batch size of
16 examples for all methods. The experiments are carried out using the DistBelief infrastructure,
where each experiment only uses one replica [20, 4].
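A minimal sketch of one such update (assuming global-norm gradient clipping, which is our own assumption; the paper only lists the clipping values searched over):

```python
import numpy as np

def sgd_step_with_clipping(params, grads, lr=0.01, clip=10.0):
    """One SGD update with global gradient-norm clipping (illustrative sketch)."""
    total_norm = np.sqrt(sum(np.sum(g ** 2) for g in grads))
    if total_norm > clip:
        grads = [g * (clip / total_norm) for g in grads]   # rescale so the norm equals `clip`
    return [p - lr * g for p, g in zip(params, grads)]
```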
4.1 The Adding Problem
The adding problem is a toy task, designed to examine the power of recurrent models in learning
long-term dependencies [16, 15]. This is a sequence regression problem where the target is a sum of
two numbers selected in a sequence of random signals, which are sampled from a uniform distribution in [0,1]. At every time step, the input consists of a random signal and a mask signal. The mask
signal has a value of zero at all time steps except for two steps when it has values of 1 to indicate
which two numbers should be added. An example of the adding problem is shown in figure 1 below.
A basic baseline is to always predict the sum to have a value of 1 regardless of the inputs. This will
give the Mean Squared Error (MSE) around 0.1767. The goal is to train a model that achieves MSE
well below 0.1767.
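A minimal data-generation sketch for this task (our own illustration; the placement of the two mask bits is unconstrained here, which may differ from the exact protocol used in the experiments):

```python
import numpy as np

def make_adding_batch(batch_size, T, seed=None):
    """Inputs of shape (batch, T, 2) and targets of shape (batch,) for the adding problem."""
    rng = np.random.default_rng(seed)
    signals = rng.uniform(0.0, 1.0, (batch_size, T))      # random signal channel in [0, 1]
    masks = np.zeros((batch_size, T))
    for i in range(batch_size):
        j, k = rng.choice(T, size=2, replace=False)        # the two marked positions
        masks[i, [j, k]] = 1.0
    inputs = np.stack([signals, masks], axis=-1)
    targets = (signals * masks).sum(axis=1)                # sum of the two marked numbers
    return inputs, targets
```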
Figure 1: An example of the “adding” problem, where the target is 1.2 which is the sum of 2nd and
the 7th numbers in the first sequence [24].
The problem gets harder as the length of the sequence T increases because the dependency between
the output and the relevant inputs becomes more remote. To solve this problem, the recurrent net
must remember the first number or the sum of the two numbers accurately whilst ignoring all of the
irrelevant numbers.
We generated a training set of 100,000 examples and a test set of 10,000 examples as we varied T .
We fixed the hidden states to have 100 units for all of our networks (LSTMs, RNNs and IRNNs).
This means the LSTMs had more parameters by a factor of about 4 and also took about 4 times as
much computation per timestep.
As we varied T , we noticed that both LSTMs and RNNs started to struggle when T is around 150.
We therefore focused on investigating the behaviors of all models from this point onwards. The
results of the experiments with T = 150, T = 200, T = 300, T = 400 are reported in figure 2
below (best hyperparameters found during grid search are listed in table 1).
[Figure 2: four panels plotting Test MSE against training steps for "Adding two numbers in a sequence of 150 / 200 / 300 / 400 numbers", with curves for LSTM, RNN + Tanh, RNN + ReLUs and IRNN.]
Figure 2: The results of recurrent methods on the “adding” problem for the case of T = 150 (top
left), T = 200 (top right), T = 300 (bottom left) and T = 400 (bottom right). The objective
function is the Root Mean Squared Error, reported on the test set of 10,000 examples. Note that
always predicting the sum to be 1 should give MSE of 0.1767.
The results show that the convergence of IRNNs is as good as LSTMs. This is given that each LSTM
step is more expensive than an IRNN step (at least 4x more expensive). Adding two numbers in a
sequence of 400 numbers is somewhat challenging for both algorithms.
T    | LSTM                            | RNN + Tanh           | IRNN
150  | lr = 0.01, gc = 10, fb = 1.0    | lr = 0.01, gc = 100  | lr = 0.01, gc = 100
200  | lr = 0.001, gc = 100, fb = 4.0  | N/A                  | lr = 0.01, gc = 1
300  | lr = 0.01, gc = 1, fb = 4.0     | N/A                  | lr = 0.01, gc = 10
400  | lr = 0.01, gc = 100, fb = 10.0  | N/A                  | lr = 0.01, gc = 1
Table 1: Best hyperparameters found for adding problems after grid search. lr is the learning rate, gc
is gradient clipping, and f b is forget gate bias. N/A is when there is no hyperparameter combination
that gives good result.
4.2 MNIST Classification from a Sequence of Pixels
Another challenging toy problem is to learn to classify the MNIST digits [21] when the 784 pixels
are presented sequentially to the recurrent net. In our experiments, the networks read one pixel at a
time in scanline order (i.e. starting at the top left corner of the image, and ending at the bottom right
corner). The networks are asked to predict the category of the MNIST image only after seeing all
784 pixels. This is therefore a huge long range dependency problem because each recurrent network
has 784 time steps.
To make the task even harder, we also used a fixed random permutation of the pixels of the MNIST
digits and repeated the experiments.
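A minimal sketch of the input pipeline implied by this description (our own illustration): each image is flattened in scanline order, and for the permuted variant a single fixed permutation is applied to every image.

```python
import numpy as np

def image_to_pixel_sequence(image, permutation=None):
    """Flatten a 28x28 image into a length-784 sequence of single pixels."""
    seq = image.reshape(-1)              # scanline order: top-left to bottom-right
    if permutation is not None:
        seq = seq[permutation]           # same fixed permutation for every image
    return seq[:, None]                  # shape (784, 1): one pixel per time step

# One fixed permutation reused across the whole dataset for the permuted-MNIST variant.
rng = np.random.default_rng(0)
fixed_perm = rng.permutation(784)
```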
All networks have 100 recurrent hidden units. We stop the optimization after it converges or when
it reaches 1,000,000 iterations and report the results in figure 3 (best hyperparameters are listed in
table 2).
[Figure 3: two panels plotting test accuracy against training steps for pixel-by-pixel MNIST (left) and pixel-by-pixel permuted MNIST (right), with curves for LSTM, RNN + Tanh, RNN + ReLUs and IRNN.]
Figure 3: The results of recurrent methods on the “pixel-by-pixel MNIST” problem. We report the
test set accuracy for all methods. Left: normal MNIST. Right: permuted MNIST.
Problem         | LSTM                         | RNN + Tanh          | RNN + ReLUs         | IRNN
MNIST           | lr = 0.01, gc = 1, fb = 1.0  | lr = 10^-8, gc = 10 | lr = 10^-8, gc = 10 | lr = 10^-8, gc = 1
permuted MNIST  | lr = 0.01, gc = 1, fb = 1.0  | lr = 10^-8, gc = 1  | lr = 10^-6, gc = 10 | lr = 10^-9, gc = 1
Table 2: Best hyperparameters found for pixel-by-pixel MNIST problems after grid search. lr is the
learning rate, gc is gradient clipping, and f b is the forget gate bias.
The results using the standard scanline ordering of the pixels show that this problem is so difficult
that standard RNNs fail to work, even with ReLUs, whereas the IRNN achieves 3% test error rate
which is better than most off-the-shelf linear classifiers [21]. We were surprised that the LSTM did
not work as well as IRNN given the various initialization schemes that we tried. While it is still possible that a better tuned LSTM would do better, the fact that the IRNN performs well is encouraging.
Applying a fixed random permutation to the pixels makes the problem even harder but IRNNs on
the permuted pixels are still better than LSTMs on the non-permuted pixels.
The low error rates of the IRNN suggest that the model can discover long range correlations in the
data while making weak assumptions about the inputs. This could be important to have for problems
when input data are in the form of variable-sized vectors (e.g. the repeated field of a protobuffer 1 ).
4.3 Language Modeling
We benchmarked RNNs, IRNNs and LSTMs on the one billion word language modelling dataset [2],
perhaps the largest public benchmark in language modeling. We chose an output vocabulary of
1,000,000 words.
As the dataset is large, we observed that the performance of recurrent methods depends on the size
of the hidden states: they perform better as the size of the hidden states gets larger (cf. [2]). We
however focused on a set of simple controlled experiments to understand how different recurrent
methods behave when they have a similar number of parameters. We first ran an experiment where
the number of hidden units (or memory cells) in LSTM are chosen to be 512. The LSTM is trained
for 60 hours using 32 replicas. Our goal is then to check how well IRNNs perform given the same
experimental environment and settings. As LSTMs have more parameters per time step, we compared them with an IRNN that had 4 layers and the same number of hidden units per layer (which gives approximately the same number of parameters).
We also experimented with shallow RNNs and IRNNs with 1024 units. Since the output vocabulary is
large, we projected the 1024 hidden units to a linear layer with 512 units before the softmax. This
avoids greatly increasing the number of parameters.
The results are reported in table 3, which show that the performance of IRNNs is closer to the
performance of LSTMs for this large-scale task than it is to the performance of RNNs.
Methods                                                                            | Test perplexity
LSTM (512 units)                                                                   | 68.8
IRNN (4 layers, 512 units)                                                         | 69.4
IRNN (1 layer, 1024 units + linear projection with 512 units before softmax)       | 70.2
RNN (4 layer, 512 tanh units)                                                      | 71.8
RNN (1 layer, 1024 tanh units + linear projection with 512 units before softmax)   | 72.5
Table 3: Performances of recurrent methods on the 1 billion word benchmark.
4.4 Speech Recognition
We performed Phoneme recognition experiments on TIMIT with IRNNs and Bidirectional IRNNs
and compared them to RNNs, LSTMs and Bidirectional LSTMs and RNNs. Bidirectional LSTMs
have been applied previously to TIMIT in [11]. In these experiments we generated phoneme alignments from Kaldi [29] using the recipe reported in [17] and trained all RNNs with two and five
hidden layers. Each model was given log Mel filter bank spectra with their delta and accelerations,
where each frame was 120 (=40*3) dimensional and trained to predict the phone state (1 of 180).
Frame error rates (FER) from this task are reported in table 4.
In this task, instead of the identity initialization for the IRNN recurrent matrices we used 0.01I, so we refer
to them as iRNNs. Initializing with the full identity led to slow convergence, worse results and
sometimes led to the model diverging during training. We hypothesize that this was because in the
speech task similar inputs are provided to the neural net in neighboring frames. The normal IRNN
keeps integrating this past input, instead of paying attention mainly to the current input because it
has a difficult time forgetting the past. So for the speech task, we are not only showing that iRNNs
work much better than RNNs composed of tanh units, but we are also showing that initialization
with the full identity is suboptimal when long range effects are not needed. Multiplying the identity
with a small scalar seems to be a good remedy in such cases.
1
https://code.google.com/p/protobuf/
Methods                                      | Frame error rates (dev / test)
RNN (500 neurons, 2 layers)                  | 35.0 / 36.2
LSTM (250 cells, 2 layers)                   | 34.5 / 35.4
iRNN (500 neurons, 2 layers)                 | 34.3 / 35.5
RNN (500 neurons, 5 layers)                  | 35.6 / 37.0
LSTM (250 cells, 5 layers)                   | 35.0 / 36.2
iRNN (500 neurons, 5 layers)                 | 33.0 / 33.8
Bidirectional RNN (500 neurons, 2 layers)    | 31.5 / 32.4
Bidirectional LSTM (250 cells, 2 layers)     | 29.6 / 30.6
Bidirectional iRNN (500 neurons, 2 layers)   | 31.9 / 33.2
Bidirectional RNN (500 neurons, 5 layers)    | 33.9 / 34.8
Bidirectional LSTM (250 cells, 5 layers)     | 28.5 / 29.1
Bidirectional iRNN (500 neurons, 5 layers)   | 28.9 / 29.7
Table 4: Frame error rates of recurrent methods on the TIMIT phone recognition task.
In general in the speech recognition task, the iRNN easily outperforms the RNN that uses tanh units
and is comparable to LSTM although we don’t rule out the possibility that with very careful tuning
of hyperparameters, the relative performance of LSTMs or the iRNNs might change. A five layer
Bidirectional LSTM outperforms all the other models on this task, followed closely by a five layer
Bidirectional iRNN.
4.5 Acknowledgements
We thank Jeff Dean, Matthieu Devin, Rajat Monga, David Sussillo, Ilya Sutskever and Oriol Vinyals
for their help with the project.
References
[1] D. Bahdanau, K. Cho, and Y. Bengio. Neural machine translation by jointly learning to align
and translate. arXiv preprint arXiv:1409.0473, 2014.
[2] C. Chelba, T. Mikolov, M. Schuster, Q. Ge, T. Brants, and P. Koehn. One billion word benchmark for measuring progress in statistical language modeling. CoRR, abs/1312.3005, 2013.
[3] G. E. Dahl, D. Yu, L. Deng, and A. Acero. Context-dependent pre-trained deep neural networks
for large vocabulary speech recognition. IEEE Transactions on Audio, Speech, and Language
Processing - Special Issue on Deep Learning for Speech and Language Processing, 2012.
[4] J. Dean, G. S. Corrado, R. Monga, K. Chen, M. Devin, Q. V. Le, M. Z. Mao, M. A. Ranzato,
A. Senior, P. Tucker, K. Yang, and A. Y. Ng. Large scale distributed deep networks. In NIPS,
2012.
[5] J. Duchi, E. Hazan, and Y. Singer. Adaptive subgradient methods for online learning and
stochastic optimization. The Journal of Machine Learning Research, 12:2121–2159, 2011.
[6] F. A. Gers, J. Schmidhuber, and F. Cummins. Learning to forget: Continual prediction with
LSTM. Neural Computation, 2000.
[7] F. A. Gers, N. N. Schraudolph, and J. Schmidhuber. Learning precise timing with lstm recurrent networks. The Journal of Machine Learning Research, 2003.
[8] A. Graves. Generating sequences with recurrent neural networks. arXiv preprint arXiv:1308.0850, 2013.
[9] A. Graves. Generating sequences with recurrent neural networks. In Arxiv, 2013.
[10] A. Graves and N. Jaitly. Towards end-to-end speech recognition with recurrent neural networks. In Proceedings of the 31st International Conference on Machine Learning, 2014.
[11] A. Graves, N. Jaitly, and A-R. Mohamed. Hybrid speech recognition with deep bidirectional
lstm. In IEEE Workshop on Automatic Speech Recognition and Understanding (ASRU),, 2013.
[12] A. Graves, M. Liwicki, S. Fernández, R. Bertolami, H. Bunke, and J. Schmidhuber. A novel
connectionist system for unconstrained handwriting recognition. IEEE Transactions on Pattern
Analysis and Machine Intelligence, 2009.
[13] A. Graves, A-R. Mohamed, and G. Hinton. Speech recognition with deep recurrent neural
networks. In IEEE International Conference on Acoustics, Speech and Signal Processing
(ICASSP), 2013.
[14] G. Hinton. Lecture 6.5-rmsprop: Divide the gradient by a running average of its recent magnitude. COURSERA: Neural Networks for Machine Learning, 2012.
[15] S. Hochreiter, Y. Bengio, P. Frasconi, and J. Schmidhuber. Gradient flow in recurrent nets: the
difficulty of learning long-term dependencies. A Field Guide to Dynamical Recurrent Neural
Networks, 2001.
[16] S. Hochreiter and J. Schmidhuber. Long short-term memory. Neural Computation, 1997.
[17] N. Jaitly. Exploring Deep Learning Methods for discovering features in speech signals. PhD
thesis, University of Toronto, 2014.
[18] R. Kiros, R. Salakhutdinov, and R. S. Zemel. Unifying visual-semantic embeddings with
multimodal neural language models. arXiv preprint arXiv:1411.2539, 2014.
[19] A. Krizhevsky, I. Sutskever, and G. E. Hinton. Imagenet classification with deep convolutional
neural networks. In Advances in Neural Information Processing Systems, 2012.
[20] Q. V. Le, M. A. Ranzato, R. Monga, M. Devin, K. Chen, G. S. Corrado, J. Dean, and A. Y.
Ng. Building high-level features using large scale unsupervised learning. In International
Conference on Machine Learning, 2012.
[21] Y. LeCun, L. Bottou, Y. Bengio, and P. Haffner. Gradient-based learning applied to document
recognition. Proceedings of the IEEE, 1998.
[22] T. Luong, I. Sutskever, Q. V. Le, O. Vinyals, and W. Zaremba. Addressing the rare word
problem in neural machine translation. arXiv preprint arXiv:1410.8206, 2014.
[23] J. Martens. Deep learning via Hessian-free optimization. In Proceedings of the 27th International Conference on Machine Learning, 2010.
[24] J. Martens and I. Sutskever. Learning recurrent neural networks with Hessian-Free optimization. In ICML, 2011.
[25] J. Martens and I. Sutskever. Training deep and recurrent neural networks with Hessian-Free
optimization. Neural Networks: Tricks of the Trade, 2012.
[26] T. Mikolov, A. Joulin, S. Chopra, M. Mathieu, and M. A. Ranzato. Learning longer memory
in recurrent neural networks. arXiv preprint arXiv:1412.7753, 2014.
[27] V. Nair and G. Hinton. Rectified Linear Units improve Restricted Boltzmann Machines. In
International Conference on Machine Learning, 2010.
[28] R. Pascanu, T. Mikolov, and Y. Bengio. On the difficulty of training recurrent neural networks.
arXiv preprint arXiv:1211.5063, 2012.
[29] D. Povey, A. Ghoshal, G. Boulianne, L. Burget, O. Glembek, N. Goel, M. Hannemann,
P. Motlicek, Y. Qian, P. Schwarz, J. Silovsky, G. Stemmer, and K. Vesely. The kaldi speech
recognition toolkit. In IEEE 2011 Workshop on Automatic Speech Recognition and Understanding. IEEE Signal Processing Society, 2011.
[30] D. Rumelhart, G. E. Hinton, and R. J. Williams. Learning representations by back-propagating
errors. Nature, 323(6088):533–536, 1986.
[31] A. M. Saxe, J. L. McClelland, and S. Ganguli. Exact solutions to the nonlinear dynamics of
learning in deep linear neural networks. arXiv preprint arXiv:1312.6120, 2013.
[32] R. Socher, J. Bauer, C. D. Manning, and A. Y. Ng. Parsing with compositional vector grammars. In ACL, 2013.
[33] D. Sussillo and L. F. Abbott. Random walk intialization for training very deep networks. arXiv
preprint arXiv:1412.6558, 2015.
[34] I. Sutskever, J. Martens, G. Dahl, and G. Hinton. On the importance of initialization and
momentum in deep learning. In Proceedings of the 30th International Conference on Machine
Learning, 2013.
[35] I. Sutskever, J. Martens, and G. E. Hinton. Generating text with recurrent neural networks.
In Proceedings of the 28th International Conference on Machine Learning, pages 1017–1024,
2011.
[36] I. Sutskever, O. Vinyals, and Q. V. Le. Sequence to sequence learning with neural networks.
In NIPS, 2014.
[37] O. Vinyals, L. Kaiser, T. Koo, S. Petrov, I. Sutskever, and G. Hinton. Grammar as a foreign
language. arXiv preprint arXiv:1412.7449, 2014.
[38] O. Vinyals, A. Toshev, S. Bengio, and D. Erhan. Show and tell: A neural image caption
generator. arXiv preprint arXiv:1411.4555, 2014.
[39] W. Zaremba and I. Sutskever. Learning to execute. arXiv preprint arXiv:1410.4615, 2014.
[40] M. Zeiler, M. Ranzato, R. Monga, M. Mao, K. Yang, Q. V. Le, P. Nguyen, A. Senior, V. Vanhoucke, and J. Dean. On rectified linear units for speech processing. In IEEE Conference on
Acoustics, Speech and Signal Processing (ICASSP), 2013.
| 9 |
1
Online Thevenin Equivalent Parameter Estimation using
Nonlinear and Linear Recursive Least Square Algorithm
Md. Umar Hashmi
Department of Energy Science & Engineering
IIT Bombay
Bombay, Maharashtra
[email protected]
Rahul Choudhary
Department of Systems and Control
IIT Bombay
Bombay, Maharashtra
[email protected]
Abstract—This paper proposes a method for detection and estimation of
Thevenin equivalent parameters to describe power system behavior.
Thevenin equivalent estimation is a challenge due to variation in
system states caused by power flow in the network. Thevenin
equivalent calculation based on changes in system with multiple
sources integrated with grid, isolated distributed generator system is
analysed and nonlinear least square fit estimation technique for
algorithm is adopted. Linear least square fit is used with a linearized
model. Performance evaluation of proposed method is carried out
through mathematical model, nonlinear and linear least square fit
based algorithm technique and simulation through the MATLAB/SIMULINK package. Accurate grid and source side
impedance estimation technique is applicable for distributed
generation sources interfaced with grid to improve dynamic response,
stability, reliability when subjected to faults or any other
disturbances in network. Algorithm can accurately estimate Thevenin
equivalent of multiple sources connected in parallel simultaneously
with voltage and current phasor measurements at point of common
coupling. Mathematical analysis and simulation results validate the
effectiveness of proposed method.
Index Terms—Adaptive controller, Impedance, Least square fit
estimation, Point of common coupling, Thevenin equivalent.
I. INTRODUCTION
With the integration of a large number of distributed generation
sources (DG) into the distribution network, concerns have been raised
related to stability, power quality and reliability of the
supply [1-3]. Distributed power generation sources, depending
upon their mode of operation, affect the line impedance of the
distribution network. Performance of a grid connected converter
depends upon line impedance [4]. Islanding operation due to
faults or grid disturbances due to voltage or frequency sag if not
detected properly leads to poor power quality at point of common
coupling (PCC).Online measurement due its numerous advantages
is preferred to fulfill anti islanding requirement. The distortion in
network caused due to electrical loads and distributed generation
sources strongly depends upon grid impedance and amount of
apparent power connected [5]. To optimize the operation and
performance of distributed generation sources in microgrid
network, time and frequency dependent grid parameter estimation/
calculation is key [5].
In literature several methods are proposed for impedance
estimation. Estimation based on control loop variations which
include new values of inductance and resistance in control loop to
improve performance of system is proposed in [6]. Grid
impedance estimation based on use of extra devices is proposed in
[7]. Injection of harmonics signal into the grid and use of
mathematical models to obtain new impedance parameters are
Jayesh G. Priolkar
Department of Electrical Engineering
Goa College of Engineering
Farmagudi, Ponda Goa
[email protected]
addressed in [8-10].Grid impedance using controlled excitation
based on frequency characteristics of inductance -capacitance–
inductance (LCL) filter resonance is reported in [3]. The various
methods used to determine grid impedance characteristics offline
by means of frequency response analysis is reported in [11-13].
The methods of equivalent impedance estimation can be
broadly classified as active and passive type [14]. Passive
methods utilize the existing disturbance present in power
networks, for example detection of low order harmonic frequency
impedance. In active methods in addition to regular operation,
forced disturbance is injected into the grid or distributed
generation network for parameter estimation. Regression analysis
is used to extract parameters from measured data to define
physical characteristics of system. Thevenin equivalent estimation
based on concept of regression is also reported in literature.
Recursive least square estimation technique based on varying
system states to determine Thevenin equivalent is proposed in
[15].
Detection and estimation of Thevenin equivalent impedance for
different application is reported in literature. For example, power
system fault detection, load matching for maximum power
transfer, state of charge estimation for battery bank [16],
simultaneous estimation of Thevenin equivalent of multiple
sources, for load management by load shedding in power system
network based on detection of undervoltage [17], voltage stability
margin adjustment and analysis for prevention of voltage collapse
by a real time voltage instability identification algorithm based on
local phasor measurements [18].
The main aim of our work is to develop the algorithm for
online Thevenin equivalent parameter estimation. It is active
method which is available to estimate Thevenin equivalent
voltage and current. To evaluate the performance of proposed
method, mathematical model, nonlinear least square fit based
algorithm
technique
and
simulation
through
MATLAB/SIMULINK package are done.
Paper is organized as follows, Introduction of paper is given in
section I, section II discusses in brief applications of online
Thevenin equivalent estimation algorithm, section III presents
parameter estimation for isolated power system, and section IV
discusses the parameter estimation for grid connected system.
Simulation results are discussed in section V. Section VI
concludes the paper.
II. APPLICATIONS OF THEVENIN EQUIVALENT ESTIMATION
Online Thevenin equivalent estimation algorithm finds
application in adaptive control of electrical processes for
performance improvement and preemptive corrective action can be
taken for safe system operation. Some of the applications have
been discussed in this section.
i) Power system fault detection: The Thevenin equivalent of the power system will indicate the change of estimated parameters due to a fault in the power system. The relationship between the real fault distance and the varying Thevenin equivalent impedance is presented in paper [19]. The proposed Thevenin equivalent estimation algorithm in our work uses balanced symmetrical components for correct estimation. Balanced ground faults, balanced short circuits and balanced impedance changes can be identified using the presented algorithm.
ii) Load matching for maximizing power transfer: Maximum power is transferred to a load resistor (RL) when the value of the load resistor is selected to match the value of the Thevenin resistance (Rth) of the power source. Figure 1 demonstrates this for a value of Rth of 10 ohms.

P = I^2 R,   PL = [Vth / (Rth + RL)]^2 (RL)    (1)

One example of load matching is wireless power transfer circuitry design using inductive coils. A four coil wireless power transfer simulation diagram along with the matching network is shown in Figure 3. Parameter values of the matching network depend on the Thevenin equivalent impedance of the four coil system. The matching network shown in Figure 3 is one of the many possible architectures of a matching network.
iii) State of charge estimation of battery or battery bank: A linear relationship is presented in [16] between the open circuit voltage (VOC) and the SOC,

SOC(%) = a · VOC + b,

where the values of a and b are found experimentally [16]. Open circuit voltage is the Thevenin equivalent voltage of a battery under no load condition.
iv) Voltage stability margin adjustment and analysis for prevention of voltage collapse: Voltage instability is a major concern for power systems operation. A real-time voltage instability identification algorithm based on local phasor measurements is proposed in [18], which recognizes instabilities in voltage. The power transferred to the bus reaches its voltage stability limit when the Thevenin impedance has the same magnitude as the load impedance at the bus [18]. The apparent power supplied is S, with Y = 1/ZL; writing S as a function of Y (equation (2)), the condition for maximum load apparent power is dS/dY = 0, and hence the critical point of voltage instability is Zth = ZL. The maximum loading point can therefore be accurately monitored online by calculating dS/dY; a value of dS/dY close to zero indicates proximity to the voltage collapse point [20].
The proposed algorithm assumes no simplification in the system for estimating the Thevenin equivalent parameters.
Figure 1: Maximum power transfer at Zload = Zsource (load power in watts versus load resistance in ohms, for Rth = 10 ohms)
Figure 2: Maximum power transfer with complex impedance (power transfer in watts versus load resistance and reactance in ohms)
Figure 3: Matching network design for 4 coil wireless power transfer circuit

III. PARAMETER ESTIMATION FOR ISOLATED SYSTEM
Description of the Proposed Algorithm:
Consider a voltage source feeding a load, where the impedance of the source itself, the line and the filter appears before the point at which the load is actually connected. The Thevenin equivalent of the circuit is shown adjacent to it in the figure below.
Figure 4: Isolated system with its Thevenin equivalent
Using Kirchhoff's Voltage Law we can write the voltage loop equation as:

V_Th (cos θ + j sin θ) = V_PCC (cos δ + j sin δ) + I_PCC (cos φ + j sin φ) × (R_Th + j X_Th)    (3)

We can separate the real and imaginary components of the equation as,

V_Th cos θ = V_PCC cos δ + I_PCC (cos φ × R_Th − sin φ × X_Th)    (4)
V_Th sin θ = V_PCC sin δ + I_PCC (cos φ × X_Th + R_Th × sin φ)    (5)
3
The algorithm will be able to estimate the Theevenin equivalent
voltage and current. The point of common coupliing is the point at
which the load is connected. The known and unkn
known parameters
are shown below in table 1:
Table 1: State of Parameters
Known Parameters
Voltage Magnitude at PCC (VPCC)
Start
Unknown P
Parameters
Thevenin Voltage (VThevenin) ( 1)
Voltage Angle at PCC ( )
Thevenin Voltagee Angle (θ) (
Current Magnitude at PCC (IPCC)
Thevenin Resistancee (RThevenin) ( 3)
Current Angles at PCC (φ)
Thevenin Inductancee (XThevenin)(
VPCC cos = VT
XT
)
cos θ
IPCC cos φ × R T
VPCC sin = VT
RThevenin×sinφ)
sin θ
IPCC cos φ × XT
Equation (6) and (7) are non-linear in nature.
cos )
=
sin )
Where, y = VPCC cos , = VPCC sin , a = IPCC cos φ,
b = IPCC sin φ are known parameters and = VT
,
= RT
, = XT
are unknown param
meters.
=
Where
, , )
=
Estimation of Thevenin equivalent uses local measurements of at
least two different voltage and curreent vectors (magnitude and
phase) pairs measured at different time with different values
nin equivalent parameters.
associated with same reference Theven
Initialize Simulation
Variables
2)
For i = 1: number of iterations
s
4)
1. Define Load
2. Run Simulation
p
3. Get Voltage and Current (both phase
& magnitude ) at PCC
sin φ ×
(6)
Yes
(7)
No
Initial Guess for unknown
parameters X
Use
U
previous value of
unknown
u
parameter X
No
(8)
= ,
1. Minimize the error function E 2 for X
[E2 = f(Vpcc,Ipcc, load, Vsourse
e)]
2. Get X using least square meth
hod
Store
unknown
parameter X
Display X
(9)
and
If i = 1
If i = iteration limit
=
Yes
end
= min ∑
)
(10)
It is intended to solve these equations to obtaain the values X
which satisfy the system of equations describedd in equation 10.
Initial guess values are random numbers. Equattion 10 has to be
minimized for optimal values of unknown param
meters. Ideally β
has to be zero but since the algorithm recursiveely optimizes the
unknown parameters, there is a residual error w
which has to be
maintained as low as possible for accurate ressults. Residual is
defined by equation (11).
=
(11)
Nonlinear Recursive algorithm needs two Phasoor measurements
of voltage and current at the point of commoon coupling. The
figure below shows the circuit diagram for two different loading
scenarios. Phasor measurements are done at steaddy state.
Figure 7: Flow chart for the proposed algorithm
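For illustration, a minimal sketch of the nonlinear estimation in Python (our own sketch: the use of scipy.optimize.least_squares and the example measurement values are assumptions, not taken from the paper). It fits (V_Th, θ, R_Th, X_Th) to two or more phasor measurements by minimizing the residual of equations (6)-(7).

```python
import numpy as np
from scipy.optimize import least_squares

def residuals(x, V_pcc, delta, I_pcc, phi):
    """Residual of equations (6)-(7) for all measurements; x = [V_th, theta, R_th, X_th]."""
    V_th, theta, R_th, X_th = x
    a, b = I_pcc * np.cos(phi), I_pcc * np.sin(phi)
    r_real = V_pcc * np.cos(delta) - (V_th * np.cos(theta) - a * R_th + b * X_th)
    r_imag = V_pcc * np.sin(delta) - (V_th * np.sin(theta) - a * X_th - b * R_th)
    return np.concatenate([r_real, r_imag])

# Two (or more) phasor measurements taken at the PCC under different loads (illustrative numbers).
V_pcc = np.array([69.8, 70.1])
delta = np.array([-0.02, -0.015])       # rad
I_pcc = np.array([5.0, 3.2])
phi   = np.array([-0.4, -0.35])         # rad

x0 = np.array([70.0, 0.0, 0.5, 0.5])    # initial guess for [V_th, theta, R_th, X_th]
sol = least_squares(residuals, x0, args=(V_pcc, delta, I_pcc, phi))
print(sol.x)                            # estimated Thevenin parameters
```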
Figure 5: Circuit description of the two data set collection
Figure 6: Phasor diagram; at least two different measurements are required for operation of the estimation algorithm.

Parameter Estimation using linear regression
The non-linearity of Equation (8) can be taken care of by eliminating the trigonometric variables:

y = x1' − a·x3 + b·x4,    z = x2' − b·x3 − a·x4    (12)

[y ; z] = [1  0  −a   b ; 0  1  −b  −a] × [x1' ; x2' ; x3 ; x4]    (13)

where y = VPCC cos δ, z = VPCC sin δ, a = IPCC cos φ, b = IPCC sin φ are known parameters and x1' = VThevenin cos θ, x2' = VThevenin sin θ, x3 = RThevenin, x4 = XThevenin are unknown parameters.
The additional variables can then be calculated as

VThevenin = sqrt(x1'^2 + x2'^2)    (14)
θ = tan^-1(x2' / x1')    (15)

For n different loads, Equation (13) can be written in the following structure (augmentation of matrices),

Y = H X    (16)

where the dimensions of the matrices are Y (2n × 1), H (2n × 4) and X (4 × 1).    (17)

For a given estimate X̂ of X, the estimation error is

e = Y − H X̂    (18)
E = e^T e = (Y − H X̂)^T (Y − H X̂)    (19)

The error between the actual and estimated values will be minimum for

∂E / ∂X̂ = 0    (20)
∂/∂X̂ [(Y − H X̂)^T (Y − H X̂)] = 0  ⇒  H^T H X̂ = H^T Y    (21)
X̂ = (H^T H)^-1 H^T Y    (22)

The vector X̂ provides a good estimate of the unknown parameters.
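A minimal sketch of the linear formulation (13)-(22) in Python (our own illustration; measurement arrays as in the previous sketch are assumed): build H and Y from n load conditions and solve the least-squares problem.

```python
import numpy as np

def estimate_thevenin_linear(V_pcc, delta, I_pcc, phi):
    """Linear least-squares estimate of [V_th*cos(theta), V_th*sin(theta), R_th, X_th]."""
    y, z = V_pcc * np.cos(delta), V_pcc * np.sin(delta)
    a, b = I_pcc * np.cos(phi), I_pcc * np.sin(phi)

    rows, rhs = [], []
    for ai, bi, yi, zi in zip(a, b, y, z):
        rows.append([1.0, 0.0, -ai,  bi])   # real part of equation (13)
        rows.append([0.0, 1.0, -bi, -ai])   # imaginary part of equation (13)
        rhs.extend([yi, zi])
    H, Y = np.array(rows), np.array(rhs)

    X_hat, *_ = np.linalg.lstsq(H, Y, rcond=None)   # numerically stable solution of (22)
    x1p, x2p, R_th, X_th = X_hat
    V_th = np.hypot(x1p, x2p)                       # equation (14)
    theta = np.arctan2(x2p, x1p)                    # equation (15)
    return V_th, theta, R_th, X_th
```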
IV. PARAMETER ESTIMATION FOR GRID CONNECTED SYSTEM
Consider a DG source connected to the grid as shown in figure 8. The total impedance of the network includes line, filter and load impedance.
Thevenin Estimation Results:
Positive sequence voltage and current angle and magnitude are measured. It is assumed that the input voltage magnitude and angle and the Thevenin impedance as seen from the load side are constant while the algorithm is in operation. The estimation gives an output which is within ±1% error with just one load change, and hence it does not take more than a few seconds for accurate prediction.
The positive sequence extraction is done to eliminate the unbalanced components in voltage and current. The impact of the negative sequence is in terms of losses or heating caused by unbalances, which is not relevant for this case.
The results of the estimation with multiple numbers of iterations are tabulated in the table below.
Table 2: Estimation of Parameters for Isolated system
No. of Phasor Measurements | Generator Voltage | Voltage angle | Resistance | Inductance
Actual                     | 70.7107           |               | 1          | 0.377
2 Phasor                   | 70.642            | 1.98E-05      | 0.984442   | 0.3747
Error %                    | 0.09706           |               | 1.557333   | 0.5993
3 Phasor                   | 70.642            | 2.98E-05      | 0.9843399  | 0.374741
Error %                    | 0.09715           |               | 1.56001    | 0.5992
5 Phasor                   | 70.6254           | 1.11E-05      | 0.978555   | 0.373026
Error %                    | 0.12057           |               | 2.144227   | 1.05397
10 Phasor                  | 70.576            | 2.76E-05      | 0.96009    | 0.3691
Error %                    | 0.1905            |               | 3.911      | 2.0955
100 Phasor                 | 70.5762           | 1.34E-06      | 0.961      | 0.3691
Error %                    | 0.19015           |               | 3.899449   | 2.09503
The estimated values tabulated in Table 2 show accurate prediction with just two phasor measurements of the known parameters at the point of common coupling.
• Current from individual sources (Igen and Igrid in the figure above) is needed.
Equivalent Thevenin voltage (magnitude and angle) and impedance can be estimated. The results of estimation are listed in Tables 4 and 6. Generator side estimation known and unknown parameters are listed in Table 3. A hypothetical scenario is simulated in which the voltage levels of two sources connected in parallel are drastically different. The generator side voltage magnitude is set at 70.7 V and the grid is assumed at 49.5 V. The algorithm accurately estimates the voltage level difference.

Table 3: State of Parameters on Generator side
Known Parameters                | Unknown Parameters
Voltage Magnitude at PCC        | Generator Voltage
Voltage Angle at PCC            | Generator Voltage Angle
Current Magnitude of generator  | Generator resistance
Current Angles                  | Generator Inductance

Parameters estimated with their error percentages for generator side parameter estimation are tabulated in Table 4.
Table 4: Estimation of Parameters with error on Generator side
No. of Phasor Measurements | Generator Voltage | Voltage angle | Resistance | Inductance
Actual                     | 70.7107           |               | 1          | 0.377
2 Phasor                   | 70.70932          | -6.285        | 0.999933   | 0.376977
Error %                    | 0.00195           |               | 0.0067     | 0.0061
3 Phasor                   | 70.70948          | -6.279        | 0.999942   | 0.376974
Error %                    | 0.00172           |               | 0.00577    | 0.007004
5 Phasor                   | 70.71005          | -6.290        | 0.999971   | 0.376978
Error %                    | 0.00092           |               | 0.00289    | 0.0059

Grid side estimation known and unknown parameters are listed in Table 5.

Table 5: State of Parameters on grid side
Known Parameters             | Unknown Parameters
Voltage Magnitude at PCC     | Grid Voltage
Voltage Angle at PCC         | Grid Voltage Angle
Current Magnitude from grid  | Grid Resistance
Current Angles               | Grid Inductance

Parameters estimated with their errors:
Table 6: Estimation of parameters with errors on grid side
No. of Phasor Measurements | Grid Voltage | Voltage angle | Grid Resistance | Grid Inductance
Actual                     | 49.4975      |               | 0.5             | 0.0377
2 Phasor                   | 49.4973      | -0.00199      | 0.5             | 0.0377
Error %                    | 0.0004       |               | 0               | 0
3 Phasor                   | 49.49726     | -6.29         | 0.500017        | 0.037699
Error %                    | 0.000483     |               | -0.00334        | 0.00285
5 Phasor                   | 49.49754     | -6.29         | 0.499           | 0.03769
Error %                    | -8.26E-05    |               | 0.00112         | 0.00157
'n' number of Power Sources connected in Parallel
Superposition theorem implementation of the proposed algorithm:
The algorithm can be used for parameter estimation of multiple sources simultaneously. The information needed from the system is the current coming out of each of the sources and the common voltage vector at the point of common coupling.
Figure 8: Generator and grid connected in parallel to feed a load
For parameter estimation of a system with multiple sources:
• Voltage magnitude and angle at PCC is needed.
[Figure 9: plot of the number of known parameters used and the number of unknown parameters estimated against the number of sources connected in parallel.]
Figure 9: Known & unknown parameters for 'n' sources connected in parallel
Table 7: Parameter Estimation with linear regression
No. of Phasor Measurements | Generator Voltage | Voltage angle | Resistance | Inductance
Actual                     | 70.711            |               | 1.000      | 0.377
1 Phasor                   | 15.395            | 39.479        | -8.829     | -3.351
Error %                    | 78.228            |               | 982.878    | 988.831
2 Phasor                   | 47.984            | -30.161       | -1.342     | -6.005
Error %                    | 32.140            |               | 234.184    | 1692.739
3 Phasor                   | 70.602            | -0.151        | 1.005      | 0.390
Error %                    | 0.154             |               | -0.483     | -3.496
4 Phasor                   | 70.651            | -0.112        | 1.006      | 0.392
Error %                    | 0.084             |               | -0.557     | -4.019
5 Phasor                   | 70.599            | -0.143        | 0.999      | 0.374
Error %                    | 0.158             |               | 0.091      | 0.665
6 Phasor                   | 70.583            | -0.151        | 0.996      | 0.367
Error %                    | 0.181             |               | 0.374      | 2.730
10 Phasor                  | 70.636            | -0.110        | 0.998      | 0.371
Error %                    | 0.105             |               | 0.219      | 1.603
The linear recursive least square method derived for Thevenin equivalent estimation requires at least three phasor measurements for accurate prediction of the unknown parameters. The results for linear recursive least square estimation are listed in Table 7. The results with the 1st and 2nd phasor measurements are not reliable; however, the results converge with 3 phasor measurements and onwards.
Compared to the linear recursive fit, the nonlinear recursive fit is better, as the error percentages are much lower as per the results tabulated in Tables 4, 6 and 7.
The nonlinear recursive method uses the exact equations for estimation and is therefore computationally exhaustive, but the overall accuracy of estimation is higher. Furthermore, the nonlinear recursive estimation converges with 2 phasor measurements of data, whereas the linear method takes at least 3 phasor measurements for convergence.
The results generated using the proposed algorithms have an upper bound of 5000 maximum function evaluations and a maximum of 8000 iterations.
Power System Grid Impedance Change Detection
A power system undergoes various kinds of faults which lead to an overall impedance change of the system. Simulations are conducted in MATLAB Simulink for the isolated system to verify the capability of the proposed algorithm to detect impedance change in the power system. A single source simulation with an impedance change from 1 + 0.377j to 2 + 0.755j at 5 seconds, in a 9.6 second simulation, is conducted. The algorithm is capable of detecting the impedance change very accurately. As the source voltage and phase remain unchanged, the estimation remains fairly stable at its initial values.
The voltage magnitude from the source side is unchanged and only the line impedance is changed. Therefore the estimated Thevenin voltage magnitude and phase remain fairly stable.
Figure 10: Voltage magnitude estimation (RMS voltage magnitude in volts versus time in seconds)
Figure 11: Voltage phase estimation (voltage angle versus time in seconds)
Figure 12: Power source resistance estimation (ohms versus time in seconds)
Figure 13: Power system reactance (ohms versus time in seconds)
As the resistance and reactance values are doubled at 5 seconds, the estimated values change level around 5 seconds. The error % for the estimated quantities is well below 0.5% under steady state. The positive sequence voltage and current angle and magnitude are measured. It is assumed that the input voltage magnitude and angle and the Thevenin impedance as seen from the load side are constant while the algorithm is in operation. The positive sequence extraction is done to eliminate the unbalanced components in voltage and current. The proposed parameter estimation algorithm, using linearized and nonlinear recursive least square formulations, can be used for detection of balanced power system faults, as evaluated through the simulation results shown in Figures 10 to 13.
VI. CONCLUSION
Online Thevenin equivalent parameter estimation using
nonlinear and linear recursive least square algorithm are proposed
and evaluated in this paper. Simultaneous parameter estimation of ‘n’ parallel sources with known parameters at the point of common coupling, i.e. current and voltage phasors, is evaluated and
analysed. The proposed algorithms require at least two phasor
measurements for accurate prediction. The robustness of the
algorithms has also been tested and verified. Fault in power
system leads to impedance change. The Thevenin equivalent
estimation proposed in the paper can help to determine power
system behavior and adopt the corrective actions to improve
response of system network and stability. Simulation results
validating the performance of the proposed algorithms have also
been verified for power system impedance change detection.
From the analysis carried out it is found that nonlinear recursion is
comparatively better than linear recursion in terms of error
percentage of the estimation. However linear method is
computationally less exhaustive, more viable for online controller
implementation.
REFERENCES
[1]A. Azmy and I. Erlich, “Impact of distributed generation on the stability of
electrical power systems,” in proc. power engineering society general meeting,
IEEE 2005, pp. 1056-1063, 2005.
[2]S. Jian, “Small signal methods for AC distributed power system – A
Review,” IEEE Trans. on Power Electronics, vol.24, no.11, pp.2545-2554,
Nov.2009.
[3] M. Liserre, R. Teodorescu and F. Blaabjerg, “Stability of photovoltaic and
wind turbine grid connected inverters for large set of grid impedance values,”
IEEE Trans.on Power Electronics, vol.21, no.1, pp.263-272, Jan.2006.
[4] M.Liserre, F. Blaabjerg, and R.Teodorescu, “Grid Impedance estimation
via excitation of LCL filter resonance,” IEEE Trans. on Industry applications,
vol.43, no.5, Oct.2007.
[5] N. Hoffmann, and F. Fuchs , “Online grid estimation for control of grid
connected converters in inductive –resistive distributed power networks using
extended Kalmann filter,” IEEE conference ,pp.922-929, 2012.
[6] J. Vasquez, J. Gurerreo, A. Luna, P. Rodriguez, and R. Teoderscu,
“Adaptive control applied to voltage source inverters operating in grid
connected and islanded modes,” IEEE Transactions on Industrial Electronics,
Vol. 56 , No. 10, pp. 4088-4096, Oct. 2009.
[7] J. Huang, K. Corzine, M. Belkhyat, “ Small signal Impedance measurement
of power electronics based AC power systems using line to line current
injection,” IEEE Transactions on Power Electronics,vol.56, pp.4088-4096,
2009.
[8] M. Ciobotaru, R. Teodorescu and F. Blaabjberg, “On line grid
estimation based on harmonic injection for grid connected PV inverter,” In
proc. IEEE International Symposium on Industrial electronics, ISIE 2007.
[9]A. Timbus , P. Roriguez, R. Teodorescu, M. Ciobotaru, “Line Impedance
estimation using active and reactive power variations,” In proc. IEEE , PESC
,pp.1273-1279, 2007.
[10]L. Siminoaei, R. Teodorescu, F. Blaabjerg, U. Borup, “ A digital
controlled PV inverter with grid impedance estimation for ENS detection ,”
IEEE Trans. on Power Electronics, vol.20, no.6, pp. 1480-1490, Nov.2005.
[11]Z. Straozcyzk, “A method for real time wide identification of the source
impedance in power system,” IEEE Trans. on Instrumentation measurement,
vol.54,no.1,pp.377-385, Feb. 2005.
[12] M. Sumner, B. Palethorpe, D. Thomas,P. Zanchetta, and M. Piazza, “ A
Technique for power supply harmonic impedance estimation using controlled
voltage disturbance,” IEEE Trans. Power Electronics,vo.17,no.2, pp.207-212,
Mar.2002.
[13] M. Sumner, B. Palethorpe, and D. Thomas, “Impedance measurement
for improved power quality- part 1: the measurement technique,” IEEE Trans.
on Power Electronics, vol.19,no.3, pp.1442-1448, July 2004.
[14] S. Cobaceus, E. Bueno, D. Pizzrro,F. Rodriguez, and F.Huerta, “ Grid
Impedance monitoring system for distributed power generation electronic
interfaces,” IEEE Trans. on Instrumentation and Measurement, vol.58, no.9,
pp.3112-3121, Sept.2009.
[15] K. Vu, M. Begovic, D. Novosel, and M. Saha, “Use of local
measurements to estimate voltage stability margin,” IEEE Trans. on Power
systems, vol.14, pp.1029-1034, 1999.
[16] Vairamohan Bhaskar, “State of charge estimation for batteries,” Masters
Thesis, University of Tenesse, 2002.
[17]S. Tsai, K. Wong , “Adaptive undervoltage load shedding relay design
using Thevenin equivalent estimation”, in Proc. Power and energy society
general meeting, IEEE, 2008.
[18] S. Corsi, and G. Taranto, “A real time voltage instability identification
algorithm on local Phasor measurement,” IEEE Trans. on Power systems,
vol.23, no.3, Aug. 2008.
[19]C. Tsai, C. Chiachu, “Fault locating estimation using Thevenin equivalent
in power systems,” in proc. IPEC,IEEE conference, 2010.
[20] A. R. Phadke, M. Fozdar, K. R. Niazi, “A New Technique for on-line
monitoring of voltage Stability margin using local signals”, Fifteenth National
Power Systems Conference (NPSC), IIT Bombay, December 2008.
| 3 |
An economic approach to vehicle dispatching for ride sharing
Mengjing Chen, Weiran Shen, Pingzhong Tang, and Song Zuo
IIIS, Tsinghua University ∗
arXiv:1707.01625v2 [] 1 Mar 2018
March 2, 2018
Abstract
Over the past few years, ride-sharing has emerged as an effective way to relieve traffic congestion. A key problem
for these platforms is to come up with a revenue-optimal (or GMV-optimal) pricing scheme and an induced vehicle
dispatching policy that incorporate geographic and temporal information. In this paper, we aim to tackle this problem
via an economic approach.
Modeled naively, the underlying optimization problem may be non-convex and thus hard to compute. To this
end, we use a so-called “ironing” technique to convert the problem into an equivalent convex optimization one via a
clean Markov decision process (MDP) formulation, where the states are the driver distributions and the decision
variables are the prices for each pair of locations. Our main finding is an efficient algorithm that computes the exact
revenue-optimal (or GMV-optimal) randomized pricing schemes. We characterize the optimal solution of the MDP
by a primal-dual analysis of a corresponding convex program. We also conduct empirical evaluations of our solution
on real data from a major ride-sharing platform and show its advantages over fixed pricing schemes as well as
several prevalent surge-based pricing schemes.
1
Introduction
The recently established applications of shared mobility, such as ride-sharing, bike-sharing, and car-sharing, have been
proven to be an effective way to utilize redundant transportation resources and to optimize social efficiency (Cramer
and Krueger, 2016). Over the past few years, intensive research has been done on topics related to the economic aspects of shared mobility (Crawford and Meng, 2011; Kostiuk, 1990; Oettinger, 1999).
Despite this research, the problem of how to design revenue-optimal prices and vehicle dispatching schemes has remained largely open and is one of the main research agendas in the sharing economy. There are at least two challenges when one wants to tackle this problem in real-world applications. First of all, due to the nature of transportation,
the price and dispatch scheme must be geographically dependent. Secondly, the price and dispatch scheme must take
into consideration the fact that supplies and demands in these environments may change over time. As a result, it
may be difficult to compute, or even to represent a price and dispatch scheme for such complex environments.
Traditional price and dispatch schemes for taxis (Laporte, 1992; Gendreau et al., 1994; Ghiani et al., 2003) and
airplanes (Gale and Holmes, 1993; Stavins, 2001; McAfee and Te Velde, 2006) do not capture the dynamic aspects of
the environments: taxi fees are normally calculated at a fixed rate of distance and time, and flight tickets are sold over relatively long booking periods, while in contrast, the customers of shared vehicles make their decisions
instantly.
The dynamic ride-sharing market studied in this paper is also known to have imbalanced supply and demand,
either globally in a city or locally at a particular time and location. Such imbalance in supply and demand is known to
cause severe consequences for revenue (e.g., the so-called wild goose chase phenomenon (Castillo et al., 2017)). Surge pricing is a way to balance dynamic supply and demand (Chen and Sheldon, 2015), but there is no known guarantee that surge-based pricing can dispatch vehicles efficiently and resolve the imbalance between supply and demand. Traditional dispatch schemes (Laporte, 1992; Gendreau et al., 1994; Ghiani et al., 2003) focus more on the algorithmic aspect of static vehicle routing, without considering pricing. However, the vehicle dispatching and pricing problems are tightly related, since a new pricing scheme will surely induce a change in supply and demand, as the drivers and passengers are strategic. In this paper, we aim to come up with pricing schemes with desirable induced supplies and demands.
∗ Contacts:
[email protected], {emersonswr, kenshinping, songzuo.z}@gmail.com
1.1
Our contribution
In this paper, we propose a graph model to analyze the vehicle pricing and dispatching problem mentioned above. In
the graph, each node refers to a region in the city and each edge refers to a possible trip that includes a pair of origin
and destination as well as a cost associated with the trip on this edge. The design problem is, for the platform, to set a
price and specify the vehicle dispatch for each edge at each time step. Drivers are considered to be non-strategic in
our model, meaning that they will accept whatever offer is assigned to them. The objective of the platform can be either its revenue, the GMV, or any convex combination of the two.
Our model naturally induces a Markov Decision Process (MDP) with the driver distributions on each node as
states, the price and dispatch along each edge as actions, and the revenue as the immediate reward. Although the corresponding mathematical program is not convex (and thus computationally hard to solve) in general, we show that
it can be reduced to a convex one without loss of generality. In particular, in the resulting convex program where the
throughput along each source and destination pair in each time period are the variables, all the constraints are linear
and hence the exact optimal solutions can be efficiently computed (Theorem 3.1).
We further characterize the optimal solution via primal-dual analysis. In particular, a pricing scheme is optimal
if and only if the marginal contribution of the throughput along each edge equals the system-wise marginal
contribution of additional supply minus the difference of the long term contributions of unit supply at the origin and
the destination (see Section 5).
We also perform extensive empirical analysis based on a public dataset with more than 8.5 million orders. We
compare our policy with other intensively studied policies such as surge pricing (Chen and Sheldon, 2015; Cachon
et al., 2016; Castillo et al., 2017). Our simulations show that, in both the static and the dynamic environment, our
optimal pricing and dispatching scheme outperforms surge pricing by 17% and 33%, respectively. Interestingly, our simulations show that our optimal policy is much more effective at dispatching vehicles than the other policies, which directly accounts for its performance boost (see Section 6).
1.2
Related work
Driven by real-life applications, a large body of research has been done on ride-share markets. Some of these works employ queuing networks to model the markets (Iglesias et al., 2016; Banerjee et al., 2015; Tang et al., 2016). Iglesias
et al. (2016) describe the market as a closed, multi-class BCMP queuing network which captures the randomness of
customer arrivals. They assume that the number of customers is fixed, since customers only change their locations
but don’t leave the network. In contrast, the number of customers is dynamic in our model and we only consider those who ask for a ride (or send a request to the platform). Banerjee et al. (2015) also use a queuing-theoretic
approach to analyze the ride-share markets and mainly focus on the behaviors of drivers and customers. They assume
that drivers enter or leave the market with certain probabilities. Bimpikis et al. (2016) account for the spatial dimension of pricing schemes in ride-share markets. They set a price for each region and their goal is to rebalance the supply and demand of the whole market. In contrast, we set a price for each route and aim to maximize the total revenue or social welfare of the platform. We also refer the readers to the line of research initiated by Ma et al. (2013) for
the problems about the car-pooling in the ride-sharing systems (Alonso-Mora et al., 2017; Zhao et al., 2014; Chan and
Shaheen, 2012).
Many works on ride-sharing consider both the customers and the drivers to be strategic, where the drivers
may reject the requests or leave the system if the prices are too low (Banerjee et al., 2015; Fang et al., 2017). As we
mentioned, if the revenue sharing ratios between the platform and the drivers can be dynamic, then the pricing
problem and the revenue sharing problem could be independent and hence the drivers are non-strategic in the pricing
problem. In addition, the platform can also increase the profit by adopting dynamic revenue sharing schemes (Balseiro
et al., 2017).
Another work closely related to ours is by Banerjee et al. (2017). Their work is concurrent and has been developed
independently from ours. In particular, the customers arrive according to a queuing model and their pricing policy is
state-independent and depends on the transition volume. Both their and our models are built upon the underlying
Markovian transitions between the states (the distribution of drivers over the graph). The major differences are: (i) our
model is built for the dynamic environments with a very large number of customers (each of them is non-atomic) to
match practical situations, while theirs adopts a discrete-agent setting; (ii) they overcome the non-convexity of the
problem by relaxation and focus only on concave objectives, which makes this work hard to use for real applications,
while we solve the problem via randomized pricing and transform the problem to a convex program; (iii) they prove
approximation bounds of the relaxation problem, while we give exact optimal solutions of the problem by efficiently
solving the convex program.
2
Model
A passenger (she) enters the ride-sharing platform and sends a request including her origin and destination to the
platform. The platform receives the request and determines a price for it. If the user accepts the price, then the platform may decide whether to send a driver (he) to pick her up. The platform is also able to dispatch drivers from one place to another even if there is no request to be served. Through the pricing and dispatching methods above, the goal of maximizing
revenue or social welfare of the entire platform can be achieved. Our model incorporates the two methods into a
simple pricing problem. In this section, we define basic components of our model and consider two settings: dynamic
environments with a finite time horizon and static environments with an infinite time horizon. Finally we reduce the
action space of the problem and give a simple formulation.
Requests We use a strongly connected digraph G = (V, E) to model the geographical information of a city.
Passengers can only take rides from node to node on the graph. When a passenger enters the platform, she expects
to get a ride from node s to node t, and is willing to pay at most x ≥ 0 for the ride. She then sends to the platform a
request, which is associated with the tuple e = (s, t). Upon receiving the request, the platform sets a price p for it. If
the price is accepted by the passenger (i.e., x ≥ p), then the platform tries to send a driver to pick her up. We say that
the platform rejects the request, if no driver is available.
A request is said to be accepted if the passenger accepts the price p and a driver is available. Otherwise,
the request is considered to end immediately.
Drivers Clearly, within each time period, the total number of accepted requests starting from s cannot be more
than the number of drivers available at s. Formally, let q(e) denote the total number of accepted requests along edge e; then
    Σ_{e∈OUT(v)} q(e) ≤ w(v),  ∀v ∈ V,    (2.1)
where OUT(v) is the set of edges starting from v and w(v) is the number of currently available drivers at node v.
In particular, we assume that both the total number of drivers and the number of requests are very large, which is
often the case in practice, and consider each driver and each request to be non-atomic. For simplicity, we normalize
the total amount of drivers on the graph to be 1; thus w(v) is a real number in [0, 1]. We also normalize the number of requests on each edge by the total number of drivers. Note that the amount of requests on an edge e can be more than 1 if there are more requests on e than the total number of drivers on the graph.
Geographic Status For each accepted request on edge e, the platform will have to cover a transportation cost cτ (e)
for the driver. Meanwhile, the assigned driver, who is currently at node s, will not be available until he arrives at the destination t. Let ∆τ(e) be the traveling time from s to t and τ be the time step at which the driver leaves s. He will be available again at time step τ + ∆τ(e) at node t. Formally, the amount of available drivers at any v ∈ V evolves according to the following equation:
    w_{τ+1}(v) = w_τ(v) − Σ_{e∈OUT(v)} q_τ(e) + Σ_{e∈IN(v)} q_{τ+1−∆τ(e)}(e),    (2.2)
where IN(v) is the set of edges ending at v. Here we add subscripts to emphasize the timestamp for each quantity. In
particular, throughout this paper, we focus on the discrete time step setting, i.e., τ ∈ N.
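To make the state evolution concrete, the following Python sketch simulates equation (2.2) on a toy instance. It is only an illustration under made-up numbers: the graph, travel times, and accepted flows below are hypothetical and are not taken from the paper or the dataset.

from collections import defaultdict

# Hypothetical toy instance: edges e = (s, t) with integer travel times (in time steps).
edges = [("A", "B"), ("B", "A"), ("A", "A")]
travel_time = {("A", "B"): 2, ("B", "A"): 1, ("A", "A"): 1}

# Initial driver distribution w_1(v), normalized to sum to 1 as in the model.
w = {"A": 0.7, "B": 0.3}

# Accepted flows q_tau(e) for two time steps; they satisfy the capacity constraint (2.1).
q = {1: {("A", "B"): 0.4, ("A", "A"): 0.2, ("B", "A"): 0.3},
     2: {("A", "B"): 0.2, ("A", "A"): 0.1, ("B", "A"): 0.0}}

arrivals = defaultdict(float)   # arrivals[(tau, v)]: driver mass becoming available at v at step tau

for tau in sorted(q):
    for (s, t), flow in q[tau].items():
        w[s] -= flow                                     # drivers leave s at step tau
        arrivals[(tau + travel_time[(s, t)], t)] += flow
    for v in w:                                          # equation (2.2): trips finishing at step tau + 1
        w[v] += arrivals.pop((tau + 1, v), 0.0)
    print("w after step", tau, w)

The total driver mass, available plus in transit, is conserved by construction, which is exactly the bookkeeping expressed by equation (2.2).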
Demand Function As we mentioned, the platform could set different prices for the requests. Such prices may
vary with the request edge e, the time step τ, and the driver distribution, but must be independent of the passenger’s
private value x as it is not observable. Formally, let Dτ (·|e) : R+ → R+ be the demand function of edge e, i.e., Dτ (p|e)
is the amount of requests on edge e with private value x ≥ p in time step τ.1 Then the amount of accepted requests
qτ (e) ≤ E[Dτ (pτ (e)|e)], where the expectation is taken over the potential randomness of the pricing rule pτ (e).2
1 In practice, such a demand function can be predicted from historical data (Tong et al., 2017; Moreira-Matias et al., 2013).
2 The randomized pricing rule may set different prices for the requests on the same edge e.
Design Objectives In this paper, we consider a class of state-irrelevant objective functions. A function is state-irrelevant if its value depends only on the amount of accepted requests on each edge q(e) but not on the driver distribution of the system w(v). Note that a wide range of objectives are included in our class, such as the revenue of the platform:
    REVENUE(p, q) = Σ_{e,τ} E[(p_τ(e) − c_τ(e)) · q_τ(e)],
and the social welfare of the entire system:
    WELFARE(p, q) = Σ_{e,τ} E[(x − c_τ(e)) · q_τ(e)].
In general, our techniques work for any state-irrelevant objectives. Let g(p, q) denote the general objective
function and the dispatching and pricing problem can be formulated as follows:
    maximize   Σ_{e,τ} g(p_τ(e), q_τ(e)|e)    (2.3)
    subject to (2.1) and (2.2).
Static and Dynamic Environment In general, our model is defined for a dynamic environment in the sense that
the demand function Dτ and the transportation cost cτ could be different for each time step τ. In particular, we study
the problem (2.3) in general dynamic environments with finite time horizon from τ = 1 to T, where the initial driver
distribution w1 (v) is given as input.
In addition, we also study the special case of a static environment with an infinite time horizon, where Dτ ≡ D and cτ ≡ c are constant across time steps.
2.1
Reducing the action space
In this section, we rewrite the problem to an equivalent reduced form by incorporating the action of dispatching
into pricing, i.e., using p to express q. The idea is straightforward: (i) for the requests rejected by the platform, the
platform could equivalently set an infinitely large price; (ii) if the platform is dispatching available drivers (without
requests) from node s to t, we can create virtual requests from s to t with 0 value and let the platform set price 0 for
these virtual requests. In fact, we can assume without loss of generality that D(0|e) ≡ 1, the total amount of drivers,
because one can always add enough virtual requests for the edges with maximum demand less than 1 or remove the
requests with low values for the edges whose maximum demand exceeds the total driver supply, 1.
As a result, we may conclude that q(e) ≤ D(p|e). Since our goal is to maximize the objective g(p, q), raising prices
to achieve the same amount of flow q(e) (such that E[D(p|e)] = q(e)) never eliminates the optimal solution. In other
words,
Observation 2.1. The original problem is equivalent to the following reduced problem, where the flow variables qτ (e)
are uniquely determined by the price variables pτ (e):
    maximize   Σ_{e,τ} g(p_τ(e), D_τ(p_τ(e)|e))    (2.4)
    subject to q_τ(e) = E[D_τ(p_τ(e)|e)],
               (2.1) and (2.2).
3
Problem Analysis
In this section, we demonstrate how the original problem (2.4) can be equivalently rewritten as a Markov decision
process with a convex objective function. Formally,
Theorem 3.1. The original problem (2.4) of the instance ⟨G, D, g, ∆τ⟩ is equivalent to a Markov decision process problem of another instance ⟨G′, D′, g′, ∆τ′⟩ with g′ being convex.
The proof of Theorem 3.1 follows immediately from Lemmas 3.2 and 3.4. The equivalent Markov decision process
problem could be formulated as a convex program, and hence can be solved efficiently.
3.1
Unifying travel time
Note that the original problem (2.4), in general, is not an MDP by itself, because the current state w_{τ+1}(v) may depend on the action q_{τ+1−∆τ(e)} in (2.2). Hence our first step is to equivalently map the original instance to another instance in which the traveling time is always 1, i.e., ∆τ(e) ≡ 1:
Lemma 3.2 (Unifying travel time). The original problem (2.4) of a general instance ⟨G, D, g, ∆τ⟩ is equivalent to the problem of a 1-travel time instance ⟨G′, D′, g′, ∆τ′⟩, where ∆τ′(·) ≡ 1.
Intuitively, we tackle this problem by adding virtual nodes into the graph to replace the original edges. This operation splits each trip into unit-time segments, so that at every time step each driver finishes one segment and becomes available at some (possibly virtual) node.
Proof. For edges with traveling time ∆τ(e) = 1, we are done.
For edges with traveling time ∆τ(e) > 1, we add ∆τ(e) − 1 virtual nodes v_1^e, . . . , v_{∆τ(e)−1}^e into the graph, together with the directed edges connecting them, to replace the original edge e, i.e.,
    E′(e) = {(s, v_1^e), (v_1^e, v_2^e), . . . , (v_{∆τ(e)−2}^e, v_{∆τ(e)−1}^e), (v_{∆τ(e)−1}^e, t)},
    E′ = ∪_{e∈E} E′(e),    V′ = ∪_{e∈E} {v_1^e, . . . , v_{∆τ(e)−1}^e} ∪ V.
We set the demand function of each new edge e′ ∈ E′(e) to be identical to that of the original edge e: D′(·|e′) ≡ D(·|e).
An important but natural constraint is that if a driver handles a request on edge e of the original graph, then he
must traverse all edges in E′(e) of the new graph, because he cannot leave the passenger halfway. To guarantee this, we only need to ensure that all edges in E′(e) have the same price. Also, we need to split the objective of traveling along e across the new edges, i.e., each new edge has objective function
    g′(p, q|e′) = g(p, q|e)/∆τ(e), ∀e′ ∈ E′(e).
One can easily verify that the above operations increase the graph size to at most max_{e∈E} ∆τ(e) times that of the original. In particular, there is a straightforward bijection between the dispatching behaviors on the original graph G = (V, E) and the new graph G′ = (V′, E′). Hence we can always recover the solution to the original problem.
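The virtual-node construction used in this proof can be written down directly. The following Python helper is a hypothetical sketch (names such as unify_travel_time are ours, not the paper's); it expands every edge with an integer travel time greater than 1 into a chain of unit-travel-time edges that must share the same price:

def unify_travel_time(edges, travel_time):
    # edges: list of (s, t) pairs; travel_time: dict mapping (s, t) to a positive integer.
    # Returns (new_edges, groups): groups[(s, t)] lists the unit edges replacing the
    # original edge e = (s, t); these edges must be assigned a common price.
    new_edges, groups = [], {}
    for (s, t) in edges:
        d = travel_time[(s, t)]
        if d == 1:
            groups[(s, t)] = [(s, t)]
            new_edges.append((s, t))
            continue
        # Virtual nodes v_1^e, ..., v_{d-1}^e, named after the original edge e = (s, t).
        chain = [s] + ["v%d^(%s,%s)" % (k, s, t) for k in range(1, d)] + [t]
        unit_edges = list(zip(chain[:-1], chain[1:]))
        groups[(s, t)] = unit_edges
        new_edges.extend(unit_edges)
    return new_edges, groups

new_edges, groups = unify_travel_time([("A", "B")], {("A", "B"): 3})
print(groups[("A", "B")])   # a chain of 3 unit-time edges through two virtual nodes

The per-edge objective of the original edge is then split evenly over its chain, g′(p, q|e′) = g(p, q|e)/∆τ(e), exactly as above.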
3.2
Flow formulation and randomized pricing
By Lemma 3.2, the original problem (2.4) can be formulated as an MDP:
Definition 3.3 (Markov Decision Process). The vehicle pricing and dispatching problem is a Markov decision process,
denoted by a tuple (G, D, g, S, A, W), where G = (V, E) is the given graph, D is the demand function, objective g is the
reward function, S = ∆(V) is the state space including all possible driver distributions over the nodes, A is the action
space, and W is the state transition rule:
    w_{τ+1}(v) = w_τ(v) − Σ_{e∈OUT(v)} q_τ(e) + Σ_{e∈IN(v)} q_τ(e).    (3.1)
However, by naïvely using the pricing functions pτ (e) as the actions, the induced flow qτ (e) = E[Dτ (pτ (e)|e)],
in general, is neither convex nor concave. In other words, both the reward g and the state transition W of the
corresponding MDP are non-convex. As a result, it is hard to solve the MDP efficiently.
In this section, we show that by formulating the MDP with the flows qτ (e) as actions, the corresponding MDP is
convex.
Lemma 3.4 (Flow-based MDP). In the MDP (G, D, g, S, A, W) with all possible flows as the action set A, i.e., A = [0, 1]^{|E|},
the state transition rules are linear functions of the flows and the reward functions g are convex functions of the flows.
Proof. To do this, we first need to rewrite the prices pτ (e) as functions of the flows qτ (e). In general, since the prices
could be randomized, the inverse function of qτ (e) = E[Dτ (pτ (e)|e)] is not unique.
Note that conditional on fixed flows qτ (e), the state transition of the MDP is also fixed. In this case, different
prices yielding such specific flows only differ in the rewards. In other words, it is without loss of generality to let the inverse function of the prices be as follows:
    p_τ(e) = arg max_p g_τ(p, q_τ(e)|e),  s.t. q_τ(e) = E[D_τ(p|e)].
In particular, since the objective function g studied in this paper is linear and weakly increasing in the prices p and the demand function D(p|e) is decreasing in p, the inverse price function can be defined as follows:
• Let g_τ(q|e) = g_τ(D_τ^{−1}(q|e), q|e), i.e., the objective obtained by setting the maximum fixed price p = D_τ^{−1}(q|e) such that the induced flow is exactly q;
• Let ĝ_τ(q|e) be the ironed objective function, i.e., the smallest concave function that upper-bounds g_τ(q|e) (see Figure 1);
• For any given q_τ(e), the maximum objective on edge e is ĝ_τ(q_τ(e)|e) and can be achieved by setting the price to be randomized over D_τ^{−1}(q′|e) and D_τ^{−1}(q″|e).
Figure 1: Ironed objective function
Finally, we prove the above claim to complete the proof of Lemma 3.4.
By the definition of ĝ_τ(q|e), for any randomized price p,
    E_p[g_τ(D_τ(p|e)|e)] ≤ E_p[ĝ_τ(D_τ(p|e)|e)].
Since ĝ is concave, applying Jensen's inequality yields:
    E_p[ĝ_τ(D_τ(p|e)|e)] ≤ ĝ_τ(E_p[D_τ(p|e)] | e) = ĝ_τ(q̄|e).
Now it suffices to show that the upper bound ĝ_τ(q̄|e) is attainable.
If ĝ_τ(q̄|e) = g_τ(q̄|e), then the right-hand side can be achieved by letting p_τ(e) be the deterministic price D_τ^{−1}(q̄|e).
Otherwise, let I = (q′, q″) be the ironed interval containing q̄ (where ĝ_τ(q|e) > g_τ(q|e) for all q ∈ I, but ĝ_τ(q′|e) = g_τ(q′|e) and ĝ_τ(q″|e) = g_τ(q″|e)). Thus q̄ can be written as a convex combination of the end points q′ and q″: q̄ = λq′ + (1 − λ)q″. Note that the function ĝ_τ is linear within the interval I. Therefore
    λg_τ(q′|e) + (1 − λ)g_τ(q″|e) = λĝ_τ(q′|e) + (1 − λ)ĝ_τ(q″|e) = ĝ_τ(λq′ + (1 − λ)q″|e) = ĝ_τ(q̄|e).
In other words, the upper bound ĝ_τ(q̄|e) can be achieved by setting the price to be D_τ^{−1}(q′|e) with probability λ and D_τ^{−1}(q″|e) with probability 1 − λ. Meanwhile, the flow q_τ(e) remains the same.
Proof of Theorem 3.1. The theorem is implied by Lemma 3.2 and Lemma 3.4. In particular, the reward function is the
ironed objective function ĝ.
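Numerically, the ironed objective ĝ_τ(·|e) is the upper concave envelope of g_τ(·|e) and can be computed on a grid. The Python sketch below is our own illustration with a made-up single-edge objective, not the paper's implementation; it obtains the envelope from the upper hull of the points (q, g(q)), and the grid points where ĝ > g mark the ironed intervals on which the optimal price is randomized:

import numpy as np

def upper_concave_envelope(q, g):
    # q must be sorted increasingly; returns the smallest concave function >= g on the grid.
    hull = []                          # indices of the upper-hull vertices
    for i in range(len(q)):
        while len(hull) >= 2:
            i1, i2 = hull[-2], hull[-1]
            cross = (q[i2] - q[i1]) * (g[i] - g[i1]) - (g[i2] - g[i1]) * (q[i] - q[i1])
            if cross >= 0:             # middle point lies below the chord: drop it
                hull.pop()
            else:
                break
        hull.append(i)
    return np.interp(q, q[hull], g[hull])

# Made-up objective g(q) with a non-concave bump (not derived from the dataset).
q = np.linspace(0.0, 1.0, 201)
g = q * (2.0 - 2.0 * q) + 0.15 * np.sin(6 * np.pi * q)
g_hat = upper_concave_envelope(q, g)
ironed = g_hat > g + 1e-9              # grid points inside ironed intervals
print("fraction of the grid that is ironed:", float(ironed.mean()))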
In the rest of the paper, we will focus on the following equivalent problem:
    maximize   Σ_{e,τ} ĝ_τ(q_τ(e)|e)    (3.2)
    subject to (2.1) and (3.1).
4
Optimal Solution in Static Environment
In this setting, we restrict our attention to the case where the environment is static, hence the objective function does
not change over time, i.e., ∀τ ∈ [T], ĝτ (q|e) ≡ ĝ(q|e). We aim to find the optimal stationary policy that maximizes the
objective function, i.e., the decisions q_τ depend only on the current state w_τ.
In this section, we discretize the MDP problem and focus on stable policies. With the introduction of the ironed
objective function ĝτ , we show that for any discretization scheme, the optimal stationary policy of the induced
discretized MDP is dominated by a stable dispatching scheme. Then we formulate the stable dispatching scheme as a
convex problem, which means the optimal stationary policy can be found in polynomial time.
Definition 4.1. A stable dispatching scheme is a pair of state and policy (wτ , π), such that if policy π is applied, the
distribution of available drivers does not change over time, i.e., wτ+1 (v) = wτ (v).
In particular, under a stable dispatching scheme, the state transition rule (3.1) is equivalent to the following form:
    Σ_{e∈OUT(v)} q(e) = Σ_{e∈IN(v)} q(e).    (4.1)
Definition 4.2. Let M = (G, D, ĝ, S, A, W) be the original MDP problem. A discretized MDP DM with respect to M is
a tuple (G d, Dd, ĝd, Sd, Ad, Wd ), where G d = G, Dd = D, ĝd = ĝ, Wd = W, Sd is a finite subset of S, and Ad is a finite
subset of A that contains all feasible transition flows between every two states in Sd .
Theorem 4.3. Let DM and M be a discretized MDP and the corresponding original MDP. Let πd : Sd → Ad be an
optimal stationary policy of DM. Then there exists a stable dispatching scheme (w, π), such that the time-average
objective of π in M is no less than that of πd in DM.
Proof. Consider policy π_d in DM. Starting from any state in S_d with policy π_d, let {w_τ}_{τ=0}^{∞} be the subsequent state sequence. Since DM has finitely many states and policy π_d is a stationary policy, there must be an integer n such that w_n = w_m for some m < n, and from time step m on, the state sequence becomes periodic. Define
    w̄ = (1/(n−m)) Σ_{k=m}^{n−1} w_k,    q̄ = (1/(n−m)) Σ_{k=m}^{n−1} π_d(w_k).
Denote by πd (wk |e) or qd (e) the flow at edge e of the decision πd (wk ). Sum the transition equations for all the time
steps m ≤ k < n, and we get:
    Σ_{k=m}^{n−1} w_{k+1}(v) − Σ_{k=m}^{n−1} w_k(v) = Σ_{k=m}^{n−1} ( Σ_{e∈IN(v)} π_d(w_k|e) ) − Σ_{k=m}^{n−1} ( Σ_{e∈OUT(v)} π_d(w_k|e) ),
which, after dividing by n − m and using the periodicity of the state sequence, gives
    w̄(v) = w̄(v) − Σ_{e∈OUT(v)} q̄(e) + Σ_{e∈IN(v)} q̄(e).
Also, policy πd is a valid policy, so ∀v ∈ V and ∀m ≤ k < n:
    Σ_{e∈OUT(v)} q_k(e) ≤ w_k(v).
Summing over k and dividing by n − m, we have
    Σ_{e∈OUT(v)} q̄(e) ≤ w̄(v).
Now consider the original problem M. Let w = w̄ and π be any stationary policy such that:
• π(w) = q̄;
• starting from any state w′ ≠ w, policy π leads to state w within finitely many steps.
Note that the second condition can be easily satisfied since the graph G is strongly connected.
With the above definitions, we know that (w, π) is a stable dispatching scheme. Now we compare the objectives of
the two policies π_d and π. The time-average objective function is not sensitive to the first finitely many immediate objectives, and since the state sequences of both policies π_d and π are periodic, their time-average objectives can be written as
    OBJ(π_d) = (1/(n−m)) Σ_{k=m}^{n−1} Σ_{e∈E} ĝ(π_d(w_k|e)|e),    OBJ(π) = Σ_{e∈E} ĝ(q̄(e)|e).
By Jensen's inequality, we have
    OBJ(π_d) = (1/(n−m)) Σ_{k=m}^{n−1} Σ_{e∈E} ĝ(π_d(w_k|e)|e) ≤ Σ_{e∈E} ĝ( (1/(n−m)) Σ_{k=m}^{n−1} π_d(w_k|e) | e ) = Σ_{e∈E} ĝ(q̄(e)|e) = OBJ(π).
With Theorem 4.3, we know there exists a stable dispatching scheme that dominates the optimal stationary policy of the discretized MDP. Thus we now focus only on stable dispatching schemes. The problem of finding an optimal stable dispatching scheme can be formulated as a convex program with linear constraints:
    maximize   Σ_{e∈E} ĝ(q(e)|e)    (4.2)
    subject to (2.1) and (4.1).
Because ĝ(q|e) is concave, the program is convex. Since convex programs can be solved in polynomial time, our algorithm for finding the optimal stationary policy is efficient.
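For illustration, program (4.2) can be instantiated directly in an off-the-shelf convex solver. The CVXPY sketch below uses a made-up two-region instance and a simple concave surrogate for ĝ(q|e); it is not the paper's MATLAB/fmincon implementation, only a minimal example of the formulation. Both the stationary driver distribution w and the flows q are decision variables here, since a stable dispatching scheme is a pair of a state and a policy:

import cvxpy as cp

nodes = ["A", "B"]
edges = [("A", "A"), ("A", "B"), ("B", "A"), ("B", "B")]
# Concave surrogate g_hat(q|e) = a_e * sqrt(q) - c_e * q with made-up parameters.
a = {("A", "A"): 1.0, ("A", "B"): 2.0, ("B", "A"): 1.5, ("B", "B"): 0.8}
c = {e: 0.3 for e in edges}

q = {e: cp.Variable(nonneg=True) for e in edges}
w = {v: cp.Variable(nonneg=True) for v in nodes}

constraints = [cp.sum(list(w.values())) == 1]           # total driver mass normalized to 1
for v in nodes:
    out_flow = cp.sum([q[e] for e in edges if e[0] == v])
    in_flow = cp.sum([q[e] for e in edges if e[1] == v])
    constraints += [out_flow <= w[v],                    # capacity constraint (2.1)
                    out_flow == in_flow]                 # stationarity constraint (4.1)

objective = cp.Maximize(cp.sum([a[e] * cp.sqrt(q[e]) - c[e] * q[e] for e in edges]))
prob = cp.Problem(objective, constraints)
prob.solve()
print("optimal value:", prob.value)
print({e: round(float(q[e].value), 3) for e in edges})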
5
Characterization of optimality
In this section, we characterize the optimal solution via dual analysis. For ease of presentation, we consider
Program 4.2 in the static environment with infinite horizon. Our characterization directly extends to the dynamic
environment.
The Lagrangian is defined to be
    L(q, λ, µ) = −Σ_{e∈E} ĝ(q|e) + λ( Σ_{e∈E} q(e) − 1 ) + Σ_{v∈V} µ_v ( Σ_{e∈OUT(v)} q(e) − Σ_{e∈IN(v)} q(e) )
               = −λ + Σ_{e∈E} [ −ĝ(q|e) + (λ + µ_s − µ_t) q(e) ],
where s and t are the origin and destination of e, i.e., e = (s, t), and λ and µ are Lagrangian multipliers with λ ≥ 0. Note that we implicitly transform Program 4.2 to the standard form that minimizes the objective −Σ_{e∈E} ĝ(q|e).
The Lagrangian dual function is
    h(λ, µ) = inf_q L(q, λ, µ) = −λ + Σ_{e∈E} [ −ĝ(q̃(e)|e) + (λ + µ_s − µ_t) q̃(e) ],
where q̃(e) is a function of λ and µ such that λ + µ_s − µ_t = ĝ′(q̃|e), where ĝ′(q̃|e) is the derivative of the objective
function with respect to flow q. The dual program corresponding to Program 4.2 is
    maximize   h(λ, µ)    (5.1)
    subject to λ ≥ 0.
According to the KKT conditions, we have the following characterization for optimal solutions.
Theorem 5.1. Let q∗(e) be a feasible solution to the primal program (4.2) and (λ∗, µ∗) be a feasible solution to the dual program (5.1). Then both q∗(e) and (λ∗, µ∗) are primal and dual optimal, with −Σ_{e∈E} ĝ(q∗|e) = h(λ∗, µ∗), if and only if
    λ∗ ( Σ_{e∈E} q∗(e) − 1 ) = 0,    (5.2)
    ĝ′(q∗|e) = λ∗ + µ∗_s − µ∗_t,  ∀e ∈ E.    (5.3)
Proof. According to the definition of h(λ, µ), we have h(λ∗, µ∗ ) = inf q L(q, λ∗, µ∗ ). Since ĝ(q|e) are concave functions,
Equation 5.3 is equivalent to the fact that q∗ (e) minimizes the function L(q, λ∗, µ∗ ).
    h(λ∗, µ∗) = inf_q L(q, λ∗, µ∗)
             = L(q∗, λ∗, µ∗)
             = −Σ_{e∈E} ĝ(q∗|e) + λ∗( Σ_{e∈E} q∗(e) − 1 ) + Σ_{v∈V} µ∗_v ( Σ_{e∈OUT(v)} q∗(e) − Σ_{e∈IN(v)} q∗(e) )
             = −Σ_{e∈E} ĝ(q∗|e),
where the last equality uses Equation (5.2) and the fact that q∗(e) is feasible.
Continuing with Theorem 5.1, we will analyze the dual variables from an economic angle and derive some interesting insights into this problem for real applications.
5.1
Economic interpretation
The dual variables have useful economic interpretations (see (Boyd and Vandenberghe, 2004, Chapter 5.6)). λ∗ is the
system-wise marginal contribution of the drivers (i.e. the increase in the objective function when a small amount of
drivers are added to the system). Note that by the complementary slackness (Equation 5.2), if λ∗ > 0, the sum of the
total flow must be 1, meaning that all drivers are busy, and more requests can be accepted (hence increasing revenue) if
more drivers are added to the system. Otherwise, there must be some idle drivers, and adding more drivers cannot
increase the revenue.
µ∗v is the marginal contribution of the drivers at node v. If we allow the outgoing flow from node v to be slightly
more than the incoming flow to node v, then µv is the revenue gain from adding more drivers at node v.
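In a numerical solver, these multipliers can be read off as the dual values of the corresponding constraints, so condition (5.3) can be inspected directly. The fragment below continues the hypothetical CVXPY sketch given after Section 4 (reusing its nodes, edges, a, c, q, and objective) and uses the aggregated constraint Σ_e q(e) ≤ 1 that appears in the Lagrangian above. Note that the reported signs of equality-constraint duals depend on the solver's convention, so the comparison with (5.3) should be read up to sign:

total_cap = cp.sum(list(q.values())) <= 1                        # multiplier lambda
balance = {v: (cp.sum([q[e] for e in edges if e[0] == v])
               == cp.sum([q[e] for e in edges if e[1] == v])) for v in nodes}   # multipliers mu_v
prob = cp.Problem(objective, [total_cap] + list(balance.values()))
prob.solve()

lam = float(total_cap.dual_value)                 # system-wise marginal value of extra drivers
mu = {v: float(balance[v].dual_value) for v in nodes}
for (s, t) in edges:
    qe = float(q[(s, t)].value)
    if qe > 1e-6:
        g_prime = a[(s, t)] / (2 * qe ** 0.5) - c[(s, t)]         # derivative of the surrogate g_hat
        # Compare with lambda + mu_s - mu_t from condition (5.3), up to the dual sign convention.
        print((s, t), round(g_prime, 3), round(lam + mu[s] - mu[t], 3))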
5.2
Insights for applications
The way we formulate and solve the problem, in fact, naturally leads to two interesting insights into this problem,
which are potentially useful for real applications.
1. Scalability In our model, the size of the convex program increases linearly in the number of edges, hence
quadratically in the number of regions. This could be a hidden obstacle to real applications, where the number of regions in a city might be quite large.
A key observation regarding this issue is that any dispatching policy induced by a real system is a feasible solution of our convex program, and any improvement (say, via gradient descent) from such a policy in fact leads to a better solution for this system. In other words, it might be hard to find the exact optimal or nearly optimal solutions, but it is easy to improve from the current state. Therefore, in practice, the platform can keep running the optimization in the background and apply the most recent policy to gain more revenue (or achieve a higher value of some other objective).
2. Alternative solution As suggested by the characterization and its economic interpretation, instead of solving the
convex programs directly, we also have an alternative way to find the optimal policy by solving the dual program.
The optimal policy can be easily recovered from dual optimal solutions. In particular, according to the economic
interpretation of dual variables, we need to estimate the marginal contributions of drivers.
More importantly, the number of dual variables (= the number of regions) is much smaller than the number of
primal variables (= the number of edges ≈ square of the former). So solving the dual program may be more efficient
when applied to real systems, and is also of independent interest.
6
Empirical Analysis
We design experiments to demonstrate the good performance of our algorithms for real applications. In this section,
we first describe the dataset and then introduce how to extract useful information for our model from the dataset.
Two benchmark policies, FIXED and SURGE, are compared with our pricing policy. The result analysis includes
demand-supply balance and instantaneous revenue in both static and dynamic environments.
6.1
Dataset
We perform our empirical analysis based on a public dataset from a major ride-sharing company. The dataset includes
the orders in a city for three consecutive weeks and the total number of orders is more than 8.5 million. An order is
created when a passenger sends a ride request to the platform.
Each order consists of a unique order ID, a passenger ID, a driver ID, an origin, a destination, an estimated price, and the timestamp when the order was created (see Table 1 for an example). The driver ID might be empty if no
driver was assigned to pick up the passenger. There are 66 major regions of the city and the origins and destinations
in the dataset are given as the region IDs. We say a request is related to a region if the region is either the origin or
the destination of the request, and the popularity of a region is defined as the number of related requests. Since some
of the regions in the dataset have very low popularity values, we only consider the most popular 21 or 5 regions in
the two settings (see Section 6.4 and Section 6.5 respectively for details). The related requests of the most popular 21
(or 5) regions cover about 90% (or 50%) of the total requests in the original dataset.
For ease of presentation, we relabel the region IDs in descending order of their popularities (so region #1 is the
most popular region). Figure 2 illustrates the frequencies of requests on different origin-destination pairs. From the
figure, one can see that the frequency matrix is almost symmetric and the destination of a request is most likely to be
in the same region as the origin.
    order    driver    user    origin    dest    price    timestamp
    hash     hash      hash    hash      hash    37.5     01-15 00:35:11
Table 1: An example of a row in the dataset, where "hash" stands for hash strings of the IDs whose exact values are not shown here.
Figure 2: The logarithmic frequencies of request routes (heatmap of routes: origin IDs versus destination IDs, colored by log frequency).
6.2
Data preparation
The travel times between nodes and the demand curves for edges are assumed known in our model. However, the dataset doesn't provide such information directly. We filter out "abnormal" requests and apply a linear regression to
estimate the relationship between travel time and price, which makes it possible to infer the travel time from the order price. For the demand curves, we observe the values on each edge and fit them to lognormal distributions.
Figure 3: The logarithmic frequencies of (time, price) pairs, with or without filtering the "abnormal" requests (panels: (a) time & price without filtering, (b) time & price with filtering; axes: time in minutes versus price in local currency).
Figure 4: Fitting request values to lognormal distributions (request value densities from region 9 to 11 and from region 6 to 2).
Distance and travel time The distance (or equivalently the travel time) from one region to another is required to
perform our simulation. We approximate the travel time by the time interval between two consecutive requests assigned to the same driver. In Figure 3(a), we plot the frequencies of requests with certain (time, price) pairs. We cannot see a clear relationship between time and price, which are supposed to be roughly linearly related in this figure.3 We think that this is due to the existence of two types of "abnormal" requests:
• Cancelled requests, usually with very short completion time but not necessarily low prices (appeared in the
right-bottom part of the plot);
• The last request of a working period, after which the driver might go home or have a rest. These requests usually
have very long completion time but not necessarily high enough prices (appeared in the left-top part of the
plot).
With the observations above, we filter out the requests with significantly longer or shorter travel time compared
with most of the requests with the same origin and destination. Figure 3(b) illustrates the frequencies of requests
after such filtering. As expected, the brightest region roughly surrounds the 30◦ line in the figure. By applying a
standard linear regression, the slope turns out to be approximately 0.5117 CNY per minute. One may also notice
some “right-shifting shadows” of the brightest region, which are caused by the surge-pricing policy with different
multipliers.
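The filtering and the per-minute rate estimation described above can be sketched as follows. This is a hypothetical reconstruction with pandas/NumPy; the column names (origin, dest, travel_min, price) are assumptions and not the dataset's actual schema:

import numpy as np
import pandas as pd

def estimate_per_minute_rate(orders: pd.DataFrame, lo=0.05, hi=0.95):
    # Drop "abnormal" requests: travel times far from the typical value for the same
    # origin-destination pair (cancelled rides, last rides of a working period).
    bounds = orders.groupby(["origin", "dest"])["travel_min"].quantile([lo, hi]).unstack()
    merged = orders.join(bounds, on=["origin", "dest"])
    kept = merged[(merged["travel_min"] >= merged[lo]) & (merged["travel_min"] <= merged[hi])]

    # Least-squares slope of price against travel time (CNY per minute), no intercept.
    t, p = kept["travel_min"].to_numpy(), kept["price"].to_numpy()
    slope = float(np.dot(t, p) / np.dot(t, t))
    return slope, kept

# slope, kept = estimate_per_minute_rate(orders)
# The paper reports a fitted slope of roughly 0.5117 CNY per minute on the filtered data.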
3 The price of a ride is the maximum of a two-dimensional linear function of the traveled distance and spent time, and a minimal price (which is 7 CNY, as one can see from the vertical bright line at price = 7 in Figure 3). Since the traveled distance is almost linearly related to the spent time, the price, if larger than the minimal price, should also be almost linearly related to the traveling time. Readers may notice from the figures that there are many requests with price less than 7 (even as low as 0). This is because many coupons are given to passengers to stimulate their demand for rides, and the prices given in the dataset are after applying the coupons.
Figure 5: Convergence of revenue (revenue per minute versus number of iterations; panels: (a) static environment, (b) dynamic environment).
Estimation of demand curves To estimate the demand curves, we first gather all the requests along the same edge (also within the same time period for the dynamic environment, see Section 6.5) and take the prices associated with the requests as the values of the passengers. Then, we fit the values of each edge (and each time period for the dynamic environment) to a lognormal distribution. The reason we choose the lognormal distribution is two-fold: (i) the data fits lognormal distributions quite well (see Figure 4 for examples); (ii) lognormal distributions are commonly used in the related literature (Ostrovsky and Schwarz, 2011; Lahaie and Pennock, 2007; Roberts et al., 2016; Shen and Tang, 2017).
We set the cost of traveling to be zero, because we do not have enough information from the dataset to infer the
cost.
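This demand-curve construction can be illustrated with SciPy. The snippet below is a minimal sketch on synthetic values (the numbers are made up, not taken from the dataset): it fits a lognormal distribution to the observed values on an edge and turns it into a demand function D(p|e), i.e., the amount of requests on e with value at least p, measured in units of the total driver supply as in Section 2:

import numpy as np
from scipy import stats

def fit_demand_curve(values, total_requests, total_drivers):
    # Fit a two-parameter lognormal (location fixed to 0) to the observed request values.
    shape, loc, scale = stats.lognorm.fit(values, floc=0)
    volume = total_requests / total_drivers            # normalization used in Section 2
    return lambda p: volume * stats.lognorm.sf(p, shape, loc=loc, scale=scale)

rng = np.random.default_rng(0)
values = rng.lognormal(mean=2.5, sigma=0.6, size=5000)   # synthetic stand-in for observed prices
D = fit_demand_curve(values, total_requests=5000, total_drivers=10000)
print(D(0.0), D(10.0), D(20.0))                          # demand is decreasing in the price p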
6.3
Benchmarks
We consider two benchmark policies:
• FIXED: fixed per-minute pricing, i.e., the price of a ride equals the estimated traveling time from the origin to the destination of this ride multiplied by a per-minute price α, where α is a constant across the platform.
• SURGE: based on the FIXED policy, using surge pricing to clear the local market when supply is not enough. In other words, the price of a ride equals the estimated traveling time multiplied by αβ, where α is the fixed per-minute price and β ≥ 1 is the surge multiplier. Note that β is dynamic and can be different for requests initiated in different regions, while the requests initiated in the same region share the same surge multiplier (a minimal sketch of both benchmark rules is given below).
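For concreteness, the two benchmark rules can be written as simple pricing functions. The Python sketch below is only an illustrative reading of the definitions above; in particular, the surge-multiplier rule shown is a made-up proportional heuristic and may differ from the exact clearing rule used in the simulations:

ALPHA = 0.5117            # per-minute base price fitted from the data (CNY per minute)
BETA_MAX = 5.0            # cap on the surge multiplier used in the simulations

def fixed_price(travel_min):
    # FIXED: price proportional to the estimated travel time.
    return ALPHA * travel_min

def surge_price(travel_min, local_demand, local_supply):
    # SURGE: FIXED scaled by a region-level multiplier beta in [1, BETA_MAX].
    ratio = local_demand / max(local_supply, 1e-9)
    beta = min(max(ratio, 1.0), BETA_MAX)
    return ALPHA * beta * travel_min

print(fixed_price(20.0), surge_price(20.0, local_demand=0.3, local_supply=0.1))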
In the rest of this section, we evaluate and compare our dynamic pricing policy DYNAM with these two benchmarks
in both static and dynamic environments.
6.4
Static environment
We first present the empirical analysis for the static environment, which is simpler than the dynamic environment
that we will consider next, hence easier to begin with.
In the static environment, we use the average of the statistics of all 21 days as the inputs to our model. For
example, the demand function D(p|e) is estimated based on the frequencies and prices of the requests along edge
e averaged over time. Similarly, the total supply of drivers is estimated based on the total durations of completed
requests.
With the static environment, we can instantiate the convex program (4.2) and solve it via standard gradient descent algorithms. In our case, we simply use the MATLAB function fmincon to solve the convex program on a PC with an Intel i5-3470 CPU. We did not apply any additional techniques to speed up the computation, as optimizing running time is not the main focus of this paper. Figure 5(a) illustrates the convergence of the objective value (revenue) with an increasing number of iterations, where each iteration takes roughly 0.2 seconds.
Figure 6: Instantaneous revenue in different environments (revenue per minute over 24 hours for DYNAM, FIXED, and SURGE; panels: (a) static environment, (b) dynamic environment).
Figure 7: Instantaneous supply ratios for different regions (supply/demand over 24 hours for DYNAM, FIXED, and SURGE in regions #1 to #5).
To compare the performance of policy DYNAM with the benchmark policies FIXED and SURGE, we also simulate them under the same static environment. In particular, the length of each time step is set to 15 minutes and the number of steps in the simulation is 96 (so 24 hours in total). For both FIXED and SURGE, we use the per-minute price fitted from the data as the base price, α = 0.5117, and allow the surge ratio β to be in [1.0, 5.0]. To make the evaluations comparable, we use the distribution of drivers under the stationary solution of our convex program as the initial driver distribution for FIXED and SURGE. Figure 6(a) shows how the instantaneous revenues evolve over time, where DYNAM on average outperforms FIXED and SURGE by roughly 24% and 17%, respectively.
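The evaluation protocol can be summarized by a short simulation loop. The sketch below is a schematic reconstruction rather than the actual simulator: demand, policy_price, and travel_steps are hypothetical stand-ins for the fitted demand model, the policy under evaluation, and the estimated travel times, and requests are served greedily edge by edge:

import numpy as np

STEPS, STEP_MIN = 96, 15          # one day in 15-minute steps

def simulate(policy_price, w0, edges, demand, travel_steps):
    w = dict(w0)                  # current driver distribution
    busy = []                     # (arrival_step, node, mass) of drivers currently on a trip
    revenue = np.zeros(STEPS)
    for t in range(STEPS):
        # Release the drivers whose trips finish at this step.
        w = {v: w[v] + sum(m for (at, u, m) in busy if at == t and u == v) for v in w}
        busy = [x for x in busy if x[0] > t]
        for (s, d) in edges:
            p = policy_price(t, (s, d), w)                 # price set by the policy
            flow = min(demand(t, (s, d), p), w[s])         # accepted flow, capped by local supply
            w[s] -= flow
            busy.append((t + travel_steps[(s, d)], d, flow))
            revenue[t] += p * flow
    return revenue / STEP_MIN                              # instantaneous revenue per minute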
Since our policy DYNAM is stationary under the static environment, its instantaneous revenue is constant (the red horizontal line). Interestingly, the instantaneous revenue curves of both FIXED and SURGE are decreasing, and that of FIXED decreases much faster. This observation reflects that neither FIXED nor SURGE does well in dispatching the vehicles: FIXED simply never balances supply and demand, while SURGE shows better control over the balance of supply and demand because it seeks to balance the demand with local supply when supply cannot meet demand. However, neither of them really balances the global supply and demand, so the instantaneous revenue decreases as supply and demand become more unbalanced.
In other words, the empirical analysis supports our insight about the importance of vehicle dispatching in
ride-sharing platforms.
6.5
Dynamic environment
In the dynamic environment, the parameters (i.e., the demand functions and the total number of requests) are estimated
based on the statistics of each hour but averaged over different days. For example, the demand functions Dh (p|e)
are defined for each edge e and each of the 24 hours, h ∈ {0, . . . , 23}. In particular, we only use the data from the
weekdays (14 days in total)4 among the most popular 5 regions for the estimation.
Again, we instantiate the convex program (3.2) for the dynamic environment and solve it via the fmincon function on the same PC that we used for the static case. Figure 5(b) shows the convergence of the objective value with an increasing number of iterations, where each iteration takes less than 1 minute.
4 The reason we only use data from weekdays is that the dynamics of demand and supply on weekdays have similar patterns, which are quite different from the patterns on weekends.
We set up FIXED and SURGE in exactly the same way as we did for the static environment, except that the initial driver distribution is taken from the solution of the convex program for the dynamic environment.
Figure 6(b) shows the instantaneous revenue along the simulation. In particular, the relationship DYNAM ≥ SURGE ≥ FIXED holds almost surely. Moreover, the advantages of DYNAM over the other two policies are more
significant at the high-demand “peak times”. For example, at 8 a.m., DYNAM (∼800) outperforms SURGE (∼600) and
FIXED (∼500) by roughly 33% and 60%, respectively.
Demand-supply balance Balancing the demand and supply is not the goal of our dispatching policy. However, a
policy without such balancing abilities is unlikely to perform well. In Figure 7, we plot the supply ratios (defined as
the local instantaneous supply divided by the local instantaneous demand) for all the 5 regions during the 24 hours of
the simulation.
From the figures, we can easily check that, compared with the other two lines, the red line (the supply ratio of DYNAM) tightly surrounds the "balance" line of 100%, which means that the number of available drivers at any time and in each region is close to the number of requests sent from that region at that time. The lines of the other two policies can sometimes be very far from the "balance" line; that is, the drivers under policies FIXED and SURGE are not in the locations where many passengers need the service.
As a result, our policy DYNAM is much more effective at dispatching vehicles and balancing demand and supply in dynamic ride-sharing systems. Such improved dispatching can in turn help the platform gain higher revenue by serving more passengers.
References
Javier Alonso-Mora, Samitha Samaranayake, Alex Wallar, Emilio Frazzoli, and Daniela Rus. 2017. On-demand
high-capacity ride-sharing via dynamic trip-vehicle assignment. PNAS (2017), 201611675.
Santiago Balseiro, Max Lin, Vahab Mirrokni, Renato Paes Leme, and Song Zuo. 2017. Dynamic revenue sharing. In
NIPS 2017.
Siddhartha Banerjee, Daniel Freund, and Thodoris Lykouris. 2017. Pricing and Optimization in Shared Vehicle Systems:
An Approximation Framework. In EC 2017.
Siddhartha Banerjee, Carlos Riquelme, and Ramesh Johari. 2015. Pricing in Ride-share Platforms: A Queueing-Theoretic Approach. (2015).
Kostas Bimpikis, Ozan Candogan, and Saban Daniela. 2016. Spatial Pricing in Ride-Sharing Networks. (2016).
Stephen Boyd and Lieven Vandenberghe. 2004. Convex optimization. Cambridge university press.
Gerard P Cachon, Kaitlin M Daniels, and Ruben Lobel. 2016. The role of surge pricing on a service platform with
self-scheduling capacity. (2016).
Juan Camilo Castillo, Dan Knoepfle, and Glen Weyl. 2017. Surge pricing solves the wild goose chase. In EC 2017. ACM,
241–242.
Nelson D Chan and Susan A Shaheen. 2012. Ridesharing in north america: Past, present, and future. Transport Reviews
32, 1 (2012), 93–112.
M Keith Chen and Michael Sheldon. 2015. Dynamic pricing in a labor market: Surge pricing and flexible work on the
Uber platform. Technical Report. Mimeo, UCLA.
Judd Cramer and Alan B Krueger. 2016. Disruptive change in the taxi business: The case of Uber. The American
Economic Review 106, 5 (2016), 177–182.
Vincent P Crawford and Juanjuan Meng. 2011. New York City cab drivers' labor supply revisited: Reference-dependent preferences with rational-expectations targets for hours and income. AER 101, 5 (2011), 1912–1932.
Zhixuan Fang, Longbo Huang, and Adam Wierman. 2017. Prices and subsidies in the sharing economy. In Proceedings
of the 26th International Conference on World Wide Web. WWW 2017, 53–62.
Ian L Gale and Thomas J Holmes. 1993. Advance-purchase discounts and monopoly allocation of capacity. The
American Economic Review (1993), 135–146.
Michel Gendreau, Alain Hertz, and Gilbert Laporte. 1994. A tabu search heuristic for the vehicle routing problem.
Management science 40, 10 (1994), 1276–1290.
Gianpaolo Ghiani, Francesca Guerriero, Gilbert Laporte, and Roberto Musmanno. 2003. Real-time vehicle routing:
Solution concepts, algorithms and parallel computing strategies. European Journal of Operational Research 151, 1
(2003).
Ramon Iglesias, Federico Rossi, Rick Zhang, and Marco Pavone. 2016. A BCMP Network Approach to Modeling and
Controlling Autonomous Mobility-on-Demand Systems. arXiv preprint arXiv:1607.04357 (2016).
Peter F Kostiuk. 1990. Compensating differentials for shift work. Journal of political Economy 98, 5, Part 1 (1990),
1054–1075.
Sébastien Lahaie and David M Pennock. 2007. Revenue analysis of a family of ranking rules for keyword auctions. In
EC 2007. ACM, 50–56.
Gilbert Laporte. 1992. The vehicle routing problem: An overview of exact and approximate algorithms. European
journal of operational research 59, 3 (1992).
Shuo Ma, Yu Zheng, and Ouri Wolfson. 2013. T-share: A large-scale dynamic taxi ridesharing service. In ICDE. IEEE,
410–421.
R Preston McAfee and Vera Te Velde. 2006. Dynamic pricing in the airline industry. forthcoming in Handbook on
Economics and Information Systems, Ed: TJ Hendershott, Elsevier (2006).
Luis Moreira-Matias, Joao Gama, Michel Ferreira, Joao Mendes-Moreira, and Luis Damas. 2013. Predicting taxi–
passenger demand using streaming data. IEEE Transactions on Intelligent Transportation Systems 14, 3 (2013),
1393–1402.
Gerald S Oettinger. 1999. An empirical analysis of the daily labor supply of stadium vendors. Journal of political
Economy 107, 2 (1999), 360–392.
Michael Ostrovsky and Michael Schwarz. 2011. Reserve prices in internet advertising auctions: A field experiment. In
EC 2011. ACM, 59–60.
Ben Roberts, Dinan Gunawardena, Ian A Kash, and Peter Key. 2016. Ranking and tradeoffs in sponsored search
auctions. ACM Transactions on Economics and Computation 4, 3 (2016), 17.
Weiran Shen and Pingzhong Tang. 2017. Practical versus Optimal Mechanisms. In AAMAS. 78–86.
Joanna Stavins. 2001. Price discrimination in the airline market: The effect of market concentration. Review of
Economics and Statistics 83, 1 (2001), 200–202.
Christopher S Tang, Jiaru Bai, Kut C So, Xiqun Michael Chen, and Hai Wang. 2016. Coordinating supply and demand
on an on-demand platform: Price, wage, and payout ratio. (2016).
Yongxin Tong, Yuqiang Chen, Zimu Zhou, Lei Chen, Jie Wang, Qiang Yang, Jieping Ye, and Weifeng Lv. 2017. The
simpler the better: a unified approach to predicting original taxi demands based on large-scale online platforms. In
KDD 2017. ACM, 1653–1662.
Dengji Zhao, Dongmo Zhang, Enrico H Gerding, Yuko Sakurai, and Makoto Yokoo. 2014. Incentives in ridesharing
with deficit control. In AAMAS 2014.
Signal Amplitude Estimation and Detection
from Unlabeled Binary Quantized Samples
Guanyu Wang, Jiang Zhu, Rick S. Blum, Peter Willett,
arXiv:1706.01174v3 [] 8 Mar 2018
Stefano Marano, Vincenzo Matta, Paolo Braca
Abstract
Signal amplitude estimation and detection from unlabeled quantized binary samples are studied,
assuming that the order of the time indexes is completely unknown. First, maximum likelihood (ML)
estimators are utilized to estimate both the permutation matrix and unknown signal amplitude under
arbitrary, but known signal shape and quantizer thresholds. Sufficient conditions are provided under
which an ML estimator can be found in polynomial time and an alternating maximization algorithm is
proposed to solve the general problem via good initial estimates. In addition, the statistical identifiability
of the model is studied.
Furthermore, the generalized likelihood ratio test (GLRT) detector is adopted to detect the presence
of the signal. In addition, an accurate approximation to the probability of successful permutation matrix
recovery is derived, and explicit expressions are provided to reveal the relationship between the number
of signal samples and the number of quantizers. Finally, numerical simulations are performed to verify
the theoretical results.
Index Terms
Estimation, detection, permutation, unlabeled sensing, quantization, identifiability, alternating
maximization.
I. INTRODUCTION
In many systems, the data is transmitted with time information, which may sometimes be imprecise
[1], [2], [3], [4], [5], [6], [7]. One example is the global positioning system (GPS) spoofing attack which
can alter the time stamps on electric grid measurements [1], rendering them useless, so that the data must
be processed without time stamps. Since the exact form of civilian GPS signals is publicly known and
the elements needed are inexpensive, building a circuit to generate signals to spoof the GPS is easy.
In [2], a refined assessment of the spoofing threat is provided. In addition, the detailed information
of receiver-spoofer architecture, its implementation and performance, and spoofing countermeasures are
introduced. As a case study in [3], the impact of the GPS spoofing attack on wireless communication
networks, more specifically, the frequency hopping code division multiple access (FH-CDMA) based
ad hoc network, is investigated. A timing synchronization attack (TSA) against wide area monitoring systems (WAMSs) is introduced, and its effectiveness is demonstrated for three applications of a phasor measurement unit (PMU) [4]. In [5], the out-of-sequence measurement (OOSM) problem, where sensors produce observations that are sent to a fusion center over communication networks with random delays, is studied, and a Bayesian solution is provided.
control systems (NCS) is studied in [6]. In addition, a minimum error covariance estimator for the system
is derived and two alternative estimator architectures are presented for efficient computation. In [7], the
effect of an unknown timestamp delay in Automatic Identification System (AIS) is studied, and a method
based on adaptive filtering is proposed.
In the above examples, the relative order of the data is unknown, i.e., the samples are unlabeled.
Estimation and detection from unlabeled samples have drawn a great deal of attention recently [8], [9],
[10], [11], [12], [13], [14], [15], [16], [17], [18], [19], [20]. In [8], it is shown that the convex relaxation
based on a Birkhoff polytope approach does not recover the permutation matrix, and a global branch and
bound algorithm is proposed to estimate the permutation matrix. In the noiseless case with a random
linear sensing matrix, it is shown that the permutation matrix can be recovered correctly with probability
1, given that the number of measurements is twice the number of unknowns [9], [18]. In [10], [19],
the noise is taken into account and a condition under which the permutation matrix can be recovered
with high probability is provided. In addition, a polynomial time algorithm is proposed for a scalar
parameter case. The denoising of a linear regression model with shuffled data and additive Gaussian noise is studied in [11], and a characterization of the minimax error rate is provided. In addition, an algorithm for
the noiseless problem is also proposed, and its performance is demonstrated on an image point-cloud
matching task [11]. In [12], several estimators are compared in recovering the weights of a noisy linear
model from shuffled labels, and an estimator based on the self-moments of the input features and labels is
introduced. For unlabeled ordered sampling problems where the relative order of observations is known,
an alternating maximization algorithm combined with dynamic programming is proposed [13]. In [15],
a signal detection problem where the known signal is permuted in an unknown way is studied.
Compared to the location parameter estimation problem (xi = θ + wi ) in [17], the model in this paper
is a scale parameter estimation problem (x_i = h_i θ + w_i), in which h_i, i = 1, · · · , K, is the shape of the signal and θ is the signal amplitude. As a result, the scale parameter estimation problem is much more difficult than location estimation in several aspects, and the scale parameters are especially relevant
in relation to the mislabeling/permutation issue. First, the model in [17] is always identifiable, while our
model may be unidentifiable, as shown later. Second, the problem in [17] can be solved efficiently via
simple sorting, while we can only prove that problem in this paper can be solved efficiently provided
certain conditions are satisfied. Third, good initial points are proposed to improve the performance of
alternating maximization algorithm. Furthermore, we provide an approximation to the probability of
successful permutation matrix recovery, which reveals the relationship between the length of signal and
the number of quantizers.
In this paper, we focus on the problems of scale estimation and signal detection from unlabeled
quantized samples. The main contributions of this work can be summarized as follows. First, a sufficient
condition for the existence of a polynomial time algorithm is provided for the unlabeled estimation
problem, and the model is shown to be unidentifiable in some special cases. Second, good initial
points are provided to improve the performance of an alternating maximization algorithm. Third,
we provide analytic approximations to the probability of permutation matrix recovery in the case of a known
signal amplitude, which can be used to predict when the permutation matrix can be correctly recovered.
The organization of this paper is as follows. In Section II, the problem is described. Background on
ML estimation and generalized likelihood ratio test (GLRT) detection from labeled data is presented in
Section III. In Section IV, the model identifiability and the estimation problem from unlabeled data are
studied. Section V extends the detection work to unlabeled data and derives an approximate analytic
formula for the permutation matrix recovery probability. Finally, numerical results are presented in
Section VI, and conclusions are drawn in Section VII.
Notation: The K ×1 vector of ones is 1K . For an unknown deterministic parameter θ, θ0 denotes its true
value. For an unknown permutation matrix Π, Π0 denotes its true value. For a random vector y, p(y; θ)
denotes the probability density function (PDF) of y parameterized by θ, and Ey [·] denotes the expectation
taken with respect to y. Let N (µ, σ 2 ) denote a Gaussian distribution with mean µ and variance σ 2 . Let
Φ(·) and ϕ(·) denote the cumulative distribution function (CDF) and probability density function (PDF)
of a standard Gaussian random variable, respectively. Let U(a, b) denote a uniform distribution whose
minimum and maximum values are a and b. Let B(N, p) denote a binomial distribution, where N and p
denote the number of trials and the success probability, respectively.
II. PROBLEM SETUP
Consider a signal amplitude estimation and detection problem where a collection of N binary quantizers
generate binary quantized samples which will be utilized to estimate the unknown scaling factor θ of a
length-K signal and to detect the presence of the signal, as shown in Fig. 1. The binary quantized samples
[Fig. 1: System diagram of unlabeled binary quantized samples generation.]
bij are obtained via
bij = Qi(hi θ + wij),   i = 1, · · · , K,  j = 1, · · · , N,    (1)
and the corresponding hypothesis problem can be formulated as
H0 : bij = Qi(wij),        i = 1, · · · , K,  j = 1, · · · , N,
H1 : bij = Qi(hi θ + wij), i = 1, · · · , K,  j = 1, · · · , N,
where i and j respectively denote one of the K time indexes and one of the N quantizers, hi is the known
coefficient characterizing the signal shape, wij is the i.i.d. noise drawn from the σw²-variance distribution
whose PDF is fw(x/σw)/σw and CDF is Fw(x/σw), where fw(x) and Fw(x) are the corresponding
unit-variance PDF and CDF, and Qi(·) denotes a binary quantizer which produces 1 if the argument is
larger than a scalar threshold τi and 0 otherwise. The thresholds of the N quantizers are identical for any
given time index¹. We assume that the PDF fw(w) is log-concave, which is often met in practice by, for
example, the Gaussian distribution.
The quantized data {bij} are transmitted over a binary channel with flipping probabilities q0 and q1,
defined as Pr(uij = 1|bij = 0) = q0 and Pr(uij = 0|bij = 1) = q1, where uij is the sample received at the
output of the channel by the fusion center (FC) [21].
We assume that all the sets of data {uij}_{j=1}^{N} are transmitted to the FC with permuted time indexes.
Accordingly, the FC receives sets of data, say {ũij}_{j=1}^{N}, in which the time reference (represented by the
index i) is invalid. Specifically, the FC does not know which time index the data {ũij}_{j=1}^{N} belong to,
but knows that {ũij}_{j=1}^{N} belong to one of the K time indexes. Let us introduce the matrix U whose
(i, j)-th entry is uij. Then the unlabeled samples can be collected in a matrix Ũ as follows:
Ũ = ΠU,    (2)
where Π ∈ R^{K×K} is an unknown permutation matrix, that is, a matrix of {0,1} entries in which each
row and each column sums to unity. We assume that θ is constrained to an interval [−∆, ∆] for algorithmic
and theoretical reasons [22].
III. PRELIMINARIES
In this section, standard material on parameter estimation and signal detection using labeled data is
presented.
A. Maximum Likelihood Estimation
The probability mass function (PMF) of uij can be calculated as
Pr(uij = 1) = q0 + (1 − q0 − q1) Fw((hi θ − τi)/σw) ≜ pi,    (3)
Pr(uij = 0) = 1 − pi.
¹ Here we have thresholds fixed across quantizers and varying with time, with permutation across time. We could, equivalently, have thresholds of quantizers fixed across time but varying across sensors, and permuted across sensors. Mathematically, it is the same problem and the formulation could as easily encompass it.
The PMF of U is
p(U; θ) = ∏_{i=1}^{K} ∏_{j=1}^{N} Pr(uij = 1)^{uij} Pr(uij = 0)^{(1−uij)}.    (4)
Let ηi denote the fraction of uij = 1 in {uij}_{j=1}^{N}, i.e.,
ηi = (1/N) ∑_{j=1}^{N} uij.    (5)
Consequently, the log-likelihood function l(η; θ) is
l(η; θ) = N ∑_{i=1}^{K} (ηi log pi + (1 − ηi) log(1 − pi)),    (6)
where pi is given in (3). Note that in the error-free or fully flipped channel scenarios, i.e., q0 = q1 = 0 or
q0 = q1 = 1, the CDF Fw(x) is log-concave as it is the integral of a log-concave PDF fw(x). Therefore
maximizing the log-likelihood function is a convex optimization problem, which can be solved efficiently
via numerical algorithms [23], [24], [25]. For 0 < q0 + q1 < 2, it is difficult to determine the convexity
of the negative log-likelihood function, and in this case all that can be guaranteed is a local optimum. As
shown in the numerical experiments, the ML estimator using a gradient descent algorithm works well and
approaches the Cramér-Rao lower bound (CRLB).
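To make the labeled-data estimator concrete, the following Python sketch evaluates pi from (3) and the log-likelihood (6), and maximizes it by a simple grid search over [−∆, ∆]. It assumes Gaussian noise (Fw = Φ), and the grid search is an illustrative simplification of the numerical solvers referenced above; all function names are our own.

```python
import numpy as np
from scipy.stats import norm


def success_probs(theta, h, tau, sigma_w, q0, q1):
    # p_i in Eq. (3) with Gaussian noise, i.e. F_w = Phi.
    return q0 + (1.0 - q0 - q1) * norm.cdf((h * theta - tau) / sigma_w)


def log_likelihood(eta, theta, h, tau, sigma_w, q0, q1, N):
    # l(eta; theta) in Eq. (6); clipping avoids log(0) at extreme parameters.
    p = np.clip(success_probs(theta, h, tau, sigma_w, q0, q1), 1e-12, 1 - 1e-12)
    return N * np.sum(eta * np.log(p) + (1.0 - eta) * np.log(1.0 - p))


def ml_estimate_labeled(eta, h, tau, sigma_w, q0, q1, N, delta, grid_size=2001):
    # Grid search over theta in [-delta, delta]; a gradient method could be used instead.
    thetas = np.linspace(-delta, delta, grid_size)
    lls = [log_likelihood(eta, t, h, tau, sigma_w, q0, q1, N) for t in thetas]
    return thetas[int(np.argmax(lls))]
```

With eta computed as the per-time-index fraction of ones in (5), ml_estimate_labeled returns the labeled ML estimate that is later used as a baseline in Section VI.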
In addition, the Fisher information (FI) I(θ) is the expectation of the negative second derivative of the
log-likelihood function l(η; θ) (6) taken with respect to θ, i.e., [28],
I(θ) = −(N(1 − q0 − q1)/σw) ∑_{i=1}^{K} hi { fw((hi θ − τi)/σw) E_η[ ∂/∂θ ( ηi/pi − (1 − ηi)/(1 − pi) ) ] + (∂/∂θ) fw((hi θ − τi)/σw) E_η[ ηi/pi − (1 − ηi)/(1 − pi) ] }
     = (N(1 − q0 − q1)²/σw²) ∑_{i=1}^{K} hi² fw²((hi θ − τi)/σw) / ( pi(1 − pi) ),    (7)
where (7) follows because E_η[ηi/pi − (1 − ηi)/(1 − pi)] = 0. Consequently, the CRLB is
CRLB(θ) = 1/I(θ),    (8)
which is later used as a benchmark for ML estimation from labeled data in Section VI.
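A direct transcription of (7)-(8) for Gaussian noise is sketched below; it is only a numerical convenience for reproducing the CRLB benchmark, and the function name is ours.

```python
import numpy as np
from scipy.stats import norm


def crlb(theta, h, tau, sigma_w, q0, q1, N):
    # Fisher information (7) and CRLB (8), assuming f_w is the standard normal PDF.
    z = (h * theta - tau) / sigma_w
    p = q0 + (1.0 - q0 - q1) * norm.cdf(z)
    fisher = (N * (1.0 - q0 - q1) ** 2 / sigma_w ** 2
              * np.sum(h ** 2 * norm.pdf(z) ** 2 / (p * (1.0 - p))))
    return 1.0 / fisher
```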
B. GLRT detection
In the case of known θ, the optimal detector according to the NP criterion is the log-likelihood ratio
test [26]. For unknown θ, the GLRT is usually adopted. Although there is no optimality associated with
the GLRT, it appears to work well in many scenarios of practical interest [27]. The GLRT replaces the
unknown parameter by its MLE and decides H1 if
T1(η) = max_{θ∈[−∆,∆]} l(η; θ) − l(η; 0) > γ,    (9)
where γ is a threshold determined by the given false alarm probability PFA.
IV. ESTIMATION FROM UNLABELED DATA
In this section, we study the estimation problem from unlabeled data. First, we delineate the model.
The statistical identifiability is investigated, and the estimation problem is studied separately in the cases
of known and unknown θ.
A. Maximum likelihood estimation
Introduce the function π(·) such that m = π(i) if the permutation matrix Π in (2) maps the ith row
of U to the mth row of Ũ. The PMF of Ũ is
p(Ũ; θ, Π) = ∏_{m=1}^{K} ∏_{j=1}^{N} Pr(ũmj = 1)^{ũmj} Pr(ũmj = 0)^{(1−ũmj)}
           = ∏_{i=1}^{K} ∏_{j=1}^{N} Pr(ũπ(i)j = 1)^{ũπ(i)j} Pr(ũπ(i)j = 0)^{(1−ũπ(i)j)},    (10)
where (Pr(ũij = 1), Pr(ũij = 0)) is the PMF of ũij. The corresponding log-likelihood function l(η̃; θ, Π) is
l(η̃; θ, Π) = N ∑_{i=1}^{K} ( η̃π(i) log pi + (1 − η̃π(i)) log(1 − pi) ),    (11)
where η̃π(i) = ∑_{j=1}^{N} ũπ(i)j/N = ∑_{j=1}^{N} ũmj/N.
The ML estimation problem can be formulated as
max_{θ∈[−∆,∆], Π∈P_K} l(η̃; θ, Π),    (12)
where P_K denotes the set of all possible K × K permutation matrices.
B. Estimation with permuted data and known θ
In this subsection, the permutation matrix recovery problem is studied in the case of known θ. It is
shown that the permutation matrix can be recovered efficiently under the ML criterion.
Proposition 1 Given the ML estimation problem in (12) with known θ, the ML estimate of the permutation
matrix Π will reorder the rows of Ũ, and equivalently the elements of η̃, to have the same relative order
as the elements of (1 − q0 − q1)(hθ − τ).
Proof: Note that the objective function l(η̃; θ, Π) (11) can be decomposed as
l(η̃; θ, Π) = N ∑_{i=1}^{K} η̃π(i) si + N ∑_{i=1}^{K} log(1 − pi),    (13)
where si = log(pi/(1 − pi)). From (13), the ML estimate of the permutation matrix Π will reorder the
rows of Ũ, and equivalently the elements of η̃, to have the same relative order as the elements of s
[15], [17]. Because si is monotonically increasing with respect to (1 − q0 − q1)(hi θ − τi), the elements
of η̃ should be reordered by the permutation matrix to have the same relative order as the elements of
(1 − q0 − q1)(hθ − τ) to maximize the likelihood.
If τi = c0 hi, then changing θ − c0 into −(θ − c0) would reverse the ordering. This helps to explain
why two solutions appear in the subsequent Proposition 3 when θ is unknown.
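The reordering prescribed by Proposition 1 amounts to two sorts, as in the following sketch (the helper name is ours); it returns, for every time index, the row of Ũ assigned to it.

```python
import numpy as np


def recover_permutation(eta_tilde, h, tau, theta, q0, q1):
    # Proposition 1: match the ranks of eta_tilde with the ranks of
    # (1 - q0 - q1) * (h * theta - tau). assignment[i] is the row of U_tilde
    # (equivalently the entry of eta_tilde) associated with time index i.
    score = (1.0 - q0 - q1) * (h * theta - tau)
    assignment = np.empty(len(h), dtype=int)
    assignment[np.argsort(score)] = np.argsort(eta_tilde)
    return assignment


# Usage: eta_hat = eta_tilde[recover_permutation(eta_tilde, h, tau, theta, q0, q1)]
# gives the fractions reordered consistently with the time indices.
```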
C. Estimation with permuted data and unknown θ
In general, θ may be unknown. Consequently, we should jointly estimate θ and the permutation matrix Π.
However, finding the best permutation matrix is very challenging in most problems due to non-convexity.
One could try all the possible permutation matrices, at a complexity cost of O(K!). Given a permutation
matrix, one obtains the ML estimate of θ via numerical algorithms and achieves the global optimum when
q0 = q1 = 0 or 1. For 0 < q0 + q1 < 2, we do not know whether the negative log-likelihood function
is convex or not, and only a local optimum is guaranteed. Given θ, the computational complexity of finding
the optimal permutation matrix is just that of reordering, which costs O(K log K), as shown in subsection IV-B.
1) Alternating maximization algorithm for the general case: The problem structure suggests optimizing
the two unknowns alternately, as shown in Algorithm 1.
Algorithm 1 Alternating Maximization
1: Initialize t = 1 and θ̂t−1;
2: Fix θ = θ̂t−1, reorder η̃ according to (1 − q0 − q1)(hθ − τ) and obtain the corresponding permutation matrix Π̂t−1;
3: Solve max_θ l(η̃; θ, Π̂t−1) and obtain θ̂t;
4: Set t = t + 1 and return to step 2 until a sufficient number of iterations has been performed or |θ̂t − θ̂t−1| ≤ ε, where ε is a tolerance parameter.
The alternating maximization in Algorithm 1 can
be viewed as the alternating projection with respect to θ and Π. The objective function is l(η̃; θ, Π). In
step 2, given θ̂t−1 , we update the permutation matrix as Π̂t−1 , and the objective value is l(η̃; θ̂t−1 , Π̂t−1 ).
Given Π̂t−1, we obtain the ML estimate of θ as θ̂t, and the objective value is l(η̃; θ̂t, Π̂t−1), satisfying
l(η̃; θ̂t , Π̂t−1 ) ≥ l(η̃; θ̂t−1 , Π̂t−1 ). Given θ̂t , we update the permutation matrix as Π̂t , and the objective
value is l(η̃; θ̂t , Π̂t ) satisfying l(η̃; θ̂t , Π̂t ) ≥ l(η̃; θ̂t , Π̂t−1 ). Consequently, we have
l(η̃; θ̂t , Π̂t ) ≥ l(η̃; θ̂t−1 , Π̂t−1 ).
(14)
Provided that the maximum with respect to each θ and Π is unique, any accumulation point of the
sequence generated by Algorithm 1 is a stationary point [29].
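A compact implementation of Algorithm 1 is sketched below. It assumes Gaussian noise and replaces the inner maximization over θ by a grid search on [−∆, ∆], which is an illustrative simplification; the initial point θ_init may be set to ±∆ or to one of the good initial points derived later in (18). All names are ours.

```python
import numpy as np
from scipy.stats import norm


def alternating_maximization(eta_tilde, h, tau, sigma_w, q0, q1, N,
                             delta, theta_init, tol=1e-7, max_iter=100):
    def probs(theta):
        p = q0 + (1 - q0 - q1) * norm.cdf((h * theta - tau) / sigma_w)
        return np.clip(p, 1e-12, 1 - 1e-12)

    def loglik(eta, theta):
        p = probs(theta)
        return N * np.sum(eta * np.log(p) + (1 - eta) * np.log(1 - p))

    thetas = np.linspace(-delta, delta, 2001)
    theta_hat, eta_hat = theta_init, eta_tilde
    for _ in range(max_iter):
        # Step 2: reorder eta_tilde to match the order of (1-q0-q1)*(h*theta - tau).
        score = (1 - q0 - q1) * (h * theta_hat - tau)
        eta_hat = np.empty_like(eta_tilde)
        eta_hat[np.argsort(score)] = np.sort(eta_tilde)
        # Step 3: maximize the log-likelihood over theta for the current ordering.
        theta_new = thetas[int(np.argmax([loglik(eta_hat, t) for t in thetas]))]
        if abs(theta_new - theta_hat) <= tol:
            return theta_new, eta_hat
        theta_hat = theta_new
    return theta_hat, eta_hat
```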
2) Special cases for efficient recovery of Π under unknown θ:
Proposition 2 Given the ML estimation problem in (12) with unknown θ, if there exist constants c, d, e ∈
R such that cτ + dh = e1, the elements of η̃ should be reordered according to the order of the elements
of (q0 + q1 − 1)τ if c = 0, otherwise reordered according to h or −h.
Proof: We separately address the cases c = 0 and c ≠ 0. In the case of c = 0, h must be a
constant vector. Reordering according to (1 − q0 − q1)(hθ − τ) is equivalent to reordering according
to (q0 + q1 − 1)τ. Since (q0, q1) are known in this problem, η̃ should be reordered according to τ if
q0 + q1 > 1, or −τ if q0 + q1 < 1. In the case of c ≠ 0, we have τ = (e/c)1 − (d/c)h. Consequently,
hθ − τ = (θ + d/c)h − (e/c)1, and η̃ is reordered according to h or −h.
The above proposition deals with four cases, i.e., h is a constant vector (c = 0), τ is a constant
vector (d = 0), h is a multiple of τ (e = 0), and each pair of components of h and τ lies on the same
line cτi + dhi = e (cde ≠ 0). In [17] it is shown that reordering yields the optimal MLE given h = 1.
Proposition 2 extends the special case in [17] to more general cases. Consequently, we propose Algorithm
2, an efficient algorithm for parameter estimation.
Algorithm 2 Reordering algorithm
1: If c = 0, reorder the elements of η̃ according to the elements of (q0 + q1 − 1)τ. The corresponding permutation matrix is Π̂s0. Solve the parameter estimation problem by a numerical algorithm and obtain θ̂ML = argmax_θ l(η̃; θ, Π̂s0);
2: If c ≠ 0, reorder the elements of η̃ according to the elements of h and −h. The corresponding permutation matrices are Π̂s1 and Π̂s2;
3: Solve the single-variable optimization problems and obtain θ̂s1 = argmax_θ l(η̃; θ, Π̂s1) and θ̂s2 = argmax_θ l(η̃; θ, Π̂s2). Choose θ̂ML = θ̂s1 if l(η̃; θ̂s1, Π̂s1) ≥ l(η̃; θ̂s2, Π̂s2), otherwise θ̂ML = θ̂s2.
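For completeness, the reordering estimator of Algorithm 2 can be sketched as follows, reusing a labeled-data routine `ml_given_order` (any solver that maximizes l(η̃; θ, Π̂) over θ for a fixed ordering, such as the grid search above) and `loglik` from the previous sketches; both are hypothetical helper names, not part of the paper.

```python
import numpy as np


def reordering_estimator(eta_tilde, h, tau, q0, q1, c_is_zero, ml_given_order, loglik):
    # Algorithm 2 (sketch). `c_is_zero` flags the case c = 0 in c*tau + d*h = e*1.
    def reorder(score):
        out = np.empty_like(eta_tilde)
        out[np.argsort(score)] = np.sort(eta_tilde)
        return out

    if c_is_zero:
        eta0 = reorder((q0 + q1 - 1.0) * tau)
        return ml_given_order(eta0), eta0
    eta1, eta2 = reorder(h), reorder(-h)
    th1, th2 = ml_given_order(eta1), ml_given_order(eta2)
    if loglik(eta1, th1) >= loglik(eta2, th2):
        return th1, eta1
    return th2, eta2
```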
D. Statistical identifiability
Note that Algorithm 2 may generate two solutions (θ̂s1 , Π̂s1 ) and (θ̂s2 , Π̂s2 ). Given system parameters
h and τ , it is important to determine whether the two solutions (θ̂s1 , Π̂s1 ) and (θ̂s2 , Π̂s2 ) will yield
the same log-likelihood l(η̃; θ̂s1 , Π̂s1 ) = l(η̃; θ̂s2 , Π̂s2 ). If l(η̃; θ̂s1 , Π̂s1 ) = l(η̃; θ̂s2 , Π̂s2 ), two pairs
of parameter values lead to the same maximum likelihood. In this situation, (θ, Π) clearly cannot be
estimated consistently, since η̃ provides no information as to whether it is (θ̂s1, Π̂s1) or (θ̂s2, Π̂s2).
This phenomenon motivates us to delve into the identifiability of the model. Statistical identifiability
is a property of a statistical model which describes one-to-one correspondence between parameters and
probability distributions [32]. In this subsection, we provide the following proposition which justifies
that there exist cases in which the model is unidentifiable, i.e., there exist two different parameter values
((θs1 , Πs1 ) and (θs2 , Πs2 )) leading to the same distribution of the observations η̃ [32].
Proposition 3 Let ha and hd denote the ascending and descending ordered versions of h, and Πa h = ha
and Πd h = hd, where Πa and Πd are permutation matrices. Given τ = c0 h and ha = −hd, the
model is unidentifiable, i.e., l(η̃; θ, Π)|θ=θs1,Π=Πs1 = l(η̃; θ, Π)|θ=θs2,Π=Πs2, where θs2 = 2c0 − θs1 and
Πs2 = Πs1 Πa^T Πd.
Proof: Let Πs1 be a permutation matrix such that Πs1^T η̃ has the same relative order as h. We first
prove that Πs2^T η̃ has the same relative order as −h. Utilizing ha = −hd = −Πd h and Πd^T Πd = I,
we obtain Πd^T ha = −h. Note that Πs2^T η̃ = Πd^T Πa Πs1^T η̃. Because Πs1^T η̃ has the same relative order as
h, Πa Πs1^T η̃ has the same relative order as Πa h = ha, and Πd^T Πa Πs1^T η̃ has the same relative order as
Πd^T ha = −h.
Next we prove that l(η̃; θ, Π)|θ=θs1,Π=Πs1 = l(η̃; θ, Π)|θ=θs2,Π=Πs2 holds. Because θs2 = 2c0 − θs1,
we have
hi θs1 − τi = hi(θs1 − c0),    hi θs2 − τi = −hi(θs1 − c0).    (15)
By examining l(η̃; θ, Π) (13) and utilizing ha = −hd, the second term of l(η̃; θ, Π)|θ=θs1,Π=Πs1 is
equal to that of l(η̃; θ, Π)|θ=θs2,Π=Πs2. For the first term, note that given θs1 and θs2, the corresponding
s1 and s2 in (13) can be viewed as evaluating s at h and −h according to (15), respectively. Because
ha = −hd, we can conclude that s1 is a permuted version of s2. The first term of (13) can be
expressed as either (Πs1^T η̃)^T s1 or (Πs2^T η̃)^T s2. Because (Πs1^T η̃) and s1 have the same relative order as
h, and (Πs2^T η̃) and s2 have the same relative order as −h, one has (Πs1^T η̃)^T s1 = (Πs2^T η̃)^T s2. Thus
l(η̃; θ, Π)|θ=θs1,Π=Πs1 = l(η̃; θ, Π)|θ=θs2,Π=Πs2.
Now an example is presented to substantiate the above proposition. Let c0 = 0.5, the true value
θ0 = 1, h = [2, −1, −2, 1]^T, η = [η1, η2, η3, η4]^T and Π0 = [0 0 1 0; 0 1 0 0; 0 0 0 1; 1 0 0 0]. Then
η̃ = [η3, η2, η4, η1]^T, ha = [−2, −1, 1, 2]^T, hd = [2, 1, −1, −2]^T and ha = −hd. We can conclude that
l(η̃; θ, Π)|θ=1,Π=Π0 = l(η̃; θ, Π)|θ=0,Π=Π0′, where Π0′ = Π0 Πa^T Πd = [1 0 0 0; 0 0 0 1; 0 1 0 0; 0 0 1 0].
In addition, given |c0| ≥ ∆, only one of {θs1, θs2} lies in the interval [−∆, ∆], and the model is
identifiable. In the following, a method to select good initial points for the alternating maximization
algorithm is provided for the general case.
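The unidentifiability predicted by Proposition 3 can be checked numerically, as in the sketch below for the example above; we assume q0 = q1 = 0, σw = 1 and Gaussian noise, which the example leaves unspecified, and the likelihood identity holds for any vector of observed fractions η̃.

```python
import numpy as np
from scipy.stats import norm


def loglik(eta_tilde, theta, Pi, h, tau, N=100):
    # l(eta_tilde; theta, Pi) as in (11)/(13): (Pi^T eta_tilde)_i is paired with p_i.
    p = np.clip(norm.cdf(h * theta - tau), 1e-12, 1 - 1e-12)
    eta = Pi.T @ eta_tilde
    return N * np.sum(eta * np.log(p) + (1 - eta) * np.log(1 - p))


def sorting_matrix(v, descending=False):
    # Permutation matrix P such that P @ v is sorted (ascending by default).
    order = np.argsort(-v if descending else v)
    P = np.zeros((len(v), len(v)))
    P[np.arange(len(v)), order] = 1.0
    return P


c0, theta_s1 = 0.5, 1.0
h = np.array([2.0, -1.0, -2.0, 1.0])
tau = c0 * h
eta_tilde = np.random.default_rng(0).uniform(size=4)      # arbitrary observed fractions

Pi_a = sorting_matrix(h)                                   # Pi_a h = h_a
Pi_d = sorting_matrix(h, descending=True)                  # Pi_d h = h_d
Pi_s1 = sorting_matrix(eta_tilde).T @ sorting_matrix(h)    # Pi_s1^T eta_tilde ordered like h
theta_s2, Pi_s2 = 2 * c0 - theta_s1, Pi_s1 @ Pi_a.T @ Pi_d

print(loglik(eta_tilde, theta_s1, Pi_s1, h, tau),
      loglik(eta_tilde, theta_s2, Pi_s2, h, tau))          # equal up to rounding
```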
E. Good initial points
For alternating maximization algorithms dealing with nonconvex optimization problems, an initial point
is important for the algorithm to converge to the global optimum. In the following text, we provide good
initial points for the alternating maximization algorithm. The key idea is to obtain a coarse estimate of
θ by matching the expected and actual numbers of ones in the observations, and utilizing the orthogonality
of the permutation matrix.
Suppose that the number of quantizers N is large. Consequently, as the number of quantizers tends to
infinity, the law of large numbers (LLN) implies
ηi →p q0 + (1 − q0 − q1) Fw((hi θ − τi)/σw),    (16)
where →p denotes convergence in probability. Given θ ∈ [−∆, ∆], −|hi|∆ − τi ≤ hi θ − τi ≤ |hi|∆ − τi.
In the following text, we only deal with the q0 + q1 < 1 case; the case q0 + q1 > 1 is very similar and is
omitted here. Define l = min_{i∈{1,··· ,K}} ( q0 + (1 − q0 − q1) Fw((−|hi|∆ − τi)/σw) ) and
u = max_{i∈{1,··· ,K}} ( q0 + (1 − q0 − q1) Fw((|hi|∆ − τi)/σw) ). Then ηi should satisfy l ≤ ηi ≤ u. Let Il,u(η̃i)
denote the projection of η̃i onto the interval [l, u]. Note that this projection operation is needed because
(16) is valid only in the limit as N goes to infinity. From (16) one obtains
m ≜ σw Fw⁻¹( (Il,u(η̃) − q0 1K)/(1 − q0 − q1) ) →p Π(hθ − τ).
Utilizing ΠΠ^T = I yields
m^T m →p h^T h θ² − 2 τ^T h θ + τ^T τ,    (17)
which is a quadratic equation in θ. Accordingly, using the asymptotic properties of m^T m, one obtains
(18) by inverting (17):
θ1,2 = τ^T h / (h^T h) ± √( (m^T m − τ^T τ)/(h^T h) + (τ^T h / (h^T h))² ).    (18)
The above two solutions can be used as initial points for the alternating maximization algorithm. Finally,
the optimum with the larger likelihood is chosen as the ML estimate. In Section VI, to provide a fair
comparison with the alternating maximization algorithm using good initial points, −∆ and ∆ are also used
as two initial points, and the solution whose likelihood is larger is chosen as the ML estimate.
The result of (18) is consistent with that of Proposition 3. Given that the conditions in Proposition 3
are satisfied, and substituting τ = c0 h into (18), the solutions are θ1 = θ and θ2 = 2c0 − θ.
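A sketch of the coarse initialization (16)-(18) for Gaussian noise follows; the clipping margins and the function name are our own choices.

```python
import numpy as np
from scipy.stats import norm


def good_initial_points(eta_tilde, h, tau, sigma_w, q0, q1, delta):
    # Coarse estimates of theta from (16)-(18), assuming q0 + q1 < 1 and F_w = Phi.
    l = np.min(q0 + (1 - q0 - q1) * norm.cdf((-np.abs(h) * delta - tau) / sigma_w))
    u = np.max(q0 + (1 - q0 - q1) * norm.cdf((np.abs(h) * delta - tau) / sigma_w))
    eta_proj = np.clip(eta_tilde, l + 1e-9, u - 1e-9)        # projection I_{l,u}
    m = sigma_w * norm.ppf((eta_proj - q0) / (1 - q0 - q1))   # invert the quantizer response
    hth, tth = h @ h, tau @ h
    disc = max((m @ m - tau @ tau) / hth + (tth / hth) ** 2, 0.0)
    return tth / hth + np.sqrt(disc), tth / hth - np.sqrt(disc)
```

Both returned values are then used as initializations of Algorithm 1, and the run with the larger final likelihood is kept.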
V. DETECTION FROM UNLABELED DATA
In this section, we study the detection problem from unlabeled data. The GLRT detector is studied
separately in the cases of known and unknown θ. In addition, we investigate the permutation matrix
recovery probability.
A. Detection with permuted data and known θ
In the case of known θ, the GLRT can be formulated as
T2(η̃) = max_{Π∈P_K} l(η̃; θ, Π) − max_{Π∈P_K} l(η̃; 0, Π) > γ.    (19)
As shown in Proposition 1, the ML estimate of the permutation matrix Π corresponding to the first term in
(19) will reorder the elements of η̃ to have the same relative order as the elements of (1−q0 −q1 )(hθ−τ ).
Similarly, for the ML estimation problem corresponding to the second term in (19), we reorder the
elements of η̃ to have the same order as that of −(1 − q0 − q1 )τ .
B. Detection with permuted data and unknown θ
For the unknown θ and unknown Π case, a GLRT is used to decide H1 if
T3(η̃) = max_{θ, Π∈P_K} l(η̃; θ, Π) − max_{Π∈P_K} l(η̃; 0, Π) > γ.    (20)
Algorithm 1 for joint estimation of θ and Π is needed for the first term and has been described in
subsection IV-C1; the second term is handled as in subsection V-A. The performance of the GLRT (20)
will be evaluated in Section VI.
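Putting the pieces together, the unlabeled GLRT statistic (20) can be sketched as below, where `joint_ml` stands for Algorithm 1 (returning θ̂ and the reordered fractions) and `loglik` is the labeled log-likelihood; both are hypothetical helper names taken from the earlier sketches.

```python
import numpy as np


def glrt_unlabeled(eta_tilde, tau, q0, q1, joint_ml, loglik):
    # T_3 in (20): H1 term via Algorithm 1, H0 term via reordering along -(1-q0-q1)*tau.
    theta_hat, eta_hat = joint_ml(eta_tilde)
    eta0 = np.empty_like(eta_tilde)
    eta0[np.argsort(-(1 - q0 - q1) * tau)] = np.sort(eta_tilde)
    return loglik(eta_hat, theta_hat) - loglik(eta0, 0.0)
```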
C. Approximations on permutation matrix recovery probability
In this subsection, we investigate the permutation matrix recovery probability problem. Since errors
in permutation matrix recovery are more likely to happen in the relatively indistinguishable cases, the
performances in terms of signal detection or estimation tasks may not be closely related to the recovery of
permutation matrix. However, it is meaningful to extract the accurate timestamp information or sensors’
identity information which corresponds to recovery of permutation matrix, as presented in the following.
It is difficult to obtain the permutation matrix recovery probability in the case of unknown θ. Instead,
we assume that θ is known, and analyze the permutation matrix recovery probability in terms of the
recovery algorithm provided in Proposition 1. Without loss of generality, we also assume that q0 + q1 < 1
in the following analysis. The case that q0 + q1 > 1 is similar and is omitted here.
First, let pi be ordered such that p(1) > p(2) > · · · > p(K) . From (3) we have (hi θ − τi )(1) >
(hi θ − τi )(2) > · · · > (hi θ − τi )(K) . Provided q0 + q1 < 1, the elements of η̃ should be reordered
according to the order of the elements of hθ − τ in Proposition 1. Therefore the permutation matrix will
be correctly recovered if and only if η(1) > η(2) > · · · > η(K) . Note that the subscripts of (hi θ − τi )(·)
and η(·) also correspond to the order of pi , instead of the order of hi θ − τi or ηi .
Define Ei as the event such that η(i) > η(i+1) and Ēi as the corresponding complement event of Ei ,
namely, η(i) ≤ η(i+1) . The probability that permutation matrix is recovered correctly can be written as
Pr(Π̂ML = Π0) = Pr(η(1) > · · · > η(K)) = Pr( ∩_{i=1}^{K−1} Ei ) = 1 − Pr( ∪_{i=1}^{K−1} Ēi ) ≥ 1 − ∑_{i=1}^{K−1} Pr(Ēi),
where the union bound Pr( ∪_{i=1}^{K−1} Ēi ) ≤ ∑_{i=1}^{K−1} Pr(Ēi) is utilized. From (3), we have uij ∼ B(1, pi)
and Nηi = ∑_{j=1}^{N} uij ∼ B(N, pi). When N is large, the De Moivre-Laplace theorem [30] implies that
the distribution of ηi can be approximated by N(pi, pi(1 − pi)/N). As a consequence, η(i) − η(i+1) is
approximately distributed as N(p(i) − p(i+1), p(i)(1 − p(i))/N + p(i+1)(1 − p(i+1))/N), and
Pr(Π̂ML = Π0) ≥ 1 − ∑_{i=1}^{K−1} Pr(Ēi) = 1 − ∑_{i=1}^{K−1} Pr(η(i) − η(i+1) ≤ 0)
  ≈ 1 − ∑_{i=1}^{K−1} Φ( −(p(i) − p(i+1))√N / √(p(i)(1 − p(i)) + p(i+1)(1 − p(i+1))) )
  ≥ 1 − (K − 1) Φ(−t√N)
  ≈ 1 − (K − 1) (1/(√(2π) t√N)) e^{−t²N/2}
  = 1 − (1/√(2π)) e^{ln(K−1) − ln t − (1/2)ln N − (t²/2)N} ≜ Pr(K, N),    (21)
where t = min_{i=1,··· ,K−1} vi / √(p(i)(1 − p(i)) + p(i+1)(1 − p(i+1))), vi = p(i) − p(i+1), and the
approximation Φ(−x) ≈ (1/(√(2π) x)) e^{−x²/2} (x ≫ 0) is utilized.
Utilizing p(i)(1 − p(i)) + p(i+1)(1 − p(i+1)) ≤ 1/2, we define t̃ satisfying
t̃ = min_{i=1,··· ,K−1} vi ≤ (√2/2) t.    (22)
We conjecture that t̃ is on the order of K^{−α}, i.e., t̃ = O(K^{−α}), which means that there exists a constant
ct such that
t̃ ≈ ct K^{−α}.    (23)
In the following text, we show that we can construct h such that t̃ = O(K^{−1}) and t̃ = O(K^{−2}).
According to (22) and (23), the approximation Pr(K, N) (21) can be further simplified and relaxed as
P̃r(K, N) = 1 − (1/(2√π)) e^{ln(K−1) − ln t̃ − (1/2)ln N − t̃²N}    (24)
          ≈ 1 − (1/(2√π ct)) e^{(1+α)ln K − (1/2)ln N − (ct²/K^{2α})N}.    (25)
From (25), the exponent (1+α)ln K − (1/2)ln N − (ct²/K^{2α})N must be far less than 0 for the recovery
of the permutation matrix. Given that N is large, the term −(1/2)ln N is small compared to N. Thus
(1+α)ln K − (ct²/K^{2α})N < 0 will ensure that the permutation matrix can be recovered with high
probability. Simplifying (1+α)ln K − (ct²/K^{2α})N < 0 yields
N > ((1+α)/ct²) K^{2α} ln K.    (26)
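For quick sanity checks, (25) and (26) can be evaluated directly, as in the following sketch; the numbers in the usage line refer to the ramp-signal case reported later in Section VI (α = 1, fitted ct ≈ 0.6717, K = 20), and the function names are ours.

```python
import numpy as np


def required_quantizers(K, alpha, c_t):
    # Eq. (26): N ensuring permutation recovery with high probability.
    return (1 + alpha) / c_t ** 2 * K ** (2 * alpha) * np.log(K)


def pr_tilde(K, N, alpha, c_t):
    # Relaxed approximation (25) of the recovery probability.
    expo = (1 + alpha) * np.log(K) - 0.5 * np.log(N) - c_t ** 2 / K ** (2 * alpha) * N
    return 1 - np.exp(expo) / (2 * np.sqrt(np.pi) * c_t)


print(required_quantizers(20, 1, 0.6717))   # roughly 5.3e3 for the ramp-signal case
```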
The following cases are examples to illustrate t̃ = O(K^{−α}). For simplicity, we assume 1 − q0 − q1 > 0,
τ = ch (c < θ) and
a ≜ (θ − c)/σw > 0.    (27)
1) t̃ = O(K^{−1}): Let h be the shape of a ramp signal such that hi = u − (u − l)(i − 1)/(K − 1) (u > |l|),
and wij ∼ N(0, σw²). Then the ordered sequence p(i) = pi, and t̃ can be approximated as
t̃ = min_{i=1,··· ,K−1} (pi − pi+1)
  = ( a(1 − q0 − q1)(u − l)/(K − 1) ) min_{i=1,··· ,K−1} fw(aξi)
  ≈ a(1 − q0 − q1)(u − l) fw(au)/(K − 1)
  ≈ ct K^{−1},    (28)
where the mean value theorem is utilized with ξi ∈ (hi+1, hi), ξ1 ≈ h1 = u is utilized when K is large, and
ct = a(1 − q0 − q1)(u − l) fw(au).    (29)
Therefore t̃ can be written in the form of (23).
2) t̃ = O(K^{−2}): Let hi be independently drawn from the same distribution as wij/σw. The CDF of pi is
Fpi(x) = Pr(pi ≤ x)
       = Pr( q0 + (1 − q0 − q1) Fw(a hi) ≤ x )
       = Pr( hi ≤ (1/a) Fw⁻¹( (x − q0)/(1 − q0 − q1) ) )
       = Fw( (1/a) Fw⁻¹( (x − q0)/(1 − q0 − q1) ) ).    (30)
In this case, we conjecture that t̃ = O(K^{g(a)}), where g(a) is a function of a, and the numerical results
under different a are shown in Fig. 2. In addition, the case in which h is the shape of a sinusoidal signal
is also presented in Fig. 3.
[Fig. 2: The relationship of t̃ and K under different a. Note that q0 = q1 = 0, hi ∼ N(0, 1) and wij ∼ N(0, σw²).]
Now we prove that t̃ = O(K^{−2}) under certain conditions. Given that hi and wij/σw are i.i.d. random
variables and a = 1, the CDF Fpi(x) = (x − q0)/(1 − q0 − q1), and the PDF of pi is
fpi(x) = 1/(1 − q0 − q1) for q0 ≤ x ≤ 1 − q1, and 0 otherwise.    (31)
[Fig. 3: The relationship of t̃ and K under different a. Note that q0 = q1 = 0, hi = sin(2πxi), xi ∼ U(0, 1) and wij ∼ N(0, σw²).]
Then the variates p(1), p(2), · · · , p(K) are distributed as K descending order statistics from a uniform
(q0, 1 − q1) parent. For x ≤ (1 − q0 − q1)/(K − 1), the CDF of t̃ can be derived as [31] (page 135,
equation (6.4.3))
Ft̃(x) = Pr( min_{i=1,··· ,K−1} vi ≤ x )
      = 1 − Pr(v1 > x, v2 > x, · · · , vK−1 > x)
      = 1 − ( 1 − (K − 1)x/(1 − q0 − q1) )^K.    (32)
For x ≥ (1 − q0 − q1)/(K − 1), Ft̃(x) = 1. Then the PDF of t̃ is
ft̃(x) = ( K(K − 1)/(1 − q0 − q1) ) ( 1 − (K − 1)x/(1 − q0 − q1) )^{K−1} for 0 ≤ x ≤ (1 − q0 − q1)/(K − 1), and 0 otherwise.    (33)
The expectation of t̃ is
E[t̃] = ∫_0^1 x ft̃(x) dx = ∫_0^{(1−q0−q1)/(K−1)} x ft̃(x) dx
     = ( K(K − 1)/(1 − q0 − q1) ) ∫_0^{(1−q0−q1)/(K−1)} x ( 1 − (K − 1)x/(1 − q0 − q1) )^{K−1} dx
     = (1 − q0 − q1)/(K² − 1).    (34)
Hence the probability that t̃ falls into [c1/K², c2/K²] is
Pr(c1/K² ≤ t̃ ≤ c2/K²) = Ft̃(c2/K²) − Ft̃(c1/K²)
  = ( 1 − c1(K − 1)/((1 − q0 − q1)K²) )^K − ( 1 − c2(K − 1)/((1 − q0 − q1)K²) )^K.    (35)
When K is large, (K − 1)/K ≈ 1 and (1 − 1/(cK))^{cK} ≈ 1/e for any constant c > 0. Equation (35) can be
approximated as
Pr(c1/K² ≤ t̃ ≤ c2/K²) ≈ e^{−c1/(1−q0−q1)} − e^{−c2/(1−q0−q1)}.    (36)
Provided that q0 = q1 = 0, when c1 = 0.1 and c2 = 10, Pr(0.1/K² ≤ t̃ ≤ 10/K²) ≈ 0.94; when
c1 = 0.01 and c2 = 100, Pr(0.01/K² ≤ t̃ ≤ 100/K²) ≈ 0.99. It can be seen that t̃ falls near the order
of magnitude of K^{−2} with high probability. Thus it is reasonable to conclude that t̃ = O(K^{−2}).
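The closed form (34) is easy to validate by simulation; the short Monte Carlo sketch below (our own check, not part of the paper's experiments) draws the pi uniformly on (q0, 1 − q1) and compares the average minimum gap of their order statistics with (1 − q0 − q1)/(K² − 1).

```python
import numpy as np

rng = np.random.default_rng(1)
K, q0, q1, trials = 20, 0.05, 0.05, 20000
gaps = []
for _ in range(trials):
    p = np.sort(rng.uniform(q0, 1 - q1, size=K))[::-1]   # descending order statistics
    gaps.append(np.min(p[:-1] - p[1:]))                  # t_tilde for this realization
print(np.mean(gaps), (1 - q0 - q1) / (K ** 2 - 1))       # the two values should agree
```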
According to the definition of pi in (3) and equations (22) and (23), ct ∝ 1 − q0 − q1. From (26), the
number of quantizers Nreq required for permutation matrix recovery with high probability satisfies
Nreq ∝ 1/(1 − q0 − q1)².    (37)
From (37), one can conclude that the number of quantizers needed for permutation matrix recovery with
high probability is 1/(1 − q0 − q1)² times that of the unflipped case where q0 = q1 = 0.
VI. NUMERICAL SIMULATIONS
In this section, numerical experiments are conducted to evaluate the theoretical results. For simplicity,
the distribution of the noise wij is selected as the Gaussian distribution N(0, σw²).
A. Parameter estimation
For the first two experiments, we evaluate the performance of the ML estimators proposed in Section IV.
Parameters are set as follows: K = 20, θ = 1, σw² = 1, ∆ = 2, q0 = 0.05, q1 = 0.05, and the tolerance
parameter ε in Algorithm 1 is 10⁻⁷. The number of Monte Carlo trials is 5000.
For the first experiment, the MSE performance of Algorithm 2 is evaluated in Fig. 4. We let τ = 0.5h,
which is a special case mentioned in Proposition 2. The coefficient vector h is equispaced with
h = [−1.50, −1.29, −1.08, · · · , 2.50]^T, which corresponds to a ramp signal. It can be seen that h does not
satisfy the condition in Proposition 3, thus the model may be identifiable. One can observe that the ML
estimator from labeled data always works well. Given a limited number of quantizers, there is an obvious
gap between the MSEs of two estimators. As the number of quantizers increases, the performance of the
estimator from unlabeled data approaches that from labeled data.
[Fig. 4: MSE of θ vs. number of quantizers for the ML estimators from labeled and unlabeled data, compared with the CRLB (8) for the ramp signal.]
For the second experiment, the MSE performance of Algorithm 1 (for the general case) is evaluated in
Fig. 5. The elements of the vector h describe the shape of a sinusoidal signal such that hi = sin(2πxi),
where xi is drawn independently and randomly from the uniform distribution U(0, 1) and then sorted
in ascending order. The elements of the vector τ are drawn independently and randomly from the
uniform distribution U(−∆, ∆). It can be seen that when N < 80, good initial points improve the
MSE performance of the alternating maximization algorithm from unlabeled data. As N increases to 80,
the MSE performances of both unlabeled ML estimators approach a common level which is larger than
that achieved with labeled data. Finally, the MSEs of both estimators from unlabeled data approach that
from labeled data around N = 3 × 10⁴.
B. Signal detection
In Fig. 6, the relationship between PD and the number of quantizers N is presented. Parameters are
consistent with the first experiment, except that σw² = 9 and PFA = 0.05.
In subgraph (a), h and τ are the same as those in the first experiment. It can be seen that the number
of quantizers has a significant effect on the detection probability. As N increases, the performance of
all the detectors improves, and the detection performance of the unlabeled GLRT approaches that of the
labeled GLRT. In subgraph (b), h and τ are the same as those in the second experiment, and similar
phenomena are observed. It seems that in this case little is gained by good initialization.
[Fig. 5: MSE of θ vs. number of quantizers for the three ML estimators from labeled data, unlabeled data via initial points ±∆ and unlabeled data via good initial points (18), compared with the CRLB (8) for the sinusoidal signal.]
[Fig. 6: PD vs. number of quantizers N for the ramp signal in subgraph (a) and the sinusoidal signal in subgraph (b).]
C. Permutation matrix recovery
In this subsection, the approximations for permutation matrix recovery are verified. Parameters are set
as follows: K = 20, θ = 1.5, ∆ = 2, q0 = 0, q1 = 0 and σw² = 1. The number of Monte Carlo trials is
1000.
First, the relationship between t and t̃ (22) and the conjecture on t̃ (23) are illustrated in three cases.
From Fig. 7, one observes that t can be approximated as √2 t̃ in practice. For a ramp signal,
h = [−0.800, −0.705, −0.610, · · · , 1.000]^T and τ = 0.5h, and t ≈ √2 ce/K, where ct = ce = 0.4355 is
evaluated via (29). Because of the gap between t and √2 ce/K, we use linear regression to fit t and obtain
cea = 0.6717, which is much more accurate than ce and will be utilized later to predict the number of
quantizers for permutation matrix recovery. For randomly generated h, h is drawn from the standard normal
distribution and τ = 0.5h. It can be seen that t can be approximated by 1/K². For a sinusoidal signal,
h and τ are drawn in the same way as in the second experiment. We use linear regression and obtain
t ≈ 0.71/K^{2.23} ≈ √2 t̃ = √2 ct,s/K^{αt,s}, with ct,s = 0.5020 and αt,s = 2.23.
[Fig. 7: The relationship of t and K, including the equispaced, randomly generated and sinusoidal h cases.]
Next, the empirical permutation recovery probability Pr(Π̂ML = Π0) versus N or K is presented in
Fig. 8, and the theoretical approximations Pr(K, N) (21) and P̃r(K, N) (24) are plotted for comparison.
In subgraphs (a), (b) and (c), we set K = 20, while in subgraph (d), we set N = 10⁴. All h are drawn
in the same way as in the second experiment. We also evaluate the empirical permutation matrix recovery
probability in the case of unknown θ, which shows negligible difference compared to the known θ case.
In subgraph (a), it can be seen that the permutation matrix of the ramp signal can be recovered with
high probability given N ≥ 5000. From N > ((1+α)/ct²) K^{2α} ln K (26) with ct = ce = 0.4355 and
α = 1, one can conclude that N > (2/0.4355²) K² ln K |_{K=20} ≈ 12636, which is more than twice 5000.
Utilizing the fitted parameter cea, one obtains the more accurate result that
N > (2/0.6717²) K² ln K |_{K=20} ≈ 5312 ensures permutation matrix recovery with high probability. For
random h, N > 3K⁴ ln K |_{K=20} ≈ 1.438 × 10⁶ ensures recovery with high probability, which is not
accurate enough, as subgraph (b) shows that N ≈ 10⁵ is enough for recovery of the permutation matrix.
In subgraph (c), it is shown that N ≈ 10⁶ is enough for recovery of the permutation matrix, which is also
inaccurate compared to the fitted result for the sinusoidal signal
N > (3.23/0.5020²) K^{4.46} ln K |_{K=20} ≈ 2.437 × 10⁷. The numerical results show that the theoretical
bound Pr(K, N) is accurate in predicting the N required for permutation matrix recovery with high
probability, while P̃r(K, N) may be too conservative in predicting the number of quantizers ensuring
perfect permutation matrix recovery. In subgraph (d), 10000 = N > (2/0.6717²) K² ln K |_{K=26} ≈ 9763,
thus K ≤ 26 will ensure permutation matrix recovery with high probability, which is consistent with the
numerical results.
[Fig. 8: Pr(Π̂ML = Π0) vs. N or K for the ramp signal in subgraphs (a) and (d), randomly generated h in subgraph (b) and the sinusoidal signal in subgraph (c). Pr(K, N) and P̃r(K, N) are evaluated via (21) and (24), respectively.]
In Fig. 9, the relationship between the flipping probabilities (q0, q1) and the number of quantizers Nreq (37) required
for permutation matrix recovery with high probability is verified. Parameters are the same as those in
Fig. 8-(a) except for (q0 , q1 ). We use the result of the experiment in which q0 = q1 = 0 to predict those
in which q0 = q1 = 0.05, q0 = q1 = 0.1 and q0 = q1 = 0.15, and plot the experimental results for
comparison. It can be seen that the predictions are basically consistent with the experimental results,
which verifies (37).
[Fig. 9: Pr(Π̂ML = Π0) vs. number of quantizers N for the ramp signal under different flipping probabilities (q0, q1).]
VII. CONCLUSION
We study a scale parameter estimation and signal detection problem from unlabeled quantized data for
a canonical (known signal shape) sensing model. A sufficient condition under which the signal amplitude
estimation problem can be solved efficiently is provided. It is also shown that in some settings the
model can even be unidentifiable. Given that the number of quantizers is limited, the performance of the
unlabeled estimator via reordering and alternating maximization algorithms is good, although there is a
gap between the performances of labeled and unlabeled ML estimators. In addition, good initial points
are provided to improve the performance of an alternating maximization algorithm for general estimation
problems. As the number of quantizers increases, the performance of the unlabeled estimator approaches
that of the labeled estimator due to the recovery of permutation matrix.
Furthermore, the performance of GLRT detector under unlabeled samples is evaluated, and numerical
results show that the performance degradation of the GLRT detector under unlabeled samples is significant
in noisy environments, compared to the GLRT detector with labeled samples given that the number
of quantizers is small. As the number of quantizers increases, the performance of the GLRT under
unlabeled samples approaches that of the GLRT detector under labeled samples. The explicit approximated
permutation matrix recovery probability predicts that in order to find the true label of K time indexes,
the number of quantizers N should be on the order of K^{2α} log K, where α is a constant depending on
the signal shape and the distribution of noise.
REFERENCES
[1] P. Pradhan, K. Nagananda, P. Venkitasubramaniam, S. Kishore and R. S. Blum, “GPS spoofing attack characterization and
detection in smart grids,” Communications and Network Security (CNS), 2016 IEEE Conference on, pp. 391-395, 2016.
[2] T. E. Humphreys, B. M. Ledvina, M. L. Psiaki, B. W. O’hanlon and P. M. Kintner, “Assessing the spoofing threat:
development of a portable GPS civilian spoofer,” in Proc. Int. Tech. Meet. Satellite Div. The Ins. Navigation, pp. 2314-2325,
2008.
[3] Q. Zeng, H. Li and L. Qian,“GPS spoofing attack on time synchronization in wireless networks and detection scheme
design,” in MILCOM’12, pp. 1-5, 2012.
[4] Z. Zhang, S. Gong, A. D. Dimitrovski and H. Li, “Time synchronization attack in smart grid: Impact and analysis,” IEEE
Trans. Smart Grid, vol. 4, no. 1, pp. 87-98, 2013.
[5] S. Challa, R. J. Evans and X. Wang, “A Bayesian solution and its approximations to out-of-sequence measurement problems,”
Information Fusion, vol. 4, no. 3, pp. 185-199, 2003.
[6] L. Schenato, “Optimal estimation in networked control systems subject to random delay and packet drop,” IEEE Trans.
Autom. Control, vol. 53, no. 5, pp. 1311-1317, 2008.
[7] L. M. Millefiori, P. Braca, K. Bryan and P. Willett, “Adaptive filtering of imprecisely time-stamped measurements with
application to AIS networks,” in Proc. of the 18th Intern. Conf. on Inform. Fusion (FUSION), pp. 359-365, 2015.
[8] V. Emiya, A. Bonnefoy, L. Daudet and R. Gribonval, “Compressed sensing with unknown sensor permutation,” ICASSP,
pp. 1040-1044, 2014.
[9] J. Unnikrishnan, S. Haghighatshoar and M. Vetterli, “Unlabeled sensing: Solving a linear system with unordered
measurements,” Communication, Control, and Computing (Allerton), 2015 53rd Annual Allerton Conference on. IEEE,
pp. 786-793, 2015.
[10] A. Pananjady, M. J. Wainwright and T. A. Courtade, “Linear regression with an unknown permutation: statistical and
computational limits,”Communication, Control, and Computing (Allerton), 2015 53rd Annual Allerton Conference on. IEEE,
pp. 417-424, 2016.
[11] A. Pananjady, M. J. Wainwright and T. A. Courtade, “Denoising linear models with permutated data,”
http://arxiv.org/abs/1704.07461, 2017.
[12] A. Abid, A. Poon and J. Zou, “Linear regression with shuffled labels,” http://arxiv.org/abs/1705.01342, 2017.
[13] S. Haghighatshoar and G. Caire, “Signal recovery from unlabeled samples,” ISIT, pp. 451-455, 2017.
[14] L. Keller, M. J. Siavoshani, C. Fragouli and K. Argyraki, “Identity aware sensor networks,” Proceedings - IEEE INFOCOM,
pp. 2177-2185, 2009.
[15] S. Marano, V. Matta, P. Willett, P. Braca and R. S. Blum, “Hypothesis testing in the presence of Maxwell’s daemon: Signal
detection by unlabeled observations,” ICASSP, pp. 3286-3290, 2017.
[16] P. Braca, S. Marano, V. Matta, P. Willett, “Asymptotic efficiency of the PHD in multitarget/multisensor estimation,” IEEE
Journal of Selected Topics in Signal Processing, vol. 7, no. 3, pp. 553-564, 2013.
[17] J. Zhu, H. Cao, C. Song and Z. Xu, “Parameter estimation via unlabeled sensing using distributed sensors,” IEEE commun.
lett., vol. 21, no. 10, pp. 2130-2133, 2017.
[18] J. Unnikrishnan, S. Haghighatshoar, M. Vetterli, “Unlabeled sensing with random linear measurements,” available at
https://arxiv.org/pdf/1512.00115.pdf, 2015.
[19] A. Pananjady, M. J. Wainwright and T. A. Courtade, “Linear regression with an unknown permutation: statistical and
computational limits,” available at https://arxiv.org/abs/1608.02902, 2016.
[20] S. Haghighatshoar and G. Caire, “Signal recovery from unlabeled samples,” available at https://arxiv.org/pdf/1701.08701.pdf, 2017.
[21] O. Ozdemir and P. K. Varshney, “Channel aware target location with quantized data in wireless sensor networks,” IEEE
Trans. Signal Process., vol. 57, pp. 1190-1202, 2009.
[22] H. C. Papadopoulos, G. W. Wornell and A. V. Oppenheim, “Sequential signal encoding from noisy measurements using
quantizers with dynamic bias control,” IEEE Trans. Inf. Theory, vol. 47, no. 3, pp. 978-1002, Mar. 2001.
[23] A. Ribeiro and G. B. Giannakis, “Bandwidth-constrained distributed estimation for wireless sensor networks-part I: Gaussian
case,”IEEE Trans. Signal Process., vol. 54, no. 3, pp. 1131-1143, 2006.
[24] A. Ribeiro and G. B. Giannakis, “Bandwidth-constrained distributed estimation for wireless sensor networks-part II:
unknown probability density function,”IEEE Trans. Signal Process., vol. 54, no. 7, pp. 2784-2796, 2006.
[25] S. Boyd and L. Vandenberghe, Convex Optimization, Cambridge University Press, 2004.
[26] S. M. Kay, Fundamentals of Statistical Signal Processing, Volume I: Estimation Theory, Englewood Cliffs, NJ: Prentice
Hall, 1993.
[27] S. M. Kay, Fundamentals of Statistical Signal Processing, Volume II: Detection Theory, Englewood Cliffs, NJ: Prentice
Hall, 1993.
[28] S. A. Kassam, Signal Detection in Non-Gaussian Noise, World Publishing Corp., 1992.
[29] D. P. Bertsekas, Nonlinear Programming, 2nd ed., Athena Scientific, Belmont, MA, 1999.
[30] A. Papoulis and S. U. Pillai, Probability, Random Variables, and Stochastic Processes, Fourth Edition, 2002.
[31] H. A. David and H. N. Nagaraja, Order Statistics: Third Edition, John Wiley, pp. 133-135, 2003.
[32] E. L. Lehmann, Elements of large-sample theory, Springer-Verlag New York, inc., pp. 456-457, 1999.
Reliable Uncertain Evidence Modeling in
Bayesian Networks by Credal Networks
arXiv:1802.05639v1 [] 15 Feb 2018
Sabina Marchetti
Sapienza University of Rome
Rome (Italy)
[email protected]
Alessandro Antonucci
IDSIA
Lugano (Switzerland)
[email protected]
February 16, 2018
Abstract
A reliable modeling of uncertain evidence in Bayesian networks based
on a set-valued quantification is proposed. Both soft and virtual evidences
are considered. We show that evidence propagation in this setup can be reduced to standard updating in an augmented credal network, equivalent to a
set of consistent Bayesian networks. A characterization of the computational
complexity for this task is derived together with an efficient exact procedure
for a subclass of instances. In the case of multiple uncertain evidences over
the same variable, the proposed procedure can provide a set-valued version
of the geometric approach to opinion pooling.
1 Introduction
Knowledge-based systems are used in AI to model relations among the variables
of interest for a particular task, and provide automatic decision support by inference algorithms. This can be achieved by joint probability mass functions. When
a subset of variables is observed, belief updating is a typical inference task that
propagates such (fully reliable) evidence. Whenever the observational process is
unable to clearly report a single state for the observed variable, we refer to uncertain evidence. This might take the form of a virtual instance, described by
the relative likelihoods for the possible observation of every state of a considered
variable [25]. Also, soft evidence [30] denotes any observational process returning
a probabilistic assessment, whose propagation induces a revision of the original
model [21]. Bayesian networks are often used to specify joint probability mass
functions implementing knowledge-based systems [22]. Full, or hard [30], observation of a node corresponds to its instantiation in the network, followed by
belief updating. Given virtual evidence on some variable, the observational process can be modeled à la Pearl in Bayesian networks: an auxiliary binary child
of the variable is introduced, whose conditional mass functions are proportional
to the likelihoods [25]. Instantiation of the auxiliary node yields propagation of
virtual evidence, and standard inference algorithms for Bayesian networks can be
used [22]. Something similar can be done with soft evidence, but the quantification of the auxiliary node should be based on additional inferences in the original
network [9].
In the above classical setup, sharp probabilistic estimates are assumed for the
parameters modeling an uncertain observation. We propose instead a generalized
set-valued quantification, with interval-valued likelihoods for virtual evidence and
sets of marginal mass functions for soft evidence. This offers a more robust modeling of observational processes leading to uncertain evidence. To this purpose,
we extend the transformations defined for the standard case to the set-valued case.
The original Bayesian network is converted into a credal network [12], equivalent
to a set of Bayesian networks consistent with the set-valued specification. We
characterize the computational complexity of the credal modeling of uncertain
evidence in Bayesian networks, and propose an efficient inference scheme for a
special class of instances. The discussion is then specialized to opinion pooling,
and our techniques are used to generalize geometric functionals to support set-valued
opinions.
1.1 Related Work
Model revision based on uncertain evidence is a classical topic in AI. Entropy-based techniques for the absorption of uncertain evidence were proposed in the
Bayesian networks literature [30, 26], as well as for the pooling of convex sets
of probability mass functions [1]. Yet, this approach was proved to fail standard
postulates for revision operators in generalized settings [20]. Uncertain evidence
absorption has been also considered in the framework of generalized knowledge
representation and reasoning [17]. The discussion was specialized to evidence
theory [32, 23], although revision based on uncertain instances with graphical
models becomes more problematic and does not give a direct extension of the
Bayesian networks formalism [28]. Finally, credal networks have been considered
in the model revision framework [13]. Yet, these authors consider the effect of a
sharp quantification of the observation in a previously specified credal network,
while we consider the opposite situation of a Bayesian network for which credal
uncertain evidence is provided.
2 Background
2.1 Bayesian and Credal Networks
Let X be any discrete variable. Notation x and ΩX is used, respectively, for a
generic value and for the finite set of possible values of X. If X is binary, we
set ΩX := {x, ¬x}. We denote as P (X) a probability mass function (PMF) and
as K(X) a credal set (CS), defined as a set of PMFs over ΩX . We remove inner
points from CSs, i.e. those which can be obtained as convex combinations of other
points, and assume the CS is finite after this operation. The CS K0(X), whose convex
hull includes all PMFs over ΩX, is called vacuous.
Given another variable Y , define a collection of conditional PMFs as P (X|Y ) :=
{P(X|y)}_{y∈ΩY}. P(X|Y) is called a conditional probability table (CPT). Similarly,
a credal CPT (CCPT) is defined as K(X|Y ) := {K(X|y)}y∈ΩY . An extensive
CPT (ECPT) is a finite collection of CPTs. A CCPT can be converted into an
equivalent ECPT by considering all the possible combinations from the elements
of the CSs.
Given a joint variable X := {X0 , X1 , . . . , Xn }, a Bayesian network (BN)
[25] serves as a compact way to specify a PMF over X. A BN is represented by
a directed acyclic graph G, whose nodes are in one-to-one correspondence with
the variables in X, and a collection of CPTs {P (Xi |Πi )}ni=0 , where Πi is the joint
variable of the parents of Xi according to G. Under the Markov condition, i.e. each
variable is conditionally independent of its non-descendant non-parents given its
parents, the joint PMF P(X) factorizes as P(x) := ∏_{i=0}^{n} P(xi|πi), where the
values of xi and πi are those consistent with x, for each x ∈ ΩX = ×_{i=0}^{n} ΩXi.
A credal network (CN) [12] is a BN whose CPTs are replaced by CCPTs (or
ECPTs). A CN specifies a joint CS K(X), obtained by considering all the joint
PMFs induced by the BNs with CPTs in the corresponding CCPTs (or ECPTs).
The typical inference task in BNs is updating, defined as the computation of
the posterior probabilities for a variable of interest given hard evidence about some
other variables. Without loss of generality, let the variable of interest and the observation be, respectively, X0 and Xn = xn . Standard belief updating corresponds
to:
P(x0|xn) = ( ∑_{x1,...,xn−1} ∏_{i=0}^{n} P(xi|πi) ) / ( ∑_{x0,x1,...,xn−1} ∏_{i=0}^{n} P(xi|πi) ).    (1)
Updating is NP-hard in general BNs [11], although efficient computations can be
performed in polytrees [25] by message propagation routines [22].
CN updating is similarly intended as the computation of lower and upper
bounds of the updated probability in Eq. (1) with respect to K(X). Notation
P̲(x0|xn) (P̄(x0|xn)) is used to denote lower (upper) bounds. CN updating extends BN updating and it is therefore NP-hard [14]. Contrary to the standard
setting, inference in generic polytrees is still NP-hard [24], with the notable exception of those networks whose variables are all binary [18].
2.2 Virtual and Soft Evidence
Eq. (1) gives the updated beliefs about queried variable X0 . The underlying assumption is that Xn has been the subject of a fully reliable observational process,
and its actual value is known to be xn . This is not always realistic. Evidence
might result from a process which is unreliable and only the likelihoods for the
possible values of the observed variable may be assessed (e.g., the precision and
the false discovery rate for a positive medical test). Virtual evidence (VE) [25]
applies to such type of observation. Notation λXn := {λxn }xn ∈ΩXn identifies a
VE, λxn being the likelihood of the observation provided (Xn = xn ). Given VE,
the analogous of Eq. (1) is:
PλXn(x0) := ( ∑_{xn} λxn P(x0, xn) ) / ( ∑_{xn} λxn P(xn) ),    (2)
where the probabilities in the right-hand side are obtained by marginalization
of the joint PMF of the BN. Eq. (2) can be equivalently obtained by augmenting the BN with auxiliary binary node DXn as a child of Xn . By specifying
P (dXn |xn ) := λxn for each xn ∈ ΩXn , it is easy to check that P (x0 |dXn ) =
PλXn (x0 ), i.e. Eq. (2) can be reduced to a standard updating in an augmented BN.
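As a minimal illustration of this equivalence, the sketch below builds a two-variable BN (the numbers are arbitrary) and shows that Eq. (2) and conditioning on Pearl's auxiliary node produce the same posterior.

```python
import numpy as np

# Hypothetical two-node BN X0 -> X1 with binary variables; all numbers are illustrative.
P_x0 = np.array([0.3, 0.7])                     # P(X0)
P_x1_given_x0 = np.array([[0.9, 0.1],           # P(X1 | X0 = 0)
                          [0.2, 0.8]])          # P(X1 | X0 = 1)
lam = np.array([0.6, 0.3])                      # virtual evidence on X1

joint = P_x0[:, None] * P_x1_given_x0           # P(X0, X1)

# Direct application of Eq. (2).
post_direct = (joint @ lam) / (joint @ lam).sum()

# Auxiliary binary child D of X1 with P(d | x1) := lam[x1]; condition on D = d.
joint_with_d = joint * lam[None, :]             # P(X0, X1, d)
post_aux = joint_with_d.sum(axis=1) / joint_with_d.sum()

print(post_direct, post_aux)                    # identical posteriors over X0
```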
The notion of soft evidence (SE) refers to a different situation, in which the
observational process returns an elicitation P ′ (Xn ) for the marginal PMF of Xn .
See [5] for a detailed discussion on the possible situations producing SE. If this is
the case, P ′(Xn ) is assumed to replace the original beliefs about Xn by Jeffrey’s
updating [21], i.e.
P′Xn(x0) := ∑_{xn} P(x0|xn) · P′(xn).    (3)
Eq. (3) for SE reduces to Eq. (1) whenever P ′ (Xn ) assigns all the probability
mass to a single value in ΩXn . The same happens for VE in Eq. (2), when all
the likelihoods are zero apart from the one corresponding to the observed value.
Although SE and VE refer to epistemologically different informational settings,
the following result provides means for a unified approach to their modeling.
Proposition 1 ([9]). Absorption of a SE P ′ (Xn ) as in Eq. (3) is equivalent to
Eq. (2) with a VE specified as:
λxn ∝ P′(xn)/P(xn),    (4)
for each xn ∈ ΩXn.¹
Vice versa, absorption of a VE λXn as in Eq. (2) is equivalent to Eq. (3) with
a SE specified as:
P′(xn) := λxn P(xn) / ∑_{xn} λxn P(xn),    (5)
for each xn ∈ ΩXn.
In the above setup for SE, states that are impossible in the original BN cannot
be revised, i.e. if P (xn ) = 0 for some xn ∈ ΩXn , then also P ′ (xn ) = 0 and any
value can be set for λxn . Vice versa, according to Eq. (5), a zero likelihood in a
VE renders impossible the corresponding state of the SE. Thus, at least a non-zero
likelihood should be specified in a VE. All these issues are shown in the following
example.
Example 1. Let X denote the actual color of a traffic light with ΩX := {g, y, r}.
Assume g (green) more probable than r (red), and y (yellow) impossible. Thus, for
instance, P (X) = [4/5, 0, 1/5]. We eventually revise P (X) by a SE P ′ (X), which
keeps yellow impossible and assigns the same probability to the two other states,
i.e. P ′ (X) = [1/2, 0, 1/2]. Because of Eq. (4), this can be equivalently achieved by
a VE λX ∝ {1, 1, 4}. Vice versa, because of Eq. (5), a VE λ̃X ∝ {1, 1, 5} induces
an updated Pλ̃ (X) = [4/9, 0, 5/9]. Such PMF coincides with P (X|dX ) in a twonode BN, with DX child of X, CPT P (DX |X) with P (dX |X) = [1/10, 1/10, 1/2]
and marginal PMF P (X) as in the original specification.
¹ VE is defined as a collection of likelihoods, which in turn are defined up to a multiplicative
positive constant. This clearly follows from Eq. (2). The relation in Eq. (4) is a proportionality and
not an equality just to make all the likelihoods smaller than or equal to one.
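Example 1 can be reproduced with a few lines of Python (a sketch using the numbers from the text; the likelihood assigned to the impossible state y is arbitrary):

```python
import numpy as np

P = np.array([4/5, 0.0, 1/5])           # P(X) over (g, y, r)
P_prime = np.array([1/2, 0.0, 1/2])     # soft evidence P'(X)

# Eq. (4): the equivalent virtual evidence, up to a positive constant.
lam = np.divide(P_prime, P, out=np.ones_like(P), where=P > 0)
print(lam / lam.min())                  # ratio of 4 between the r and g likelihoods

# Eq. (5): the VE {1, 1, 5} maps back to the soft evidence [4/9, 0, 5/9].
lam2 = np.array([1.0, 1.0, 5.0])
print(lam2 * P / np.sum(lam2 * P))
```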
3 Credal Uncertain Evidence
3.1 Credal Virtual Evidence
We propose credal VE (CVE) as a robust extension of sharp virtual observations.
Notation ΛXn is used here for the collection of intervals {[λ̲xn, λ̄xn]}_{xn∈ΩXn}. CVE updating is
defined as the computation of the bounds of Eq. (2) with respect to all VEs λXn
consistent with the interval constraints in ΛXn. Notation P̲ΛXn(x0) and P̄ΛXn(x0)
is used to denote these bounds. CVE absorption in BNs is done as follows.
Transformation 1. Given a BN over X and a CVE ΛXn, add a binary child DXn
of Xn and quantify its CCPT K(DXn|Xn) with constraints λ̲xn ≤ P(dXn|xn) ≤
λ̄xn.² A CN with a single credal node results.
By Tr. 1, CVE updating in a BN is reduced to CN updating.
Theorem 1. Given a CVE in a BN, consider the CN returned by Tr. 1. Then:
P̲(x0|dXn) = P̲ΛXn(x0),    (6)
and analogously for the upper bounds.
Standard VE can be used to model partially reliable sensors or tests, whose
quantification is based on sensitivity and specificity data. Since these data are not
always promptly/easily available (e.g., a pregnancy test whose failure can be only
decided later), a CVE with interval likelihoods can be quantified by the imprecise
Dirichlet model3 [6] as in the following example.
Example 2. The reference standard for diagnosis of anterior cruciate ligament
sprains is arthroscopy. In a trial, 40 patients coming in with acute knee pain are
examined using the Declan test [10]. Every patient also has an arthroscopy procedure
for a definitive diagnosis. Results are TP=17 (Declan positive, arthroscopy
positive), FP=3 (Declan positive, arthroscopy negative), FN=6 (Declan negative,
arthroscopy positive) and TN=14 (Declan negative, arthroscopy negative). Patients
visiting a clinic have prior sprain probability P(x) = 0.2. Given a positive
Declan, the imprecise Dirichlet model (see Footnote 3) with s = 1 corresponds
to the CVE λ̲x = 17/(23+1), λ̄x = (17+1)/(23+1), λ̲¬x = 3/(17+1), λ̄¬x = (3+1)/(17+1). The
bounds of the updated sprain probability with respect to the above constraints
are P̲ΛX(x) = 1/3, P̄ΛX(x) ≃ 0.53. A VE with frequentist estimates would have
produced instead PλX(x) ≃ 0.51.
2
For binary B, constraint l ≤ P (b) ≤ u defines a CS K(B) with elements P1 (B) := [l, 1 − l]
and P2 (B) := [u, 1 − u].
3
Given N observations of X, if n(x) of them report x, the lower bound of P(x) for the
imprecise Dirichlet model is n(x)/(N + s), and the upper bound is (n(x) + s)/(N + s), with s the
effective prior sample size.
3.2 Credal Soft Evidence
Analogous to CVE, credal soft evidence (CSE) on Xn can be specified by any
CS K′(Xn). Accordingly, CSE updating computes the bounds spanned by the
updating of all SEs based on PMFs consistent with the CS, i.e.

P̲′_{Xn}(x_0) := min_{P′(Xn) ∈ K′(Xn)} ∑_{xn} P(x_0 | xn) · P′(xn) ,    (7)

and analogously for the upper bound P̄′_{Xn}(x_0).
The shadow of a CS K(X) is a CS K̂(X) obtained from all the PMFs P̂(X)
such that, for each x ∈ ΩX:

min_{P(X) ∈ K(X)} P(x) ≤ P̂(x) ≤ max_{P(X) ∈ K(X)} P(x) .    (8)

A CS coinciding with its shadow is called shady. It is a trivial exercise to check
that CSs over binary variables are shady.4
The following result extends Pr. 1 to the imprecise framework.
Theorem 2. Absorption of a CSE with shady K′(Xn) is equivalent to that of a CVE
ΛXn such that:

λ̲_{xn} ∝ P̲′(xn) / P(xn) ,    (9)

where P̲′(xn) := min_{P′(Xn) ∈ K′(Xn)} P′(xn), and analogously for the upper bound.
Vice versa, absorption of a CVE ΛXn is equivalent to that of a CSE such that:

P̲′(xn) = P(xn) λ̲_{xn} / ( P(xn) λ̲_{xn} + ∑_{x′n ≠ xn} P(x′n) λ̄_{x′n} ) ,    (10)

and analogously with a swap between lower and upper likelihoods for the upper
bound.
By Th. 1 and 2, CSE updating in a BN is reduced to standard updating in a CN.
This represents a generalization of Pr. 1 to the credal case. For CSEs with non-shady CSs, the procedure is slightly more involved, as detailed by the following
result.
Proposition 2. Given a CSE K′(Xn) := {P′_i(Xn)}_{i=1}^{k} in a BN, add a binary child
DXn of Xn quantified by an ECPT {P_i(DXn|Xn)}_{i=1}^{k} such that P_i(dXn|xn) ∝
P′_i(xn) / P(xn) for each i = 1, . . . , k and xn ∈ ΩXn. Then:

P̲′_{Xn}(x_0) = P̲(x_0 | dXn) .    (11)

4
Following [8], a shadow is just the set of probability intervals induced by a generic CS.
To clarify these results, consider the following example.
Example 3. Consider the same setup as in Ex. 1. Let us revise the original
PMF P(X) by a CSE based on the shady CS K′(X) := {P′_1(X), P′_2(X)}, with
P′_1(X) := [0.6, 0, 0.4] and P′_2(X) := [0.4, 0, 0.6]. Th. 2 can be used to convert
such a CSE into a CVE ΛX := {2–3 : 1 : 8–12}. Vice versa, the beliefs induced
by the CVE Λ̃X := {3–5 : 1 : 8–10} are P̲_{Λ̃X}(g) = 3/5, P̄_{Λ̃X}(g) = 2/3, P̲_{Λ̃X}(y) =
P̄_{Λ̃X}(y) = 0, and P̲_{Λ̃X}(r) = 1/3, P̄_{Λ̃X}(r) = 2/5. These bounds may be equivalently obtained in a two-node CN with DX child of X and CCPT K(DX|X) such
that P(dX|X = g) ∈ [0.6, 1], P(dX|X = y) = 1, and P(dX|X = r) ∈ [0.8, 1].
Alternatively, following Pr. 2, absorption of K′(X) can be achieved by an ECPT
with two CPTs.
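The first direction of Th. 2 on Example 3 can be checked with a few lines of Python (our own helper function): the bounds of the shady CS are divided by the original PMF to obtain the interval likelihoods of the equivalent CVE.

```python
def cse_to_cve(prior, cs):
    """Eq. (9): interval likelihoods proportional to the bounds of P'(xn)/P(xn).
    cs is the list of extreme PMFs of a shady CS; states with P(xn) = 0 get an
    arbitrary positive likelihood (here 1.0)."""
    lam_lo, lam_hi = [], []
    for x, p in enumerate(prior):
        vals = [pmf[x] for pmf in cs]
        if p > 0:
            lam_lo.append(min(vals) / p)
            lam_hi.append(max(vals) / p)
        else:
            lam_lo.append(1.0)
            lam_hi.append(1.0)
    return lam_lo, lam_hi

prior = [4/5, 0.0, 1/5]                    # P(X) from Example 1
cs = [[0.6, 0.0, 0.4], [0.4, 0.0, 0.6]]    # shady CS K'(X) of Example 3
print(cse_to_cve(prior, cs))
# g: [0.5, 0.75], r: [2.0, 3.0] -- i.e. the CVE {2-3 : 1 : 8-12} after scaling by 4
```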
We point out that conservative updating (CU), a credal updating rule for reliable treatment of missing non-MAR data [15], falls as a special case in our formalism. CU is defined as:
P̲′_{Xn}(x_0) = min_{xn ∈ ΩXn} P(x_0 | xn) ,    (12)
and represents the most conservative approach to belief revision. A vacuous
CCPT is specified, with [0, 1] intervals for each value, either i) by Tr. 1, given a CVE
whose likelihoods take any value between zero and one,5 or ii) by straightforward
application of Th. 2, if a vacuous CSE K′_0(Xn) is provided. The resulting ECPT
with |ΩXn| CPTs6 corresponds to the CU implementation in [3]. Also, Eq. (7) reduces to Eq. (12), given a vacuous CSE. We can similarly proceed in the case of
incomplete observations, i.e., some values of Xn are recognized as impossible, but
no information can be provided about the other ones. If this is the case, we just
replace ΩXn with Ω′Xn ⊂ ΩXn.
4 Credal Probability Kinematics
Given two joint PMFs P (X) and P ′(X), we say that the latter comes from the
first by probability kinematics (PK) on the (coarse) partition of ΩX induced by
Xn if and only if P ′ (x|xn ) = P (x|xn ) for each x ∈ ΩX and xn ∈ ΩXn [16, 9].7
This is the underlying assumption in Eq. (3). If P ′(X) is replaced by a CS, PK is
generalized as follows.
5
As VE likelihoods are defined up to a positive multiplicative constant, we can set any positive
λ̄_{xn} provided that λ̲_{xn} = 0.
6
The induced ECPT contains all 2^{|ΩXn|} combinations of zeros and ones in the CPTs. Yet, only
those having a single one in the row associated to dXn remain after the convex hull.
7
Full consistency of P′ with the evidence inducing the revision process is not explicitly required. A more stringent characterization of PK was proposed, among others, by [31].
Definition 1. Let P(X) and K′(X) be, respectively, a joint PMF and a joint CS.
We say that K′(X) comes from P(X) by credal probability kinematics (CPK) on
the partition of ΩX induced by Xn if and only if it holds that P̲′(x|xn) = P̄′(x|xn) =
P(x|xn), for each x ∈ ΩX and xn ∈ ΩXn.
That is, any revision process based on (generalized) PK guarantees invariance
of the relevance of xn , for each xn ∈ ΩXn , to any other possible event in the
model, say x0 . The following consistency result holds for CSEs.
Theorem 3. Given a BN over X and a shady CSE K ′ (Xn ), convert the CSE into
a CVE as in Th. 2 and transform the BN into a CN by Tr. 1. Let K ′ (X, DXn )
be the joint CS associated to the CN. Then, K ′ (X|dXn ) comes from P (X) by
CPK on the partition induced by Xn . Moreover K ′ (Xn |dXn ) coincides with the
marginal CS in the CN.
5 Multiple Evidences
So far, we only considered the updating of a single CVE or CSE. We call uncertain
credal updating (UCU) of a BN the general task of computing updated/revised beliefs in a BN with an arbitrary number of CSEs, CVEs, and hard evidences as well.
Here, UCU is intended as iterated application of the procedures outlined above.
See for instance [17], for a categorization of iterated belief revision problems and
their assumptions. When coping with multiple VEs in a BN, it is sufficient to add
the necessary auxiliary children to the observed variables and quantify the CPTs
as described. We similarly proceed with multiple CVEs.
The procedure becomes less straightforward when coping with multiple SEs or
CSEs, since quantification of each auxiliary child by Eq. (4) requires a preliminary
inference step. As a consequence, iterated revision might not be invariant with
respect to the revision process scheme [31].
Additionally, with CSEs, absorption of the first CSE transforms the BN into
a CN, and successive absorption of other CSEs requires further extension of the
procedure in Th. 2. We leave such an extension as future work, and here we just
consider simultaneous absorption of all evidences. If this is the case, multiple
CSEs can be converted into CVEs and the inferences required for the quantification
of the auxiliary children are performed in the original BN.
5.1 Algorithmic and Complexity Issues
ApproxLP [2] is an algorithm for general CN updating based on linear programming. It provides an inner approximation of the updated intervals with the same
complexity as a BN inference on the same graph. Roughly, CN updating is reduced by ApproxLP to a sequence of linear programming tasks. Each is obtained
by iteratively fixing all the local models to single elements of the corresponding
CSs, while leaving a single variable free. It follows that the algorithm efficiently produces exact inferences whenever a CN has all local CSs made of a single element
apart from one. This is the case of belief updating with a single CVE/CSE.
5.2 Complexity Issues
Since standard BN updating of polytrees can be performed efficiently, the same
happens with VEs and/or SEs, as Tr. 1 does not affect the topology (nor the
treewidth) of the original network. Similarly, with multiply connected models,
BN updating is exponential in the treewidth, and the same happens with models
augmented by VEs and/or SEs.
As already noticed, with CNs, binary polytrees can be updated efficiently,
while updating ternary polytrees is already NP-hard. An important question is
therefore whether or not a similar situation holds for UCU in BNs. The (positive)
answer is provided by the two following results.
Proposition 3. UCU of polytree-shaped binary BNs can be solved in polynomial
time.
The proof of this proposition is trivial and simply follows from the fact that
the auxiliary nodes required to model CVE and/or CSE are binary (remember
that CSs over binary variables are always shady). The CN solving the UCU is
therefore a binary polytree that can be updated by the exact algorithm proposed in
[18].
Theorem 4. UCU of non-binary polytree-shaped BNs is NP-hard.
The proof of this theorem is based on a reduction to the analogous result for
CNs [24]. This already concerns models whose variables have no more than three
states and treewidth equal to two. In these cases, approximate inferences can be
efficiently computed by ApproxLP.
6 Credal Opinion Pooling
Consider the generalized case of m ≥ 1 overlapping probabilistic instances on
Xn . For each j = 1, . . . , m, let Pj′ (Xn ) denote the SE reported by the j-th source.
Straightforward introduction of m auxiliary nodes as outlined above would suffer from
confirmational dynamics, analogous to the well-known issue with posterior probability estimates in the naive Bayes classifier [27]. This might yield inconsistent revised beliefs, i.e., P̃′(Xn) falls outside the convex hull of {P′_j(Xn)}_{j=1}^{m}.
A most conservative approach to prevent such inconsistency adopts the convex hull of all the opinions [29]. In our formalism, this is just the CS K′(Xn) :=
{P′_j(Xn)}_{j=1}^{m}. Yet, consider any small ε > 0, and assume P′_1(xn) = ε, P′_2(xn) =
1 − ε, and P′_j(xn) = p ∈ (ε, 1 − ε) for each j = 3, . . . , m. Despite the consensus of all remaining sources on the sharp value p, the conservative approach above
would yield K′(Xn) ≃ K_0(Xn). To what extent this should be preferred to the
confirmational case is an open question.
A compromise solution might be offered by the geometric pooling operator
(or LogOp) [4]. Given a collection of positive weights {α_j}_{j=1}^{m}, with ∑_{j=1}^{m} α_j = 1,
the LogOp functional produces the PMF P̃′(Xn) such that:

P̃′(xn) ∝ ∏_{j=1}^{m} P′_j(xn)^{α_j} ,    (13)
for each xn ∈ ΩXn. P̃′(Xn) belongs to the convex hull of {P′_j(Xn)}_{j=1}^{m} for any
specification of the weights [1]. The overlapping SEs associated to the PMF in
Eq. (13) can be equivalently modeled by a collection of m VEs defined as follows.
Transformation 2. Consider a BN over X and a collection of SEs on Xn, {P′_j(Xn)}_{j=1}^{m}.
For each j = 1, . . . , m, augment the BN with a binary child D^{(j)}_{Xn} of Xn whose CPT
is such that P(d^{(j)}_{Xn} | xn) ∝ [ P′_j(xn) / P(xn) ]^{α_j}, with ∑_{j=1}^{m} α_j = 1.
The transformation is used for the following result.
Proposition 4. Consider the same inputs as in Tr. 2. Then:

P̃′_{Xn}(x_0) = P(x_0 | d^{(1)}_{Xn}, . . . , d^{(m)}_{Xn}) ,    (14)

where the probability on the left-hand side is obtained by the direct revision induced by P̃′(Xn), while the probability on the right-hand side of Eq. (14) has been
computed in the BN returned by Tr. 2.
The proof follows from the conditional independence of the auxiliary nodes
given Xn . Also, note how our proposal simultaneously performs pooling and
absorption of overlapping SEs.
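A small numerical check of this equivalence, restricted for simplicity to the revised marginal of Xn itself, is sketched below (the numbers and helper functions are our own illustration): pooling the SEs with LogOp, or absorbing the m virtual children of Tr. 2 one likelihood ratio at a time, yields the same PMF.

```python
def normalize(v):
    s = sum(v)
    return [x / s for x in v]

def logop(pmfs, weights):
    """Eq. (13): geometric pooling of the soft evidences."""
    pooled = [1.0] * len(pmfs[0])
    for pmf, a in zip(pmfs, weights):
        pooled = [p * (q ** a) for p, q in zip(pooled, pmf)]
    return normalize(pooled)

def absorb_children(prior, pmfs, weights):
    """Tr. 2: child j contributes the likelihood [P'_j(xn)/P(xn)]^alpha_j."""
    post = list(prior)
    for pmf, a in zip(pmfs, weights):
        post = [p * ((q / pr) ** a) for p, q, pr in zip(post, pmf, prior)]
    return normalize(post)

prior = [0.5, 0.3, 0.2]                     # marginal P(Xn) (our own numbers)
ses = [[0.7, 0.2, 0.1], [0.4, 0.4, 0.2]]    # soft evidences from two sources
weights = [0.5, 0.5]
print(logop(ses, weights))
print(absorb_children(prior, ses, weights)) # same PMF (up to rounding), as stated by Prop. 4
```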
Suppose m sources provide generalized CSEs about Xn, say {K′_j(Xn)}_{j=1}^{m}.
Let K̃′(Xn) denote the CS induced by LogOp as in Eq. (13), for each P′_j(Xn) ∈
K′_j(Xn), j = 1, . . . , m [1]. We generalize Tr. 2 as follows:
Transformation 3. Consider a BN over X and the collection of CSEs {K′_j(Xn)}_{j=1}^{m}.
For each j = 1, . . . , m, augment the BN with a binary child D^{(j)}_{Xn} of Xn, whose
CCPT is such that P̲(d^{(j)}_{Xn} | xn) ∝ [ P̲′_j(xn) / P(xn) ]^{α_j} and P̄(d^{(j)}_{Xn} | xn) ∝ [ P̄′_j(xn) / P(xn) ]^{α_j}.
This transformation returns a CN. A result analogous to Pr. 4 can be derived.
Theorem 5. Consider the same inputs as in Tr. 3. Then:

P̲̃′_{Xn}(x_0) = P̲(x_0 | d^{(1)}_{Xn}, . . . , d^{(m)}_{Xn}) ,    (15)
where the lower probability on the left-hand side has been computed by absorption of the single CSE K̃ ′ (Xn ) and the probability on the right-hand side has
been computed in the CN returned by Tr. 3. The same relation also holds for the
corresponding upper probabilities.
7 Conclusions
Credal, or set-valued, modeling of uncertain evidence has been proposed within
the framework of Bayesian networks. This procedure generalizes standard updating. More importantly, our proposal allows us to reduce the task of absorbing
uncertain evidence to standard updating in credal networks. Complexity results,
specific inference schemes, and generalized pooling procedures have also been
derived.
As future work, we intend to evaluate the proposed technique on knowledge-based decision-support systems based on Bayesian networks, in order to model unreliable
observational processes. Moreover, the proposed procedure should be extended
to the framework of credal networks, thus reconciling the orthogonal viewpoints
considered in this paper and in [13], and tackling the case of non-simultaneous
updating.
A Proofs
Proof of Th. 1. The proof follows from the analogous result with BNs. For any
BN consistent with the CN returned by Tr. 1, we have:

P(x_0 | dXn) = P(x_0, dXn) / P(dXn)
            = [ ∑_{xn} P(x_0 | xn) P(dXn | xn) P(xn) ] / [ ∑_{xn} P(dXn | xn) P(xn) ] .

As P(dXn | xn) reaches its minimum at λ̲_{xn}, for every xn ∈ ΩXn, the minimization
of the last term coincides with that of Eq. (2) and gives P̲_{ΛXn}(x_0), the other elements being constant. Analogous reasoning yields P̄_{ΛXn}(x_0).
For the proof of Th. 2, we need to introduce the following transformation and
lemma.
Transformation 4. Consider a CSE K′(Xn) in a BN. Let {P′_i(Xn), i = 1, . . . , n_v}
denote the elements of the CS K′(Xn).8 In the BN, compute the marginal PMF
P(Xn) with standard algorithms. Augment the BN with a binary node DXn, such
that ΠXn := {Xn}. Quantify the local model for DXn as an ECPT K(DXn|Xn),
specified as a set of n_v CPTs {P_i(DXn|Xn) : i = 1, . . . , n_v}. P_i(DXn|Xn) is
defined as:

P_i(dXn | xn) ∝ P′_i(xn) / P(xn) ,    (16)

for each i = 1, . . . , n_v. The same prescriptions provided after Pr. 1 for the case of
zero-probability events should be followed here.
Lemma 1. Given a CSE K′(Xn) in a BN, consider the CN returned by Tr. 4.
Then:

P̲(x_0 | dXn) = P̲′_{Xn}(x_0) ,    (17)

and analogously for the upper bound.
Proof. DXn is the only credal node in the CN. Thus:

P̲(x_0 | dXn) = min_{P(DXn|Xn) ∈ K(DXn|Xn)} P(x_0, dXn) / P(dXn) .    (18)
Let us rewrite Eq. (18) by: (i) explicitly enumerating the CPTs in the ECPT
K(DXn|Xn), (ii) making explicit the marginalization of Xn, (iii) exploiting the
fact that, by the Markov condition, we have conditional independence between
DXn and X0 given Xn. The result is:

min_{i=1,...,n_v} [ ∑_{xn} P(x_0 | xn) · P_i(d | xn) · P(xn) ] / [ ∑_{xn} P_i(d | xn) · P(xn) ] .    (19)
Thus, because of Eq. (16):

min_{i=1,...,n_v} [ ∑_{xn} P(x_0 | xn) · P′_i(xn) ] / [ ∑_{xn} P′_i(xn) ] .    (20)

As the denominator in Eq. (20) is one, we obtain Eq. (7). This proves the lemma.
8
Remember that in our definition of CS we remove the inner points of the convex hull.
We can now prove the second theorem.
Proof of Th. 2. Let us first prove the second part of the theorem. As a consequence of Pr. 1, each VE consistent with the CVE can be converted into a SE
defined as in Eq. (5). The CS implementing the CSE equivalent to the CVE is
therefore:

K′(Xn) := { P′(Xn) : P′(xn) = λ_{xn} P(xn) / ∑_{xn} λ_{xn} P(xn) , λ̲_{xn} ≤ λ_{xn} ≤ λ̄_{xn} ∀xn } .    (21)
The computation of P̲′(xn) is therefore a linearly constrained linear fractional
task. If P(xn) > 0, we can rewrite the objective function as:

P′(xn) = ( 1 + ∑_{x′n ≠ xn} λ_{x′n} P(x′n) / ( λ_{xn} P(xn) ) )^{−1} .    (22)
As f(α) = (1 + α)^{−1} is a monotone decreasing function of α, minimizing the
objective function is equivalent to maximizing:

∑_{x′n ≠ xn} λ_{x′n} P(x′n) / ( λ_{xn} P(xn) ) ,    (23)

and vice versa for the maximization. As each λ_{xn} can vary in its interval independently of the others, the maximum of the function in Eq. (23) is obtained by
maximizing the numerator and minimizing the denominator, i.e., for λ_{xn} = λ̲_{xn}
and λ_{x′n} = λ̄_{x′n}. This proves Eq. (10), which remains valid also for P(xn) = 0.
To prove the first part of the theorem, because of Lm. 1, we only need to prove
that the CN returned by Tr. 4 and the CN returned by Tr. 1 for the CVE specified
in Eq. (9) provide the same P̲(x_0 | dXn). This lower posterior probability in the
second CN rewrites as:

P̲(x_0 | dXn) = min_{λ̲_{xn} ≤ λ_{xn} ≤ λ̄_{xn}} [ ∑_{xn} P(x_0 | xn) λ_{xn} P(xn) ] / [ ∑_{xn} λ_{xn} P(xn) ] .    (24)
Again, this is a linearly constrained linear fractional task, which can be reduced to a linear task by [7]. In the linear task, the minimum is achieved when the
λ_{xn} corresponding to the maximum coefficient P(x_0 | xn) P(xn) of the numerator
of the objective function takes the minimum value λ̲_{xn}. But as λ̲_{xn} = min_i P′_i(xn) / P(xn),
we can equivalently obtain this value with the ECPT in the first CN. This proves
the first part of the theorem.
Proof of Th. 3. The result follows from the analogous result for PK. Thus, let us first
assume K′(Xn) composed of a single PMF P′(Xn). This means that the CSE degenerates into a standard SE. Let λXn denote the corresponding VE and consider
the augmented BN obtained by adding the auxiliary binary child DXn. Also, let
x ∈ ΩX be any configuration of the joint variable X. By the Markov condition:

P′(x | xn) = P(x | xn, dXn)
           = P(x | xn) P(xn) P(dXn | xn) / ( P(xn) P(dXn | xn) )
           = P(x | xn) .

For a CSE K′(Xn) including more than one PMF, we just repeat the same considerations above separately for each P′(Xn) ∈ K′(Xn) and obtain the proof of the
statement. Also, by Th. 2 it holds that K′(x)↓Xn = K′(xn), for all configurations x
consistent with Xn = xn, for all xn ∈ ΩXn.
Proof of Th. 4. To prove the theorem we show that the non-binary polytree-shaped CN used by [24, Th. 1] to prove the NP-hardness of non-binary credal
polytrees can be used to model UCU in a non-binary polytree-shaped BN. To do
that for an arbitrary k, consider the BN over X := (X_0, X_1, . . . , X_{2k}) with the
topology in Fig. 1. Nodes (X_0, . . . , X_{k−1}) are associated to binary variables, the
others to ternary variables. A uniform marginal PMF is specified for X_k, while
the CPTs for the other ternary variables are as indicated in Table 2 of the proof
we refer to (the numerical values being irrelevant for the present proof). For the
binary variables we also specify a uniform prior.
We then specify a vacuous CSE for each binary variable. These CSEs can be
absorbed by replacing the uniform PMFs with vacuous CSs. The resulting model
is exactly the CN used to reduce the PARTITION problem [19] to CN updating,
which proves the thesis.
[Figure 1 about here]
Figure 1: A polytree-shaped directed acyclic graph.
Proof of Th. 5. For any BN consistent with the CN resulting from Tr. 3 it holds
P_{LogOp_{α},P′}(x_0) = P(x_0 | d^{(1)}_{Xn}, . . . , d^{(m)}_{Xn}), by Prop. 4. By definition, see Eq. (13), we
have:

min_{P_j ∈ K_j, K_j ∈ K′} cLogOp_{α} K′(xn) = k ∏_{j=1}^{m} P̲′_j(xn)^{α_j} ,    (25)

with k being the normalization constant and P̲′_j(xn) = min_{P ∈ K′_j(Xn)} P(xn), for
every j = 1, . . . , m and for all xn ∈ ΩXn.
It follows:

min_{P(xn) ∈ cLogOp_{α} K′(xn)} P̃(x_0) = k ∑_{xn} P(x_0 | xn) ∏_{j=1}^{m} P̲′_j(xn)
                                      = P̲(x_0 | d^{(1)}_{Xn}, . . . , d^{(m)}_{Xn}) ,

where the second equality follows from Eq. (2) and Eq. (25). This proves the theorem.
References
[1] Martin Adamčík, The information geometry of Bregman divergences and
some applications in multi-expert reasoning, Entropy 16 (2014), no. 12,
6338–6381.
[2] A. Antonucci, C.P. de Campos, M. Zaffalon, and D. Huber, Approximate
credal network updating by linear programming with applications to decision making, International Journal of Approximate Reasoning 58 (2014),
25–38.
[3] A. Antonucci and M. Zaffalon, Decision-theoretic specification of credal
networks: a unified language for uncertain modeling with sets of Bayesian
networks, International Journal of Approximate Reasoning 49 (2008), no. 2,
345–361.
[4] Michael Bacharach, Group decisions in the face of differences of opinion,
Management Science 22 (1975), no. 2, 182–191.
[5] Ali Ben Mrad, Véronique Delcroix, Sylvain Piechowiak, Philip Leicester,
and Mohamed Abid, An explication of uncertain evidence in Bayesian networks: likelihood evidence and probabilistic evidence, Applied Intelligence
43 (2015), no. 4, 802–824.
[6] Jean-Marc Bernard, An introduction to the imprecise Dirichlet model for
multinomial data, International Journal of Approximate Reasoning 39
(2005), no. 2-3, 123–150.
[7] Stephen Boyd and Lieven Vandenberghe, Convex optimization, Cambridge
university press, 2004.
[8] L. Campos, J. Huete, and S. Moral, Probability intervals: a tool for
uncertain reasoning, International Journal of Uncertainty, Fuzziness and
Knowledge-Based Systems 2 (1994), no. 2, 167–196.
[9] Hei Chan and Adnan Darwiche, On the revision of probabilistic beliefs using
uncertain evidence, Artificial Intelligence 163 (2005), no. 1, 67–90.
[10] J. Cleland, Orthopaedic clinical examination: An evidence-based approach
for physical therapists, Saunders, 2005.
[11] G. F. Cooper, The computational complexity of probabilistic inference using
Bayesian belief networks, Artificial Intelligence 42 (1990), 393–405.
[12] F. G. Cozman, Credal networks, Artificial Intelligence 120 (2000), 199–233.
[13] J.C.F. da Rocha, A.M. Guimaraes, and C.P. de Campos, Dealing with soft
evidence in credal networks, Proceedings of Conferencia Latino-Americana
de Informatica, 2008.
[14] C. P. de Campos and F. G. Cozman, The inferential complexity of
Bayesian and credal networks, Proceedings of IJCAI ’05 (Edinburgh), 2005,
pp. 1313–1318.
[15] G. De Cooman and M. Zaffalon, Updating beliefs with incomplete observations, Artificial Intelligence 159 (2004), no. 1-2, 75–125.
[16] Persi Diaconis and Sandy L. Zabell, Updating subjective probability, Journal
of the American Statistical Association 77 (1982), no. 380, 822–830.
[17] Didier Dubois, Three scenarios for the revision of epistemic states, Journal
of Logic and Computation 18 (2008), no. 5, 721–738.
[18] Enrico Fagiuoli and Marco Zaffalon, 2U: an exact interval propagation
algorithm for polytrees with binary variables, Artificial Intelligence 106
(1998), 77–107.
[19] Michael R. Garey and David S. Johnson, Computers and intractability: a
guide to the theory of NP-completeness, W. H. Freeman & Co., 1979.
[20] Adam J. Grove and J. Y. Halpern, Probability update: conditioning vs. crossentropy, Proceedings of the Thirteenth conference on Uncertainty in Artificial Intelligence, Morgan Kaufmann Publishers Inc., 1997, pp. 208–214.
[21] Richard C Jeffrey, Ethics and the logic of decision, The Journal of Philosophy 62 (1965), no. 19, 528–539.
[22] D. Koller and N. Friedman, Probabilistic graphical models: principles and
techniques, MIT Press, 2009.
[23] Jianbing Ma, Weiru Liu, Didier Dubois, and Henri Prade, Bridging Jeffrey’s
rule, AGM revision and Dempster conditioning in the theory of evidence,
International Journal on Artificial Intelligence Tools 20 (2011), no. 04, 691–
720.
[24] D.D. Mauá, C.P. de Campos, A. Benavoli, and A. Antonucci, Probabilistic
inference in credal networks: new complexity results, Journal of Artificial
Intelligence Research 50 (2014), 603–637.
[25] J. Pearl, Probabilistic reasoning in intelligent systems: networks of plausible
inference, Morgan Kaufmann, San Mateo, California, 1988.
[26] Yun Peng, Shenyong Zhang, and Rong Pan, Bayesian network reasoning
with uncertain evidences, International Journal of Uncertainty, Fuzziness
and Knowledge-Based Systems 18 (2010), no. 05, 539–564.
[27] Irina Rish, An empirical study of the naive Bayes classifier, IJCAI 2001
workshop on empirical methods in Artificial Intelligence, vol. 3, IBM New
York, 2001, pp. 41–46.
[28] Christophe Simon, Philippe Weber, and Eric Levrat, Bayesian networks and
evidence theory to model complex systems reliability, Journal of Computers
2 (2007), no. 1, 33–43.
[29] Rush T Stewart and Ignacio Ojea Quintana, Probabilistic opinion pooling
with imprecise probabilities, Journal of Philosophical Logic (2017), 1–29.
[30] Marco Valtorta, Young-Gyun Kim, and Jiří Vomlel, Soft evidential update
for probabilistic multiagent systems, International Journal of Approximate
Reasoning 29 (2002), no. 1, 71–106.
[31] Carl G. Wagner, Probability kinematics and commutativity, Philosophy of
Science 69 (2002), no. 2, 266–278.
[32] Chunlai Zhou, Mingyue Wang, and Biao Qin, Belief-kinematics Jeffrey’s
rules in the theory of evidence, Proceedings of UAI 2014, 2014, pp. 917–
926.
Supervisory Control of Discrete-event Systems
under Attacks
arXiv:1701.00881v1 [] 4 Jan 2017
Masashi Wakaiki, Paulo Tabuada, and João P. Hespanha
Abstract
We consider a supervisory control problem for discrete-event systems, in which an attacker corrupts
the symbols that are observed by the supervisor. We show that existence of a supervisor enforcing a
specification language, in the presence of attacks, is completely characterized by controllability (in the
usual sense) and observability of the specification (in a new appropriately defined sense). The new
notion of observability takes into account the attacker's ability to alter the symbols received by the
supervisor. For attacks that correspond to arbitrary insertions/removals of symbols, the new notion of
observability can be tested by checking the usual notion of observability for a set of discrete-event
systems with appropriately redefined observation maps. Focusing on attacks that replace and/or remove
symbols from the output strings, we construct observers that are robust against attacks and lead to
an automaton representation of the supervisor. We also develop a test for observability under such
replacement-removal attacks by using the so-called product automata.
Index Terms
Discrete-event systems, supervisory control, security
M. Wakaiki is with the Department of Electrical and Electronic Engineering, Chiba University, Chiba, 263-8522, Japan (e-mail:
[email protected]).
P. Tabuada is with the Department of Electrical Engineering, University of California, Los Angeles, CA 90095, USA (e-mail:
[email protected]).
J. P. Hespanha is with the Center for Control, Dynamical-systems and Computation (CCDC), University of California, Santa
Barbara, CA 93106, USA (e-mail: [email protected]).
This material is based upon work supported by the NSF under Grant No. CNS-1329650. The first author acknowledges Murata
Overseas Scholarship Foundation and The Telecommunications Advancement Foundation for their support of this work. The
work of the second author was partially supported by the NSF award 1136174.
I. INTRODUCTION
Recent developments in computer and network technology make cyber-physical systems prevalent in modern societies. The integration between cyber and physical components introduces
serious risks of cyber attacks to physical processes. For example, it has been recently reported
that attackers can adversarially control cars [1] and UAVs [2]. Moreover, the Maroochy water
breach in March 2000 [3] and the StuxNet virus attack in June 2010 [4] highlight potential threats
to infrastructure systems. An annual report [5] published in 2014 by the German government
stated that an attacker tampered with the controls of a blast furnace in a German steel factory.
We study supervisory control for Discrete-event systems (DESs) under adversarial attacks.
DESs are dynamic systems equipped with a discrete state space and an event-driven transition
structure. Such models are widely used to describe cyber-physical systems such as chemical
batch plants [6], power grids [7], and manufacturing systems [8]. The objective of this paper is
to answer the question: How do we control DESs if an attacker can manipulate the information
provided by sensing and communication devices? This question encourages system designers to
reconsider the supervisory control problem from the viewpoint of security.
Fig. 1 shows the closed-loop system we consider, in which an observation string generated by
the plant is substituted by a string corrupted by an adversarial attack. The attack changes the original string by inserting, removing, and replacing symbols, and is allowed to non-deterministically
change the same original string to distinct strings. Furthermore, since we cannot foretell what
an attacker will do as we design supervisors, we consider a set of possible attacks. The problem
we study is how to determine if there exists a supervisor that can enforce the specification
notwithstanding the attacks. Whenever such a supervisor exists, we also study the problem of how
to construct it. This work is inspired by the research on state estimation under sensor attacks
for linear time-invariant systems developed in [9]–[11].
[Figure 1 about here: the plant's observation string passes through an attack, producing a corrupted string (possibly empty) for the supervisor, which returns the enabled events for the next step.]
Fig. 1: Closed-loop system under attacks.
Related works: Supervisory control theory has developed frameworks to handle plant uncertainties and faults; see, e.g., [12]–[14] for robust control and [15]–[17] for fault tolerant
control. These studies can be used as countermeasures against attacks, but there are conceptual
differences between uncertainties/faults and attacks. In fact, uncertainties/faults do not coordinate
with harmful intent, whereas attackers can choose their action in order to achieve malicious
purposes. For example, the stealthy deception attacks in [18], [19] aim to inject false information
without being detected by the controller. Therefore we present a new framework for supervisory
control under adversarial attacks.
Several aspects of security have also been explored in the DES literature. One particular line
of research aims at studying the opacity of DESs, whose goal is to keep a system’s secret
behavior uncertain to outsiders; see, e.g., [20]–[23] and references therein. Intrusion detection in
the DES framework has been investigated in [24]–[27]. These security methods guarantee the
confidentiality and integrity of DESs, but relatively little work has been done towards studying
supervisory control robustness against attacks.
An attacked plant can be modeled by a single DES with nondeterministic observations.
Supervisory control for such a DES has been studied in [28], [29]. The major difference from
the problem setting in these previous works is that in our work, the attacked output, i.e., the nondeterministic observation function, is uncertain. In other words, if GAi denotes the DES obtained
by modeling the plant under an attack Ai , then the problem we consider can be regarded as robust
supervision for an uncertain DES in a set of possible models {GA1 , . . . , GAn }, each representing
a potential type of attack.
Contributions and organization of this paper: In Section II, after defining attacks formally, we
introduce a new notion of observability under attacks, which is a natural extension of conventional
observability introduced in [30], [31]. We show that there exists a partial observation supervisor
that achieves a given language under attack if and only if the language is controllable in the
usual sense and observable according to the new notion of observability introduced in this
paper. Moreover, the desired supervisor is obtained explicitly. For attacks that correspond to
arbitrary insertions/removals of symbols, the new notion of observability can be reduced to the
conventional observability of a set of DESs with appropriately redefined observation maps, and
the number of elements in this DES set is the square of the number of possible attacks.
In Section III, we construct an automaton representation of the supervisor derived in Section II.
The results in this section are specific to attacks that replace and/or remove specific symbols from
the output string. First, we provide the mathematical formulation of replacement-removal attacks.
Then we construct an observer automaton that is resilient against such an attack, extending the
result in [30]. Finally, using the observer for each possible attack, we obtain an automaton
representation of the desired supervisor.
Section IV is devoted to developing a test for observability under attacks. Attacks are restricted
to the replacement and/or removal of symbols in this section as well. Constructing a product
automaton in [32], we show how to test observability under replacement-removal attacks in a
computationally efficient way. The computational complexity of this observability test is (number
of possible attacks)² × (complexity of the conventional observability test without attacks), which
is the same as in the insertion-removal attack case of Section II.
Notation and definitions
The following notation and definitions are standard in the DES literature (see, e.g., [33], [34]).
For a finite set Σ of event labels, we denote by |Σ| the number of elements in Σ, and by
Σ∗ the set of all finite strings of elements of Σ, including the empty string ε. For a language
L ⊂ Σ∗, the prefix closure of L is the language
L̄ := {u ∈ Σ∗ : ∃v ∈ Σ∗, uv ∈ L},
where uv denotes the concatenation of two strings in Σ∗ , and L is said to be prefix closed if
L = L̄. We define the concatenation of languages L1 , L2 ⊂ Σ∗ by
L1 L2 := {w1 w2 ∈ Σ∗ : w1 ∈ L1 , w2 ∈ L2 }.
Let the set of events Σ be partitioned into two sets as Σ = Σc ∪ Σu with Σc ∩ Σu = ∅, where
Σc is called the set of controlled events and Σu the set of uncontrolled events. For a language
L defined on Σ, a prefix-closed set K ⊂ L is said to be controllable if KΣu ∩ L ⊂ K.
Consider an observation map P : Σ → (∆ ∪ {ε}) that maps a set of events Σ into a set
of observation symbols ∆ (augmented by the empty event ε). This observation map P can be
extended to map strings of events in Σ∗ to strings of observation symbols in ∆∗, using the rules
P(ε) = ε and
P(wσ) = P(w)P(σ),   ∀w ∈ Σ∗, σ ∈ Σ.
A prefix-closed language K ⊂ L is P-observable with respect to L if
ker P ⊂ actK⊂L,
where ker P denotes the equivalence relation on Σ∗ defined by
ker P := {(w, w0) ∈ Σ∗ × Σ∗ : P(w) = P(w0)},
and actK⊂L is a binary relation on Σ∗ defined as follows: the pair (w, w0) ∈ actK⊂L if (and
only if) w, w0 ∈ K implies that there does not exist σ ∈ Σ such that
[wσ ∈ K, w0σ ∈ L \ K] or [wσ ∈ L \ K, w0σ ∈ K].
We will omit the underlying language L when it is clear from the context.
Consider an automaton G = (X, Σ, ξ, x0 ), where X is the set of states, Σ is the nonempty
event set, ξ : X × Σ → X is the transition mapping (a partial function), and x0 ∈ X is the
initial state. We write ξ(x, σ)! to mean that ξ(x, σ) is defined. The transition function ξ can be
extended to a function X × Σ∗ → X according to the following rules:
• For all x ∈ X, ξ(x, ε) := x.
• For all x ∈ X, w ∈ Σ∗, and σ ∈ Σ,
  ξ(x, wσ) := ξ(ξ(x, w), σ) if ξ(x, w)! and ξ(ξ(x, w), σ)!, and ξ(x, wσ) is undefined otherwise.
The language generated by G is given by
L(G) := {w ∈ Σ∗ : ξ(x0, w)!}.
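As a minimal illustration of these definitions (the data structure and names below are our own, not from the paper), a finite automaton with a partial transition function can be coded as a dictionary, with the extended transition obtained by folding over the string.

```python
class Automaton:
    """G = (X, Sigma, xi, x0) with a partial transition function xi."""

    def __init__(self, delta, x0):
        self.delta = delta        # dict: (state, event) -> next state
        self.x0 = x0

    def step(self, x, w):
        """Extended transition xi(x, w); returns None when undefined."""
        for sigma in w:
            if x is None or (x, sigma) not in self.delta:
                return None
            x = self.delta[(x, sigma)]
        return x

    def generates(self, w):
        """w is in L(G) iff xi(x0, w) is defined."""
        return self.step(self.x0, w) is not None

# A toy two-state automaton over Sigma = {'a', 'b'} (our own example).
G = Automaton({('x0', 'a'): 'x1', ('x1', 'b'): 'x0'}, 'x0')
print(G.generates('ab'), G.generates('ba'))   # True False
```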
II. SUPERVISED DISCRETE-EVENT SYSTEMS UNDER ATTACKS
In Section II A, we first introduce attacks on observation symbols and a new notion of
observability under attacks. In Section II B, we present the main result of this section, which
shows that there exists a supervisor achieving a given language in the presence of attacks if and
only if the language is controllable in the usual sense and observable under attacks. Next, in
Section II C, we focus our attention on attacks that insert and remove symbols, and show that the
new notion of observability under such attacks can be reduced to the conventional observability
notion of a set of DESs.
A. Observability under attacks
By an attack, we mean the substitution of an observation string w ∈ ∆∗ generated by the plant
by a corrupted string y ∈ ∆∗ that is exposed to the supervisor (see Fig. 1). The corrupted string
y may differ from the original string w by the insertion, removal, or replacement of symbols.
The simplest form of attack could be modeled by a function y = A(w) that maps ∆∗ to ∆∗.
However, we are interested in more general forms of attacks where the attacker is allowed to
non-deterministically map the same original string w ∈ ∆∗ to distinct strings y ∈ ∆∗, in order to
make the task of the supervisor more difficult. We thus model attacks by a set-valued function
A : ∆∗ → 2^{∆∗} that maps each original string w ∈ ∆∗ to the set A(w) ⊂ ∆∗ of all possible
corrupted strings y. Note that the supervisor receives one of the strings in the set A(w), not
A(w) itself. The attack map Aid : ∆∗ → 2^{∆∗} that assigns to each string w ∈ ∆∗ the set {w}
containing only the original string w can be viewed as the absence of an attack.
When we design supervisors, attack maps may be uncertain due to lack of knowledge of
which sensors are attacked. In this paper, we therefore consider a set of possible attacks A =
{A1 , . . . , An }, and we are interested in the following scenario: We know the attack set A in
advance, and that only one attack in the set A is conducted. In other words, the attacker is not
allowed to switch between the attacks in the set A. However, when we construct a supervisor,
we do not know which attack actually occurs, and hence the aim is to design a robust supervisor
with respect to all attacks in A.
Example 1: Consider the language L(G) generated by the automaton G shown in Fig. 2a.
We investigate the observability under attacks of the specification language K generated by the
automaton GK in Fig. 2b. The difference between G and GK is an event c from x1 to x3 .
The purpose of supervisory control here would be to avoid this “shortcut”. We consider the
observation map P defined by
P(σ) = σ if σ ∈ ∆ := {a, b, d}, and P(σ) = ε otherwise,
and three attacks A1, A2, A3 defined by A1(σ) = ∆ for all σ ∈ ∆,
A2(a) = {a, b},   A2(b) = {b},   A2(d) = {a, d},
A3(a) = {a},      A3(b) = {b},   A3(d) = {ε}.
The attack A1 replaces each output symbol arbitrarily, so the supervisor knows from the output
only whether an observable event occurs. The attack A2 can replace the symbol a by b and the
symbol d by a, respectively, whereas the attack A3 always erases the symbol d. The goal is then
to design a supervisor that enforces the specification GK , without knowing which of the three
attacks is taking place. We shall return to this example later in the paper.
[Figure 2 about here]
(a) System automaton G. (b) Specification automaton GK.
Fig. 2: Automata in Example 1.
For simplicity of notation, we denote by AP : Σ∗ → 2^{∆∗} the attacked observation map
obtained from the composition AP := A ◦ P. We introduce a new notion of observability under
a set of attacks, which can be seen as a direct extension of the conventional observability notion
introduced in [30], [31].
Definition 2 (Observability under attacks): Given an attack set A, we say that a prefix-closed
language K ⊂ L is P-observable under the attack set A if
R_{A,A0} ⊂ actK⊂L,   ∀A, A0 ∈ A,    (1)
where the relation R_{A,A0} contains all pairs of strings that may result in attacked observation
maps AP and A0P with a common string of output symbols, i.e.,
R_{A,A0} := {(w, w0) ∈ Σ∗ × Σ∗ : AP(w) ∩ A0P(w0) ≠ ∅}.    (2)
In view of the definition (2), the P-observability condition (1) can be restated as requiring that,
for every w, w0 ∈ K,
∃A, A0 ∈ A s.t. AP(w) ∩ A0P(w0) ≠ ∅
⇒ ∄σ ∈ Σ s.t. [wσ ∈ K, w0σ ∈ L \ K] or [wσ ∈ L \ K, w0σ ∈ K],    (3)
or equivalently, for every w, w0 ∈ K,
∃A, A0 s.t. AP(w) ∩ A0P(w0) ≠ ∅
⇒ ∀σ ∈ Σ, wσ ∉ L or w0σ ∉ L or [wσ, w0σ ∈ K] or [wσ, w0σ ∈ L \ K].    (4)
In words, observability means that we cannot find two attacks A, A0 ∈ A that would result in
the same observation for two strings w, w0 ∈ K such that (w, w0) ∉ actK⊂L, i.e., two strings
w, w0 ∈ K such that one will transition to an element of K and the other to an element outside
K, by the concatenation of the same symbol σ ∈ Σ.
First we obtain a condition equivalent to observability under attacks for controllable languages.
In the non-attacked case, this condition is used as the definition of conventional observability in
the book [34, Sec. 3.7].
Proposition 3: Suppose that the prefix-closed language K ⊂ L is controllable. Then K is P-observable under the set of attacks A if and only if for every w, w0 ∈ K, σ ∈ Σc, and A, A0 ∈ A,
the following statement holds:
[AP(w) ∩ A0P(w0) ≠ ∅, wσ ∈ K, w0σ ∈ L] ⇒ w0σ ∈ K.    (5)
Proof: We use the necessary and sufficient condition (3) for the specification language K to
be observable under attacks.
(⇒) Suppose that K is P -observable under A, and suppose that w, w0 ∈ K, σ ∈ Σc , and
A, A0 ∈ A satisfy
AP (w) ∩ A0 P (w0 ) 6= ∅,
wσ ∈ K,
w0 σ ∈ L.
Then (3) directly leads to w0 σ ∈ K.
(⇐) Suppose that (5) holds for all w, w0 ∈ K, σ ∈ Σc , and A, A0 ∈ A. From (3), it is enough
to show that if w, w0 ∈ K satisfy AP (w) ∩ A0 P (w0 ) 6= ∅ for some A, A0 ∈ A, then there does
not exist σ ∈ Σ such that
[wσ ∈ K, w0 σ ∈ L \ K] or [wσ ∈ L \ K, w0 σ ∈ K].
Since K is controllable, if σ ∈ Σu and wσ, w0 σ ∈ L, then wσ, w0 σ ∈ K from (5). Moreover,
for all σ ∈ Σc , if wσ ∈ K and w0 σ ∈ L, then w0 σ ∈ K. Exchanging w and w0 , we also have if
w0 σ ∈ K and wσ ∈ L, then wσ ∈ K. This completes the proof.
Example 1 (cont.): Consider the language L(G), the specification language K = L(GK), and
the attacks A1, A2, A3 in Example 1. Let the controllable event set be Σc = {a, c}, and the
uncontrollable event set be Σu = {b, d}. It is straightforward to show that K is controllable, and we
can use Proposition 3 to verify that K is observable under the attack set A = {A1 }. Additionally,
K is observable under the attack sets A = {A2 } and A = {A3 }, but is not observable under
A = {A2 , A3 }. In fact, if we define w := abcda and w0 := wb, then
A2P(w) = {abaa, abab, abda, abdb, bbaa, bbab, bbda, bbdb},
A3P(w0) = {abab},
and hence A2P(w) ∩ A3P(w0) ≠ ∅, but c ∈ Σc satisfies wc ∈ L \ K and w0c ∈ K. Thus K
is robust with respect to symbol replacements but vulnerable to a combination of replacements
and removals.
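A brute-force check of this failure of observability is easy to script (the enumeration below is our own illustration; it only handles symbol-wise replacement/removal attacks such as A2 and A3):

```python
from itertools import product

def attack_outputs(obs, attack):
    """All corrupted strings producible from the observed string, when the
    attack maps each observed symbol to a set of candidates ('' = removal)."""
    return {''.join(c) for c in product(*[attack[t] for t in obs])}

P = {'a': 'a', 'b': 'b', 'c': '', 'd': 'd'}            # observation map of Example 1
A2 = {'a': {'a', 'b'}, 'b': {'b'}, 'd': {'a', 'd'}}
A3 = {'a': {'a'}, 'b': {'b'}, 'd': {''}}

def AP(w, attack):
    obs = ''.join(P[s] for s in w)                      # apply P, then the attack
    return attack_outputs(obs, attack)

w, w_prime = 'abcda', 'abcdab'
print(AP(w, A2) & AP(w_prime, A3))                      # {'abab'}: same corrupted output
```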
B. Existence of supervisors
Our objective in this subsection is to provide a necessary and sufficient condition for the existence of a supervisor that achieves a specification language in the presence of output corruption.
To this end, we first introduce supervisors for an attack set and define controlled languages under
attacks.
A P-supervisor for a language L ⊂ Σ∗ and an attack set A is a function f : ⋃_{A∈A} AP(L) → 2^Σ,
where AP(L) is the set of all possible output strings under the attack A, that is,
AP(L) := {y ∈ ∆∗ : ∃w ∈ L s.t. y ∈ AP(w)}.
We will say that a supervisor f is valid if f(w) ⊃ Σu for all w ∈ ⋃_{A∈A} AP(L).
Given a P-supervisor f for a language L and an attack set A, the maximal language L^max_{f,A}
controlled by f under the attack A ∈ A is defined inductively by ε ∈ L^max_{f,A} and
wσ ∈ L^max_{f,A}  ⇔  w ∈ L^max_{f,A},  wσ ∈ L,  ∃y ∈ AP(w) s.t. σ ∈ f(y),
whereas the minimal language L^min_{f,A} (⊂ L^max_{f,A}) controlled by f under the attack A ∈ A is defined
inductively by ε ∈ L^min_{f,A} and
wσ ∈ L^min_{f,A}  ⇔  w ∈ L^min_{f,A},  wσ ∈ L,  ∀y ∈ AP(w), σ ∈ f(y).
In general, L^min_{f,A} ⊂ L^max_{f,A}, but in the absence of an attack, i.e., A = Aid, both languages coincide.
By construction, L^max_{f,A} and L^min_{f,A} are prefix closed.
By definition, L^max_{f,A} is the largest language that the attack A could enforce by exposing to the
supervisor f an appropriate corrupted output string in AP(w). On the other hand, L^min_{f,A} is the
smallest language that the attack A could enforce. In other words, A could not reject any string
in this set by the choice of corrupted output strings in AP(w).
The following result provides a necessary and sufficient condition on a language K ⊂ L for
the existence of a P-supervisor f for an attack set A whose minimal and maximal controlled
languages L^min_{f,A}, L^max_{f,A} are both equal to K.
Theorem 4: For every nonempty prefix-closed set K ⊂ L and every attack set A:
1) There exists a valid P-supervisor f for A such that L^min_{f,A} = L^max_{f,A} = K for all A ∈ A if
and only if K is controllable and P-observable under A;
2) If K is controllable and P-observable under A, then the following map f : ⋃_{A∈A} AP(L) → 2^Σ
defines a valid P-supervisor for which L^min_{f,A} = L^max_{f,A} = K for all A ∈ A:
f(y) := Σu ∪ {σ ∈ Σc : ∃w ∈ K, A ∈ A s.t. [y ∈ AP(w), wσ ∈ K]},   ∀y ∈ ⋃_{A∈A} AP(L).    (6)
When the set of attacks A contains only Aid , Theorem 4 specializes to the case without attacks.
Remark 5: As the proof below shows, in order to obtain controllability in item 1), it is enough
that L^max_{f,A} = K or L^min_{f,A} = K for some A ∈ A.
Proof of Theorem 4: We first prove that L^max_{f,A} = K for some A ∈ A implies the controllability
of K. Pick some word w̄ ∈ KΣu ∩ L. Such a word must be of the form w̄ = wσ ∈ L such
that w ∈ K and σ ∈ Σu. The supervisor f is defined for all the strings y that are produced
by an attacker. Therefore, if w ∈ K ⊂ L, then f(y) is defined for every y ∈ AP(w). Since f
is a valid supervisor, any uncontrollable event belongs to f(y), in particular, σ ∈ f(y). Since
w ∈ K = L^max_{f,A}, it now follows by the definition of L^max_{f,A} that wσ ∈ L^max_{f,A} = K, which shows
that KΣu ∩ L ⊂ K, and therefore K is controllable. From L^min_{f,A} = K instead of L^max_{f,A} = K,
controllability can be obtained in the same way.
Next we prove that K is P-observable under an attack set A by using the fact that K =
L^min_{f,A} = L^max_{f,A} for all A ∈ A. To do so, we employ the statement (4), which is equivalent to
observability under attacks. Pick a pair of words w, w0 ∈ K such that
∃A, A0 s.t. AP(w) ∩ A0P(w0) ≠ ∅,
and an arbitrary symbol σ ∈ Σ such that wσ, w0σ ∈ L. If such a symbol σ does not exist, then
all σ satisfy wσ ∉ L or w0σ ∉ L, and we immediately conclude that K is observable under A
from (4). If such a σ exists and wσ ∈ K = L^min_{f,A}, then by the definition of L^min_{f,A}, we must have
w ∈ L^min_{f,A} = K,  wσ ∈ L,  ∀y ∈ AP(w), σ ∈ f(y).
Since AP(w) ∩ A0P(w0) ≠ ∅, we must then have
w0 ∈ L^max_{f,A0} = K,  w0σ ∈ L,  ∃y ∈ A0P(w0) s.t. σ ∈ f(y).
Consequently, w0σ ∈ L^max_{f,A0} = K. Alternatively, if wσ ∈ L \ K, then wσ ∉ K = L^max_{f,A} and we
must have
w ∈ L^max_{f,A} = K,  wσ ∈ L,  ∀y ∈ AP(w), σ ∉ f(y).
Since AP(w) ∩ A0P(w0) ≠ ∅, we must then have
w0 ∈ L^min_{f,A0} = K,  w0σ ∈ L,  ∃y ∈ A0P(w0) s.t. σ ∉ f(y),
and hence w0σ ∉ L^min_{f,A0} = K. This shows that (4) holds, and therefore K is P-observable under
A.
To prove the existence of the supervisor in item 1 (and also the statement in item 2), pick
the supervisor f according to (6). We prove by induction on the word length that the supervisor
f so defined satisfies K = L^max_{f,A} = L^min_{f,A} for all A ∈ A. The basis of induction is the empty
string ε, which belongs to L^max_{f,A} and L^min_{f,A} because of the definition of these sets and belongs to K
because this set is prefix-closed.
Suppose now that K, L^max_{f,A}, and L^min_{f,A} have exactly the same words of length n ≥ 0, and pick
a word w0σ ∈ L^max_{f,A} of length n + 1 with σ ≠ ε. We show that w0σ ∈ K as follows. Since
w0 ∈ L^max_{f,A} has length n, we know by the induction hypothesis that w0 ∈ K. On the other hand,
since w0σ ∈ L^max_{f,A}, we must have
w0 ∈ L^max_{f,A},  w0σ ∈ L,  ∃y ∈ AP(w0) s.t. σ ∈ f(y).
If σ ∈ Σu, then we see from controllability that w0σ ∈ K. Let us next consider the case σ ∈ Σc.
By the definition (6) of f, σ ∈ f(y) must mean that
∃w ∈ K, Ā ∈ A s.t. [y ∈ ĀP(w), wσ ∈ K].    (7)
We therefore have
w, w0 ∈ K,  AP(w0) ∩ ĀP(w) ≠ ∅,  wσ ∈ K,  w0σ ∈ L.
Since K is controllable and observable under A, Proposition 3 shows that w0σ ∈ K. This shows
that any word of length n + 1 in L^max_{f,A} also belongs to K. Since L^min_{f,A} ⊂ L^max_{f,A}, it follows that
any word of length n + 1 in L^min_{f,A} also belongs to K.
Conversely, for a word w0σ ∈ K ⊂ L of length n + 1 with σ ≠ ε, we prove w0σ ∈ L^min_{f,A}
as follows. Since K is prefix closed, we have w0 ∈ K. The induction hypothesis shows that
w0 ∈ L^min_{f,A}. To obtain w0σ ∈ L^min_{f,A}, we need to show that
w0 ∈ L^min_{f,A},  w0σ ∈ L,  ∀y ∈ AP(w0), σ ∈ f(y).
The first statement is a consequence of the induction hypothesis (as discussed above). The second
statement is a consequence of the fact that w0σ ∈ K ⊂ L. As regards the third statement, if
σ ∈ Σu, then σ ∈ f(y) for all y ∈ AP(w0) by definition. It is therefore enough to show that
σ ∈ Σc leads to σ ∈ f(y), that is, the statement (7), for every y ∈ AP(w0). We obtain (7) for
the particular case Ā = A, w = w0 ∈ K. Thus any word of length n + 1 in K also belongs to
L^min_{f,A} and hence also to L^max_{f,A} ⊃ L^min_{f,A}, which completes the induction step.
C. Observability under insertion-removal attacks
In this subsection, we consider attacks that insert and remove certain symbols from output
strings, and reduce observability under such attacks to the conventional observability in the
non-attacked case.
Given a set of symbols α ⊂ ∆ in the observation alphabet, we define the insertion-removal
attack Aα : ∆∗ → 2^{∆∗} that maps each string u ∈ ∆∗ to the set of all strings v ∈ ∆∗ that can be
obtained from u by an arbitrary number of insertions or removals of symbols in α. We say that
Aα corresponds to an attack on the output symbols in α. In this context, it is convenient to also
define the corresponding α-removal observation map R¬α : ∆ → (∆ ∪ {ε}) by
R¬α(t) = ε if t ∈ α, and R¬α(t) = t if t ∉ α.    (8)
The α-removal observation map can be extended to strings of events in the same way as the
observation map P. This α-removal observation map allows us to define the attack Aα : ∆∗ → 2^{∆∗}
on the output symbols in α as follows:
Aα(u) = {v ∈ ∆∗ : R¬α(u) = R¬α(v)}.    (9)
Example 6: Let α = {t1} ⊂ {t1, t2}. Then R¬α(t1t2) = t2, and
Aα(t1t2) = {t1^n t2 t1^m : n, m ≥ 0} = {v ∈ ∆∗ : R¬α(t1t2) = R¬α(v)}.
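A two-line implementation makes the definition concrete (our own sketch; membership in Aα(u) is decided via Eq. (9) rather than by enumerating the infinite set):

```python
def remove_alpha(u, alpha):
    """The alpha-removal map of Eq. (8), extended to strings."""
    return ''.join(t for t in u if t not in alpha)

def in_attack_set(v, u, alpha):
    """Eq. (9): v is a possible corruption of u under the insertion-removal
    attack on the symbols in alpha."""
    return remove_alpha(v, alpha) == remove_alpha(u, alpha)

# Example 6 with alpha = {t1}: t1-symbols may be freely inserted or removed.
print(in_attack_set('112111', '12', alpha={'1'}))   # True
print(in_attack_set('22', '12', alpha={'1'}))       # False: '2' symbols cannot be inserted
```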
The next result shows that the observability in Definition 2 under insertion-removal attacks
is equivalent to the usual observability (without attacks) for an appropriate set of output maps.
Note that the composition R¬α ◦ P : ∆ → (∆ ∪ {}) can be regarded as an observation map (in
the usual sense, i.e., without attacks).
Theorem 7: For every nonempty prefix-closed language K ⊂ L and insertion-removal attack
set A = {Aα1 , Aα2 , . . . , AαM } consisting of M ≥ 1 observation attacks, K is P -observable
under the set of attacks A if and only if K is (R¬α ◦ P )-observable for every set α := αi ∪ αj ,
∀i, j ∈ {1, 2, . . . , M }.
Theorem 7 implies that one can use the standard test for DES observability (without attacks)
in [32] to determine observability under insertion-removal attacks (Definition 2).
Remark 8 (Computational complexity of observability test for insertion-removal attacks):
Consider a language L = L(G) generated by a finite automaton G = (X, Σ, ξ, x0 ) and a
specification language K = L(GK ) ⊂ L generated by a finite automaton GK = (R, Σ, η, r0 ).
According to Theorem 7, in order to test observability under an insertion-removal attack set A,
it is enough to construct |A|² test automata, each of which verifies the usual observability.
Since the computational complexity of verifying the usual observability with the test automaton
is O(|X| · |R|² · |Σc|) (see, e.g., [34, Sec. 3.7]), the total complexity for this observability test
under attacks is O(|X| · |R|² · |Σc| · |A|²).
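In other words, the test enumerates all pairs of attacks, forms α := αi ∪ αj, and runs the standard observability test with the composed map R¬α ∘ P. The following sketch (our own helper) only builds these composed observation maps, leaving the standard test itself to existing DES tooling:

```python
def composed_maps(P, attack_alphabets):
    """For every pair (alpha_i, alpha_j), return the observation map
    sigma -> R_{not(alpha_i U alpha_j)}(P(sigma)) used in Theorem 7."""
    maps = {}
    for i, ai in enumerate(attack_alphabets):
        for j, aj in enumerate(attack_alphabets):
            alpha = set(ai) | set(aj)
            maps[(i, j)] = {s: ('' if P[s] in alpha else P[s]) for s in P}
    return maps
```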
The following result is the key step in proving Theorem 7.
Lemma 9: Given any two sets α1, α2 ⊂ ∆ and α := α1 ∪ α2, we have that
(v, v0) ∈ ker R¬α  ⇒  Aα1(v) ∩ Aα2(v0) ≠ ∅,    (10)
where ker R¬α is defined by
ker R¬α := {(v, v0) ∈ ∆∗ × ∆∗ : R¬α(v) = R¬α(v0)}.    (11)
Proof: To prove this result, we must show that given two strings v, v 0 ∈ ∆∗ such that
R¬α (v) = R¬α (v 0 ), there exists a third string y ∈ ∆∗ that belongs both to Aα1 (v) and Aα2 (v 0 ).
In view of (9), this means that y ∈ ∆∗ must satisfy
R¬α1 (v) = R¬α1 (y),
R¬α2 (v 0 ) = R¬α2 (y).
(12)
The desired string y can be constructed through the following steps:
1) Start with the string y1 := R¬α1 (v), which is obtained by removing from v all symbols in
α1 . Since α := α1 ∪ α2 , we have R¬α = R¬α2 ◦ R¬α1 . Hence
R¬α2 (y1 ) = R¬α2 (R¬α1 (v)) = R¬α (v) = R¬α (v 0 ),
and the strings y1 and v 0 still only differ by symbols in α1 and α2 .
2) Construct y2 by adding to y1 suitable symbols in α1 so that R¬α2 (y2 ) = R¬α2 (v 0 ). This is
possible because y1 and v 0 only differ by symbols in α1 and α2 and y1 has no symbols in α1
by definition. To get R¬α2 (y2 ) = R¬α2 (v 0 ), we do not care about the symbols in α2 , so we just
have to insert into y1 the symbols in α1 that appear in v 0 (at the appropriate locations).
3) By construction,
R¬α1 (y2 ) = y1 = R¬α1 (v),
and hence the original v and y2 only differ by symbols in α1 . Moreover, R¬α2 (y2 ) = R¬α2 (v 0 ),
that is, v 0 and y2 only differ by symbols in α2 . We therefore conclude that y := y2 satisfies (12)
and hence belongs to both Aα1 (v) and Aα2 (v 0 ).
Proof of Theorem 7: By definition, K is P -observable under the set of attacks A if and only
if
RAαi ,Aαj ⊂ actK⊂L ,
∀i, j ∈ {1, 2, . . . , M }.
Also, by definition, K is (R¬α ◦ P )-observable for the set α := αi ∪ αj if and only if
ker(R¬α ◦ P ) ⊂ actK⊂L .
To prove the result, it therefore suffices to show that, for all i, j ∈ {1, 2, . . . , M }, we have that
RAαi ,Aαj = ker(R¬α ◦ P ),
α := αi ∪ αj .
To show that this equality holds, first pick a pair (w, w0 ) ∈ RAαi ,Aαj , which means by the
definition (2) of RAαi ,Aαj that there exists a string y ∈ ∆∗ that belongs both to Aαi P (w) and
Aαj P (w0 ), and therefore
y ∈ Aαi P(w) ⇔ R¬αi(P(w)) = R¬αi(y),
y ∈ Aαj P(w0) ⇔ R¬αj(P(w0)) = R¬αj(y).
Since α := αi ∪ αj , we have that R¬α = R¬αj ◦ R¬αi = R¬αi ◦ R¬αj , and consequently
R¬α P (w) = R¬αj R¬αi P (w) = R¬αj R¬αi (y)
= R¬αi R¬αj (y) = R¬αi R¬αj P (w0 ) = R¬α P (w0 ) .
Hence (w, w0 ) ∈ ker(R¬α ◦ P ). We have thus shown that RAαi ,Aαj ⊂ ker(R¬α ◦ P ).
To prove the reverse inclusion, pick a pair (w, w0 ) ∈ ker(R¬α ◦ P ), which means that
R¬α P (w) = R¬α P (w0 ) , and therefore P (w), P (w0 ) ∈ ker R¬α by the definition (11)
of ker R¬α . In conjunction with Lemma 9, this leads to
Aαi P (w) ∩ Aαj P (w0 ) 6= ∅.
Therefore we have (w, w0 ) ∈ RAαi ,Aαj . This shows that ker(R¬α ◦ P ) ⊂ RAαi ,Aαj , which
concludes the proof.
III. REALIZATION OF SUPERVISORS UNDER ATTACKS
The objective of this section is to describe the supervisor f in (6) through an automaton for
replacement-removal attacks, which will be introduced in Section III A and are a special class of
the attacks considered in Section II. In Section III B, we provide a construction for an automaton
that describes a supervisor for replacement-removal attacks and then formally prove in Section
III C that the proposed automaton represents the supervisor f in (6).
A. Replacement-removal attack
The results in this section are specific to attacks that consist of replacement and/or removal of
specific symbols from the output strings. For a given replacement-removal map φ : ∆ → 2^{∆∪{ε}}
that maps each output symbol to a (possibly empty) set of symbols, the corresponding attack
map A : ∆∗ → 2^{∆∗} is defined by A(ε) = {ε} and
A(yt) := {wy wt : wy ∈ A(y), wt ∈ φ(t)}    (13)
for all y ∈ ∆∗ and t ∈ ∆. Sets of attacks of this form are called replacement-removal attack
sets. Recalling that AP := A ◦ P, we conclude from (13) that
AP(sσ) = AP(s)AP(σ),   ∀s ∈ Σ∗, σ ∈ Σ.    (14)
In what follows, it will be convenient to define the inverse map AP −1 by
AP −1 (y) := w ∈ Σ∗ : y ∈ AP (w) ,
∀y ∈ ∆∗ ,
and extend it to languages L∆ ⊂ ∆∗ :
AP −1 (L∆ ) := w ∈ Σ∗ : ∃y ∈ L∆ s.t. y ∈ AP (w) .
The following lemma provides basic properties of AP and AP −1 for a replacement-removal
attack A:
Lemma 10: Consider a replacement-removal attack A.
1) For all w, s ∈ Σ∗, we have
AP(ws) = AP(w)AP(s).    (15)
2) Let y1, . . . , ym ∈ ∆ and w1, . . . , wn ∈ Σ. If y1 · · · ym ∈ AP(w1 · · · wn), then there exist
i1, . . . , im such that 1 ≤ i1 < i2 < · · · < im ≤ n and
ε ∈ AP(w1 · · · w_{i1−1}),  y1 ∈ AP(w_{i1}),
ε ∈ AP(w_{i1+1} · · · w_{i2−1}),  y2 ∈ AP(w_{i2}),
. . .
ε ∈ AP(w_{im+1} · · · wn),
where, for example, if i2 − i1 = 1, then the condition ε ∈ AP(w_{i1+1} · · · w_{i2−1}) is omitted.
3) For all y, v ∈ ∆∗, we have
AP^{−1}(yv) = AP^{−1}(y)AP^{−1}(v).    (16)
Proof: 1) If s = ε, then (15) holds for all w ∈ Σ∗ because AP(ε) = {ε}. For s = s1 · · · sk with si ∈ Σ, we obtain (15) by using (14) iteratively. Thus (15) holds for all w, s ∈ Σ∗.
2) We prove item 2 by induction on the word length of y1 · · · ym. In the case m = 1, suppose that
y1 ∈ AP(w1 · · · wn) = AP(w1) · · · AP(wn).
If y1 ∉ AP(wi) for every i = 1, . . . , n, then y1 ∉ AP(w1 · · · wn). Hence y1 ∈ AP(wi) for at least one i. Let y1 ∈ AP(wi) hold for i = j1, . . . , jk. It follows that there exists l ∈ {j1, . . . , jk} such that
ε ∈ AP(w1 · · · wl−1),  y1 ∈ AP(wl),  ε ∈ AP(wl+1 · · · wn).
The desired i1 is such an index l.
Suppose now that item 2 holds for y1 · · · yk of length k ≥ 1 and that
y1 · · · yk yk+1 ∈ AP(w1 · · · wn) = AP(w1) · · · AP(wn).
Similarly to the case m = 1, we see that there exists ik+1 such that
y1 · · · yk ∈ AP(w1 · · · wik+1−1),  yk+1 ∈ AP(wik+1),  ε ∈ AP(wik+1+1 · · · wn).
Applying the induction hypothesis to y1 · · · yk ∈ AP (w1 · · · wik+1 −1 ), we have the desired
conclusion with y1 · · · yk+1 of length k + 1.
3) If (16) holds for all y ∈ ∆∗ and v ∈ ∆ ∪ {ε}, then an iterative calculation shows that (16) holds for all y ∈ ∆∗ and all v ∈ ∆∗. Hence it is enough to prove that (16) holds for all y ∈ ∆∗ and t ∈ ∆ ∪ {ε}.
Suppose that w ∈ AP⁻¹(y)AP⁻¹(t) for y ∈ ∆∗ and t ∈ ∆ ∪ {ε}. Then
∃w1 ∈ AP⁻¹(y), w2 ∈ AP⁻¹(t) s.t. w = w1 w2.
Since y ∈ AP(w1) and t ∈ AP(w2), it follows that
yt ∈ AP(w1)AP(w2) = AP(w),
which implies w ∈ AP⁻¹(yt). We therefore have AP⁻¹(y)AP⁻¹(t) ⊂ AP⁻¹(yt).
Next we prove the converse inclusion. If t = ε, then we have from AP(ε) = {ε} that AP⁻¹(y)AP⁻¹(t) ⊃ AP⁻¹(yt).
Let us next consider the case t ≠ ε. Suppose that w ∈ AP⁻¹(yt) for y ∈ ∆∗ and t ∈ ∆. Since yt ∈ AP(w) and yt ≠ ε, it follows that w ≠ ε. Let w = w1 · · · wn with wi ∈ Σ. From item 2, there exists i ∈ {1, . . . , n} such that y ∈ AP(w1 · · · wi) and t ∈ AP(wi+1 · · · wn). Hence if we define w1 := w1 · · · wi and w2 := wi+1 · · · wn, then
w = w1 w2,  w1 ∈ AP⁻¹(y),  w2 ∈ AP⁻¹(t).
Hence w ∈ AP⁻¹(y)AP⁻¹(t). Thus AP⁻¹(y)AP⁻¹(t) ⊃ AP⁻¹(yt), which completes the proof.
B. Supervisors described by observers
Inspired by the non-attacked results in [30], we first construct an observer that is resilient
against replacement-removal attacks, which plays an important role in the representation of the
supervisor f in (6).
Consider a specification automaton GK = (R, Σ, η, r0) with a set ∆ of output symbols and an output map P : Σ → (∆ ∪ {ε}). We define the unobservable reach URA(r) of each state r ∈ R under a replacement-removal attack A ∈ A by
URA(r) := {z ∈ R : ∃u ∈ AP⁻¹(ε) s.t. z = η(r, u)},
Fig. 3: Observer ObsA(GK) for an attack A. (Figure omitted; it illustrates an observer transition fobs,A(x0,obs,A, t) built from transitions f(xe, e) with e ∈ AP⁻¹(t) ∩ Σ and the unobservable reach along strings u ∈ AP⁻¹(ε).)
which can be extended to a set of states B ⊂ R by
URA(B) := ⋃r∈B URA(r).
To estimate the current state r of the specification automaton GK in the presence of a
replacement-removal attack A ∈ A, the observer ObsA (GK ) = (Robs,A , ∆, ηobs,A , r0,obs,A ) is
constructed in the following iterative procedure:
1) Define r0,obs,A := URA (r0 ) ⊂ R and set Robs,A = {r0,obs,A }.
2) For each set of states B ∈ Robs,A and t ∈ ∆, if η(re, e) is defined for some re ∈ B and e ∈ AP⁻¹(t) ∩ Σ, then define
ηobs,A(B, t) := URA({r ∈ R : ∃re ∈ B, e ∈ AP⁻¹(t) ∩ Σ s.t. r = η(re, e)})     (17)
and add this set to Robs,A; otherwise ηobs,A(B, t) is not defined.
3) Go back to step 2) until no more sets can be added to Robs,A .
Fig. 3 illustrates the observer ObsA (GK ).
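The iterative procedure above is essentially a subset construction. The sketch below implements it, assuming the specification automaton is given as a partial transition table eta (a dictionary keyed by (state, event)) and reusing a bounded enumeration of AP⁻¹ such as the one sketched earlier; all names and data structures are ours, not the paper's.

```python
def unobservable_reach(eta, states, ap_inv_eps):
    """UR_A(B): states reachable from B along strings u in (a finite approximation of) AP^{-1}(eps)."""
    reach = set(states)
    for r in states:
        for u in ap_inv_eps:
            z, ok = r, True
            for sym in u:                      # follow eta symbol by symbol; abort if undefined
                if (z, sym) not in eta:
                    ok = False
                    break
                z = eta[(z, sym)]
            if ok:
                reach.add(z)
    return frozenset(reach)

def build_observer(eta, r0, Delta, AP_inv, Sigma):
    """Subset construction of Obs_A(G_K) following steps 1)-3) of the procedure above."""
    ap_inv_eps = AP_inv("")                    # AP^{-1}(eps), bounded enumeration
    start = unobservable_reach(eta, {r0}, ap_inv_eps)
    eta_obs, frontier, states = {}, [start], {start}
    while frontier:
        B = frontier.pop()
        for t in Delta:
            # step 2): successors of B under events e in AP^{-1}(t) ∩ Sigma, then close under UR_A
            succ = {eta[(r, e)] for r in B for e in AP_inv(t) if e in Sigma and (r, e) in eta}
            if not succ:
                continue
            Bt = unobservable_reach(eta, succ, ap_inv_eps)
            eta_obs[(B, t)] = Bt
            if Bt not in states:               # step 3): iterate until no new observer states appear
                states.add(Bt)
                frontier.append(Bt)
    return start, states, eta_obs
```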
Example 1 (cont.): Consider the specification language K = L(GK ) and the attack A1 as in
Example 1. Fig. 4 shows the observers obtained through the procedure described above for the
attacks Aid (absence of attack) and A1 . The structure of the observer ObsA1 (GK ) is similar to
that of ObsAid (GK ), but ObsA1 (GK ) transitions by using only the length of the observed strings,
which guarantees the robustness of K against replacement attacks.
The next result provides the realization of the P-supervisor f in (6) for the language L and the specification K. This realization is based on a specification automaton GK := (R, Σ, η, r0) with L(GK) = K and a family of observer automata ObsA(GK) designed using the procedure outlined above. Without loss of generality, we can restrict the domain of f to ⋃A∈A AP(K).
Fig. 4: Observers in Example 1. (a) Observer ObsAid(GK) for the attack Aid (non-attacked case); (b) Observer ObsA1(GK) for the attack A1. (Figure omitted; both observers run over the states {r0}, {r1}, {r2, r3}.)
Theorem 11: Consider a nonempty prefix-closed set K ⊂ L and a replacement-removal attack set A. Build a specification automaton GK := (R, Σ, η, r0) with L(GK) = K and an observer ObsA(GK) := (Robs,A, ∆, ηobs,A, r0,obs,A) for every A ∈ A. Define functions Ψ : ⋃A∈A Robs,A → 2^Σ and ΦA : ⋃A∈A AP(K) → 2^Σ by
Ψ(robs,A) := Σu ∪ {σ ∈ Σc : ∃r ∈ robs,A s.t. η(r, σ)!},     (18)
ΦA(y) := Ψ(ηobs,A(r0,obs,A, y)) if y ∈ AP(K),  and  ΦA(y) := Σu if y ∉ AP(K).     (19)
Then the supervisor f in (6) can be obtained using
f(y) = ⋃A∈A ΦA(y),  ∀y ∈ ⋃A∈A AP(K).     (20)
We can precompute the function Ψ(robs,A ) for each observer state robs,A and then can obtain
the desired control action by looking at the current state robs,A and the corresponding event set
Ψ(robs,A ) for all observers ObsA (GK ) (A ∈ A).
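A minimal sketch of this online use of Theorem 11: Ψ is precomputed from the specification transition table, each observer ObsA(GK) is run on the corrupted output y, and the control actions ΦA(y) are unioned as in (20). The dictionary-based representation of the observers, and the names below, are assumptions of this sketch rather than the authors' implementation.

```python
def make_psi(eta, Sigma_u, Sigma_c):
    """Psi(r_obs) = Sigma_u ∪ {sigma in Sigma_c : eta(r, sigma) defined for some r in r_obs}, cf. (18)."""
    def psi(r_obs):
        return set(Sigma_u) | {s for s in Sigma_c for r in r_obs if (r, s) in eta}
    return psi

def supervisor_action(y, observers, psi, Sigma_u):
    """f(y) = union over A of Phi_A(y); Phi_A(y) falls back to Sigma_u once y leaves AP(K), cf. (19)-(20)."""
    action = set()
    for obs in observers:                      # one observer Obs_A(G_K) per attack A in the attack set
        state, alive = obs["start"], True
        for t in y:                            # run the observer on the corrupted output string y
            if (state, t) not in obs["eta_obs"]:
                alive = False                  # y not in AP(K) for this A: only uncontrollable events
                break
            state = obs["eta_obs"][(state, t)]
        action |= psi(state) if alive else set(Sigma_u)
    return action
```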
Remark 12: The branch ΦA (y) = Σu for y 6∈ AP (K) in (19) implies that once an attack A
and a corrupted output string y satisfy y 6∈ AP (K), the corresponding observer ObsA (GK ) is not
under operation because the actual attack must be different from the attack A that the observer
assumes. Analogous to the case for linear time-invariant systems in [9]–[11], once y ∉ AP(K) is detected, the supervisor can exclude that attack from further consideration and stop updating
the corresponding observer.
C. Proof of Theorem 11
The following lemma shows that the state of ObsA (GK ) is the set containing all the states
of GK that could be reached from the initial state under the attack A.
Lemma 13: Consider a replacement-removal attack A. For all y ∈ L(ObsA (GK )), we have
r ∈ ηobs,A(r0,obs,A, y)  ⇔  ∃w ∈ AP⁻¹(y) s.t. r = η(r0, w).     (21)
Moreover, L(ObsA (GK )) = AP (L(GK )).
The proof of Lemma 13 relies on a key technical result that provides a more direct representation
of ηobs,A in (17) without unobservable reaches URA .
Lemma 14: Consider a replacement-removal attack A. Let B ∈ Robs,A and t ∈ ∆. If η(re, e) is defined for some re ∈ B and e ∈ AP⁻¹(t), then we define
η̄obs,A(B, t) := {r ∈ R : ∃re ∈ B, e ∈ AP⁻¹(t) s.t. r = η(re, e)}.
Then
ηobs,A(B, t)!  ⇔  η̄obs,A(B, t)!     (22)
and
ηobs,A(B, t) = η̄obs,A(B, t).     (23)
Proof: First we prove (22). To prove this, it suffices to show that for all B ∈ Robs,A and
t ∈ ∆,
η(re, e)! for some re ∈ B and e ∈ AP⁻¹(t) ∩ Σ  ⇔  η(re, e)! for some re ∈ B and e ∈ AP⁻¹(t).     (24)
By construction, (⇒) holds. We prove (⇐) as follows. Assume that there exist r̄e ∈ B and
ē ∈ AP −1 (t) such that η(r̄e , ē) is defined. Since t ∈ AP (ē), Lemma 10 shows that there exist
e1 ∈ Σ∗ , e2 ∈ Σ, and e3 ∈ Σ∗ such that
ē = e1 e2 e3,  ε ∈ AP(e1),  t ∈ AP(e2),  ε ∈ AP(e3).     (25)
Since η(r̄e , ē)!, we also obtain η(η(r̄e , e1 ), e2 )!. Therefore if we prove that
r̄e ∈ B,  ε ∈ AP(e1)  ⇒  η(r̄e, e1) ∈ B,     (26)
then (⇐) of (24) holds with re = η(r̄e , e1 ) and e = e2 .
Let us show (26). Since B ∈ Robs,A , it follows that B = URA (B̄) for some B̄ ⊂ R. From
r̄e ∈ B, we have r̄e = η(r, u) for some r ∈ B̄ and u ∈ AP⁻¹(ε). Since ε ∈ AP(e1), it follows that ue1 ∈ AP⁻¹(ε). Thus
η(r̄e , e1 ) = η(r, ue1 ) ∈ URA (B̄) = B,
which completes the proof of (22).
Next we prove (23). To show ηobs,A (B, t) ⊂ η̄obs,A (B, t), suppose that z ∈ ηobs,A (B, t). Then
there exist
r ∈ {r′ ∈ R : ∃re ∈ B, e ∈ AP⁻¹(t) ∩ Σ s.t. r′ = η(re, e)} =: M
and u ∈ AP⁻¹(ε) such that z = η(r, u). This implies that
∃re ∈ B, e ∈ AP⁻¹(t) ∩ Σ, u ∈ AP⁻¹(ε) s.t. z = η(re, eu).
Since eu ∈ AP⁻¹(t), it follows that z ∈ η̄obs,A(B, t). Thus ηobs,A(B, t) ⊂ η̄obs,A(B, t).
To prove the reverse inclusion, pick z ∈ η̄obs,A (B, t). Then
∃r̄e ∈ B, ē ∈ AP −1 (t) s.t. z = η(r̄e , ē).
Since t ∈ AP (ē), it follows from Lemma 10 that there exist e1 ∈ Σ∗ , e2 ∈ Σ, and e3 ∈ Σ∗ such
that (25) holds. Furthermore, as discussed above, r̄e ∈ B and ε ∈ AP(e1) lead to η(r̄e, e1) ∈ B. Since e2 ∈ AP⁻¹(t) ∩ Σ, we have
η(r̄e, e1 e2) = η(η(r̄e, e1), e2) ∈ M.
Finally, since ε ∈ AP(e3), it follows that
z = η(r̄e, e1 e2 e3) ∈ URA(M) = ηobs,A(B, t).
Thus ηobs,A (B, t) ⊃ η̄obs,A (B, t). This completes the proof.
We are now ready to prove Lemma 13.
Proof of Lemma 13: We prove (21) by induction on the word length. Since
r0,obs,A = {z ∈ R : ∃u ∈ AP⁻¹(ε) s.t. z = η(r0, u)},
we have that if y = ε, then
r ∈ ηobs,A(r0,obs,A, y) ⇔ ∃u ∈ AP⁻¹(ε) s.t. r = η(r0, u),
which means that (21) holds for y = ε.
Suppose now that (21) holds for all words y ∈ L(ObsA(GK)) of length n ≥ 0. Pick a word y′t ∈ L(ObsA(GK)) of length n + 1, where y′ ∈ L(ObsA(GK)) has length n. By Lemma 14,
ηobs,A(r0,obs,A, y′t) = ηobs,A(ηobs,A(r0,obs,A, y′), t) = η̄obs,A(ηobs,A(r0,obs,A, y′), t).
We therefore have
r ∈ ηobs,A(r0,obs,A, y′t)  ⇔  ∃re ∈ ηobs,A(r0,obs,A, y′), e ∈ AP⁻¹(t) s.t. r = η(re, e).     (27)
Since y 0 has length n, we know by the induction hypothesis that re ∈ ηobs,A (r0,obs,A , y 0 ) if and
only if
∃w0 ∈ AP −1 (y 0 ) s.t. re = η(r0 , w0 ).
Combining this with (27), we obtain
r ∈ ηobs,A (r0,obs,A , y 0 t)
⇔
∃w0 ∈ AP −1 (y 0 ), e ∈ AP −1 (t) s.t. r = η(r0 , w0 e).
(28)
To prove (⇒) in (21), suppose that r ∈ ηobs,A (r0,obs,A , y 0 t). Then from (28) and Lemma 10,
w := w0 e satisfies
w ∈ AP −1 (y 0 )AP −1 (t) = AP −1 (y 0 t)
and r = η(r0 , w), which implies that (⇒) in (21) holds.
Let us next show that (⇐) in (21) holds. Conversely, suppose that
∃w ∈ AP −1 (y 0 t) s.t. r = η(r0 , w).
Since AP −1 (y 0 t) = AP −1 (y 0 )AP −1 (t) from Lemma 10, it follows that
∃w0 ∈ AP −1 (y 0 ), e ∈ AP −1 (t) s.t. w = w0 e.
Since r = η(r0 , w0 e), (28) shows that r ∈ ηobs,A (r0,obs , y 0 t), and hence we have (⇐) in (21).
Next we prove L(ObsA(GK)) = AP(L(GK)) by induction on the word length. The basis of induction is the empty string, which belongs to L(ObsA(GK)) by definition and belongs to AP(L(GK)) because AP(ε) = {ε}.
Suppose now that L ObsA (GK ) and AP L(GK ) have exactly the same words of length
n ≥ 0, and pick a word y 0 t ∈ ∆∗ of length n+1, where y 0 and t have length n and 1, respectively.
Lemma 14 shows
y′t ∈ L(ObsA(GK))  ⇔  ηobs,A(r0,obs,A, y′t)!  ⇔  ηobs,A(r0,obs,A, y′)! ∧ (∃re ∈ ηobs,A(r0,obs,A, y′), e ∈ AP⁻¹(t) s.t. η(re, e)!).     (29)
Suppose that y 0 t ∈ L(ObsA (GK )). We do not use the induction hypothesis to prove that
y 0 t ∈ AP (L(GK )) holds. Since y 0 ∈ L(ObsA (GK )) as well, it follows from (21) that
re ∈ ηobs,A(r0,obs,A, y′)  ⇔  ∃w′ ∈ AP⁻¹(y′) s.t. re = η(r0, w′).     (30)
Combining this with (29), we have
∃w′ ∈ AP⁻¹(y′), e ∈ AP⁻¹(t) s.t. η(r0, w′e)!.     (31)
Hence w := w′e satisfies y′t ∈ AP(w) = AP(w′)AP(e) and w ∈ L(GK), which implies
y 0 t ∈ AP L(GK ) . Thus we have L ObsA (GK ) ⊂ AP L(GK ) .
Conversely, suppose that y 0 t ∈ AP L(GK ) . Then there exists w ∈ L(GK ) such that y 0 t ∈
AP (w). Lemma 10 shows that
w = w1 w2 , y 0 ∈ AP (w1 ), t ∈ AP (w2 ).
(32)
for some w1 , w2 ∈ Σ∗ . Since w ∈ L(GK ) leads to w1 ∈ L(GK ), it follows that y 0 ∈ AP L(GK ) .
Since y 0 has length n, the induction hypothesis gives y 0 ∈ L ObsA (GK ) . Hence we have
ηobs,A (r0,obs,A , y 0 )! and (30). To prove y 0 t ∈ L ObsA (GK ) , it suffices from (29) and (30) to show
that (31) holds. Define w0 := w1 and e := w2 . Then we have w0 ∈ AP −1 (y 0 ) and e ∈ AP −1 (t)
from (32). Moreover,
w0 e = w1 w2 = w ∈ L(GK )
leads to η(r0 , w0 e)!. We therefore obtain (31). Thus L ObsA (GK ) ⊃ AP L(GK ) , which
completes the proof.
Using Lemma 13, we finally provide the proof of Theorem 11.
Proof of Theorem 11: Define fA : ⋃A∈A AP(K) → 2^Σ by
fA(y) := Σu ∪ {σ ∈ Σc : ∃w ∈ K s.t. y ∈ AP(w), wσ ∈ K}
for all y ∈ ⋃A∈A AP(K). Then the supervisor f in (6) satisfies
f(y) = ⋃A∈A fA(y),  ∀y ∈ ⋃A∈A AP(K).
Hence, in order to obtain (20), we need to prove that fA = ΦA for each A ∈ A.
By definition, fA (y) = Σu = ΦA (y) for all y 6∈ AP (K). Suppose that y ∈ AP (K). Since
L(GK ) = K, it follows from Lemma 13 that
AP (K) = AP L(GK ) = L ObsA (GK ) .
Since y ∈ AP (K) = L ObsA (GK ) , we conclude from (21) that
∃r ∈ ηobs,A(r0,obs,A, y) s.t. η(r, σ)!  ⇔  ∃w ∈ AP⁻¹(y) s.t. η(r0, wσ)!.
Since η(r0 , wσ)! means that w, wσ ∈ K, it follows that
{σ ∈ Σc : ∃r ∈ ηobs,A(r0,obs,A, y) s.t. η(r, σ)!}
= {σ ∈ Σc : ∃w ∈ AP⁻¹(y) s.t. wσ ∈ K}
= {σ ∈ Σc : ∃w ∈ K s.t. y ∈ AP(w), wσ ∈ K},
which implies that fA (y) = ΦA (y) also for every y ∈ AP (K). This completes the proof.
IV. TEST FOR OBSERVABILITY UNDER REPLACEMENT-REMOVAL ATTACKS
In this section, we propose an observability test under the replacement-removal attacks introduced in Section III-A, inspired by product automata [32].
A. Product automata
Consider a language L = L(G) generated by a finite automaton G = (X, Σ, ξ, x0 ). We want
to test the observability under an attack set A of a specification language K = L(GK ) ⊂ L
generated by a finite automaton GK = (R, Σ, η, r0 ).
For each A, A0 ∈ A, we construct a product automaton TA,A0 = (Q, ΣT,A,A0 , δA,A0 , q0 ) for the
observability test, where Q := R × X × R, q0 := (r0 , x0 , r0 ), and
ΣT,A,A′ := {(σ, σ′) ∈ (Σ ∪ {ε}) × (Σ ∪ {ε}) \ {(ε, ε)} : AP(σ) ∩ A′P(σ′) ≠ ∅}.
The events in ΣT,A,A′ are the pairs of events of the original automaton G that may not be distinguished under attacks A and A′ by a supervisor.
The transition function δA,A0 : Q × ΣT,A,A0 → Q is defined in the following way: For every
q = (r, x0 , r0 ) ∈ Q and every σT = (σ, σ 0 ) ∈ ΣT,A,A0 , δA,A0 (q, σT ) is defined if (and only if)
η(r, σ), ξ(x0 , σ 0 ), and η(r0 , σ 0 ) are all defined. If δA,A0 (q, σT ) is defined, then
δA,A′(q, σT) = (η(r, σ), ξ(x′, σ′), η(r′, σ′)).
B. Observability test
For each A, A0 ∈ A, let us denote by AcA,A0 (Q) the set of accessible states from the initial
state q0 by some string in L(TA,A0 ), that is,
AcA,A′(Q) := {q ∈ Q : ∃(w, w′) ∈ L(TA,A′) s.t. q = δA,A′(q0, (w, w′))}.
The following theorem provides a necessary and sufficient condition for observability under
replacement-removal attacks, which can be checked using a set of test automata {TA,A′}A,A′∈A.
Theorem 15: Consider a replacement-removal attack set A and the test automaton TA,A0
defined above. Assume that a prefix-closed language K ⊂ L(G) is controllable. Then K is not
P -observable under A if and only if there exist A, A0 ∈ A, (r, x0 , r0 ) ∈ AcA,A0 (Q), and σ ∈ Σc
such that
η(r, σ)! ∧ ξ(x′, σ)! ∧ ¬η(r′, σ)!.     (33)
The proof of Theorem 15 is provided in the next subsection.
Remark 16 (Computational complexity of observability test for replacement-removal attacks):
The number of the elements in AcA,A0 (Q) is at most |X| · |R|2 . Since we only have to check the
state transition of the elements in AcA,A0 (Q) driven by a controllable event, the total computational complexity to test observability is O(|X| · |R|2 · |Σc | · |A|2 ) for the replacement-removal
attack case. This complexity is the same as the one we derived in Remark 8 for insertion-removal
attacks.
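The following sketch illustrates the test of Theorem 15: for every pair (A, A′) it explores the accessible states of TA,A′ and reports a violation of observability as soon as condition (33) holds at some reachable (r, x′, r′). The automata are assumed to be given as dictionaries eta (for GK) and xi (for G) keyed by (state, event), and AP_maps[A](σ) is assumed to return the set A(P(σ)) for a single event; this is an illustration of the check, not the authors' implementation.

```python
from collections import deque

def observable_under(attacks, eta, r0, xi, x0, Sigma, Sigma_c, AP_maps):
    """Return False iff some pair (A, A') and accessible (r, x', r') violate condition (33)."""
    for A in attacks:
        for Ap in attacks:
            APa, APb = AP_maps[A], AP_maps[Ap]
            # events of T_{A,A'}: pairs (sigma, sigma') whose corrupted outputs can coincide, minus (eps, eps)
            events = [(s, sp) for s in list(Sigma) + [""] for sp in list(Sigma) + [""]
                      if (s, sp) != ("", "") and APa(s) & APb(sp)]
            q0 = (r0, x0, r0)
            seen, queue = {q0}, deque([q0])
            while queue:
                r, x, rp = queue.popleft()
                for sigma in Sigma_c:          # check condition (33) at every accessible state
                    if (r, sigma) in eta and (x, sigma) in xi and (rp, sigma) not in eta:
                        return False
                for s, sp in events:           # expand delta_{A,A'} where all three components are defined
                    nr = r if s == "" else eta.get((r, s))
                    nx = x if sp == "" else xi.get((x, sp))
                    nrp = rp if sp == "" else eta.get((rp, sp))
                    if nr is None or nx is None or nrp is None:
                        continue
                    q = (nr, nx, nrp)
                    if q not in seen:
                        seen.add(q)
                        queue.append(q)
    return True
```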
Example 1 (cont.): Consider again the language L(G), the specification language K = L(GK ),
and the attack set A = {A2 , A3 } in Example 1. To check observability under A by Theorem 15,
we need to construct four test automata TA2,A2, TA3,A3, TA3,A2, and TA2,A3. We see from TA3,A2
in Fig. 5 that K is not observable under A. In fact, for the state (r2 , x01 , r10 ) and the controllable
event c ∈ Σc , we have
η(r2 , c)!, ξ(x01 , c)!, and ¬η(r10 , c)!.
Thus K is not observable under A, which is consistent with the discussion in Example 1.
Fig. 5: Test automaton TA3,A2. (Figure omitted; its accessible states include (r0, x0′, r0′), (r1, x1′, r1′), (r2, x3′, r3′), (r2, x1′, r1′), (r3, x3′, r3′), (r1, x0′, r0′), (r2, x2′, r2′), (r3, x2′, r2′), (r0, x3′, r3′), and (r0, x2′, r2′), with transitions labelled by event pairs such as (a, a), (b, b), (d, d), (c, ε), (ε, c), (d, c), (b, a), and (a, d).)
C. The proof of Theorem 15
For all (w, w′) ∈ Σ∗ × Σ∗ and all (σ, σ′) ∈ (Σ ∪ {ε}) × (Σ ∪ {ε}), we define
(w, w′)(σ, σ′) := (wσ, w′σ′).
Using this notation, we define
Σ∗T,A,A′ := {(ε, ε)} ∪ {(σ1, σ1′) · · · (σn, σn′) : (σi, σi′) ∈ ΣT,A,A′, ∀i = 1, . . . , n, n ∈ N}.
Also, we define Σ⁰T,A,A′ := {(ε, ε)} and, for each n ≥ 1,
ΣnT,A,A′ := {(σ1, σ1′) · · · (σn, σn′) : (σi, σi′) ∈ ΣT,A,A′, ∀i = 1, . . . , n}.
ΣnT,A,A′ is a subset of Σ∗T,A,A′ whose elements have length n ≥ 0.
The following proposition provides another representation of δA,A0 (q0 , (w, w0 )) with initial
states r0 and x0 .
Lemma 17: Consider replacement-removal attacks A, A0 and the test automaton TA,A0 defined
above. For every (w, w0 ) ∈ Σ∗T,A,A0 , if δA,A0 (q0 , (w, w0 )) is defined, then
η(r0, w)!,  ξ(x0, w′)!,  η(r0, w′)!,     (34)
δA,A′(q0, (w, w′)) = (η(r0, w), ξ(x0, w′), η(r0, w′)).     (35)
Proof: We prove (34) and (35) by induction on the word length of (w, w′). The basis of induction is the empty string (ε, ε). Since δA,A′(q0, (ε, ε)) = q0 = (r0, x0, r0) and η(r0, ε) = r0, ξ(x0, ε) = x0, η(r0, ε) = r0, it follows that (34) and (35) hold.
Suppose now that for all (w, w′) ∈ ΣnT,A,A′ with n ≥ 0, if δA,A′(q0, (w, w′)) is defined, then (34) and (35) hold. Let (w̄, w̄′) ∈ Σn+1T,A,A′ be such that δA,A′(q0, (w̄, w̄′)) is defined. Then there exist (w, w′) ∈ ΣnT,A,A′ and (σ, σ′) ∈ ΣT,A,A′ such that (w̄, w̄′) = (w, w′)(σ, σ′) and
δA,A′(q0, (w, w′))!,  δA,A′(δA,A′(q0, (w, w′)), (σ, σ′))!.     (36)
We know by the induction hypothesis that η(r0 , w), ξ(x0 , w0 ), and η(r0 , w0 ) are defined and that
δA,A0 q0 , (w, w0 ) = η(r0 , w), ξ(x0 , w0 ), η(r0 , w0 ) . Hence we see from the second statement of
(36) that η(r0 , w̄) = η η(r0 , w), σ is defined. Similarly, ξ(x0 , w̄0 ) and η(r0 , w̄0 ) are defined.
Furthermore, we have
δA,A0 q0 , (w̄, w̄0 ) = δA,A0 δA,A0 (q0 , (w, w0 )), (σ, σ 0 )
= η(r0 , w̄), ξ(x0 , w̄0 ), η(r0 , w̄0 ) .
Thus the desired statements (34) and (35) hold for all (w̄, w̄0 ) ∈ Σn+1
T,A,A0 , which completes the
induction step.
The next lemma shows that the product automaton TA,A0 tests state transitions by two events
in K whose observation alphabets may not be distinguished under attacks A and A0 by the
supervisor:
Lemma 18: Consider replacement-removal attacks A, A0 . For the test automaton TA,A0 defined
as in Section IV A, we have
L(TA,A′) := {(w, w′) ∈ Σ∗T,A,A′ : δA,A′(q0, (w, w′))!}
= {(w, w′) ∈ Σ∗ × Σ∗ : w, w′ ∈ K, AP(w) ∩ A′P(w′) ≠ ∅} =: L̂A,A′.     (37)
In the proof of Lemma 18, we use the lemma below that provides the property of Σ∗T,A,A0 and
a set equivalent to L̂A,A0 .
Lemma 19: Consider two replacement-removal attacks A, A0 and the test automaton TA,A0
defined as in Section IV A. For all (w, w0 ) ∈ Σ∗ × Σ∗ , we have
(w, w′) ∈ Σ∗T,A,A′  ⇔  AP(w) ∩ A′P(w′) ≠ ∅.     (38)
Moreover, the language L̂A,A0 in (37) satisfies
L̂A,A′ = {(w, w′) ∈ Σ∗T,A,A′ : w, w′ ∈ K}.     (39)
Proof: (⇒ of (38)) We show the proof by induction on the word length of (w, w′) ∈ Σ∗T,A,A′. The basis of the induction, (ε, ε), satisfies ε ∈ AP(ε) ∩ A′P(ε). We therefore have AP(ε) ∩ A′P(ε) ≠ ∅.
Suppose that all words (w, w′) ∈ ΣnT,A,A′ satisfy AP(w) ∩ A′P(w′) ≠ ∅. Let (w̄, w̄′) ∈ Σn+1T,A,A′.
Then there exists (w, w0 ) ∈ ΣnT,A,A0 and (σ, σ 0 ) ∈ ΣT,A,A0 such that w̄ = wσ and w̄0 = w0 σ 0 .
The induction hypothesis shows that AP (w) ∩ A0 P (w0 ) 6= ∅, and by construction, we have
AP (σ) ∩ A0 P (σ 0 ) 6= ∅. Thus AP (w̄) ∩ A0 P (w̄0 ) 6= ∅.
(⇐ of (38)) We split the proof into two cases: [w = ε or w′ = ε] and [w ≠ ε and w′ ≠ ε].
First we consider the case [w = ε or w′ = ε]. Let w = ε. Since AP(ε) ∩ A′P(w′) ≠ ∅ and since AP(ε) = {ε}, it follows that ε ∈ A′P(w′). If w′ = ε, then (w, w′) = (ε, ε) ∈ Σ∗T,A,A′. Suppose that w′ ≠ ε. Let w′ = w1′ · · · wk′ with w1′, . . . , wk′ ∈ Σ. Then
ε ∈ A′P(w′) = A′P(w1′) · · · A′P(wk′),
and hence ε ∈ AP(ε) ∩ A′P(wi′) for all i = 1, . . . , k. Thus (w, w′) ∈ Σ∗T,A,A′.
Next we study the case [w ≠ ε and w′ ≠ ε]. Let w = w1 · · · wl with w1, . . . , wl ∈ Σ and w′ = w1′ · · · wk′ with w1′, . . . , wk′ ∈ Σ. Suppose that AP(w) ∩ A′P(w′) = {ε}. Then
ε ∈ AP(w) = AP(w1) · · · AP(wl),
ε ∈ A′P(w′) = A′P(w1′) · · · A′P(wk′).
Hence ε ∈ AP(wj) for all j = 1, . . . , l and ε ∈ A′P(wi′) for all i = 1, . . . , k. Thus (w, w′) ∈ Σ∗T,A,A′.
Suppose that AP(w) ∩ A′P(w′) ≠ {ε}; then there exists y ∈ ∆∗ \ {ε} such that y ∈ AP(w) ∩ A′P(w′). Let y = y1 · · · ym with y1, . . . , ym ∈ ∆. From Lemma 10, there exist i1, . . . , im such that y1 ∈ AP(wi1), . . . , ym ∈ AP(wim) and
ε ∈ AP(w1 · · · wi1−1) = AP(w1) · · · AP(wi1−1),
ε ∈ AP(wi1+1 · · · wi2−1) = AP(wi1+1) · · · AP(wi2−1),
. . . ,
ε ∈ AP(wim+1 · · · wl) = AP(wim+1) · · · AP(wl),
which implies that ε ∈ AP(wi) for all i ≠ i1, . . . , im. We also have similar indices for w′. Thus (w, w′) ∈ Σ∗T,A,A′.
The second statement (39) directly follows from (38).
Using Lemmas 17 and 19, we prove Lemma 18.
Proof of Lemma 18: From Lemmas 17 and 19, we obtain L(TA,A0 ) ⊂ L̂A,A0 .
Let us prove L(TA,A′) ⊃ L̂A,A′. From Lemma 19, it is enough to show this inclusion by induction on the word length of (w, w′) ∈ Σ∗T,A,A′. Since AP(ε) ∩ A′P(ε) = {ε}, the basis of the induction, (ε, ε), belongs to both sets L(TA,A′) and L̂A,A′.
Suppose that all words in L̂A,A′ whose length is n ≥ 0 belong to L(TA,A′). Let (w̄, w̄′) ∈ L̂A,A′ have length n + 1, that is, (w̄, w̄′) ∈ Σn+1T,A,A′ and w̄, w̄′ ∈ K. Then
(w̄, w̄′) = (w, w′)(σ, σ′)
for some (w, w′) ∈ ΣnT,A,A′ and some (σ, σ′) ∈ ΣT,A,A′. Since w, w′ ∈ K, the induction hypothesis shows that η(r0, w), ξ(x0, w′), and η(r0, w′) are defined. Hence
δA,A′(q0, (w̄, w̄′))!  ⇐  η(η(r0, w), σ)! ∧ ξ(ξ(x0, w′), σ′)! ∧ η(η(r0, w′), σ′)!.
The right statement is equivalent to w̄, w̄′ ∈ K. Thus all the words in L̂A,A′ whose length is n + 1 belong to L(TA,A′). This completes the proof.
We are now ready to provide a proof of Theorem 15, where the property of L(TA,A0 ) in
Lemma 18 plays an important role.
Proof of Theorem 15: (⇒) Suppose that K is not observable under the attack set A. From
Proposition 3, there exist w, w0 ∈ K, σ ∈ Σc , and A, A0 ∈ A such that
AP (w) ∩ A0 P (w0 ) 6= ∅, wσ ∈ K, and w0 σ ∈ L \ K.
Then Lemma 18 shows that (w, w0 ) ∈ L(TA,A0 ). Hence δA,A0 q0 , (w, w0 ) is defined, and (34) in
Lemma 17 holds. Define r, x0 , r0 by
r := η(r0 , w),
x0 := ξ(x0 , w0 ),
r0 := η(r0 , w0 ).
Then we see from (35) in Lemma 17 that
(r, x0 , r0 ) = δA,A0 q0 , (w, w0 ) ,
and hence (r, x0 , r0 ) ∈ AcA,A0 (Q). Furthermore, since wσ ∈ K and w0 σ ∈ L \ K, we have (33).
(⇐) Conversely, suppose that A, A0 ∈ A, (r, x0 , r0 ) ∈ AcA,A0 (Q), and σ ∈ Σc satisfy (33). Since
(r, x0 , r0 ) ∈ AcA,A0 (Q), it follows that (r, x0 , r0 ) = δA,A0 q0 , (w, w0 ) for some (w, w0 ) ∈ L(TA,A0 ),
and hence Lemma 17 shows that
r = η(r0 , w),
x0 = ξ(x0 , w0 ),
r0 = η(r0 , w0 ).
In conjunction with (33), this leads to wσ ∈ K and w0 σ ∈ L \ K. On the other hand, since
(w, w0 ) ∈ L(TA,A0 ), it follows from Lemma 18 that w, w0 ∈ K and AP (w) ∩ A0 P (w0 ) 6= ∅. Thus
Proposition 3 shows that K is not observable under the attack pair {A, A0 }. This completes the
proof.
V. CONCLUSION
We studied supervisory control for DESs in the presence of attacks. We defined a new notion of
observability under attacks and proved that this notion combined with conventional controllability
is necessary and sufficient for the existence of a supervisor enforcing the specification language
despite the attacks. Furthermore, we showed that the usual notion of observability can be used
to test the new notion of observability in the case of insertion-removal attacks. The automaton
representation and the observability test were extended for replacement-removal attacks. In [35],
we also define normality under attacks and discuss the existence of a maximally permissive
supervisor. An important direction for future work is to combine robustness against attack with
confidentiality and integrity in supervisory control for DESs.
REFERENCES
[1] S. Checkoway, D. McCoy, B. Kantor, D. Anderson, H. Shacham, S. Savage, K. Koscher, A. Czeskis, F. Roesner, and T. Kohno, “Comprehensive experimental analyses of automotive attack surfaces,” in Proc. USENIX Security Symposium, 2011.
[2] A. J. Kerns, D. P. Shepard, J. A. Bhatti, and T. E. Humphreys, “Unmanned aircraft capture and control via GPS spoofing,”
J. Field Robot., vol. 31, pp. 617–636, 2014.
[3] J. Slay and M. Miller, “Lessons learned from the Maroochy water breach,” in Proc. Critical Infrastructure Protection, vol.
253, 2007, pp. 73–82.
[4] J. P. Farwell and R. Rohozinski, “Stuxnet and the future of cyber war,” Survival, vol. 53, pp. 23–40, 2011.
[5] T. De Maiziére, “Die Lage der IT-Sicherheit in Deutschland 2014,” Tech. Report, Federal Office for Information Security,
2014. [Online]. Available: http://www.wired.com/wp-content/uploads/2015/01/Lagebericht2014.pdf
[6] P. Falkman, B. Lennartson, and M. Tittus, “Specification of a batch plant using process algebra and petri nets,” Control
Eng. Practice, vol. 17, pp. 1004–1015, 2009.
[7] X. Zhao, P. Shi, and L. Zhang, “Asynchronously switched control of a class of slowly switched linear systems,” Systems
Control Lett., vol. 61, pp. 1151–1156, 2012.
[8] M. Uzam and G. Gelen, “The real-time supervisory control of an experimental manufacturing system based on a hybrid
method,” Control Eng. Practice, vol. 17, pp. 1174–1189, 2009.
[9] H. Fawzi, P. Tabuada, and S. Diggavi, “Secure estimation and control for cyber-physical systems under adversarial attacks,”
IEEE Trans. Automat. Control, vol. 59, pp. 1454–1467, 2014.
[10] Y. Shoukry and P. Tabuada, “Event-triggered state observers for sparse noise/attacks,” IEEE Trans. Automat. Control,
vol. 61, pp. 2079–2091, 2016.
[11] M. S. Chong, M. Wakaiki, and J. P. Hespanha, “Observability of linear systems under adversarial attacks,” in Proc. ACC’15,
2015.
[12] F. Lin, “Robust and adaptive supervisory control of discrete event systems,” IEEE Trans. Automat. Control, vol. 38, pp.
1848–1852, 1993.
[13] S. Takai, “Robust supervisory control of a class of timed discrete event systems under partial observation,” Systems Control
Lett., vol. 39, pp. 267–273, 2000.
[14] A. Saboori and S. H. Zad, “Robust nonblocking supervisory control of discrete-event systems under partial observation,”
Systems Control Lett., vol. 55, pp. 839–848, 2006.
[15] A. M. Sánchez and F. J. Montoya, “Safe supervisory control under observability failure,” Discrete Event Dyn. System:
Theory Appl., vol. 16, pp. 493–525, 2006.
[16] A. Paoli, M. Sartini, and S. Lafortune, “Active fault tolerant control of discrete event systems using online diagnostics,”
Automatica, vol. 47, pp. 639–649, 2011.
[17] S. Shu and F. Lin, “Fault-tolerant control for safety of discrete-event systems,” IEEE Trans. Autom. Sci. Eng., vol. 11, pp.
78–89, 2014.
[18] S. Amin, X. Litrico, S. Sastry, and A. M. Bayen, “Cyber security of water SCADA systems–Part I: Analysis and
experimentation of stealthy deception attacks,” IEEE Trans. Control Systems Tech., vol. 21, pp. 1963–1970, 2013.
[19] A. Teixeira, I. Shames, H. Sandberg, and K. H. Johansson, “A secure control framework for resource-limited adversaries,”
Automatica, vol. 51, pp. 135–148, 2015.
[20] S. Takai and Y. Oka, “A formula for the supremal controllable and opaque sublanguage arising in supervisory control,”
SICE J Control Meas. System Integr., vol. 1, pp. 307–311, 2008.
[21] J. Dubreil, P. Darondeau, and H. Marchand, “Supervisory control for opacity,” IEEE Trans. Automat. Control, vol. 55, pp.
1089–1100, 2010.
[22] A. Saboori and C. N. Hadjicostis, “Opacity-enforcing supervisory strategies via state estimator constructions,” IEEE Trans.
Automat. Control, vol. 57, pp. 1155–1165, 2012.
[23] Y.-C. Wu and S. Lafortune, “Synthesis of insertion functions for enforcement of opacity security properties,” Automatica,
vol. 50, pp. 1336–1348, 2014.
[24] D. Thorsley and D. Teneketzis, “Intrusion detection in controlled discrete event systems,” in Proc. 45th IEEE CDC, 2006.
[25] S.-J. Whittaker, M. Zulkernine, and K. Rudie, “Toward incorporating discrete-event systems in secure software development,” in Proc. ARES’08, 2008.
[26] N. Hubballi, S. Biswas, S. Roopa, R. Ratti, and S. Nandi, “LAN attack detection using discrete event systems,” ISA Trans.,
vol. 50, pp. 119–130, 2011.
[27] L. K. Carvalho, Y.-C. Wu, R. Kwong, and S. Lafortune, “Detection and prevention of actuator enablement attacks in
supervisory control systems,” in Proc. WODES’16, 2016.
[28] S. Xu and R. Kumar, “Discrete event control under nondeterministic partial observation,” in Proc. IEEE CASE’09, 2009.
[29] T. Ushio and S. Takai, “Nonblocking supervisory control of discrete event systems modeled by Mealy automata with
nondeterministic output functions,” IEEE Trans. Automat. Control, vol. 61, pp. 799–804, 2016.
[30] R. Cieslak, C. Desclaux, A. S. Fawaz, and P. Varaiya, “Supervisory control of discrete event processes with partial
observations,” IEEE Trans. Automat. Control, vol. 33, pp. 249–260, 1988.
[31] F. Lin and W. M. Wonham, “On observability of discrete-event systems,” Inform. Sci., vol. 44, pp. 173–198, 1988.
[32] J. N. Tsitsiklis, “On the control of discrete-event dynamical systems,” Math. Control Signals Systems, pp. 95–107, 1989.
[33] P. J. Ramadge and W. M. Wonham, “The control of discrete event systems,” Proc. IEEE, vol. 77, pp. 81–98, 1989.
[34] C. Cassandras and S. Lafortune, Introduction to Discrete Event Systems. Boston, MA: Kluwer, 1999.
[35] M. Wakaiki, P. Tabuada, and J. P. Hespanha, “Supervisory control of discrete-event systems under attacks,” Tech. Report,
University of California, Santa Barbara, 2016. [Online]. Available: http://www.ece.ucsb.edu/∼hespanha/published/DES
under attack ver8 19 Tech note.pdf
arXiv:1709.05976v3 [cs.LG] 10 Nov 2017
Leveraging Distributional Semantics for
Multi-Label Learning
Rahul Wadbude
Vivek Gupta
Piyush Rai
IIT Kanpur
[email protected]
Microsoft Research
[email protected]
IIT Kanpur
[email protected]
Nagarajan Natarajan
Harish Karnick
Prateek Jain
Microsoft Research
[email protected]
IIT Kanpur
[email protected]
Microsoft Research
[email protected]
Abstract
We present a novel and scalable label embedding framework
for large-scale multi-label learning a.k.a ExMLDS (Extreme
Multi-Label Learning using Distributional Semantics). Our
approach draws inspiration from ideas rooted in distributional
semantics, specifically the Skip Gram Negative Sampling
(SGNS) approach, widely used to learn word embeddings
for natural language processing tasks. Learning such embeddings can be reduced to a certain matrix factorization. Our
approach is novel in that it highlights interesting connections
between label embedding methods used for multi-label learning and paragraph/document embedding methods commonly
used for learning representations of text data. The framework
can also be easily extended to incorporate auxiliary information such as label-label correlations; this is crucial especially
when there are a lot of missing labels in the training data. We
demonstrate the effectiveness of our approach through an extensive set of experiments on a variety of benchmark datasets,
and show that the proposed learning methods perform favorably compared to several baselines and state-of-the-art methods for large-scale multi-label learning. To facilitate end-toend learning, we develop a joint learning algorithm that can
learn the embeddings as well as a regression model that predicts these embeddings given input features, via efficient gradient based methods.
Introduction
Modern data generated in various domains are increasingly
"multi-label" in nature; images (e.g. Instagram) and documents (e.g. Wikipedia) are often identified with multiple
tags, online advertisers often associate multiple search keywords with ads, and so on. Multi-label learning is the problem of learning to assign multiple labels to instances, and
has received a great deal of attention over the last few
years; especially so, in the context of learning with millions of labels, now popularly known as extreme multi-label
learning (Jain, Prabhu, and Varma 2016; Bhatia et al. 2015;
Babbar and Schölkopf 2017; Prabhu and Varma 2014).
The key challenges in multi-label learning, especially when there are millions of labels, include a) the
data may have a large fraction of labels missing, and
b) the labels are often heavy-tailed (Bhatia et al. 2015;
Jain, Prabhu, and Varma 2016) and predicting labels in
the tail becomes significantly hard for lack of training data. For these reasons, and the sheer scale of
data, traditional multi-label classifiers are rendered
impracticable. State-of-the-art approaches to extreme
multi-label learning fall broadly under two classes: 1)
embedding based methods, e.g. L EML (Yu et al. 2014),
W SABIE
(Weston, Bengio, and Usunier 2010),
S LEEC (Bhatia et al. 2015), P D -S PARSE (Yen et al. 2016)),
and 2) tree-based methods (Prabhu and Varma 2014;
Jain, Prabhu, and Varma 2016). The first class of approaches
are generally scalable and work by embedding the highdimensional label vectors to a lower-dimensional space and
learning a regressor in that space. In most cases, these methods rely on a key assumption that the binary label matrix is
low rank and consequently the label vectors can be embedded into a lower-dimensional space. At the time of prediction, a decompression matrix is used to retrieve the original
label vector from the low-dimensional embeddings. As corroborated by recent empirical evidence (Bhatia et al. 2015;
Jain, Prabhu, and Varma 2016), approaches based on
standard structural assumptions such as low-rank label matrix fail and perform poorly on the tail. The
second class of methods (tree-based) methods for
multi-label learning try to move away from rigid
structural
assumptions
(Prabhu and Varma 2014;
Jain, Prabhu, and Varma 2016), and have been demonstrated to work very well especially on the tail labels.
In this work, we propose an embedding based
approach, closely following the framework of
S LEEC (Bhatia et al. 2015), that leverages a word vector embedding technique (Mikolov et al. 2013) which has
found resounding success in natural language processing
tasks. Unlike other embedding based methods, S LEEC has
the ability to learn non-linear embeddings by aiming to
preserve only local structures and example neighborhoods.
We show that by learning rich word2vec style embedding
for instances (and labels), we can a) achieve competitive multi-label prediction accuracies, and often improve
over the performance of the state-of-the-art embedding
approach S LEEC and b) cope with missing labels, by
incorporating auxiliary information in the form of labellabel co-occurrences, which most of the state-of-the-art
methods can not. Furthermore, our learning algorithm
admits significantly faster implementation compared to
other embedding based approaches. The distinguishing
aspect of our work is that it draws inspiration from distributional semantics approaches (Mikolov et al. 2013;
Le and Mikolov 2014), widely used for learning non-linear
representations of text data for natural language processing
tasks such as understand word and document semantics,
classifying documents, etc.
Our main contributions are:
1. We leverage an interesting connection between the problem of learning distributional semantics in text data analysis and the multi-label learning problem. To the best of
our knowledge, this is a novel application.
2. The proposed objectives for learning embeddings can be
solved efficiently and scalably; the learning reduces to a
certain matrix factorization problem.
3. Unlike existing multi-label learning methods, our method
can also leverage label co-occurrence information while
learning the embeddings; this is especially appealing
when a large fraction of labels are missing in the label
matrix.
4. We show improvement in training time as compared to
state-of-art label embedding methods for extreme multilabel learning, while being competitive in terms of label
prediction accuracies; we demonstrate scalability and prediction performance on several state-of-the-art moderateto-large scale multi-label benchmark datasets.
The outline of the paper is as follows. We begin by setting
up notation, background and describing the problem formulation in Section . In Section , we present our training algorithms based on learning word embeddings for understanding word and document semantics. Here we propose two
objectives, where we progressively incorporate auxiliary information viz. label correlations. We present comprehensive
experimental evaluation in Section , and conclude.
Problem Formulation and Background
In the standard multi-label learning formulation, the
learning algorithm is given a set of training instances
{x1 , x2 , . . . , xn }, where xi ∈ Rd and the associated label
vectors {y1 , y2 , . . . , yn }, where yi ∈ {0, 1}L. In real-world
multi-label learning data sets, one does not usually observe
irrelevant labels; here yij = 1 indicates that the jth label is
relevant for instance i but yij = 0 indicates that the label is
missing or irrelevant. Let Y ∈ {0, 1}n×L denote the matrix
of label vectors. In addition, we may have access to labelL×L
label co-occurrence information, denoted by C ∈ Z+
(e.g., number of times a pair of labels co-occur in some external source such as the Wikipedia corpus). The goal in multilabel learning is to learn a vector-valued function f : x 7→ s,
where s ∈ RL scores the labels.
Embedding-based approaches typically model f as a com′
posite function h(g(x)) where, g : Rd → Rd and
′
h : Rd → RL . For example, assuming both g and h
as linear transformations, one obtains the formulation proposed by (Yu et al. 2014). The functions g and h can be
learnt using training instances or label vectors, or both.
More recently, non-linear embedding methods have been
shown to help improve multi-label prediction accuracies
significantly. In this work, we follow the framework of
(Bhatia et al. 2015), where g is a linear transformation, but
h is non-linear, and in particular, based on k-nearest neighbors in the embedded feature space.
′
In S LEEC, the function g : Rd → Rd is given by
′
′
g(x) = V x where V ∈ Rd ×d . The function h : Rd → RL
is defined as:
n
h z; {zi , yi }i=1 =
1 X
yi ,
|Nk |
(1)
i∈Nk
where zi = g(xi ) and Nk denotes the k−nearest neighbor
training instances of z in the embedded space. Our algorithm
for predicting the labels of a new instance is identical to that
of S LEEC and is presented for convenience in Algorithm 1.
Note that, for speeding up predictions, the algorithm relies
on clustering the training instances xi ; for each cluster of
instances Qτ , a different linear embedding gτ , denoted by
V τ , is learnt.
Algorithm 1 Prediction Algorithm
Input: Test point: x, no. of nearest neighbors k, no. of
desired labels p.
1. Qτ : partition closest to x.
2. z ← V τ x.
3. Nk ← k nearest neighbors of z in the embedded instances of Qτ .
4. s = h(z; {zi , yi }i∈Qτ ) where h is defined in (1).
return top p scoring labels according to s.
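For concreteness, a small numpy sketch of the decompression function h in (1) and of Algorithm 1, assuming the cluster centres, the per-cluster regressors V_tau, the embedded training points Z, and the label matrix Y are already available; the variable names and data layout are ours, not the paper's.

```python
import numpy as np

def predict_labels(x, clusters, V, Z, Y, k=10, p=5):
    """Algorithm 1 sketch: embed x with the regressor of its closest partition, average the label
    vectors of the k nearest embedded training points (eq. (1)), and return the top-p labels."""
    tau = min(clusters, key=lambda c: np.linalg.norm(x - clusters[c]["center"]))
    z = V[tau] @ x                                   # step 2: z = V_tau x
    idx = clusters[tau]["members"]                   # training points in partition Q_tau
    dists = np.linalg.norm(Z[idx] - z, axis=1)
    nn = np.asarray(idx)[np.argsort(dists)[:k]]      # step 3: k nearest neighbours N_k
    scores = Y[nn].mean(axis=0)                      # step 4: s = h(z) = (1/|N_k|) sum of y_i over N_k
    return np.argsort(-scores)[:p]
```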
In this work, we focus on learning algorithms for the functions g and h, inspired by their successes in natural language
processing in the context of learning distributional semantics (Mikolov et al. 2013; Levy and Goldberg 2014). In particular, we use techniques for inferring word-vector embeddings for learning the function h using a) training label vectors yi , and b) label-label correlations C ∈ RL×L .
Word embeddings are desired in natural language processing in order to understand semantic relationships between words, classifying text documents, etc. Given a text
corpus consisting of a collection of documents, the goal
is to embed each word in some space such that words appearing in similar contexts (i.e. adjacenct words in documents) should be closer in the space, than those that do
not. In particular, we use the word2vec embedding approach (Mikolov et al. 2013) to learn an embedding of instances, using their label vectors y1 , y2 , . . . , yn . S LEEC also
uses nearest neighbors in the space of label vectors yi in order to learn the embeddings. However, we show in experiments that word2vec based embeddings are richer and
help improve the prediction performance significantly, especially when there is a lot of missing labels. In the subsequent
section, we discuss our algorithms for learning the embeddings and the training phase of multi-label learning.
Learning Instance and Label Embeddings
There are multiple algorithms in the literature for
learning
word
embeddings
(Mikolov et al. 2013;
Pennington, Socher, and Manning 2014). In this work,
we use the Skip Gram Negative Sampling (SGNS) technique, for two reasons a) it is shown to be competitive in
natural language processing tasks, and more importantly b)
it presents a unique advantage in terms of scalability, which
we will address shortly after discussing the technique.
Skip Gram Negative Sampling. In SGNS, the goal is to
′
learn an embedding z ∈ Rd for each word w in the vocabulary. To do so, words are considered in the contexts in which
they occur; context c is typically defined as a fixed size window of words around an occurrence of the word. The goal is
to learn z such that the words in similar contexts are closer
to each other in the embedded space. Let w′ ∈ c denote a
word in the context c of word w. Then, the likelihood of observing the pair (w, w′ ) in the data is modeled as a sigmoid
of their inner product similarity:
Note that Nk (yi ) denotes the k-nearest neighborhood of ith
instance in the space of label vectors 1 or instance embedding. After learning label embeddings zi , we can learn the
function g : x → z by regressing x onto z, as in S LEEC.
Solving (3) for zi using standard word2vec implementations can be computationally expensive, as it requires training multiple-layer neural networks. Fortunately, the learning can be significantly sped up using the key observation
by (Levy and Goldberg 2014).
(Levy and Goldberg 2014) showed that solving SGNS objective is equivalent to matrix factorization of the shifted positive point-wise mutual information (SPPMI) matrix defined
as follows. Let Mij = hyi , yj i.
Mij ∗ |M |
P
PMIij (M ) = log P
k M(k,j)
k M(i,k) ∗
SPPMIij (M ) = max(PMIij (M ) − log(k), 0) (4)
Here, PMI is the point-wise mutual information matrix of M
and |M | denotes the sum of all elements in M . Solving the
problem (3) reduces to factorizing the shifted PPMI matrix
M.
Finally, we use ADMM (Boyd et al. 2011) to learn the re1
P (Observing (w, w′ )) = σ(hzw , zw′ i) =
. gressors V over the embedding space formed by z . Overall
i
1 + exp(h−zw , zw′ i)
training algorithm is presented in 2.
To promote dissimilar words to be further apart, negative sampling is used, wherein randomly sampled negAlgorithm 2 Learning embeddings via SPPMI factorizaative examples (w, w′′ ) are used. Overall objective fation (E X MLDS1).
vors zw , zw′ , zw′′ that maximize the log likelihood of obInput. Training data (xi , yi ), i = 1, 2, . . . , n.
serving (w.w′ ), for w′ ∈ c, and the log likelihood of
c := SPPMI(M ) in (4), where Mij =
′′
′′
1. Compute M
P (not observing (w, w )) = 1 − P (Observing (w, w )) for
hy
,
y
i.
randomly sampled negative instances. Typically, n− negai j
c), and preserve top d′ singular
tive examples are sampled per observed example, and the
2. Let U, S, V = svd(M
resulting SGNS objective is given by:
values and singular vectors.
3. Compute the embedding matrix Z = U S 0.5 , where
X X
′
log σ(hzw , zw′ i) +
max
Z ∈ Rn×d , where ith row gives zi
z
4. Learn V
s.t. XV T
=
Z using
w
w ′ :(w ′ ,w)
(2)
ADMM
(Boyd
et
al.
2011),
where
X
is
the
matrix
X
n−
with xi as rows.
log σ(−hzw , zw′′ i) ,
#w ′′
return V, Z
w
where #w denotes the total number of words in the vocabulary, and the negative instances are sampled uniformly over
the vocabulary.
Embedding label vectors
We now derive the analogous embedding technique for
multi-label learning. A simple model is to treat each instance
as a "word"; define the "context" as k-nearest neighbors of
a given instance in the space formed by the training label
vectors yi , with cosine similarity as the metric. We then arrive at an objective identical to (2) for learning embeddings
z1 , z2 , . . . , zn for instances x1 , x2 , . . . , xn respectively:
max
z1 ,z2 ,...,zn
n X
X
i=1
j:Nk (yi )
log σ(hzi , zj i) +
n− X
log σ(−hzi , zj ′ i)
n ′
j
,
(3)
We refer to Algorithm 2 based on fast PPMI matrix factorization for learning label vector embeddings
as E X MLDS1. We can also optimize the objective 3 using a
neural network model (Mikolov et al. 2013); we refer to this
word2vec method for learning embeddings in Algorithm 2
as E X MLDS2.
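A compact sketch of Steps 1-3 of Algorithm 2 (ExMLDS1): form M_ij = ⟨y_i, y_j⟩, build the SPPMI matrix of (4), and take a truncated SVD to obtain Z = U S^0.5. The dense numpy formulation and the value of the negative-sampling shift k below are illustrative assumptions; a scalable implementation would operate on sparse, neighbourhood-restricted matrices.

```python
import numpy as np

def sppmi_embeddings(Y, dim, shift_k=15):
    """ExMLDS1 sketch: SPPMI factorization of M = Y Y^T (eq. (4)), embeddings Z = U S^0.5 (Algorithm 2)."""
    M = Y @ Y.T                                      # M_ij = <y_i, y_j>
    total = M.sum()
    row = M.sum(axis=1, keepdims=True)
    col = M.sum(axis=0, keepdims=True)
    with np.errstate(divide="ignore", invalid="ignore"):
        pmi = np.log((M * total) / (row @ col))      # PMI_ij = log( M_ij |M| / (sum_k M_ik * sum_k M_kj) )
    pmi[~np.isfinite(pmi)] = -np.inf
    sppmi = np.maximum(pmi - np.log(shift_k), 0.0)   # SPPMI_ij = max(PMI_ij - log k, 0)
    U, S, _ = np.linalg.svd(sppmi, full_matrices=False)
    return U[:, :dim] * np.sqrt(S[:dim])             # Z = U S^0.5, top-d' singular directions

# Step 4 (regressor V with X V^T = Z) could then be approximated, e.g., by least squares:
# V = np.linalg.lstsq(X, Z, rcond=None)[0].T
```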
Using label correlations
In various practical natural language processing applications, superior performance is obtained using joint models
for learning embeddings of text documents as well as individual words in a corpus (Dai, Olah, and Le 2015). For example, in PV-DBoW (Dai, Olah, and Le 2015), the objective while learning embeddings is to maximize similarity
1
Alternately, one can consider the neighborhood in the ddimensional feature space xi ; however, we perform clustering in
this space for speed up, and therefore the label vectors are likely to
preserve more discriminative information within clusters.
between embedded documents and words that compose the
documents. Negative sampling is also included, where the
objective is to minimize the similarity between the document
embeddings and the embeddings of high frequency words.
In multi-label learning, we want to learn the embeddings
of labels as well as instances jointly. Here, we think of labels as individual words, whereas label vectors (or instances
with the corresponding label vectors) as paragraphs or documents. As alluded to in the beginning of Section , in many
real world problems, we may also have auxiliary label correlation information, such as label-label co-occurrence. We
can easily incorporate such information in the joint modeling approach outlined above. To this end, we propose the
following objective that incorporates information from both
label vectors as well as label correlations matrix:
max Oz,z̄ = µ1 Oz̄1 + µ2 O2z + µ3 O3 {z,z̄}
z,z̄
O1z̄ =
L
X
i=1
X
log σ(hz̄i , z̄j i) +
j:Nk (C(i,:))
n1− X
′
log σ(−hz̄i , z̄j i) ,
L ′
(5)
(6)
j
O2z =
n
X
i=1
X
j:Nk (M (i,:))
log σ(hzi , zj i) +
n2− X
log σ(−hzi , zj ′ i) ,
n ′
(7)
L X
X
i=1
n3−
L
j:yij =1
X
j′
log σ(hzi , z̄j i) +
log σ(−hzi , z̄j ′ i)
Algorithm 3 Learning joint label and instance embeddings
via SPPMI factorization (E X MLDS3).
Input. Training data (xi , yi ), i = 1, 2, . . . , n and C (labellabel correlation matrix) and objective weighting µ1 ,µ2
and µ3 .
b := SPPMI(A) in (4); write
1. Compute A
µ2 M µ3 Y
,
A=
µ3 Y T µ1 C
Mij = hyi , yj i, Y is label matrix with yi as rows.
b and preserve top d′ singular
2. Let U, S, V = svd(A),
values and singular vectors.
3. Compute the embedding matrix Z = U S 0.5 ; write
Z1
Z=
,
Z2
′
j
O3{z,z̄} =
objective efficiently utilizes label-label correlations to help
improve embedding and, importantly, to cope with missing
labels. The complete training procedure using SPPMI factorization is presented in Algorithm 3. Note that we can use
the same arguments given by (Levy and Goldberg 2014) to
show that the proposed combined objective (5) is solved by
SPPMI factorization of the joint matrix A given in Step 1 of
Algorithm 3.
(8)
Here, zi , i = 1, 2, . . . , n denote embeddings of instances
while z̄i , i = 1, 2, . . . , L denote embeddings of labels.
Nk (M (i, :)) denotes the k-nearest neighborhood of ith instance in the space of label vectors. Nk (C(i, :)) denotes the
k-nearest neighborhood of ith label in the space of labels.
Here, M defines instance-instance correlation i.e. Mij =
hyi , yj i and C is the label-label correlation matrix. Clearly,
(7) above is identical to (3). O1z̄ tries to embed labels z̄i in a
vector space, where correlated labels are closer; O2z tries to
embed instances zi in such a vector space, where correlated
instances are closer; and finally, O3{z,z̄} tries to embed labels
and instances in a common space where labels occurring in
the ith instance are closer to the embedded instance.
Overall the combined objective O{z,z̄} promotes learning
a common embedding space where correlated labels, correlated instances and observed labels for a given instance
occur closely. Here µ1 ,µ2 and µ3 are hyper-parameters to
weight the contributions from each type of correlation. n1−
negative examples are sampled per observed label, n2− negative examples are sampled per observed instance in context of labels and n3− negative examples are sampled per observed instance in context of instances. Hence, the proposed
where rows of Z1 ∈ Rn×d give instance embedding and
′
rows of Z2 ∈ RL×d give label embedding.
4. Learn V
s.t. XV T
=
Z1 using
ADMM (Boyd et al. 2011), where X is the matrix
with xi as rows.
return V, Z
Algorithm 4 Prediction Algorithm with Label Correlations
(E X MLDS3 prediction).
Input: Test point: x, no. of nearest neighbors k, no. of
desired labels p, V , embeddings Z1 and Z2 .
1. Use Algorithm 1 (Step 3) with input Z1 , k, p to get
score s1 .
3. Get score s2 = Z2 V x
4. Get final score s = kss11 k + kss22 k .
return top p scoring labels according to s.
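A sketch of the score combination used by Algorithm 4, assuming s1 is the k-nearest-neighbour score over the instance embeddings Z1 (as in Algorithm 1) and s2 comes from the inner products of the label embeddings Z2 with the embedded test point; the normalisation follows Step 4, while the function and variable names are ours.

```python
import numpy as np

def predict_with_label_embeddings(x, V, Z1, Y, Z2, k=10, p=5):
    """Algorithm 4 sketch: combine the k-NN score s1 (over Z1) with the label-embedding score s2."""
    z = V @ x
    nn = np.argsort(np.linalg.norm(Z1 - z, axis=1))[:k]
    s1 = Y[nn].mean(axis=0)                   # step 1: k-NN label average, as in Algorithm 1 / eq. (1)
    s2 = Z2 @ z                               # step 3: similarity of each label embedding to V x
    s = s1 / np.linalg.norm(s1) + s2 / np.linalg.norm(s2)   # step 4: normalised sum of the two scores
    return np.argsort(-s)[:p]
```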
At test time, given a new data point we could use the Algorithm 1 to get top p labels. Alternately, we propose to use Algorithm 4 that also incorporates similarity with label embeddings Z2 along with Z1 during prediction, especially when
there are very few training labels to learn from. In practice,
we find this prediction approach useful. Note the zi corresponds to the ith row of Z1 , and z̄j corresponds to the j th
row of Z2 . We refer the Algorithm 3 based on the combined
learning objective (5) as E X MLDS3.
Experiments
We conduct experiments on commonly used benchmark
datasets from the extreme multi-label classification repository provided by the authors of (Prabhu and Varma 2014;
Bhatia et al. 2015) 2 ; these datasets are pre-processed, and
have prescribed train-test splits. Statistics of the datasets
used in experiments is shown in Table 1. We use the standard,
practically relevant, precision at k (denoted by Prec@k) as
the evaluation metric of the prediction performance. Prec@k
denotes the number of correct labels in the top k predictions. We run our code and all other baselines on a Linux
machine with 40 cores and 128 GB RAM. We implemented
our prediction Algorithms 1 and 4 in M ATLAB. Learning
Algorithms 2 and 3 are implemented partly in Python and
partly in M ATLAB. The source code will be made available
later. We evaluate three models (a) E X MLDS1 i.e. Algorithm 2 based on fast PPMI matrix factorization for learning
label embeddings as described in Section , (b) E X MLDS2
based on optimizing the objective (3) as described in section
, using neural network (Mikolov et al. 2013) (c) E X MLDS3
i.e. Algorithm 3 based on combined learning objective (5).
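For reference, a minimal implementation of the Prec@k metric reported in the tables below, following the usual convention of dividing the number of relevant labels among the top k predictions by k; the helper name is ours.

```python
import numpy as np

def precision_at_k(scores, Y_true, k=5):
    """Mean Prec@k over test points: fraction of the top-k scored labels that are relevant."""
    topk = np.argsort(-scores, axis=1)[:, :k]          # indices of the k highest-scoring labels per point
    hits = np.take_along_axis(Y_true, topk, axis=1)    # 1 where a predicted label is actually relevant
    return hits.sum(axis=1).mean() / k
```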
Compared methods.
following baselines.
We compare our algorithms with the
1. S LEEC (Bhatia et al. 2015), which was shown to outperform all other embedding baselines on the benchmark
datasets.
2. L EML (Yu et al. 2014), an embedding based method. This
method also facilitates incorporating label information
(though not proposed in the original paper); we use the
code given by the authors of L EML which uses item features3 . We refer to the latter method that uses label correlations as L EML -I MC.
3. FAST XML
method.
(Prabhu and Varma 2014),
a
tree-based
4. P D -S PARSE (Yen et al. 2016), recently proposed embedding based method
5. P FASTRE XML (Jain, Prabhu, and Varma 2016) is an extension of FAST XML; it was shown to outperform all
other tree-based baselines on benchmark datasets.
6. D I SMEC (Babbar and Schölkopf 2017) is recently proposed scalable implementation of the O NE - VS -A LL
method.
7. DXML (Zhang et al. 2017) is a recent deep learning solution for multi-label learning
8. O NE - VS -A LL (Zhang et al. 2017) is traditional one vs all
multi-label classifier
We report all baseline results from the extreme classification repository,4 where they have been curated; note that
all the relevant research work use the same train-test split for
benchmarking.
2
http://manikvarma.org/downloads/XC/XMLRepository.html
https://goo.gl/jdGbDPl
4
http://manikvarma.org/downloads/XC/XMLRepository.html
Hyperparameters. We use the same embedding dimensionality, preserve the same number of nearest neighbors for learning embeddings as well as at prediction
time, and the same number of data partitions used in
S LEEC (Bhatia et al. 2015) for our method E X MLDS1and
E X MLDS2. For small datasets, we fix negative sample size
to 15 and number of iterations to 35 during neural network
training, tuned based on a separate validation set. For large
datasets (4 and 5 in Table 1), we fix negative sample size
to 2 and number of iterations to 5, tuned on a validation set.
In E X MLDS3, the parameters (negative sampling) are set
identical to E X MLDS1. For baselines, we either report results from the respective publications or used the best hyperparameters reported by the authors in our experiments, as
needed.
Performance evaluation. The performance of the compared methods are reported in Table 3. Performances of the
proposed methods E X MLDS1 and E X MLDS2 are found
to be similar in our experiments, as they optimize the same
objective 3; so we include only the results of E X MLDS1 in
the Table. We see that the proposed methods achieve competitive prediction performance among the state-of-the-art
embedding and tree-based approaches. In particular, note
that on Medialmill and Delicious-200K datasets our method
achieves the best performance.
Training time. Objective 3 can be trained using a neural
network, as described in (Mikolov et al. 2013). For training
the neural network model, we give as input the k-nearest
neighbor instance pairs for each training instance i, where
the neighborhood is computed in the space of the label
vectors yi . We use the Google word2vec code5 for training. We parallelize the training on 40 cores Linux machine
for speed-up. Recall that we call this method E X MLDS2.
We compare the training time with our method E X MLDS1,
which uses a fast matrix factorization approach for learning
embeddings. Algorithm 2 involves a single SVD as opposed
to iterative SVP used by S LEEC and therefore it is significantly faster. We present training time measurements in Table 2. As anticipated, we observe that E X MLDS2 which
uses neural networks is slower than E X MLDS1 (with 40
cores). Also, among the smaller datasets, E X MLDS1 trains
14x faster compared to S LEECon Bibtex dataset. In the large
dataset, Delicious-200K, E X MLDS1 trains 5x faster than
S LEEC.
Coping with missing labels. In many real-world scenarios, data is plagued with lots of missing labels. A desirable
property of multi-label learning methods is to cope with
missing labels, and yield good prediction performance with
very few training labels. In the dearth of training labels, auxiliary information such as label correlations can come in
handy. As described in Section , our method E X MLDS3 can
learn from additional information. The benchmark datasets,
however, do not come with auxiliary information. To simulate this setting, we hide 80% non-zero entries of the training label matrix, and reveal the 20% training labels to learning algorithms. As a proxy for label correlations matrix C,
we simply use the label-label co-occurrence from the 100%
5 https://code.google.com/archive/p/word2vec/
Table 1: Dataset statistics

Dataset                                                   Features   Labels    Train     Test
Bibtex (Katakis, Tsoumakas, and Vlahavas 2008)                1836      159     4880     2515
Delicious (Tsoumakas, Katakis, and Vlahavas 2008)              500      983    12920     3185
EURLex-4K (Loza Mencía and Fürnkranz 2008)                    5000     3993    15539     3809
rcv1v2 (Lewis et al. 2004)                                   47236      101     3000     3000
Delicious-200K (Tsoumakas, Katakis, and Vlahavas 2008)      782585   205443   196606   100095
MediaMill (Snoek et al. 2006)                                  120      101    30993    12914
Table 2: Comparing training times (in seconds) of different methods

Method     Bibtex   Delicious   Eurlex   Mediamill   Delicious-200K
ExMLDS1        23         259    580.9        1200             1937
ExMLDS2    143.19      781.94   880.64       12000            13000
SLEEC         313        1351     4660        8912            10000
Table 3: Comparing prediction performance of different methods (− means the result is unavailable). Note that although SLEEC performs slightly better, our model is much faster, as shown in Table 2. Also note that the performance of our model when a significant fraction of labels is missing (Table 4) is considerably better than that of SLEEC.
Dataset
Bibtex
Delicious
Eurlex
Mediamill
Delicious-200K
Prec@k
P@1
P@3
P@5
P@1
P@3
P@5
P@1
P@3
P@5
P@1
P@3
P@5
P@1
P@3
P@5
ExMLDS1
63.38
38.00
27.64
67.94
61.35
56.3
77.55
64.18
52.51
87.49
72.62
58.46
46.07
41.15
38.57
Embedding Based
DXML
SLEEC
LEML
63.69
65.29
62.54
37.63
39.60
38.41
27.71
28.63
28.21
67.57
68.10
65.67
61.15
61.78
60.55
56.7
57.34
56.08
77.13
79.52
63.40
64.21
64.27
50.35
52.31
52.32
41.28
87.37
84.01
88.71
72.6
67.20
71.65
52.80
56.81
58.39
44.13
47.50
40.73
39.88
42.00
37.71
37.20
39.20
35.84
We give a higher weight µ1 to the term O1 during training in Algorithm 3. For prediction, we use Algorithm 4, which takes missing labels into account. We compare the performance of ExMLDS3 with SLEEC, LEML and LEML-IMC in Table 4. Note that while the SLEEC and LEML methods do not incorporate such auxiliary information, LEML-IMC does. In particular, we use spectral-embedding-based features, i.e. the SVD of Y Y^T, taking all the singular vectors corresponding to non-zero singular values as label features. It can be observed that on all three datasets, ExMLDS3 performs better by huge margins. In particular, the lift over LEML-IMC is significant, even though both methods use the same information. This demonstrates the strength of our approach.
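The sketch below illustrates the simulation protocol just described; the function name and the dense-matrix assumption are ours, and the actual experimental code may differ.

```python
import numpy as np

def simulate_missing_labels(Y, keep_fraction=0.2, seed=0):
    """Sketch (ours) of the protocol above: reveal ~20% of the non-zero entries of
    the training label matrix Y and build the correlation proxy C = Y^T Y from the
    full label matrix."""
    rng = np.random.default_rng(seed)
    rows, cols = np.nonzero(Y)
    keep = rng.random(len(rows)) < keep_fraction   # reveal ~20% of the positives
    Y_observed = np.zeros_like(Y)
    Y_observed[rows[keep], cols[keep]] = Y[rows[keep], cols[keep]]
    C = Y.T @ Y                                    # label-label co-occurrence proxy
    return Y_observed, C

# usage sketch: Y_obs, C = simulate_missing_labels(Y_train, keep_fraction=0.2)
```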
PD-Sparse
61.29
35.82
25.74
51.82
44.18
38.95
76.43
60.37
49.72
81.86
62.52
45.11
34.37
29.48
27.04
Tree Based
PFastreXML
FastXML
63.46
63.42
39.22
39.23
29.14
28.86
67.13
69.61
62.33
64.12
58.62
59.27
75.45
71.36
62.70
59.90
52.51
50.39
84.22
83.98
67.33
67.37
53.04
53.02
41.72
43.07
37.83
38.66
35.58
36.19
Others
One-vs-All
DiSMEC
62.62
39.09
28.79
65.01
58.88
53.28
79.89
82.40
66.01
68.50
53.80
57.70
83.57
65.60
48.57
45.50
38.70
35.50
Table 4: Evaluating competitive methods in the setting where 80% of the training labels are hidden

Dataset   Prec@k   ExMLDS3   SLEEC    LEML   LEML-IMC
Bibtex    P@1        48.51    30.5   35.98      41.23
          P@3        28.43    14.9   21.02      25.25
          P@5         20.7    9.81   15.50      18.56
Eurlex    P@1        60.28    51.4   26.22      39.24
          P@3        44.87   37.64   22.94      32.66
          P@5        35.31   29.62   19.02      26.54
rcv1v2    P@1        81.67    41.8   64.83      73.68
          P@3        52.82   17.48   42.56      48.56
          P@5        37.74   10.63   31.68      34.82
Joint Embedding and Regression. We extended the SGNS objective to joint training, i.e. learning the embeddings Z and the regressor V simultaneously. The gradient of objective (3) with respect to V, i.e. ∇_V O_i, is derived in detail below.

Given
K_ij = ⟨z_i, z_j⟩ = z_i^T z_j, with z_i = V x_i and V ∈ R^{d'×d},
objective (3),
max_{z_1,...,z_n} Σ_{i=1}^{n} Σ_{j∈N_k(y_i)} [ log σ(⟨z_i, z_j⟩) + (n^−/n′) Σ_{j′} log σ(−⟨z_i, z_{j′}⟩) ],   (9)
can be rewritten with respect to V and K_ij as
max_V Σ_{i=1}^{n} Σ_{j∈N_k(y_i)} [ log σ(⟨V x_i, V x_j⟩) + (n^−/n′) Σ_{j′} log σ(−⟨V x_i, V x_{j′}⟩) ],   (10)
max_V Σ_{i=1}^{n} Σ_{j∈N_k(y_i)} [ log σ(K_ij) + (n^−/n′) Σ_{j′} log σ(−K_{ij′}) ].   (11)
Rewriting for the i-th instance only, we have
O_i = Σ_{j∈N_k(y_i)} [ log σ(K_ij) + (n^−/n′) Σ_{j′} log σ(−K_{ij′}) ],   (12)
∇_V O_i = Σ_{j∈N_k(y_i)} [ σ(−K_ij) ∇_V K_ij − (n^−/n′) Σ_{j′} σ(K_{ij′}) ∇_V K_{ij′} ],   (13)
where ∇_V K_ij can be obtained through
∇_V K_ij = ∇_V(V x_i) z_j + ∇_V(V x_j) z_i = V (x_i x_j^T + x_j x_i^T) = z_i x_j^T + z_j x_i^T.   (14)
The gradient-ascent update after the t-th iteration, for the i-th instance, is V^{t+1} = V^t + η ∇_V O_i.

Sometimes cosine similarity performs better than the dot product because it is scale invariant; in that case
K_ij = ⟨ z_i/‖z_i‖ , z_j/‖z_j‖ ⟩ = z_i^T z_j / (‖z_i‖ ‖z_j‖).
Let a = z_i^T z_j, b = 1/‖z_i‖ and c = 1/‖z_j‖. Using
∇_V (1/‖z_i‖) = ∇_V (z_i^T z_i)^{−1/2} = −(1/2)(z_i^T z_i)^{−3/2} ∇_V(z_i^T z_i) = −(z_i^T z_i)^{−3/2} z_i x_i^T = −b^3 z_i x_i^T,
∇_V (1/‖z_j‖) = ∇_V (z_j^T z_j)^{−1/2} = −(1/2)(z_j^T z_j)^{−3/2} ∇_V(z_j^T z_j) = −(z_j^T z_j)^{−3/2} z_j x_j^T = −c^3 z_j x_j^T,
the gradient becomes
∇_V K_ij = −a b^3 c z_i x_i^T − a b c^3 z_j x_j^T + b c (z_i x_j^T + z_j x_i^T).

To implement joint learning, we modified the publicly available code of the state-of-the-art embedding-based extreme classification approach AnnexML (Tagami 2017a),6 replacing the DSSM7 training objective by the word2vec objective while keeping the cosine similarity, the partitioning algorithm, and the approximate nearest-neighbor prediction algorithm the same. For efficient training of rare labels, we keep the coefficient ratio of negative to positive samples at 20:1 during training. We used the same hyperparameters, i.e. embedding size 50, 15 learners per cluster, 10 nearest neighbors, 100 embedding and 100 partitioning iterations, gamma 1, label normalization enabled, and 32 threads. We obtain state-of-the-art results, i.e. similar to (and on some datasets slightly better than) DiSMEC (Babbar and Schölkopf 2017), PPDSparse (Yen et al. 2017) and AnnexML (Tagami 2017b) on all large datasets; see Table 5 for detailed results.

6 Code: https://research-lab.yahoo.co.jp/en/software/
7 https://www.microsoft.com/en-us/research/project/dssm/

Conclusions and Future Work

We proposed a novel objective for learning label embeddings for multi-label classification that leverages the word2vec embedding technique; furthermore, the proposed formulation can be optimized efficiently by SPPMI matrix factorization. Through comprehensive experiments, we showed that the proposed method is competitive with state-of-the-art multi-label learning methods in terms of prediction accuracy. We also extended the SGNS objective to joint learning of the embeddings Z and the regressor V and obtained state-of-the-art results. Finally, we proposed a novel objective that incorporates side information and is particularly effective in handling missing labels.
Table 5: Performance on multiple large datasets with Joint Learning

Metric    AmazonCat-13K   Wiki10K-31K   Delicious-200K   WikiLSHTC-325K   Wikipedia-500K   Amazon-670K
Prec@1            93.05         86.82            47.70            62.15            62.27         41.47
Prec@2            86.56         80.44            43.67            48.37            49.60         38.58
Prec@3            79.18         74.30            41.22            39.58            41.43         36.35
Prec@4            72.06         68.61            39.37            33.52            35.68         34.20
Prec@5            64.54         63.68            37.98            29.10            31.42         32.43
nDCG@1            93.05         86.82            47.70            62.15            62.27         41.47
nDCG@2            89.60         81.89            44.63            57.00            55.23         39.73
nDCG@3            87.72         77.22            42.75            55.20            52.11         38.41
nDCG@4            86.35         72.92            41.34            54.77            50.59         37.31
nDCG@5            85.92         69.13            40.27            54.84            49.88         36.46
References
[Babbar and Schölkopf 2017] Babbar, R., and Schölkopf, B.
2017. Dismec: Distributed sparse machines for extreme
multi-label classification. In Proceedings of the Tenth ACM
International Conference on Web Search and Data Mining,
WSDM ’17, 721–729. New York, NY, USA: ACM.
[Bhatia et al. 2015] Bhatia, K.; Jain, H.; Kar, P.; Varma, M.;
and Jain, P. 2015. Sparse local embeddings for extreme
multi-label classification. In Advances in Neural Information Processing Systems, 730–738.
[Boyd et al. 2011] Boyd, S.; Parikh, N.; Chu, E.; Peleato, B.;
and Eckstein, J. 2011. Distributed optimization and statistical learning via the alternating direction method of multipliers. Foundations and Trends® in Machine Learning 3(1):1–122.
[Dai, Olah, and Le 2015] Dai, A. M.; Olah, C.; and Le, Q. V.
2015. Document embedding with paragraph vectors. arXiv
preprint arXiv:1507.07998.
[Jain, Prabhu, and Varma 2016] Jain, H.; Prabhu, Y.; and
Varma, M. 2016. Extreme multi-label loss functions for
recommendation, tagging, ranking & other missing label applications. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data
Mining, 935–944. ACM.
[Katakis, Tsoumakas, and Vlahavas 2008] Katakis, I.; Tsoumakas, G.; and Vlahavas, I. 2008. Multilabel text
classification for automated tag suggestion. ECML PKDD
discovery challenge 75.
[Le and Mikolov 2014] Le, Q. V., and Mikolov, T. 2014. Distributed representations of sentences and documents. In
ICML, volume 14, 1188–1196.
[Levy and Goldberg 2014] Levy, O., and Goldberg, Y. 2014.
Neural word embedding as implicit matrix factorization. In
Advances in neural information processing systems, 2177–
2185.
[Lewis et al. 2004] Lewis, D. D.; Yang, Y.; Rose, T. G.; and
Li, F. 2004. Rcv1: A new benchmark collection for text categorization research. Journal of machine learning research
5(Apr):361–397.
[Loza Mencía and Fürnkranz 2008] Loza Mencía, E., and
Fürnkranz, J. 2008. Efficient pairwise multilabel classification for large-scale problems in the legal domain. Machine
Learning and Knowledge Discovery in Databases 50–65.
[Mikolov et al. 2013] Mikolov, T.; Sutskever, I.; Chen, K.;
Corrado, G. S.; and Dean, J. 2013. Distributed representations of words and phrases and their compositionality. In
Advances in neural information processing systems, 3111–
3119.
[Pennington, Socher, and Manning 2014] Pennington, J.; Socher, R.; and Manning, C. D. 2014. GloVe: Global
vectors for word representation. In EMNLP, volume 14,
1532–1543.
[Prabhu and Varma 2014] Prabhu, Y., and Varma, M. 2014.
Fastxml: A fast, accurate and stable tree-classifier for extreme multi-label learning. In Proceedings of the 20th ACM
SIGKDD international conference on Knowledge discovery
and data mining, 263–272. ACM.
[Snoek et al. 2006] Snoek, C. G.; Worring, M.; Van Gemert,
J. C.; Geusebroek, J.-M.; and Smeulders, A. W. 2006. The
challenge problem for automated detection of 101 semantic
concepts in multimedia. In Proceedings of the 14th ACM
international conference on Multimedia, 421–430. ACM.
[Tagami 2017a] Tagami, Y. 2017a. Annexml: Approximate
nearest neighbor search for extreme multi-label classification. In Proceedings of the 23rd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, KDD ’17, 455–464. New York, NY, USA: ACM.
[Tagami 2017b] Tagami, Y. 2017b. Learning extreme multilabel tree-classifier via nearest neighbor graph partitioning.
In Proceedings of the 26th International Conference on
World Wide Web Companion, WWW ’17 Companion, 845–
846. Republic and Canton of Geneva, Switzerland: International World Wide Web Conferences Steering Committee.
[Tsoumakas, Katakis, and Vlahavas 2008] Tsoumakas, G.;
Katakis, I.; and Vlahavas, I. 2008. Effective and efficient
multilabel classification in domains with large number of
labels. In Proc. ECML/PKDD 2008 Workshop on Mining
Multidimensional Data (MMD’08), 30–44.
[Weston, Bengio, and Usunier 2010] Weston, J.; Bengio, S.;
and Usunier, N. 2010. Large scale image annotation: learning to rank with joint word-image embeddings. Machine
learning 81(1):21–35.
[Yen et al. 2016] Yen, I. E.-H.; Huang, X.; Ravikumar, P.;
Zhong, K.; and Dhillon, I. 2016. Pd-sparse: A primal
and dual sparse approach to extreme multiclass and multilabel classification. In International Conference on Machine
Learning, 3069–3077.
[Yen et al. 2017] Yen, I. E.; Huang, X.; Dai, W.; Ravikumar,
P.; Dhillon, I.; and Xing, E. 2017. Ppdsparse: A parallel
primal-dual sparse method for extreme classification. In Proceedings of the 23rd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, KDD ’17,
545–553. New York, NY, USA: ACM.
[Yu et al. 2014] Yu, H.-F.; Jain, P.; Kar, P.; and Dhillon, I.
2014. Large-scale multi-label learning with missing labels. In International Conference on Machine Learning,
593–601.
[Zhang et al. 2017] Zhang, W.; Wang, L.; Yan, J.; Wang, X.;
and Zha, H. 2017. Deep extreme multi-label learning. arXiv
preprint arXiv:1704.03718.
Leveraging Distributional Semantics for
Multi-Label Learning
A2. SGNS Objective as Implicit SPPMI
factorization
The SGNS (Mikolov et al. 2013) objective for instance i is:

O_i = Σ_{j∈S_i} [ log σ(K_ij) + M · E_{k∼P_D}[ log σ(−K_ik) ] ],

where P_D(k) = (#k)^{0.75} / #D, D is the collection of all word-context pairs, and K_ij represents the dot-product similarity between the embeddings of a given word (i) and context (j). Here, #k represents the total number of word-context pairs with context (k).

Expanding the expectation,

E_{k∼P_D}[ log σ(−K_ik) ] = Σ_k ((#k)^{0.75}/#D) log σ(−K_ik)
  = ((#j)^{0.75}/#D) log σ(−K_ij) + Σ_{k≠j} ((#k)^{0.75}/#D) log σ(−K_ik),

so that E_{j∼P_D}[ log σ(−K_ij) ] = ((#j)^{0.75}/#D) log σ(−K_ij). Therefore, the part of the objective that involves a specific pair (i, j) is

O_{i,j} = log σ(K_ij) + (M/|S|) ((#j)^{0.75}/#D) log σ(−K_ij).

Let x = K_ij; then

∇_x O_{i,j} = σ(−x) − (M/|S|) ((#j)^{0.75}/#D) σ(x).

Equating ∇_x O_{i,j} to 0, we get

e^{2x} − ( (|S| #D) / (M (#j)^{0.75}) − 1 ) e^x − (|S| #D) / (M (#j)^{0.75}) = 0.

If we define y = e^x, this equation becomes a quadratic equation in y, which has two solutions, y = −1 (which is invalid given the definition of y) and

y = (#D · |S|) / (M · (#j)^{0.75}).

Substituting y with e^x and x with K_ij reveals

K_ij = log( (#D · |S|) / (M · (#j)^{0.75}) ).

Here |S| = #(i, j) and M = µ · #(i), i.e. µ is the proportion of the total number of times label vector (i) appears with others. Hence

K_ij = log( (#(i, j) · #D) / (#(i) · (#j)^{0.75}) ) − log(µ)
     = log( P(i, j) / (P(i) P(j)) ) − log(µ),

where P(i, j), P(i) and P(j) represent the probability of co-occurrence of {i, j}, of occurrence of i, and of occurrence of j, respectively. Therefore,

K_ij = PMI_ij − log(µ).

Note that the PMI matrix is inconsistent (it is undefined for pairs that never co-occur); therefore we used the sparse and consistent positive PMI (PPMI) metric, in which all negative and undefined (NaN) values are replaced by 0:

PPMI_ij = max(PMI_ij, 0).

Here, PMI is pointwise mutual information and PPMI is positive pointwise mutual information. The similarity of two labels {i, j} is influenced more by the positive neighbors they share than by the negative neighbors they share, which are uninformative (value 0). Hence, the SGNS objective can be cast as a weighted matrix-factorization problem, seeking the optimal lower d-dimensional factorization of the SPPMI matrix under a metric which pays more for deviations on frequent #(i, j) pairs than for deviations on infrequent ones.

Using a similar derivation, it can be shown that noise-contrastive estimation (NCE), which is an alternative to SGNS, can be cast as factorization of a (shifted) log-conditional-probability matrix:

K_ij = log( #(i, j) / #j ) − log(µ) = log( P(i | j) ) − log(µ).
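As a quick numerical sanity check of this closed form (our own sketch; the weight value is arbitrary and does not come from the paper), the maximizer of the pairwise objective indeed equals the negative log of the shift:

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Sanity check (ours) of the derivation above: for a fixed pair (i, j) the term
#   f(x) = log(sigmoid(x)) + w * log(sigmoid(-x)),  w = (M/|S|) * (#j)^0.75 / #D,
# is maximized at x* = -log(w), which is exactly the shifted PMI value K_ij.
def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

w = 0.37                                             # arbitrary positive weight
neg_f = lambda x: -(np.log(sigmoid(x)) + w * np.log(sigmoid(-x)))
x_star = minimize_scalar(neg_f).x
print(round(x_star, 4), round(-np.log(w), 4))        # both ~ 0.9943
```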
arXiv:1704.02043v2 [math.AG] 9 Nov 2017
COMMENSURATING ACTIONS OF BIRATIONAL GROUPS AND
GROUPS OF PSEUDO-AUTOMORPHISMS
SERGE CANTAT AND YVES DE CORNULIER
ABSTRACT. Pseudo-automorphisms are birational transformations acting as regular automorphisms in codimension 1. We import ideas from geometric group
theory to study groups of birational transformations, and prove that a group of birational transformations that satisfies a fixed point property on CAT(0) cubical complexes is birationally conjugate to a group acting by pseudo-automorphisms on
some non-empty Zariski-open subset. We apply this argument to classify groups
of birational transformations of surfaces with this fixed point property up to birational conjugacy.
1. INTRODUCTION
1.1. Birational transformations and pseudo-automorphisms. Let X be a quasiprojective variety, over an algebraically closed field k. Denote by Bir(X) the group
of birational transformations of X and by Aut(X) the subgroup of (regular) automorphisms of X. For the affine space of dimension n, automorphisms are invertible
transformations f : Ank → Ank such that both f and f −1 are defined by polynomial
formulas in affine coordinates:
f (x1 , . . . , xn ) = (f1 , . . . , fn ), f −1 (x1 , . . . , xn ) = (g1 , . . . , gn )
with fi , gi ∈ k[x1 , . . . , xn ]. Similarly, birational transformations of Ank are given by
rational formulas, i.e. fi , gi ∈ k(x1 , . . . , xn ).
Birational transformations may contract hypersurfaces. Roughly speaking, pseudo-automorphisms are birational transformations that act as automorphisms in
codimension 1. Precisely, a birational transformation f : X 99K X is a pseudoautomorphism if there exist Zariski-open subsets U and V in X such that X r U
and X r V have codimension ≥ 2 and f induces an isomorphism from U to V.
The pseudo-automorphisms of X form a group, which we denote by Psaut(X).
For instance, all birational transformations of Calabi-Yau manifolds are pseudoautomorphisms; and there are examples of such manifolds for which Psaut(X) is
Date: November 9, 2017.
2010 Mathematics Subject Classification. Primary 14E07, Secondary 14J50, 20F65.
1
infinite while Aut(X) is trivial (see [10]). Pseudo-automorphisms are studied in
Section 2.
Definition 1.1. Let Γ ⊂ Bir(X) be a group of birational transformations of an
irreducible projective variety X. We say that Γ is pseudo-regularizable if there
exists a triple (Y, U, ϕ) where
(1) Y is a projective variety and ϕ : Y 99K X is a birational map;
(2) U is a dense Zariski open subset of Y ;
(3) ϕ−1 ◦ Γ ◦ ϕ yields an action of Γ by pseudo-automorphisms on U.
More generally if α : Γ → Bir(X) is a homomorphism, we say that it is pseudoregularizable if α(Γ) is pseudo-regularizable.
One goal of this article is to use rigidity properties of commensurating actions,
a purely group-theoretic concept, to show that many group actions are pseudoregularizable. In particular, we exhibit a class of groups for which all actions by
birational transformations on projective varieties are pseudo-regularizable.
1.2. Property (FW). The class of groups we shall be mainly interested in is characterized by a fixed point property appearing in several related situations, for instance for actions on CAT(0) cubical complexes. Here, we adopt the viewpoint of
commensurated subsets. Let Γ be a group, and Γ × S → S an action of Γ on a set
S. Let A be a subset of S. One says that Γ commensurates A if the symmetric
difference
γ(A)4A = (γ(A) r A) ∪ (A r γ(A))
is finite for every element γ of Γ. One says that Γ transfixes A if there is a subset
B of S such that A4B is finite and B is Γ-invariant: γ(B) = B, for every γ in Γ.
A group Γ has Property (FW) if, given any action of Γ on any set S, all commensurated subsets of S are automatically transfixed. For instance, SL2(Z[√5]) and SL3(Z) have Property (FW), but non-trivial free groups do not share this property.
Property (FW) is discussed in Section 3.
Let us mention that among various characterizations of Property (FW) (see [11]),
one is: every combinatorial action of Γ on a CAT(0) cube complex fixes some cube.
Another, for Γ finitely generated, is that all its infinite Schreier graphs are oneended.
1.3. Pseudo-regularizations. Let X be a projective variety. The group Bir(X)
does not really act on X, because there are indeterminacy points; it does not act
on the set of hypersurfaces either, because some of them may be contracted. As
we shall explain, one can introduce the set H̃yp(X) of all irreducible and reduced hypersurfaces in all birational models X ′ 99K X (up to a natural identification).
Then there is a natural action of the group Bir(X) on this set, given by strict transforms. The rigorous construction of this action follows from a general categorical
framework, which is developed in Section 4. Moreover, this action commensurates
the subset Hyp(X) of hypersurfaces of X. This construction leads to the following
result.
Theorem A. Let X be a projective variety over an algebraically closed field. Let Γ
be a subgroup of Bir(X). If Γ has Property (FW), then Γ is pseudo-regularizable.
There is also a relative version of Property (FW) for pairs of groups Λ ≤ Γ,
which leads to a similar pseudo-regularization theorem for the subgroup Λ: this is
discussed in Section 6.4, with applications to distorted birational transformations.
Remark 1.2. Theorem A provides a triple (Y, U, ϕ) such that ϕ conjugates Γ to
a group of pseudo-automorphisms on the open subset U ⊂ Y . There are two extreme cases for the pair (Y, U) depending on the size of the boundary Y r U. If
this boundary is empty, Γ acts by pseudo-automorphisms on a projective variety Y .
If the boundary is ample, its complement U is an affine variety, and then Γ actually acts by regular automorphisms on U (see Section 2.4). Thus, in the study of
groups of birational transformations, pseudo-automorphisms of projective varieties
and regular automorphisms of affine varieties deserve specific attention.
1.4. Classification in dimension 2. In dimension 2, pseudo-automorphisms do not
differ much from automorphisms; for instance, Psaut(X) coincides with Aut(X) if
X is a smooth projective surface. Thus, for groups with Property (FW), Theorem A can be used to reduce the study of birational transformations to the study
of automorphisms of quasi-projective surfaces. Combining results of Danilov and
Gizatullin on automorphisms of affine surfaces with a theorem of Farley on groups
of piecewise affine transformations of the circle, we will be able to prove the following theorem.
Theorem B. Let X be a smooth, projective, and irreducible surface, over an algebraically closed field. Let Γ be an infinite subgroup of Bir(X). If Γ has Property
(FW), there is a birational map ϕ : Y 99K X such that
(1) Y is the projective plane P2 , a Hirzebruch surface Fm with m ≥ 1, or the
product of a curve C by the projective line P1 . If the characteristic of the
field is positive, Y is the projective plane P2k .
(2) ϕ−1 ◦ Γ ◦ ϕ is contained in Aut(Y ).
Remark 1.3. The group Aut(Y ) has finitely many connected components for all
surfaces Y listed in Assertion (1) of Theorem B. Thus, changing Γ into a finite
index subgroup Γ0 , one gets a subgroup of Aut(Y )0 . Here Aut(Y )0 denotes the
connected component of the identity of Aut(Y ); this is an algebraic group, acting
algebraically on Y .
Example 1.4. Groups with Kazhdan Property (T) satisfy Property (FW). Thus, Theorem B extends Theorem A of [8] and the present article offers a new proof of that
result.
Theorem B can also be applied to the group SL2(Z[√d]), where d ≥ 2 is a non-square positive integer. Thus, every action of this group on a projective surface by
birational transformations is conjugate to an action by regular automorphisms on
P2k , the product of a curve C by the projective line P1k , or a Hirzebruch surface.
Moreover, in this case, Margulis’ superrigidity theorem can be combined with Theorem B to get a more precise result, see §10.
Remark 1.5. In general, for a variety X one can ask whether Bir(X) transfixes
Hyp(X), or equivalently is pseudo-regularizable. For a surface X, this holds precisely when X is not birationally equivalent to the product of the projective line
with a curve. See §7.1 for more precise results.
1.5. Acknowledgement. This work benefited from interesting discussions with
Jérémy Blanc, Vincent Guirardel, Vaughan Jones, Christian Urech, and Junyi Xie.
2. PSEUDO-AUTOMORPHISMS
This preliminary section introduces useful notation for birational transformations
and pseudo-automorphisms, and presents a few basic results.
2.1. Birational transformations. Let X and Y be two irreducible and reduced
algebraic varieties over an algebraically closed field k. Let f : X 99K Y be a
birational map. Choose dense Zariski open subsets U ⊂ X and V ⊂ Y such
that f induces an isomorphism fU,V : U → V . Then the graph Gf of f is defined
as the Zariski closure of {(x, fU,V (x)) : x ∈ U } in X × Y ; it does not depend on
the choice of U and V . The graph Gf is an irreducible variety; both projections
u : Gf → X and v : Gf → Y are birational morphisms and f = v ◦ u−1 .
We shall denote by Ind(f ) the indeterminacy set of the birational map f .
Theorem 2.1 (Theorem 2.17 in [23]). Let f : X 99K Y be a rational map, with X
a normal variety and Y a projective variety. Then the indeterminacy set of f has
codimension ≥ 2.
Example 2.2. The transformation of the affine plane (x, y) 7→ (x, y/x) is birational,
and its indeterminacy locus is the line {x = 0}: this set of co-dimension 1 is
mapped “to infinity”. If the affine plane is compactified by the projective plane,
the transformation becomes [x : y : z] 7→ [x2 : yz : xz], with two indeterminacy
points.
Assume that X is normal; in particular, it is smooth in codimension 1. The jacobian determinant Jac(f )(x) is defined in local coordinates, on the smooth locus
of X, as the determinant of the differential dfx ; Jac(f ) depends on the coordinates,
but its zero locus does not. The zeroes of Jac(f ) form a hypersurface of the smooth
part of X; the zero locus of Jac(f ) will be defined as the Zariski closure of this
hypersurface in X. The exceptional set of f is the subset of X along which f is
not a local isomorphism onto its image; by a corollary of Zariski’s main theorem,
it coincides with the union of Ind(f ), the zero locus of Jac(f ), and additional parts
which are contained in the singular locus of X and have therefore codimension ≥ 2.
Its complement is the largest open subset on which f is a local isomorphism (see
[33, 36], for instance).
The total transform of a subset Z ⊂ X is denoted by f∗ (Z). If Z is not contained in Ind(f ), we denote by f◦ (Z) its strict transform, defined as the Zariski
closure of f (Z r Ind(f )). We say that a hypersurface W ⊂ Z is contracted if it is
not contained in the indeterminacy set and the codimension of its strict transform is
larger than 1.
2.2. Pseudo-isomorphisms. A birational map f : X 99K Y is a pseudo-isomorphism if one can find Zariski open subsets U ⊂ X and V ⊂ Y such that
(i) f realizes a regular isomorphism from U to V and
(ii) X r U and Y r V have codimension ≥ 2.
Pseudo-isomorphisms from X to itself are called pseudo-automorphisms (see
§ 1.2). The set of pseudo-automorphisms of X is a subgroup Psaut(X) of Bir(X).
Example 2.3. Start with the standard birational involution σn : P^n_k 99K P^n_k which is defined in homogeneous coordinates by σn [x0 : . . . : xn ] = [x0^{−1} : . . . : xn^{−1} ]. Blow up the (n + 1) vertices of the simplex ∆n = {[x0 : . . . : xn ]; ∏ xi = 0}; this provides a smooth rational variety Xn together with a birational morphism
π : Xn → Pnk . Then, π −1 ◦ σn ◦ π is a pseudo-automorphism of Xn , and is an
automorphism if n ≤ 2.
Proposition 2.4. Let f : X 99K Y be a birational map between two (irreducible,
reduced) normal algebraic varieties. Assume that the codimension of the indeterminacy sets of f and f −1 is at least 2. Then, the following properties are equivalent:
(1) The birational maps f and f −1 do not contract any hypersurface.
(2) The jacobian determinants of f and f −1 do not vanish on the regular loci
of X r Ind(f ) and Y r Ind(f −1 ) respectively.
(3) For every smooth point q ∈ X r Ind(f ), f is a local isomorphism from a
neighborhood of q to a neighborhood of f (q), and the same holds for f −1 .
(4) The birational map f is a pseudo-isomorphism from X to Y .
Proof. Denote by g be the inverse of f . If the Jacobian determinant of f vanishes at
some (smooth) point of X r Ind(f ), then it vanishes along a hypersurface V ⊂ X.
If (1) is satisfied, the image of V is a hypersurface W in Y , and we can find a point
p ∈ V r Ind(f ) such that f (p) is not an indeterminacy point of g. Since the product
of the jacobian determinant of f at p and of g at f (p) must be equal to 1, we get
a contradiction. Thus (1) implies (2), and (2) is equivalent to (1). Now, assume
that (2) is satisfied. Then f does not contract any positive dimensional subset of
X reg r Ind(f ): f is a quasi-finite map from X reg r Ind(f ) to its image, and so is g.
Zariski’s main theorem implies that f realizes an isomorphism from X reg r Ind(f )
to Y r Ind(g) (see [33], Prop. 8.57). Thus, (2) implies (4) and (3). By assumption,
Ind(f ) and Ind(g) have codimension ≥ 2; thus, (3) implies (2). Since (4) implies
(1), this concludes the proof.
Example 2.5. Let X be a smooth projective variety with trivial canonical bundle
KX . Let Ω be a non-vanishing section of KX , and let f be a birational transformation of X. Then, f ∗ Ω extends from X r Ind(f ) to X and determines a new section
of KX ; this section does not vanish identically because f is dominant, hence it does
not vanish at all because KX is trivial. As a consequence, Jac(f ) does not vanish, f
is a pseudo-automorphism of X, and Bir(X) = Psaut(X). We refer to [10, 16] for
families of Calabi-Yau varieties with an infinite group of pseudo-automorphisms.
2.3. Projective varieties.
Proposition 2.6 (see [5]). Let f : X 99K Y be a pseudo-isomorphism between two
normal projective varieties. Then
(1) the total transform of Ind(f ) by f is equal to Ind(f −1 );
(2) f has no isolated indeterminacy point;
(3) if dim(X) = 2, then f is a regular isomorphism.
Proof. Let p ∈ X be an indeterminacy point of the pseudo-isomorphism f : X 99K
Y . Then f −1 contracts a subset C ⊂ Y of positive dimension on p. Since f and
f −1 are local isomorphisms on the complement of their indeterminacy sets, C is
contained in Ind(f −1 ). The total transform of a point q ∈ C by f −1 is a connected
subset of X that contains p and has dimension ≥ 1. This set Dq is contained in
Ind(f ) because f is a local isomorphism on the complement of Ind(f ); since p ∈
Dq ⊂ Ind(f ), p is not an isolated indeterminacy point. This proves Assertions (1)
and (2). The third assertion follows from the second one because indeterminacy
sets of birational transformations of projective surfaces are finite sets.
Let W be a hypersurface of X, and let f : X 99K Y be a pseudo-isomorphism.
The divisorial part of the total transform f∗ (W ) coincides with the strict transform
f◦ (W ). Indeed, f∗ (W ) and f◦ (W ) coincide on the open subset of Y on which f −1
is a local isomorphism, and this open subset has codimension ≥ 2.
Recall that the Néron-Severi group NS(X) is the free abelian group of codimension 1 cycles modulo cycles which are numerically equivalent to 0. Its rank is finite
and is called the Picard number of X.
Theorem 2.7. The action of pseudo-isomorphisms on Néron-Severi groups is functorial: (g ◦ f )∗ = g∗ ◦ f∗ for all pairs of pseudo-isomorphisms f : X 99K Y and
g : Y 99K Z. If X is a normal projective variety, the group Psaut(X) acts linearly
on the Néron-Severi group NS(X); this provides a morphism
Psaut(X) → GL (NS(X)).
The kernel of this morphism is contained in Aut(X) and contains Aut(X)0 as a
finite index subgroup.
As a consequence, if X is projective the group Psaut(X) is an extension of a
discrete linear subgroup of GL (NS(X)) by an algebraic group.
Proof. The first statement follows from the equality f∗ = f◦ on divisors. The
second follows from the first. To study the kernel K of the linear representation
Psaut(X) → GL (NS(X)), fix an embedding ϕ : X → P^m_k and denote by H the polarization given by hyperplane sections in P^m_k . For every f in K, f∗ (H) is an
ample divisor, because its class in NS(X) coincides with the class of H. Now, a
theorem of Matsusaka and Mumford implies that f is an automorphism of X (see
[25] exercise 5.6, and [32]). To conclude, note that Aut(X)0 has finite index in the
kernel of the action of Aut(X) on NS(X) (see [31, 27]).
2.4. Affine varieties. The group Psaut(Ank ) coincides with the group Aut(Ank ) of
polynomial automorphisms of the affine space Ank : this is a special case of the
following proposition.
Proposition 2.8. Let Z be an affine variety. If Z is factorial, the group Psaut(Z)
coincides with the group Aut(Z).
Proof. Fix an embedding Z → A^m_k . Rational functions on Z are restrictions of rational functions on A^m_k . Thus, every birational transformation f : Z → Z is given by rational formulas f (x1 , . . . , xm ) = (f1 , . . . , fm ) where each fi is a rational function
fi = pi / qi ∈ k(x1 , . . . , xm );
here, pi and qi are relatively prime polynomial functions. Since the local rings OZ,x are unique factorization domains, we may assume that the hypersurfaces WZ (pi ) = {x ∈ Z; pi (x) = 0} and WZ (qi ) = {x ∈ Z; qi (x) = 0} have no common components. Then, the generic point of WZ (qi ) is mapped to infinity by f . Since f is a pseudo-isomorphism, WZ (qi ) is in fact empty; and since no qi vanishes on Z, f is a regular map.
3. GROUPS WITH PROPERTY (FW)
3.1. Commensurated subsets and cardinal definite length functions (see [11]).
Let G be a group, and G × S → S an action of G on a set S. Let A be a subset
of S. As in the Introduction, one says that G commensurates A if the symmetric
difference A4gA is finite for every element g ∈ G. One says that G transfixes A if
there is a subset B of S such that A4B is finite and B is G-invariant: gB = B for
every g in G. If A is transfixed, then it is commensurated. Actually, A is transfixed
if and only if the function g 7→ #(A4gA) is bounded on G.
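The following worked example is ours (it does not appear in the text) and only illustrates the definitions just given.

```latex
\paragraph{Illustration (ours).} Let $G=\mathbf{Z}$ act on $S=\mathbf{Z}$ by translations and let $A=\mathbf{N}$.
For every $n>0$,
\[
  A \,\triangle\, (n+A) \;=\; \{0,1,\dots,n-1\},
  \qquad \#\bigl(A \,\triangle\, (n+A)\bigr) \;=\; |n|,
\]
so $A$ is commensurated but not transfixed; hence the infinite cyclic group $\mathbf{Z}$ does not have Property~(FW).
```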
A group G has Property (FW) if, given any action of G on a set S, all commensurated subsets of S are automatically transfixed. More generally, if H is a
subgroup of G, then (G, H) has relative Property (FW) if every commensurating action of G is transfixing in restriction to H. This means that, if G acts on a
set S and commensurates a subset A, then H transfixes automatically A. The case
H = G is Property (FW) for G.
We refer to [11] for a detailed study of Property (FW). The next paragraphs
present the two main sources of examples for groups with Property (FW) or its
relative version, namely Property (T) and distorted subgroups.
Remark 3.1. Property (FW) should be thought of as a rigidity property. To illustrate
this idea, consider a group K with Property (PW); by definition, this means that K
admits a commensurating action on a set S, with a commensurating subset C such
that the function g 7→ #(C4gC) has finite fibers. If G is a group with Property
(FW), then, every homomorphism G → K has finite image.
3.2. Property (FW) and Property (T). One can rephrase Property (FW) as follows: G has Property (FW) if and only if every isometric action on an “integral
Hilbert space” `2 (X, Z) has bounded orbits, where X is any discrete set.
A group has Property (FH) if all its isometric actions on Hilbert spaces have
fixed points. More generally, a pair (G, H) of a group G and a subgroup H ⊂ G
has relative Property (FH) if every isometric G-action on a Hilbert space has an
H-fixed point. Thus, the relative Property (FH) implies the relative Property (FW).
By a theorem of Delorme and Guichardet, Property (FH) is equivalent to Kazhdan’s Property (T) for countable groups (see [13]). Thus, Property (T) implies Property (FW).
Kazhdan’s Property (T) is satisfied by lattices in semisimple Lie groups all of
whose simple factors have Property (T), for instance if all simple factors have real
rank ≥ 2. For example, SL 3 (Z) satisfies Property (T).
Property (FW) is actually conjectured to hold for all irreducible lattices in semisimple Lie groups of real rank ≥ 2, such as SL 2 (R)k for k ≥ 2. (here, irreducible
means that the projection of the lattice modulo every simple factor is dense.) This
is known in the case of a semisimple Lie group admitting at least one noncompact
simple factor with Kazhdan’s Property (T), for instance in SO (2, 3) × SO (1, 4),
which admits irreducible lattices (see [12]).
3.3. Distortion. Let G be a group. An element g of G is distorted in G if there
exists a finite subset Σ of G generating a subgroup hΣi containing g, such that
limn→∞ n1 |g n |Σ = 0; here, |g|Σ is the length of g with respect to the set Σ. If G is
finitely generated, this condition holds for some Σ if and only if it holds for every
finite generating subset of G. For example, every finite order element is distorted.
Example 3.2. Let K be a field. The distorted elements of SL n (K) are exactly the
virtually unipotent elements, that is, those elements whose eigenvalues are all roots
of unity; in positive characteristic, these are elements of finite order. By results of
Lubotzky, Mozes, and Raghunathan (see [29, 28]), the same characterization holds in the group SLn(Z), as soon as n ≥ 3; it also holds in SLn(Z[√d]) when n ≥ 2
and d ≥ 2 is not a perfect square. In contrast, in SL 2 (Z), every element of infinite
order is undistorted.
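As a complementary illustration (our own example, not taken from the text), distortion already occurs in the discrete Heisenberg group:

```latex
\paragraph{Illustration (ours).} In the discrete Heisenberg group
$H = \langle a, b, c \mid [a,b]=c,\ [a,c]=[b,c]=1 \rangle$,
the identity $[a^{n}, b^{n}] = c^{\,n^{2}}$ gives
\[
  |c^{\,n^{2}}|_{\{a,b\}} \;\le\; 4n, \qquad\text{hence}\qquad
  \lim_{m\to\infty} \tfrac{1}{m}\,|c^{m}|_{\{a,b\}} \;=\; 0,
\]
so the central element $c$, which has infinite order, is distorted in $H$.
```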
Lemma 3.3 (see [11]). Let G be a group, and H a finitely generated abelian subgroup of G consisting of distorted elements. Then, the pair (G, H) has relative
Property (FW).
This lemma provides many examples. For instance, if G is any finitely generated
nilpotent group and G0 is its derived subgroup, then (G, G0 ) has relative Property
(FH); this result is due to Houghton, in a more general formulation encompassing
polycyclic groups (see [11]). Bounded generation by distorted unipotent elements
can also be used to obtain nontrivial examples of groups with Property (FW), including the above examples SLn(Z) for n ≥ 3, and SLn(Z[√d]). The case of SL2(Z[√d]) is particularly interesting because it does not have Property (T).
3.4. Subgroups of PGL 2 (k) with Property (FW). If a group G acts on a tree T
by graph automorphisms, then G acts on the set E of directed edges of T (T is
non-oriented, so each edge gives rise to a pair of opposite directed edges). Let Ev
be the set of directed edges pointing towards a vertex v. Then Ev 4Ew is the set
of directed edges lying in the segment between v and w; it is finite of cardinality
2d(v, w), where d is the graph distance. The group G commensurates the subset Ev
for every v, and #(Ev 4gEv ) = 2d(v, gv). As a consequence, if G has Property
(FW), then it has Property (FA) in the sense that every action of G on a tree has
bounded orbits. This argument can be combined with Proposition 5.B.1 of [11] to
obtain the following lemma.
Lemma 3.4 (See [11]). Let G be a group with Property (FW), then all finite index
subgroups of G have Property (FW), and hence have Property (FA). Conversely, if
a finite index subgroup of G has Property (FW), then so does G.
On the other hand, Property (FA) is not stable by taking finite index subgroups.
Lemma 3.5. Let k be an algebraically closed field and Λ be a subgroup of GL 2 (k).
(1) Λ has a finite orbit on the projective line if and only if it is virtually solvable,
if and only if its Zariski closure does not contain SL 2 .
(2) Assume that all finite index subgroups of Λ have Property (FA) (e.g., Λ has
Property FW). If the action of Λ on the projective line preserves a nonempty, finite set, then Λ is finite.
The proof of the first assertion is standard and omitted. The second assertion
follows directly from the first one.
In what follows, we denote by Z̄ ⊂ Q̄ the ring of algebraic integers (in some fixed algebraic closure Q̄ of Q).
Theorem 3.6 (Bass [2]). Let k be an algebraically closed field.
(1) If k has positive characteristic, then GL 2 (k) has no infinite subgroup with
Property (FA).
(2) Suppose that k has characteristic zero and that Γ ⊂ GL 2 (k) is a countable
subgroup with Property (FA), and is not virtually abelian. Then Γ acts irreducibly on k2 , and is conjugate to a subgroup of GL 2 (Z̄). If moreover
Γ ⊂ GL 2 (K) for some subfield K ⊂ k containing Q, then we can choose
the conjugating matrix to belong to GL 2 (K).
On the proof. The original statement [2, Theorem 6.5] yields this statement, except
the last fact, and assumes that Γ is contained in GL 2 (M ) with M a finitely generated
field. The latter condition is actually automatic: indeed, being a countable group
with Property (FA), Γ is finitely generated [35, §6, Th. 15], and one can choose K
to be the field generated by entries of a finite generating subset.
For the last assertion, we have Γ ∪ BΓB −1 ⊂ GL 2 (K) for some B ∈ GL 2 (k)
such that BΓB −1 ⊂ GL 2 (Z̄); we claim that this implies that B ∈ k∗ GL 2 (K). First,
since Γ is absolutely irreducible, this implies that BM2 (K)B −1 ⊂ M2 (K). The
conclusion follows from Lemma 3.7 below, which can be of independent interest.
Lemma 3.7. Let K ⊂ L be fields. Then the normalizer {B ∈ GL 2 (L) : BM2 (K)B −1 ⊂
M2 (K)} is reduced to L∗ GL 2 (K) = {λA : λ ∈ L∗ , A ∈ GL 2 (K)}.
Proof. Write B = ( b1 b2 ; b3 b4 ). Since BAB −1 ∈ M2 (K) for the three elementary matrices A ∈ {E11 , E12 , E21 }, we deduce by a plain computation that bi bj /bk bℓ ∈ K for all 1 ≤ i, j, k, ℓ ≤ 4 such that bk bℓ ≠ 0. In particular, for all indices i and j such that bi and bj are nonzero, the quotient bi /bj = bi bj /bj^2 belongs to K. It follows that B ∈ L∗ GL 2 (K).
Corollary 3.8. Let k be an algebraically closed field. Let C be a projective curve
over k, and let k(C) be the field of rational functions on the curve C. Let Γ be an
infinite subgroup of PGL 2 (k(C)). If Γ has Property (FA), then
(1) the field k has characteristic 0;
(2) there is an element of PGL 2 (k(C)) that conjugates Γ to a subgroup of
PGL 2 (Z̄) ⊂ PGL 2 (k(C)).
4. A CATEGORICAL LIMIT CONSTRUCTION
The purpose of this section is to describe a general categorical construction,
which can be used to construct various actions of groups of birational transformations, such as Manin’s construction of the Picard-Manin space (see [30, 8]), as
well as the commensurating action which is the main construction of this paper. A
closely related construction is performed by V. Jones in [24] to construct representations of Thompson’s groups, although it does not directly apply here.
4.1. Categories of projective varieties. Here, in a category C, arrows between any
two objects X and Y are assumed to form a set HomC (X, Y ). Recall that a category
is small if its objects form a set, and is essentially small if it is equivalent to a small
category, or equivalently if there is a subset of the collection of objects meeting
every isomorphism class. A subcategory C of a category D is full if all arrows of D
between objects of C are also arrows of C.
Example 4.1. Our main example will be the following. Fix an algebraically closed
field k. Let V = Vk be the category whose objects are irreducible (reduced) projective k-varieties and whose arrows are birational morphisms. Let V ] be the category
with the same objects, but whose arrows are birational maps. Similarly, one can
consider the category VN of irreducible (reduced) normal projective varieties, with
arrows given by birational morphisms, and the category VN ] with the same objects
but whose arrows are all birational maps. By construction, VN is a full subcategory
of V, which is a subcategory of V ] .
4.2. Relative thinness and well-cofiltered categories. Given a category C and an
object X ∈ Ob(C), let us define the category CX whose objects are pairs (Y, f ) with
Y ∈ Ob(C) and f ∈ HomC (Y, X), and whose arrows (Y, f ) → (Z, g) are given by
arrows u ∈ HomC (Y, Z) such that g ◦ u = f . A category is thin if there is at most
one arrow between any two objects. Let us say that a category is relatively thin if
the category CX is thin for all X ∈ Ob(X).
Example 4.2. A category in which every arrow is invertible is relatively thin, and
so are all its subcategories. This applies to the categories of Example 4.1: the
category Vk] of birational maps between irreducible projective varieties, and to its
subcategory Vk , and similarly to VN ]k and its subcategory VN k .
Recall that a category is cofiltered if it satisfies the following two properties (a)
and (b):
(a) for any pair of objects X1 , X2 , there exists an object Y with arrows X1 ←
Y → X2 ;
(b) for any pair of objects X, Y and arrows u1 , u2 : X → Y , there exists an
object W and an arrow w : W → X such that u1 ◦ w = u2 ◦ w.
Note that (b) is automatically satisfied when the category is thin. We say that a
category C is well-cofiltered if it is relatively thin and for every object X ∈ Ob(C),
the category CX is cofiltered (note that we do not require C to be cofiltered).
Example 4.3. Coming again to the categories of Example 4.1, the category Vk is
essentially small and well-cofiltered. It is relatively thin, as mentioned in Example
4.2. To show that (Vk )X is cofiltered, consider two birational morphisms f1 : X1 →
X and f2 : X2 → X, and denote by h the composition f2−1 ◦ f1 . The graph Gh
is a projective subvariety of X1 × X2 . One can compose the projection of Gh
onto X1 with f1 (resp. onto X2 with f2 ) to get a birational morphism Gh → X;
this birational morphism is an object in (Vk )X that dominates f1 : X1 → X and
f2 : X2 → X, as in property (a).
The full subcategory VN k of Vk enjoys the same properties. When k has characteristic zero, the resolution of indeterminacies implies that its full subcategory of
non-singular varieties (and birational morphisms) is also well-cofiltered.
4.3. Filtering inductive limits.
4.3.1. We shall say that a category E admits filtering inductive limits if for every
small, thin and cofiltered category D and every contravariant functor F : D →
E, the colimit of F exists (and then it also exists when “small” is replaced with
“essentially small”). For example, the category of sets and the category of groups
admit filtering inductive limits (see [36], § 1.4, for colimits).
4.3.2. Let us consider an essentially small category C, a category E admitting
filtering inductive limits, and a contravariant functor F : C → E; we denote the
functor F by X 7→ FX on objects and u 7→ Fu on arrows. Assume that C is wellcofiltered. Then, for every object X ∈ Ob(C), we can restrict the functor F to CX
and take the colimit F̃X of this restriction F : CX → E. Roughly speaking, F̃X is
the inductive limit in E of all FY for Y ∈ CX . So, for every arrow u : Y → X in C,
there is an arrow in E, φu : FY → F̃X in E; and for every arrow v : Z → Y in CX ,
the following relation holds:
φu◦v ◦ Fv = φu .
The colimits F̃X satisfy a universal property. To describe it, consider an object
E ∈ Ob(E), together with arrows ψY : FY → E for all Y ∈ CX , and assume that
for every arrow v : Z → Y in CX we have the relation ψZ ◦ Fv = ψY . Then, there
exists a unique arrow ψ : F̃X → E in E such that for every (Y, u) ∈ Ob(CX ) the
following relation holds:
ψ ◦ φu = ψY .
This construction provides a bijection ΦX from the inductive limit lim←−Y ∈CX HomE (FY , E) to Hom(F̃X , E), whose reciprocal bijection maps an element ψ ∈ Hom(F̃X , E) to the family of arrows (ψ ◦ φu )(Y,u)∈CX .
4.3.3. We can now define the covariant functor α associated to F . At the level
of objects, α maps X ∈ Ob(C) to the limit F̃X . Let us now describe α at the
level of arrows. If we fix (Y, u) ∈ Ob(CX ), the family of arrows (φu◦v : FZ →
F̃X )(Z,v)∈Ob(CY ) corresponds under ΦY to an arrow αu : F̃Y → F̃X . For every
(Z, v) ∈ Ob(CY ), the following relation holds:
αu ◦ φv = φu◦v ,    (4.1)
and this characterizes the arrow αu . If (W, w) ∈ CY , the uniqueness readily proves
that αu◦w = αu ◦ αw , that is, X 7→ F̃X , u 7→ αu is a covariant functor C → E,
denoted α, and called the relative colimit functor associated to F .
4.3.4. Note that the previous relation can essentially be rewritten in the form of the commutative square on the left of the next equation. The commutative square on the right only refers to u; it is obtained by composing the left square with the map Fu and by using the equalities φv ◦ Fv = φidY and φu ◦ Fu = φidX :
αu ◦ φv ◦ Fv = φu ,    αu ◦ φidY ◦ Fu = φidX .    (4.2)
Lemma 4.4. Suppose that C is well-cofiltered. Then α maps arrows of C to invertible arrows (i.e. isomorphisms) of E.
Proof. Fix (Y, u) ∈ CX . The proof consists in constructing a map, and then show
that it is the inverse map of αu .
Consider (S, s) ∈ Ob(CX ). By assumption, in C we can find a commutative
diagram as the one on the left of the following equation; hence, in E we obtain the
diagram on the right, where g = φx ◦ Fw by definition.
Y o
u
Xo
x
D
s
Fx
FOY
w
φx
FD
O
Fw
Fu
S,
/
Fs
FX
/
/
F̃
= Y
g
FS .
A priori g depends on the choice of (D, x, w); let us show that it only depends on
(u, s) and, for that purpose, let us denote g temporarily by g = gD (x, w being
implicit). First consider the case of a commutative diagram as the one on the left
in the next equation; in E, this diagram induces the diagram depicted on the right,
where everything not involving gD or gD0 is commutative.
Y `o
x
x0
q
D0
u
D
Fx
FOY
Fx0
~
w
!
Fu
v
Fq
/4
F̃
<D Y
φx 0
F a
S,
φx
FO
= D
Fw
D0
w0
Xo
/
Fs
FX
Fw0
/
gD ,gD0
FS ,
Thus, by definition gD0 = φx0 ◦ Fw0 = φx ◦ Fq ◦ Fw0 = φx ◦ Fw = gD . Now consider,
more generally two objects D0 and D00 and four arrows forming a diagram in CX :
YO o
D00
/
D0
S;
we have to show that gD0 = gD00 . Since C is well-cofiltered, CX is thin and cofiltered,
and we can complete the previous diagram into the one on the left of the following
equation. Since this diagram is in CX which is a thin category, it is commutative;
which means that if we complete it with both composite arrows D → Y and both
composite arrows D → S, the resulting arrows coincide; the resulting diagram, on
the right of the equation, is a commutative one.
DO 0
DO 0
Y a
}
!
D
>
Y ao
S
}
D
/
!
>
S
D00 ;
D00 ;
Using the previous case, we deduce gD0 = gD = gD00 . Thus, we have seen that
gD does not depend on the choice of D; we now write it as gu,s . In particular,
when S ∈ (CX )Y , we can choose D = S (and w the identity); we thus deduce that
gu,s = φx (where s = u ◦ x).
x
w
Consider (T, t) ∈ CS and choose D ∈ Ob(CX ) with Y ← D → T in CX . Then
we have the diagram in E
FOY
Fx
/
FD
O
φx
Fw
FO T
gu,s◦t
Ft
FX
Fs
/
!
/
F̃
> Y
gu,s
FS ,
where the left rectangle is commutative as well as the upper right triangle; since
by definition gu,s = φx ◦ Ft◦w = φx ◦ Fw ◦ Ft = gu,s◦t ◦ Ft , the lower right
triangle is also commutative. So the family (gs : FS → F̃Y )(S,s)∈Ob(CX ) defines
an element gu : F̃X → F̃Y . Namely, for every (S, s) ∈ Ob(CX ) the following
diagram commutes
F̃X
O
φs
FS ,
and gu is characterized by this property.
gu
/
F̃
> Y
gu,s
We now combine this with the map αu , and make use of the same notation as the
one in Equations (4.1) and (4.2). When S = Z (so s = u ◦ v, gu,s = φv ), we obtain
the commutative diagram.
F̃Y a
αu
/
gu
F̃X
O
φs
φv
/
F̃
= Y
φv
FZ ,
Since this holds for all (Z, v) ∈ (CX )Y , the universal property of F̃Y implies that
gu ◦ αu is the identity of F̃Y .
On the other hand, turning back to the notation of the beginning of the proof,
both triangles in the following diagram are commutative
αu
F̃OY
φx
/
F̃O
= X
φu◦x
FD o
Fw
φs
FS ;
since gu,s = φx ◦ Fw , this implies that the right triangle of the following diagram is
commutative, the left-hand triangle from above also being commutative
F̃X a
gu
/
F̃OY
αu
/
F̃
= X
gu,s
φs
φs
FS .
Since this holds for all (S, s) ∈ CX , by the universal property of F̃X , we obtain that
αu ◦ gu is the identity of F̃X . This ends the proof that αu is invertible.
4.4. Good right-localization and extensions. Given a category D with a subcategory C with the same objects, we say that (C, D) is a good right-localization if (i)
every arrow u : X → Y in D admits an inverse u−1 : Y → X and (ii) every arrow
in D can be decomposed as g ◦ f −1 where f and g are arrows of C.
Lemma 4.5. Let (C, D) be a good right-localization. Then C is well-cofiltered.
Proof. Clearly any category in which all arrows are invertible is relatively thin. It
follows that D and its subcategory C are relatively thin. Now consider a pair of
objects (Y, u) and (Z, v) of CX . Then v −1 ◦ u is an arrow of D (because all arrows
s
t
are invertible in D), and it can be decomposed as Y ← W → Z, with s and t arrows
of C. By definition v ◦ t = u ◦ s determines an arrow W → X; endowing W with
the resulting composite arrow to X, the arrows s and t become arrows in CX .
Lemma 4.6. Let (C, D) be a good right-localization, and let E be another category.
Consider a (covariant) functor β from C to the category E mapping every arrow
to an invertible arrow of E. Then β has a unique extension to a functor from the
category D to the category E.
Proof. The uniqueness is clear. For the existence, consider an arrow u in D. We
wish to map u to β(g) ◦ β(f )−1 , where u = g ◦ f −1 . We have to prove that this does
not depend on the choice of (f, g). Thus write u = g1 ◦ f1−1 = g2 ◦ f2−1 . Since CX is
well-cofiltered (Lemma 4.5), we can produce a diagram as follows in C, where the
left “square” and the whole square are commutative:
f1
X`
~
YO 1
h1
g1
Z
f2
h2
Y
> 2
g2
X2
Then the right square is also commutative: indeed g1 ◦ h1 = (g1 ◦ f1−1 ) ◦ (f1 ◦ h1 ) =
(g2 ◦ f2−1 ) ◦ (f2 ◦ h2 ) = g2 ◦ h2 . Then
β(g1 ) ◦ β(f1 )−1 = β(g1 ) ◦ β(h1 ) ◦ β(h1 )−1 ◦ β(f1 )−1 = β(g1 ◦ h1 ) ◦ β(f1 ◦ h1 )−1
= β(g2 ◦ h2 ) ◦ β(f2 ◦ h2 )−1 = β(g2 ) ◦ β(h2 ) ◦ β(h2 )−1 ◦ β(f2 )−1 = β(g2 ) ◦ β(f2 )−1 ;
hence we can define without ambiguity β(u) = β(g) ◦ β(f )−1 .
We have to prove β(v) ◦ β(u) = β(v ◦ u) for any arrows u, v of D. This already
holds for u, v arrows of C. Write u = g ◦ f −1 , v = j ◦ h−1 with f , g, h, and j
arrows of C. Write h−1 ◦ g = t ◦ s−1 with s, t arrows of C. Then g ◦ s = h ◦ t,
so β(g) ◦ β(s) = β(g ◦ s) = β(h ◦ t) = β(h) ◦ β(t), which can be rewritten
β(h)−1 ◦ β(g) = β(t) ◦ β(s)−1 . In turn, we get
β(v ◦ u) = β(j ◦ h−1 ◦ g ◦ f −1 ) = β(j ◦ t ◦ s−1 ◦ f −1 ) = β(j ◦ t) ◦ β(f ◦ s)−1
= β(j) ◦ β(t) ◦ β(s)−1 ◦ β(f )−1 = β(j) ◦ β(h)−1 ◦ β(g) ◦ β(f )−1 = β(v) ◦ β(u).
Combining Lemmas 4.4, 4.5 and 4.6, we deduce:
Proposition 4.7. Consider a good right-localization (C, D) and a category E admitting filtering inductive limits. Let F be a contravariant functor from C to E. Then
the relative colimit functor α, defined by
X 7→ F̃X = lim−→Y →X F (Y ),   u 7→ αu ,
has a unique extension to a covariant functor from D to E.
5. IRREDUCIBLE HYPERSURFACES
Let X be a normal projective variety. In this chapter, we make use of the categorical construction of Section 4.3 to define an action of Bir(X) on the set of all
irreducible hypersurfaces in all “models” Y → X of X. We also sketch an application to the construction of Picard-Manin spaces in Section 5.5.
5.1. Localizations in categories of projective varieties. Consider the categories
V, V ] , VN , and VN ] from examples 4.1 and 4.3.
Proposition 5.1. Let k be an algebraically closed field.
(1) The categories V and VN are well-cofiltered (as defined in §4.2).
(2) The pairs (V, V ] ) (VN , VN ] ) are good right-localizations of categories (as
defined in §4.4).
Proof. Property (1) follows from Example 4.3. Let us now prove the second property. Clearly, every arrow in V ] is invertible. Let f : X 99K X ′ be a birational map. The graph Gf is an irreducible variety, both projections g : Gf → X and g ′ : Gf → X ′ are birational morphisms, and f = g ′ ◦ g −1 ; we deduce that (V, V ] ) is a good right-localization.
For VN , we only need to compose with the normalization map Y → Gf to get
the result (see [36], §9.7).
To spare the reader from going into too much category theory, let us state explicitly Proposition 4.7 in this case:
Corollary 5.2. Let E be a category admitting filtering inductive limits. Consider a
contravariant functor F from VN to the category E. Then the covariant functor α, X 7→ F̃X = lim−→Y →X F (Y ), u 7→ αu , has a unique extension to a functor from VN ] to E.
5.2. The functor of irreducible hypersurfaces. Let us construct the functor to
which we will apply Corollary 5.2. For X ∈ V, define Hyp(X) as the set of irreducible and reduced hypersurfaces of X.
Proposition 5.3. Let f : Y → X be a birational morphism between two irreducible
projective varieties (an arrow in V).
(1) The number of T ∈ Hyp(Y ) such that f (T ) is not a hypersurface is finite
(these T are precisely the hypersurfaces contracted by f ).
(2) For every S ∈ Hyp(X), the number of S 0 ∈ Hyp(Y ) such that f (S 0 ) = S is
positive and finite.
(3) If X is normal, this number is equal to 1, and the unique preimage S 0 of S
is the strict transform f ◦ (S) = (f −1 )◦ (S). In particular, S 7→ S 0 = f ◦ (S)
is injective and its image has finite complement: this complement Hyp(Y ) r
f ◦ (Hyp(X)) is the set of hypersurfaces T that are contracted by f .
Proof. Let U ⊂ Y be a Zariski-dense open subset on which f induces an isomorphism onto its image. Let F be the complement of U .
(1) If f (T ) is not a hypersurface, then T ⊂ F . So T is one of the irreducible
components of F , which leaves finitely many possibilities.
(2) Since Y is projective, f is surjective, and hence f(T) = S, where T = f⁻¹(S); T is a proper subvariety. Then at least one irreducible component T′ of T
satisfies f (T 0 ) = S, and conversely, every S 0 ∈ Hyp(Y ) such that f (S 0 ) = S has to
be an irreducible component of T , hence there are finitely many of them.
(3) We now use Theorem 2.1: since X is normal and Y is projective, the indeterminacy set Ind(f −1 ) ⊂ X has codimension ≥ 2. Hence the strict transform
(f⁻¹)◦(S) of S is well defined and is equal to the Zariski closure of f⁻¹(S r Ind(f⁻¹)). The total transform of S by f⁻¹ may contain additional components of codimension 1, but all of them are contracted into Ind(f⁻¹), which has codimension ≥ 2 (hence they are not equal
to S). This proves that S 0 = (f −1 )◦ (S). Since f (S 0 ) = S, the map f ◦ := (f −1 )◦ is
injective. Moreover, by construction, its image is made of hypersurfaces which are not contained in f⁻¹(Ind(f⁻¹)). Since every element T ∈ Hyp(Y ) which is not contracted coincides with f ◦(f(T)), the image of f ◦ is in fact equal to the complement
of the set of contracted hypersurfaces.
From Proposition 5.3, the map X 7→ Hyp(X) defines a contravariant functor
from VN to the category of sets, mapping an arrow f : Y → X to the (injective)
map
f ◦ : Hyp(X) → Hyp(Y ).
If F denoted the functor Hyp as in Section 4.3, then we would have F_X = Hyp(X) and F_f = f ◦. For X in the category VN, define H̃yp(X) as the filtering inductive limit
H̃yp(X) = lim−→_{Y→X} Hyp(Y).
By construction, X ↦ H̃yp(X) is a covariant functor from VN to the category of sets. By Corollary 5.2, it has a unique extension to a functor from VN♯ to the category of sets. The image of an object Y ∈ VN♯ is denoted H̃yp(Y), and the
image of an arrow f : Y 99K Y′, that is of a birational map between two normal projective varieties Y and Y′, is denoted by f•. By construction, f• is a bijection from H̃yp(Y) to H̃yp(Y′).
For an arrow u in VN , i.e. a birational morphism u : Y → X between two normal projective varieties, we rewrite the commutative square on the right of Equation (4.2) as
  H̃yp(Y)  ——u•——→  H̃yp(X)
     ↑ iY                ↑ iX
  Hyp(Y)  ←——u◦——  Hyp(X),
in which the top arrow u• is a bijection and the bottom arrow u◦ is injective.
The two injections iX and iY will simply be viewed as inclusions in what follows. So the bijection u•⁻¹ extends the injection which is given by the strict transform u◦. Since the image of u◦ has finite complement, the symmetric difference Hyp(Y) △ u•⁻¹(Hyp(X)) is finite. This latter property passes to inverses and compositions; hence, for every birational map v : X 99K X′ between normal irreducible projective varieties, the symmetric difference Hyp(X) △ v•⁻¹(Hyp(X′)) is finite. To
give a more precise statement, let us introduce the following notation: given a birational map v : X 99K X′ between normal irreducible projective varieties, define exc(v) by
exc(v) = # {S ∈ Hyp(X) ; v contracts S}.
This is the number of hypersurfaces S ∈ Hyp(X) contracted by v.
Proposition 5.4. Let v : X 99K X 0 be a birational transformation between normal
irreducible projective varieties. Let S be an element of Hyp(X).
(1) If S ∈ (v⁻¹)◦ Hyp(X′), then v•(S) = v◦(S) ∈ Hyp(X′).
(2) If S ∉ (v⁻¹)◦ Hyp(X′), then v◦(S) has codimension ≥ 2 (i.e. v contracts S), and v•(S) is an element of H̃yp(X′) r Hyp(X′).
(3) The symmetric difference v•(Hyp(X)) △ Hyp(X′) contains exc(v) + exc(v⁻¹) elements.
Proof. Let U be the complement of Ind(v) in X. Since, by Theorem 2.1, Ind(v)
has codimension ≥ 2, no S ∈ Hyp(X) is contained in Ind(v).
Let us prove (1). When v is a birational morphism the assertion follows from
Proposition 5.3. To deal with the general case, write v = g ◦ f −1 where f : Y → X
and g : Y → X 0 are birational morphisms from a normal variety Y . Since f is a
birational morphism, f •(S) = f ◦(S) ∈ Hyp(Y ); since S is not contracted by v,
g• (f ◦ (S)) = g◦ (f ◦ (S)) ∈ Hyp(X 0 ). Thus, v• (S) = g• (f • (S)) coincides with the
strict transform v◦ (S) ∈ Hyp(X 0 ).
Now let us prove (2), assuming thus that S ∉ (v⁻¹)◦ Hyp(X′). Let S″ ∈ Hyp(Y) be the hypersurface (f⁻¹)•(S) = (f⁻¹)◦(S). Then f(S″) = S. If g◦(S″) is a hypersurface S′, then (v⁻¹)◦(S′) = S, contradicting S ∉ (v⁻¹)◦ Hyp(X′). Thus, g contracts S″ onto a subset S′ ⊂ X′ of codimension ≥ 2. Since S′ = v◦(S), assertion (2) is proved.
Assertion (3) follows from the previous two assertions.
Example 5.5. Let g be a birational transformation of Pnk of degree d, meaning that g*(H) ≃ dH where H denotes a hyperplane of Pnk, or equivalently that g is defined by n + 1 homogeneous polynomials of the same degree d without common factor of positive degree. The exceptional set of g has degree (n+1)(d−1); as a consequence, exc_{Pnk}(g) ≤ (n+1)(d−1). More generally, if H is a polarization of X, then exc_X(g) is bounded from above by a function that depends only on the degree deg_H(g) := (g*H) · H^{dim(X)−1}.
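For instance, one may take the standard quadratic involution of the plane to see that the bound can be attained:
  σ : (x : y : z) 99K (yz : xz : xy),   n = 2,  d = 2;
σ contracts exactly the three coordinate lines {x = 0}, {y = 0}, {z = 0}, each onto one of the three coordinate points, so exc_{P2k}(σ) = 3 = (n + 1)(d − 1).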
5.3. Action of Bir(X) on H̃yp(X). Let us now restrict the functor f ↦ f• to the elements of Bir(X). The existence of this functor and Proposition 5.4 give the following theorem.
Theorem 5.6. Let X be a normal projective variety. The group Bir(X) acts faithfully by permutations on the set H̃yp(X) via the homomorphism
Bir(X) → Perm(H̃yp(X)),   g ↦ g• .
This action commensurates the subset Hyp(X) of H̃yp(X): for every g ∈ Bir(X),
|g•(Hyp(X)) △ Hyp(X)| = exc(g) + exc(g⁻¹).
The only thing that has not been proven yet is the fact that the homomorphism Bir(X) → Perm(H̃yp(X)), f ↦ f•, is injective. But the kernel of this homomorphism is made of birational transformations f such that f◦(W) = W for every irreducible hypersurface W of X. Since X is projective, one can embed X in some projective space P^m_k; then, every point of X(k) is the intersection of finitely many irreducible hyperplane sections of X: since all these sections are fixed by f, every point is fixed by f, and f is the identity.
5.4. Products of varieties. Let X, Y be irreducible, normal projective varieties.
We consider the natural embedding of Bir(X) into Bir(X × Y ), given by the birational action f · (x, y) = (f (x), y), f ∈ Bir(X). There is a natural injection jY of
Hyp(X) into Hyp(X × Y ), given by S 7→ S × Y , which naturally extends to an
injection of H̃yp(X) into H̃yp(X × Y); this inclusion is Bir(X)-equivariant. The following proposition will be applied in Corollary 6.7.
Proposition 5.7. Let a group Γ act on X by birational transformations. Then Γ transfixes Hyp(X) in H̃yp(X) if and only if it transfixes Hyp(X × Y) in H̃yp(X × Y). More precisely, the subset Hyp(X × Y) r jY(Hyp(X)) is Bir(X)-invariant.
Proof. The reverse implication is immediate, since any restriction of a transfixing
action is transfixing. The direct implication follows from the latter statement, which
we now prove. Consider S ∈ Hyp(X × Y ) r jY (Hyp(X)). This means that S is an
irreducible hypersurface of X × Y whose projection to X is surjective. Now, for
γ ∈ Bir(X), γ induces an isomorphism between open dense subsets U, V of X, and
hence between U × Y and V × Y ; in particular, γ does not contract S. This shows
that γ stabilizes Hyp(X × Y ) r jY (Hyp(X)).
5.5. Manin’s construction. Instead of looking at the functor X 7→ Hyp(X) from
the category of normal projective varieties to the category of sets, one can consider
the Néron-Severi functor X 7→ NS(X) from the category of smooth projective varieties to the category of abelian groups. In characteristic zero, or for surfaces in arbitrary characteristic, the resolution of singularities shows that smooth projective varieties, together with birational morphisms, form a good right localization of the category of smooth projective varieties with birational maps between them. Thus, one
can construct a functor, the relative colimit of Néron-Severi groups, X ↦ ÑS(X), that maps birational maps X 99K Y to group isomorphisms ÑS(X) → ÑS(Y). In
dimension 2, this construction is known as the Picard-Manin space (see [8, 30]).
One may also replace NS(X) by other cohomology groups if they behave contravariantly with respect to birational morphisms (see [7] for instance).
6. PSEUDO-REGULARIZATION OF BIRATIONAL TRANSFORMATIONS
In this section, we make use of the action of Bir(X) on H̃yp(X) to characterize and study groups of birational transformations that are pseudo-regularizable, in the sense of Definition 1.1. As before, k is an algebraically closed field.
6.1. An example. Consider the birational transformation f (x, y) = (x + 1, xy) of
P1k × P1k . The vertical curves Ci = {x = −i}, i ∈ Z, are exceptional curves for the cyclic group Γ = ⟨f⟩: each of these curves is contracted by an element of Γ onto a point, namely f◦^{i+1}(Ci) = (1, 0).
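Concretely, the iterates of f can be written down: f^n(x, y) = (x + n, x(x + 1) · · · (x + n − 1) y) for n ≥ 1, and the product x(x + 1) · · · (x + i) vanishes along Ci = {x = −i}, which is why f^{i+1} sends Ci to the single point (1, 0).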
Let ϕ : Y 99K P1k × P1k be a birational map, and let U be a non-empty open subset
of Y . Consider the subgroup ΓY := ϕ−1 ◦ Γ ◦ ϕ of Bir(Y ). If i is large enough,
ϕ◦⁻¹(Ci) is an irreducible curve Ci′ ⊂ Y, and these curves Ci′ are pairwise distinct, so that most of them intersect U. For positive integers m, f^{i+m} maps Ci onto (m, 0), and (m, 0) is not an indeterminacy point of ϕ⁻¹ if m is large. Thus, ϕ⁻¹ ◦ f^{i+m} ◦ ϕ contracts Ci′, and ϕ⁻¹ ◦ f^{i+m} ◦ ϕ is not a pseudo-automorphism of U. This argument proves the following lemma.
Lemma 6.1. Let X be the surface P1k × P1k. Let f : X 99K X be defined by f(x, y) = (x + 1, xy), and let Γ be the subgroup generated by f^ℓ, for some ℓ ≥ 1. Then the cyclic group Γ is not pseudo-regularizable.
This shows that Theorem A requires an assumption on the group Γ. More generally, consider a subgroup Γ ⊂ Bir(X) such that
(a) Γ contracts a family of hypersurfaces Wi ⊂ X whose union is Zariski dense;
(b) the union of the family of strict transforms f◦(Wi), for f ∈ Γ contracting Wi, forms a subset of X whose Zariski closure has codimension at most 1.
Then, Γ cannot be pseudo-regularized.
6.2. Characterization of pseudo-isomorphisms. Recall that f• denotes the bijection H̃yp(X) → H̃yp(X′) which is induced by a birational map f : X 99K X′. Also, for any nonempty open subset U ⊂ X, we define Hyp(U) = {H ∈ Hyp(X) : H ∩ U ≠ ∅}; it has finite complement in Hyp(X).
Proposition 6.2. Let f : X 99K X 0 be a birational map between normal projective
varieties. Let U ⊂ X and U 0 ⊂ X 0 be two dense open subsets. Then, f induces a
pseudo-isomorphism U → U 0 if and only if f• (Hyp(U )) = Hyp(U 0 ).
Proof. If f restricts to a pseudo-isomorphism U → U 0 , then f maps every hypersurface of U to a hypersurface of U 0 by strict transform. And (f −1 )◦ is an inverse
for f◦ : Hyp(U) → Hyp(U′). Thus, f•(Hyp(U)) = f◦(Hyp(U)) = Hyp(U′).
Let us now assume that f• (Hyp(U )) = Hyp(U 0 ). Since X and X 0 are normal,
Ind(f ) and Ind(f −1 ) have codimension ≥ 2 (Theorem 2.1).
Let f_{U,U′} be the birational map from U to U′ which is induced by f. The indeterminacy set of f_{U,U′} is contained in the union of the set Ind(f) ∩ U and the set of points x ∈ U r Ind(f) which are mapped by f into the complement of U′; this second part of Ind(f_{U,U′}) has codimension ≥ 2, because otherwise there would be an irreducible hypersurface W in U which would be mapped into X′ r U′, contradicting the equality f•(Hyp(U)) = Hyp(U′). Thus, the indeterminacy set of f_{U,U′} has codimension ≥ 2. Changing f into its inverse f⁻¹, we see that the indeterminacy set of f⁻¹_{U′,U} : U′ 99K U has codimension ≥ 2 too.
If f_{U,U′} contracted an irreducible hypersurface W ⊂ U onto a subset of U′ of codimension ≥ 2, then f•(W) would not be contained in Hyp(U′) (it would correspond to an element of H̃yp(X′) r Hyp(X′) by Proposition 5.4). Thus, f_{U,U′} satisfies the first property of Proposition 2.4 and, therefore, is a pseudo-isomorphism.
6.3. Characterization of pseudo-regularization. Let X be an (irreducible, reduced) normal projective variety. Let Γ be a subgroup of Bir(X). Assume that the action of Γ on H̃yp(X) fixes (globally) a subset A ⊂ H̃yp(X) such that
|A △ Hyp(X)| < +∞.
In other words, A is obtained from Hyp(X) by removing finitely many hypersurfaces Wi ∈ Hyp(X) and adding finitely many hypersurfaces Wj′ ∈ H̃yp(X) r Hyp(X). Each Wj′ comes from an irreducible hypersurface in some model πj : Xj → X, and there is a model π : Y → X that covers all of them (i.e. πj⁻¹ ◦ π is a morphism from Y to Xj for every j). Then, π◦(A) is a subset of Hyp(Y). Changing X into Y, A into π◦(A), and Γ into π⁻¹ ◦ Γ ◦ π, we may assume that
(1) A = Hyp(X) r {E1, . . . , Eℓ} where the Ei are ℓ distinct irreducible hypersurfaces of X,
(2) the action of Γ on H̃yp(X) fixes the set A.
In what follows, we denote by U the non-empty Zariski open subset X r ∪i Ei and
by ∂X the boundary X r U = E1 ∪ · · · ∪ Eℓ ; ∂X is considered as the boundary of
the compactification X of U.
Lemma 6.3. The group Γ acts by pseudo-automorphisms on the open subset U. If
U is smooth and there is an ample divisor D whose support coincides with ∂X, then
Γ acts by automorphisms on U.
In this statement, we say that the support of a divisor D coincides with ∂X if D = Σ_i ai Ei with ai > 0 for every 1 ≤ i ≤ ℓ.
Proof. Since A = Hyp(U) is Γ-invariant, Proposition 6.2 shows that Γ acts by
pseudo-automorphisms on U.
Since D is an ample divisor, some positive multiple mD is very ample, and the
complete linear system |mD| provides an embedding of X in a projective space.
The divisor mD corresponds to a hyperplane section of X in this embedding, and
the open subset U is an affine variety because the support of D is equal to ∂X.
Proposition 2.8 concludes the proof of the lemma.
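A familiar special case may help fix ideas (it is only an illustration of the lemma): for X = P2k with ∂X = L∞ a single line, one can take D = L∞, which is ample and supported on ∂X; then U = X r ∂X = A2k is smooth and affine, and the lemma says that Γ acts by automorphisms on the affine plane A2k.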
By Theorem 5.6, every subgroup of Bir(X) acts on H̃yp(X) and commensurates Hyp(X). If Γ transfixes Hyp(X), there is an invariant subset A of H̃yp(X) for which A △ Hyp(X) is finite. Thus, one gets the following characterization of pseudo-regularizability (the converse being immediate).
Theorem 6.4. Let X be a normal projective variety over an algebraically closed
field k. Let Γ be a subgroup of Bir(X). Then Γ transfixes the subset Hyp(X) of
H̃yp(X) if and only if Γ is pseudo-regularizable.
Of course, this theorem applies directly when Γ ⊂ Bir(X) has property (FW)
because Theorem 5.6 shows that Γ commensurates Hyp(X).
Remark 6.5. Assuming char(k) = 0, we may work in the category of smooth
varieties (see Example 4.3 and § 5.5). As explained in Remark 1.2 and Lemma 6.3,
there are two extreme cases, corresponding to an empty or an ample boundary B =
∪i Ei .
If U = Y , Γ acts by pseudo-automorphisms on the projective variety Y . As
explained in Theorem 2.7, Γ is an extension of a subgroup of GL (NS(Y )) by an
algebraic group (which is almost contained in Aut(Y )0 ).
If U is affine, Γ acts by automorphisms on U. The group Aut(U) may be huge
(for instance if U is the affine space), but there are techniques to study groups of
automorphisms that are not available for birational transformations. For instance
Γ is residually finite and virtually torsion free if Γ is a group of automorphisms
generated by finitely many elements (see [3]).
6.4. Distorted elements. Theorem 6.4 may be applied when Γ has Property (FW),
or for pairs (Λ, Γ) with relative Property (FW). Here is one application:
Corollary 6.6. Let X be an irreducible projective variety. Let Γ be a distorted
cyclic subgroup of Bir(X). Then Γ is pseudo-regularizable.
The contrapositive is useful to show that some elements of Bir(X) are undistorted. Let us state it in a strong, "stable" way.
Corollary 6.7. Let X be a normal irreducible projective variety and let f be an element of Bir(X) such that the cyclic group ⟨f⟩ does not transfix Hyp(X) (i.e., f is not pseudo-regularizable). Then the cyclic subgroup ⟨f⟩ is undistorted in Bir(X), and more generally, for every irreducible projective variety Y, the cyclic subgroup ⟨f × Id_Y⟩ is undistorted in Bir(X × Y).
The latter consequence indeed follows from Proposition 5.7. This can be applied
to various examples, such as those in Example 7.9.
7. ILLUSTRATING RESULTS
7.1. Surfaces whose birational group is transfixing. If X is a projective curve,
Bir(X) always transfixes Hyp(X), since it acts by automorphisms on a smooth
model of X. We now consider the same problem for surfaces, starting with the
following result, which holds in arbitrary dimension.
Proposition 7.1. Let X be a normal irreducible variety of positive dimension over
an algebraically closed field k. Then Bir(X × P1 ) does not transfix Hyp(X × P1 ).
Proof. We can suppose that X is affine and work in the model X × A1 . For
ϕ a nonzero regular function on X, define a regular self-map f of X × A1 by
f (x, t) = (x, ϕ(x)t). Denoting by Z(ϕ) the zero set of ϕ, we remark that f induces
an automorphism of the open subset (X r Z(ϕ)) × A1 . In particular, it induces a
permutation of Hyp((X r Z(ϕ)) × A1 ). Moreover, since f contracts the complement Z(ϕ) × A1 to the subset Z(ϕ) × {0}, which has codimension ≥ 2, its action
on H̃yp(X × A1 ) maps the set of codimension 1 components of Z(ϕ) × A1 outside
M = Hyp(X × A1 ). Therefore M r f −1 (M ) is the set of irreducible components of
Z(ϕ) × A1 . Its cardinality is equal to the number of irreducible components of Z(ϕ).
When ϕ varies, this number is unbounded; hence, Bir(X × A1 ) does not transfix
Hyp(X × A1 ).
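To see that the number of components is indeed unbounded, one can make the choice of ϕ explicit (an illustration, taking X = A1k with coordinate x and pairwise distinct λ1, . . . , λN ∈ k):
  ϕN (x) = (x − λ1)(x − λ2) · · · (x − λN ),   fN (x, t) = (x, ϕN (x) t);
then Z(ϕN ) has N irreducible components, so |M r fN⁻¹(M )| = N, which is unbounded as N grows.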
Varieties that are birational to the product of a variety and the projective line are
said to be ruled. Proposition 7.1 states that for any ruled irreducible projective
variety Y of dimension ≥ 2, Bir(Y ) does not transfix Hyp(Y ). The converse holds
for surfaces, by the following theorem.
Theorem 7.2. Let k be an algebraically closed field. Let X be an irreducible
normal projective surface over k. The following are equivalent:
(1) Bir(X) does not transfix Hyp(X);
(2) the Kodaira dimension of X is −∞;
(3) X is ruled;
(4) there is no irreducible projective surface Y that is birationally equivalent to X, and such that Bir(Y) = Aut(Y).
Proof. The equivalence between (2) and (3) is classical (see [1]). The group Aut(Y) fixes Hyp(Y) ⊂ H̃yp(Y), hence (1) implies (4). If the Kodaira dimension of X is
≥ 0, then X has a unique minimal model X0 , and Bir(X0 ) = Aut(X0 ). Thus, (4)
implies (2). Finally, Proposition 7.1 shows that (3) implies (1).
Theorem 7.3. Let X be an irreducible projective surface over an algebraically
closed field k. The following are equivalent:
(1) some finitely generated subgroup of Bir(X) does not transfix Hyp(X);
(2) some cyclic subgroup of Bir(X) does not transfix Hyp(X);
(3) • k has characteristic 0, and X is birationally equivalent to the product
of the projective line with a curve of genus 0 or 1, or
• k has positive characteristic, and X is a rational surface.
Example 7.4. Let k be an algebraically closed field that is not algebraic over a finite
field. Let t be an element of infinite order in the multiplicative group k∗ . Then the
birational transformation g of P2k given, in affine coordinates, by (x, y) 7→ (tx +
1, xy) does not transfix Hyp(P2k ). Indeed, it is easy to show that the hypersurface
C = {x = 0} satisfies, for n ∈ Z, gⁿ(C) ∈ Hyp(P2k ) if and only if n ≤ 0.
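One way to organize the verification (a sketch, in the affine chart (x, y) and using that t has infinite order): g maps the vertical line {x = a} onto {x = ta + 1} and contracts C = {x = 0} onto the point (1, 0), so g•ⁿ(C) lies outside Hyp(P2k) for n ≥ 1. In the other direction,
  (g⁻ⁿ)◦(C) = {x = a_n},   a_0 = 0,  a_{k+1} = (a_k − 1)/t,   hence   a_n = −(tⁿ − 1)/(tⁿ(t − 1));
the a_n are pairwise distinct and never equal to 1 (that would force t^{n+1} = 1), so in particular none of them is the line {x = 1}, which g⁻¹ contracts, and these strict transforms form an infinite family of distinct lines in Hyp(P2k).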
Example 7.5. The example of non-transfixing element in Example 7.4 works under
a small restriction on k. Here is an example over an arbitrary algebraically closed
field k. Let L and L0 be two lines in P2k intersecting transversally at a point q.
Let f be a birational transformation of P2k that contracts L0 onto q and fixes the
line L. For instance, in affine coordinates, the monomial map (x, y) 7→ (x, xy)
contracts the y-axis onto the origin, and fixes the x-axis. Assume that there is an
open neighborhood U of q such that f does not contract any curve in U except the
line L0 . Let C be an irreducible curve that intersects L and L0 transversally at q.
Then, for every n ≥ 1, the strict transform f◦n (C) is an irreducible curve, and the
order of tangency of this curve with L goes to infinity with n. Thus, the degree of
f◦n (C) goes to infinity too and the f◦n (C) form an infinite sequence in Hyp(P2k ).
Now, assume that C is contracted by f⁻¹ onto a point p, p ∉ Ind(f), and p is fixed by f⁻¹. Then, for every m ≥ 1, f•^{−m}(C) is not in Hyp(P2k ). This shows
that the orbit of C under the action of f• intersects Hyp(P2k) and its complement H̃yp(P2k) r Hyp(P2k) on the infinite sets {f◦^n(C) ; n ≥ 1} and {f•^{−m}(C) ; m ≥ 1}. In particular, f does not transfix Hyp(P2k).
Since such maps exist over every algebraically closed field k, this example shows
that property (2) of Theorem 7.3 is satisfied for every rational surface X.
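For the monomial map (x, y) ↦ (x, xy) above, this growth is explicit (taking, for instance, C = {y = x}, which meets L = {y = 0} and L′ = {x = 0} transversally at q): fⁿ(x, x) = (x, x^{n+1}), so f◦^n(C) = {y = x^{n+1}}, a curve of degree n + 1 whose order of contact with L at q is n + 1; both quantities go to infinity with n.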
Proof. Trivially (2) implies (1).
Suppose that (3) holds and let us prove (2). The case X = P1 × P1 is already
covered by Lemma 6.1 in characteristic zero, and by the previous example in positive characteristic. The case X = C × P1 in characteristic zero, where C is an
elliptic curve, is similar. To see it, fix a point t0 ∈ C and a rational function ϕ on C
that vanishes at t0 . Then, since k has characteristic zero, one can find a translation
s of C of infinite order such that the orbit {sn (t0 ) : n ∈ Z} does not contain any
other zero or pole of ϕ (here we use that the characteristic of k is 0). Consider the
birational transformation f ∈ Bir(X) given by f(t, x) = (s(t), ϕ(t)x). Let H be the hypersurface {t0} × P1. Then for n ∈ Z, we have (f•)ⁿH ∈ Hyp(X) if and only if n ≤ 0. Hence the action of the cyclic group ⟨f⟩ does not transfix Hyp(X).
Let us now prove that (1) implies (3). Applying Theorem 7.2, and changing
X to a birationally equivalent surface if necessary, we assume that X = C × P1
for some (smooth irreducible) curve C. We may now assume that the genus of C
is ≥ 2, or ≥ 1 in positive characteristic, and we have to show that every finitely
generated group Γ of Bir(X) transfixes Hyp(X). Since the genus of C is ≥ 1, the
group Bir(X) preserves the fibration X → C; this gives a surjective homomorphism
Bir(X) → Aut(C). Now let us fully use the assumption on C: if its genus is ≥ 2,
then Aut(C) is finite; if its genus is 1 and k has positive characteristic, then Aut(C) is locally finite (every finitely generated subgroup is finite), and in particular the projection of Γ on Aut(C) has a finite image.
Thus the kernel of this homomorphism intersects Γ in a finite index subgroup Γ0 .
It now suffices to show that Γ0 transfixes Hyp(X). Every f ∈ Γ0 has the form
f (t, x) = (t, ϕt (x)) for some rational map t ↦ ϕt from C to PGL 2 ; define Uf ⊂ C as the open and dense subset on which t ↦ ϕt is regular: by definition, f restricts to an
automorphism of Uf × P1 . Let S be a finite generating subset of Γ0 , and let US be
the intersection of the open subsets Ug , for g ∈ S. Then Γ0 acts by automorphisms
on US × P1 and its action on Hyp(X) fixes the subset Hyp(US ). Hence Γ transfixes
Hyp(X).
It would be interesting to obtain characterizations of the same properties in dimension 3 (see Question 11.2).
7.2. Transfixing Jonquières twists. Let X be an irreducible normal projective surface and π a morphism onto a smooth projective curve C with rational connected
fibers. Let Birπ (X) be the subgroup of Bir(X) permuting the fibers of π. Since C
is a smooth projective curve, the group Bir(C) coincides with Aut(C) and we get a
canonical homomorphism rC : Birπ (X) → Aut(C).
The main examples to keep in mind are provided by P1 ×P1 , Hirzebruch surfaces,
and C × P1 for some genus 1 curve C, π being the first projection.
Let Hyp_π(X) denote the set of irreducible curves which are contained in fibers of π, and define H̃yp_π(X) = Hyp_π(X) ⊔ (H̃yp(X) r Hyp(X)), so that H̃yp(X) = H̃yp_π(X) ⊔ (Hyp(X) r Hyp_π(X)). An irreducible curve H ⊂ X is an element of Hyp(X) r Hyp_π(X) if and only if its projection π(H) coincides with C; these curves are said to be transverse to π.
Proposition 7.6. The decomposition H̃yp(X) = H̃yp_π(X) ⊔ (Hyp(X) r Hyp_π(X)) is Bir_π(X)-invariant.
Proof. Let H ⊂ X be an irreducible curve which is transverse to π. Since Birπ (X)
acts by automorphisms on C, H can not be contracted by any element of Birπ (X);
more precisely, for every g ∈ Birπ (X), g• (H) is an element of Hyp(X) which is
transverse to π. Thus the set of transverse curves is Birπ (X)-invariant.
This proposition and the proof of Theorem 7.3 lead to the following corollary.
Corollary 7.7. Let G be a subgroup of Birπ (X). If π maps the set of indeterminacy
points of the elements of G into a finite subset of C, then G transfixes Hyp(X).
In the case of cyclic subgroups, we establish a converse under the mild assumption of algebraic stability. Recall that a birational transformation f of a smooth
projective surface is algebraically stable if the forward orbit of Ind(f −1 ) does not
intersect Ind(f ). By [14], given any birational transformation f of a surface X,
there is a birational morphism u : Y → X, with Y a smooth projective surface,
such that fY := u−1 ◦ f ◦ u is algebraically stable. If π : X → C is a fibration,
as above, and f is in Birπ (X), then fY preserves the fibration π ◦ u. Thus, we
may always assume that X is smooth and f is algebraically stable after a birational
conjugacy.
Proposition 7.8. Let X be a smooth projective surface, and π : X → C a rational
fibration. If f ∈ Birπ (X) is algebraically stable, then f transfixes Hyp(X) if and only if the orbit of π(Ind(f )) under the action of rC (f ) is finite.
For X = P1 × P1 , the reader can check (e.g., by conjugating a suitable automorphism) that the proposition fails without the algebraic stability assumption.
Proof. Denote by A ⊂ Aut(C) the subgroup generated by rC (f ). Consider a fiber
F ' P1 which is contracted to a point q by f . Then, there is a unique indeterminacy
point p of f on F . If the orbit of π(q) under the action of A is infinite, the orbit of
q under the action of f is infinite too. Set qn = f n−1 (q) for n ≥ 1 (so that q1 = q);
this sequence of points is well defined because f is algebraically stable: for every
n ≥ 1, f is a local isomorphism from a neighborhood of qn to a neighborhood
of qn+1 . Then, the image of F in H̃yp(X) under the action of fⁿ is an element
of H̃yp(X) r Hyp(X): it is obtained by a finite number of blow-ups above qn. Since the points qn form an infinite set, the images of F form an infinite subset of H̃yp(X) r Hyp(X). Together with the previous corollary, this argument proves the proposition.
Example 7.9. Consider X = P1 × P1 , with π(x, y) = x (using affine coordinates).
Start with fa (x, y) = (ax, xy), for some non-zero parameter a ∈ k. The action of
rC (fa ) on C = P1 fixes the images 0 and ∞ of the indeterminacy points of fa . Thus,
fa transfixes Hyp(X) by Corollary 7.7. Now, consider ga (x, y) = (ax, (x + 1)y).
Then, the orbit of −1 under multiplication by a is finite if and only if a is a root
of unity; thus, if a is not a root of unity, ga does not transfix Hyp(X). Section 6.1
provides more examples of that kind.
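In the affine coordinates above, the indeterminacy points can be computed directly (a quick check): the only indeterminacy points of fa are (0, ∞) and (∞, 0), so π(Ind(fa)) = {0, ∞}, a set which is invariant under rC(fa) : x ↦ ax; for ga, the point (−1, ∞) is an indeterminacy point, so π(Ind(ga)) contains −1, and the rC(ga)-orbit {−aⁿ ; n ≥ 0} of −1 is infinite precisely when a is not a root of unity.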
8. BIRATIONAL TRANSFORMATIONS OF SURFACES I
From now on, we work in dimension 2: X, Y , and Z will be smooth projective
surfaces over the algebraically closed field k. (In dimension 2, the resolution of
singularities is available in all characteristics, so that we can always assume the
varieties to be smooth.)
8.1. Regularization. In this section, we refine Theorem 6.4, in order to apply results of Danilov and Gizatullin. Recall that a curve C in a smooth surface Y has
normal crossings if each of its singularities is a simple node with two transverse
tangents. In the complex case, this means that C is locally analytically equivalent
to {xy = 0} (two branches intersecting transversally) in an analytic neighborhood
of each of its singularities.
Theorem 8.1. Let X be a smooth projective surface, defined over an algebraically
closed field k. Let Γ be a subgroup of Bir(X) that transfixes the subset Hyp(X) of
H̃yp(X). There exists a smooth projective surface Z, a birational map ϕ : Z 99K X
There exists a smooth projective surface Z, a birational map ϕ : Z 99K X
and a dense open subset U ⊂ Z such that, writing the boundary ∂Z := Z r U as a
finite union of irreducible components Ei ⊂ Z, 1 ≤ i ≤ ℓ, the following properties
hold:
(1) The boundary ∂Z is a curve with normal crossings.
(2) The subgroup ΓZ := ϕ−1 ◦ Γ ◦ ϕ ⊂ Bir(Z) acts by automorphisms on the
open subset U.
(3) For all i ∈ {1, . . . , ℓ} and g ∈ ΓZ , the strict transform of Ei under the
action of g on Z is contained in ∂Z: either g◦ (Ei ) is a point of ∂Z or
g◦ (Ei ) is an irreducible component Ej of ∂Z.
(4) For all i ∈ {1, . . . , ℓ}, there exists an element g ∈ ΓZ that contracts Ei to a
point g◦ (Ei ) ∈ ∂Z. In particular, Ei is a rational curve.
(5) The pair (Z, U) is minimal for the previous properties, in the following
sense: if one contracts a smooth curve of self-intersection −1 in ∂Z, then
the boundary ceases to be a normal crossing divisor.
Before starting the proof, note that the boundary ∂Z may a priori contain an
irreducible rational curve E with a node.
Proof. We apply Theorem 6.4 (which works in positive characteristic too, because
X is a surface), and get a birational morphism ϕ0 : Y0 → X and an open subset U0
of Y0 that satisfy properties (1) and (3), except that we only know that the action of
Γ0 := ϕ0⁻¹ ◦ Γ ◦ ϕ0 on U0 is by pseudo-automorphisms (not yet by automorphisms).
We shall progressively modify the triple (Y0 , U0 , ϕ0 ) to obtain a surface Z with
properties (1) to (5).
Step 1.– First, we blow-up the singularities of the curve ∂Y0 = Y0 r U0 to get a
boundary that is a normal crossing divisor. This replaces the surface Y0 by a new
one, still denoted Y0 . This modification adds new components to the boundary ∂Y0
but does not change the fact that Γ0 acts by pseudo-automorphisms on U0 . Let `0 be
the number of irreducible components of Y0 r U0 .
Step 2.– Consider a point q in U0 , and assume that there is a curve Ei of ∂Y0
that is contracted to q by an element g ∈ Γ0 ; fix such a g, and denote by D the
union of the curves Ej such that g◦ (Ej ) = q. By construction, g is a pseudo-automorphism of U0 . The curve D does not intersect the indeterminacy set of g,
since otherwise there would be a curve C containing q that is contracted by g −1 .
And D is a connected component of ∂Y0 , because otherwise g maps one of the Ej
to a curve that intersects U0 . Thus, there are small neighborhoods W of D and
W 0 of q such that W ∩ ∂Y0 = D and g realizes an isomorphism from W r D to
W 0 r {q}, contracting D onto the smooth point q ∈ Y0 . As a consequence, there is
a birational morphism π1 : Y0 → Y1 such that
(1) Y1 is smooth
(2) π1 contracts D onto a point q1 ∈ Y1
(3) π1 is an isomorphism from Y0 r D to Y1 r {q1 }.
In particular, π1 (U0 ) is an open subset of Y1 and U1 = π1 (U0 ) ∪ {q1 } is an open
neighborhood of q1 in Y1 .
Then, Γ1 := π1 ◦ Γ0 ◦ π1⁻¹ acts birationally on Y1 , and by pseudo-automorphisms on U1 . The boundary ∂Y1 = Y1 r U1 contains ℓ1 irreducible components, with
ℓ1 < ℓ0 (the difference is the number of components of D), and is a normal crossing
divisor because D is a connected component of ∂Y0 .
Repeating this process, we construct a sequence of surfaces πk : Yk−1 → Yk and
open subsets πk (Uk−1 ) ⊂ Uk ⊂ Yk such that the number of irreducible components
of ∂Yk = Yk r Uk decreases. After a finite number of steps (at most ℓ0 ), we may
assume that Γk ⊂ Bir(Yk ) does not contract any boundary curve onto a point of the
open subset Uk . On such a model, Γk acts by automorphisms on Uk .
We fix such a model, which we denote by the letters Y , U, ∂Y , ϕ. The new birational map ϕ : Y 99K X is the composition of ϕ0 with the inverse of the morphism
Y0 → Yk . On such a model, properties (1) and (2) are satisfied. Moreover, (3)
follows from (2). We now modify Y further to get property (4).
Step 3.– Assume that the curve Ei ⊂ Y r U is not contracted by Γ. Let F be
the orbit of Ei : F = ∪g∈Γ g◦ (Ei ); this curve is contained in the boundary ∂Y of the
open subset U. Changing U into
U′ = U ∪ (F r (∂Y r F )),
the group Γ also acts by pseudo-automorphisms on U′. This operation decreases the number ℓ of irreducible components of the boundary. Thus, combining steps 2
and 3 finitely many times, we reach a model that satisfies Properties (1) to (4). We
continue to denote it by Y .
Step 4.– If the boundary ∂Y contains a smooth (rational) curve Ei of self-intersection −1, it can be blown down to a smooth point q by a birational morphism
π : Y → Y 0 ; the open subset U is not affected, but the boundary ∂Y 0 has one
component less. If Ei was a connected component of ∂Y , then U 0 = π(U) ∪ {q}
is a neighborhood of q and one replaces U by U 0 , as in step 2. Now, two cases may
happen. If the boundary ∂Y 0 ceases to be a normal crossing divisor, we come back
to Y and do not apply this surgery. If ∂Y 0 has normal crossings, we replace Y by
this new model. In a finite number of steps, looking successively at all (−1)-curves
and iterating the process, we reach a new surface Z on which all five properties are
satisfied.
Remark 8.2. One may also remove property (5) and replace property (1) by
(1’) The Ei are rational curves, and none of them is a smooth rational curve with
self-intersection −1.
But doing so, we may lose the normal crossing property. To get property (1’), apply
the theorem and argue as in step 4.
8.2. Constraints on the boundary. We now work on the new surface Z given by
Theorem 8.1. Thus, Z is the surface, Γ the subgroup of Bir(Z), U the open subset
on which Γ acts by automorphisms, and ∂Z the boundary of U.
Proposition 8.3 (Gizatullin, [17] § 4). There are four possibilities for the geometry
of the boundary ∂Z = Z r U.
(1) ∂Z is empty.
(2) ∂Z is a cycle of rational curves.
(3) ∂Z is a chain of rational curves.
(4) ∂Z is not connected; it is the disjoint union of finitely many smooth rational curves of self-intersection 0.
Moreover, in cases (2) and (3), the open subset U is the blow-up of an affine surface.
Thus, there are four possibilities for ∂Z, which we study successively. We shall
start with (1) and (4) in sections 8.3 and 8.4. Then case (3) is dealt with in Section 8.5. Case (2) is slightly more involved: it is treated in Section 9.
Before that, let us explain how Proposition 8.3 follows from Section 5 of [17].
First, we describe the precise meaning of the statement, and then we explain how
the original results of [17] apply to our situation.
The boundary and its dual graph .– Consider the dual graph GZ of the boundary ∂Z. The vertices of GZ are in one to one correspondence with the irreducible
components Ei of ∂Z. The edges correspond to singularities of ∂Z: each singular point q gives rise to an edge connecting the components Ei that determine the
two local branches of ∂Z at q. When the two branches correspond to the same
irreducible component, one gets a loop of the graph GZ .
We say that ∂Z is a chain of rational curves if the dual graph is of type Aℓ : ℓ is the number of components, and the graph is linear, with ℓ vertices. Chains are also
called zigzags by Gizatullin and Danilov.
We say that ∂Z is a cycle if the dual graph is isomorphic to a regular polygon with ℓ vertices. There are two special cases: when ∂Z is reduced to one component, this curve is a rational curve with one singular point and the dual graph is a loop (one vertex, one edge); when ∂Z is made of two components, these components intersect in two distinct points, and the dual graph is made of two vertices with two edges between them. For ℓ = 3, 4, . . ., the graph is a triangle, a square, etc.
Gizatullin’s original statement.– To describe Gizatullin’s article, let us introduce some useful vocabulary. Let S be a projective surface, and C ⊂ S be a curve;
C is a union of irreducible components, which may have singularities. Assume that
S is smooth in a neighborhood of C. Let S0 be the complement of C in S, and
let ι : S0 → S be the natural embedding of S0 in S. Then, S is a completion of
S0 : this completion is marked by the embedding ι : S0 → S, and its boundary is
the curve C. Following [17] and [18, 19], we only consider completions of S0 by
curves (i.e. S rι(S0 ) is of pure dimension 1), and we always assume S to be smooth
in a neighborhood of the boundary. Such a completion is
(i) simple if the boundary C has normal crossings;
(ii) minimal if it is simple and minimal for this property: if Ci ⊂ C is an
exceptional divisor of the first kind then, contracting Ci , the image of C is
not a normal crossing divisor anymore. Equivalently, Ci intersects at least
three other components of C. Equivalently, if ι0 : S0 → S 0 is another simple
completion, and π : S → S 0 is a birational morphism such that π ◦ ι = ι0 ,
then π is an isomorphism.
If S is a completion of S0 , one can blow-up boundary points to obtain a simple
completion, and then blow-down some of the boundary components Ci to reach a
minimal completion.
Now, consider the group of automorphisms of the open surface S0 . This group
Aut(S0 ) acts by birational transformations on S. An irreducible component Ei of
the boundary C is contracted if there is an element g of Aut(S0 ) that contracts Ei :
g◦ (Ei ) is a point of C. Let E be the union of the contracted components. In [17],
Gizatullin proves that E satisfies one of the four properties stated in Proposition 8.3;
moreover, in cases (2) and (3), E contains an irreducible component Ei with Ei2 > 0
(see Corollary 4, Section 5 of [17]).
Thus, Proposition 8.3 follows from the properties of the triple (Z, U, Γ): the open
subset U plays the role of S0 , and Z is the completion S; the boundary ∂Z is the
curve C: it is a normal crossing divisor, and it is minimal by construction. Since
every component of ∂Z is contracted by at least one element of Γ ⊂ Aut(U), ∂Z
coincides with Gizatullin’s curve E. The only thing we have to prove is the last
sentence of Proposition 8.3, concerning the structure of the open subset U.
First, let us show that E = ∂Z supports an effective divisor D such that D2 > 0
and D · F ≥ 0 for every irreducible curve F. To do so, fix an irreducible component E0
of ∂Z with positive self-intersection. Assume that ∂Z is a cycle, and list cyclically
the other irreducible components: E1 , E2 , ..., up to Em , with E1 and Em intersecting
E0 . First, one defines a1 = 1. Then, one chooses a2 > 0 such that a1 E1 + a2 E2
intersects positively E1 , then a3 > 0 such that a1 E1 + a2 E2 + a3 E3 intersects
P
positively E1 and E2 , ..., up to m
i=1 ai Ei that intersects all components Ei , 1 ≤
COMMENSURATING ACTIONS OF BIRATIONAL GROUPS
36
i ≤ m − 1 positively. Since E02 > 0 and E0 intersects Em , one can find a coefficient
a0 for which the divisor
m
X
D=
ai Ei
i=0
2
satisfies D > 0 and D · Ei > 0 for all Ei , 0 ≤ i ≤ m. This implies that D
intersects every irreducible curve F non-negatively. Thus, D is big and nef (see
[26], Section 2.2). A similar proof applies when ∂Z is a zigzag.
Let W be the subspace of NS(Z) spanned by classes of curves F with D · F = 0. Since D² > 0, the Hodge index theorem implies that the intersection form is negative definite on W. Thus, the Mumford-Grauert contraction theorem provides a birational morphism τ : Z → Z′ that contracts simultaneously all curves F with class [F ] ∈ W and is an isomorphism on the complement of their union; in particular, τ is an isomorphism from a
neighborhood V of ∂Z onto its image τ (V) ⊂ Z 0 . The modification τ may contract
curves that are contained in U, and may create singularities for the new open subset
U′ = τ (U), but does not modify Z near the boundary ∂Z. Now, on Z′, the divisor D′ = τ∗ (D) intersects every effective curve positively and satisfies (D′)² > 0.
Nakai-Moishezon criterion shows that D0 is ample (see [26], Section 1.2.B); consequently, there is an embedding of Z 0 into a projective space and a hyperplane section
H of Z 0 for which Z 0 r H coincides with U 0 . This proves that U is a blow-up of the
affine (singular) surface U 0 .
8.3. Projective surfaces and automorphisms. In this section, we (almost always)
assume that Γ acts by regular automorphisms on a projective surface X. This
corresponds to case (1) in Proposition 8.3. Our goal is the special case of Theorem B which is stated below as Theorem 8.8. We shall assume that Γ has Property (FW) in some of the statements (this was not a hypothesis in Theorem 8.1).
We may, and shall, assume that X is smooth. We refer to [1, 4, 20] for the
classification of surfaces and the main notions attached to them.
8.3.1. Action on the Néron-Severi group. The intersection form is a non-degenerate
quadratic form qX on the Néron-Severi group NS(X), and Hodge index theorem
asserts that its signature is (1, ρ(X) − 1), where ρ(X) denotes the Picard number,
i.e. the rank of the lattice NS(X) ' Zρ .
The action of Aut(X) on the Néron-Severi group NS(X) provides a linear representation preserving the intersection form qX . This gives a morphism
Aut(X) → O (NS(X); qX ).
Fix an ample class a in NS(X) and consider the hyperboloid
HX = {u ∈ NS(X) ⊗Z R; qX (u, u) = 1 and qX (u, a) > 0}.
This set is one of the two connected components of {u; qX (u, u) = 1}. With the Riemannian metric induced by (−qX ), it is a copy of the hyperbolic space of dimension
ρ(X) − 1; the group Aut(X) acts by isometries on this space (see [9]).
Proposition 8.4. Let X be a smooth projective surface. Let Γ be a subgroup of
Aut(X). If Γ has Property (FW), its action on NS(X) fixes a very ample class, the
image of Γ in O (NS(X); qX ) is finite, and a finite index subgroup of Γ is contained
in Aut(X)0 .
Proof. The image Γ∗ of Γ is contained in the arithmetic group O (NS(X); qX ).
The Néron-Severi group NS(X) is a lattice Zρ and qX is defined over Z. Thus,
O(NS(X); qX) is a standard arithmetic group in the sense of [6], § 1.1. The main results
of [6] imply that the action of Γ∗ on the hyperbolic space HX has a fixed point. Let
u be such a fixed point. Since qX is negative definite on the orthogonal complement
u⊥ of u in NS(X), and Γ∗ is a discrete group acting by isometries on it, we deduce that Γ∗ is finite. If a is a very ample class, the sum Σ_{γ∈Γ∗} γ*(a) is an invariant, very ample class.
The kernel K ⊂ Aut(X) of the action on NS(X) contains Aut(X)0 as a finite
index subgroup. Thus, if Γ has Property (FW), it contains a finite index subgroup
that is contained in Aut(X)0 .
8.3.2. Non-rational surfaces. In this paragraph, we assume that the surface X is
not rational. The following proposition classifies subgroups of Bir(X) with Property (FW); in particular, such a group is finite if the Kodaira dimension of X is
non-negative (resp. if the characteristic of k is positive). Recall that we denote by
Z̄ ⊂ Q̄ the ring of algebraic integers.
Proposition 8.5. Let X be a smooth, projective, and non-rational surface, over the
algebraically closed field k. Let Γ be an infinite subgroup of Bir(X) with Property (FW). Then k has characteristic 0, and there is a birational map ϕ : X 99K
C × P1k that conjugates Γ to a subgroup of Aut(C × P1k ). Moreover, there is a finite
index subgroup Γ0 of Γ such that ϕ ◦ Γ0 ◦ ϕ−1 is a subgroup of PGL 2 (Z), acting
on C × P1k by linear projective transformations on the second factor.
Proof. Assume, first, that the Kodaira dimension of X is non-negative. Let π : X →
X0 be the projection of X on its (unique) minimal model (see [20], Thm. V.5.8).
The group Bir(X0 ) coincides with Aut(X0 ); thus, after conjugacy by π, Γ becomes
a subgroup of Aut(X0 ), and Proposition 8.4 provides a finite index subgroup Γ0 ≤ Γ
that is contained in Aut(X0 )0 . Note that Γ0 inherits Property (FW) from Γ.
If the Kodaira dimension of X is equal to 2, the group Aut(X0 )0 is trivial; hence
Γ0 = {IdX0 } and Γ is finite. If the Kodaira dimension is equal to 1, Aut(X0 )0
is either trivial, or isomorphic to an elliptic curve, acting by translations on the
fibers of the Kodaira-Iitaka fibration of X0 (this occurs, for instance, when X0 is the
product of an elliptic curve with a curve of higher genus). If the Kodaira dimension
is 0, then Aut(X0 )0 is also an abelian group (either trivial, or isomorphic to an
abelian surface). Since abelian groups with Property (FW) are finite, the group Γ0
is finite, and so is Γ.
We may now assume that the Kodaira dimension kod(X) is negative. Since X is not rational, X is birationally equivalent to a product S = C × P1k , where C is a curve of genus g(C) ≥ 1. Denote by k(C) the field of rational functions on the curve C. We fix a local coordinate x on C and denote the elements of k(C) as
functions a(x) of x. The semi-direct product Aut(C) ⋉ PGL 2 (k(C)) acts on S by birational transformations of the form
  (x, y) ∈ C × P1k ↦ ( f(x), (a(x)y + b(x)) / (c(x)y + d(x)) ),
and Bir(S) coincides with this group Aut(C) ⋉ PGL 2 (k(C)); indeed, the first projection π : S → C is equivariant under the action of Bir(S) because every rational
map P1k → C is constant.
Since g(C) ≥ 1, Aut(C) is virtually abelian. Property (FW) implies that there
is a finite index, normal subgroup Γ0 ≤ Γ that is contained in PGL 2 (k(C)). By
Corollary 3.8, every subgroup of PGL 2 (k(C)) with Property (FW) is conjugate to a
subgroup of PGL 2 (Z), or is a finite group if the characteristic of the field k is positive.
We may assume now that the characteristic of k is 0 and that Γ0 ⊂ PGL 2 (Z) is
infinite. Consider an element g of Γ; it acts as a birational transformation on the
surface S = C × P1k , and it normalizes Γ0 :
g ◦ Γ0 = Γ0 ◦ g.
Since Γ0 acts by automorphisms on S, the finite set Ind(g) is Γ0 -invariant. But a
subgroup of PGL 2 (k) with Property (FW) preserving a non-empty, finite subset of
P1 (k) is a finite group. Thus, Ind(g) must be empty. This shows that Γ is contained
in Aut(S).
8.3.3. Rational surfaces. We now assume that X is a smooth rational surface, that
Γ ≤ Bir(X) is an infinite subgroup with Property (FW), and that Γ contains a
finite index, normal subgroup Γ0 that is contained in Aut(X)0 . Recall that a smooth
surface Y is minimal if it does not contain any smooth rational curve of the first
kind, i.e. with self-intersection −1. Every exceptional curve of the first kind E ⊂
X is determined by its class in NS(X) and is therefore invariant under the action
of Aut(X)0 . Contracting such (−1)-curves one by one, we obtain the following
lemma.
Lemma 8.6. There is a birational morphism π : X → Y onto a minimal rational surface Y that is equivariant under the action of Γ0 ; Y does not contain any
exceptional curve of the first kind and Γ0 becomes a subgroup of Aut(Y )0 .
Let us recall the classification of minimal rational surfaces and describe their
groups of automorphisms. First, we have the projective plane P2k , with Aut(P2k ) =
PGL 3 (k) acting by linear projective transformations. Then comes the quadric P1k ×
P1k , with Aut(P1k × P1k )0 = PGL 2 (k) × PGL 2 (k) acting by linear projective transformations on each factor; the group of automorphisms of the quadric is the semidirect product of PGL 2 (k) × PGL 2 (k) with the group of order 2 generated by the
permutation of the two factors, η(x, y) = (y, x). Then, for each integer m ≥ 1, the
Hirzebruch surface Fm is the projectivization of the rank 2 bundle O ⊕ O(m) over
P1k ; it may be characterized as the unique ruled surface Z → P1k with a section C
of self-intersection −m. Its group of automorphisms is connected and preserves the
ruling. This provides a homomorphism Aut(Fm ) → PGL 2 (k) that describes the action on the base of the ruling, and it turns out that this homomorphism is surjective.
If we choose coordinates for which the section C intersects each fiber at infinity, the
kernel Jm of this homomorphism acts by transformations of type
(x, y) 7→ (x, αy + β(x))
where β(x) is a polynomial function of degree ≤ m. In particular, Jm is solvable.
In other words, Aut(Fm ) is isomorphic to the group
(GL 2 (k)/µm ) ⋉ Wm
where Wm is the linear representation of GL 2 (k) on homogeneous polynomials
of degree m in two variables, and µm is the kernel of this representation: it is
the subgroup of GL 2 (k) given by scalar multiplications by roots of unity of order
dividing m.
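For orientation (a standard fact, not needed in the argument): F1 is the blow-up of P2k at a point p and is therefore not minimal; its automorphism group is the stabilizer of p, Aut(F1) ≅ {g ∈ PGL 3 (k) ; g(p) = p}, which can be compared with the discussion of blow-ups of P2k in Remark 8.9 below.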
Lemma 8.7. Given the above conjugacy π : X → Y , the subgroup π ◦ Γ ◦ π −1 of
Bir(Y ) is contained in Aut(Y ).
Proof. Assume that the surface Y is the quadric P1k × P1k . Then, according to Theorem 3.6, Γ0 is conjugate to a subgroup of PGL 2 (Z) × PGL 2 (Z). If g is an element
of Γ, its indeterminacy locus is a finite subset Ind(g) of P1k × P1k that is invariant under the action of Γ0 , because g normalizes Γ0 . Since Γ0 is infinite and has Property
(FW), this set Ind(g) is empty (Lemma 3.5). Thus, Γ is contained in Aut(P1k × P1k ).
The same argument applies for Hirzebruch surfaces. Indeed, Γ0 is an infinite
subgroup of Aut(Fm ) with Property (FW). Thus, up to conjugacy, its projection
in PGL 2 (k) is contained in PGL 2 (Z). If it were finite, a finite index subgroup
of Γ0 would be contained in the solvable group Jm , and would therefore be finite
too by Property (FW); this would contradict |Γ0 | = ∞. Thus, the projection of
Γ0 in PGL 2 (Z) is infinite. If g is an element of Γ, Ind(g) is a finite, Γ0 -invariant
subset, and by looking at the projection of this set in P1k one deduces that it is empty
(Lemma 3.5). This proves that Γ is contained in Aut(Fm ).
Let us now assume that Y is the projective plane. Fix an element g of Γ, and
assume that g is not an automorphism of Y = P2 ; the indeterminacy and exceptional
sets of g are Γ0 -invariant. Consider an irreducible curve C in the exceptional set of g, together with an indeterminacy point q of g on C. Changing Γ0 into a finite
index subgroup, we may assume that Γ0 fixes C and q; in particular, Γ0 fixes q, and
permutes the tangent lines of C through q. But the algebraic subgroup of PGL 3 (k)
preserving a point q and a line through q does not contain any infinite group with
Property (FW) (Lemma 3.5). Thus, again, Γ is contained in Aut(P2k ).
8.3.4. Conclusion, in Case (1). Putting everything together, we obtain the following particular case of Theorem B.
Theorem 8.8. Let X be a smooth projective surface over an algebraically closed
field k. Let Γ be an infinite subgroup of Bir(X) with Property (FW). If a finite index
subgroup of Γ is contained in Aut(X), there is a birational morphism ϕ : X → Y
that conjugates Γ to a subgroup ΓY of Aut(Y ), with Y in the following list:
(1) Y is the product of a curve C by P1k , the field k has characteristic 0, and a
finite index subgroup Γ0Y of ΓY is contained in PGL 2 (Z), acting by linear
projective transformations on the second factor;
(2) Y is P1k × P1k , the field k has characteristic 0, and ΓY is contained in
PGL 2 (Z) × PGL 2 (Z);
(3) Y is a Hirzebruch surface Fm and k has characteristic 0;
(4) Y is the projective plane P2k .
In particular, Y = P2k if the characteristic of k is positive.
Remark 8.9. Denote by ϕ : X → Y the birational morphism given by the theorem.
Changing Γ into a finite index subgroup, we may assume that it acts by automorphisms on both X and Y .
If Y = C × P1 , then ϕ is in fact an isomorphism. To prove this fact, denote by ψ the inverse of ϕ. The indeterminacy set Ind(ψ) is ΓY -invariant because both Γ
and ΓY act by automorphisms. From Lemma 3.5, applied to Γ0Y ⊂ PGL 2 (k), we
deduce that Ind(ψ) is empty and ψ is an isomorphism. The same argument implies
that the conjugacy is an isomorphism if Y = P1k × P1k or a Hirzebruch surface Fm ,
m ≥ 1.
Now, if Y is P2k , ϕ is not always an isomorphism. For instance, SL 2 (C) acts on
P2k with a fixed point, and one may blow up this point to get a new surface with an
action of groups with Property (FW). But this is the only possible example, i.e. X
is either P2k , or a single blow-up of P2k (because Γ ⊂ PGL 3 (C) cannot preserve more than one base point for ϕ−1 without losing Property (FW)).
8.4. Invariant fibrations. We now assume that Γ has Property (FW) and acts by
automorphisms on U ⊂ X, and that the boundary ∂X = X r U is the union of ℓ ≥ 2 pairwise disjoint rational curves Ei ; each of them has self-intersection Ei² = 0 and
is contracted by at least one element of Γ. This corresponds to the fourth possibility
in Gizatullin’s Proposition 8.3. Since Ei · Ej = 0, the Hodge index theorem implies
that the classes ei = [Ei ] span a unique line in NS(X), and that [Ei ] intersects
non-negatively every curve.
From Section 8.3.2, we may, and do, assume that X is a rational surface. In particular, the Euler characteristic of the structure sheaf is equal to 1: χ(OX ) = 1,
and the Riemann-Roch formula gives
h⁰(X, E1) − h¹(X, E1) + h²(X, E1) = (E1² − KX · E1)/2 + 1.
The genus formula implies KX · E1 = −2, and Serre duality shows that h²(X, E1) = h⁰(X, KX − E1) = 0, because otherwise −2 = (KX − E1) · E1 would be non-negative (because E1 intersects non-negatively every curve). From this, we obtain
h⁰(X, E1) = h¹(X, E1) + 2 ≥ 2.
Since E12 = 0, we conclude that the space H 0 (X, E1 ) has dimension 2 and determines a fibration π : X → P1k ; the curve E1 , as well as the Ei for i ≥ 2, are fibers
of π.
If f is an automorphism of U and F ⊂ U is a fiber of π, then f (F ) is a (complete)
rational curve. Its projection π(f (F )) is contained in the affine curve P1k r ∪i π(Ei )
and must therefore be reduced to a point. Thus, f (F ) is a fiber of π and f preserves
the fibration. This proves the following lemma.
Lemma 8.10. There is a fibration π : X → P1k such that
(1) every component Ei of ∂X is a fiber of π, and U = π −1 (V) for an open
subset V ⊂ P1k ;
(2) the generic fiber of π is a smooth rational curve;
(3) Γ permutes the fibers of π: there is a morphism ρ : Γ → PGL 2 (k) such that
π ◦ f = ρ(f ) ◦ π for every f ∈ Γ.
The open subset V ⊊ P1k is invariant under the action of ρ(Γ); hence ρ(Γ) is
finite by Property (FW) and Lemma 3.5. Let Γ0 be the kernel of this morphism. Let
ϕ : X 99K P1k × P1k be a birational map that conjugates the fibration π to the first
projection τ : P1k × P1k → P1k . Then, Γ0 is conjugate to a subgroup of PGL 2 (k(x))
acting on P1k × P1k by linear projective transformations of the fibers of τ . From
Corollary 3.8, a new conjugacy by an element of PGL 2 (k(x)) changes Γ0 into an
infinite subgroup of PGL 2 (Z). Then, as in Sections 8.3.2 and 8.3.3 we conclude
that Γ becomes a subgroup of PGL 2 (Z) × PGL 2 (Z), with a finite projection on the
first factor.
Proposition 8.11. Let Γ be an infinite group with Property (FW), with Γ ⊂ Aut(U),
and U ⊂ Z as in case (4) of Proposition 8.3. There exists a birational map
ψ : Z 99K P1k × P1k that conjugates Γ to a subgroup of PGL 2 (Z) × PGL 2 (Z), with
a finite projection on the first factor.
8.5. Completions by zigzags. Two cases remain to be studied: ∂Z can be a chain
of rational curves (a zigzag in Gizatullin’s terminology) or a cycle of rational curves
(a loop in Gizatullin’s terminology). Cycles are considered in Section 9. In this
section, we rely on difficult results of Danilov and Gizatullin to treat the case of
chains of rational curves (i.e. case (3) in Proposition 8.3). Thus, in this section
(i) ∂X is a chain of smooth rational curves Ei
(ii) U = X r ∂X is an affine surface (singularities are allowed)
(iii) every irreducible component Ei is contracted to a point of ∂X by at least
one element of Γ ⊂ Aut(U) ⊂ Bir(X).
In [18, 19], Danilov and Gizatullin introduce a set of “standard completions” of
the affine surface U. As in Section 8.2, a completion (or more precisely a “marked
completion”) is an embedding ι : U → Y into a complete surface such that ∂Y =
Y r ι(U) is a curve (this boundary curve may be reducible). Danilov and Gizatullin
only consider completions for which ∂Y is a chain of smooth rational curves and
Y is smooth in a neighborhood of ∂Y ; the surface X provides such a completion.
Two completions ι : U → Y and ι0 : U → Y 0 are isomorphic if the birational map
ι0 ◦ι−1 : Y → Y 0 is an isomorphism; in particular, the boundary curves are identified
by this isomorphism. The group Aut(U) acts by pre-composition on the set of
isomorphism classes of (marked) completions.
Among all possible completions, Danilov and Gizatullin distinguish a class of
“standard (marked) completions”, for which we refer to [18] for a definition. There
are elementary links (corresponding to certain birational mappings Y 99K Y 0 ) between standard completions, and one can construct a graph ∆U whose vertices are
standard completions; there is an edge between two completions if one can pass
from one to the other by an elementary link.
Example 8.12. A completion is m-standard, for some m ∈ Z, if the boundary curve
∂Y is a chain of n + 1 consecutive rational curves E0 , E1 , . . ., En (n ≥ 1) such that
E_0^2 = 0, E_1^2 = −m, and E_i^2 = −2 if i ≥ 2.
Blowing up the intersection point q = E_0 ∩ E_1, one creates a new chain starting with E_0′ satisfying (E_0′)^2 = −1; blowing down E_0′, one creates a new (m + 1)-standard completion. This is one of the elementary links.
Standard completions are defined by constraints on the self-intersections of the
components Ei . Thus, the action of Aut(U) on completions permutes the standard completions; this action determines a morphism from Aut(U) to the group of
isometries (or automorphisms) of the graph ∆U (see [18]):
Aut(U) → Iso (∆U ).
Theorem 8.13 (Danilov and Gizatullin, [18, 19]). The graph ∆U of all isomorphism
classes of standard completions of U is a tree. The group Aut(U) acts by isometries
of this tree. The stabilizer of a vertex ι : U → Y is the subgroup G(ι) of automorphisms of the complete surface Y that fix the curve ∂Y . This group is an algebraic
subgroup of Aut(Y ).
The last property means that G(ι) is an algebraic group that acts algebraically on
Y . It coincides with the subgroup of Aut(Y ) fixing the boundary ∂Y ; the fact that
it is algebraic follows from the existence of a G(ι)-invariant, big and nef divisor
which is supported on ∂Y (see the last sentence of Proposition 8.3).
The crucial assertion in this theorem is that ∆U is a simplicial tree (typically,
infinitely many edges emanate from each vertex). There are sufficiently many links
to assure connectedness, but not too many in order to prevent the existence of cycles
in the graph ∆U .
Corollary 8.14. If Γ is a subgroup of Aut(U) that has the fixed point property on
trees, then Γ is contained in G(ι) ⊂ Aut(Y ) for some completion ι : U → Y .
If Γ has Property (FW), it has Property (FA) (see Section 3.4). Thus, if it acts
by automorphisms on U, Γ is conjugate to the subgroup G(ι) of Aut(Y ), for some
zigzag-completion ι : U → Y . Theorem 8.8 of Section 8.3.3 implies that the action
of Γ on the initial surface X is conjugate to a regular action on P2k , P1k ×P1k or Fm , for
some Hirzebruch surface Fm . This action preserves a curve, namely the image of the
zigzag into the surface Y . The following examples list all possibilities, and conclude
the proof of Theorem B in the case of zigzags (i.e. case (3) in Proposition 8.3).
Example 8.15. Consider the projective plane P2k , together with an infinite subgroup
Γ ⊂ Aut(P2k ) that preserves a curve C and has Property (FW). Then, C must be a
smooth rational curve: either a line, or a smooth conic. If C is the line “at infinity”,
then Γ acts by affine transformations on the affine plane P^2_k ∖ C. If the curve is the conic x^2 + y^2 + z^2 = 0, Γ becomes a subgroup of PO_3(k).
Example 8.16. When Γ is a subgroup of Aut(P1k × P1k ) that preserves a curve C
and has Property (FW), then C must be a smooth curve because Γ has no finite
orbit (Lemma 3.5). Similarly, the two projections C → P1k being equivariant with
respect to the morphisms Γ → PGL 2 (k), they have no ramification points. Thus,
C is a smooth rational curve, and its projections onto each factor are isomorphisms.
In particular, the action of Γ on C and on each factor are conjugate. From these
conjugacies, one deduces that the action of Γ on P1k × P1k is conjugate to a diagonal
embedding
γ ∈ Γ 7→ (ρ(γ), ρ(γ)) ∈ PGL 2 (k) × PGL 2 (k)
preserving the diagonal.
Example 8.17. Similarly, the group SL 2 (k) acts on the Hirzebruch surface Fm ,
preserving the zero section of the fibration π : Fm → P1k . This gives examples of
groups with Property (FW) acting on Fm and preserving a big and nef curve C.
Starting with one of the above examples, one can blow-up points on the invariant
curve C, and then contract C, to get examples of zigzag completions Y on which Γ
acts and contracts the boundary ∂Y .
9. BIRATIONAL TRANSFORMATIONS OF SURFACES II
In this section, U is a (normal, singular) affine surface with a completion X by a
cycle of ` rational curves. Every irreducible component Ei of the boundary ∂X =
X r U is contracted by at least one automorphism of U. Our goal is to classify
subgroups Γ of Aut(U) ⊂ Bir(X) that are infinite and have Property (FW): in fact,
we shall show that no such group exists. This ends the proof of Theorem B since
all the other possibilities of Proposition 8.3 have been dealt with in the previous
section.
Example 9.1. Let (A^1_k)^* denote the complement of the origin in the affine line A^1_k; it is isomorphic to the multiplicative group G_m over k. The surface (A^1_k)^* × (A^1_k)^* is an open subset in P^2_k whose boundary is the triangle of coordinate lines {[x : y : z] ; xyz = 0}. Thus, the boundary is a cycle of length ℓ = 3. The group of automorphisms of (A^1_k)^* × (A^1_k)^* is the semi-direct product
GL_2(Z) ⋉ (G_m(k) × G_m(k));
it does not contain any infinite group with Property (FW).
9.1. Resolution of indeterminacies. Let us order cyclically the irreducible components E_i of ∂X, so that E_i ∩ E_j ≠ ∅ if and only if i − j = ±1 (mod ℓ). Blowing up finitely many singularities of ∂X, we may assume that ℓ = 2^m for some integer m ≥ 1; in particular, every curve E_i is smooth. (With such a modification, one
may a priori create irreducible components of ∂X that are not contracted by the
group Γ.)
Lemma 9.2. Let f be an automorphism of U and let fX be the birational extension
of f to the surface X. Then
(1) Every indeterminacy point of fX is a singular point of ∂X, i.e. one of the
intersection points Ei ∩ Ei+1 .
(2) Indeterminacies of fX are resolved by inserting chains of rational curves.
Property (2) means that there exists a resolution of the indeterminacies of f_X, given by two birational morphisms ε : Y → X and π : Y → X with f ∘ ε = π, such that π^{-1}(∂X) = ε^{-1}(∂X) is a cycle of rational curves. Some of the singularities of ∂X have been blown up into chains of rational curves to construct Y.
Proof. Consider a minimal resolution of the indeterminacies of f_X. It is given by a finite sequence of blow-ups of the base points of f_X, producing a surface Y and two birational morphisms ε : Y → X and π : Y → X such that f_X = π ∘ ε^{-1}.
FIGURE 1. A blow-up sequence creating two (red) branches. No branch of this type appears for minimal resolution.
Since the indeterminacy points of fX are contained in ∂X, all necessary blow-ups
are centered on ∂X.
The total transform F = ε^*(∂X) is a union of rational curves: it is made of a
cycle, together with branches emanating from it. One of the assertions (1) and (2)
fails if and only if F is not a cycle; in that case, there is at least one branch.
Each branch is a tree of smooth rational curves, which may be blown-down onto
a smooth point; indeed, these branches come from smooth points of the main cycle
that have been blown-up finitely many times. Thus, there is a birational morphism
η : Y → Y0 onto a smooth surface Y0 that contracts the branches (and nothing
more).
The morphism π maps F onto the cycle ∂X, so that all branches of F are contracted by π. Thus, both ε and π induce (regular) birational morphisms ε_0 : Y_0 → X and π_0 : Y_0 → X. This contradicts the minimality of the resolution.
Let us introduce a family of surfaces
πk : Xk → X.
First, X_1 = X and π_1 is the identity map. Then, X_2 is obtained by blowing up the ℓ singularities of ∂X_1; X_2 is a compactification of U by a cycle ∂X_2 of 2ℓ = 2^{m+1} smooth rational curves. Then, X_3 is obtained by blowing up the singularities of ∂X_2, and so on. In particular, ∂X_k is a cycle of 2^{k−1}ℓ = 2^{m+k−1} curves.
Denote by Dk the dual graph of ∂Xk : vertices of Dk correspond to irreducible
components Ei of ∂Xk and edges to intersection points Ei ∩ Ej . A simple blow-up
(of a singular point) modifies both ∂Xk and Dk locally as follows
FIGURE 2. Blowing-up one point.
The group Aut(U) acts on H̃yp(X), and Lemma 9.2 shows that its action stabilizes the subset B of H̃yp(X) defined as
B = { C ∈ H̃yp(X) : ∃ k ≥ 1, C is an irreducible component of ∂X_k }.
In what follows, we shall parametrize B in two distinct ways by rational numbers.
9.2. Farey and dyadic parametrizations. Consider an edge of the graph D1 , and
identify this edge with the unit interval [0, 1]. Its endpoints correspond to two adjacent components Ei and Ei+1 of ∂X1 , and the edge corresponds to their intersection q. Blowing-up q creates a new vertex (see Figure 2). The edge is replaced
by two adjacent edges of D2 with a common vertex corresponding to the exceptional divisor and the other vertices corresponding to (the strict transforms of) Ei
and Ei+1 ; we may identify this part of D2 with the segment [0, 1], the three vertices
with {0, 1/2, 1}, and the two edges with [0, 1/2] and [1/2, 1].
Subsequent blow-ups may be organized in two different ways by using either a
dyadic or a Farey algorithm (see Figure 3).
In the dyadic algorithm, the vertices are labelled by dyadic numbers m/2^k. The vertices of D_{k+1} coming from an initial edge [0, 1] of D_1 are the points {n/2^k ; 0 ≤ n ≤ 2^k} of the segment [0, 1]. We denote by Dyad(k) the set of dyadic numbers n/2^k ∈ [0, 1]; thus, Dyad(k) ⊂ Dyad(k + 1). We shall say that an interval [a, b] is
a standard dyadic interval if a and b are two consecutive numbers in Dyad(k) for
some k.
In the Farey algorithm, the vertices correspond to rational numbers p/q. Adjacent
vertices of Dk coming from the initial segment [0, 1] correspond to pairs of rational
numbers (p/q, r/s) with ps − qr = ±1; two adjacent vertices of Dk give birth to
a new, middle vertex in Dk+1 : this middle vertex is (p + r)/(q + s) (in the dyadic
algorithm, the middle vertex is the “usual” Euclidean middle). We shall say that an interval [a, b] is a standard Farey interval if a = p/q and b = r/s with ps − qr = −1. We denote by Far(k) the finite set of rational numbers p/q ∈ [0, 1] that is given by the k-th step of the Farey algorithm; thus, Far(1) = {0, 1} and Far(k) is a set of 2^{k−1} + 1 rational numbers p/q with 0 ≤ p ≤ q. (One can check that 1 ≤ q ≤ Fib(k), with Fib(k) the k-th Fibonacci number.)
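To make the two subdivision schemes concrete, here is a small illustrative sketch; it is ours and not part of the original argument, the function names are made up, and the only facts it checks are the elementary cardinality statements quoted above.

```python
from fractions import Fraction

def farey_step(points):
    """One Farey step: between adjacent vertices p/q and r/s insert the mediant (p+r)/(q+s)."""
    refined = [points[0]]
    for a, b in zip(points, points[1:]):
        refined.append(Fraction(a.numerator + b.numerator,
                                a.denominator + b.denominator))
        refined.append(b)
    return refined

def dyadic_step(points):
    """One dyadic step: between adjacent vertices insert the Euclidean midpoint."""
    refined = [points[0]]
    for a, b in zip(points, points[1:]):
        refined.append((a + b) / 2)
        refined.append(b)
    return refined

far = [Fraction(0), Fraction(1)]   # Far(1) = {0, 1}
dya = [Fraction(0), Fraction(1)]   # first dyadic subdivision level
for k in range(2, 6):
    far, dya = farey_step(far), dyadic_step(dya)
    assert len(far) == 2 ** (k - 1) + 1   # |Far(k)| = 2^(k-1) + 1
print(far)   # Far(5): denominators grow at most like Fibonacci numbers
print(dya)   # dyadic points n/2^4 in [0, 1]
```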
FIGURE 3. On the left, the Farey algorithm. On the right, the dyadic one.
By construction, the graph D_1 has ℓ = 2^m edges. The edges of D_1 are in one-to-one correspondence with the singularities q_j of ∂X_1. Each edge determines a subset B_j of B; the elements of B_j are the curves C ⊂ ∂X_k (k ≥ 1) such that π_k(C) contains the singularity q_j determined by the edge. Using the dyadic algorithm (resp. Farey algorithm), the elements of B_j are in one-to-one correspondence with dyadic (resp. rational) numbers in [0, 1]. Gluing these segments cyclically together one gets a circle S^1, together with a nested sequence of subdivisions in ℓ, 2ℓ, ..., 2^{k−1}ℓ, ... intervals; each interval is a standard dyadic interval (resp. standard Farey interval) of one of the initial edges.
Since there are ℓ = 2^m initial edges, we may identify the graph D_1 with the circle S^1 = R/Z = [0, 1]/(0 ∼ 1) and the initial vertices with the dyadic numbers in Dyad(m) modulo 1 (resp. with the elements of Far(m) modulo 1). Doing this, the vertices of D_k are in one-to-one correspondence with the dyadic numbers in Dyad(k + m − 1) (resp. in Far(k + m − 1)).
Remark 9.3. (a).– By construction, the interval [p/q, r/s] ⊂ [0, 1] is a standard
Farey interval if and only if ps − qr = −1, iff it is delimited by two adjacent
elements of Far(m) for some m.
(b).– If h : [x, y] → [x′, y′] is a homeomorphism between two standard Farey intervals mapping rational numbers to rational numbers and standard Farey intervals to standard Farey intervals, then h is the restriction to [x, y] of a unique linear projective transformation with integer coefficients:
h(t) = (at + b)/(ct + d), for some element $\left(\begin{smallmatrix} a & b \\ c & d \end{smallmatrix}\right)$ of PGL_2(Z).
(c).– Similarly, if h is a homeomorphism mapping standard dyadic intervals to intervals of the same type, then h is the restriction of an affine dyadic map
h(t) = 2^m t + n/2^u, with m, n, u ∈ Z.
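As a quick illustration of item (b), one can write the integer matrix down explicitly. The sketch below is ours (not taken from the paper): the standard Farey interval [p/q, r/s] is parametrized by [0, 1] via the matrix with rows (r − p, p) and (s − q, q), which sends 0 to p/q, 1 to r/s and 1/2 to the mediant, and the map between two such intervals is the corresponding quotient of matrices, an element of PGL_2(Z).

```python
from fractions import Fraction

def interval_matrix(p, q, r, s):
    """Integer matrix whose Moebius action sends [0,1] onto the standard Farey
    interval [p/q, r/s] (ps - qr = -1): 0 -> p/q, 1 -> r/s, 1/2 -> mediant."""
    assert p * s - q * r == -1
    return ((r - p, p), (s - q, q))          # determinant +1

def moebius(M, t):
    (a, b), (c, d) = M
    return Fraction(a * t.numerator + b * t.denominator,
                    c * t.numerator + d * t.denominator)

def compose(M, N):                            # matrix product
    (a, b), (c, d), (e, f), (g, h) = M[0], M[1], N[0], N[1]
    return ((a * e + b * g, a * f + b * h), (c * e + d * g, c * f + d * h))

def inverse(M):                               # inverse of a determinant-one matrix
    (a, b), (c, d) = M
    return ((d, -b), (-c, a))

# the element of PGL_2(Z) mapping [1/3, 1/2] onto [2/5, 1/2], respecting the Farey structure
A = interval_matrix(1, 3, 1, 2)
B = interval_matrix(2, 5, 1, 2)
H = compose(B, inverse(A))                    # here H = ((-1, 1), (-4, 3))
assert moebius(H, Fraction(1, 3)) == Fraction(2, 5)
assert moebius(H, Fraction(1, 2)) == Fraction(1, 2)
assert moebius(H, Fraction(2, 5)) == Fraction(3, 7)   # mediant is sent to mediant
```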
In what follows, we denote by G_Far the group of self-homeomorphisms of S^1 = R/Z that are piecewise PGL_2(Z) with respect to a finite decomposition of the circle into standard Farey intervals [p/q, r/s]. In other words, if f is an element of G_Far, there are two partitions of the circle into consecutive intervals I_i and J_i such that the I_i are intervals with rational endpoints, f maps I_i to J_i, and the restriction f : I_i → J_i is the restriction of an element of PGL_2(Z) (see [34], §1.5.1).
Theorem 9.4. Let U be an affine surface with a compactification U ⊂ X such that ∂X := X ∖ U is a cycle of smooth rational curves. In the Farey parametrization of the set B ⊂ H̃yp(X) of boundary curves, the group Aut(U) acts on B as a subgroup of G_Far.
Remark 9.5. There is a unique orientation preserving self-homeomorphism of the
circle that maps Dyad(k) to Far(k) for every k. This self-homeomorphism conjugates G_Far to the group G_Dya of self-homeomorphisms of the circle that are piecewise affine with respect to a dyadic decomposition of the circle, with slopes in ±2^Z,
and with translation parts in Z[1/2]. Using the parametrization of B by dyadic numbers, the image of Aut(U) becomes a subgroup of GDya .
Remark 9.6. The reason why we keep in parallel the dyadic and Farey viewpoints
is the following: the Farey viewpoint is more natural for algebraic geometers (this is
related to toric –i.e. monomial– maps and appears clearly in [21]), while the dyadic
viewpoint is more natural to geometric group theorists, because this is the classical
setting used in the study of Thompson groups (see [34], §1.5).
Proof. Lemma 9.2 is the main ingredient. Consider the action of the group Aut(U)
on the set B. Let f be an element of Aut(U) ⊂ Bir(X). Consider an irreducible
curve E ∈ B, and denote by F its image: F = f• (E) is an element of B by
Lemma 9.2. There are integers k and l such that E ⊂ ∂X_k and F ⊂ ∂X_l. Replacing X_k by a higher blow-up X_m → X, we may assume that f_{lm} := π_l^{-1} ∘ f ∘ π_m is regular on a neighborhood of the curve E (Lemma 9.2). Let q be one of the two singularities of ∂X_m that are contained in E, and let E′ be the second irreducible component of ∂X_m containing q. If E′ is blown down by f_{lm}, its image is one of the two singularities of ∂X_l contained in F (by Lemma 9.2). Consider the smallest integer n ≥ l such that ∂X_n contains the strict transform F′ = f_•(E′); in X_n, the curve F′ is adjacent to the strict transform of F (still denoted F), and f is a local isomorphism from a neighborhood of q in X_m to a neighborhood of q′ := F ∩ F′ in X_n.
Now, if one blows up q, the exceptional divisor D is mapped by f_• to the exceptional divisor D′ obtained by a simple blow-up of q′: f lifts to a local isomorphism from a neighborhood of D to a neighborhood of D′, the action from D to D′ being given by the differential df_q. The curve D contains two singularities of ∂X_{m+1}, which can be blown up too: again, f lifts to a local isomorphism if one blows up the singularities of ∂X_{n+1} ∩ D′. We can repeat this process indefinitely. Let us now phrase this remark differently. The point q determines an edge of D_m, hence a standard Farey interval I(q). The point q′ determines an edge of D_n, hence another standard Farey interval I(q′). Then, the points of B that are parametrized by rational numbers in I(q) are mapped by f_• to rational numbers in I(q′), and this map respects the Farey order: if we identify I(q) and I(q′) with [0, 1], f_• is the restriction of a monotone map that sends Far(k) to Far(k) for every k. Thus, on I(q), f_• is the restriction of a linear projective transformation with integer coefficients (see Remark 9.3-(b)). This shows that f_• is an element of G_Far.
9.3. Conclusion. Consider the group G^*_Dya of self-homeomorphisms h of the circle S^1 = R/Z that are piecewise affine with respect to a finite partition of R/Z into dyadic intervals [x_i, x_{i+1}[ with x_i in Z[1/2]/Z for every i, and satisfy
h(t) = 2^{m_i} t + a_i
with m_i ∈ Z and a_i ∈ Z[1/2] for every i. This group is known as the Thompson group of the circle, and is isomorphic to the group G^*_Far of orientation-preserving self-homeomorphisms in G_Far (defined in §9.2).
Theorem 9.7 (Farley, Hughes [15, 22]). Every subgroup of G^*_Dya (and hence of G_Far) with Property (FW) is a finite cyclic group.
Indeed, fixing a gap in an earlier construction of Farley [15] (see the footnote below), Hughes proved [22] that G_Far has Property PW, in the sense that it admits a commensurating action whose associated length function is a proper map (see also Navas' book [34]). This implies the conclusion, because every finite group of orientation-preserving self-homeomorphisms of the circle is cyclic.
Thus, if Γ is a subgroup of Aut(U) with Property (FW), it contains a finite index subgroup Γ_0 that acts trivially on the set B ⊂ H̃yp(X). This means that Γ_0 extends as a group of automorphisms of X fixing the boundary ∂X. Since ∂X supports a big and nef divisor, Γ_0 contains a finite index subgroup Γ_1 that is contained in Aut(X)^0.
Note that Γ1 has Property (FW) because it is a finite index subgroup of Γ. It
preserves every irreducible component of the boundary curve ∂X, as well as its singularities. As such, it must act trivially on ∂X. When we apply Theorem 8.8 to Γ1 ,
the conjugacy ϕ : X → Y can not contract ∂X, because the boundary supports an
ample divisor. Thus, Γ1 is conjugate to a subgroup of Aut(Y ) that fixes a curve
pointwise. This is not possible if Γ1 is infinite (see Theorem 8.8 and the remarks
following it).
We conclude that Γ is finite in case (2) of Proposition 8.3.
10. BIRATIONAL ACTIONS OF SL_2(Z[√d])
We develop here Example 1.4. If k is an algebraically closed field of characteristic 0, therefore containing Q, we denote by σ_1 and σ_2 the distinct embeddings of Q(√d) into k. Let j_1 and j_2 be the resulting embeddings of SL_2(Z[√d]) into SL_2(k), and j = j_1 × j_2 the resulting embedding into
G = SL_2(k) × SL_2(k).
Theorem 10.1. Let Γ be a finite index subgroup of SL_2(Z[√d]). Let X be an
irreducible projective surface over an algebraically closed field k. Let α : Γ →
Bir(X) be a homomorphism with infinite image. Then k has characteristic zero,
and there exist a finite index subgroup Γ0 of Γ and a birational map ϕ : Y 99K X
such that
(1) Y is the projective plane P2 , a Hirzebruch surface Fm , or C × P1 for some
curve C;
(2) ϕ−1 α(Γ)ϕ ⊂ Aut(Y );
Footnote 2: The gap in Farley's argument lies in Prop. 2.3 and Thm. 2.4 of [15].
(3) there is a unique algebraic homomorphism β : G → Aut(Y ) such that
ϕ−1 α(γ)ϕ = β(j(γ))
for every γ ∈ Γ0 .
Theorem B ensures that the characteristic of k is 0 and that (1) and (2) are satisfied. If Y is P2 or a Hirzebruch surface Fm , then Aut(Y ) is a linear algebraic
group. If Y is a product C × P1 , a finite index subgroup of Γ preserves the projection onto P1 , so that it acts via an embedding into the linear algebraic group
Aut(P1 ) = PGL 2 (k).
When k has positive characteristic, Y is the projective plane, and the Γ-action is
given by a homomorphism Γ → PGL 3 (k). Then we use the fact that for any n,
every homomorphism f : Γ → GL n (k) has finite image. Indeed, it is well-known
that GL n (k) has no infinite order distorted elements: elements of infinite order have
some transcendental eigenvalue and the conclusion easily follows. Since Γ has an
exponentially distorted cyclic subgroup, f has infinite kernel, and infinite normal
subgroups of Γ have finite index (Margulis normal subgroup theorem).
On the other hand, in characteristic zero we conclude the proof of Theorem 10.1
with the following lemma.
Lemma 10.2. Let k be any field extension of Q(√d). Consider the embedding j of SL_2(Z[√d]) into G = SL_2(k) × SL_2(k) given by the standard embedding into the left-hand SL_2 and its Galois conjugate in the right-hand SL_2. Then for every linear algebraic group H and homomorphism f : SL_2(Z[√d]) → H(k), there exists a unique homomorphism f̄ : G → H of k-algebraic groups such that the homomorphisms f and f̄ ∘ j coincide on some finite index subgroup of Γ.
Proof. The uniqueness is a consequence of Zariski density of the image of j. Let us prove the existence. Zariski density allows us to reduce to the case when H = SL_n. First, the case k = R is given by Margulis' superrigidity, along with the fact that every continuous real representation of SL_n(R) is algebraic. The case of fields containing R immediately follows, and in turn it follows for subfields of overfields of R (as soon as they contain Q(√d)).
11. OPEN PROBLEMS
11.1. Regularization and Calabi-Yau varieties.
Question 11.1. Let Γ be a group with Property (FW). Is every birational action of Γ regularizable? Here regularizable is defined in the same way as pseudo-regularizable, but assuming that the action on U is by automorphisms (instead of pseudo-automorphisms).
A particular case is given by Calabi-Yau varieties, in the strict sense of a simply connected complex projective manifold X with trivial canonical bundle and h^{k,0}(X) = 0 for 0 < k < dim(X). For such a variety the group Bir(X) coincides with Psaut(X). One can then ask (1) whether every infinite subgroup Γ of
Psaut(X) with property (FW) is regularizable on some birational model Y of X
(without restricting the action to a dense open subset), and (2) what are the possibilities for such a group Γ.
11.2. Transfixing birational groups.
Question 11.2. For which irreducible projective varieties X
(1) Bir(X) does not transfix Hyp(X)?
(2) some finitely generated subgroup of Bir(X) does not transfix Hyp(X)?
(3) some cyclic subgroup of Bir(X) does not transfix Hyp(X)?
We have the implications: X is ruled ⇒ (3) ⇒ (2) ⇒ (1). In dimension 2, we have: ruled ⇔ (1), (1) ⇏ (2), and (2) ⇔ (3) (see §7.1). It would be interesting to find counterexamples to these equivalences in higher dimension, and settle each of the problems
raised in Question 11.2 in dimension 3.
11.3. The affine space. The group of affine transformations of A^3_C contains SL_3(C), and this group contains many subgroups with Property (FW). In the case of surfaces, Theorem B shows that groups of birational transformations with Property (FW) are contained in algebraic groups, up to conjugacy. The following question asks whether this type of theorem may hold for Aut(A^3_C).
Question 11.3. Does there exist an infinite subgroup of Aut(A^3_C) with Property (FW) that is not conjugate to a group of affine transformations of A^3_C?
11.4. Length functions. Recall that a length function ℓ on a group G is quasi-geodesic if there exists M > 0 such that for every n ≥ 1 and every g ∈ G with ℓ(g) ≤ n, there exist 1 = g_0, g_1, ..., g_n = g in G such that ℓ(g_{i−1}^{-1} g_i) ≤ M for all i. Equivalently, G, endowed with the distance (g, h) ↦ ℓ(g^{-1} h), is quasi-isometric to a connected graph.
Question 11.4. Given an irreducible variety X, is the length function
g ∈ Bir(X) ↦ |Hyp(X) △ g Hyp(X)|
quasi-geodesic? In particular, what about X = P^2 and the Cremona group Bir(P^2)?
REFERENCES
[1] Wolf P. Barth, Klaus Hulek, Chris A. M. Peters, and Antonius Van de Ven. Compact complex
surfaces, volume 4 of Ergebnisse der Mathematik und ihrer Grenzgebiete. Springer-Verlag,
Berlin, second edition, 2004.
[2] Hyman Bass. Groups of integral representation type. Pacific J. Math., 86(1):15–51, 1980.
[3] Hyman Bass and Alexander Lubotzky. Automorphisms of groups and of schemes of finite type.
Israel J. Math., 44(1):1–22, 1983.
[4] Arnaud Beauville. Surfaces algébriques complexes. Société Mathématique de France, Paris,
1978. Avec une sommaire en anglais, Astérisque, No. 54.
[5] Eric Bedford and Kyounghee Kim. Dynamics of (pseudo) automorphisms of 3-space: periodicity versus positive entropy. Publ. Mat., 58(1):65–119, 2014.
[6] Nicolas Bergeron, Frédéric Haglund, and Daniel T. Wise. Hyperplane sections in arithmetic
hyperbolic manifolds. J. Lond. Math. Soc. (2), 83(2):431–448, 2011.
[7] Sébastien Boucksom, Charles Favre, and Mattias Jonsson. Valuations and plurisubharmonic
singularities. Publ. Res. Inst. Math. Sci., 44(2):449–494, 2008.
[8] Serge Cantat. Sur les groupes de transformations birationnelles des surfaces. Annals Math.,
(174), 2011.
[9] Serge Cantat. Dynamics of automorphisms of compact complex surfaces. In Frontiers in Complex Dynamics: In celebration of John Milnor’s 80th birthday, volume 51 of Princeton Math.
Series, pages 463–514. Princeton University Press, 2014.
[10] Serge Cantat and Keiji Oguiso. Birational automorphism groups and the movable cone theorem for Calabi-Yau manifolds of Wehler type via universal Coxeter groups. Amer. J. Math.,
137(4):1013–1044, 2015.
[11] Yves Cornulier. Group actions with commensurated subsets, wallings and cubings. ArXiv
1302.5982, (1):1–71, 2015.
[12] Yves Cornulier. Irreducible lattices, invariant means, and commensurating actions. Math. Z.,
279(1-2):1–26, 2015.
[13] Pierre de la Harpe and Alain Valette. La propriété (T ) de Kazhdan pour les groupes localement
compacts (avec un appendice de Marc Burger). Astérisque, (175):158, 1989. With an appendix
by M. Burger.
[14] J. Diller and C. Favre. Dynamics of bimeromorphic maps of surfaces. Amer. J. Math.,
123(6):1135–1169, 2001.
[15] Daniel S. Farley. Proper isometric actions of Thompson’s groups on Hilbert space. Int. Math.
Res. Not., (45):2409–2414, 2003.
[16] M. J. Fryers. The movable fan of the Horrocks-Mumford quintic. Unpublished manuscript, arXiv:math/0102055, pages 1–20, 2001.
[17] M. H. Gizatullin. Invariants of incomplete algebraic surfaces that can be obtained by means of
completions. Izv. Akad. Nauk SSSR Ser. Mat., 35:485–497, 1971.
[18] M. H. Gizatullin and V. I. Danilov. Automorphisms of affine surfaces. I. Izv. Akad. Nauk SSSR
Ser. Mat., 39(3):523–565, 703, 1975.
[19] M. H. Gizatullin and V. I. Danilov. Automorphisms of affine surfaces. II. Izv. Akad. Nauk SSSR
Ser. Mat., 41(1):54–103, 231, 1977.
[20] Robin Hartshorne. Algebraic geometry. Springer-Verlag, New York, 1977. Graduate Texts in
Mathematics, No. 52.
[21] John H. Hubbard and Peter Papadopol. Newton’s method applied to two quadratic equations in
C2 viewed as a global dynamical system. Mem. Amer. Math. Soc., 191(891):vi+146, 2008.
[22] Bruce Hughes. Local similarities and the Haagerup property. With an appendix by Daniel S. Farley. Groups Geom. Dyn., 3(2):299–315, 2009.
[23] Shigeru Iitaka. Algebraic geometry: an introduction to the birational geometry of algebraic
varieties, volume 76. Springer, 1982.
[24] Vaughan FR Jones. A no-go theorem for the continuum limit of a periodic quantum spin chain.
arXiv preprint arXiv:1607.08769, 2016.
[25] János Kollár, Karen E. Smith, and Alessio Corti. Rational and nearly rational varieties, volume 92 of Cambridge Studies in Advanced Mathematics. Cambridge University Press, Cambridge, 2004.
[26] Robert Lazarsfeld. Positivity in algebraic geometry. I, volume 48 of Ergebnisse der Mathematik
und ihrer Grenzgebiete. 3. Folge. A Series of Modern Surveys in Mathematics. Springer-Verlag,
Berlin, 2004. Classical setting: line bundles and linear series.
[27] David I. Lieberman. Compactness of the Chow scheme: applications to automorphisms and
deformations of Kähler manifolds. In Fonctions de plusieurs variables complexes, III (Sém.
François Norguet, 1975–1977), volume 670 of Lecture Notes in Math., pages 140–186.
Springer, Berlin, 1978.
[28] Alexander Lubotzky, Shahar Mozes, and M. S. Raghunathan. The word and Riemannian metrics on lattices of semisimple groups. Inst. Hautes Études Sci. Publ. Math., (91):5–53 (2001),
2000.
[29] Alexander Lubotzky, Shahar Mozes, and MS Raghunathan. Cyclic subgroups of exponential
growth and metrics on discrete groups. Comptes Rendus Académie des Sciences Paris Série 1,
317:735–740, 1993.
[30] Yu. I. Manin. Cubic forms, volume 4 of North-Holland Mathematical Library. North-Holland
Publishing Co., Amsterdam, second edition, 1986. Algebra, geometry, arithmetic, Translated
from the Russian by M. Hazewinkel.
[31] T. Matsusaka. Polarized varieties, fields of moduli and generalized Kummer varieties of polarized abelian varieties. Amer. J. Math., 80:45–82, 1958.
[32] T. Matsusaka and D. Mumford. Two fundamental theorems on deformations of polarized varieties. Amer. J. Math., 86:668–684, 1964.
[33] J. S. Milne. Algebraic geometry. preprint, http://www.jmilne.org/math/CourseNotes/AG.pdf,
pages 1–221, 2017.
[34] Andrés Navas. Groups of circle diffeomorphisms. Chicago Lectures in Mathematics. University
of Chicago Press, Chicago, IL, spanish edition, 2011.
[35] Jean-Pierre Serre. Arbres, amalgames, SL2 . Société Mathématique de France, Paris, 1977.
Avec un sommaire anglais, Rédigé avec la collaboration de Hyman Bass, Astérisque, No. 46.
[36] Ravi Vakil. The Rising Sea: Foundations of Algebraic Geometry. Webpage of the author, math216.wordpress.com, 2017.
UNIV RENNES, CNRS, IRMAR - UMR 6625, F-35000 RENNES
CNRS AND UNIV LYON, UNIV CLAUDE BERNARD LYON 1, INSTITUT CAMILLE JORDAN, 43 BLVD. DU 11 NOVEMBRE 1918, F-69622 VILLEURBANNE
E-mail address: [email protected]
E-mail address: [email protected]
A Generalized Framework for Kullback-Leibler Markov Aggregation
Rana Ali Amjad, Clemens Blöchl, Bernhard C. Geiger
Institute for Communications Engineering, Technical University of Munich, Germany
[email protected], [email protected]
Abstract—This paper proposes an information-theoretic cost
function for aggregating a Markov chain via a (possibly stochastic) mapping. The cost function is motivated by two objectives:
1) The process obtained by observing the Markov chain through
the mapping should be close to a Markov chain, and 2) the
aggregated Markov chain should retain as much of the temporal
dependence structure of the original Markov chain as possible.
We discuss properties of this parameterized cost function and
show that it contains the cost functions previously proposed by
Deng et al., Xu et al., and Geiger et al. as special cases. We
moreover discuss these special cases providing a better understanding and highlighting potential shortcomings: For example,
the cost function proposed by Geiger et al. is tightly connected
to approximate probabilistic bisimulation, but leads to trivial
solutions if optimized without regularization. We furthermore
propose a simple heuristic to optimize our cost function for
deterministic aggregations and illustrate its performance on a
set of synthetic examples.
Index Terms—Markov chain, lumpability, predictability, bisimulation, model reduction
I. INTRODUCTION
Markov aggregation is the task of representing a Markov
chain with a large alphabet by a Markov chain with a smaller
alphabet, thus reducing model complexity while at the same
time retaining the computationally and analytically desirable
Markov property (see Fig. 1). Such a model reduction is
necessary if the original Markov chain is too large to admit
simulation, estimating model parameters from data, or control
(in the case of Markov decision processes). These situations
occur often in computational chemistry (where aggregation is
called coarse-graining, e.g., [1]), natural language processing,
and the simulation and control of large systems (giving rise
to the notion of bisimulation, e.g., [2]). Additionally, Markov
aggregation can be used as a tool in exploratory data analysis,
either to discover groups of “similar” states of a stochastic
process or to cluster data points, cf. [3], [4].
Information-theoretic cost functions were proposed for
Markov aggregation in [5]–[8]. Specifically, the authors of [5]
proposed a cost function linked to the predictability of the
aggregated Markov chain. Such an approach is justified if the
original model is nearly completely decomposable, i.e., if there
is a partition of the alphabet such that transitions within each
element of the partition occur quickly and randomly, while
transitions between elements of the partition occur only rarely.
Building on this work, the authors of [7] proposed a cost
function linked to lumpability, i.e., to the phenomenon where
a function of a Markov chain is Markov. Such an approach
Fig. 1. Illustration of the aggregation problem: A stationary first-order Markov
chain X is given. We are interested in finding a conditional distribution pY |X
and an aggregation of X, i.e., a Markov chain Ỹ on Y. The conditional
distribution pY |X defines a stationary process Y, a noisy observation of X.
Y might not be Markov of any order, but can be approximated by a Markov
chain Ỹ.
is justified whenever there are groups of states with similar
probabilistic properties (in a well-defined sense). Both [5]
and [7] focus on deterministic aggregations, i.e., every state
of the original alphabet is mapped to exactly one state of the
reduced alphabet. Moreover, the authors of both references
arrive at their cost functions by lifting the aggregated Markov
chain to the original alphabet. The authors of [6] present an
information-theoretic cost function for stochastic aggregations,
but they do not justify their choice by an operational characterization (such as predictability or lumpability). Instead, they
arrive at their cost function via the composite of the original
and the aggregated Markov chain.
In this paper, we extend the works [5]–[8] as follows:
1) We present a two-step approach to Markov aggregation (Section III): Observing the original Markov chain
through a (stochastic or deterministic) channel, and
then approximating this (typically non-Markov) process
as a Markov chain (see Fig. 1). This approach has
already been taken by [7], albeit only for deterministic
aggregations.
2) Using this two-step approach, we propose a parameterized, information-theoretic cost function for Markov
aggregation (Section IV). We arrive at this cost function
neither via lifting nor via the composite model, but via
requiring specific operational qualities of the process
observed through the channel: It should be close to a
Markov chain and it should retain the temporal dependence structure of the original Markov chain.
3) We show that our cost function contains the cost functions of [5]–[8] as special cases (Section V). We also
discuss previous algorithmic approaches to the Markov
aggregation problem.
4) We propose a simple, low-complexity heuristic to minimize our generalized cost function for deterministic
aggregations (Section VI).
5) As a side result, we justify the cost function proposed
in [7] by showing a tight connection to approximate
probabilistic bisimulation (Section III-A).
We illustrate our cost function for various examples in
Section VII. Specifically, we investigate the aggregation of
quasi-lumpable and nearly completely decomposable Markov
chains, and we look at a toy example from natural language
processing. We also take up the approach of [3], [4] to perform
clustering via Markov aggregation. In future work, we shall
extend our efforts to Markov decision processes, and provide
a theory for lifting stochastic aggregations as indicated in [6,
Remark 3].
II. NOTATION AND DEFINITIONS

We denote vectors and matrices by bold lower case and blackboard bold upper case letters, e.g., a and A. A diagonal matrix with vector a on the main diagonal is denoted by diag(a). The transpose of A is A^T.
We denote random variables (RVs) by upper case letters, e.g., Z, and their alphabet by calligraphic letters, e.g., Z. In this work we will restrict ourselves to RVs with finite alphabets, i.e., |Z| < ∞. Realizations are denoted by lower case letters, e.g., z, where z ∈ Z. The probability mass function (PMF) of Z is p_Z, where p_Z(z) := Pr(Z = z) for all z ∈ Z. Joint and conditional PMFs are defined accordingly.
We denote a one-sided, discrete-time, stochastic process with Z := (Z_1, Z_2, ...), where each Z_k takes values from the (finite) alphabet Z. We abbreviate Z_m^n := (Z_m, Z_{m+1}, ..., Z_n). We consider only stationary processes, i.e., PMFs are invariant w.r.t. a time shift. In particular, the marginal distribution of Z_k is equal for all k and shall be denoted as p_Z.
A first-order Markov chain is a process that satisfies, for all n > 1 and all z_1^n ∈ Z^n,
  p_{Z_n|Z_1^{n-1}}(z_n|z_1^{n-1}) = p_{Z_n|Z_{n-1}}(z_n|z_{n-1}).   (1)
The Markov chain is time-homogeneous if the right-hand side of (1) does not depend on n, i.e., if
  p_{Z_n|Z_{n-1}}(z_n|z_{n-1}) = p_{Z_2|Z_1}(z_n|z_{n-1}) := P_{z_{n-1}→z_n}.   (2)
If the transition probability matrix P = [P_{z_{n-1}→z_n}] of a time-homogeneous Markov chain is irreducible and aperiodic (see [9] for terminology), then there exists a unique vector µ such that µ^T = µ^T P, which represents the invariant distribution of Z. If the initial distribution p_{Z_1} coincides with µ, then Z is stationary and we denote this stationary, irreducible and aperiodic first-order Markov chain by Z ∼ Mar(Z, P). In this work we deal exclusively with first-order stationary, irreducible and aperiodic Markov chains.
We use information-theoretic cost functions for the aggregation problem. The entropy of Z, the conditional entropy of Z_2 given Z_1, and the mutual information between Z_1 and Z_2 are defined by
  H(Z) := −Σ_{z∈Z} p_Z(z) log p_Z(z)   (3a)
  H(Z_2|Z_1) := Σ_{z∈Z} H(Z_2|Z_1 = z) p_{Z_1}(z)   (3b)
  I(Z_1; Z_2) := H(Z_2) − H(Z_2|Z_1).   (3c)
The entropy rate and redundancy rate of a stationary stochastic process Z (not necessarily Markov) are
  H̄(Z) := lim_{n→∞} H(Z_1^n)/n = lim_{n→∞} H(Z_n|Z_1^{n−1})   (3d)
  R̄(Z) := lim_{n→∞} I(Z_n; Z_1^{n−1}) = H(Z) − H̄(Z).   (3e)
The Kullback-Leibler divergence rate (KLDR) between two stationary stochastic processes Z and Z′ on the same finite alphabet Z is [10, Ch. 10]
  D̄(Z′||Z) := lim_{n→∞} (1/n) Σ_{z_1^n ∈ Z^n} p_{Z′_1^n}(z_1^n) log [ p_{Z′_1^n}(z_1^n) / p_{Z_1^n}(z_1^n) ]   (3f)
provided the limit exists. If the limit exists, it is finite if, for all n and all z_1^n, p_{Z_1^n}(z_1^n) = 0 implies p_{Z′_1^n}(z_1^n) = 0 (short: p_{Z′_1^n} ≪ p_{Z_1^n}). In particular, if Z′ ∼ Mar(Z, P′) and Z ∼ Mar(Z, P), then [11]
  D̄(Z′||Z) = Σ_{z,z′∈Z} µ_z P′_{z→z′} log( P′_{z→z′} / P_{z→z′} )   (3g)
provided P ≪ P′.
These information-theoretic quantities can be used to give an equivalent definition of Markovity:
Lemma 1 ([12, Prop. 3]). Suppose the stochastic process Z is stationary. Then, Z is Markov iff H̄(Z) = H(Z_2|Z_1).
If Z is a stationary process on Z (not necessarily Markov), then one can approximate this process by a Markov chain Z̃ ∼ Mar(Z, P):
Lemma 2 ([10, Cor. 10.4]). Let Z be a stationary process on Z, and let Z′ ∼ Mar(Z, P′) be any Markov chain on Z. Then,
  P = arg min_{P′} D̄(Z||Z′)   (4a)
where
  P_{z→z′} = p_{Z_2|Z_1}(z′|z).   (4b)
Moreover, for Z̃ ∼ Mar(Z, P),
  D̄(Z||Z̃) = H(Z_2|Z_1) − H̄(Z).   (4c)
By Lemma 1 we know that the right-hand side of (4c) is 0 iff Z is Markov. Hence, one can view the KLDR D̄(Z||Z̃) as a measure of how close a process Z is to a Markov chain.
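The quantities in (3a)–(3g) are easy to evaluate numerically once the transition matrices are given explicitly. The following sketch is ours and not part of the paper; all function names are illustrative, log base 2 is assumed since the base is left unspecified above, and the KLDR routine uses the invariant distribution of its first argument and assumes the ratio inside the logarithm is well defined.

```python
import numpy as np

def stationary_dist(P):
    """Invariant distribution mu with mu^T = mu^T P (P irreducible and aperiodic)."""
    evals, evecs = np.linalg.eig(P.T)
    mu = np.real(evecs[:, np.argmin(np.abs(evals - 1.0))])
    return mu / mu.sum()

def entropy(p):
    """H(p) in bits, cf. (3a)."""
    p = p[p > 0]
    return float(-np.sum(p * np.log2(p)))

def entropy_rate(P):
    """For a stationary Markov chain, H-bar(Z) = H(Z2|Z1), cf. (3b) and (3d)."""
    mu = stationary_dist(P)
    return float(np.sum(mu * [entropy(row) for row in P]))

def redundancy_rate(P):
    """R-bar(Z) = H(Z) - H-bar(Z) = I(Z1; Z2) for a Markov chain, cf. (3e)."""
    return entropy(stationary_dist(P)) - entropy_rate(P)

def kldr(Pprime, P):
    """Eq. (3g): sum_{z,z'} mu_z P'_{z->z'} log(P'_{z->z'} / P_{z->z'}),
    with mu the invariant distribution of Pprime; assumes P > 0 wherever Pprime > 0."""
    mu = stationary_dist(Pprime)
    mask = Pprime > 0
    ratio = np.where(mask, Pprime / np.where(mask, P, 1.0), 1.0)
    return float(np.sum(mu[:, None] * np.where(mask, Pprime * np.log2(ratio), 0.0)))
```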
III. MARKOV CHAIN AGGREGATION
Given a Markov chain X, Markov aggregation deals with the problem of finding a Markov chain Ỹ on a given smaller alphabet Y which is the optimal representation of X in the sense of minimizing a given cost function C̄(X, Ỹ). This is depicted in Fig. 1 by the diagonal arrow and is summarized in the following definition:
Definition 1 (Markov Aggregation Problem). Let X ∼ Mar(X, P), Y, and an arbitrary cost function C̄(·, ·) be given. The Markov aggregation problem concerns finding a minimizer of
  min_{Ỹ} C̄(X, Ỹ)   (5)
where the optimization is over Markov chains on Y.
In this work we address the Markov aggregation problem
using the two-step approach depicted in Fig. 1. The first step is
to use a (possibly stochastic) mapping from X to Y. Applying
this mapping to X leads to a stationary process Y which may
not be Markov (in fact, Y is a hidden Markov process). In
the second step we look for the optimal approximation Ỹ of
Y in the sense of Lemma 2.
This two-step approach is a popular method of Markov
aggregation and has been employed in various works including
[5], [7], [8]. In these references, the mapping in the first
step was restricted to be deterministic whereas in this work
we allow it to be stochastic. In other words, while these
references were looking for a partition of X induced by
a function g: X → Y, in this work we permit stochastic
mappings induced by a conditional distribution pY |X . We
represent pY |X as a row stochastic matrix W = [Wx→y ],
where Wx→y = pY |X (y|x).
With this notation, the following corollary to Lemma 2
solves the second of the two steps in our approach, i.e.,
it characterizes the optimal approximation Ỹ of the hidden
Markov process Y that we obtain by observing X through
W:
Corollary 1. Let X ∼ Mar(X , P) and let W denote a
conditional distribution from X to Y. Let Y be the hidden
Markov process obtained by observing X through W, and let
Ỹ ∼ Mar(Y, Q) be its best Markov approximation in the
sense of minimizing D̄(Y||Ỹ) (cf. Lemma 2). Then,
Q = UPW
(6)
where U := diag(ν)^{-1} W^T diag(µ) with ν^T := µ^T W being
the marginal distribution of Yk .
Note that this corollary extends [7, Lem. 3] from deterministic to stochastic mappings.
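In matrix form, (6) is a one-line computation. The following sketch is ours (not the authors' code) and reuses the hypothetical stationary_dist helper from the earlier snippet; for a deterministic aggregation g, W is simply the 0/1 matrix with W[x, g(x)] = 1.

```python
def aggregate_transition_matrix(P, W, mu=None):
    """Corollary 1: Q = U P W with U = diag(nu)^{-1} W^T diag(mu) and nu^T = mu^T W.
    P is |X| x |X|, W is |X| x |Y| and row-stochastic; requires nu > 0 entrywise."""
    if mu is None:
        mu = stationary_dist(P)
    nu = mu @ W                               # marginal distribution of Y_k
    U = (W.T * mu[None, :]) / nu[:, None]     # U[y, x] = mu_x * W[x, y] / nu_y
    return U @ P @ W
```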
With the second step solved, the two-step approach to the
optimization problem stated in Definition 1 boils down to
optimization over the mapping W. We can thus restate the
Markov aggregation problem as follows:
Definition 2 (Markov Aggregation Problem Restated). Let X ∼ Mar(X, P), Y, and an arbitrary cost function C̄(·, ·) be given. Let
  C(X, W) = C̄(X, Ỹ)   (7)
where Ỹ is the Markov approximation of the hidden Markov process Y that is obtained by observing X through the stochastic mapping W. The Markov aggregation problem using the two-step approach concerns finding a minimizer of
  min_W C(X, W)   (8)
where the optimization is over stochastic mappings from X to
Y. If the optimization is restricted to deterministic mappings
g, we abuse notation and write C(X, g) for the cost.
Note that Definition 1 and Definition 2 are not equivalent
in general, i.e., the optimal aggregated chain Ỹ obtained
by solving (5) is not the same as the optimal aggregated
chain Ỹ obtained by solving (8). The two formulations only
become equivalent when we restrict the optimization in (5) to
aggregated Markov chains which can be obtained as a result
of the aforementioned two-step approach.
A. Markov Aggregation via Lumpability
The optimal mapping W depends on the cost function C̄. One possible choice in the light of Lemma 2 is
  C̄(X, Ỹ) = D̄(Y||Ỹ).   (9)
In other words, we wish to find a mapping W such that the
hidden Markov process Y is as close to a Markov chain
as possible in an information-theoretic sense. This may be
reasonable since it states that data obtained by simulating the
aggregated model Ỹ differs not too much from data obtained
by simulating the original model in conjunction with the
stochastic mapping, i.e., data obtained from Y.
There are two shortcomings of the cost (9). First, (9) focuses
only on getting Y close to Ỹ but not on preserving any form
of information in X. This gives rise to trivial optimal solutions:
If W is such that the conditional distribution does not depend
on the conditioning event, i.e., pY |X = pY , or W = 1αT
for some probability vector α, then Y is independent and
identically distributed (i.i.d.) and hence Markov. Indeed, in this
case H̄(Y) = H(Y2 |Y1 ) = H(Y ), from which D̄(Y||Ỹ) = 0
follows. The cost function is thus inappropriate for Markov
aggregation, unless it is regularized appropriately. The second
shortcoming is linked to the fact that D̄(Y||Ỹ) requires,
by (4c), the computation of the entropy rate H̄(Y) of a hidden
Markov process. This problem is inherently difficult [13],
and analytic expressions do not even exist for simple cases
(cf. [14]). In the following, we discuss two previously proposed relaxations of the Markov aggregation problem for C̄(X, Ỹ) = D̄(Y||Ỹ).
The authors of [7] addressed the second shortcoming by
relaxing the cost via
CL (X, W) := H(Y2 |Y1 ) − H(Y2 |X1 ) ≥ D̄(Y||Ỹ).
(10)
This cost does not require computing H̄(Y) and is linked to
the phenomenon of lumpability, the fact that a function of
a Markov chain has the Markov property [12, Thm. 9]: If
CL (X, W) = 0, then Y is a Markov chain.
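For reference, C_L in (10) can be evaluated directly from P and W. The sketch below is ours; it builds on the hypothetical helpers stationary_dist, entropy and aggregate_transition_matrix introduced in the earlier snippets, with log base 2 assumed throughout.

```python
def cost_lumpability(P, W):
    """C_L(X, W) = H(Y2 | Y1) - H(Y2 | X1), cf. eq. (10)."""
    mu = stationary_dist(P)
    nu = mu @ W
    rows_given_x = P @ W                          # row x: p(Y2 = . | X1 = x)
    Q = aggregate_transition_matrix(P, W, mu)     # row y: p(Y2 = . | Y1 = y)
    H_Y2_given_Y1 = float(np.sum(nu * [entropy(q) for q in Q]))
    H_Y2_given_X1 = float(np.sum(mu * [entropy(r) for r in rows_given_x]))
    return H_Y2_given_Y1 - H_Y2_given_X1
```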
We now show that, at least for deterministic mappings, this
cost function also has a justification in approximate probabilistic bisimulations, or ε-bisimulations. More specifically, the
authors of [15] discussed bisimilarity of Markov processes and
showed that two Markov chains are bisimilar if one can be
described as a function of the other (see discussion after [15,
Def. 5.2]). In other words, if X ∼ Mar(X , P) is a Markov
chain, g: X → Y a surjective function, and Y ∼ Mar(Y, Q)
satisfies Yk = g(Xk ), then X and Y are bisimilar. Since
this is equivalent to lumpability, bisimilarity is implied by
CL (X, g) = 0.
Extending this line of reasoning, we give a justification of
the cost function CL (X, g) in terms of ε-bisimulation of pairs
of Markov chains, even in case X is not lumpable w.r.t. g. To
this end, we adapt [16, Def. 4 & 5] for our purposes:
Definition 3 (ε-Bisimulation). Consider two finite Markov
chains X ∼ Mar(X , P) and Ỹ ∼ Mar(Y, Q) and assume
w.l.o.g. that X and Y are disjoint. We say that X and Ỹ are
ε-bisimilar if there exists a relation Rε ⊆ (X ∪ Y) × (X ∪ Y)
such that for all x ∈ X and y ∈ Y for which (y, x) ∈ Rε ,
and all T ⊆ X ∪ Y we have
  Σ_{x′ ∈ R_ε(T) ∩ X} P_{x→x′} ≥ Σ_{y′ ∈ T ∩ Y} Q_{y→y′} − ε   (11)
where R_ε(T) := {s_2 ∈ X ∪ Y : s_1 ∈ T, (s_1, s_2) ∈ R_ε}.
The definitions of ε-bisimulations are typically given for
labeled [16, Def. 4 & 5] or controlled [2, Def. 4.4] Markov
processes with general alphabets and thus contain more restrictive conditions than our Definition 3. Our definition is
equivalent if the alphabets are finite and if the set of labels is
empty. We are now ready to state
Proposition 1. Let X ∼ Mar(X , P) and the surjective
function g: X → Y be given. Let Q be as in Corollary 1,
where Wx→y = 1 iff y = g(x). Let Ỹ ∼ Mar(Y, Q). Then,
X and Ỹ are ε-bisimilar with
  ε = √( ln(2) C_L(X, g) / (2 min_{x∈X} µ_x) ).   (12)
Proof: See Section VIII-A.
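Numerically, the bound (12) is a one-liner on top of the previous sketch (ours, with log base 2 assumed in C_L, hence the ln 2 conversion):

```python
import math

def bisimulation_epsilon(P, W):
    """epsilon in eq. (12): sqrt(ln(2) * C_L(X, g) / (2 * min_x mu_x))."""
    mu = stationary_dist(P)
    return math.sqrt(math.log(2.0) * cost_lumpability(P, W) / (2.0 * mu.min()))
```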
Despite this justification, the cost function CL (X, W) is
mainly of theoretical interest. The reason is that the shortcoming of leading to trivial solutions is inherited by CL (X, W),
since for W = 1αT one gets CL (X, W) = 0, regardless of α
and P. Even restricting W to be a deterministic partition, as
considered in [7], does not solve this problem: The combinatorial search over all partitions may have its global optimum at a
partition that makes Y close to an i.i.d. process. Indeed, if the
cardinality of Y is not constrained (or if g is not required to be
surjective), then the constant function g yields CL (X, g) = 0.
B. Markov Aggregation by Predictability
A different approach was taken by the authors of [5] who
proposed the following cost function (again with the focus on
deterministic partitions):
CP (X, W) := I(X1 ; X2 ) − I(Y1 ; Y2 )
(13)
The computation of CP (X, W) is simple as it does not require
computing H̄(Y). Furthermore, CP (X, W) reflects the wish
to preserve the temporal dependence structure of X, i.e., it is
connected to predicting future states of Y based on knowledge
of past states of Y. Since X is not i.i.d., observing Xk reveals
some information about Xk+1 . Minimizing CP (X, W) thus
tries to find a W such that Yk reveals as much information
about Yk+1 as possible, and hence does not lead to the
same trivial solutions as CL (X, W) and D̄(Y||Ỹ): A constant
function g or a soft partition W = 1αT render Y1 and Y2
independent, hence the cost is maximized at CP (X, W) =
I(X1 ; X2 ). Unfortunately, as it was shown in [7, Thm. 1],
we have CP (X, W) ≥ CL (X, W), i.e., (13) does not capture
Markovity of Y as well as the relaxation proposed by [7].
Note that although CP (X, W) does not lead to the same
trivial solutions as CL (X, W) and D̄(Y||Ỹ), it still tries to
preserve only part of the information about temporal dependence in X, i.e., information which is helpful in predicting the
next sample. Such a goal is justified in scenarios in which X
is quasi-static, i.e., runs on different time scales: The process
X moves quickly and randomly within a group of states, but
moves only slowly from one group of states to another.
Since all other information contained in X is not necessarily
preserved by minimizing CP (X, W), this cost function can
also lead to undesired solutions: For example, if X is i.i.d.
and hence does not contain any temporal dependence structure,
then I(X1 ; X2 ) = 0 and CP (X, W) = 0 for every mapping
W.
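C_P in (13) can be computed analogously to C_L. The following sketch is again ours and reuses the same hypothetical helpers; it evaluates the two mutual informations from the original and the aggregated transition matrices.

```python
def cost_predictability(P, W):
    """C_P(X, W) = I(X1; X2) - I(Y1; Y2), cf. eq. (13)."""
    mu = stationary_dist(P)
    nu = mu @ W
    Q = aggregate_transition_matrix(P, W, mu)
    I_X = entropy(mu) - float(np.sum(mu * [entropy(p) for p in P]))   # I(X1; X2)
    I_Y = entropy(nu) - float(np.sum(nu * [entropy(q) for q in Q]))   # I(Y1; Y2)
    return I_X - I_Y
```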
A third objective for optimization may be worth mentioning.
The information contained in X splits into a part describing its
temporal dependence structure (measured by its redundancy
rate I(X1 ; X2 )) and a part describing its new information
generated in each time step (measured by its entropy rate
H̄(X)). Indeed, we have
H(X) = H̄(X) + I(X1 ; X2 ).
(14)
While in this work we focus on preserving Markovity
via CL (X, W) and the temporal dependence structure via
CP (X, W), the authors of [12] investigated conditions such
that the newly generated information (measured by H̄(X)) is
preserved. Developing a Markov aggregation framework that
trades between three different goals – Markovity, temporal
dependence, generated information – is the object of future
work.
IV. REGULARIZED MARKOV CHAIN AGGREGATION
In this section we combine the approaches in [7] and [5]
to obtain a new cost function for Markov aggregation. As
discussed in Section III, the aim of Markov aggregation in
[7] is to get a process Y which is as Markov as possible.
This can be captured well by the cost function D̄(Y||Ỹ)
in the light of Lemma 2. The authors of [5] define the
Markov aggregation problem in terms of finding a mapping
W which preserves the temporal dependence structure in X.
This temporal dependence is captured well by the redundancy
rate of the process, which for a Markov chain X equals
R̄(X) = I(X1 ; X2 ). Preserving this temporal dependence
structure is thus well captured by maximizing the redundancy
rate of Y, i.e., by the following optimization problem:
  min_W R̄(X) − R̄(Y)   (15)
Hence, to combine both the goal of Markovity and the goal of
preserving temporal information, one can define the following
Markov aggregation problem
  min_W (1 − β) D̄(Y||Ỹ) + β (R̄(X) − R̄(Y)) =: min_W δ_β(X, W)   (16)
where 0 ≤ β ≤ 1. Clearly, for β = 0 we are back at Definition 1. For a general β, the data processing inequality ensures that R̄(X) ≥ R̄(Y), hence the cost (16) is non-negative. We moreover have
Lemma 3. δβ (X, W) is non-decreasing in β.
Proof: See Section VIII-B.
Although minimizing δβ (X, W) tries to preserve both
Markovity and the temporal information in X, the computation
of (16) requires computing H̄(Y). We thus take the approach
of [7] to relax (16). By [7, Thm. 1], we have
D̄(Y||Ỹ) ≤ CL (X, W).
(17)
By the data processing inequality we also have
  R̄(X) − R̄(Y) ≤ C_P(X, W).   (18)
Hence, combining the two we get an upper bound on δ_β(X, W) that does not require computing H̄(Y). Indeed, for β ∈ [0, 1],
  δ_β(X, W) ≤ (1 − β)C_L(X, W) + βC_P(X, W).   (19)
Rather than the right-hand side of (19), we propose the following cost function for Markov aggregation:
  C_β(X, W) := (1 − 2β)C_L(X, W) + βC_P(X, W)   (20)
where again β ∈ [0, 1]. One can justify going from (19) to (20) by noticing that for every 0 ≤ β ≤ 1 for (19), one can find a 0 ≤ β ≤ 0.5 for (20) such that the two optimization problems are equivalent, i.e., they have the same optimizer W. Furthermore, for β = 1, the cost function in (20) corresponds to the information bottleneck problem, a case that is not covered by (19). Hence, not only is C_β a strict generalization of δ_β, but it also has the information bottleneck problem as an interesting corner case. In the following we summarize some of the properties of C_β.
Lemma 4. For C_β and 0 ≤ β ≤ 1 we have:
  1) C_β(X, W) ≥ 0
  2) δ_{0.5}(X, W) = C_{0.5}(X, W) = (1/2) C_P(X, W)
  3) C_1(X, W) = C_IB(X, W) := I(X_1; X_2|Y_2)
  4) For β ≤ 1/2, βC_P(X, W) ≤ δ_β(X, W) ≤ C_β(X, W)
  5) For β ≥ 1/2, C_β(X, W) ≤ δ_β(X, W) ≤ βC_P(X, W)
  6) If X is reversible, then C_β(X, W) is non-decreasing in β
Proof: See Section VIII-C.

V. RELATED WORK: SPECIAL CASES OF C_β(X, W)

We now show that specific settings of β lead to cost functions that have been proposed previously in the literature. We list these approaches together with the algorithms that were proposed to solve the respective Markov aggregation problem.
• For β = 1/2, optimizing (20) is equivalent to optimizing C_P(X, W). The authors of [5] proposed this cost function for deterministic aggregations, i.e., they proposed optimizing C_P(X, g). Note that this restriction to deterministic aggregations comes at the loss of optimality: In [17, Example 3] a reversible, three-state Markov chain was given for which the optimal aggregation to |Y| = 2 states is stochastic. For the bi-partition problem, i.e., for |Y| = 2, the authors of [5] propose a relaxation to a spectral, i.e., eigenvector-based optimization problem, the solution of which has a computational complexity of O(|X|^3). In general, this relaxation leads to a further loss of optimality, even among the search over all deterministic bi-partitions. For a general Y, they suggest to solve the problem by repeated bi-partitioning, i.e., splitting sets of states until the desired cardinality is achieved.
• For β = 1, the problem becomes equivalent to maximizing I(X_1; Y_2). This is exactly the information bottleneck problem [18] for a Lagrangian parameter γ → ∞:
  I(X_2; Y_2) − γ I(X_1; Y_2).   (21)
Algorithmic approaches to solving this optimization problem are introduced in [19]. Note that in this case the optimal aggregation will be deterministic [17, Thm. 1].
• For β = 0, the authors of [7] relaxed their cost function C_0(X, g) = C_L(X, g) as
  C_0(X, g) = H(Y_2|Y_1) − H(Y_2|X_1) = I(Y_2; X_1|Y_1) ≤ I(X_2; X_1|Y_1) = I(X_2; X_1) − I(X_2; Y_1)   (22)
and proposed using the agglomerative information bottleneck method [20] with the roles of X1 and X2 in (21) exchanged to solve this relaxed optimization problem. The
method has a computational complexity of O(|X |4 ) [19,
Sec. 3.4]. While the mapping minimizing CL (X, W)
may be stochastic, the mapping minimizing (22) will
be deterministic; hence, with this relaxation in mind,
the restriction to deterministic aggregations made in [7]
comes without an additional loss of optimality compared
to what is lost in the relaxation.
The authors of [6] proposed minimizing
I(X1 ; X2 ) − I(X2 ; Y1 ) − γH(Y2 |X1 ).
(23)
They suggested using a deterministic annealing approach,
reducing γ successively until γ = 0. In the limiting case,
the cost function then coincides with (22) and the optimal
aggregation is again deterministic. Note that, for reversible Markov chains, we have I(X1 ; Y2 ) = I(X2 ; Y1 ),
hence both (22) and (23) (for γ = 0) are equivalent to
C1 (X, W). Analyzing [6, Sec. III.B] shows that in each
annealing step the quantity
D(pX2 |X1 =x ||pX2 |Y1 =y )
X
pX |X (x′ |x)
:=
pX2 |X1 (x′ |x) log 2 1 ′
pX2 |Y1 (x |y)
′
x ∈X
(24)
6
has to be computed for every x and y. Hence, the computational complexity of this approach is O(|Y| · |X |2 )
in each annealing step.
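Before turning to the algorithm, it may help to see the generalized cost assembled from the pieces sketched earlier; the snippet below is ours, with illustrative names, and the comment records the corner cases discussed in this section.

```python
def cost_beta(P, W, beta):
    """C_beta(X, W) = (1 - 2*beta) * C_L(X, W) + beta * C_P(X, W), cf. eq. (20)."""
    return (1.0 - 2.0 * beta) * cost_lumpability(P, W) + beta * cost_predictability(P, W)

# Corner cases: beta = 0 gives C_L (lumpability, [7]); beta = 1/2 gives C_P / 2
# (predictability, [5]); beta = 1 gives the information-bottleneck cost C_IB.
```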
VI. MARKOV CHAIN AGGREGATION ALGORITHMS
We now propose an iterative method for optimizing (20)
over deterministic aggregations for general values of β. The
method consists of a sequential optimization algorithm (Algorithm 1) and an annealing procedure for β (Algorithm 2)
that prevents getting stuck in local optima. Since we focus
only on deterministic aggregations, in the remainder of this
section we can replace Cβ (X, W) by Cβ (X, g) for some
g: X → Y. Our algorithm has a computational complexity of
O(|Y| · |X |2 ) per iteration. Note, however, that the restriction
to deterministic aggregation functions comes, at least for some
values of β, with a loss of optimality, i.e., in general we have
minW Cβ (X, W) ≤ ming Cβ (X, g).
A. Sequential Algorithm
We briefly illustrate an iteration of Algorithm 1: Suppose
x ∈ X is mapped to the aggregate state y ∈ Y, i.e., g(x) = y.
We remove x from aggregate state y and tentatively assign it to each aggregate state y′ ∈ Y in turn, keeping the rest of the mapping g fixed, and evaluate the cost function. Finally, we assign x to the aggregate state that minimizes the cost function (breaking ties, if necessary). This procedure is repeated for every x ∈ X .
Algorithm 1 Sequential Generalized Information-Theoretic Markov Aggregation.
1:  function g = SGITMA(P, β, |Y|, #itermax , optional: initial aggregation function ginit )
2:    if ginit is empty then                                ⊲ Initialization
3:      g ← Random Aggregation Function
4:    else
5:      g ← ginit
6:    end if
7:    #iter ← 0
8:    while #iter < #itermax do                             ⊲ Main Loop
9:      for all elements x ∈ X do                           ⊲ Optimizing g
10:       for all aggregate states y ∈ Y do
11:         gy (x′ ) = g(x′ ) for x′ ≠ x, and gy (x) = y     ⊲ Assign x to aggregate state y
12:         Cgy = Cβ (X, gy )
13:       end for
14:       g = arg min_{gy} Cgy                               ⊲ (break ties)
15:     end for
16:     #iter ← #iter + 1
17:   end while
18: end function
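For concreteness, the following Python sketch mirrors Algorithm 1 by evaluating the cost directly via (25); it is a minimal illustration under our own naming conventions (stationary_distribution, cost_beta, sgitma are not part of the paper) and omits the O(|X |) incremental updates discussed in Section VI-C, so each candidate move re-evaluates the cost from scratch.

import numpy as np

def stationary_distribution(P):
    # Left Perron eigenvector of P (eigenvalue 1), normalized to a PMF.
    w, v = np.linalg.eig(P.T)
    mu = np.real(v[:, np.argmin(np.abs(w - 1))])
    return mu / mu.sum()

def mutual_information(J):
    # I(A; B) in bits for a joint PMF given as a matrix J[a, b].
    pa = J.sum(axis=1, keepdims=True)
    pb = J.sum(axis=0, keepdims=True)
    mask = J > 0
    with np.errstate(divide="ignore", invalid="ignore"):
        ratio = J / (pa @ pb)
    return float(np.sum(J[mask] * np.log2(ratio[mask])))

def cost_beta(P, g, beta, n_clusters, mu=None):
    # C_beta(X, g) evaluated via (25).
    if mu is None:
        mu = stationary_distribution(P)
    J = mu[:, None] * P                       # joint PMF of (X_1, X_2)
    A = np.eye(n_clusters)[g]                 # |X| x |Y| one-hot assignment matrix
    i_xx = mutual_information(J)              # I(X_1; X_2)
    i_xg = mutual_information(J @ A)          # I(X_1; g(X_2))
    i_gg = mutual_information(A.T @ J @ A)    # I(g(X_1); g(X_2))
    return beta * i_xx + (1 - 2 * beta) * i_xg - (1 - beta) * i_gg

def sgitma(P, beta, n_clusters, max_iter=50, g_init=None, seed=0):
    # Greedy sequential optimization of C_beta over deterministic aggregations.
    rng = np.random.default_rng(seed)
    n = P.shape[0]
    g = rng.integers(n_clusters, size=n) if g_init is None else np.array(g_init, copy=True)
    mu = stationary_distribution(P)
    for _ in range(max_iter):
        changed = False
        for x in range(n):
            costs = [cost_beta(P, np.where(np.arange(n) == x, y, g), beta, n_clusters, mu)
                     for y in range(n_clusters)]
            best = int(np.argmin(costs))
            if best != g[x]:
                g[x], changed = best, True
        if not changed:
            break
    return g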
It is easy to verify that the cost function is reduced in each
step of Algorithm 1, as a state is only assigned to a different
aggregate state if the cost function is reduced. Hence, the
algorithm modifies g in each iteration in order to reduce the
cost until it either reaches the maximum number of iterations
or until the cost converges.
Note that the algorithm is random in the sense that it
is started with a random aggregation function g. Depending
on the specific application, though, a tailored initialization
procedure may lead to performance improvements.
Finally, it is worth mentioning that for β = 1 our Algorithm 1 is equivalent to the sequential information bottleneck
algorithm proposed in [19, Sec. 3.4].
B. Annealing Procedure for β
Although Algorithm 1 is guaranteed to converge (with
proper tiebreaking), convergence to a global optimum is not
ensured. The algorithm may get stuck in poor local minima.
This happens particularly often for small values of β, as our
experiments in Section VII-B show. The reason is that, for
small β, Cβ (X, W) has many poor local minima and, randomly
initialized, the algorithm is more likely to get stuck in one of
them. In contrast, our results suggest that for larger values of
β the cost function has only few poor local minima and that
the algorithm converges to a good local or a global minimum
for a significant portion of random initializations.
A solution for small β would thus be to choose an initialization that is close to a “good” local optimum. A simple idea is to re-use the function g obtained for a large value of β as the initial aggregation for smaller values of β. We therefore propose the
following annealing algorithm: We initialize β = 1 to obtain
g. Then, in each iteration of the annealing procedure, β is
reduced and the aggregation function is updated, starting from
the result of the previous iteration. The procedure stops when β
reaches the desired value, βtarget . The β-annealing algorithm is
sketched as Algorithm 2. As is clear from the description, the
β-annealing algorithm closely follows graduated optimization
in spirit [21]. The results for synthetic datasets with and without β-annealing are discussed in Section VII-B; they show that without β-annealing one keeps getting stuck in bad local optima for small β, whereas the annealing procedure avoids them. Furthermore, in our experiments we observed that β-annealing achieves good results from random initializations, hence tailored initialization procedures are not necessary, at least for the scenarios we considered.
Algorithm 2 β-Annealing Information-Theoretic Markov Aggregation.
1: function g = ANNITMA(P, βtarget , |Y|, #itermax , ∆)
2:   β ← 1
3:   g = sGITMA(P, β, |Y|, #itermax )        ⊲ Initialization
4:   while β > βtarget do
5:     β ← max{β − ∆, βtarget }
6:     g = sGITMA(P, β, |Y|, #itermax , g)
7:   end while
8: end function
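A correspondingly small sketch of the annealing wrapper (Algorithm 2), assuming the sgitma function from the sketch above (names and defaults are ours); storing all intermediate aggregations reflects the observation below that results for a whole range of β values are obtained in a single run.

def ann_itma(P, beta_target, n_clusters, max_iter=50, delta=0.05):
    # beta-annealing: start at beta = 1 and reuse each result as the
    # initialization for the next, smaller beta.
    beta, results = 1.0, {}
    g = sgitma(P, beta, n_clusters, max_iter)
    results[beta] = g.copy()
    while beta > beta_target:
        beta = max(beta - delta, beta_target)
        g = sgitma(P, beta, n_clusters, max_iter, g_init=g)
        results[beta] = g.copy()
    return g, results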
Note that the β-annealing algorithm admits producing results for a series of values of β at once: Keeping all intermediate aggregation functions, one obtains aggregations
for all values of β in the set {1, 1 − ∆, 1 − 2∆, . . . , 1 − ∆⌈(1 − βtarget )/∆⌉, βtarget }. The aggregations one obtains are exactly
those one would obtain from restarting ANNITMA for each value in this set, each time with the same random initial partition. We used this fact in our experiments: If we were interested in results for βtarget ranging between 0 and 1 in steps of 0.05, rather than restarting ANNITMA for each value in this set, we started ANNITMA for βtarget = 0 and ∆ = 0.05
once, keeping all intermediate results.
C. Computational Complexity of the Sequential Algorithm
Note that the asymptotic computational complexity of Algorithm 2 equals that of Algorithm 1, since the former simply
calls the latter ⌈(1 − βtarget )/∆⌉ + 1 times. We thus only
evaluate the complexity of Algorithm 1. To this end, we first
observe that the cost Cβ (X, W) can be expressed with only
three mutual information terms for any β and W:
Cβ (X, g) = βI(X1 ; X2 ) + (1 − 2β)I(X1 ; g(X2 ))
− (1 − β)I(g(X1 ); g(X2 )). (25)
The first term I(X1 ; X2 ) is constant regardless of the aggregation; hence, the computation of Cβ (X, g) depends upon the computation of the other two terms.
In each iteration of the main loop, we evaluate Cβ (X, gy )
in line 12 for each x ∈ X and y ∈ Y. Note that gy differs
from the current g only for one element as defined in line
11. Thus, the joint PMF pX1 ,gy (X2 ) differs from pX1 ,g(X2 ) in
only two rows and hence can be computed from pX1 ,g(X2 )
in O(|X |) computations. Moreover, I(X1 ; gy (X2 )) can be
computed from I(X1 ; g(X2 )) in O(|X |) computations, cf. [20,
Prop. 1]. This is due to the fact that we can write [22,
eq. (2.28)]
I(X1 ; gy (X2 )) = I(X1 ; g(X2 ))
  + Σ_{x1 ∈X , y2 ∈{y,g(x)}} pX1 ,gy (X2 ) (x1 , y2 ) log [ pX1 ,gy (X2 ) (x1 , y2 ) / (pX1 (x1 ) pgy (X2 ) (y2 )) ]
  − Σ_{x1 ∈X , y2 ∈{y,g(x)}} pX1 ,g(X2 ) (x1 , y2 ) log [ pX1 ,g(X2 ) (x1 , y2 ) / (pX1 (x1 ) pg(X2 ) (y2 )) ].      (26)
The term I(gy (X1 ); gy (X2 )) can be computed from
I(g(X1 ); g(X2 )) in O(|Y|) computations, but requires the
updated joint PMF pgy (X1 ),gy (X2 ) . This PMF can be computed
from pg(X1 ),g(X2 ) in O(|X |) computations. Combining this
with the fact that line 12 is executed once for each aggregate
state in Y and once for each state in X in every iteration,
we get that optimizing g has a computational complexity of
O(|Y| · |X |2 ) per iteration.
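As an illustration of the update (26), the following sketch (our own helper names; the joint PMF pX1 ,g(X2 ) is stored as a |X | × |Y| matrix) recomputes I(X1 ; gy (X2 )) in O(|X |) operations when a single state x is tentatively moved to aggregate y.

import numpy as np

def _pair_contrib(J_cols, p_x1):
    # Sum over the given columns of p(x1, y2) * log( p(x1, y2) / (p(x1) p(y2)) ).
    p_y2 = J_cols.sum(axis=0, keepdims=True)
    mask = J_cols > 0
    with np.errstate(divide="ignore", invalid="ignore"):
        ratio = J_cols / (p_x1[:, None] * p_y2)
    return float(np.sum(J_cols[mask] * np.log2(ratio[mask])))

def incremental_mi(i_old, J, joint_x1x2, g, x, y):
    # J[x1, c] = p(X1 = x1, g(X2) = c); move state x from column g[x] to column y.
    a = g[x]
    Jy = J.copy()
    Jy[:, a] -= joint_x1x2[:, x]
    Jy[:, y] += joint_x1x2[:, x]
    p_x1 = joint_x1x2.sum(axis=1)
    cols = [a, y]
    # Equation (26): only the two affected columns change their contribution.
    i_new = i_old + _pair_contrib(Jy[:, cols], p_x1) - _pair_contrib(J[:, cols], p_x1)
    return i_new, Jy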
VII. EXPERIMENTS AND RESULTS
A. A Non-Reversible Markov Chain
The last property of Lemma 4 cannot be generalized to
non-reversible Markov chains. Specifically, as the proof of
Lemma 4 shows, Cβ is non-decreasing in β iff CP ≥ 2CL .
Since one can also find non-reversible Markov chains for which this holds, reversibility is sufficient but not necessary for Cβ to be non-decreasing in β. We next consider a non-reversible Markov chain X ∼ Mar({1, 2, 3}, P) with
P = [ 0.4    0.3     0.3
      0.25   0.3     0.45
      0.15   0.425   0.425 ]      (27)
and let g be such that g(1) = 1 and g(2) = g(3) = 2. Then,
CL = 0.0086 and CP = 0.0135, i.e., CP < 2CL . In this case,
Cβ is decreasing with increasing β.
B. Quasi-Lumpable and Nearly Completely Decomposable
Markov Chains
Suppose we have a partition {Xi }, i = 1, . . . , M , of X
with |Xi | = Ni . Then for any A′ and P′ij that are M × M
and Ni × Nj row stochastic matrices, respectively, define A =
[aij ] = (1 − α)A′ + αI, α ∈ [0, 1], and let
P′ = [ a11 P′11   a12 P′12   · · ·   a1M P′1M
       a21 P′21   a22 P′22   · · ·   a2M P′2M
       ...        ...        ...     ...
       aM1 P′M1   aM2 P′M2   · · ·   aMM P′MM ].      (28)
Let further X ∼ Mar(X , P′ ). If g induces the partition
{Xi }, then it can be shown that Y is Markov with transition
probability matrix A, i.e., Ỹ ≡ Y, and C0 (X, Ỹ) = 0. The
Markov chain X is lumpable w.r.t. the partition g. The matrix
P′ is block stochastic and the parameter α specifies how
dominant the diagonal blocks are. Specifically, if α = 1, then
P′ is block diagonal and we call X completely decomposable.
Such a Markov chain is not irreducible. We hence look at
Markov chains X ∼ Mar(X , P) with
P = (1 − ε)P′ + εE
(29)
where ε ∈ [0, 1] and where E (which can be interpreted as
noise) is row stochastic and irreducible. For small values of
ε we call X nearly completely decomposable (NCD) if α is
close to one, otherwise we call it quasi-lumpable.
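A sketch of how the chains used in this experiment can be generated (NumPy; function names are ours): random row-stochastic blocks P′ij, a mixing matrix A as above, and additive noise E.

import numpy as np

def random_stochastic(rows, cols, rng):
    M = rng.random((rows, cols))
    return M / M.sum(axis=1, keepdims=True)

def quasi_lumpable_chain(block_sizes, alpha, eps, seed=0):
    # Build P = (1 - eps) P' + eps E with P' block stochastic as in (28)-(29).
    rng = np.random.default_rng(seed)
    M, N = len(block_sizes), sum(block_sizes)
    A = (1 - alpha) * random_stochastic(M, M, rng) + alpha * np.eye(M)
    offsets = np.cumsum([0] + list(block_sizes))
    P_prime = np.zeros((N, N))
    for i in range(M):
        for j in range(M):
            block = random_stochastic(block_sizes[i], block_sizes[j], rng)
            P_prime[offsets[i]:offsets[i + 1], offsets[j]:offsets[j + 1]] = A[i, j] * block
    E = random_stochastic(N, N, rng)     # strictly positive, hence irreducible
    return (1 - eps) * P_prime + eps * E

# e.g. an NCD instance with M = 3, N1 = N2 = 25, N3 = 50:
P = quasi_lumpable_chain([25, 25, 50], alpha=0.95, eps=0.4)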
We now perform experiments with these types of Markov
chains. We set M = 3, N1 = N2 = 25, and N3 = 50,
and chose the parameters from α ∈ {0, 0.5, 0.95} and ε ∈
{0, 0.4, 0.8}. For each pair (α, ε), we generated 250 random
matrices A′ and P′ij . A selection of the corresponding matrices
P is shown in Fig. 2.
Note that in practice the states of even a completely decomposable Markov chain X are rarely ordered such that the
transition probability matrix is block diagonal. Rather, the state
labeling must be assumed to be random. In this case, P is
obtained by a random permutation of the rows and columns
of a block diagonal matrix (see Fig. 2(d)), which prevents
the optimal aggregation function being “read off” simply by
looking at P. That P has a block structure in our case does
not affect the performance of our algorithms, since they 1) are
unaware of this structure and 2) are initialized randomly.
We applied our aggregation algorithm both with and without
the annealing procedure for β ∈ {0, 0.1, . . . , 0.9, 1} with the
goal of retrieving the partition {Xi }. We measure the success,
i.e., the degree to which the function g obtained from the
[Fig. 2 appears here. Panels: (a) α = ε = 0; (b) α = 0.95, ε = 0.8; (c) α = 0.95, ε = 0.4; (d) α = 0.95, ε = 0.4, rows and columns permuted; (e) cost with β-annealing, ε = 0.4; (f) cost without β-annealing, ε = 0.4; (g) ARI without β-annealing, ε = 0.4; (h) ARI with β-annealing, ε = 0; (i) ARI with β-annealing, ε = 0.4; (j) ARI with β-annealing, ε = 0.8. Legend: α = 0 (lumpable), α = 0.5, α = 0.95 (NCD).]
Fig. 2. (a)-(c): Colorplots of the transition probability matrices P for different values of α and ε. For large α the block diagonal structure becomes more
dominant. (d): A random permutation of the rows and columns hides the block structure. (e)-(j): Curves showing the cost function Cβ and the adjusted Rand
index (ARI) for different settings. Mean values (solid lines) are shown together with the standard deviation (shaded areas).
algorithm agrees with the partition {Xi }, using the adjusted
Rand index (ARI). An ARI of one indicates that the two
partitions are equivalent. Note that we always assume that the
number M of sets in the partition {Xi } is known.
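The ARI itself can be computed with scikit-learn; a minimal sketch (the found_partition below is a stand-in for the output of the aggregation algorithm):

import numpy as np
from sklearn.metrics import adjusted_rand_score

true_partition = np.repeat([0, 1, 2], [25, 25, 50])    # ground-truth blocks {X_i}
found_partition = np.repeat([2, 0, 1], [25, 25, 50])   # stand-in for the algorithm's output g
print(adjusted_rand_score(true_partition, found_partition))  # 1.0: identical up to relabelling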
The results are shown in Fig. 2. Specifically, Fig. 2(e) shows
that the cost for the aggregation found by our algorithm with
β-annealing decreases monotonically with decreasing β: We
obtain a partition for a given value of β. This partition has,
by assuming CP ≥ 2CL (cf. Section VII-A), an even lower
cost for a smaller value of β. Further optimization for this
smaller value of β further reduces the cost, leading to the
depicted phenomenon. In contrast, the sequential Algorithm 1
without the annealing procedure fails for values of β less than
0.5. This is apparent both in the cost in Fig. 2(f) (which has
a sharp jump around β = 0.5) and in the ARI in Fig. 2(g)
(which drops to zero). Apparently, the algorithm gets stuck in
a bad local optimum.
Figs. 2(h) to 2(j) show the ARI of the aggregations obtained
by our algorithm with β-annealing. First of all, it can be
seen that performance improves with increasing α, since
the dominant block structure makes the correct partition easier to discover. Moreover, it can be seen that for α = 0 the
optimum β lies at smaller values, typically smaller than 0.5.
The position of this optimum increases with increasing noise:
While in the noiseless case the correct partition is typically
obtained for β close to zero, in the highly noisy case of
ε = 0.8 we require β ≈ 0.4 to achieve good results. The
reason may be that the higher noise leads to more partitions
TABLE I
AGGREGATING A LETTER BI-GRAM MODEL. THE PARTITIONS ARE SHOWN TOGETHER WITH THE ARI W.R.T. THE REFERENCE PARTITION (FIRST ROW) FOR |Y| = 4.

Ref. (ARI –):      { },{!"$’(),-.:;?[]},{aeiou},{0123456789},{AEIOU},{BCDFGHJKLMNPQRSTVWYZ},{bcdfghjklmnpqrstvwxyz}
β = 1 (ARI 0.43):  { !’),-.0:;?]},{aeioy},{"$(123456789ABCDEFGHIJKLMNOPQRSTUVWY[h},{Zbcdfgjklmnpqrstuvwxz}
β = 0.8 (ARI 0.46): { !’),-.:;?Z]},{aeiouy},{"$(0123456789ABCDEFGHIJKLMNOPQRSTUVWY[h},{bcdfgjklmnpqrstvwxz}
β = 0.5 (ARI 0.35): { !3?Z},{’2456789AOUaeiou},{"$(-01BCDEFGHIJKLMNPQRSTVWY[bhjqw},{),.:;]cdfgklmnprstvxyz}
β = 0 (ARI 0.12):  { -2CEFMPSTcfgopst},{"’456789AOUZaeiu},{!$1?BDGHJLNQRVW[bhjklmqrvwz},{(),.03:;IKY]dnxy}

TABLE II
AGGREGATING A LETTER BI-GRAM MODEL. THE PARTITIONS ARE SHOWN TOGETHER WITH THE ARI W.R.T. THE REFERENCE PARTITION (FIRST ROW) FOR |Y| ∈ {2, 7}.

Ref. (ARI –):              { },{!"$’(),-.:;?[]},{aeiou},{0123456789},{AEIOU},{BCDFGHJKLMNPQRSTVWYZ},{bcdfghjklmnpqrstvwxyz}
β = 1, |Y| = 2 (ARI 0.2):   { !"’),-.01235689:;?KU]aehioy},{$(47ABCDEFGHIJLMNOPQRSTVWYZ[bcdfgjklmnpqrstuvwxz}
β = 1, |Y| = 7 (ARI 0.34):  { !’),-.:;?]},{aeioy},{"$(0123456789ABCDEFGHIJLMNOPQRSTVWY[},{bcfjmpqstw},{dgx},{KUh},{Zklnruvz}
β = 0.8, |Y| = 2 (ARI 0.24): { !"’),-.01235689:;?EU]aehiouy},{$(47ABCDFGHIJKLMNOPQRSTVWYZ[bcdfgjklmnpqrstvwxz}
β = 0.8, |Y| = 7 (ARI 0.35): { !’),-.:;?]},{aeioy},{$"(0123456789ABCDEFGHIJLMNOPQRSTUVWYZ[},{Kh},{bcfjkmpqstw},{dg},{lnruvxz}
β = 0.5, |Y| = 2 (ARI 0.15): { !’-12368?EOUZaeiou},{"$(),.04579:;ABCDFGHIJKLMNPQRSTVWY[]bcdfghjklmnpqrstvwxyz}
β = 0.5, |Y| = 7 (ARI 0.31): { ’},{!),.:;?]dy},{aeiou},{"$(-0123589ACEIMOPRSTUWZ},{BDFGHJKLNQVYhj},{7[bcfgkmpqstw},{46lnrvxz}
β = 0, |Y| = 2 (ARI 0.01):  { !$(-0124578?ABCFHLMNOPRSTUVWaceglnostuwxz},{"’),.369:;DEGIJKQYZ[]bdfhijkmpqrvy}
β = 0, |Y| = 7 (ARI 0.02):  { 4689ao},{$’AKOiux},{!?HVZhjkmvz},{"(-25CEFLMNRUWY[egnprs},{37BPQbl},{1:;STctw},{),.0DGIJ]dfy}
being quasi-lumpable by leading to an i.i.d. Y, hence for small
values of β one may get drawn into these “false solutions”
more easily. In contrast, for NCD Markov chains (i.e., for
α = 0.95) sometimes noise helps in discovering the correct
partition. Comparing Figs. 2(h) and 2(i), one can see that a
noise of ε = 0.4 allows us to perfectly discover the partition.
We believe that a small amount of noise helps in escaping bad
local minima.
The fact that the β for which the highest ARI is achieved does not necessarily coincide with the values 0, 0.5, or 1 indicates
that our generalized aggregation framework has the potential to
strictly outperform aggregation cost functions and algorithms
that have been previously proposed (cf. Section V).
C. An Example from Natural Language Processing
We took the letter bi-gram model from [23], which was
obtained by analyzing the co-occurrence of letters in F. Scott
Fitzgerald’s book “The Great Gatsby”. The text was modified
by removing chapter headings, line breaks, underscores, and
by replacing é by e. With the remaining symbols, we obtained
a Markov chain with an alphabet size of N = 76 (upper and
lower case letters, numbers, punctuation, etc.).
We applied Algorithm 2 for |Y| ∈ {2, . . . , 7} and β ∈
{0, 0.1, . . . , 0.9, 1}. To get consistent results, we restarted the
algorithm 20 times for β = 1 and chose the aggregation g
that minimized C1 (X, g); we used this aggregation g as an
initialization for the β-annealing procedure.
Looking at the results for |Y| = 4 in Table I, one can
observe that the results for β = 0.8 appear to be most
meaningful when compared to other values of β such as
β = 1 (information bottleneck), β = 0.5 (as proposed
in [5]), and β = 0 (as proposed in [7]). Specifically, for
β = 0 it can be seen that not even the annealing procedure
was able to achieve meaningful results. This conclusion is
supported by calculating the ARI of these aggregations for
a plausible reference aggregation of the alphabet into upper
case vowels, upper case consonants, lower case vowels, lower
case consonants, numbers, punctuation, and the blank space as
shown in the first row of the Table I. The absolute ARI values
are not a good performance indicator in this case since we are
comparing to a reference partition with seven sets whereas
|Y| = 4.
In Table II the same experiment is repeated for |Y| ∈ {2, 7}.
We again observe that β = 0.8 leads to the most meaningful
results which is also supported by ARI values.
D. Clustering via Markov Aggregation
Data points are often described only by pairwise similarity
values, and these similarity values can be used to construct the
transition probability matrix of a Markov chain. Then, with
this probabilistic interpretation, our information-theoretic cost
functions for Markov aggregation can be used for clustering.
This approach has been taken by [3], [4].
We considered two different data sets: three linearly separable clusters and three concentric circles, as shown in Fig. 3. The
three linearly separable clusters were obtained by placing 40,
20, and 40 points, drawn from circularly symmetric Gaussian
distributions with standard deviations 2.5, 0.5, and 1.5 at
horizontal coordinates -10, 0, and 10, respectively. The three
concentric circles were obtained by placing 40 points each at
uniformly random angles at radii {0.1, 7, 15}, and by adding
to each data point spherical Gaussian noise with a standard
deviation of 0.3. In both cases, we computed the transition
probability matrix P according to
Pi→j ∝ exp( −‖xi − xj ‖²₂ / σk )      (30)
where xi and xj are the coordinates of the i-th and j-th data points, ‖·‖²₂ is the squared Euclidean distance, and where σk is
a scale parameter. We set σk to the average squared Euclidean
distance between a data point and its k nearest neighbors (and
[Fig. 3 appears here. Panels: (a) P, k = 15; (b) β ∈ {0.2, 0.5, 0.8}, k = 15; (c) P, k = 120; (d) β = 0.2, k = 120; (e) β = 0.8, k = 120; (f) P, k = 15; (g) P, k = 100; (h) β ∈ {0.2, 0.5, 0.8}, k ∈ {15, 100}.]
Fig. 3. Clustering three circles (first row) and three linearly separable clusters (second row). For k = 15, the transition probability matrices (shown in (a)
and (f)) are nearly completely decomposable. The result for the three circles depends strongly on a careful setting of the parameters β and k ((b), (d), and
(e)), while the three linearly separable clusters were separated correctly for all parameter choices (h).
averaged this quantity over all data points). We set k either to
15 or to the total number of data points.
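A sketch of the construction (30), with σk chosen as just described; the function name and the NumPy implementation are ours.

import numpy as np

def similarity_transition_matrix(X, k):
    # Pairwise squared Euclidean distances between the data points.
    D2 = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    # sigma_k: squared distance to the k nearest neighbours (self excluded),
    # averaged over neighbours and over all data points.
    sigma_k = np.sort(D2, axis=1)[:, 1:k + 1].mean()
    W = np.exp(-D2 / sigma_k)                 # (30), up to row normalization
    return W / W.sum(axis=1, keepdims=True)   # row-stochastic P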
We applied our Algorithm 2 with the annealing procedure
for β. As in the previous experiment, we restarted the algorithm 50 times for β = 1 and chose the aggregation g
that minimized C1 (X, g); we used this aggregation g as an
initialization for the β-annealing procedure.
The results are shown in Fig. 3, together with a colorplot
of the respective transition probability matrices. It can be
seen that the three linearly separable clusters were detected
correctly for all chosen parameter values. This is not surprising
for k = 15, since in this case the resulting Markov chain
is nearly completely decomposable. Interestingly, though, the
same results were observed for k = 100 for which P is
structured, but not block diagonal. One may claim that these
results are due to Algorithm 2 getting stuck in a local optimum
for β = 1 which accidentally coincides with the correct
clustering, and that optimizing our cost function for values
of β larger than 0.5 but smaller than 1 may fail. Since we
reproduced these results by using Algorithm 1 (with 50 restarts
to escape bad local optima) for values of β greater than 0.5,
this claim can be refuted.
For the three concentric circles, things look different. We
correctly identified the clusters only for a nearly completely
decomposable P, i.e., for a careful setting of k (and we were
able to reproduce these results for β greater than 0.5 using
Algorithm 1). For k = 120, i.e., equal to the number of data
points, the three circles were not identified correctly.
Since we have reason to believe that the optimal k depends
strongly on the data set, we are hesitant to recommend this approach to cluster data points that are not linearly separable (in
which case a simpler method such as k-means would suffice).
Our preliminary analysis of [3] suggests that their approach
(in which X is a random walk on the k-nearest neighbor graph
of the data set and in which the authors chose β = 0.5) suffers
from similar problems. Finally, the authors of [4] suggest letting X “relax” to some metastable point, i.e., taking an r-th
power of P such that Pr is approximately a projection; their
approach is equivalent to ours for β = 1, with P replaced
by Pr . Nevertheless, this approach also requires setting r and
k for (30). Whether this relaxation to metastability can be
successfully combined with our generalized cost function for
Markov aggregation will be deferred to future investigations.
VIII. PROOFS
A. Proof of Proposition 1
Consider the relation Rε = {(g(x), x): x ∈ X }. It can be
shown that
∀T ⊆ X ∪ Y: Rε (T ) = g −1 (T ∩ Y) ⊆ X .
(31)
We thus need to show that, for all x and all B ⊆ Y,
Σ_{x′ ∈g−1 (B)} Px→x′ ≥ Σ_{y∈B} Qg(x)→y − ε.      (32)

Now let R = [Rx→y ] = PW, i.e., we have

Rx→y = Σ_{x′ ∈g−1 (y)} Px→x′ .      (33)
One can show along the lines of [7, (65)–(68)] that
CL (X, W) = Σ_{x∈X } µx D(Rx→· || Qg(x)→· ),   with   D(Rx→· || Qg(x)→· ) := Σ_{y∈Y} Rx→y log( Rx→y / Qg(x)→y ),      (34)

from which we get that, for every x,

D(Rx→· || Qg(x)→· ) ≤ CL (X, W) / minx µx .      (35)
With Pinsker’s inequality [22, Lemma 12.6.1] and [22, (12.137)] we thus get that, for every x and every B ⊆ Y,

Σ_{y∈B} Rx→y − Qg(x)→y ≤ √( ln(2) CL (X, W) / (2 minx µx ) ).      (36)

Combining this with (33) thus shows that (32) holds for

ε = √( ln(2) CL (X, W) / (2 minx µx ) ).      (37)
B. Proof of Lemma 3

We show that the derivative of δβ (X, W) w.r.t. β is positive. Indeed,

d/dβ δβ (X, W) = R̄(X) − R̄(Y) − D̄(Y||Ỹ)      (38)
= I(X1 ; X2 ) − H(Y ) + H̄(Y) − H(Y2 |Y1 ) + H̄(Y).      (39)

The entropy rate of the reversed process equals the entropy rate of the original process, i.e.,

H̄(Y) = lim_{n→∞} H(Yn |Y1^{n−1}) = lim_{n→∞} H(Y1 |Y2^n ).      (40)

We can now apply [22, Lem. 4.4.1] to both sides to get H̄(Y) ≥ H(Y2 |X1 ) and H̄(Y) ≥ H(Y1 |X2 ). We use this in the derivative to get

d/dβ δβ (X, W) ≥ I(X1 ; X2 ) − H(Y ) + H(Y1 |X2 ) − H(Y2 |Y1 ) + H(Y2 |X1 )      (41)
= H(X|Y ) − H(X1 |Y1 , X2 ) − H(Y2 |Y1 ) + H(Y2 |X1 )      (42)
= I(X1 ; X2 |Y1 ) − I(X1 ; Y2 |Y1 ) ≥ 0      (43)

by data processing.

C. Proof of Lemma 4

The first property follows by recognizing that

Cβ (X, W) = (1 − β)CL (X, W) + β(CP (X, W) − CL (X, W))      (44)

and that CP (X, W) ≥ CL (X, W).

The second property follows immediately from the definitions of δβ (X, W) and CP (X, W).

For the third property, note that

C1 (X, W) = CP (X, W) − CL (X, W) = I(X1 ; X2 ) − H(Y ) + H(Y2 |X1 ) = I(X1 ; X2 ) − I(X1 ; Y2 ) = I(X1 ; X2 |Y2 ).      (45)

The fourth property is obtained by observing that, if β ≤ 0.5,

δβ (X, W) − βI(X1 ; X2 )
= (1 − β)H(Y2 |Y1 ) − (1 − 2β)H̄(Y) − βH(Y )
≤ (1 − β)H(Y2 |Y1 ) − (1 − 2β)H(Y2 |X1 ) − βH(Y )
= (1 − 2β)H(Y2 |Y1 ) − (1 − 2β)H(Y2 |X1 ) − βI(Y1 ; Y2 )
= (1 − 2β)CL (X, W) − βI(Y1 ; Y2 ).

The inequality is reversed for β ≥ 0.5.

For the fifth property, we repeat the last steps with

−(1 − 2β)H̄(Y) ≤ −(1 − 2β)H(Y2 |Y1 ),

noticing that (1 − 2β) ≤ 0 if β ≥ 0.5. Again, the inequality is reversed for β ≤ 0.5.

If X is reversible, then the PMFs do not change if the order of the indices is reversed. As a consequence, we have I(X1 ; X2 |Y2 ) = I(X2 ; X1 |Y1 ) = C1 (X, W). But CL (X, W) = I(Y2 ; X1 |Y1 ) ≤ C1 (X, W) by data processing. Thus, the sixth property follows by noting that, with (44),

Cβ (X, W) = (1 − β)CL (X, W) + βC1 (X, W).

This completes the proof.
ACKNOWLEDGMENTS
The authors thank Matthias Rungger and Majid Zamani,
both from Hybrid Control Systems Group, Technical University of Munich, for discussions suggesting the connection
between lumpability and bisimulation. The work of Rana Ali
Amjad was supported by the German Ministry of Education
and Research in the framework of an Alexander von Humboldt
Professorship. The work of Bernhard C. Geiger was funded
by the Erwin Schrödinger Fellowship J 3765 of the Austrian
Science Fund.
REFERENCES
[1] M. A. Katsoulakis and J. Trashorras, “Information loss in coarsegraining of stochastic particle dynamics,” J. Stat. Phys., vol. 122, no. 1,
pp. 115–135, 2006.
[2] A. Abate, “Approximation metrics based on probabilistic bisimulations
for general state-space Markov processes: A survey,” Electronic Notes
in Theoretical Computer Science, vol. 297, pp. 3 – 25, 2013, Proc.
Workshop on Hybrid Autonomous Systems.
[3] A. Alush, A. Friedman, and J. Goldberger, “Pairwise clustering based on
the mutual-information criterion,” Neurocomputing, vol. 182, pp. 284–
293, 2016.
[4] N. Tishby and N. Slonim, “Data clustering by Markovian relaxation and
the information bottleneck method,” in Advances in Neural Information
Processing Systems (NIPS), Denver, CO, Nov. 2000. [Online]. Available:
http://citeseer.ist.psu.edu/viewdoc/summary?doi=10.1.1.24.3488
[5] K. Deng, P. G. Mehta, and S. P. Meyn, “Optimal Kullback-Leibler
aggregation via spectral theory of Markov chains,” IEEE Trans. Autom.
Control, vol. 56, no. 12, pp. 2793–2808, Dec. 2011.
[6] Y. Xu, S. M. Salapaka, and C. L. Beck, “Aggregation of graph models
and Markov chains by deterministic annealing,” IEEE Trans. Autom.
Control, vol. 59, no. 10, pp. 2807–2812, Oct. 2014.
[7] B. C. Geiger, T. Petrov, G. Kubin, and H. Koeppl, “Optimal KullbackLeibler aggregation via information bottleneck,” IEEE Trans. Autom.
Control, vol. 60, no. 4, pp. 1010–1022, Apr. 2015, open-access:
arXiv:1304.6603 [].
[8] M. Vidyasagar, “Reduced-order modeling of Markov and hidden Markov
processes via aggregation,” in Proc. IEEE Conf. on Decision and Control
(CDC), Atlanta, GA, Dec. 2010, pp. 1810–1815.
[9] J. G. Kemeny and J. L. Snell, Finite Markov Chains, 2nd ed. Springer,
1976.
[10] R. M. Gray, Entropy and Information Theory. New York, NY: Springer,
1990.
[11] Z. Rached, F. Alajaji, and L. L. Campbell, “The Kullback-Leibler
divergence rate between Markov sources,” IEEE Trans. Inf. Theory,
vol. 50, no. 5, pp. 917–921, May 2004.
[12] B. C. Geiger and C. Temmel, “Lumpings of Markov chains, entropy rate preservation, and higher-order lumpability,” J. Appl. Probab.,
vol. 51, no. 4, pp. 1114–1132, Dec. 2014, extended version available:
arXiv:1212.4375 [].
[13] D. Blackwell, “The entropy of functions of finite-state Markov chains,” in Trans. First Prague Conf. Inf. Theory, Statistical Decis. Funct., Random Process., held Liblice near Prague from November 28 to 30, 1956. Prague: Publishing House of the Czechoslovak Academy of Sciences, 1957, pp. 13–20.
[14] O. Ordentlich, “Novel lower bounds on the entropy rate of binary hidden
Markov processes,” in Proc. IEEE Int. Sym. on Information Theory
(ISIT), Jul. 2016, pp. 690–694.
[15] J. Desharnais, A. Edalat, and P. Panangaden, “Bisimulation for labelled
Markov processes,” Information and Computation, vol. 179, no. 2, pp.
163 – 193, 2002.
[16] G. Bian and A. Abate, “On the relationship between bisimulation and
trace equivalence in an approximate probabilistic context,” in Proc. Int.
Conf. on Foundations of Software Science and Computation Structure
(FOSSACS), J. Esparza and A. S. Murawski, Eds. Uppsala: Springer
Berlin Heidelberg, Apr. 2017, pp. 321–337.
[17] B. C. Geiger and R. A. Amjad, “Mutual information-based clustering:
Hard or soft?” in Proc. Int. ITG Conf. on Systems, Communications and Coding (SCC), Hamburg, Feb. 2017, pp. 1–6, open-access:
arXiv:1608.04872 [].
[18] N. Tishby, F. C. Pereira, and W. Bialek, “The information bottleneck
method,” in Proc. Allerton Conf. on Communication, Control, and
Computing, Monticello, IL, Sep. 1999, pp. 368–377.
[19] N. Slonim, “The information bottleneck: Theory and applications,” Ph.D.
dissertation, Hebrew University of Jerusalem, 2002.
[20] N. Slonim and N. Tishby, “Agglomerative information bottleneck,” in
Advances in Neural Information Processing Systems (NIPS), Denver,
CO, Nov. 1999, pp. 617–623.
[21] A. Blake and A. Zisserman, Visual Reconstruction. Cambridge, MA,
USA: MIT Press, 1987.
[22] T. M. Cover and J. A. Thomas, Elements of Information Theory, 1st ed.
Wiley Interscience, 1991.
[23] B. C. Geiger and Y. Wu, “Higher-order optimal Kullback-Leibler
aggregation of Markov chains,” in Proc. Int. ITG Conf. on Systems,
Communications and Coding (SCC), Hamburg, Feb. 2017, pp. 1–6,
open-access: arXiv:1608.04637 [].
| 7 |
A tool for stability and power sharing analysis of a generalized class of
droop controllers for high-voltage direct-current transmission systems
Daniele Zonetti, Romeo Ortega and Johannes Schiffer
arXiv:1609.03149v2 [] 20 Mar 2017
Abstract
The problem of primary control of high-voltage direct current transmission systems is addressed in this paper, which contains
four main contributions. First, to propose a new nonlinear, more realistic, model for the system suitable for primary control
design, which takes into account nonlinearities introduced by conventional inner controllers. Second, to determine necessary
conditions—dependent on some free controller tuning parameters—for the existence of equilibria. Third, to formulate additional
(necessary) conditions for these equilibria to satisfy the power sharing constraints. Fourth, to establish conditions for stability
of a given equilibrium point. The usefulness of the theoretical results is illustrated via numerical calculations on a four-terminal
example.
I. INTRODUCTION
For its correct operation, high-voltage direct current (hvdc) transmission systems—like all electrical power systems—must
satisfy a large set of different regulation objectives that are, typically, associated to the multiple time—scale behavior of the
system. One way to deal with this issue, that prevails in practice, is the use of hierarchical control architectures [1]–[3].
Usually, at the top of this hierarchy, a centralized controller called tertiary control—based on power flow optimization
algorithms (OPFs)—is in charge of providing the inner controllers with the operating point to which the system has to be
driven, according to technical and economical constraints [1]. If the tertiary control had exact knowledge of such constraints
and of the desired operating points of all terminals, then it would be able to formulate a nominal optimization problem and
the lower level (also called inner-loop) controllers could operate under nominal conditions. However, such exact knowledge
of all system parameters is impossible in practice, due to uncertainties and lack of information. Hence, the operating points
generated by the tertiary controller may, in general, induce unsuitable perturbed conditions. To cope with this problem
further control layers, termed primary and secondary control, are introduced. These take action—whenever a perturbation
occurs—by promptly adjusting the references provided by the tertiary control in order to preserve properties that are essential
for the correct and safe operation of the system. The present paper focuses on the primary control layer. Irrespectively of
the perturbation and in addition to ensuring stability, primary control has the task of preserving two fundamental criteria: a
prespecified power distribution (the so-called power sharing) and keeping the terminal voltages near the nominal value [4].
Both objectives are usually achieved by an appropriate control of the dc voltage of one or more terminals at their point of
interconnection with the hvdc network [2], [5], [6].
Clearly, a sine qua non requirement for the fulfillment of these objectives is the existence of a stable equilibrium point for
the perturbed system. The ever increasing use of power electronic devices in modern electrical networks, in particular the
presence of constant power devices (CPDs), induces a highly nonlinear behavior in the system—rendering the analysis of
existence and stability of equilibria very complicated. Since linear, inherently stable, models, are usually employed for the
description of primary control of dc grids [3], [6], [7], little attention has been paid to the issues of stability and existence
of equilibria. This fundamental aspect of the problem has only recently attracted the attention of power systems researchers
[8]–[10] who, similarly to the present work, invoke tools of nonlinear dynamic systems analysis, to deal with the intricacies
of the actual nonlinear behavior.
The main contributions and the organization of the paper are as follows. Section II is dedicated to the formulation—under
some reasonable assumptions—of a reduced, nonlinear model of an hvdc transmission system in closed-loop with standard
inner-loop controllers. In Section III a further model simplification, which holds for a general class of dc systems with short
lines configurations, is presented. A first implication is that both obtained models, which are nonlinear, may in general have
no equilibria. Then, we consider a generalized class of primary controllers, that includes the special case of the ubiquitous
voltage droop control, and establish necessary conditions on the control parameters for the existence of an equilibrium point.
This is done in Section IV. An extension of this result to the problem of existence of equilibria that verify the power sharing
property is carried out in Section V. A last contribution is provided in Section VI, with a (local) stability analysis of a known
equilibrium point, based on Lyapunov’s first method. The usefulness of the theoretical results is illustrated with a numerical
example in Section VII. We wrap-up the paper by drawing some conclusions and providing guidelines for future investigation.
D. Zonetti and R. Ortega are with the Laboratoire des Signaux et Systémes, 3, rue Joliot Curie, 91192 Gif-sur-Yvette, France.
[email protected], [email protected]
J. Schiffer is with the School of Electronic and Electrical Engineering, University of Leeds, Leeds LS2 9JT, UK, [email protected]
Notation. For a set N = {l, k, . . . , n} of, possibly unordered, elements, we denote with i ∼ N the elements i = l, k, . . . , n.
All vectors are column vectors. Given positive integers n, m, the symbol 0n ∈ Rn denotes the vector of all zeros, 0n×m
the n × m column matrix of all zeros, 1n ∈ Rn the vector with all ones and In the n × n identity matrix. When clear from
the context dimensions are omitted and vectors and matrices introduced above are simply denoted by the symbols 0, 1 or
I. For a given matrix A, the i-th column is denoted by Ai . Furthermore, diag{ai } is a diagonal matrix with entries ai ∈ R
and bdiag{Ai } denotes a block diagonal matrix with matrix-entries Ai . x := col(x1 , . . . , xn ) ∈ Rn denotes a vector with
entries xi ∈ R. When clear from the context it is simply referred to as x := col(xi ).
II. NONLINEAR MODELING OF HVDC TRANSMISSION SYSTEMS
A. A graph description
The main components of an hvdc transmission system are ac to dc power converters and dc transmission lines. The power
converters connect ac subsystems—that are associated to renewable generating units or to ac grids—to an hvdc network. In
[11] it has been shown that an hvdc transmission system can be represented by a directed graph1 without self-loops, where
the power units—i.e. power converters and transmission lines—correspond to edges and the buses correspond to nodes.
Hence, a first step towards the construction of a suitable model for primary control analysis and design is then the definition
of an appropriate graph description of the system topology that takes into account the primary control action.
We consider an hvdc transmission system described by a graph G ↑ (N , E), where n = c + 1 is the number of nodes, where
the additional node is used to model the ground node, and m = c+t is the number of edges, with c and t denoting the number
of converter and transmission units respectively. We implicitly assumed that transmission (interior) buses are eliminated via
Kron reduction [12]. We further denote by p the number of converter units not equipped with primary control—termed
PQ units hereafter—and by v the number of converter units equipped with primary control—that we call voltage-controlled
units, with c = p + v. To facilitate reference to different units we find it convenient to partition the set of converter nodes
(respectively converter edges) into two ordered subsets NP and NV (respectively EP and EV ) corresponding to P Q and
voltage-controlled nodes (respectively edges). The incidence matrix associated to the graph is given by:
Ip
0
BP
Iv
BV ∈ Rn×m ,
(II.1)
B= 0
>
0
−1
−1>
v
p
where the submatrices BP ∈ Rp×t and BV ∈ Rv×t fully capture the topology of the hvdc network with respect to the
different units.
B. Converter units
For a characterization of the converter units we consider power converters based on voltage source converter (VSC)
technology [13]. Since this paper focuses on primary control, we first provide a description of a single VSC in closed-loop
with the corresponding inner-loop controller. In hvdc transmission systems, the inner-loop controller is usually achieved
via a cascaded control scheme consisting of a current control loop whose setpoints are specified by an outer power loop
[14]. Moreover, such a control scheme employs a phase-locked-loop (PLL) circuit, which is a circuit that synchronizes an
oscillator with a reference sinusoidal input [15]. The PLL is thus locked to the phase a of the voltage vac,i (t) and allows,
under the assumption of balanced operation of the phases, to express the model in a suitable dq reference frame, upon
which the current and power loops are designed, see [16], [17] for more details on this topic. For these layers of control,
different strategies can be employed in practice. Amongst these, a technique termed vector control that consists of combining
feedback linearization and PI control is very popular, see [17]–[19] for an extensive overview on this control strategy. A
schematic description of the VSC and of the overall control architecture, which also includes, if any, the primary control
layer, is given in Fig. 1. As detailed above, the inner-loop control scheme is based on an appropriate dq representation of
the ac-side dynamics of the VSC, which for balanced operating conditions is given by the following second order dynamical
system [17]:
Li I˙d,i = −Ri Id,i + Li ωi Iq,i − dd,i vC,i + Vd,i
(II.2)
Li I˙q,i = −Li ωi Id,i − Ri Iq,i − dq,i vC,i + Vq,i
where Id,i ∈ R and Iq,i ∈ R denote the direct and quadrature currents, vC,i ∈ R+ denotes the dc voltage, dd,i ∈ R and
dq,i ∈ R denote the direct and quadrature duty ratios, Vd,i ∈ R and Vq,i ∈ R denote the direct and quadrature input voltages,
Li ∈ R+ and Ri ∈ R+ denote the (balanced) inductance and the resistance respectively. Moreover, the dc voltage dynamics
can be described by the following scalar dynamical system:
Ci v̇C,i = −Gi vC,i + ii + iC,i ,
ii := dd,i Id,i + dq,i Iq,i ,
(II.3)
1 A directed graph is an ordered 3-tuple, G ↑ = {N , E, Π}, consisting of a finite set of nodes N , a finite set of directed edges E and a mapping Π from
E to the set of ordered pairs of N .
Fig. 1: Control architecture of a three-phase voltage source converter that interfaces an ac subsystem—characterized by a
three–phase input voltage vac,i (t)—to an hvdc network—characterized by an ingoing dc current iC,i (t). Bold lines represent
electrical connections, while dashed lines represent signal connections [16].
where iC,i ∈ R denotes the current coming from the dc network, ii denotes the dc current injection via the VSC, Ci ∈ R+
and Gi ∈ R+ denote the capacitance and the conductance respectively. For a characterization of the power injections we
consider the standard definitions of instantaneous active and reactive power associated to the ac-side of the VSC, which are
given by [20], [21]:
Pi := Vd,i Id,i + Vq,i Iq,i , Qi := Vq,i Id,i − Vd,i Iq,i ,
(II.4)
while the dc power associated to the dc-side is given by:
PDC,i := vC,i ii .
(II.5)
We now make two standard assumptions on the design of the inner-loop controllers.
Assumption 2.1: Vq,i = Vq,i⋆ = 0, ∀t ≥ 0.
Assumption 2.2: All inner-loop controllers are characterized by stable current control schemes. Moreover, the employed
schemes guarantee instantaneous and exact tracking of the desired currents.
Assumption 2.1 can be legitimized by appropriate design of the PLL mechanism, which is designed to fix the dq
transformation angle so that the quadrature voltage is always kept zero after very small transients. Since a PLL usually
operates in a range of a few ms, which is smaller than the time scale at which the power loop evolves, these transients can
be neglected.
Similarly, Assumption 2.2 can be legitimized by an appropriate design of the current control scheme so that the resulting
closed-loop system is internally stable and has a very large bandwidth compared to the dc voltage dynamics and to the outer
loops. In fact, tracking of the currents is usually achieved in 10 − 50 ms, while dc voltage dynamics and outer loops evolve
at a much slower time-scale [1].
Under Assumption 2.1 and Assumption 2.2, from the stationary equations of the current dynamics expressed by (II.2), i.e. for İd,i⋆ = 0, İq,i⋆ = 0, we have that

dd,i⋆ = (1/vC,i ) ( −Ri Id,i⋆ + Li ωi Iq,i⋆ + Vd,i⋆ ),    dq,i⋆ = (1/vC,i ) ( −Li ωi Id,i⋆ − Ri Iq,i⋆ ),      (II.6)

where Id,i⋆ and Iq,i⋆ denote the controlled dq currents (the dynamics of which are neglected under Assumption 2.2), while Vd,i⋆ denotes the corresponding direct voltage on the ac-side of the VSC. By substituting (II.6) into (II.3) and recalling the
definition of active power provided in (II.4), the controlled dc current can thus be expressed as
ii⋆ = [ Vd,i⋆ Id,i⋆ − Ri (Id,i⋆ )² − Ri (Iq,i⋆ )² ] / vC,i = (Pi⋆ − Di⋆ ) / vC,i ,      (II.7)

where

Pi⋆ := Vd,i⋆ Id,i⋆ ,    Di⋆ := Ri ( (Id,i⋆ )² + (Iq,i⋆ )² )      (II.8)
denote respectively the controlled active power on the ac-side and the power dissipated internally by the converter. We then
make a further assumption.
Assumption 2.3: Di⋆ = 0.
Assumption 2.3 can be justified by the high efficiency of the converter, i.e. by the small values of the balanced three-phase resistance R, which yield Di⋆ ≈ 0. Hence, by replacing (II.7) into (II.3) and using the definitions (II.8), we obtain the following scalar dynamical system [21]:

Ci v̇C,i = −Gi vC,i + (Vd,i⋆ / vC,i ) Id,i⋆ + iC,i      (II.9)
with i ∼ EP ∪ EV , which describes the dc-side dynamics of a VSC under assumptions 2.1, 2.2 and 2.3. By taking (II.9) as a
point of departure, we next derive the dynamics of the current-controlled VSCs in closed-loop with the outer power control.
If the unit is a PQ unit, the current references are simply determined by the outer power loop via (II.4) with constant
active power Pj^ref and reactive power Qj^ref, which, noting that Vq,j⋆ = 0, are given by:

Id,j⋆ = Pj^ref / Vd,j⋆ ,    Iq,j⋆ = −Qj^ref / Vd,j⋆ ,      (II.10)

with j ∼ EP , which replaced into (II.9) gives

Cj v̇C,j = −Gj vC,j + uj (vC,j ) + iC,j ,      (II.11)

with the new current variable uj and the dc voltage vC,j verifying the hyperbolic constraint Pj^ref = vC,j uj , j ∼ EP . Hence, a PQ unit can be approximated, with respect to its power behavior, by a constant power device of value PP,j^ref := Pj^ref,
see also Fig. 2a. On the other hand, if the converter unit is a voltage-controlled unit, the current references are modified
according to the primary control strategy. A common approach in this scenario is to introduce an additional deviation (also
called droop) in the direct current reference—obtained from the outer power loop—as a function of the dc voltage, while
keeping the calculation of the reference of the quadrature current unchanged:
Id,k⋆ = Pk^ref / Vd,k⋆ + δk (vC,k ),    Iq,k⋆ = −Qk^ref / Vd,k⋆ ,      (II.12)

with k ∼ EV and where δk (vC,k ) represents the state-dependent contribution provided by the primary control. We propose the primary control law:

δk (vC,k ) = (1/Vd,k⋆ ) ( µP,k + µI,k vC,k + µZ,k vC,k² ),      (II.13)

with k ∼ EV and where µP,k , µI,k , µZ,k ∈ R are free control parameters. By replacing (II.12)-(II.13) into (II.9), we obtain

Ck v̇C,k = −(Gk − µZ,k ) vC,k + µI,k + uk (vC,k ) + iC,k ,      (II.14)

with the new current variable uk and the dc voltage vC,k verifying the hyperbolic constraint Pk^ref + µP,k = vC,k uk , k ∼ EV .
Moreover, with Assumption 2.3 the injected dc power is given by:
PDC,k (vC,k ) = PV,k^ref + µI,k vC,k + µZ,k vC,k² ,      (II.15)

with

PV,k^ref := Pk^ref + µP,k ,

from which it follows, with the control law (II.13), that a voltage-controlled unit can be approximated, with respect to its
(a) Equivalent circuit scheme for PQ units.
(b) Equivalent circuit scheme for voltage-controlled units.
Fig. 2: Equivalent circuit schemes of the converter units with constant power devices (CPDs), under Assumption 2.2.
power behavior, by a ZIP model, i.e. the parallel connection of a constant impedance (Z), a constant current source/sink
(I) and a constant power device (P). More precisely—see also Fig. 2b—the parameters PV,k^ref , µI,k and µZ,k represent the
constant power, constant current and constant impedance of the ZIP model. Finally, the dynamics of the PQ units can be
represented by the following scalar systems:
Cj v̇C,j = −Gj vC,j + uj + iC,j ,    0 = PP,j^ref − vC,j uj ,

while for the dynamics of the voltage-controlled units we have:

Ck v̇C,k = −(Gk − µZ,k ) vC,k + µI,k + uk + iC,k ,    0 = PV,k^ref − vC,k uk ,
with j ∼ EP , k ∼ EV and where vC,j , vC,k ∈ R+ denote the voltages across the capacitors, iC,j , iC,k ∈ R denote the
network currents, uj , uk ∈ R denote the currents flowing into the constant power devices, Gj ∈ R+ , Gk ∈ R+ , Cj ∈ R+ ,
Ck ∈ R+ denote the conductances and capacitances. The aggregated model is then given by:
col(CP v̇P , CV v̇V ) = −bdiag{GP , GV + GZ } col(vP , vV ) + col(uP , uV ) + col(0, ūV ) + col(iP , iV ),      (II.16)

together with the algebraic constraints:

PP,j^ref = vP,j uP,j ,    PV,k^ref = vV,k uV,k ,

with j ∼ EP , k ∼ EV and the following definitions.
- State vectors
vP := col(vC,j ) ∈ Rp ,
vV := col(vC,k ) ∈ Rv .
- Network ingoing currents
iP := col(iC,j ) ∈ Rp ,
iV := col(iC,k ) ∈ Rv .
uP := col(uj ) ∈ Rp ,
uV := col(uk ) ∈ Rv .
- Units ingoing currents
- External sources ūV := col(µI,k ) ∈ Rv .
- Matrices
CP : = diag{Cj } ∈ Rp×p , CV := diag{Ck } ∈ Rv×v ,
GP : = diag{Gj } ∈ Rp×p , GV := diag{Gk } ∈ Rv×v , GZ := diag{−µZ,k } ∈ Rv×v .
C. Interconnected model
For the model derivation of the hvdc network we assume that the dc transmission lines can be described by standard,
single-cell π-models. However, it should be noted that at each converter node the line capacitors will result in a parallel
connection with the output capacitor of the converter [22]. Hence, the capacitors at the dc output of the converter can be
replaced by equivalent capacitors and the transmission lines described by simpler RL circuits, for which it is straightforward
to obtain the aggregated model [11]:
(II.17)
LT i̇T = −RT iT + vT ,
with iT := col(iT,i ) ∈ Rt , vT := col(vT,i ) ∈ Rt denoting the currents through and the voltages across the lines and
LT := col(LT,i ) ∈ Rt×t , RT := col(RT,i ) ∈ Rt×t denoting the inductance and resistance matrices. In order to obtain the
reduced, interconnected model of the hvdc transmission system under Assumption 2.2, we need to consider the interconnection
laws determined by the incidence matrix (II.1). Let us define the node and edge vectors:
Vn := col(VP , VV , 0) ∈ Rc+1 ,    Ve := col(vP , vV , vT ) ∈ Rm ,    Ie := col(iP , iV , iT ) ∈ Rm .
By using the definition of the incidence matrix (II.1) together with the Kirchhoff’s current and voltage laws given by [23],
[24]:
BIe = 0,
Ve = B⊤ Vn ,
we obtain:
iP = −BP iT ,    iV = −BV iT ,    vT = BP⊤ vP + BV⊤ vV .      (II.18)

Replacing iP and iV in (II.16) and vT in (II.17) leads to the interconnected model:

col(CP v̇P , CV v̇V , LT i̇T ) = [ −GP     0           −BP
                                  0       −GV − GZ    −BV
                                  BP⊤     BV⊤         −RT ] col(vP , vV , iT ) + col(uP , uV + ūV , 0),      (II.19)
together with the algebraic constraints:
PP,j^ref = vP,j uP,j ,    PV,k^ref = vV,k uV,k ,      (II.20)

with j ∼ EP , k ∼ EV .
Remark 2.4: With the choice
µP,k = 0,    µI,k = dk vC^nom ,    µZ,k = −dk ,

the primary control (II.13) reduces to:

δk (vC,k ) = −(dk / Vd,k⋆ ) (vC,k − vC^nom ),

while the injected current is simply given by

ik⋆ = (Vd,k⋆ / vC,k ) Id,k⋆ = Pk^ref / vC,k − dk (vC^nom − vC,k ),

with k ∼ EV . This is exactly the conventional, widely used, voltage droop control [2], [6], [25], where dk is called the droop coefficient and vC^nom is the nominal voltage of the hvdc system. The conventional droop control can be interpreted as an
appropriate parallel connection of a current source with an impedance, which is put in parallel with a constant power device,
thus resulting in a ZIP model. A similar model is encountered in [4] and should be contrasted with the models provided in
[3], [7], where the contribution of the constant power device is absent.
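As a small numerical illustration (helper names are ours, not part of the paper), the mapping of Remark 2.4 and the resulting current injection read:

def droop_to_zip(d_k, v_nom):
    # Conventional droop as the special case (mu_P, mu_I, mu_Z) of (II.13), cf. Remark 2.4.
    return 0.0, d_k * v_nom, -d_k

def droop_current(P_ref, d_k, v_nom, v_C):
    # Injected dc current i_k* = P_k^ref / v_C - d_k (v_C^nom - v_C).
    return P_ref / v_C - d_k * (v_nom - v_C)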
Remark 2.5: A peculiarity of hvdc transmission systems with respect to generalized dc grids is the absence of traditional
loads. Nevertheless, the aggregated model of the converter units (II.16) can be still employed for the modeling of dc grids
with no loss of generality, under the assumption that loads can be represented either by PQ units (constant power loads) or
by voltage-controlled units with assigned parameters (ZIP loads). This model should be contrasted with the linear models
adopted in [3], [7] for dc grids, where loads are modeled as constant current sinks.
III. A REDUCED MODEL FOR GENERAL DC SYSTEMS WITH SHORT LINES CONFIGURATIONS
Since hvdc transmission systems are usually characterized by very long, i.e. dominantly inductive, transmission lines, there
is no clear time-scale separation between the dynamics of the power converters and the dynamics of the hvdc network. This
fact should be contrasted with traditional power systems—where a time-scale separation typically holds because of the very
slow dynamics of generation and loads compared to those of transmission lines [26]—and microgrids—where a time-scale
separation is justified by the short length, and consequently fast dynamics, of the lines [27]. Nevertheless, as mentioned
in Remark 2.5, the model (II.19)-(II.20) is suitable for the description of a very general class of dc grids. By taking this
model as a point of departure, we thus introduce a reduced model that is particularly appropriate for the description of
a special class of dc grids, i.e. dc grids with short lines configurations. This class includes, among the others, the widely
popular case of dc microgrids [28] and the case of hvdc transmission systems with back-to-back configurations [29]. For
these configurations, we can then make the following assumption.
Assumption 3.1: The dynamics of the dc transmission lines evolve on a time-scale that is much faster than the time-scale
at which the dynamics of the voltage capacitors evolve.
Under Assumption 3.1, (II.17) reduces to:
iT ≡ iT⋆ = GT vT ,      (III.1)

where iT⋆ is the steady-state vector of the line currents and GT := RT^{−1} the conductance matrix of the transmission lines.
By replacing the expression (III.1) into (II.19) we finally obtain:
col(CP v̇P , CV v̇V ) = −[ LP + GP    Lm
                           Lm⊤        LV + GV + GZ ] col(vP , vV ) + col(uP , uV ) + col(0, ūV ),      (III.2)

together with the algebraic constraints (II.20) and where we defined

LP := BP GT BP⊤ ,    Lm := BP GT BV⊤ ,    LV := BV GT BV⊤ .

Remark 3.2: The matrix:

L := [ LP     Lm
       Lm⊤    LV ] ∈ Rc×c
is the Laplacian matrix associated to the weighted undirected graph Ḡ w , obtained from the (unweighted directed) graph G ↑
that describes the hvdc transmission system by: 1) eliminating the reference node and all edges connected to it; 2) assigning
as weights of the edges corresponding to transmission lines the values of their conductances. Similar definitions are also
encountered in [3], [7].
IV. CONDITIONS FOR EXISTENCE OF AN EQUILIBRIUM POINT
From an electrical point of view, the reduced system (II.19)-(II.20) is a linear RLC circuit, where at each node a constant
power device is attached. It has been observed in experiments and simulations that the presence of constant power devices
may seriously affect the dynamics of these circuits hindering the achievement of a constant, stable behavior of the state
variables—the dc voltages in the present case [10], [30]–[32]. A first objective is thus to determine conditions on the free
control parameters of the system (II.19)-(II.20) for the existence of an equilibrium point. Before presenting the main result
of this section, we make an important observation: since the steady-state of the system (II.19)-(II.20) is equivalent to the
steady-state of the system (III.2)-(II.20), the analysis of existence of an equilibrium point follows verbatim. Based on this
consideration, in the present section we will only consider the system (III.2)-(II.20), bearing in mind that the same results
hold for the system (II.19)-(II.20). To simplify the notation, we define
PP^ref := col(PP,j^ref ) ∈ Rp ,    RP := LP + GP ∈ Rp×p ,
PV^ref := col(PV,k^ref ) ∈ Rv ,    RV := LV + GV + GZ ∈ Rv×v .      (IV.1)
Furthermore, we recall the following lemma, the proof of which can be found in [10].
Lemma 4.1: Consider m quadratic equations of the form fi : Rn → R,
fi (x) := (1/2) x⊤ Ai x + x⊤ Bi ,    i ∈ [1, m],      (IV.2)

where Ai = Ai⊤ ∈ Rn×n , Bi ∈ Rn , ci ∈ R and define:

A(T ) := Σ_{i=1}^{m} ti Ai ,    B(T ) := Σ_{i=1}^{m} ti Bi ,    C(T ) := Σ_{i=1}^{m} ti ci .

If the following LMI

Υ(T ) := [ A(T )      B(T )
           B⊤ (T )    −2C(T ) ] > 0

is feasible, then the equations

fi (x) = ci ,    i ∈ [1, m],      (IV.3)

have no solution.
We are now ready to formulate the following proposition, that establishes necessary, control parameter-dependent, conditions for the existence of equilibria of the system (III.2)-(II.20).
Proposition 4.2: Consider the system (III.2)-(II.20), for some given PPref ∈ Rp , PVref ∈ Rv . Suppose that there exist two
diagonal matrices TP ∈ Rp×p and TV ∈ Rv×v such that:
Υ(TP , TV ) > 0,
(IV.4)
with
Υ := [ TP RP + RP TP    TP Lm + Lm TV     0
       ⋆                TV RV + RV TV     −TV ūV
       ⋆                ⋆                 −2(1p⊤ TP PP^ref + 1v⊤ TV PV^ref ) ],

where PP^ref , PV^ref , RP and RV are defined in (IV.1). Then the system (III.2)-(II.20) does not admit an equilibrium point.
Proof: First of all, by setting the left-hand side of the differential equations in (III.2) to zero and using (IV.1), we have:

0 = −RP vP⋆ − Lm vV⋆ + uP⋆ ,
0 = −Lm⊤ vP⋆ − RV vV⋆ + uV⋆ + ūV .

Left-multiplying the first and second set of equations by vP,j⋆ and vV,k⋆ respectively, with j ∼ EP , k ∼ EV , we get

PP,j^ref = vP,j⋆ RP,j⊤ vP⋆ + vP,j⋆ Lm,j⊤ vV⋆ ,
PV,k^ref = vV,k⋆ Lm,k⊤ vP⋆ + vV,k⋆ RV,k⊤ vV⋆ − vV,k⋆ ūV,k ,

which, after some manipulations, gives

ci = (1/2)(v⋆ )⊤ Ai v⋆ + (v⋆ )⊤ Bi ,      (IV.5)

with i ∼ EP ∪ EV , v⋆ := col(vP⋆ , vV⋆ ) ∈ Rc and

Ai := ei ei⊤ [ RP    Lm
               Lm⊤   RV ] + [ RP    Lm
                              Lm⊤   RV ] ei ei⊤ ,    Bi := −ei ei⊤ col(0, ūV ),    ci := ei⊤ col(PP^ref , PV^ref ).

Let us consider the map f (v⋆ ): Rc → Rc with components

fi (v⋆ ) = (1/2)(v⋆ )⊤ Ai v⋆ + (v⋆ )⊤ Bi ,

with i ∼ EP ∪ EV , and denote by F the image of Rc under this map. The problem of solvability of these equations can then be formulated as in Lemma 4.1: if the LMI (IV.4) holds, then col(ci ) is not in F , thus completing the proof.
Remark 4.3: Note that the feasibility of the LMI (IV.4) depends on the system topology reflected in the Laplacian matrix
L and on the system parameters, among which GZ , ūV and PVref are free (primary) control parameters. Since the feasibility
condition is only necessary for the existence of equilibria for (II.19), it is of interest to determine regions for these parameters
that imply non-existence of an equilibrium point.
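The LMI (IV.4) can be checked numerically; the sketch below uses CVXPY with an SDP-capable solver, replaces the strict inequality by a small margin, and assembles Υ following the block structure of Proposition 4.2 (all function and variable names are our own choices, not part of the paper).

import numpy as np
import cvxpy as cp

def no_equilibrium_certificate(RP, RV, Lm, u_bar, PP_ref, PV_ref, margin=1e-6):
    # Search for diagonal TP, TV such that the LMI (IV.4) holds.
    # If feasible, Proposition 4.2 certifies that no equilibrium exists.
    p, v = RP.shape[0], RV.shape[0]
    tP, tV = cp.Variable(p), cp.Variable(v)
    TP, TV = cp.diag(tP), cp.diag(tV)
    b11 = TP @ RP + RP.T @ TP
    b12 = TP @ Lm + Lm @ TV
    b13 = np.zeros((p, 1))
    b22 = TV @ RV + RV.T @ TV
    b23 = -cp.reshape(TV @ u_bar, (v, 1))
    b33 = cp.reshape(-2 * (tP @ PP_ref + tV @ PV_ref), (1, 1))
    Ups = cp.bmat([[b11, b12, b13],
                   [b12.T, b22, b23],
                   [b13.T, b23.T, b33]])
    S = (Ups + Ups.T) / 2          # symmetrize so the PSD constraint is well-posed
    prob = cp.Problem(cp.Minimize(0), [S >> margin * np.eye(p + v + 1)])
    prob.solve()
    feasible = prob.status in ("optimal", "optimal_inaccurate")
    return feasible, tP.value, tV.value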
V. CONDITIONS FOR POWER SHARING
As already discussed, another control objective of primary control is the achievement of power sharing among the voltage-controlled units. This property consists in guaranteeing an appropriate (proportional) power distribution among these units in steady-state. We next show that it is possible to reformulate such a control objective as a set of quadratic constraints on the equilibrium point, assuming that it exists. Since it is a steady-state property, the same observation made in Section IV applies,
which means that the results obtained for the system (III.2)-(II.20) also hold for the system (II.19)-(II.20). We introduce the
following definition.
Definition 5.1: Let v^⋆ := (v_P^⋆, v_V^⋆) ∈ R^c and P_{DC,V}(v^⋆) := col(P_{DC,k}(v_{C,k}^⋆)) ∈ R^v be, respectively, an equilibrium point for the system (III.2)-(II.20) and the collection of injected powers as defined by (II.15), and let Γ := diag{γ_k} ∈ R^{v×v} be a positive definite matrix. Then v^⋆ is said to possess the power sharing property with respect to Γ if:

    Γ P_{DC,V}(v^⋆) = 1_v.        (V.1)
Then we have the following lemma.
Lemma 5.2: Let v^⋆ = (v_P^⋆, v_V^⋆) ∈ R^c be an equilibrium point for (III.2)-(II.20) and Γ := diag{γ_k} ∈ R^{v×v} a positive definite matrix. Then v^⋆ possesses the power sharing property with respect to Γ if and only if the quadratic equations

    (1/2) (v^⋆)^T A_k^ps v^⋆ + (B_k^ps)^T v^⋆ = p_k^ps,        (V.2)

with k ∼ E_V and where

    A_k^ps := 2 e_k e_k^T [ 0, 0; 0, Γ G_Z ],        B_k^ps := e_k e_k^T [ 0; Γ ū_V ],        p_k^ps := e_k^T [ 0; Γ P_V^ref ],

admit a solution.
Proof: From (V.1) we have that, by definition,

    γ_k P_{DC,k}(v_{C,k}^⋆) = 1,

with k ∼ E_V, which, by recalling (II.15), is equivalent to:

    γ_k (P_{V,k}^ref + µ_{I,k} v_{C,k} + µ_{Z,k} v_{C,k}^2) = 1.

After some straightforward manipulations, the above equalities can be rewritten as (V.2), completing the proof.
An immediate implication of this lemma is given in the following proposition, which establishes necessary conditions for
the existence of an equilibrium point that verifies the power sharing property.
Proposition 5.3: Consider the system (III.2)-(II.20), for some given PPref , PVref and Γ. Suppose that there exist three
diagonal matrices TP ∈ Rp×p , TV ∈ Rv×v , TVps ∈ Rv×v , such that:
    Υ(T_P, T_V) + Υ^ps(T_V^ps) > 0,        (V.3)

with

    Υ^ps := [ 0, 0, 0;
              ⋆, 2 T_V^ps Γ G_Z, T_V^ps Γ ū_V;
              ⋆, ⋆, −2 T_V^ps (1_v − Γ P_V^ref) ].
Then the system (III.2)-(II.20) does not admit an equilibrium point that verifies the power sharing property.
Proof: The proof is similar to the proof of Proposition 4.2. By using Lemma 5.2 the power sharing constraints can be
indeed rewritten as quadratic equations, similarly to (IV.5). Hence, it suffices to apply Lemma 4.1 to the quadratic equations
(IV.5), (V.2) to complete the proof.
VI. CONDITIONS FOR LOCAL ASYMPTOTIC STABILITY
We now present a result on stability of a given equilibrium point for the system (II.19)-(II.20). The result is obtained by
applying Lyapunov’s first method.
Proposition 6.1: Consider the system (II.19)-(II.20) and assume that v^⋆ = (v_P^⋆, v_V^⋆, i_T^⋆) ∈ R^m is an equilibrium point. Let

    G_P^⋆ := diag{ P_{P,j}^ref / (v_{P,j}^⋆)^2 } ∈ R^{p×p},        G_V^⋆ := diag{ P_{V,k}^ref / (v_{V,k}^⋆)^2 } ∈ R^{v×v},        (VI.1)

and

    J(v^⋆) := [ −C_P^{-1}(G_P + G_P^⋆), 0, −C_P^{-1} B_P;
                0, −C_V^{-1}(G_V + G_V^⋆), −C_V^{-1} B_V;
                L_T^{-1} B_P^T, L_T^{-1} B_V^T, −L_T^{-1} R_T ].

Then:
- if all eigenvalues λ_i of J are such that Re{λ_i[J(v^⋆)]} < 0, the equilibrium point v^⋆ is locally asymptotically stable;
- if there exists at least one eigenvalue λ_i of J such that Re{λ_i[J(v^⋆)]} > 0, the equilibrium point v^⋆ is unstable.
Proof: The first-order approximation of the system (II.19)-(II.20) around v ? is given by:
∂iP
0
0
CP v̇P
−GP
0
−BP
vP
vP
∂vP v ?
∂iV
CV v̇V = 0
−GV −BV vV + 0
v
0
V
∂vV v ?
iT
iT
BP>
BV> −RT
LT i̇T
0
0
0
(VI.2)
Differentiating (II.20) with respect to vP , vV , yields:
0p×p =
∂iP
· diag{vP,j } + diag{iP,j },
∂vP
0v×v =
∂iV
· diag{vV,k } + diag{iV,k }.
∂vV
By using (VI.1), it follows that
∂iP
∂vP
= −G?P ,
v?
∂iV
∂vV
= −G?V .
v?
The proof is completed by replacing into (VI.2) and invoking Lyapunov’s first method.
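Proposition 6.1 reduces the local stability test to an eigenvalue computation. A minimal numerical sketch (ours, not from the paper), assuming the system matrices C_P, C_V, L_T, G_P, G_V, R_T, B_P, B_V and the equilibrium voltages are available as NumPy arrays:

    import numpy as np

    def local_stability(CP, CV, LT, GP, GV, RT, BP, BV,
                        PP_ref, PV_ref, vP_star, vV_star, tol=1e-9):
        # Build G_P^* and G_V^* as in (VI.1) and the Jacobian J(v^*) of Proposition 6.1,
        # then classify the equilibrium via Lyapunov's first method.
        GP_star = np.diag(PP_ref / vP_star**2)
        GV_star = np.diag(PV_ref / vV_star**2)
        CP_i, CV_i, LT_i = np.linalg.inv(CP), np.linalg.inv(CV), np.linalg.inv(LT)
        p, v = BP.shape[0], BV.shape[0]
        J = np.block([
            [-CP_i @ (GP + GP_star), np.zeros((p, v)),       -CP_i @ BP],
            [np.zeros((v, p)),       -CV_i @ (GV + GV_star), -CV_i @ BV],
            [LT_i @ BP.T,            LT_i @ BV.T,            -LT_i @ RT],
        ])
        re = np.linalg.eigvals(J).real
        if np.all(re < -tol):
            return "locally asymptotically stable"
        if np.any(re > tol):
            return "unstable"
        return "inconclusive (eigenvalue close to the imaginary axis)"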
VII. AN ILLUSTRATIVE EXAMPLE
In order to validate the results on existence of equilibria and power sharing for the system (II.19)-(II.20) we next provide
an illustrative example. Namely, we consider the four-terminal hvdc transmission system depicted in Fig. 3, the parameters
of which are given in Table I.
TABLE I: System parameters.
    G_i = 0 Ω^{-1}          C_i = 20 µF
    P_{V,1}^⋆ = 30 MW       G_{12} = 0.1 Ω^{-1}
    P_{P,2}^⋆ = −20 MW      G_{14} = 0.15 Ω^{-1}
    P_{V,3}^⋆ = 9 MW        G_{23} = 0.11 Ω^{-1}
    P_{P,4}^⋆ = −24 MW      G_{24} = 0.08 Ω^{-1}

[Fig. 3: Four-terminal hvdc transmission system (terminals 1-4).]
Since c = t = 4, the graph associated to the hvdc system has n = 4 + 1 = 5 nodes and m = 4 + 4 = 8 edges. We then
make the following assumptions.
- Terminal 1 and Terminal 3 are equipped with primary control, from which it follows that there are p = 2 PQ units and
v = 2 voltage-controlled units. More precisely we take
    δ_k(v_{C,k}) = −(d_k / V_{d,k}^⋆) (v_{C,k} − v_C^nom),        k ∈ {1, 3}.

This is the well-known voltage droop control, where d_k is a free control parameter, while v_C^nom is the nominal voltage of the hvdc system, see also Remark 2.4.
- The power has to be shared equally among terminal 1 and terminal 3, from which it follows that Γ = I2 in Definition
5.1.
The next results are obtained by investigating the feasibility of the LMIs (IV.4), (V.3) as a function of the free control
parameters d1 and d3 . For this purpose, CVX, a package for specifying and solving convex programs, has been used to
solve the semidefinite programming feasibility problem [33]. By using a gridding approach, the regions of the (positive)
parameters that guarantee feasibility (yellow) and unfeasibility (blue) of the LMI (IV.4) are shown in Fig. 4, while in Fig. 5
the same is done with respect to the LMI (V.3). We deduce that a necessary condition for the existence of an equilibrium
point is that the control parameters are chosen inside the blue region of Fig. 4. Similarly, a necessary condition for the
existence of an equilibrium point that further possesses the power sharing property is that the control parameters are chosen
inside the blue region of Fig. 5.
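The paper performs these feasibility tests with CVX in Matlab; an analogous gridding loop can be written, for instance, in Python. The sketch below is illustrative only: lmi_feasible is a hypothetical helper that assembles Υ (or Υ + Υ^ps) for given droop gains and calls an SDP solver, for example along the lines of the certificate routine sketched after Lemma 4.1, and the grids mimic the figure axes d_1·V_{d,1} and d_3·V_{d,3}.

    import numpy as np

    def feasibility_map(lmi_feasible, d1_grid, d3_grid):
        # Evaluate the LMI feasibility on a rectangular grid of droop gains (d1, d3).
        # 'lmi_feasible(d1, d3)' is assumed to return True when the LMI is feasible,
        # i.e. when no (power-sharing) equilibrium can exist for those gains.
        feasible = np.zeros((len(d3_grid), len(d1_grid)), dtype=bool)
        for i, d3 in enumerate(d3_grid):
            for j, d1 in enumerate(d1_grid):
                feasible[i, j] = lmi_feasible(d1, d3)
        return feasible  # plot with, e.g., matplotlib's pcolormesh to reproduce Figs. 4-5

    d1_grid = np.linspace(0.0, 12000.0, 60)
    d3_grid = np.linspace(0.0, 7000.0, 60)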
VIII. CONCLUSIONS AND FUTURE WORKS
In this paper, a new nonlinear model for primary control analysis and design has been derived. Primary control laws
are described by equivalent ZIP models, which include the standard voltage droop control as a special case. A necessary
condition for the existence of equilibria in the form of an LMI—which depends on the parameters of the controllers—is
established, thus showing that an inappropriate choice of the latter may lead to non-existence of equilibria for the closed-loop
system. The same approach is extended to the problem of existence of equilibria that verify a pre-specified power sharing
property. The obtained necessary conditions can be helpful to system operators to tune their controllers such that regions
[Fig. 4: Feasibility regions of the LMI (IV.4) on the plane (d_1, d_3) of droop control parameters; axes d_1·V_{d,1} (0 to 12000) and d_3·V_{d,3} (0 to 7000). Regions are yellow-coloured if the LMI is feasible and blue-coloured if the LMI is unfeasible.]
[Fig. 5: Feasibility regions of the LMI (V.3) on the plane (d_1, d_3) of droop control parameters; axes d_1·V_{d,1} (0 to 12000) and d_3·V_{d,3} (0 to 7000). Regions are yellow-coloured if the LMI is feasible and blue-coloured if the LMI is unfeasible.]
where the closed-loop system will definitely not admit a stationary operating point are excluded. In that regard, the present
paper is a first, fundamental stepping stone towards the development of a better understanding of how the existence of stationary solutions of hvdc systems is affected by the system parameters, in particular the network impedances and controller gains.
A final contribution consists in the establishment of conditions of local asymptotic stability of a given equilibrium point.
The obtained results are illustrated on a four-terminal example.
Starting from the obtained model, future research will concern various aspects. First of all, a better understanding of how
the feasibility of the LMIs is affected by the parameters is necessary. A first consideration is that the established conditions, besides the controller parameters, also depend on the network topology and the dissipation via the Laplacian matrix
induced by the electrical network. This suggests that the location of the voltage-controlled units, as well as the network
impedances, play an important role in the existence of equilibria for the system. Similarly, it is of interest to understand to what extent the values of the Z, I and P components of the equivalent ZIP model affect the LMIs, in order to provide guidelines
for the design of primary controllers. Furthermore, the possibility to combine the obtained necessary conditions with related
(sufficient) conditions from the literature, e.g. in [34], is very interesting and timely. Other possible developments will focus
on the establishment of necessary (possibly sufficient) conditions for the existence of equilibria in different scenarios: small
deviations from the nominal voltage [4], [9]; power unit outages [4]; linear three-phase, ac circuit, investigating the role
played by reactive power [32].
IX. ACKNOWLEDGMENTS
The authors acknowledge the support of: the Future Renewable Electric Energy Distribution Management Center (FREEDM),
a National Science Foundation supported Engineering Research Center, under grant NSF EEC-0812121; the Ministry of
Education and Science of Russian Federation (Project14.Z50.31.0031); the European Union’s Horizon 2020 research and
innovation programme under the Marie Sklodowska-Curie grant agreement No. 734832.
REFERENCES
[1] A. Egea-Alvarez, J. Beerten, D. V. Hertem, and O. Gomis-Bellmunt, “Hierarchical power control of multiterminal HVDC grids,” Electric Power
Systems Research, vol. 121, pp. 207 – 215, 2015.
[2] T. K. Vrana, J. Beerten, R. Belmans, and O. B. Fosso, “A classification of DC node voltage control methods for HVDC grids,” Electric Power Systems
Research, vol. 103, pp. 137 – 144, 2013.
[3] M. Andreasson, M. Nazari, D. V. Dimarogonas, H. Sandberg, K. H. Johansson, and M. Ghandhari, “Distributed voltage and current control of
multi-terminal high-voltage direct current transmission systems,” IFAC Proceedings Volumes, vol. 47, no. 3, pp. 11910–11916, 2014.
[4] J. Beerten and R. Belmans, “Analysis of power sharing and voltage deviations in droop-controlled DC grids,” Power Systems, IEEE Transactions on,
vol. 28, no. 4, pp. 4588 – 4597, 2013.
[5] S. Shah, R. Hassan, and J. Sun, “HVDC transmission system architectures and control - a review,” in Control and Modeling for Power Electronics,
2013 IEEE 14th Workshop on, pp. 1–8, June 2013.
[6] T. Haileselassie, T. Undeland, and K. Uhlen, “Multiterminal HVDC for offshore windfarms – control strategy,” European Power Electronics and
Drives Association, 2009.
[7] J. Zhao and F. Dorfler, “Distributed control and optimization in DC microgrids,” Automatica, vol. 61, pp. 18 – 26, 2015.
[8] N. Monshizadeh, C. D. Persis, A. van der Schaft, and J. M. A. Scherpen, “A networked reduced model for electrical networks with constant power
loads,” CoRR, vol. abs/1512.08250, 2015.
[9] J. W. Simpson-Porco, F. Dörfler, and F. Bullo, “On resistive networks of constant-power devices,” Circuits and Systems II: Express Briefs, IEEE
Transactions on, vol. 62, pp. 811–815, Aug 2015.
[10] N. Barabanov, R. Ortega, R. Grino, and B. Polyak, “On existence and stability of equilibria of linear time-invariant systems with constant power
loads,” Circuits and Systems I: Regular Papers, IEEE Transactions on, vol. PP, no. 99, pp. 1–8, 2015.
[11] D. Zonetti, R. Ortega, and A. Benchaib, “Modeling and control of HVDC transmission systems from theory to practice and back,” Control Engineering
Practice, vol. 45, pp. 133 – 146, 2015.
[12] F. Dörfler and F. Bullo, “Kron reduction of graphs with applications to electrical networks,” IEEE Transactions on Circuits and Systems I: Regular
Papers, vol. 60, no. 1, pp. 150–163, 2013.
[13] A. Yazdani and R. Iravani, Voltage–Sourced Controlled Power Converters – Modeling, Control and Applications. Wiley IEEE, 2010.
[14] C. Stijn, Steady-state and dynamic modelling of VSC HVDC systems for power system Simulation. PhD thesis, PhD dissertation, Katholieke University
Leuven, Belgium, 2010.
[15] R. Best, Phase-Locked Loops. Professional Engineering, Mcgraw-hill, 2003.
[16] D. Zonetti, Energy-based modelling and control of electric power systems with guaranteed stability properties. PhD thesis, Université Paris-Saclay,
2016.
[17] V. Blasko and V. Kaura, “A new mathematical model and control of a three-phase ac-dc voltage source converter,” IEEE Transactions on Power
Electronics, vol. 12, pp. 116–123, Jan 1997.
[18] L. Xu, B. Andersen, and P. Cartwright, “Control of vsc transmission systems under unbalanced network conditions,” in Transmission and Distribution
Conference and Exposition, 2003 IEEE PES, vol. 2, pp. 626–632 vol.2, Sept 2003.
[19] T. Lee, “Input-output linearization and zero-dynamics control of three-phase AC/DC voltage-source converters,” IEEE Transactions on Power
Electronics, vol. 18, pp. 11–22, Jan 2003.
[20] H. Akagi, Instantaneous Power Theory and Applications to Power Conditioning. Newark: Wiley, 2007.
[21] R. Teodorescu, M. Liserre, and P. Rodrı́guez, Grid Converters for Photovoltaic and Wind Power Systems. John Wiley and Sons, Ltd, 2011.
[22] S. Fiaz, D. Zonetti, R. Ortega, J. Scherpen, and A. van der Schaft, “A port-Hamiltonian approach to power network modeling and analysis,” European
Journal of Control, vol. 19, no. 6, pp. 477 – 485, 2013.
[23] A. van der Schaft, “Characterization and partial synthesis of the behavior of resistive circuits at their terminals,” Systems & Control Letters, vol. 59,
no. 7, pp. 423 – 428, 2010.
[24] S. Fiaz, D. Zonetti, R. Ortega, J. Scherpen, and A. van der Schaft, “A port-Hamiltonian approach to power network modeling and analysis,” European
Journal of Control, vol. 19, no. 6, pp. 477 – 485, 2013.
[25] E. Prieto-Araujo, F. Bianchi, A. Junyent-Ferré, and O. Gomis-Bellmunt, “Methodology for droop control dynamic analysis of multiterminal vsc-hvdc
grids for offshore wind farms,” Power Delivery, IEEE Transactions on, vol. 26, pp. 2476–2485, Oct 2011.
[26] P. Sauer, “Time-scale features and their applications in electric power system dynamic modeling and analysis,” in American Control Conference
(ACC), 2011, pp. 4155–4159, June 2011.
[27] J. Schiffer, D. Zonetti, R. Ortega, A. M. Stanković, T. Sezi, and J. Raisch, “A survey on modeling of microgrids—from fundamental physics to
phasors and voltage sources,” Automatica, vol. 74, pp. 135–150, 2016.
[28] Y. Ito, Y. Zhongqing, and H. Akagi, “DC microgrid based distribution power generation system,” in Power Electronics and Motion Control Conference,
2004. IPEMC 2004. The 4th International, vol. 3, pp. 1740–1745, IEEE, 2004.
[29] M. Bucher, R. Wiget, G. Andersson, and C. Franck, “Multiterminal HVDC Networks – what is the preferred topology?,” Power Delivery, IEEE
Transactions on, vol. 29, pp. 406–413, Feb 2014.
[30] M. Belkhayat, R. Cooley, and A. Witulski, “Large signal stability criteria for distributed systems with constant power loads,” in Power Electronics
Specialists Conference, 1995. PESC ’95 Record., 26th Annual IEEE, vol. 2, pp. 1333–1338 vol.2, Jun 1995.
[31] A. Kwasinski and C. N. Onwuchekwa, “Dynamic behavior and stabilization of DC microgrids with instantaneous constant-power loads,” Power
Electronics, IEEE Transactions on, vol. 26, pp. 822–834, 2011.
[32] S. Sanchez, R. Ortega, R. Grino, G. Bergna, and M. Molinas, “Conditions for existence of equilibria of systems with constant power loads,” Circuits
and Systems I: Regular Papers, IEEE Transactions on, vol. 61, no. 7, pp. 2204–2211, 2014.
[33] M. Grant and S. Boyd, “CVX: Matlab software for disciplined convex programming, version 2.1.”
[34] S. Bolognani and S. Zampieri, “On the existence and linear approximation of the power flow solution in power distribution networks,” IEEE
Transactions on Power Systems, vol. 31, pp. 163–172, Jan 2016.
| 3 |
arXiv:1412.4056v2 [] 19 May 2016
Blind system identification using kernel-based methods

Giulio Bottegal, Riccardo S. Risuleo, and Håkan Hjalmarsson∗

March 15, 2018
Abstract
We propose a new method for blind system identification (BSI). Resorting to a Gaussian regression framework, we model the impulse response of the unknown linear system as a realization of a Gaussian process. The structure of the covariance matrix (or kernel) of such a process
is given by the stable spline kernel, which has been recently introduced
for system identification purposes and depends on an unknown hyperparameter. We assume that the input can be linearly described by few
parameters. We estimate these parameters, together with the kernel hyperparameter and the noise variance, using an empirical Bayes approach.
The related optimization problem is efficiently solved with a novel iterative scheme based on the Expectation-Maximization (EM) method. In
particular, we show that each iteration consists of a set of simple update
rules. We show, through some numerical experiments, very promising
performance of the proposed method.
1  Introduction
In many engineering problems where data-driven modeling of dynamical systems
is required, the experimenter may not have access to the input data. In these
cases, standard system identification tools such as PEM [1] cannot be applied
and specific methods, namely blind system identification (BSI) methods (or blind
deconvolution, if one is mainly interested in the input), need be employed [2].
BSI finds applications in a wide range of engineering areas, such as image
reconstruction [3], biomedical sciences [4] and in particular communications [5,
6], for which literally hundreds of methods have been developed. It would be
impossible to give a thorough literature review here.
∗ G.
Bottegal, R. S. Risuleo and H. Hjalmarsson are with the ACCESS Linnaeus
Center, School of Electrical Engineering, KTH Royal Institute of Technology, Sweden
(risuleo;bottegal;[email protected]). This work was supported by the European Research Council under the advanced grant LEARN, contract 267381 and by the Swedish Research Council under contract 621–2009–4017.
Clearly, the unavailability of the input signal makes BSI problems generally
ill-posed. Without further information on the input sequence or the structure
of the system, it is impossible to retrieve a unique description of the system [7].
To circumvent (at least partially) this intrinsic non-uniqueness issue, we shall
assume some prior knowledge on the input. Following the framework of [8]
and [9], we describe the input sequence using a number of parameters considerably smaller than the length of the input sequence; see Section 2 for details
and applications.
The main contribution of this paper is to propose a new BSI method. Our
system modeling approach relies upon the kernel-based methods for linear system identification recently introduced in a series of papers [10, 11, 12, 13].
The main advantage of these methods, compared to standard parametric methods [1], is that the user is not required to select the model structure and order
of the system, an operation that might be difficult if little is known about the
dynamics of the system. Thus, we model the impulse response of the unknown
system as a realization of a Gaussian random process, whose covariance matrix,
or kernel, is given by the so called stable spline kernel [10, 14], which encodes
prior information on BIBO stability and smoothness. Such a kernel depends
on a hyperparameter which regulates the exponential decay of the generated
impulse responses.
In the kernel-based framework, the estimate of the impulse response can be
obtained as its Bayes estimate given the output data. However, when applied
to BSI problems, such an estimator is a function of the kernel hyperparameter,
the parameters characterizing the input and the noise variance. All these parameters need to be estimated from data. In this paper, using empirical Bayes
arguments, we estimate such parameters by maximizing the marginal likelihood
of the output measurements, obtained by integrating out the dependence on
the system. In order to solve the related optimization problem, which is highly
non-convex, involving a large number of variables, we propose a novel iterative
solution scheme based on the Expectation-Maximization method [15]. We show
that each iteration of such a scheme consists of a sequence of simple updates
which can be performed using little computational efforts. Notably, our method
is completely automatic, since the user is not required to tune any kind parameter. This in contrast with the BSI methods recently proposed in [8, 9], where,
although the system is retrieved via a convex optimization problem, the user
is required to select some regularization parameters and the model order. The
method derived in this paper follows the same approach used in [16], where a
novel method for Hammerstein system identification is described.
The paper is organized as follows. In the next section, we introduce the BSI
problem and we state our working assumptions. In Section 3, we give a background on kernel-based methods, while in Section 4 we describe our approach
to BSI. Section 5 presents some numerical experiments, and some conclusions
end the paper.
2  Blind system identification
We consider a SISO linear time-invariant discrete-time dynamic system (see
Figure 1)
    y_t = sum_{i=0}^{+∞} g_i u_{t−i} + v_t,        (1)

where {g_t}_{t=0}^{+∞} is a strictly causal transfer function (i.e., g_0 = 0) representing
the dynamics of the system, driven by the input ut . The measurements of the
output yt are corrupted by the process vt , which is zero-mean white Gaussian
noise with unknown variance σ 2 . For the sake of simplicity, we will also hereby
assume that the system is at rest until t = 0.
[Figure 1: Block scheme of the system identification scenario: the input u_t drives the system g, whose output is summed with the noise v_t to produce y_t.]
We assume that N samples of the output measurements are collected, and denote them by {y_t}_{t=1}^N. The input u(t) is not directly measurable and only some information about it is available. More specifically, we assume we know that the input, restricted to the N time instants {u_t}_{t=0}^{N−1}, belongs to a certain subspace of R^N and thus can be written as

    u = Hx,        (2)

where u = [u_0 · · · u_{N−1}]^T. In the above equation, H ∈ R^{N×p} is a known
matrix with full column rank and x ∈ Rp , p ≤ N , is an unknown vector characterizing the evolution of u(t). Below we report two examples of inputs generated
in this way.
Piecewise constant inputs with known switching instants
Consider a piecewise constant input signal u(t) with known switching instants
T1 , T2 . . . Tp , with Tp = N . The levels the input takes in between the switching
instants are unknown and collected in the vector x. Then, the input signal can
    u = [u_0, ..., u_{T_1−1}, u_{T_1}, ..., u_{T_2−1}, ..., u_{T_{p−1}}, ..., u_{T_p−1}]^T
      = [1_{T_1} x_1; 1_{T_2−T_1} x_2; ...; 1_{T_p−T_{p−1}} x_p] = Hx,        (3)

with

    H = diag{1_{T_1}, 1_{T_2−T_1}, . . . , 1_{T_p−T_{p−1}}},   H ∈ R^{N×p},        (4)

where 1_m denotes a column vector of length m with all entries equal to 1.
The vector x needs to be estimated from output data. Applications of BSI
with piecewise constant inputs are found in room occupancy estimation [17] and
nonintrusive appliance load monitoring (NIALM) [18, 19].
Combination of known sinusoids
Assume that u is composed of the sum of p sinusoids with unknown amplitude and known frequencies ω_1, . . . , ω_p. Then in this case we have

    H = [ sin(ω_1), ..., sin(ω_p); ...; sin(N ω_1), ..., sin(N ω_p) ],        (5)
with ω1 , . . . , ωp such that H is full column rank. The vector x represents
the amplitude of the sinusoids. Applications of this setting are found in blind
channel estimation [9].
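Both parameterizations of the input subspace are easy to construct numerically. A small sketch (ours, not part of the paper), assuming NumPy, builds H for the piecewise-constant case of (3)-(4) and for the sinusoidal case of (5):

    import numpy as np

    def H_piecewise_constant(N, switching_instants):
        # switching_instants = [T1, T2, ..., Tp] with Tp = N;
        # column j holds ones on the j-th constant segment (eq. (4)).
        T = [0] + list(switching_instants)
        H = np.zeros((N, len(switching_instants)))
        for j in range(len(switching_instants)):
            H[T[j]:T[j + 1], j] = 1.0
        return H

    def H_sinusoids(N, omegas):
        # H[t-1, j] = sin(t * omega_j), t = 1, ..., N (eq. (5)).
        t = np.arange(1, N + 1).reshape(-1, 1)
        return np.sin(t * np.asarray(omegas).reshape(1, -1))

    # Example: u = H x for a piecewise-constant input with three levels.
    H = H_piecewise_constant(N=200, switching_instants=[50, 120, 200])
    u = H @ np.array([0.3, -0.1, 0.7])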
2.1  Problem statement

We state our BSI problem as the problem of obtaining an estimate of the impulse response g_t for n time instants, namely {g_t}_{t=1}^n, given {y_t}_{t=1}^N and H. Recall that, by choosing n sufficiently large, these samples can be used to approximate g_t with arbitrary accuracy [1]. To achieve our goal we will need to estimate the input u = [u_0 · · · u_{N−1}]^T; hence, we might also see our problem as a blind deconvolution problem.
Remark 1 The identification method we propose in this paper can be derived
also in the continuous-time setting, using the same arguments as in [10]. However, for ease of exposition, here we focus only on the discrete-time case.
In a condition of complete information, this problem can be solved by least
squares [1] or using regularized kernel-based approaches [10], [11], [12].
2.2  Identifiability issues

It is well-known that BSI problems are not completely solvable (see e.g. [2, 5]). This is because the system and the input can be determined only up to a scaling factor, in the sense that every pair (αu, (1/α)g), α ∈ R, can describe the output dynamics equally well. Hence, we shall consider our BSI problem as the problem of determining the system and the input up to a scaling factor. Another possible way out of this issue is to assume that ‖g‖_2 or g_1 is known [20].
3  Kernel-based system identification

In this section we briefly review the kernel-based approach introduced in [10, 11] and show how to readapt it to the BSI problem.
Let us first introduce the following vector notation

    u := [u_0, ..., u_{N−1}]^T,   y := [y_1, ..., y_N]^T,   g := [g_1, ..., g_n]^T,   v := [v_1, ..., v_N]^T,

and the operator T_n(·) that, given a vector of length N, maps it to an N × n Toeplitz matrix, e.g.

    T_n(u) = [ u_0,     0,       ...,  0;
               u_1,     u_0,     ...,  0;
               ...,     ...,     ...,  ...;
               u_{N−1}, u_{N−2}, ...,  u_{N−n} ]  ∈ R^{N×n}.

We shall reserve the symbol U for T_n(u). Then, the input-output relation for the available samples can be written

    y = Ug + v.        (6)
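The operator T_n(·) amounts to building the (truncated) convolution matrix of a vector; a minimal NumPy sketch (ours, not from the paper):

    import numpy as np

    def T(vec, n):
        # Build the N x n Toeplitz matrix of 'vec' so that T(u, n) @ g
        # is the truncated convolution appearing in y = U g + v (eq. (6)).
        N = len(vec)
        M = np.zeros((N, n))
        for j in range(n):            # column j is 'vec' shifted down by j positions
            M[j:, j] = vec[:N - j]
        return M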
In this paper we adopt a Bayesian approach to the BSI problem. Following
a Gaussian process regression approach [21], we model the impulse response as
follows
g ∼ N (0, λKβ )
(7)
where Kβ is a covariance matrix whose structure depends on a shaping parameter β, and λ ≥ 0 is a scaling factor, which regulates the amplitude of the
5
realizations from (7). Given the identifiability issue described in Section 2.2, λ
can be arbitrarily set to 1. In the context of Gaussian regression, Kβ is usually
called a kernel and its structure is crucial in imposing properties on the realizations drawn from (7). An effective choice of kernel for system identification
purposes is given by the so-called stable spline kernels [10, 11]. In particular, in
this paper we adopt the so-called first-order stable spline kernel (or TC kernel
in [12]), which is defined as
    {K_β}_{i,j} := β^{max(i,j)},        (8)
where β is a scalar in the interval [0, 1). Such a parameter regulates the decaying
velocity of the generated impulse responses.
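For reference, the first-order stable spline (TC) kernel of (8) is straightforward to build; a small sketch (ours), assuming NumPy and 1-based indices i, j = 1, ..., n:

    import numpy as np

    def stable_spline_kernel(n, beta):
        # {K_beta}_{i,j} = beta^{max(i,j)}, with beta in [0, 1)  (eq. (8)).
        idx = np.arange(1, n + 1)
        return beta ** np.maximum.outer(idx, idx)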
Recall the assumption introduced in Section 2 on the Gaussianity of noise.
Due to this assumption, the joint distribution of the vectors y and g is Gaussian,
provided that the vector x (and hence the input u), the noise variance σ 2 and
the parameter β are given. Let us introduce the vector
    θ := [x^T, σ^2, β]^T ∈ R^{p+2},        (9)

which we shall call hyperparameter vector. Then we can write

    [y; g] | θ ∼ N( [0; 0], [Σ_y, Σ_yg; Σ_gy, K_β] ),        (10)

where Σ_yg = Σ_gy^T = U K_β and Σ_y = U K_β U^T + σ^2 I. It follows that the posterior distribution of g given y (and θ) is Gaussian, namely

    p(g|y, θ) = N(Cy, P),        (11)

where

    P = ( U^T U / σ^2 + K_β^{-1} )^{-1},        C = P U^T / σ^2.        (12)

From (11), the impulse response estimator can be derived as the Bayesian estimator [22]

    ĝ = E[g|y, θ] = Cy.        (13)
Clearly, such an estimator is a function of θ, which needs to be determined
from the available data y before performing the estimation of g. Thus, the BSI
algorithm we propose in this paper consists of the following steps.
1. Estimate the hyperparameter vector θ.
2. Obtain ĝ by means of (13).
In the next section, we discuss how to efficiently compute the first step of
the algorithm.
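For given hyperparameters, step 2 only involves linear algebra. A minimal sketch of (12)-(13) (ours; it reuses the T_n and stable spline helpers sketched earlier, and all names are illustrative):

    import numpy as np

    def posterior_impulse_response(y, u, n, beta, sigma2):
        # Compute P, C of (12) and the Bayes estimate g_hat = C y of (13),
        # with U = T_n(u) and K_beta the stable spline kernel of (8).
        U = T(u, n)
        K = stable_spline_kernel(n, beta)
        P = np.linalg.inv(U.T @ U / sigma2 + np.linalg.inv(K))
        C = P @ U.T / sigma2
        g_hat = C @ y
        return g_hat, P, C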
4  Estimation of the hyperparameter vector
An effective approach to choose the hyperparameter vector characterizing the
impulse response estimator (13) relies on Empirical Bayes arguments [23]. More
precisely, since y and g are jointly Gaussian, an efficient method to choose θ
is given by maximization of the marginal likelihood [24], which is obtained by
integrating out g from the joint probability density of (y, g). Hence, an estimate
of θ can be computed as follows
    θ̂ = arg max_θ log p(y|θ).        (14)
Solving (14) in that form can be hard, because it is a nonlinear and nonconvex problem involving a large number (p + 2) of decision variables. For
this reason, we propose an iterative solution scheme which resorts to the EM
method. To this end, we define the complete likelihood
L(y, g|θ) := log p(y, g|θ) ,
(15)
which depends also on the missing data g. Then, the EM method provides θ̂ by
iterating the following steps:
(E-step) Given an estimate θ̂^k after the k-th iteration of the scheme, compute

    Q(θ, θ̂^k) := E_{p(g|y, θ̂^k)}[ L(y, g|θ) ];        (16)

(M-step) Compute

    θ̂^{k+1} = arg max_θ Q(θ, θ̂^k).        (17)
The iteration of these steps is guaranteed to converge to a (local or global) maximum of (14) [25], and the iterations can be stopped if ‖θ̂^{k+1} − θ̂^k‖_2 is
below a given threshold.
Assume that, at iteration k + 1 of the EM scheme, the estimate θ̂k of θ is
available. Using the current estimate of the hyperparameter vector, we construct the matrices Ĉ k and P̂ k using (12) and, accordingly, we denote by ĝ k the
estimate of g computed using (13), i.e. ĝ k = Ĉ k y and the linear prediction of y
as ŷ k = U ĝ k . Furthermore, let us define
    Â^k = −H^T R^T [ (P̂^k + ĝ^k ĝ^{kT}) ⊗ I_N ] R H,        b̂^k = H^T T_N(ĝ^k)^T y,        (18)

where R ∈ R^{Nn×N} is a matrix such that, for any u ∈ R^N:

    Ru = vec(T_n(u)).        (19)

Having introduced this notation, we can state the following theorem, which provides a set of update rules to obtain the hyperparameter vector estimate θ̂^{k+1}.
Theorem 1 Let θ̂^k be the estimate of the hyperparameter vector after the k-th iteration of the EM scheme. Then

    θ̂^{k+1} = [x̂^{k+1,T}, σ̂^{2,k+1}, β̂^{k+1}]^T        (20)

can be obtained performing the following operations:

• The input estimate is updated computing

    x̂^{k+1} = −(Â^k)^{-1} b̂^k;        (21)

• The noise variance is updated computing

    σ̂^{2,k+1} = (1/N) [ ‖y − ŷ^k‖_2^2 + Tr( Û^{k+1} P̂^k Û^{k+1,T} ) ],        (22)

where Û^{k+1} denotes the Toeplitz matrix of the sequence û^{k+1} = H x̂^{k+1};

• The kernel shaping parameter is updated solving

    β̂^{k+1} = arg min_{β ∈ [0, 1)} Q(β, θ̂^k),        (23)

where

    Q(β, θ̂^k) = log det K_β + Tr[ K_β^{-1} (P̂^k + ĝ^k ĝ^{kT}) ].        (24)
where
Hence, the maximization problem (14) reduces to a sequence of very simple
optimization problems. In fact, at each iteration of the EM algorithm, the
input can be estimated by computing a simple update rule available in closedform. The same holds for the noise variance, whereas the update of the kernel
hyperparameter β does not admit any closed-form expression. However, it can
be retrieved by solving a very simple scalar optimization problem, which can be
solved efficiently by grid search, since the domain of β is the interval [0, 1).
It remains to establish a way to set up the initial estimate θ̂0 for the EM
method. This can be done by just randomly choosing the entries of such a
vector, keeping the constraints β̂ 0 ∈ [0, 1), σ̂ 2,0 > 0.
Below, we provide our BSI algorithm.
Algorithm: Bayesian kernel-based EM Blind System Identification

Input: {y_t}_{t=1}^N, H
Output: {ĝ_t}_{t=1}^n, {û_t}_{t=1}^N

1. Initialization: randomly set θ̂^0 = [x̂^{0T}, σ̂^{2,0}, β̂^0]^T
2. Repeat until convergence:
   (a) E-step: update P̂^k, Ĉ^k from (12) and ĝ^k from (13);
   (b) M-step: update the parameters:
       • x̂^{k+1} from (21);
       • σ̂^{2,k+1} from (22);
       • β̂^{k+1} from (23).
3. Compute {ĝ_t}_{t=1}^n from (13) and {û_t}_{t=1}^N = H x̂.

5  Numerical experiments

We test the proposed BSI method by means of Monte Carlo experiments. Specifically, we perform 6 groups of simulations where, for each group, 100 random systems and input/output trajectories are generated. The random systems are generated by picking 20 zeros and 20 poles with random magnitude and phase. The zero magnitude is less than or equal to 0.95 and the pole magnitude is no larger than 0.92. The inputs are piecewise constant signals and the number of switching instants is p = 10, 20, 30, 40, 50, 60, depending on the group of experiments. We generate 200 input/output samples per experiment. The output
is corrupted by random noise whose variance is such that σ 2 = var(U g)/10, i.e.
the noise variance is ten times smaller than the variance of the noiseless output.
The goal of the experiments is to estimate n = 50 samples of the generated
impulse responses. We compare the following three estimators.
• B-KB. This is the proposed Bayesian kernel-based BSI method that estimates the input by marginal likelihood maximization, using an EM-based
scheme. The convergence criterion of the EM method is ‖θ̂^{k+1} − θ̂^k‖_2 < 10^{−3}.
• NB-LS. This is an impulse response estimator based on the least squares
criterion. Here, we assume that the input is known, so the only quantity
to be estimated is the system. Hence, this corresponds to an unbiased
FIR estimator [1].
• NB-KB. This is the Bayesian kernel-based system identification method
introduced in [10] and revisited in [12]. Like the estimator NB-LS, this
method has knowledge of the input and in fact corresponds to the estimator B-KB when x is known and not estimated.
The performance of the estimators is evaluated by means of the output fitting score

    FIT = 1 − ‖Û_i ĝ_i − U_i g_i‖_2 / ‖U_i g_i − mean(U_i g_i)‖_2,        (25)

where, at the i-th Monte Carlo run, U_i and Û_i are the Toeplitz matrices of the true and estimated inputs (Û_i = U_i if the method needs not estimate x), g_i and ĝ_i are the true and estimated systems, and mean(U_i g_i) is the mean of U_i g_i.
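The fit index (25) can be computed directly; a small sketch (ours), assuming NumPy arrays:

    import numpy as np

    def fit_score(U_hat, g_hat, U_true, g_true):
        # FIT = 1 - ||U_hat g_hat - U g|| / ||U g - mean(U g)||  (eq. (25)).
        y_hat = U_hat @ g_hat
        y_true = U_true @ g_true
        return 1.0 - np.linalg.norm(y_hat - y_true) / np.linalg.norm(y_true - y_true.mean())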
Figure 2 shows the results of the six groups of experiments. As expected, the estimator NB-KB, which has access to the true input, gives the best performance for all the values of p and in fact is independent of such a value. Surprisingly, for p = 10 and p = 20, the proposed BSI method outperforms the least squares estimator that knows the true input. An example of one such Monte Carlo experiment is reported in Figure 3. As one might expect, there
[Figure 2: Results of the Monte Carlo simulations for different values of p, namely the number of different levels in the input signals. Each panel (p = 10, 20, 30, 40, 50, 60) shows boxplots of the fit for the estimators B-KB, NB-LS and NB-KB.]
is a performance degradation as p increases, since the blind estimator has to
estimate more parameters. Figure 4 shows the median of the fitting score of each group of experiments as a function of p. It appears that, approximately,
there is a linear trend in the performance degradation.
[Figure 3: Example of one Monte Carlo realization with p = 20. Top panel: true vs estimated (normalized) input. Middle panel: true vs estimated (normalized) impulse response. Bottom panel: true vs predicted output.]
[Figure 4: Median of the fitting score for each group of Monte Carlo simulations as a function of p.]
6  Conclusions
In this paper we have proposed a novel blind system identification algorithm.
Under a Gaussian regression framework, we have modeled the impulse response
of the unknown system as the realization of a Gaussian process. The kernel
chosen to model the system is the stable spline kernel. We have assumed that the
unknown input belongs to a known subspace of the input space. The estimation
of the input, together with the kernel hyperparameter and the noise variance,
has been performed using an empirical Bayes approach. We have solved the
related maximization problem resorting to the EM method, obtaining a set of
update rules for the parameters which is simple and elegant, and permits a fast
computation of the estimates of the system and the input. We have shown,
through some numerical experiments, very promising results.
We plan to extend the current method in two ways. First, a wider class of
models of the system, such as the Box-Jenkins model, will be considered. We
shall also attempt to remove the assumption on the input belonging to a known
subspace by adopting suitable Bayesian models.
A  Proof of Theorem 1
First note that p(y, g|θ) = p(y|g, θ) p(g|θ). Hence we can write the complete likelihood as

    L(y, g|θ) = log p(y|g, θ) + log p(g|θ)        (26)

and so

    L(y, g|θ) = −(N/2) log σ^2 − (1/(2σ^2)) ‖y − Ug‖^2 − (1/2) log det K_β − (1/2) g^T K_β^{-1} g
              = −(N/2) log σ^2 − (1/(2σ^2)) ( y^T y + g^T U^T U g − 2 y^T U g ) − (1/2) log det K_β − (1/2) g^T K_β^{-1} g.

We now proceed by taking the expectation of this expression with respect to the random variable g|y, θ̂^k. We obtain the following components:

    (a): E[ −(N/2) log σ^2 ] = −(N/2) log σ^2
    (b): E[ −(1/(2σ^2)) y^T y ] = −(1/(2σ^2)) y^T y
    (c): E[ −(1/(2σ^2)) g^T U^T U g ] = Tr[ −(1/(2σ^2)) U^T U (P̂^k + ĝ^k ĝ^{kT}) ]
    (d): E[ (1/σ^2) y^T U g ] = (1/σ^2) y^T U ĝ^k
    (e): E[ −(1/2) log det K_β ] = −(1/2) log det K_β
    (f): E[ −(1/2) g^T K_β^{-1} g ] = −(1/2) Tr[ K_β^{-1} (P̂^k + ĝ^k ĝ^{kT}) ]

It follows that Q(θ, θ̂^k) is the summation of the elements obtained above. By inspecting the structure of Q(θ, θ̂^k), it can be seen that such a function splits in two independent terms, namely

    Q(θ, θ̂^k) = Q_1(x, σ^2, θ̂^k) + Q_β(β, θ̂^k),        (27)

where

    Q_1(x, σ^2, θ̂^k) = (a) + (b) + (c) + (d)        (28)

is a function of x and σ^2, while

    Q_β(β, θ̂^k) = (e) + (f)        (29)

depends only on β and corresponds to (24). We now address the optimization of (28). To this end we write

    Q_1(x, σ^2, θ̂^k) = (1/σ^2) Q_x(x, θ̂^k) + Q_{σ^2}(σ^2, θ̂^k)        (30)
                     = (1/σ^2) [ Tr( −(1/2) U^T U (P̂^k + ĝ^k ĝ^{kT}) ) + y^T U ĝ^k ] − (N/2) log σ^2 − (1/(2σ^2)) y^T y.

This means that the optimization of Q_1 can be carried out first with respect to x, optimizing only the term Q_x, which is independent of σ^2 and can be written in a quadratic form

    Q_x(x, θ̂^k) = (1/2) x^T Â^k x + b̂^{kT} x.        (31)

To this end, first note that, for all v_1 ∈ R^n, v_2 ∈ R^m,

    T_m(v_1) v_2 = T_n(v_2) v_1.        (32)

Recalling (19), we can write

    Tr[ U^T U (P̂^k + ĝ^k ĝ^{kT}) ] = vec(U)^T ( (P̂^k + ĝ^k ĝ^{kT}) ⊗ I_N ) vec(U)
                                    = u^T R^T [ (P̂^k + ĝ^k ĝ^{kT}) ⊗ I_N ] R u
                                    = x^T H^T R^T [ (P̂^k + ĝ^k ĝ^{kT}) ⊗ I_N ] R H x,        (33)

so that, by (18), the quadratic term −(1/2) Tr[ U^T U (P̂^k + ĝ^k ĝ^{kT}) ] equals (1/2) x^T Â^k x with Â^k defined in (18). For the linear term we find

    y^T U ĝ^k = y^T T_N(ĝ^k) u = y^T T_N(ĝ^k) H x,        (34)

so that the term b̂^{kT} in (18) is retrieved and the maximizer x̂^{k+1} is as in (21). Plugging back x̂^{k+1} into (28) and maximizing with respect to σ^2 we easily find σ̂^{2,k+1} corresponding to (22). This concludes the proof.
References
[1]
L. Ljung. System Identification, Theory for the User. Prentice Hall, 1999.
[2]
K. Abed-Meraim, W. Qiu, and Y. Hua. “Blind system identification”.
Proc. IEEE 85.8 (1997), pp. 1310–1322. doi: 10.1109/5.622507.
[3]
G. R. Ayers and J. C. Dainty. “Iterative blind deconvolution method and
its applications”. Opt. Lett. 13.7 (1988), p. 547. doi: 10.1364/ol.13.
000547.
[4]
D. B. McCombie, A. T. Reisner, and H. H. Asada. “Laguerre-Model Blind
System Identification: Cardiovascular Dynamics Estimated From Multiple
Peripheral Circulatory Signals”. IEEE Trans. Biomed. Eng. 52.11 (2005),
pp. 1889–1901. doi: 10.1109/tbme.2005.856260.
[5]
F. Gustafsson and B. Wahlberg. “Blind equalization by direct examination
of the input sequences”. IEEE Trans. Commun. 43.7 (1995), pp. 2213–
2222. doi: 10.1109/26.392964.
[6]
E Moulines, P Duhamel, J.-F. Cardoso, and S Mayrargue. “Subspace
methods for the blind identification of multichannel FIR filters”. IEEE
Trans. Signal Process. 43.2 (1995), pp. 516–525. doi: 10.1109/78.348133.
[7]
L Tong, R.-W. Liu, V. C. Soon, and Y.-F. Huang. “Indeterminacy and
identifiability of blind identification”. IEEE Trans. Circuits Syst. 38.5
(1991), pp. 499–509. doi: 10.1109/31.76486.
[8]
H. Ohlsson, L. J. Ratliff, R. Dong, and S. S. Sastry. “Blind Identification
Via Lifting”. Proc. IFAC World Cong. 2014. doi: 10.3182/20140824-6za-1003.02567.
[9]
A. Ahmed, B. Recht, and J. Romberg. “Blind Deconvolution Using Convex
Programming”. IEEE Trans. Inform. Theory 60.3 (2014), pp. 1711–1732.
doi: 10.1109/tit.2013.2294644.
[10]
G. Pillonetto and G. De Nicolao. “A new kernel-based approach for linear
system identification”. Automatica 46.1 (2010), pp. 81–93. doi: doi:10.
1016/j.automatica.2009.10.031.
[11]
G. Pillonetto, A. Chiuso, and G. De Nicolao. “Prediction error identification of linear systems: a nonparametric Gaussian regression approach”.
Automatica 47.2 (2011), pp. 291–305. doi: 10.1016/j.automatica.2010.
11.004.
[12]
T. Chen, H. Ohlsson, and L. Ljung. “On the estimation of transfer functions, regularizations and Gaussian processes—Revisited”. Automatica 48.8
(2012), pp. 1525–1535. doi: 10.1016/j.automatica.2012.05.026.
[13]
G. Pillonetto, F. Dinuzzo, T. Chen, G. D. Nicolao, and L. Ljung. “Kernel
methods in system identification, machine learning and function estimation: A survey”. Automatica 50.3 (2014), pp. 657–682. doi: 10.1016/j.
automatica.2014.01.001.
[14]
G. Bottegal and G. Pillonetto. “Regularized spectrum estimation using
stable spline kernels”. Automatica 49.11 (2013), pp. 3199–3209. doi: 10.
1016/j.automatica.2013.08.010.
[15]
A. P. Dempster, N. M. Laird, and D. B. Rubin. “Maximum likelihood
from incomplete data via the EM algorithm”. J. R. Stat. Soc. Ser. B.
Stat. Methodol. (1977), pp. 1–38.
[16]
R. S. Risuleo, G. Bottegal, and H. Hjalmarsson. “A kernel-based approach
to Hammerstein system identification”. Proc. IFAC Symp. System Identification (SYSID). Vol. 48. 28. 2015, pp. 1011–1016. doi: 10.1016/j.
ifacol.2015.12.263.
[17]
A. Ebadat, G. Bottegal, D. Varagnolo, B. Wahlberg, and K. H. Johansson.
“Estimation of building occupancy levels through environmental signals
deconvolution”. Proc. ACM Workshop Embedded Systems For EnergyEfficient Buildings (BuildSys). Association for Computing Machinery (ACM),
2013. doi: 10.1145/2528282.2528290.
[18]
G. Hart. “Nonintrusive appliance load monitoring”. Proceedings of the
IEEE 80.12 (1992), pp. 1870–1891. doi: 10.1109/5.192069.
[19]
R. Dong, L. Ratliff, H. Ohlsson, and S. S. Sastry. “A dynamical systems
approach to energy disaggregation”. 52nd IEEE Conference on Decision
and Control. Institute of Electrical & Electronics Engineers (IEEE), 2013.
doi: 10.1109/cdc.2013.6760891.
[20]
E. W. Bai and D. Li. “Convergence of the iterative Hammerstein system identification algorithm”. IEEE Trans. Autom. Control 49.11 (2004),
pp. 1929–1940.
[21]
C. Williams and C. Rasmussen. Gaussian processes for machine learning.
2006.
[22]
B. D. O. Anderson and J. B. Moore. Optimal filtering. Courier Corporation, 2012.
[23]
J. Maritz and T. Lwin. Empirical bayes methods. Chapman and Hall London, 1989. isbn: 9780412277603.
[24]
G. Pillonetto and A. Chiuso. “Tuning complexity in kernel-based linear
system identification: The robustness of the marginal likelihood estimator”. Proc. European Control Conf. (ECC). 2014, pp. 2386–2391. doi:
10.1109/ECC.2014.6862629.
[25]
G. McLachlan and T. Krishnan. The EM algorithm and extensions. Vol. 382.
John Wiley and Sons, 2007. isbn: 9780471201700.
| 3 |
Variable screening with multiple studies
Tianzhou Ma1 , Zhao Ren2 and George C. Tseng1
1
Department of Biostatistics, University of Pittsburgh
2
Department of Statistics, University of Pittsburgh
arXiv:1710.03892v1 [stat.ME] 11 Oct 2017
Abstract
Advancement in technology has generated abundant high-dimensional data that allows integration of multiple relevant studies. Due to their huge computational advantage, variable screening methods based on marginal correlation have become promising
alternatives to the popular regularization methods for variable selection. However, all
these screening methods are limited to single study so far. In this paper, we consider
a general framework for variable screening with multiple related studies, and further
propose a novel two-step screening procedure using a self-normalized estimator for highdimensional regression analysis in this framework. Compared to the one-step procedure
and rank-based sure independence screening (SIS) procedure, our procedure greatly reduces false negative errors while keeping a low false positive rate. Theoretically, we
show that our procedure possesses the sure screening property with weaker assumptions on signal strengths and allows the number of features to grow at an exponential
rate of the sample size. In addition, we relax the commonly used normality assumption
and allow sub-Gaussian distributions. Simulations and a real transcriptomic application illustrate the advantage of our method as compared to the rank-based SIS method.
Key words and phrases: Multiple studies, Partial faithfulness, Self-normalized estimator, Sure screening property, Variable selection
1  Introduction
In many areas of scientific disciplines nowadays such as omics studies (including genomics,
transcriptomics, etc.), biomedical imaging and signal processing, high dimensional data with
much greater number of features than the sample size (i.e. p >> n) have become rule rather
than exception. For example, biologists may be interested in predicting certain clinical
outcome (e.g. survival) using the gene expression data where we have far more genes than
the number of samples. With the advancement of technologies and affordable prices in recent
biomedical research, more and more experiments have been performed on a related hypothesis
or to explore the same scientific question. Since the data from one study often have small
sample size with limited statistical power, effective information integration of multiple studies
can improve statistical power, estimation accuracy and reproducibility. Direct merging of the
data (a.k.a. “mega-analysis”) is usually less favored due to the inherent discrepancy among
the studies (Tseng et al., 2012). New statistical methodologies and theories are required to
solve issues in high-dimensional problem when integrating multiple related studies.
Various regularization methods have been developed in the past two decades and frequently used for feature selection in high-dimensional regression problems. Popular methods
include, but are not limited to, Lasso (Tibshirani, 1996), SCAD (Fan and Li, 2001), elastic
1
net (Zou and Hastie, 2005) and adaptive Lasso (Zou, 2006). When group structure exists
among the variables (for example, a set of gene features belonging to a pre-specified pathway), group version of regularization methods can be applied (Yuan and Lin, 2006; Meier
et al., 2008; Nardi et al., 2008). One can refer to Fan and Lv (2010) and Huang et al. (2012)
for a detailed overview of variable selection and group selection in high-dimensional models.
When the number of features grows significantly larger than the sample size, most regularization methods perform poorly due to the simultaneous challenges of computation expediency,
statistical accuracy and algorithmic stability (Fan et al., 2009). Variable screening methods
become a natural way to consider by first reducing to a lower or moderate dimensional problem and then performing variable regularization. Fan and Lv (2008) first proposed a sure
independent screening (SIS) method to select features based on their marginal correlations
with the response in the context of linear regression models and showed such fast selection
procedure enjoyed a “sure screening property”. Since the development of SIS, many screening methods have been proposed for generalized linear models (Fan et al., 2009, 2010; Chang
et al., 2013), nonparametric additive models or semiparametric models (Fan et al., 2011;
Chang et al., 2016), quantile linear regression (Ma et al., 2017), Gaussian graphical models
(Luo et al., 2014; Liang et al., 2015) or exploit more robust measures for sure screening (Zhu
et al., 2011; Li et al., 2012, 2017). However, all these screening methods are limited to single
study so far.
In this paper, we first propose a general framework for simultaneous variable screening
with multiple related studies. Compared to single study scenario, inclusion of multiple studies
gives us more evidence to reduce dimension and thus increases the accuracy and efficiency
of removing unimportant features during screening. To our knowledge, our paper is the first
to utilize multiple studies to help variable screening in high-dimensional linear regression
model. Such a framework provides a novel perspective to the screening problem and opens
a door to the development of methods using multiple studies to perform screening under
different types of models or with different marginal utilities. In this framework, it is natural
to apply a selected screening procedure to each individual study, respectively. However,
important features with weak signals in some studies may be incorrectly screened out if only
such a one-step screening is performed. To avoid such false negative errors and fully take
advantage of multiple studies, we further propose a two-step screening procedure, where one
additional step of combining studies with potential zero correlation is added to the one-step
procedure for a second check. This procedure has the potential to save those important
features with weak signals in individual studies but strong aggregate effect across studies
during the screening stage. Compared to the naive multiple study extension of SIS method,
our procedure greatly reduces the false negative errors while keeping a low false positive
rate. These merits are confirmed by our theoretical analysis. Specifically, we show that
our procedure possesses the sure screening property with weaker assumptions on signals and
allows the number of features to grow at an exponential rate of the sample size. Furthermore,
we only require the data to have sub-Gaussian distribution via using novel self-normalized
statistics. Thus our procedure can be applied to more general distribution family other than
Gaussian distribution, which is considered in Fan and Lv (2008) and Bühlmann et al. (2010)
for a related screening procedure under single study scenarios. After screening, we further
apply two general and applicable variable selection algorithms: the multiple study extension
of PC-simple algorithm proposed by Bühlmann et al. (2010) as well as a two-stage feature
selection method to choose the final model in a lower dimension.
The rest of the paper is organized as follows. In Section 2, we present a framework for
variable screening with multiple related studies as well as notations. Then we propose our
two-step screening procedure in Section 3. Section 4 provides the theoretical properties of our
procedure, and demonstrates the benefits of multiple related studies as well as the advantages
of our procedure. General algorithms for variable selection that can follow from our screening
procedure are discussed in Section 5. Section 6 and 7 include the simulation studies and
a real data application on three breast cancer transcriptomic studies, which illustrate the
advantage of our method in reducing false negative errors and retaining important features
as compared to the rank-based SIS method. We conclude and discuss possible extensions of
our procedure in Section 8. Section 9 provides technical proofs to the major theorems.
2  Model and Notation
Suppose we have data from K related studies, each with n observations. Consider a random design linear model in each study k ∈ [K] ([K] = 1, . . . , K):

    Y^(k) = sum_{j=1}^p β_j^(k) X_j^(k) + ε^(k),        (2.1)

where each Y^(k) ∈ R, each X^(k) = (X_1^(k), . . . , X_p^(k))^T ∈ R^p with E(X^(k)) = µ_X^(k) and cov(X^(k)) = Σ_X^(k), each ε^(k) ∈ R with E(ε^(k)) = 0 and var(ε^(k)) = σ^2 such that ε^(k) is uncorrelated with X_1^(k), . . . , X_p^(k), and β^(k) = (β_1^(k), . . . , β_p^(k))^T ∈ R^p. We implicitly assume E{(Y^(k))^2} < ∞ and E{(X_j^(k))^2} < ∞ for j ∈ [p] ([p] = 1, . . . , p).
When p is very large, we usually assume that only a small set of covariates are true
predictors that contribute to the response. In other words, we assume that most of the β_j = (β_j^(1), . . . , β_j^(K))^T, where j ∈ [p], are equal to a zero vector. In addition, in this paper, we assume the β_j^(k)'s are either zero or non-zero in all K studies. This framework is partially motivated by a high-dimensional linear random effect model considered in the literature (e.g., Jiang et al. (2016)). More specifically, we can have β = (β_(1)^T, 0^T)^T, where β_(1) is the vector of the first s_0 non-zero components of β (1 ≤ s_0 ≤ p). Consider a random effect model where only the true predictors of each study are treated as the random effect, that is, β^(k) = (β_(1)^(k), 0)^T and β_(1)^(k) is distributed as N(β_(1), τ^2 I_{s_0}), where τ^2 is independent of ε and X. Consequently, the β_j^(k)'s are either zero or non-zero in all K studies with probability one. Such an assumption fits reality well; for example, in a typical GWAS study, a very small pool of SNPs are reported to be associated with a complex trait or disease among millions (Jiang et al., 2016).
    With n i.i.d. observations from model (2.1), our purpose is to identify the non-zero β_(1), thus we define the following index sets for active and inactive predictors:

    A = {j ∈ [p]; β_j ≠ 0} = {j ∈ [p]; β_j^(k) ≠ 0 for all k};
    A^C = {j ∈ [p]; β_j = 0} = {j ∈ [p]; β_j^(k) = 0 for all k},        (2.2)

where A is our target. Clearly, under our setting, A and A^C are complementary to each other so that the identification of A^C is equivalent to the identification of A. Let |A| = s_0, where |·| denotes the cardinality.
3  Screening procedure with multiple studies

3.1  Sure independence screening
For a single study (K = 1), Fan and Lv (2008) first proposed the variable screening method
called sure independence screening (SIS) which ranked the importance of variables according
to their marginal correlation with the response and showed its great power in preliminary
screening and dimension reduction for high-dimensional regression problems. Bühlmann
et al. (2010) later introduced the partial faithfulness condition that a zero partial correlation
for some separating set S implied a zero regression coefficient and showed that it held almost
surely for joint normal distribution. In the extreme case when S = ∅, it is equivalent to the
SIS method.
The purpose of sure screening is to identify a set of moderate size d (with d << p) that
will still contain the true set A. Equivalently, we can try to identify AC or subsets of AC
which contain unimportant features that need to be screened out. There are two potential
errors that may occur in any sure screening methods (Fan and Lv, 2010):
1. False Negative (FN): Important predictors that are marginally uncorrelated but
jointly correlated with the response fail to be selected.
2. False Positive (FP): Unimportant predictors that are highly correlated with the
important predictors can have higher priority to be selected than other relatively weaker
important predictors.
The current framework for variable screening with multiple studies is able to relieve us
from the FP errors significantly. Indeed, we have multiple studies in our model setting thus
we have more evidence to exclude noises and reduce FP errors than single study. In addition,
sure screening is used to reduce dimension at a first stage, so we can always include a second
stage variable selection methods such as Lasso or Dantzig selection to further refine the set
and reduce FP errors.
The FN errors occur when signals are falsely excluded after screening. Suppose ρj is
the marginal correlation of the jth feature with the response, with which we try to find the
set {j : ρj = 0} to screen out. Under the assumption of partial faithfulness (for explicit
definition, see Section 4.3), these variables have zero coefficients for sure so the FN errors
are guaranteed to be excluded. However, this might not be true for the empirical version
of marginal correlation. For a single study (K = 1), to rule out the FN errors in empirical
case, it is well-known that the signal-to-noise ratio has to be large (at least of an order of
(log p/n)1/2 after Bonferroni adjustment). In the current setting with multiple studies, the
requirement on strong signals remains the same if we naively perform one-step screening in
each individual study. As we will see next, we propose a novel two-step screening procedure
which allows weak signals in individual studies as long as the aggregate effect is strong
enough. Therefore our procedure is able to reduces FN errors in the framework with multiple
studies.
Before closing this section, it is worthwhile to mention that, to perform a screening test,
one usually applies Fisher’s z-transformation on the sample correlation (Bühlmann et al.,
2010). However, this will require the bivariate normality assumption. Alternatively, in this
paper, we propose to use the self-normalized estimator of correlation that works generally well
even for non-Gaussian data (Shao, 1999). Similar ideas have been applied in the estimation
of large covariance matrix (Cai and Liu, 2016).
3.2 Two-step screening procedure with multiple studies

In the presence of multiple studies, we have more evidence to reduce dimension, and ρ_j^(k) = 0
for any k will imply a zero coefficient for that feature. On the one hand, it is possible for features
with zero β_j to have multiple non-zero ρ_j^(k)'s. On the other hand, a non-zero β_j will have
non-zero ρ_j^(k)'s in all studies. Thus, we aim to identify the following two complementary sets
while performing screening with multiple studies:

A^[0] = {j ∈ [p] : min_k |ρ_j^(k)| = 0},        A^[1] = {j ∈ [p] : min_k |ρ_j^(k)| ≠ 0}.        (3.1)
We know for sure that A^[0] ⊆ A^C and A ⊆ A^[1] under the partial faithfulness assumption.
For j ∈ A^[0], the chance of detecting a zero marginal correlation in at least one study is
greatly increased with increasing K, so unimportant features are more likely to be
screened out compared to the single-study scenario.
One way to estimate A^[1] is to test H_0 : ρ_j^(k) = 0 for each k and each feature j. When any
of the K tests is not rejected for a feature, we exclude this feature from Â^[1] (we call this
the "One-Step Sure Independence Screening" procedure, or "OneStep-SIS" for short). This
can be viewed as an extension of the screening test to the multiple-study scenario. However, in
reality, it is possible for important features to have weak signals, and thus small |ρ_j^(k)|'s, in at least
one study. These features might be incorrectly classified into Â^[0], since weak signals can be
indistinguishable from null signals in individual testing. This leads to the serious problem
of false exclusion of important features (FN) from the final set during screening.
This can be significantly improved by adding a second step that combines the studies
with potential zero correlation (i.e., those failing to reject the null H_0 : ρ_j^(k) = 0) identified in the
first step and performs an aggregate test. For features with weak signals in multiple
studies, as long as their aggregate test statistic is large enough, they will be retained. Such
a procedure is more conservative in screening features than the first step alone, but is
guaranteed to reduce false negative errors.
For simplicity, we assume n i.i.d. observations (X_i^(k), Y_i^(k)), i ∈ [n], are obtained in each of
the K studies. It is straightforward to extend the procedure and analysis to scenarios with
different sample sizes across studies, so we omit this. Our proposed
"Two-Step Aggregation Sure Independence Screening" procedure ("TSA-SIS" for short) is
formally described below:
Step 1. Screening in each study
In the first step, we perform a screening test in each study k ∈ [K] and obtain the estimated
set of studies with potential zero correlations, l̂_j, for each j ∈ [p], as:
l̂_j = {k : |T̂_j^(k)| ≤ Φ^{-1}(1 − α_1/2)}   and   T̂_j^(k) = √n σ̂_j^(k) / (θ̂_j^(k))^{1/2},        (3.2)

where σ̂_j^(k) = (1/n) Σ_{i=1}^{n} (X_ij^(k) − X̄_j^(k))(Y_i^(k) − Ȳ^(k)) is the sample covariance and
θ̂_j^(k) = (1/n) Σ_{i=1}^{n} [(X_ij^(k) − X̄_j^(k))(Y_i^(k) − Ȳ^(k)) − σ̂_j^(k)]^2. T̂_j^(k) is the self-normalized
estimator of the covariance between X_j^(k) and Y^(k). Φ is the CDF of the standard normal
distribution and α_1 the pre-specified significance level.
In each study, we test whether |T̂_j^(k)| > Φ^{-1}(1 − α_1/2); if not, we include study k in
l̂_j. This step does not screen out any variables, but instead separates potential zero from
non-zero study-specific correlations in preparation for the next step. Define the cardinality
of l̂_j as κ̂_j = |l̂_j|. If κ̂_j = 0 (i.e., there is no potential zero correlation), we retain feature
j for sure and do not consider it in step 2; otherwise, we move on to step 2.
Remark 1. By the scaling property of T̂_j^(k), it is sufficient to impose assumptions on the
standardized variables W^(k) = (Y^(k) − E(Y^(k))) / √var(Y^(k)) and Z_j^(k) = (X_j^(k) − E(X_j^(k))) / √var(X_j^(k)).
Thus T̂_j^(k) can also be treated as the self-normalized estimator of correlation. We can therefore
define θ_j^(k) = var(Z_j^(k) W^(k)) and σ_j^(k) = cov(Z_j^(k), W^(k)) = ρ_j^(k).
Remark 2. In our analysis, the index set in (3.2) is shown to coincide with l_j (j ∈ A^[0]) and
l_j (j ∈ A^[1]), which will be introduced in more detail in Section 4.
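To make the first step concrete, the following is a minimal numerical sketch (our own illustration, not the authors' code; the function names `self_normalized_stats` and `step1_screen` are ours) of how the statistics T̂_j^(k) and the sets l̂_j in (3.2) could be computed with NumPy and SciPy, assuming each study is supplied as a design matrix X of shape (n, p) and a response vector y of length n.

```python
import numpy as np
from scipy.stats import norm

def self_normalized_stats(X, y):
    """Self-normalized covariance statistics T-hat_j for one study.

    X : (n, p) design matrix, y : (n,) response.
    Returns an array of length p with T-hat_j = sqrt(n) * sigma-hat_j / sqrt(theta-hat_j).
    """
    n = X.shape[0]
    Xc = X - X.mean(axis=0)            # X_ij - Xbar_j
    yc = y - y.mean()                  # Y_i - Ybar
    prods = Xc * yc[:, None]           # (X_ij - Xbar_j)(Y_i - Ybar)
    sigma_hat = prods.mean(axis=0)     # sample covariances, one per feature
    theta_hat = ((prods - sigma_hat) ** 2).mean(axis=0)
    return np.sqrt(n) * sigma_hat / np.sqrt(theta_hat)

def step1_screen(studies, alpha1=1e-4):
    """Step 1: for each feature j, collect the studies whose |T-hat_j^(k)| fails to
    exceed the normal threshold, i.e. the estimated set l-hat_j of potential zeros.

    studies : list of (X, y) pairs, one per study.
    Returns T (K x p array of statistics) and l_hat (one list of study indices per feature).
    """
    thresh = norm.ppf(1 - alpha1 / 2)
    T = np.vstack([self_normalized_stats(X, y) for X, y in studies])
    p = T.shape[1]
    l_hat = [[k for k in range(T.shape[0]) if abs(T[k, j]) <= thresh] for j in range(p)]
    return T, l_hat
```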
Step 2. Aggregate screening
In the second step, we wish to test whether the aggregate effect of the potential zero correlations
in l̂_j identified in step 1 is strong enough for the feature to be retained. Define the statistic
L̂_j = Σ_{k∈l̂_j} (T̂_j^(k))^2; under the null, this statistic approximately follows a χ^2_{κ̂_j} distribution
with κ̂_j degrees of freedom. Thus we can estimate Â^[0] by:

Â^[0] = {j ∈ [p] : L̂_j ≤ ϕ^{-1}_{κ̂_j}(1 − α_2) and κ̂_j ≠ 0},        (3.3)

or equivalently estimate Â^[1] by:

Â^[1] = {j ∈ [p] : L̂_j > ϕ^{-1}_{κ̂_j}(1 − α_2) or κ̂_j = 0},        (3.4)

where ϕ_{κ̂_j} is the CDF of the chi-square distribution with degrees of freedom equal to κ̂_j and α_2
is the pre-specified significance level.
The second step takes the sum of squares of the T̂_j^(k) from studies with potential zero
correlation as the test statistic. For each feature j, we test whether Σ_{k∈l̂_j} (T̂_j^(k))^2 > ϕ^{-1}_{κ̂_j}(1 − α_2). If
rejected, we conclude that the aggregate effect is strong and the feature needs to be retained;
otherwise, we screen it out. This step performs a second check in addition to the individual
testing in step 1 and potentially saves important features with weak signals in
individual studies but a strong aggregate effect.
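Continuing the sketch above (again our own illustration, not the authors' implementation), the second step aggregates the statistics of the studies in l̂_j and compares them to a chi-square quantile; features with κ̂_j = 0 are kept automatically.

```python
import numpy as np
from scipy.stats import chi2

def step2_aggregate(T, l_hat, alpha2=0.05):
    """Step 2: keep feature j if kappa-hat_j = 0 or if the aggregate statistic
    L-hat_j = sum of squared T-hat_j^(k) over k in l-hat_j exceeds the chi-square quantile."""
    keep = []
    for j, lj in enumerate(l_hat):
        if len(lj) == 0:                         # kappa-hat_j = 0: retain for sure
            keep.append(j)
            continue
        L_j = float(np.sum(T[lj, j] ** 2))
        if L_j > chi2.ppf(1 - alpha2, df=len(lj)):
            keep.append(j)
    return keep                                   # estimate of A-hat^[1]
```

Applying `step1_screen` followed by `step2_aggregate` gives the TSA-SIS estimate of Â^[1]; the OneStep-SIS comparison corresponds to keeping only the features with κ̂_j = 0.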
In Table 1, we use a toy example to demonstrate our idea and compare the two approaches
(“OneStep-SIS” vs. “TSA-SIS”). In this example, suppose we have five studies (K = 5)
and three features (two signals and one noise). “S1” is a strong signal with β = 0.8 in all
studies, “S2” is a weak signal with β = 0.4 in all studies and “N1” is a noise with β = 0.
In hypothesis testing, both small β and zero β can give small marginal correlation and are
sometimes indistinguishable. Suppose T = 3.09 is used as the threshold (corresponding to
α_1 = 0.001). For the strong signal "S1", all studies have large marginal correlations, so both
the "OneStep-SIS" and "TSA-SIS" procedures include it correctly. For the weak signal "S2",
since in many studies it has small correlations, it is incorrectly screened out by the "OneStep-SIS"
procedure (False Negative). However, the "TSA-SIS" procedure saves it in the second
step (with α_2 = 0.05). For the noise "N1", both methods tend to remove it after screening.

Table 1: Toy example to demonstrate the strength of the two-step screening procedure.
Entries under k = 1, ..., 5 are the values of |T̂_j^(k)|.

Feature       k=1    k=2    k=3    k=4    k=5    l̂_j            κ̂_j   L̂_j                      TSA-SIS Â^[0]/Â^[1]   OneStep-SIS Â^[0]/Â^[1]
S1 (signal)   3.71   3.16   3.46   3.63   3.24   ∅               0     --                        N / Y                  N / Y
S2 (signal)   3.70   2.71   2.65   2.68   1.94   {2, 3, 4, 5}    4     25.31 > ϕ^{-1}_4(0.95)    N / Y                  Y / N (FN)
N1 (noise)    0.42   0.54   0.56   0.12   0.69   {1, 2, 3, 4, 5} 5     1.27 < ϕ^{-1}_5(0.95)     Y / N                  Y / N
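As a quick numerical check of the toy example (an illustration we added, not part of the original text), the aggregate statistics for S2 and N1 can be recomputed directly from the table entries:

```python
from scipy.stats import chi2

t_S2 = [2.71, 2.65, 2.68, 1.94]        # studies 2-5, where S2 looked weak
t_N1 = [0.42, 0.54, 0.56, 0.12, 0.69]  # all five studies for the noise feature

L_S2 = sum(t ** 2 for t in t_S2)       # ~25.31 > chi2.ppf(0.95, 4) ~ 9.49  -> retained
L_N1 = sum(t ** 2 for t in t_N1)       # ~1.27  < chi2.ppf(0.95, 5) ~ 11.07 -> screened out
print(round(L_S2, 2), round(chi2.ppf(0.95, 4), 2))
print(round(L_N1, 2), round(chi2.ppf(0.95, 5), 2))
```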
4 Theoretical properties

4.1 Assumptions and conditions
We impose the following conditions to establish the model selection consistency of our procedure:
(C1) (Sub-Gaussian condition) There exist some constants M_1 > 0 and η > 0 such that for
all |t| ≤ η, j ∈ [p], k ∈ [K]:

E{exp(t (Z_j^(k))^2)} ≤ M_1,        E{exp(t (W^(k))^2)} ≤ M_1.

In addition, there exists some τ_0 > 0 such that min_{j,k} θ_j^(k) ≥ τ_0.
(C2) The number of studies K = O(p^b) for some constant b ≥ 0. The dimension satisfies
log^3(p) = o(n) and κ_j log^2 p = o(n), where κ_j is defined next.
(C3) For j ∈ A^[0], l_j (j ∈ A^[0]) = {k : ρ_j^(k) = 0} and κ_j = |l_j|. If k ∉ l_j, then
|ρ_j^(k)| ≥ C_3 √(log p / n) · √(1.01 θ_j^(k)), where C_3 = 3(L + 1 + b).
(C4) For j ∈ A^[1], l_j (j ∈ A^[1]) = {k : |ρ_j^(k)| < C_1 √(log p / n) · √(0.99 θ_j^(k))} and κ_j = |l_j|, where
C_1 = L + 1 + b. If k ∉ l_j, then |ρ_j^(k)| ≥ C_3 √(log p / n) · √(1.01 θ_j^(k)). In addition, we require
Σ_{k∈l_j} |ρ_j^(k)|^2 ≥ C_2 (log^2 p + √κ_j · log p) / n, where C_2 is some large positive constant.
The first condition (C1) assumes that each standardized variable Z_j^(k) or W^(k), j ∈ [p],
k ∈ [K], marginally follows a sub-Gaussian distribution in each study. This condition relaxes
the normality assumption in Fan and Lv (2008) and Bühlmann et al. (2010). The second part
of (C1) assumes there always exists some positive τ_0 not greater than the minimum variance
of Z_j^(k) W^(k). In particular, if (X_j^(k), Y^(k)) jointly follows a multivariate normal distribution,
then θ_j^(k) = 1 + (ρ_j^(k))^2 ≥ 1, so we can always pick τ_0 = 1.
The second condition (C2) allows the dimension p to grow at an exponential rate of the
sample size n, which is a fairly standard assumption in high-dimensional analysis. Many
sure screening methods like "SIS", "DC-SIS" and "TPC" have used this assumption (Fan
and Lv, 2008; Li et al., 2012, 2017). Though the PC-simple algorithm (Bühlmann et al.,
2010) assumes a polynomial growth of p_n as a function of n, we note that it can be readily
relaxed to a rate exponential in n. Further, we require the product κ_j log^2 p to be small,
which is used to control the errors in the second step of our screening procedure. It is always
true if K log^2 p = o(n).
Condition (C3) assumes a lower bound on the non-zero correlations (i.e., k ∉ l_j) for features
from A^[0]. In other words, if the marginal correlation |ρ_j^(k)| is not zero, then it must be
large enough to be detected. While this has been a key assumption for
a single study in many sure screening methods (Fan and Lv, 2008; Bühlmann et al., 2010;
Li et al., 2012, 2017), we only impose this assumption for j ∈ A^[0] rather than all j ∈ [p].
This condition is used to control the type II error in step 1 for features from A^[0].
Condition (C4) gives assumptions on features from A^[1]. We assume the correlations are
small for those k ∈ l_j and large for those k ∉ l_j, so that studies with strong and weak
signals can be well separated in the first step. This helps control the type II error in step
1 for features from A^[1]. For those studies in l_j, we further require their sum of squared
correlations to exceed a threshold, so that the type II error can be controlled in step 2.
This condition differs from other methods in the single-study scenario, which usually
assume a lower bound on each marginal correlation for features from A^[1], just like (C3). We
relax this condition and only put a restriction on their L_2 norm. This allows features from A^[1]
to have weak signals in each study but a strong combined signal. To appreciate this relaxation,
we compare the minimal requirements with and without step 2. For each j ∈ A^[1], in order to
detect this feature without step 2, we would need |ρ_j^(k)| ≥ C(log p/n)^{1/2} with some large constant C for all k ∈ l_j,
and thus at least Σ_{k∈l_j} |ρ_j^(k)|^2 ≥ C^2 κ_j log p / n. In comparison, the assumption in (C4) is much
weaker in reasonable settings where κ_j >> log p.
4.2 Consistency of the two-step screening procedure

We state the first theorem, involving the consistency of screening in our step 1:
Theorem 1. Consider a sequence of linear models as in (2.1) which satisfy conditions (C1)-(C4),
and define the event A = {l̂_j = l_j for all j ∈ [p]}. There exists a sequence
α_1 = α_1(n, p) → 0 as (n, p) → ∞, where α_1 = 2{1 − Φ(γ √(log p))} with γ = 2(L + 1 + b), such
that:

P(A) = 1 − O(p^{−L}) → 1 as (n, p) → ∞.        (4.1)
The proof of Theorem 1 can be found in Section 9. This theorem states that the screening
in our first step correctly identifies the set l_j for features in both A^[0] and A^[1] (in which strong
and weak signals are well separated) and the chance of incorrect assignment is low. Given
the results in Theorem 1, we can now show the main theorem for the consistency of the
two-step screening procedure:
Theorem 2. Consider a sequence of linear models as in (2.1) which satisfy conditions
(C1)-(C4). There exist sequences α_1 = α_1(n, p) → 0 and α_2 = α_2(n, p) → 0 as (n, p) → ∞,
where α_1 = 2{1 − Φ(γ √(log p))} with γ = 2(L + 1 + b) and α_2 = 1 − ϕ_{κ_j}(γ_{κ_j}) with
γ_{κ_j} = κ_j + C_4 (log^2 p + √κ_j · log p) for some constant C_4 > 0, such that:

P{Â^[1](α_1, α_2) = A^[1]} = 1 − O(p^{−L}) → 1 as (n, p) → ∞.        (4.2)
The proof of Theorem 2 can be found in Section 9. The result shows that the two-step
screening procedure enjoys model selection consistency and identifies the model specified
in (3.1) with high probability. The choice of significance levels that yields consistency is
α_1 = 2{1 − Φ(γ √(log p))} and α_2 = 1 − ϕ_{κ_j}(γ_{κ_j}).
4.3 Partial faithfulness and sure screening property

Bühlmann et al. (2010) first came up with the partial faithfulness assumption, which theoretically
justifies the use of marginal or partial correlation in screening, as follows:

ρ_{j|S} = 0 for some S ⊆ {j}^C implies β_j = 0,        (4.3)

where S is the set of variables conditioned on. For independence screening, S = ∅.
Under two conditions, the positive definiteness of Σ_X and the non-zero regression coefficients
being realizations from some common absolutely continuous distribution, they showed
that partial faithfulness holds almost surely (Theorem 1 in Bühlmann et al. (2010)). Since
the random effect model described in Section 2 also satisfies the two conditions, partial
faithfulness holds almost surely in each study.
Thus, we can readily extend their Theorem 1 to the scenario with multiple studies:

Corollary 1. Consider a sequence of linear models as in (2.1) satisfying the partial faithfulness
condition in each study, with the true active and inactive sets defined in (2.2). Then the
following holds for every j ∈ [p]:

ρ_{j|S}^(k) = 0 for some k and some S ⊆ {j}^C implies β_j = 0.        (4.4)
The proof is straightforward and thus omitted: if ρ_{j|S}^(k) = 0 for some study k, then by
partial faithfulness we have β_j^(k) = 0 for that particular k. Since in (2.2) we only consider
features whose β_j^(k)'s are zero in all studies or non-zero in all studies, we then have β_j = 0. In the case
of independence screening (i.e., S = ∅), ρ_j^(k) = 0 for some k implies a zero β_j.
With the model selection consistency in Theorem 2 and the extended partial faithfulness
condition in Corollary 1, the sure screening property of our two-step screening procedure
immediately follows:
Corollary 2. Consider a sequence of linear models as in (2.1) which satisfy conditions
(C1)-(C4) as well as the extended partial faithfulness condition in Corollary 1. There exist
sequences α_1 = α_1(n, p) → 0 and α_2 = α_2(n, p) → 0 as (n, p) → ∞,
where α_1 = 2{1 − Φ(γ √(log p))} with γ = 2(L + 1 + b) and α_2 = 1 − ϕ_{κ_j}(γ_{κ_j}) with γ_{κ_j} =
κ_j + C_4 (log^2 p + √κ_j · log p), such that:

P{A ⊆ Â^[1](α_1, α_2)} = 1 − O(p^{−L}) → 1 as (n, p) → ∞.        (4.5)
The proof of this Corollary simply combines the results of Theorem 2 and the extended
partial faithfulness and is skipped here.
5 Algorithms for variable selection with multiple studies
Usually, performing sure screening once may not remove enough unimportant features. In our
case, since there are multiple studies, we expect our two-step screening procedure to remove
many more unimportant features than in a single study. If the dimension is still high after
applying our screening procedure, we can readily extend the two-step screening procedure
to an iterative variable selection algorithm by testing the partial correlation with gradually
increasing size of the conditioning set S. Since this method is a multiple-study extension of
the PC-simple algorithm in Bühlmann et al. (2010), we call it the "Multi-PC" algorithm (Section
5.1).
On the other hand, if the dimension has already been greatly reduced by the two-step
screening, we can simply add a second-stage group-based feature selection technique to
select the final set of variables (Section 5.2).
5.1 Multi-PC algorithm
We start from S = ∅, i.e., our two-step screening procedure, and build a first set of candidate
active variables:

Â^[1,1] = Â^[1] = {j ∈ [p] : L̂_j > ϕ^{-1}_{κ̂_j}(1 − α_2) or κ̂_j = 0}.        (5.1)
We call this set the stage-1 active set, where the first index in [·,·] corresponds to the stage
of our algorithm and the second index indicates whether the set contains active variables
([·,1]) or inactive variables ([·,0]). If the dimensionality has already been decreased by a large
amount, we can directly apply group-based feature selection methods such as the group lasso to
the remaining variables (to be introduced in Section 5.2).
However, if the dimension is still very high, we can further reduce it by increasing
the size of S and considering partial correlations given variables in Â^[1,1]. We follow a
similar two-step procedure, but now using partial correlations of order one instead of marginal
correlations, and obtain a smaller stage-2 active set:

Â^[2,1] = {j ∈ Â^[1,1] : L̂_{j|q} > ϕ^{-1}_{κ̂_{j|q}}(1 − α_2) or κ̂_{j|q} = 0, for all q ∈ Â^[1,1] \ {j}},        (5.2)

where each self-normalized estimator of partial correlation can be computed from the
residuals of regressing on the variables in the conditioning set.
We can continue screening higher-order partial correlations, resulting in a nested sequence
of m active sets:

Â^[m,1] ⊆ . . . ⊆ Â^[2,1] ⊆ Â^[1,1].        (5.3)

Note that the active and inactive sets at each stage are non-overlapping, and the union
of the active and inactive sets at stage m is the active set at the previous stage m − 1, i.e.,
Â^[m,1] ∪ Â^[m,0] = Â^[m−1,1]. This is very similar to the original PC-simple algorithm, but now
at each order level we perform the two-step procedure. The algorithm can stop at any stage
m when the dimension of Â^[m,1] has already dropped to a low or moderate level and other common
group-based feature selection techniques can be used to select the final set. Alternatively,
we can continue the algorithm until the candidate active set does not change anymore. The
algorithm can be summarized as follows:
Algorithm 1. Multi-PC algorithm for variable selection.
Step 1. Set m = 1 and perform the two-step screening procedure to construct the stage-1 active set:
Â^[1,1] = {j ∈ [p] : L̂_j > ϕ^{-1}_{κ̂_j}(1 − α_2) or κ̂_j = 0}.
Step 2. Set m = m + 1. Construct the stage-m active set:
Â^[m,1] = {j ∈ Â^[m−1,1] : L̂_{j|S} > ϕ^{-1}_{κ̂_{j|S}}(1 − α_2) or κ̂_{j|S} = 0,
for all S ⊆ Â^[m−1,1] \ {j} with |S| = m − 1}.
Step 3. Repeat Step 2 until m = m̂_reach, where m̂_reach = min{m : |Â^[m,1]| ≤ m}.
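A rough sketch of the order-1 stage of Algorithm 1 is given below. This is our own illustration, not the authors' implementation: it reuses the hypothetical `step1_screen` and `step2_aggregate` helpers from Section 3 and realizes the partial-correlation test by residualizing on the conditioning variable, as described after (5.2).

```python
import numpy as np

def residualize(v, Z):
    """Residuals of v after regressing it on the columns of Z (plus an intercept)."""
    Z1 = np.column_stack([np.ones(len(v)), np.atleast_2d(Z).reshape(len(v), -1)])
    coef, *_ = np.linalg.lstsq(Z1, v, rcond=None)
    return v - Z1 @ coef

def multi_pc_stage2(studies, active, alpha1=1e-4, alpha2=0.05):
    """One pass of the order-1 screening in (5.2): drop j from the active set if,
    for some q in the active set, the two-step test on the residualized data fails."""
    keep = []
    for j in active:
        retained = True
        for q in active:
            if q == j:
                continue
            resid_studies = []
            for X, y in studies:
                xj = residualize(X[:, j], X[:, [q]])
                yr = residualize(y, X[:, [q]])
                resid_studies.append((xj.reshape(-1, 1), yr))
            T, l_hat = step1_screen(resid_studies, alpha1)   # from the earlier sketch
            if 0 not in step2_aggregate(T, l_hat, alpha2):   # single feature, index 0
                retained = False
                break
        if retained:
            keep.append(j)
    return keep
```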
5.2 Two-stage feature selection
As an alternative to the "Multi-PC" algorithm for variable selection, we also introduce here a
two-stage feature selection algorithm that combines our two-step screening procedure with
other regular feature selection methods. In a single study, for example, Fan and Lv
(2008) performed sure independence screening in the first stage followed by model selection
techniques including the adaptive Lasso, the Dantzig selector and SCAD, and named those
procedures "SIS-AdaLasso", "SIS-DS" and "SIS-SCAD", accordingly.
In our case, since the feature selection is group-based, we adopt a model selection technique
using a group Lasso penalty in the second stage:

min_β  Σ_{k=1}^{K} ||y^(k) − X^(k)_{Â^[1]} β^(k)_{Â^[1]}||_2^2 + λ Σ_{j∈Â^[1]} ||β_j||_2,        (5.4)

where Â^[1] is the active set identified by our two-step screening procedure and the tuning
parameter λ can be chosen by cross-validation or BIC in practice, just as for a regular group
Lasso problem. We call this two-stage feature selection algorithm "TSA-SIS-groupLasso".
In addition, at any stage of the "Multi-PC" algorithm where the dimension has already
dropped to a moderate level, group Lasso-based feature selection techniques can
take over to select the final set of variables.
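For the second stage, a minimal proximal-gradient sketch of objective (5.4) is shown below. This is our own illustration under stated assumptions, not the authors' implementation; in practice one would typically use a dedicated group-lasso solver and tune λ by cross-validation or BIC. Each row of B collects the K study-specific coefficients of one screened feature, and the penalty shrinks these rows as groups.

```python
import numpy as np

def group_lasso_multistudy(Xs, ys, lam, n_iter=1000):
    """Proximal gradient for sum_k ||y_k - X_k b_k||^2 + lam * sum_j ||B[j, :]||_2,
    where Xs[k] is the study-k design restricted to the screened features."""
    K, p = len(Xs), Xs[0].shape[1]
    B = np.zeros((p, K))
    # Step size from the Lipschitz constant of the (study-separable) smooth part.
    step = 1.0 / (2.0 * max(np.linalg.norm(X, ord=2) ** 2 for X in Xs))
    for _ in range(n_iter):
        G = np.zeros_like(B)
        for k in range(K):
            G[:, k] = 2.0 * Xs[k].T @ (Xs[k] @ B[:, k] - ys[k])
        B = B - step * G
        # Block soft-thresholding: one block per feature (row of B).
        norms = np.linalg.norm(B, axis=1, keepdims=True)
        B *= np.clip(1.0 - step * lam / np.maximum(norms, 1e-12), 0.0, None)
    return B
```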
6 Numerical evidence
In this section, we demonstrate the advantage of the TSA-SIS procedure in comparison to the
multiple-study extension of SIS (named "Min-SIS"), which ranks the features by the minimum
absolute correlation among all studies. We simulated data according to the linear
model in (2.1) with p covariates having zero mean and covariance matrix Σ_{i,j}^(k) = r^{|i−j|},
where Σ_{i,j}^(k) denotes the (i, j)th entry of Σ_X^(k).
In the first part of the simulation, we fixed the sample size n = 100, p = 1000, the number of
studies K = 5, and performed B = 1000 replications in each setting. We assumed that the
true active set consisted of only ten variables and all the other variables had zero coefficients
(i.e., s_0 = 10). The indices of non-zero coefficients were evenly spaced between 1 and p.
The variance of the random error term in the linear model was fixed to be 0.5^2. We randomly
drew r from {0, 0.2, 0.4, 0.6} and allowed different r's in different studies. We considered the
following four settings (a small data-generating sketch is given below, after Table 2):
1. Homogeneous weak signals across all studies: nonzero β_j generated from Unif(0.1, 0.3)
and β_j^(1) = β_j^(2) = . . . = β_j^(K) = β_j.
2. Homogeneous strong signals across all studies: nonzero β_j generated from Unif(0.7, 1)
and β_j^(1) = β_j^(2) = . . . = β_j^(K) = β_j.
3. Heterogeneous weak signals across all studies: nonzero β_j generated from Unif(0.1, 0.3)
and β_j^(k) ∼ N(β_j, 0.5^2).
4. Heterogeneous strong signals across all studies: nonzero β_j generated from Unif(0.7, 1)
and β_j^(k) ∼ N(β_j, 0.5^2).

Table 2: Sensitivity analysis on the choice of α_1 and α_2 in the simulation (Sensitivity/Specificity).

                 α_2 = 0.15     α_2 = 0.05     α_2 = 0.01     α_2 = 0.001
α_1 = 0.01       0.793/0.901    0.525/0.984    0.210/0.999    0.142/1.000
α_1 = 0.001      0.947/0.826    0.864/0.943    0.691/0.990    0.373/0.999
α_1 = 0.0001     0.966/0.816    0.922/0.932    0.840/0.985    0.681/0.998

Note: All values are based on average results from B = 1000 replications.
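A data-generating sketch for the four settings above could look as follows (our own illustration; the function names are ours, and we perturb only the non-zero coefficients in the heterogeneous settings, which is one reading of the description above):

```python
import numpy as np

def make_beta(p=1000, s0=10, low=0.1, high=0.3, rng=None):
    """Common coefficient vector: s0 evenly spaced non-zero entries from Unif(low, high)."""
    rng = np.random.default_rng() if rng is None else rng
    beta = np.zeros(p)
    idx = np.linspace(0, p - 1, s0).astype(int)
    beta[idx] = rng.uniform(low, high, size=s0)
    return beta

def simulate_study(beta, n=100, r=0.2, hetero_sd=0.0, noise_sd=0.5, rng=None):
    """One study from model (2.1) with covariance Sigma[i, j] = r ** |i - j|.
    hetero_sd > 0 perturbs the non-zero coefficients study by study."""
    rng = np.random.default_rng() if rng is None else rng
    p = beta.size
    cols = np.arange(p)
    Sigma = r ** np.abs(cols[:, None] - cols[None, :])
    # For large p a Cholesky-based sampler would be faster; the default is fine for a sketch.
    X = rng.multivariate_normal(np.zeros(p), Sigma, size=n)
    b_k = beta + hetero_sd * rng.standard_normal(p) * (beta != 0)
    y = X @ b_k + noise_sd * rng.standard_normal(n)
    return X, y

# Example: setting 3 (heterogeneous weak), K = 5 studies with study-specific r's.
rng = np.random.default_rng(0)
beta = make_beta(rng=rng)
studies = [simulate_study(beta, r=rng.choice([0.0, 0.2, 0.4, 0.6]), hetero_sd=0.5, rng=rng)
           for _ in range(5)]
```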
We evaluated the performance of Min-SIS using receiver operating characteristic (ROC)
curves which measured the accuracy of variable selection independently from the issue of
choosing good tuning parameters (for Min-SIS, the tuning parameter is the top number of
features d). The OneStep-SIS procedure we mentioned above was actually one special case
of the Min-SIS procedure (by thresholding at α1 ). In presenting our TSA-SIS procedure,
we fixed α1 = 0.0001 and α2 = 0.05 so the result was just one point on the sensitivity vs.
1-specificity plot. We also performed some sensitivity analysis on the two cutoffs based on
the first simulation (see Table 2) and found the two values to be optimal since they had
both high sensitivity and high specificity. Thus we suggested fixing these two values in all
the simulations.
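For reference, the sensitivity and specificity reported in Table 2 and Figure 1 can be computed from a selected set as follows (a small helper of our own, not from the paper):

```python
def sens_spec(selected, true_active, p):
    """Sensitivity = fraction of true signals selected; specificity = fraction of nulls excluded."""
    selected, true_active = set(selected), set(true_active)
    tp = len(selected & true_active)
    tn = p - len(true_active) - len(selected - true_active)
    return tp / len(true_active), tn / (p - len(true_active))
```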
Figure 1 showed the results of simulations 1-4. When the signals were homogeneously weak
in all studies as in (1), TSA-SIS clearly outperformed the Min-SIS procedure (it lies above
the Min-SIS ROC curve). It reached about 90% sensitivity with controlled false positive errors
(specificity ∼ 95%). In order to reduce false negatives, Min-SIS had to sacrifice specificity and
increase the false positives, which in the end loses the benefit of performing screening
(i.e., it ends up keeping too many features). When the signals became strong as in (2), both
procedures performed equally well. This fits our motivation and theory and shows the
strength of our two-step procedure in saving weak signals without much increase in the false
positive rate. When the signals became heterogeneous as in (3) and (4), both procedures
performed worse than before. But the Min-SIS procedure never outperformed the TSA-SIS
procedure, since it only examines the minimum correlation among all studies while the
two-step procedure additionally considers the aggregate statistic.
7 Real data application
We next demonstrated our method on three microarray datasets of triple-negative breast
cancer (TNBC, sometimes a.k.a. basal-like), an aggressive subtype of breast cancer usually
with poor prognosis. Previous studies have shown that the tumor suppressor protein p53
plays an important role in breast cancer prognosis and that its expression is associated with
both disease-free survival and overall survival in TNBC (Yadav et al., 2015). Our purpose
was to identify the genes most relevant and predictive of the response, the expression level
of the TP53 gene, which encodes the p53 protein. The three datasets are publicly available on
the authors' websites or at the GEO repository and include METABRIC (a large cohort consisting
of roughly 2000 primary breast tumours), GSE25066 and GSE76250 (Curtis et al., 2012;
Itoh et al., 2014; Liu et al., 2016).

[Figure 1 about here. Simulation results 1-4: each panel plots sensitivity against 1−specificity; the ROC curve is for Min-SIS, and the black point is for our TSA-SIS using α_1 = 0.0001 and α_2 = 0.05. Panels: (1) homogeneous weak, (2) homogeneous strong, (3) heterogeneous weak, (4) heterogeneous strong.]

We subset the data to focus on the TNBC cases only
and ended up with 275, 178 and 165 TNBC samples in each dataset, respectively. After
routine preprocessing and filtering by including genes sufficiently expressed and with enough
variation, a total of 3377 genes remained in common for the analysis.
We applied our Multi-PC algorithm and compared to the OneStep-SIS procedure as
well as the Min-SIS method by using d = n/ log(n) = 49 (as suggested by their paper).
We used α1 = 0.0001 and α2 = 0.05 (as determined by sensitivity analysis in simulation)
and the “Multi-PC” algorithm only ran up to the first order (i.e. m = 2) and stopped
with six features. This again showed the power of screening with multiple studies. After
feature selection, we fit the linear model in each study to obtain the coefficient estimates and
adjusted R2 . Table 3 showed the coefficient estimates and standard errors of the final set of
six genes selected by our procedure. We added three columns to indicate whether they were
also retained by the Min-SIS (and their relative rank) or OneStep-SIS procedures. As we
can see from the table, all the six genes selected by our procedure were missed by the other
methods. Those genes typically had weak signals in one or more studies and thus were very likely
to be incorrectly excluded if only one-step screening is performed. Since the METABRIC
study had a larger sample size, all the coefficients appeared to be more significant than in the
other two studies.

Table 3: The six genes selected by our TSA-SIS procedure.

Gene         METABRIC Est (SE)   GSE25066 Est (SE)   GSE76250 Est (SE)   In Min-SIS (d=49)   Rank in Min-SIS   In OneStep-SIS (|S|=25)
Intercept    7.600 (1.502)       0.213 (0.553)       -1.783 (0.971).
EXOC1        0.251 (0.081)**     0.278 (0.157).      0.293 (0.167)       N                   164               N
ITGB1BP1     -0.134 (0.045)**    0.003 (0.111)       -0.178 (0.194)      N                   123               N
RBM23        0.168 (0.078)*      0.144 (0.167)       0.367 (0.168)*      N                   152               N
SETD3        -0.166 (0.081)*     0.366 (0.184)*      -0.080 (0.175)      N                   101               N
SQSTM1       -0.114 (0.050)*     0.029 (0.099)       0.245 (0.183)       N                   98                N
TRIOBP       -0.126 (0.062)*     0.084 (0.118)       0.628 (0.261)*      N                   91                N
Adjusted-R^2 0.151               0.522               0.359

Note: "." indicates a significance level of 0.1, "*" a level of 0.05, "**" a level of 0.01.
The gene EXOC1 and p53 are both components of the Ras signaling pathway which
is responsible for cell growth and division and can ultimately lead to cancer (Rajalingam
et al., 2007). RBM23 encodes for an RNA-binding protein implicated in the regulation of
estrogen-mediated transcription and has been found to be associated with p53 indirectly via
a heat shock factor (Asano et al., 2016). ITGB1BP1 encodes for an integrin protein which is
essential for cell adhesion and other downstream signaling pathways that are also modulated
by p53 (Brakebusch et al., 2002).
8 Discussion
In this paper, we proposed a two-step screening procedure for high-dimensional regression
analysis with multiple related studies. In a fairly general framework with weaker assumptions
on the signal strength, we showed that our procedure possesses the sure screening property
for exponentially growing dimensionality without requiring the normality assumption. We
have shown through simulations that our procedure consistently outperformed the rank-based
SIS procedure independently of its tuning parameter d. As far as we know, this paper
is the first to propose a procedure for variable screening in high-dimensional regression
when there are multiple related studies. In addition, we also introduced two applicable
variable selection algorithms following the two-step screening procedure.
Variable selection in regression with multiple studies has been studied before in a subfield of
machine learning called multi-task learning (MTL), where the general approach is to
apply regularization methods using a group Lasso penalty, fused Lasso penalty or trace
norm penalty, etc. (Argyriou et al., 2007; Zhou et al., 2012; Ji and Ye, 2009). However, at
ultra-high dimension, such regularization methods usually fail due to challenges in computational
expediency, statistical accuracy and algorithmic stability. Instead, sure screening can be
used as a fast algorithm for preliminary feature selection, and as long as it exhibits comparable
statistical performance both theoretically and empirically, its computational advantages
make it a good choice in application (Genovese et al., 2012). Our method provides an
alternative for targeting high-dimensional multi-task learning problems.
The current two-step screening procedure is based on linear models but relaxes the
Gaussian assumption to sub-Gaussian distributions. One can apply a modified Fisher's
z-transformation estimator rather than our self-normalized estimator to readily accommodate
general elliptical distribution families (Li et al., 2017). In biomedical applications, non-continuous
outcomes such as categorical, count or survival outcomes are more commonly
seen. Fan et al. (2010) extended SIS and proposed a more general independent learning
approach for generalized linear models by ranking the maximum marginal likelihood estimates.
Fan et al. (2011) further extended correlation learning to marginal nonparametric
learning for screening in ultra-high dimensional additive models. Other researchers have exploited
more robust measures for correlation screening (Zhu et al., 2011; Li et al., 2012; Balasubramanian
et al., 2013). All these measures are potential extensions of our method, obtained by modifying the
marginal utility used in the screening procedure. Besides, the idea of performing screening
with multiple studies is quite general and is applicable to relevant statistical models other
than the regression model, for example Gaussian graphical models with multiple studies. We
leave these interesting problems for future study.
9 Proofs
We start by introducing three technical lemmas that are essential for the proofs of the main
results. By the scaling property of T̂_j^(k) and Remark 1, without loss of generality we can
assume E(X_j^(k)) = E(Y^(k)) = 0 and var(X_j^(k)) = var(Y^(k)) = 1 for all k ∈ [K], j ∈ [p].
Therefore in the proofs we do not distinguish between σ_j^(k) and ρ_j^(k). The first lemma gives
concentration inequalities for the self-normalized covariance and θ̂_j^(k).
Lemma 1. Under the assumptions (C1) and (C2), for any δ ≥ 2 and M > 0, we have:

(i) P(max_{j,k} |σ̂_j^(k) − σ_j^(k)| / (θ̂_j^(k))^{1/2} ≥ δ √(log p / n)) = O((log p)^{−1/2} p^{−δ+1+b}),

(ii) P(max_{j,k} |θ̂_j^(k) − θ_j^(k)| ≥ C_θ √(log p / n)) = O(p^{−M}),

where C_θ is a positive constant depending on M_1, η and M only.
The second and third lemmas, which will be used in the proof of Theorem 2, describe
the concentration behaviors of

Ĥ_j^(k) := (1/√n) Σ_{i=1}^{n} [(X_ij^(k) − X̄_j^(k))(Y_i^(k) − Ȳ^(k)) − ρ_j^(k)] / (θ_j^(k))^{1/2} = T̂_j^(k) (θ̂_j^(k)/θ_j^(k))^{1/2} − √n ρ_j^(k) / (θ_j^(k))^{1/2}

and

Ȟ_j^(k) := (1/√n) Σ_{i=1}^{n} (X_ij^(k) Y_i^(k) − ρ_j^(k)) / (θ_j^(k))^{1/2}.
Lemma 2. There exists some constant c > 0 such that,

P(|Σ_{k∈l_j} [(Ȟ_j^(k))^2 − 1]| > t) ≤ 2 exp(−c · min[t^2/κ_j, t^{1/2}]),

where c depends on M_1 and η only.
Lemma 3. There exists some constant C_H > 0 such that,

P(max_{j,k} |Ȟ_j^(k) − Ĥ_j^(k)| > C_H √(log^2 p / n)) = O(p^{−M}),

P(max_{j,k} |(Ȟ_j^(k))^2 − (Ĥ_j^(k))^2| > C_H √(log^3 p / n)) = O(p^{−M}),

where C_H depends on M_1, η, M and τ_0 only.
The proofs of the three lemmas are provided in the Appendix.
Proof of Theorem 1. We first define the following error events:

E_{j,k}^{I,A^[0]} = {|T̂_j^(k)| > Φ^{-1}(1 − α_1/2) and j ∈ A^[0], k ∈ l_j},
E_{j,k}^{II,A^[0]} = {|T̂_j^(k)| ≤ Φ^{-1}(1 − α_1/2) and j ∈ A^[0], k ∉ l_j},
E_{j,k}^{I,A^[1]} = {|T̂_j^(k)| > Φ^{-1}(1 − α_1/2) and j ∈ A^[1], k ∈ l_j},
E_{j,k}^{II,A^[1]} = {|T̂_j^(k)| ≤ Φ^{-1}(1 − α_1/2) and j ∈ A^[1], k ∉ l_j}.
To show Theorem 1, i.e. that P(A) = 1 − O(p^{−L}), it suffices to show that,

P{∪_{j,k} (E_{j,k}^{I,A^[0]} ∪ E_{j,k}^{II,A^[0]})} = O(p^{−L}),        (9.1)

and

P{∪_{j,k} (E_{j,k}^{I,A^[1]} ∪ E_{j,k}^{II,A^[1]})} = O(p^{−L}).        (9.2)
One can apply Lemma 1 to bound each component in (9.1) and (9.2) with α_1 = 2{1 −
Φ(γ √(log p))} and γ = 2(L + 1 + b). Specifically, we obtain that,

P(∪_{j,k} E_{j,k}^{I,A^[0]}) = P(max_{j∈A^[0], k∈l_j} |T̂_j^(k)| ≥ γ √(log p)) = O((log p)^{−1/2} p^{−γ+1+b}) = o(p^{−L}),        (9.3)
where the second equality is due to Lemma 1 (i) with δ = γ, noting that σ_j^(k) = 0 and
T̂_j^(k) = √n σ̂_j^(k) / (θ̂_j^(k))^{1/2}.
[ I,A[1]
P ( Ej,k
) =P { max
j∈A[1] ,k∈lj
j,k
(k)
|T̂j | ≥ γ
(k)
≤P ( max
|
p
j∈A[1] ,k∈lj
log p}
(k)
σ̂j − ρj
r
| ≥ (γ − C1 )
(k)
(θ̂j )1/2
= 0 and
log p
) + O(p−L )
n
(9.4)
1
p−(γ−C1 )+1+b ) + O(p−L )
log p
=O(p−L ),
=O( √
where the inequality on the second line is due to assumption (C4) on lj for j ∈ A[1] , Lemma
(k)
(k)
(k)
1 (ii) with M = L, and assumption (C1) minj,k θj ≥ τ0 , i.e., θ̂j ≥ θj − Cθ (log p/n)1/2 ≥
(k)
0.99θj . The equality on the third line follows from Lemma 1 (i) where δ = γ −C1 = L+1+b.
In the end, we obtain that,

P{∪_{j, k∉l_j} (E_{j,k}^{II,A^[0]} ∪ E_{j,k}^{II,A^[1]})} = P(min_{j, k∉l_j} |T̂_j^(k)| < γ √(log p))
  ≤ P(max_{j, k∉l_j} |σ̂_j^(k) − ρ_j^(k)| / (θ̂_j^(k))^{1/2} ≥ (C_3 − γ) √(log p / n)) + O(p^{−L})        (9.5)
  = O((log p)^{−1/2} p^{−(C_3−γ)+1+b}) + O(p^{−L})
  = O(p^{−L}),

where the inequality on the second line is due to assumptions (C3) and (C4) on l_j, Lemma
1 (ii) with M = L and assumption (C1) on sub-Gaussian distributions, i.e., θ̂_j^(k) ≤ θ_j^(k) +
C_θ(log p/n)^{1/2} ≤ 1.01 θ_j^(k). In particular, we have implicitly used the fact that max_{j,k} θ_j^(k) is
upper bounded by a constant depending on M_1 and η only. The equality on the third line
follows from Lemma 1 (i) where δ = C_3 − γ = L + 1 + b.
Finally, we complete the proof by combining (9.3)-(9.5) to show (9.1)-(9.2).
Proof of Theorem 2. We first define the following error events:

E_j^{A^[0],2} = {|L̂_j| > ϕ^{-1}(1 − α_2) or κ̂_j = 0} for j ∈ A^[0],
E_j^{A^[1],2} = {|L̂_j| < ϕ^{-1}(1 − α_2) and κ̂_j ≠ 0} for j ∈ A^[1].
To prove Theorem 2, we only need to show that,

P(∪_{j∈A^[0]} E_j^{A^[0],2}) = O(p^{−L})  and  P(∪_{j∈A^[1]} E_j^{A^[1],2}) = O(p^{−L}),        (9.6)

with α_{2,κ_j} := 1 − ϕ_{κ_j}[κ_j + C_4(log^2 p + √κ_j · log p)] := 1 − ϕ_{κ_j}(γ_{κ_j}).
Recall the event A defined in Theorem 1. Thus we have that,

P{(∪_{j∈A^[0]} E_j^{A^[0],2}) ∪ (∪_{j∈A^[1]} E_j^{A^[1],2})}
  ≤ P(A^C) + p · max_{j∈A^[0]} P(Σ_{k∈l_j} (T̂_j^(k))^2 > γ_{κ_j}) + p · max_{j∈A^[1], κ_j≠0} P(Σ_{k∈l_j} (T̂_j^(k))^2 < γ_{κ_j}).

Therefore, given the results in Theorem 1, it suffices to show,

P(Σ_{k∈l_j} (T̂_j^(k))^2 > γ_{κ_j}) = O(p^{−L−1}) for any j ∈ A^[0],        (9.7)

and

P(Σ_{k∈l_j} (T̂_j^(k))^2 < γ_{κ_j}) = O(p^{−L−1}) for any j ∈ A^[1] and κ_j > 0.        (9.8)
[0]
We first prove equation (9.7). Since j ∈ A , we have
P
(k)2
to bound the probability of k∈lj T̂j > γκj below.
P(
X
(k)2
T̂j
> γκj )
(k)2
Ĥj
Cθ
> (1 −
τ0
(k)
Ĥj
=
(k)
T̂j
r
(k)
θ̂j
(k)
θj
. We are ready
k∈lj
≤P (
X
k∈lj
r
log p
)γκj ) + O(p−L−1 )
n
r
s
log3 p
) + O(p−L−1 )
n
j
s
X (k)2
p
κ2j log p
C
θ
2
=P ( (Ȟj − 1) > κj + C4 (log p + κj log p) −
τ0
n
k∈lj
s
s
s
Cθ C4
log5 p
κj log2 p
log3 p
−
(
+
) − κj − κj CH
) + O(p−L−1 )
τ0
n
n
n
X (k)2
p
≤P ( (Ȟj − 1) > C20 (log2 p + κj log p)) + O(p−L−1 )
X (k)2
Cθ
≤P ( (Ȟj − 1) > (1 −
τ0
k∈l
log p
)γκj − κj − κj CH
n
k∈lj
=O(p−L−1 ).
(k)
The inequality on the second line is due to assumption (C1) that min θj
j,k
≥ τ0 > 0 and
Lemma 1 (ii) with M = L + 1. The inequality on the third line follows from Lemma 3 with
M = L + 1. The inequality on the fifth line is by the choice of γκj with a sufficiently large
C4 > 0 and the assumption (C2) that log3 p = o(n) and κj log2 p = o(n). The last equality
follows from Lemma 2.
Lastly, we prove (9.8) as follows,
X (k)2
P(
T̂j < γκj )
k∈lj
√ (k)
(k)
X (k)
nρj 2 θj
) (k) < γκj )
=P ( (Ĥj + q
(k)
θ̂j
k∈lj
θj
r
√ (k)
X (k)
nρj 2
Cθ log p
)γκj ) + O(p−L−1 )
≤P ( (Ĥj + q
) ≤ (1 +
(k)
τ
n
0
k∈lj
θj
s
r
X (k)2
X (k)2
log3 p
Cθ log p
≤P ( (Ȟj − 1) ≤ κj CH
− κj + (1 +
)γκj − Cm n
ρj
n
τ
n
0
k∈lj
k∈lj
s
√ (k)
√
(k)
X (k) nρj
log2 p X n|ρj |
q
Ȟj q
+ 2CH
) + O(p−L−1 ).
−2
(k)
(k)
n
k∈lj
k∈lj
θj
θj
(k)
The inequality on the third line is due to assumption (C1) that min θj
j,k
(9.9)
≥ τ0 > 0 and
Lemma 1 (ii) with M = L + 1. The inequality on the fourth line follows from Lemma 3
(k)
(both equations) and min(θj )−1 := Cm > 0, guaranteed by the sub-Gaussian assumption
j,k
in assumption (C1).
We can upper bound the term 2CH
s
2CH
q
log2 p
n
√ (k)
n|ρ |
q j
k∈lj
(k)
θj
P
in (9.9) as follow,
s
√ (k)
√
s X
sX
X
n|ρj |
log p
log2 p n √
(k)2
(k)2
q
ρj = o( n
≤ 2CH
κj
ρj ).
√
(k)
n k∈l
n
τ0
k∈lj
k∈lj
θj
j
2
(9.10)
The first inequality is by the Cauchy-Schwarz inequality and assumption (C1), and the
second equality by the assumption (C2) that κj log2 p = o(n).
√ (k)
P
(k)
(k) nρj
We next upper bound the term −2 k∈lj Ȟj q (k)
with high probability. Note that θj
θj
(k)
θj
(k)
−1
Cm
is bounded below and above, i.e., τ0 ≤
≤
by assumption (C1). In addition, Ȟj has
zero mean and is sub-exponential with bounded constants by assumption (C1). By Bernstein
inequality (Proposition 5.16 in Vershynin (2010)), we have with some constant c0 > 0,
P (|2
X
k∈lj
We pick t = CB
(k)
|Ȟj
√ (k)
n|ρ |
t2
t
q j | > t) ≤ 2 exp(−c0 min[ P (k)2 ],
).
√
(k)
(k)
n
ρj
max n|ρj |
θj
k∈lj
k∈lj
q P
(k)2
n k∈lj ρj log2 p with a large constant CB in the inequality above and
apply (9.10) to reduce (9.9) as follows,
X (k)2
P(
T̂j < γκj )
k∈lj
s X
X (k)2
X (k)2
(k)2
≤P ( (Ȟj − 1) ≤ −Cm n
ρj log2 p
ρj + 2CB n
k∈lj
k∈lj
k∈lj
p
+ 2C4 κj log p + 2C4 log2 p) + O(p−L−1 )
q
X (k)2
p
p
2
≤P ( (Ȟj − 1) ≤ −Cm C2 (log p + κj log p) + 2CB C2 log2 p(log2 p + κj log p)
k∈lj
p
+ 2C4 κj log p + 2C4 log2 p) + O(p−L−1 )
X (k)2
p
≤P ( (Ȟj − 1) ≤ −C20 (log2 p + κj log p)) + O(p−L−1 )
k∈lj
=O(p−L−1 ).
The inequality on the first line is obtained by the choice of γκj with the chosen C4 > 0
and the assumption (C2) that κj log2 p = o(n). The inequalities
√ on the second line and third
P
C2 (log2 p+ κj log p)
(k) 2
for a sufficiently large
line are by the assumption (C4) that k∈lj |ρj | ≥
n
C2 > 0. The last equality is by Lemma 2.
This completes the proof of (9.7) and (9.8), which further yields to
[
[0]
[1]
P {(∪j∈A[0] EjA ,2 ) (∪j∈A[1] EjA ,2 )} = O(p−L ),
with the results from Theorem 1. Therefore we complete the proof of Theorem 2.
Appendix
S1. Proof of Lemma 1
Proof. Part (i) immediately follows from Lemma 2 (i) equation (25) in Cai and Liu (2011).
To prove part (ii), we need to bound the three terms on the right side of the following
inequality,
(k)
(k)
(k)
(k)
(k)
(k)
(k)
(k)
max |θ̂j − θj | ≤ max |θ̂j − θ̃j | + max |θ̃j − θ̌j | + max |θ̌j − θj |,
j,k
(k)
where θ̃j
(k)
:=
j,k
1
n
n
P
(k)
(k)
(Xij Yi
j,k
(k)
(k)
− ρ̃j )2 with ρ̃j =
i=1
(k)
1
n
(A1)
j,k
n
P
(k)
(k)
(k)
Xij Yi , and θ̌j
i=1
(k)
:=
1
n
n
P
(k)
(k)
(Xij Yi
−
i=1
ρj )2 . Note that E(θ̌j ) = θj .
By the marginal sub-Gaussian distribution assumption in assumption (C1), we have that
(k) (k)
(k)
(k)
(Xij Yi − ρj )2 has mean θj and finite Orlicz ψ1/2 -norm (see, e.g., Adamczak et al.
(2011)). Thus we can apply equation (3.6) of Adamczak et al. (2011), i.e.,
√ (k)
t2
(k)
P (max n|θ̌j − θj | > t) ≤ 2 exp(−c min[ , t1/2 ]),
j,k
n
√
with t = (Cθ /3) n log p for a large enough constant Cθ > 0 depending on M1 , η and M only
to obtain that,
r
log p
(k)
(k)
P (max |θ̌j − θj | > (Cθ /3)
) = O(p−M ).
(A2)
j,k
n
2
We have used the assumption log p = o(n1/3 ) in assumption (C2) to make sure tn ≤ t1/2 .
By applying equation (1) in supplement of Cai and Liu (2011), we obtain that,
r
log p
(k)
(k)
P (max |θ̃j − θ̂j | > (Cθ /3)
) = O(p−M ).
(A3)
j,k
n
In addition, by a similar truncation argument as that in the proof of Lemma 2 in Cai and
Liu (2011) and equation (7) therein, we obtain that by picking a large enough Cθ > 0,
r
log p
(k)
(k)
P (max |θ̃j − θ̌j | > (Cθ /3)
) = O(p−M ).
(A4)
j,k
n
We complete the proof by combining (A1)-(A4) with a union bound argument.
S2. Proof of Lemma 2
Proof. It is easy to check that E(Ȟ_j^(k)) = 0 and var(Ȟ_j^(k)) = 1. The marginal sub-Gaussian
distribution assumption in (C1) implies that Ȟ_j^(k) has finite Orlicz ψ_1-norm (i.e., a
sub-exponential distribution with finite constants). Therefore, (Ȟ_j^(k))^2 − 1 is a centered random
variable with finite Orlicz ψ_{1/2}-norm. Note that the Ȟ_j^(k) are independent for k ∈ [K]. The result
follows from equation (3.6) of Adamczak et al. (2011).
S3. Proof of Lemma 3
Proof. Note that Ȟ_j^(k) − Ĥ_j^(k) = (√n X̄_j^(k))(√n Ȳ^(k)) / √(n θ_j^(k)). By assumption (C1), we have that
E(√n X̄_j^(k)) = E(√n Ȳ^(k)) = 0, var(√n X̄_j^(k)) = var(√n Ȳ^(k)) = 1, and both √n X̄_j^(k) and √n Ȳ^(k) are
sub-Gaussian with bounded constants. Therefore, the first equation follows from a Bernstein
inequality (e.g., Definition 5.13 in Vershynin (2010)) applied to the centered sub-exponential
variable √n X̄_j^(k) · √n Ȳ^(k), noting θ_j^(k) ≥ τ_0 by assumption (C1). The second equation follows
from the first one, log^3 p = o(n), and a Bernstein inequality (e.g., Corollary 5.17 in Vershynin
(2010)) applied to the sum of centered sub-exponential variables Ȟ_j^(k).
References
Adamczak, R., Litvak, A. E., Pajor, A., and Tomczak-Jaegermann, N. (2011). Restricted
isometry property of matrices with independent columns and neighborly polytopes by
random sampling. Constructive Approximation, 34(1):61–88.
Argyriou, A., Evgeniou, T., and Pontil, M. (2007). Multi-task feature learning. In Advances
in neural information processing systems, pages 41–48.
Asano, Y., Kawase, T., Okabe, A., Tsutsumi, S., Ichikawa, H., Tatebe, S., Kitabayashi,
I., Tashiro, F., Namiki, H., Kondo, T., et al. (2016). Ier5 generates a novel hypophosphorylated active form of hsf1 and contributes to tumorigenesis. Scientific reports,
6.
Balasubramanian, K., Sriperumbudur, B., and Lebanon, G. (2013). Ultrahigh dimensional
feature screening via rkhs embeddings. In Artificial Intelligence and Statistics, pages 126–
134.
Brakebusch, C., Bouvard, D., Stanchi, F., Sakai, T., and Fassler, R. (2002). Integrins in
invasive growth. The Journal of clinical investigation, 109(8):999.
Bühlmann, P., Kalisch, M., and Maathuis, M. H. (2010). Variable selection in highdimensional linear models: partially faithful distributions and the pc-simple algorithm.
Biometrika, 97(2):261–278.
Cai, T. and Liu, W. (2011). Adaptive thresholding for sparse covariance matrix estimation.
Journal of the American Statistical Association, 106(494):672–684.
Cai, T. T. and Liu, W. (2016). Large-scale multiple testing of correlations. Journal of the
American Statistical Association, 111(513):229–240.
Chang, J., Tang, C. Y., and Wu, Y. (2013). Marginal empirical likelihood and sure independence feature screening. Annals of statistics, 41(4).
Chang, J., Tang, C. Y., and Wu, Y. (2016). Local independence feature screening for
nonparametric and semiparametric models by marginal empirical likelihood. Annals of
statistics, 44(2):515.
Curtis, C., Shah, S. P., Chin, S.-F., Turashvili, G., Rueda, O. M., Dunning, M. J., Speed, D.,
Lynch, A. G., Samarajiwa, S., Yuan, Y., et al. (2012). The genomic and transcriptomic
architecture of 2,000 breast tumours reveals novel subgroups. Nature, 486(7403):346–352.
Fan, J., Feng, Y., and Song, R. (2011). Nonparametric independence screening in sparse
ultra-high-dimensional additive models. Journal of the American Statistical Association,
106(494):544–557.
Fan, J. and Li, R. (2001). Variable selection via nonconcave penalized likelihood and its
oracle properties. Journal of the American statistical Association, 96(456):1348–1360.
Fan, J. and Lv, J. (2008). Sure independence screening for ultrahigh dimensional feature space. Journal of the Royal Statistical Society: Series B (Statistical Methodology),
70(5):849–911.
Fan, J. and Lv, J. (2010). A selective overview of variable selection in high dimensional
feature space. Statistica Sinica, 20(1):101.
Fan, J., Samworth, R., and Wu, Y. (2009). Ultrahigh dimensional feature selection: beyond
the linear model. Journal of Machine Learning Research, 10(Sep):2013–2038.
Fan, J., Song, R., et al. (2010). Sure independence screening in generalized linear models
with np-dimensionality. The Annals of Statistics, 38(6):3567–3604.
Genovese, C. R., Jin, J., Wasserman, L., and Yao, Z. (2012). A comparison of the lasso and
marginal regression. Journal of Machine Learning Research, 13(Jun):2107–2143.
Huang, J., Breheny, P., and Ma, S. (2012). A selective review of group selection in highdimensional models. Statistical science: a review journal of the Institute of Mathematical
Statistics, 27(4).
Itoh, M., Iwamoto, T., Matsuoka, J., Nogami, T., Motoki, T., Shien, T., Taira, N., Niikura, N., Hayashi, N., Ohtani, S., et al. (2014). Estrogen receptor (er) mrna expression
and molecular subtype distribution in er-negative/progesterone receptor-positive breast
cancers. Breast cancer research and treatment, 143(2):403–409.
Ji, S. and Ye, J. (2009). An accelerated gradient method for trace norm minimization.
In Proceedings of the 26th annual international conference on machine learning, pages
457–464. ACM.
Jiang, J., Li, C., Paul, D., Yang, C., Zhao, H., et al. (2016). On high-dimensional misspecified mixed model analysis in genome-wide association study. The Annals of Statistics,
44(5):2127–2160.
Li, R., Liu, J., and Lou, L. (2017). Variable selection via partial correlation. Statistica
Sinica, 27(3):983.
Li, R., Zhong, W., and Zhu, L. (2012). Feature screening via distance correlation learning.
Journal of the American Statistical Association, 107(499):1129–1139.
Liang, F., Song, Q., and Qiu, P. (2015). An equivalent measure of partial correlation coefficients for high-dimensional gaussian graphical models. Journal of the American Statistical
Association, 110(511):1248–1265.
Liu, Y.-R., Jiang, Y.-Z., Xu, X.-E., Hu, X., Yu, K.-D., and Shao, Z.-M. (2016). Comprehensive transcriptome profiling reveals multigene signatures in triple-negative breast cancer.
Clinical cancer research, 22(7):1653–1662.
Luo, S., Song, R., and Witten, D. (2014). Sure screening for gaussian graphical models.
arXiv preprint arXiv:1407.7819.
Ma, S., Li, R., and Tsai, C.-L. (2017). Variable screening via quantile partial correlation.
Journal of the American Statistical Association, pages 1–14.
Meier, L., Van De Geer, S., and Bühlmann, P. (2008). The group lasso for logistic regression.
Journal of the Royal Statistical Society: Series B (Statistical Methodology), 70(1):53–71.
Nardi, Y., Rinaldo, A., et al. (2008). On the asymptotic properties of the group lasso
estimator for linear models. Electronic Journal of Statistics, 2:605–633.
Rajalingam, K., Schreck, R., Rapp, U. R., and Albert, S. (2007). Ras oncogenes and
their downstream targets. Biochimica et Biophysica Acta (BBA)-Molecular Cell Research,
1773(8):1177–1195.
Shao, Q.-M. (1999). A cramér type large deviation result for student’s t-statistic. Journal
of Theoretical Probability, 12(2):385–398.
Tibshirani, R. (1996). Regression shrinkage and selection via the lasso. Journal of the Royal
Statistical Society. Series B (Methodological), pages 267–288.
Tseng, G. C., Ghosh, D., and Feingold, E. (2012). Comprehensive literature review and
statistical considerations for microarray meta-analysis. Nucleic acids research, 40(9):3785–
3799.
Vershynin, R. (2010). Introduction to the non-asymptotic analysis of random matrices. arXiv
preprint arXiv:1011.3027.
Yadav, B. S., Chanana, P., and Jhamb, S. (2015). Biomarkers in triple negative breast
cancer: A review. World journal of clinical oncology, 6(6):252.
Yuan, M. and Lin, Y. (2006). Model selection and estimation in regression with grouped
variables. Journal of the Royal Statistical Society: Series B (Statistical Methodology),
68(1):49–67.
Zhou, J., Liu, J., Narayan, V. A., and Ye, J. (2012). Modeling disease progression via fused
sparse group lasso. In Proceedings of the 18th ACM SIGKDD international conference on
Knowledge discovery and data mining, pages 1095–1103. ACM.
Zhu, L.-P., Li, L., Li, R., and Zhu, L.-X. (2011). Model-free feature screening for ultrahighdimensional data. Journal of the American Statistical Association, 106(496):1464–1475.
Zou, H. (2006). The adaptive lasso and its oracle properties. Journal of the American
statistical association, 101(476):1418–1429.
Zou, H. and Hastie, T. (2005). Regularization and variable selection via the elastic net.
Journal of the Royal Statistical Society: Series B (Statistical Methodology), 67(2):301–
320.
| 10 |
ON HIERARCHICAL HYPERBOLICITY OF CUBICAL GROUPS
arXiv:1609.01313v2 [] 23 Jan 2018
MARK F HAGEN AND TIM SUSSE
Abstract. Let X be a proper CAT(0) cube complex admitting a proper cocompact action
by a group G. We give three conditions on the action, any one of which ensures that X has a
factor system in the sense of [BHS14]. We also prove that one of these conditions is necessary.
This combines with [BHS14] to show that G is a hierarchically hyperbolic group; this partially
answers questions raised in [BHS14, BHS15]. Under any of these conditions, our results also
affirm a conjecture of Behrstock-Hagen on boundaries of cube complexes, which implies that
X cannot contain a convex staircase. The necessary conditions on the action are all strictly
weaker than virtual cospecialness, and we are not aware of a cocompactly cubulated group that
does not satisfy at least one of the conditions.
Introduction
Much work in geometric group theory revolves around generalizations of Gromov hyperbolicity: relatively hyperbolic groups, weakly hyperbolic groups, acylindrically hyperbolic groups,
coarse median spaces, semihyperbolicity, lacunary hyperbolicity, etc. Much attention has been
paid to groups acting properly and cocompactly on CAT(0) cube complexes, which also have
features reminiscent of hyperbolicity. Such complexes give a combinatorially and geometrically
rich framework to build on, and many groups have been shown to admit such actions (for a
small sample, see [Sag95, Wis04, OW11, BW12, HW15]).
Many results follow from studying the geometry of CAT(0) cube complexes, often using strong
properties reminiscent of negative curvature. For instance, several authors have studied the
structure of quasiflats and Euclidean sectors in cube complexes, with applications to rigidity
properties of right-angled Artin groups [Xie05, BKS08, Hua14]. These spaces have also been
shown to be median [Che00] and to have only semi-simple isometries [Hag07]. Further, under
reasonable assumptions, a CAT(0) cube complex X either splits as a nontrivial product or
Isom(X ) must contain a rank-one element [CS11]. Once a given group is known to act properly
and cocompactly on a CAT(0) cube complex the geometry of the cube complex controls the
geometry and algebra of the group. For instance, such a group is biautomatic and cannot have
Kazhdan’s property (T) [NR98, NR97], and it must satisfy a Tits alternative [SW05].
Here, we examine cube complexes admitting proper, cocompact group actions from the point
of view of certain convex subcomplexes. Specifically, given a CAT(0) cube complex X , we study
the following set F of convex subcomplexes: F is the smallest set of subcomplexes that contains
X , contains each combinatorial hyperplane, and is closed under cubical closest-point projection,
i.e. if A, B ∈ F, then gB (A) ∈ F, where gB : X → B is the cubical closest point projection.
Main results. The collection F of subcomplexes is of interest for several reasons. It was
first considered in [BHS14], in the context of finding hierarchically hyperbolic structures on X .
Specifically, in [BHS14], it is shown that if there exists N < ∞ so that each point of X is
contained in at most N elements of F, then X is a hierarchically hyperbolic space, which has
Date: January 24, 2018.
Hagen was supported by the Engineering and Physical Sciences Research Council grant of Henry Wilton.
Susse was partially supported by National Science Foundation grant DMS-1313559.
numerous useful consequences outlined below; the same finite multiplicity property of F has
other useful consequences outlined below. When this finite multiplicity condition holds, we say,
following [BHS14], that F is a factor system for X .
We believe that if X is proper and some group G acts properly and cocompactly by isometries
on X , then the above finite multiplicity property holds, and thus G is a hierarchically hyperbolic
group. In [BHS14], it is shown that this holds when G has a finite-index subgroup acting
cospecially on X , and it is also verified in a few non-cospecial examples.
This conjecture has proved surprisingly resistant to attack; we earlier believed we had a proof.
However, a subtlety in Proposition 5.1 means that at present our techniques only give a complete
proof under various conditions on the G–action, namely:
Theorem A. Let G act properly and cocompactly on the proper CAT(0) cube complex X . Then
F is a factor system for X provided any one of the following conditions is satisfied (up to
passing to a finite-index subgroup of G):
• the action of G on X is rotational;
• the action of G on X satisfies the weak height condition for hyperplanes;
• the action of G on X satisfies the essential index condition and the Noetherian intersection of conjugates condition (NICC) on hyperplane-stabilisers.
Hence, under any of the above conditions, X is a hierarchically hyperbolic space and G a hierarchically hyperbolic group.
Conversely, if F is a factor system, then the G–action satisfies the essential index condition
and the NICC.
The auxiliary conditions are as follows. The action of G is rotational if, whenever A, B are
hyperplanes of X , and g ∈ StabG (B) has the property that A and gA cross or osculate, then A
lies at distance at most 1 from B. This condition is prima facie weaker than requiring that the
action of G on X be cospecial, so Theorem A generalises the results in [BHS14]. (In fact, the
condition above is slightly stronger than needed; compare Definition 4.1.)
A subgroup K ≤ G satisfies the weak T
finite height condition if the following holds. Let
{gi }i∈I ⊂ G be an infinite set so that K ∩ i∈J K gi is infinite for all finite J ⊂ I. Then there
exist distinct gi , gj so that K ∩ K gi = K ∩ K gj . The action of G on X satisfies the weak height
condition for hyperplanes if each hyperplane stabiliser satisfies the weak height condition.
This holds, for example, when each hyperplane stabiliser has finite height in the sense
of [GMRS98]. Hence Theorem A implies that F is a factor system when X is hyperbolic,
without invoking virtual specialness [Ago13] because quasiconvex subgroups (in particular hyperplane stabilisers) have finite height [GMRS98]; the existence of a hierarchically hyperbolic
structure relative to F also follows from recent results of Spriano in the hyperbolic case [Spr17].
Also, if F is a factor system and X does not decompose as a product of unbounded CAT(0) cube
complexes, then results of [BHS14] imply that G is acylindrically hyperbolic. On the other hand,
recent work of Genevois [Gen16] uses finite height of hyperplane-stabilisers to verify acylindrical
hyperbolicity for certain groups acting on CAT(0) cube complexes. In our opinion, this provides
some justification for the naturality of the weak height condition for hyperplanes.
The NIC condition for hyperplanes asks the following for each hyperplane-stabiliser K. Given
any {g_i}_{i≥0} so that K_n = K ∩ (⋂_{i=0}^{n} K^{g_i}) is infinite for all n, there exists ℓ so that K_n and K_ℓ
are commensurable for n ≥ ℓ. Note that ℓ is allowed to depend on {g_i}_{i≥0}. The accompanying
essential index condition asks that there exists a constant ζ so that for any F ∈ F, the stabiliser
of F has index at most ζ in the stabiliser of the essential core of F, defined in [CS11]. These
conditions are somewhat less natural than the preceding conditions, but they follow fairly easily
from the finite multiplicity of F.
We prove Theorem A in Section 6. There is a unified argument under the weak finite height
and NICC hypotheses, and a somewhat simpler argument in the presence of a rotational action.
To prove Theorem A, the main issue is to derive a contradiction from the existence of an
infinite strictly ascending chain {Fi }, in F, using that the corresponding chain of orthogonal
complements must strictly descend. The existence of such chains can be deduced from the
failure of the finite multiplicity of F using only the proper cocompact group action; it is in
deriving a contradiction from the existence of such chains that the other conditions arise.
Any condition that allows one to conclude that the Fi have bounded-diameter fundamental
domains for the actions of their stabilisers yields the desired conclusion. So, there are most
likely other versions of Theorem A using different auxiliary hypotheses. We are not aware of a
cocompactly cubulated group which is not covered by Theorem A.
Hierarchical hyperbolicity. Hierarchically hyperbolic spaces/groups (HHS/G’s), introduced
in [BHS14, BHS15], were proposed as a common framework for studying mapping class groups
and (certain) cubical groups. Knowledge that a group is hierarchically hyperbolic has strong
consequences that imply many of the nice properties of mapping class groups.
Theorem A and results of [BHS14] (see Remark 13.2 of that paper) together answer Question
8.13 of [BHS14] and part of Question A of [BHS15] — which ask whether a proper cocompact
CAT(0) cube complex has a factor system — under any of the three auxiliary hypotheses
in Theorem A. Hence our results expand the class of cubical groups that are known to be
hierarchically hyperbolic. Some consequences of this are as follows, where X is a CAT(0) cube
complex on which G acts geometrically, satisfying any of the hypotheses in Theorem A:
• In combination with [BHS14, Corollary 14.5], Theorem A shows that G acts acylindrically on the contact graph of X , i.e. the intersection graph of the hyperplane carriers,
which is a quasi-tree [Hag14].
• Theorem A combines with Theorem 9.1 of [BHS14] to provide a Masur–Minsky style distance estimate in G: up to quasi-isometry, the distance in X from x to gx, where g ∈ G, is given by summing the distances between the projections of x, gx to a collection of uniform quasi-trees associated to the elements of the factor system (see the schematic formula displayed after this list).
• Theorem A combines with Corollary 9.24 of [DHS16] to prove that either G stabilizes
a convex subcomplex of X splitting as the product of unbounded subcomplexes, or G
contains an element acting loxodromically on the contact graph of X . This is a new
proof of a special case of the Caprace-Sageev rank-rigidity theorem [CS11].
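Schematically, the distance estimate in the second bullet above takes the familiar Masur–Minsky form; here π_F denotes the projection to the quasi-tree associated to F, [t]_s denotes t when t ≥ s and 0 otherwise, and the threshold s and the constants implicit in ≍ are those furnished by [BHS14, Theorem 9.1] (the notation in this display is ours, chosen only to indicate the shape of the estimate):
\[
\mathrm{d}_{\mathcal{X}}(x, gx) \;\asymp\; \sum_{F} \Bigl[\, \mathrm{d}_{F}\bigl(\pi_F(x), \pi_F(gx)\bigr) \,\Bigr]_{s},
\]
where the sum ranges over the elements of the factor system, taken up to parallelism.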
Proposition 11.4 of [BHS14] combines with Theorem A to prove:
Theorem B. Let G act properly and cocompactly on the proper CAT(0) cube complex X , with
the action satisfying the hypotheses of Theorem A. Let F be the factor system, and suppose that
for all subcomplexes A ∈ F and g ∈ G, the subcomplex gA is not parallel to a subcomplex in F
which is in the orthogonal complement of A. Then X quasi-isometrically embeds in the product
of finitely many trees.
The set F is shown in Section 2 to have a graded structure: the lowest-grade elements are
combinatorial hyperplanes, then we add projections of combinatorial hyperplanes to combinatorial hyperplanes, etc. This allows for several arguments to proceed by induction on the grade.
Essentially by definition, a combinatorial hyperplane H is the orthogonal complement of a 1–
cube e, i.e. a maximal convex subcomplex H for which X contains the product e × H as a
subcomplex. We show, in Theorem 3.3, that F is precisely the set of convex subcomplexes F
such that there exists a compact, convex subcomplex C so that the orthogonal complement of
C is F . This observation plays an important role.
Relatedly, we give conditions in Proposition 5.1 ensuring that F is closed under the operation of taking orthogonal complements. As well as being used in the proof of Theorem A,
this is needed for applications of recent results about hierarchically hyperbolic spaces to cube
complexes. Specifically, in [ABD17], Abbott-Behrstock-Durham introduce hierarchically hyperbolic spaces with clean containers, and work under that (quite natural) hypothesis. Among
its applications, they produce largest, universal acylindrical actions on hyperbolic spaces for
hierarchically hyperbolic groups. We will not give the definition of clean containers for general
hierarchically hyperbolic structures, but for the CAT(0) cubical case, our results imply that it
holds for hierarchically hyperbolic structures on X obtained using F, as follows:
Theorem C (Clean containers). Let X be a proper CAT(0) cube complex on which the group
G acts properly and cocompactly, and suppose F is a factor system. Let F ∈ F, and let V ∈ F
be properly contained in F . Then there exists U ∈ F, unique up to parallelism, such that:
• U ⊂ F;
• V ↪ F extends to a convex embedding V × U ↪ F;
• if W ∈ F, and the above two conditions hold with U replaced by W , then W is parallel
to a subcomplex of U .
Proof. Let x ∈ V be a 0–cube and let U′ = V^⊥, the orthogonal complement of V at x (see Definition 1.10). Proposition 5.1 implies that U′ ∈ F, so U = U′ ∩ F is also in F, since F is closed under projections. By the definition of the orthogonal complement, V → X extends to a convex embedding V × U′ → X, and (V × U′) ∩ F = V × U since V ⊂ F and F, V × U′ are convex. Now, if W ∈ F and W ⊂ F, and V → X extends to a convex embedding V × W → X, then V × W is necessarily contained in F, by convexity. On the other hand, by the definition of the orthogonal complement, W is parallel to a subcomplex of U′. Hence W is parallel to a subcomplex of U. This implies the third assertion and uniqueness of U up to parallelism.
We now turn to applications of Theorem A that do not involve hierarchical hyperbolicity.
Simplicial boundary and staircases. Theorem A also gives insight into the structure of the
boundary of X . We first mention an aggravating geometric/combinatorial question about cube
complexes which is partly answered by our results.
A staircase is a CAT(0) cube complex Z defined as follows. First, a ray-strip is a square
complex of the form S_n = [n, ∞) × [−1/2, 1/2], with the product cell-structure where [n, ∞) has 0–skeleton {m ∈ Z : m ≥ n} and [−1/2, 1/2] is a 1–cube. To build Z, choose an increasing sequence (a_n)_n of integers, collect the ray-strips S_{a_n} ≅ [a_n, ∞) × [−1/2, 1/2], and identify [a_{n+1}, ∞) × {−1/2} ⊂ S_{a_{n+1}} with [a_{n+1}, ∞) × {1/2} ⊂ S_{a_n} for each n. The model staircase is the cubical neighbourhood
of a Euclidean sector in the standard tiling of E2 by squares, with one bounding ray in the
x–axis, but for certain (an )n , Z may not contain a nontrivial Euclidean sector. (One can define
a d-dimensional staircase analogously for d ≥ 2.) We will see below that the set of “horizontal”
hyperplanes in Z – see Figure 1 for the meaning of “horizontal” – is interesting because there is
no geodesic ray in Z crossing exactly the set of horizontal hyperplanes.
Figure 1. Part of a staircase.
Now let X be a proper CAT(0) cube complex with a group G acting properly and cocompactly.
Can there be a convex staircase subcomplex in X ? A positive answer seems very implausible,
but this question is open and has bothered numerous researchers.
In Section 7, we prove that if F is a factor system, then X cannot contain a convex staircase.
Hence, if X admits a geometric group action satisfying any of the hypotheses in Theorem A,
then X cannot contain a convex staircase. In fact, we prove something more general, which is best formulated in terms of the simplicial boundary ∂_△X.
Specifically, the simplicial boundary ∂_△X of a CAT(0) cube complex X was defined in [Hag13]. Simplices of ∂_△X come from equivalence classes of infinite sets H of hyperplanes such that:
• if H, H′ ∈ H are separated by a hyperplane V, then V ∈ H;
• if H1 , H2 , H3 ∈ H are disjoint, then one of H1 , H2 , H3 separates the other two;
• for H ∈ H, at most one halfspace associated to H contains infinitely many V ∈ H.
These boundary sets are partially ordered by coarse inclusion (i.e., A ⪯ B if all but finitely many hyperplanes of A are contained in B), and two are equivalent if they have finite symmetric difference; ∂_△X is the simplicial realization of this partial order. The motivating example of a simplex of ∂_△X is: given a geodesic ray γ of X, the set of hyperplanes crossing γ has the preceding properties. Not all simplices are realized by a geodesic ray in this way: a simplex of ∂_△X is called visible if it is. For example, if Z is a staircase, then ∂_△Z has an invisible 0–simplex, represented by the set of horizontal hyperplanes.
Conjecture 2.8 of [BH16] holds that every simplex of ∂_△X is visible when X admits a proper cocompact group action; Theorem A hence proves a special case. Slightly more generally:
Corollary D. Let X be a proper CAT(0) cube complex which admits a proper and cocompact group action satisfying the NICC for hyperplanes. Then every simplex of ∂_△X is visible. Moreover, let v ∈ ∂_△X be a 0–simplex. Then there exists a CAT(0) geodesic ray γ such that the set of hyperplanes crossing γ represents v.
The above could, in principle, hold even if F is not a factor system, since we have not imposed
the essential index condition. The “moreover” part follows from the first part and [Hag13,
Lemma 3.32]. Corollary D combines with [BH16, Theorem 5.13] to imply that ∂_△X detects
thickness of order 1 and quadratic divergence for G, under the NICC condition. Corollary D
also implies the corollary about staircases at the beginning of this paper. More generally, we
obtain the following from Corollary D and a simple argument in [Hag13]:
Corollary E. Let γ be a CAT(0)-metric or combinatorial geodesic ray in X , where X is as in
Corollary D and the set of hyperplanes crossing γ represents a d-dimensional simplex of ∂_△X. Then there exists a combinatorially isometrically embedded (d+1)-dimensional orthant subcomplex
O ⊆ Hull(γ). Moreover, γ lies in a finite neighbourhood of O.
(A k-dimensional orthant subcomplex is a CAT(0) cube complex isomorphic to the product of
k copies of the standard tiling of [0, ∞) by 1–cubes, and the convex hull Hull(A) of a subspace
A ⊆ X is the smallest convex subcomplex containing A.)
Corollary E is related to Lemma 4.9 of [Hua14] and to statements in [Xie05, BKS08] about
Euclidean sectors in cocompact CAT(0) cube complexes and arcs in the Tits boundary. In
particular it shows that in any CAT(0) cube complex with a proper cocompact group action
satisfying NICC, nontrivial geodesic arcs on the Tits boundary extend to arcs of length π/2.
Further questions and approaches. We believe that any proper cocompact CAT(0) cube
complex admits a factor system, but that some additional ingredient is needed to remove the
auxiliary hypotheses in Theorem A; we hope that the applications we have outlined stimulate
interest in finding this additional idea. Since the property of admitting a factor system is
inherited by convex subcomplexes [BHS14], we suggest trying to use G–cocompactness of X to
arrange for a convex (non-G–equivariant) embedding of X into a CAT(0) cube complex Y where
a factor system can be more easily shown to exist. One slightly outrageous possibility is:
Question 1. Let X be a CAT(0) cube complex which admits a proper and cocompact group
action. Does X embed as a convex subcomplex of the universal cover of the Salvetti complex
of some right-angled Artin group?
However, there are other possibilities, for example trying to embed X convexly in a CAT(0)
cube complex whose automorphism group is sufficiently tame to enable one to use the proof of
Theorem A, or some variant of it.
Plan of the paper. In Section 1, we discuss the necessary background. Section 2 contains
basic facts about F, and Section 3 relates F to orthogonal complements. Section 4 introduces
the auxiliary hypotheses for Theorem A, which we prove in Section 6. The applications to the
simplicial boundary are discussed in Section 7.
Acknowledgements. MFH thanks: Jason Behrstock and Alessandro Sisto for discussions of
factor systems during our work on [BHS14]; Nir Lazarovich and Dani Wise for discussions on
Question 1; Talia Fernós, Dan Guralnik, Alessandra Iozzi, Yulan Qing, and Michah Sageev for
discussions about staircases. We thank Richard Webb and Henry Wilton for helping to organize
Beyond Hyperbolicity (Cambridge, June 2016), at which much of the work on this paper was
done. Both of the authors thank Franklin for his sleepy vigilance and Nir Lazarovich for a
comment on the proof of Lemma 2.9. We are greatly indebted to Bruno Robbio and Federico
Berlai for a discussion which led us to discover a gap in an earlier version of this paper, and to
Elia Fioravanti for independently drawing our attention to the same issue. We also thank an
anonymous referee for helpful comments on an earlier version.
1. Background
1.1. Basics on CAT(0) cube complexes. Recall that a CAT(0) cube complex X is a simply connected cube complex in which the link of every vertex is a simplicial flag complex (see
e.g. [BH99, Chapter II.5], [Sag14, Wis, Che00] for precise definitions and background). In this
paper, X always denotes a CAT(0) cube complex. Our choices of language and notation for
describing convexity, hyperplanes, gates, etc. follow the account given in [BHS14, Section 2].
Definition 1.1 (Hyperplane, carrier, combinatorial hyperplane). A midcube in the unit cube
c = [−1/2, 1/2]^n is a subspace obtained by restricting exactly one coordinate to 0. A hyperplane in X is a connected subspace H with the property that, for all cubes c of X, either H ∩ c = ∅ or H ∩ c consists of a single midcube of c. The carrier N(H) of the hyperplane H is the union of all closed cubes c of X with H ∩ c ≠ ∅. The inclusion H → X extends to a combinatorial embedding H × [−1/2, 1/2] ≅ N(H) ↪ X identifying H × {0} with H. Now, H is isomorphic to a CAT(0) cube complex whose cubes are the midcubes of the cubes in N(H). The subcomplexes H^± of N(H) which are the images of H × {±1/2} under the above map are isomorphic as cube complexes to H, and are combinatorial hyperplanes in X. Thus each hyperplane of X is associated to two combinatorial hyperplanes lying in N(H).
Remark. The distinction between hyperplanes (which are not subcomplexes) and combinatorial hyperplanes (which are) is important. Given A ⊂ X , either a convex subcomplex or a
hyperplane, and a hyperplane H, we sometimes say H crosses A to mean that H ∩ A ≠ ∅.
Observe that the set of hyperplanes crossing a hyperplane H is precisely the set of hyperplanes
crossing the associated combinatorial hyperplanes.
Definition 1.2 (Convex subcomplex). A subcomplex Y ⊆ X is convex if Y is full — i.e. every
cube c of X whose 0–skeleton lies in Y satisfies c ⊆ Y — and Y (1) , endowed with the obvious
path-metric, is metrically convex in X (1) .
There are various characterizations of cubical convexity. Cubical convexity coincides with
CAT(0)–metric convexity for subcomplexes [Hag07], but not for arbitrary subspaces.
Definition 1.3 (Convex Hull). Given a subset A ⊂ X , we denote by Hull(A) its convex hull,
i.e. the intersection of all convex subcomplexes containing A.
If Y ⊆ X is a convex subcomplex, then Y is a CAT(0) cube complex whose hyperplanes have
the form H ∩ Y, where H is a hyperplane of X , and two hyperplanes H ∩ Y, H 0 ∩ Y intersect if
and only if H, H 0 intersect.
Recall from [Che00] that the graph X (1) , endowed with the obvious path metric dX in which
edges have length 1, is a median graph (and in fact being a median graph characterizes 1–skeleta
of CAT(0) cube complexes among graphs): given 0–cubes x, y, z, there exists a unique 0–cube
m = m(x, y, z), called the median of x, y, z, so that Hull(x, y) ∩ Hull(y, z) ∩ Hull(x, z) = {m}.
Let Y ⊆ X be a convex subcomplex. Given a 0–cube x ∈ X , there is a unique 0–cube y ∈ Y
so that dX (x, y) is minimal among all 0–cubes in Y. Indeed, if y 0 ∈ Y, then the median m of
x, y, y 0 lies in Y, by convexity of Y, but dX (x, y 0 ) = dX (x, m) + dX (m, y 0 ), and the same is true
for y. Thus, if dX (x, y 0 ) and dX (x, y) realize the distance from x to Y (0) , we have m = y = y 0 .
Definition 1.4 (Gate map on 0–skeleton). For a convex subcomplex Y ⊆ X, the gate map to Y is the map g_Y : X^(0) → Y^(0) so that for all v ∈ X^(0), g_Y(v) is the unique 0–cube of Y lying closest to v.
Lemma 1.5. Let Y ⊆ X be a convex subcomplex. Then the map g_Y from Definition 1.4 extends to a unique cubical map g_Y : X → Y so that the following holds: for any d–cube c of X with vertices x_0, . . . , x_{2^d−1} ∈ X^(0), the map g_Y collapses c to the unique k–cube c′ in Y with 0–cells g_Y(x_0), . . . , g_Y(x_{2^d−1}), in the natural way, respecting the cubical structure.
Furthermore, for any convex subcomplex Y, Y 0 ⊆ X , the hyperplanes crossing gY (Y 0 ) are
precisely the hyperplanes which cross both Y and Y 0 .
Proof. The first part is proved in [BHS14, p. 1743]: observe that the integer k is the number
of hyperplanes that intersect both c and Y. The hyperplanes that intersect c0 are precisely the
hyperplanes which intersect both c and Y. Indeed, the Helly property ensures that there are
cubes crossing exactly this set of hyperplanes, while convexity of Y shows that at least one such
cube lies in Y; the requirement that it contain gY (xi ) then uniquely determines c0 .
To prove the second statement, let H be a hyperplane crossing Y and Y 0 . Then H separates
0–cubes y1 , y2 ∈ Y 0 , and thus separates their gates in Y, since, because it crosses Y, it cannot
separate y1 or y2 from Y. On the other hand, if H crosses gY (Y 0 ), then it separates gY (y1 ), gY (y2 )
for some y1 , y2 ∈ Y 0 . Since it cannot separate y1 or y2 from Y, the hyperplane H must separate
y1 from y2 and thus cross Y. (Here we have used the standard fact that H separates yi from
gY (yi ) if and only if H separates yi from Y; see e.g. [BHS14, p. 1743].) Hence H crosses gY (Y 0 )
if and only if H crosses Y, Y 0 .
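In symbols, the final assertion of Lemma 1.5 says that, for convex subcomplexes Y, Y′ ⊆ X (writing W(Z) for the set of hyperplanes crossing a subcomplex Z, a shorthand introduced here only for this display and the next one):
\[
\mathcal{W}\bigl(g_{Y}(Y')\bigr) \;=\; \mathcal{W}(Y) \cap \mathcal{W}(Y').
\]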
The next definition formalizes the relationship between gY (Y 0 ), gY 0 (Y) in the above lemma.
Definition 1.6 (Parallel). The convex subcomplexes F and F′ are parallel, written F ∥ F′, if for each hyperplane H of X, we have H ∩ F ≠ ∅ if and only if H ∩ F′ ≠ ∅. The subcomplex F is parallel into F′ if F is parallel to a subcomplex of F′, i.e. every hyperplane intersecting F intersects F′. We denote this by F ↪_∥ F′. Any two 0–cubes are parallel subcomplexes.
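With the shorthand W(·) from the display after Lemma 1.5, Definition 1.6 can be restated as:
\[
F \parallel F' \iff \mathcal{W}(F) = \mathcal{W}(F'),
\qquad
F \hookrightarrow_{\parallel} F' \iff \mathcal{W}(F) \subseteq \mathcal{W}(F').
\]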
The following is proved in [BHS14, Section 2] and illustrated in Figure 2:
Lemma 1.7. Let F, F′ be parallel subcomplexes of the CAT(0) cube complex X. Then Hull(F ∪ F′) ≅ F × A, where A is the convex hull of a shortest combinatorial geodesic with endpoints on F and F′. The hyperplanes intersecting A are those separating F, F′. Moreover, if D, E ⊂ X are convex subcomplexes, then g_E(D) ⊂ E is parallel to g_D(E) ⊂ D.
The next lemma will be useful in Section 2.
Lemma 1.8. For convex subcomplexes C, D, E, we have g_{g_C(D)}(E) ∥ g_C(g_D(E)) ∥ g_C(g_E(D)).
Figure 2. Here, D, E are convex subcomplexes. The gates g_D(E), g_E(D) are parallel, and are joined by a product region, shown as a cylinder. Each hyperplane crossing Hull(g_D(E) ∪ g_E(D)) either separates g_D(E), g_E(D) (e.g. the hyperplane V) or crosses both of g_D(E), g_E(D) (e.g. the hyperplane H).
Proof. Let F = g_C(D). Let H be a hyperplane so that H ∩ g_F(E) ≠ ∅. Then H ∩ E, H ∩ F ≠ ∅ and thus H ∩ C, H ∩ D ≠ ∅, by Lemma 1.5. Thus g_F(E) is parallel into g_C(g_D(E)) and g_C(g_E(D)). However, the hyperplanes crossing either of these are precisely the hyperplanes crossing all of C, D, E. Thus, they cross F and E, and thus cross g_F(E) by Lemma 1.5.
The next lemma will be used heavily in Section 6, and gives a group theoretic description of
the stabilizer of a projection.
Lemma 1.9. Let X be a proper CAT(0) cube complex on which G acts properly and cocompactly. Let H, H′ be two convex subcomplexes in X. Then Stab_G(g_H(H′)) is commensurable with Stab_G(H) ∩ Stab_G(H′). Further, for any finite collection H_1, . . . , H_n of convex subcomplexes whose stabilisers act cocompactly, Stab_G(g_{H_1}(g_{H_2}(··· g_{H_{n−1}}(H_n) ···))) is commensurable with ⋂_{i=1}^{n} Stab_G(H_i).
Proof. Let H and H′ be two convex subcomplexes and suppose that g ∈ Stab_G(H) ∩ Stab_G(H′). Then g ∈ Stab_G(g_H(H′)), and thus Stab_G(H) ∩ Stab_G(H′) ≤ Stab_G(g_H(H′)).
Let d = d(H, H′). In particular, for any 0–cube in g_H(H′), its distance to H′ is exactly d. However, there are only finitely many such translates of H′ in X, and any element of Stab_G(g_H(H′)) must permute these. Further, there are only finitely many translates of H in X that contain g_H(H′), and any element of the stabilizer must also permute those. Thus, there is a finite index subgroup (obtained as the kernel of the permutation action on these two finite sets of translates) that stabilizes both H and H′. A similar argument covers the case of finitely many complexes.
Definition 1.10 (Orthogonal complement). Let A ⊆ X be a convex subcomplex. Let P_A be the convex hull of the union of all parallel copies of A, so that P_A ≅ A × A^⊥, where A^⊥ is a CAT(0) cube complex that we call the abstract orthogonal complement of A in X. Let φ_A : A × A^⊥ → X be the cubical isometric embedding with image P_A.
For any a ∈ A^(0), the convex subcomplex φ_A({a} × A^⊥) is the orthogonal complement of A at a. See Figures 3 and 4.
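In symbols, Definition 1.10 introduces the following objects (the basepoint subscript in the last expression is our shorthand for this display only):
\[
P_A \;\cong\; A \times A^{\perp},
\qquad
\phi_A \colon A \times A^{\perp} \hookrightarrow \mathcal{X} \ \text{ with image } P_A,
\qquad
A^{\perp}_{a} \;:=\; \phi_A\bigl(\{a\} \times A^{\perp}\bigr), \quad a \in A^{(0)}.
\]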
Lemma 1.11. Let A ⊆ X be a convex subcomplex. For any a ∈ A, a hyperplane H intersects φ_A({a} × A^⊥) if and only if H is disjoint from every parallel copy of A but intersects each hyperplane V with V ∩ A ≠ ∅. Hence φ_A({a} × A^⊥), φ_A({b} × A^⊥) are parallel for all a, b ∈ A^(0).
Proof. This follows from the definition of PA : the hyperplanes crossing PA are partitioned into
two classes, those intersecting A (and its parallel copies) and those disjoint from A (and any of
its parallel copies). By definition, φA ({a} × A⊥ ) is the convex hull of the set of 0–cubes of PA
that are separated from a only by hyperplanes of the latter type. The product structure ensures
that any hyperplane of the first type crosses every hyperplane of the second type.
Figure 3. Combinatorial hyperplanes are orthogonal complements of 1–cubes.
Figure 4. Orthogonal complements of 1–cubes e_1, e_2 and 2–cube s are shown. Note that (e_1 ∪ e_2 ∪ s)^⊥ ∥ g_{e_2^⊥}(g_{e_1^⊥}(s^⊥)).
Finally, in [CS11], Caprace and Sageev defined the notion of an essential hyperplane and an
essential action. We record the necessary facts here.
Definition 1.12. Let X be a CAT(0) cube complex, and let F ⊆ X be a convex subcomplex.
Let G ≤ Aut(X ) preserve F .
(1) We say that a hyperplane H is essential in F if H crosses F , and each halfspace associated to H contains 0–cubes of F which are arbitrarily far from H.
(2) We say that H is G–essential in F if for any 0–cube x ∈ F, each halfspace associated
to H contains elements of Gx arbitrarily far from H.
(3) We say that G acts essentially on F if every hyperplane crossing F is G–essential in F .
If G acts cocompactly on F , then a hyperplane is G–essential if and only if it is essential in F .
Proposition 1.13. Let X be a proper CAT(0) cube complex admitting a proper cocompact action
by a group Γ, let F ⊆ X be a convex subcomplex, and let G ≤ Γ act on F cocompactly. Then:
(i) there exists a G–invariant convex subcomplex F̂_G, called the G–essential core of F, crossed by every essential hyperplane in F, on which G acts essentially and cocompactly;
(ii) F̂_G is unbounded if and only if F is unbounded;
(iii) if G′ ≤ Aut(X) also acts on F cocompactly, then F̂_{G′} is parallel to F̂_G;
(iv) if G′ ≤ G is a finite-index subgroup, we can take F̂_{G′} = F̂_G;
(v) the subcomplex F̂_G is finite Hausdorff distance from F.
Proof. By [CS11, Proposition 3.5], F contains a G–invariant convex subcomplex F̂_G on which G acts essentially and cocompactly (in particular, F̂_G is unbounded if and only if F is, and d_Haus(F, F̂_G) < ∞). The hyperplanes of F crossing F̂_G are precisely the G–essential hyperplanes. Observe that if H is a hyperplane crossing F essentially, then cocompactness of the G–action on F implies that H is G–essential and thus crosses F̂_G. It follows that if G, G′ both act on F cocompactly, then a hyperplane crossing F is G–essential if and only if it is G′–essential, so F̂_G, F̂_{G′} cross the same hyperplanes, i.e. they are parallel. If G′ ≤ G, then F̂_G is G′–invariant, and if [G : G′] < ∞, then G′ also acts cocompactly on F, so we can take F̂_G = F̂_{G′}.
1.2. Hyperclosure and factor systems.
Definition 1.14 (Factor system, hyperclosure). The hyperclosure of X is the intersection F of
all sets F0 of convex subcomplexes of X that satisfy the following three properties:
(1) X ∈ F0 , and for all combinatorial hyperplanes H of X , we have H ∈ F0 ;
(2) if F, F 0 ∈ F0 , then gF (F 0 ) ∈ F0 ;
(3) if F ∈ F0 and F 0 is parallel to F , then F 0 ∈ F0 .
Note that F is Aut(X )–invariant. If there exists ξ such that for all x ∈ X , there are at most ξ
elements F ∈ F with x ∈ F , then, following [BHS14], we call F a factor system for X .
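In other words, the hyperclosure is a factor system exactly when it has finite multiplicity:
\[
\sup_{x \in \mathcal{X}} \#\{\, F \in \mathcal{F} \;:\; x \in F \,\} \;\le\; \xi \;<\; \infty .
\]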
Remark 1.15. The definition of a factor system in [BHS14] is more general than the definition
given above. The assertion that X has a factor system in the sense of [BHS14] is equivalent to
the assertion that the hyperclosure of X has finite multiplicity, because any factor system (in
the sense of [BHS14]) contains all elements of F whose diameters exceed a given fixed threshold.
Each of the five conditions in Definition 8.1 of [BHS14] is satisfied by F, except possibly the
finite multiplicity condition, Definition 8.1.(3). Indeed, parts (1),(2),(4) of that definition are
included in Definition 1.14 above. Part (5) asserts that there is a constant p so that gF (F 0 ) is in
the factor system provided F, F 0 are and diam(gF (F 0 )) ≥ p. Hence Definition 1.14.(2) implies
that this condition is satisfied by F, with p = 0.
2. Analysis of the hyperclosure
Fix a proper X with a group G acting properly and cocompactly. Let F be the hyperclosure.
2.1. Decomposition. Let F0 = {X } and, for each n ≥ 1, let Fn be the subset of F consisting
of those subcomplexes that can be written in the form gH (F ), where F ∈ Fn−1 and H is a
combinatorial hyperplane. Hence F1 is the set of combinatorial hyperplanes in X .
Lemma 2.1 (Decomposing F). Each F ∈ F − {X} is parallel to a subcomplex of the form g_{H_1}(g_{H_2}(··· g_{H_{n−1}}(H_n) ···)) for some n ≥ 1, where each H_i is a combinatorial hyperplane, i.e. F/∥ = (⋃_{n≥1} F_n)/∥.
Proof. This follows by induction, Lemma 1.8, and the definition of F.
Corollary 2.2. F = ⋃_{n≥0} F_n.
Proof. It suffices to show F ⊆ ⋃_{n≥0} F_n. Let F ∈ F. If F = X, then F ∈ F_0. Otherwise, by Lemma 2.1, there exists n ≥ 1, a combinatorial hyperplane H, and a convex subcomplex F′ ∈ ⋃_{k≤n} F_k with F ∥ g_H(F′). Consider P_F ≅ F × F^⊥ and choose f ∈ F^⊥ so that φ_F(F × {f}) coincides with F. Then φ_F(F × {f}) lies in some combinatorial hyperplane H′ – either H′ = H and F = g_H(F′), or F is non-unique in its parallelism class, so lies in a combinatorial hyperplane in the carrier of a hyperplane crossing F^⊥. Consider g_{H′}(g_H(F′)). On one hand, g_{H′}(g_H(F′)) ∈ ⋃_{k≤n+1} F_k. But g_{H′}(g_H(F′)) = F. Hence F ∈ ⋃_{n≥1} F_n.
2.2. Stabilizers act cocompactly. The goal of this subsection is to prove that StabG (F ) acts
cocompactly on F for each F ∈ F. The following lemma is standard but we include a proof in
the interest of a self-contained exposition.
Lemma 2.3 (Coboundedness from finite multiplicity). Let X be a metric space and let G →
Isom(X) act cocompactly, and let Y be a G–invariant collection of subspaces such that every ball
intersects finitely many elements of Y. Then StabG (P ) acts coboundedly on P for every P ∈ Y.
Proof. Let P ∈ Y, choose a basepoint r ∈ X, and use cocompactness to choose t < ∞ so that
d(x, G · r) ≤ t for all x ∈ X. Choose g1 , . . . , gs ∈ G so that the G–translates of P intersecting
N10t (r) are exactly gi P for i ≤ s. Since Y is G–invariant and locally finite, s < ∞. (In
other words, the assumptions guarantee that there are finitely many cosets of StabG (P ) whose
corresponding translates of P intersect N10t (r), and we have fixed a representative of each of
these cosets.) Let Kr = maxi≤s d(r, gi r). For each g ∈ G, the translates of P that lie within
distance 10t of g · r are precisely gg1 P, . . . , ggs P and Kgr = Kr since d(r, gi · r) = d(g · r, ggi · r).
Fix a basepoint p ∈ P and let q ∈ P be an arbitrary point; choose hp , hq ∈ G so that
d(hp · r, p) ≤ t, d(hq · r, q) ≤ t. Without loss of generality, we may assume that hq = 1. Then
{hp gi P }si=1 is the set of P –translates intersecting N10t (hp · r). Now, p ∈ P and d(hp · r, p) < 10t,
so there exists i so that hp gi P = P , i.e. hp gi ∈ StabG (P ). Finally,
d(hp gi · q, p) ≤ d(hp gi · r, hp gi · q) + d(hp · r, p) + d(hp gi · r, hp · r) ≤ 2t + Kr ,
which is uniformly bounded. Hence the action of StabG (P ) on P is cobounded.
Remark 2.4. We use Lemma 2.3 when X and P are proper, to get a cocompact action.
Lemma 2.5. Let X be a proper CAT(0) cube complex with a group G acting cocompactly. If Y, Y′ ⊂ X are parallel convex subcomplexes, then Stab_G(Y) and Stab_G(Y′) are commensurable. Thus, if Stab_G(Y) acts cocompactly on Y, then Stab_G(Y) ∩ Stab_G(Y′) acts cocompactly on Y′.
Proof. Let T be the set of Stab_G(Y)–translates of Y′. Then each gY′ ∈ T is parallel to Y, and d_X(gY′, Y) = d_X(Y′, Y). Since Y^⊥ is locally finite, |T| < ∞. Hence K = ker(Stab_G(Y) → Sym(T)) has finite index in Stab_G(Y) but lies in Stab_G(Y) ∩ Stab_G(Y′). By Lemma 1.7, K acts cocompactly on Hull(Y ∪ Y′), stabilizing Y′, and thus acts cocompactly on Y′.
Definition 2.6. Let H ∈ F1 . For n ≥ 1, k ≥ 0, let Fn,H,k be the set of F ∈ Fn so that F =
gH (F 0 ) for some F 0 ∈ Fn−1 with d(H, F 0 ) ≤ k. Let Fn,H = ∪k≥0 Fn,H,k and Fn,k = ∪H∈F1 Fn,H,k .
Proposition 2.7 (Cocompactness). Let n ≥ 1. Then, for any F ∈ Fn , StabG (F ) acts cocompactly on F . Hence StabG (F ) acts cocompactly on F for each F ∈ F.
Proof. The second assertion follows from the first and Corollary 2.2. We argue by double
induction on n, k to prove the first assertion, with k as in Definition 2.6. First, observe that Fn ,
Fn,k are G–invariant for all n, k. Similarly, Fn,H,k is StabG (H)–invariant for all H ∈ F1 .
Base Case: n = 1. From local finiteness of X , cocompactness of the action of G and
Lemma 2.3, we see that StabG (H) acts cocompactly on H for each H ∈ F1 .
Inductive Step 1: (n, k) for all k implies (n + 1, 0). Let F ∈ Fn+1,0 . Then F = H ∩ F 0 ,
where H ∈ F1 and F 0 ∈ Fn . By definition, F 0 = gH 0 (F 00 ) for some F 00 ∈ Fn−1 and H 0 ∈ F1 .
Thus K = StabG (F 0 ) acts cocompactly on F 0 by induction.
Let S = {k(H ∩ F 0 ) : k ∈ K}, which is a K–invariant set of convex subcomplexes of F 0 .
Moreover, since the set of all K–translates of H is a locally finite collection, because X is locally
finite and H is a combinatorial hyperplane, S has the property that every ball in F 0 intersects
finitely many elements of S. Lemma 2.3, applied to the cocompact action of K on F 0 , shows
that StabK (H ∩ F 0 ) (which equals StabK (F )), and hence StabG (F ), acts cocompactly on F .
Inductive Step 2: (n, k) implies (n, k + 1). Let F ∈ F_{n,k+1}, so that F = g_H(F′) with H ∈ F_1, F′ ∈ F_{n−1} and d = d(H, F′) ≤ k + 1. If d ≤ k, induction applies. Thus, we can assume that d = k + 1. Then there is a product region F × [0, d] ⊂ X with F × {0} = F and F × {d} ⊂ F′. Then F_1 := F × {1} is a parallel copy of F, and F_1 ⊂ g_{H′}(F′) for a combinatorial hyperplane H′ containing F_1 with d(H′, F′) = d − 1 ≤ k. By induction L = Stab_G(g_{H′}(F′)) acts cocompactly on g_{H′}(F′).
We claim that F1 = gH 0 (F 0 ) ∩ gH 0 (H). To see this, note that the hyperplanes that cross F1
are exactly the hyperplanes that cross F 0 and H. However, those are the hyperplanes which
cross H 0 and F 0 which also cross H. It easily follows that the two subcomplexes are equal.
Now let T be the set of L–translates of F_1 = g_{H′}(F′) ∩ g_{H′}(H) in g_{H′}(F′). This is an L–invariant collection of convex subcomplexes of g_{H′}(F′). Moreover, each ball in g_{H′}(F′) intersects finitely many elements of T. Indeed, T is a collection of subcomplexes of the form T_ℓ = g_{ℓH′}(ℓH) ∩ g_{H′}(F′), where ℓ ∈ L. Recall that d_X(H, H′) = 1. Hence, fixing y ∈ g_{H′}(F′) and t ≥ 0, if {T_{ℓ_i}}_{i∈I} ⊆ T is a collection of elements of T, all of which intersect N_t(y), then {ℓ_iH, ℓ_iH′}_{i∈I} all intersect N_{t+1}(y). However, by local finiteness of X there are only finitely many distinct ℓ_iH, ℓ_iH′. Further, if ℓ_iH = ℓ_jH and ℓ_iH′ = ℓ_jH′, then T_{ℓ_i} = T_{ℓ_j}. Thus, the index set I must be finite. Hence, by Lemma 2.3 and cocompactness of the action of L on g_{H′}(F′), we see (as in Inductive Step 1) that Stab_G(F_1) acts cocompactly on F_1. Now, since F_1 is parallel to F, we see by Lemma 2.5 that Stab_G(F) acts cocompactly on F.
The next Lemma explains how to turn the algebraic conditions on the G–action described
in Section 4 into geometric properties of the convex subcomplexes in F. This is of independent
interest, giving a complete algebraic characterization of when two cocompact subcomplexes have
parallel essential cores.
Lemma 2.8 (Characterization of commensurable stabilizers). Let Y_1 and Y_2 be two convex subcomplexes of X and let G_i = Stab_G(Y_i). Suppose further that G_i acts on Y_i cocompactly. Then G_1 and G_2 are commensurable if and only if the G_1–essential core Ŷ_1 and the G_2–essential core Ŷ_2 are parallel.
Proof. First, if Ŷ_1, Ŷ_2 are parallel, then Lemma 2.5 shows that Stab_G(Ŷ_1), Stab_G(Ŷ_2) contain Stab_G(Ŷ_1) ∩ Stab_G(Ŷ_2) as a finite-index subgroup. Since Stab_G(Ŷ_i) contains G_i as a finite-index subgroup, it follows that G_1 ∩ G_2 has finite index in G_1 and in G_2.
Conversely, suppose that G_1, G_2 have a common finite-index subgroup. Thus, G_1 ∩ G_2 acts cocompactly on both Y_1 and Y_2. This implies that Y_1, Y_2 lie at finite Hausdorff distance, since choosing r > 0 and y_i ∈ Y_i so that (G_1 ∩ G_2)·B_r(y_i) = Y_i, we see that Y_1 is in the (d(y_1, y_2) + r)–neighbourhood of Y_2, and vice-versa. Further, this implies that Ŷ_1, Ŷ_2 lie at finite Hausdorff distance, since Ŷ_i is finite Hausdorff distance from Y_i.
Suppose that Ŷ_1, Ŷ_2 are not parallel. Then, without loss of generality, some hyperplane H of X crosses Ŷ_1 but not Ŷ_2. Since G_1 acts on Ŷ_1 essentially and cocompactly, [CS11] provides a hyperbolic isometry g ∈ G_1 of Ŷ_1 so that gH^− ⊊ H^−, where H^− is the halfspace of X associated to H and disjoint from Y_2. Choosing n > 0 so that the translation length of g^n exceeds the distance from Y_2 to the point in which some g–axis intersects H, we see that H cannot separate g^nŶ_2 from the axis of g. Thus, g^nŶ_2 ∩ H^− ≠ ∅, whence ⟨g⟩ ∩ G_2 = {1}, contradicting that G_1 and G_2 are commensurable (since g has infinite order). Thus Ŷ_1, Ŷ_2 are parallel.
2.3. Ascending or descending chains. We reduce Theorem A to a claim about chains in F.
Lemma 2.9 (Finding chains). Let U ⊆ F be an infinite subset satisfying ⋂_{U∈U} U ∋ x for some x ∈ X. Then one of the following holds:
• there exists a sequence {F_i}_{i≥1} in F so that x ∈ F_i ⊊ F_{i+1} for all i;
• there exists a sequence {F_i}_{i≥1} in F so that x ∈ F_i and F_i ⊋ F_{i+1} for all i.
Proof. Let F_x ⊇ U be the set of F ∈ F with x ∈ F. Let Ω be the directed graph with vertex set F_x, with (F, F′) a directed edge if F ⊊ F′ and there does not exist F″ ∈ F_x with F ⊊ F″ ⊊ F′.
Let F_0 = {x}. Since {x} is the intersection of the finitely many combinatorial hyperplanes containing x, F_0 ∈ F and in particular F_0 ∈ F_x. Moreover, note that F_0 has no incoming Ω–edges, since F_0 cannot properly contain any other subcomplex. For any F ∈ F_x, either Ω contains an edge from F_0 to F, or there exists F′ ∈ F_x such that F_0 ⊂ F′ ⊂ F.
Hence either F_x contains an infinite ascending or descending ⊆–chain, or Ω is a connected directed graph in which every non-minimal vertex has an immediate predecessor, and every non-maximal vertex has an immediate successor. In the first two cases, we are done, so assume that the third holds. In the third case, there is a unique vertex, namely F_0, with no incoming edges, and there is a finite length directed path from F_0 to any vertex.
Let F ∈ F_x and suppose that {F_i}_i is the set of vertices of Ω so that (F, F_i) is an edge. For i ≠ j, we have F ⊆ F_i ∩ F_j ⊊ F_i, so since F_i ∩ F_j = g_{F_i}(F_j) ∈ F, we have F_i ∩ F_j = F.
The set {F_i}_i is invariant under the action of Stab_G(F). Also, by Proposition 2.7, Stab_G(F) acts cocompactly on F. A 0–cube y ∈ F is diplomatic if there exists i so that y is joined to a vertex of F_i − F by a 1–cube in F_i. Only uniformly finitely many F_i can witness the diplomacy of y since X is uniformly locally finite and F_i ∩ F_j = F whenever i ≠ j. Also, y is diplomatic, witnessed by F_{i_1}, . . . , F_{i_k}, if and only if gy is diplomatic, witnessed by gF_{i_1}, . . . , gF_{i_k}, for each g ∈ Stab_G(F). Since Stab_G(F) acts cocompactly on F, we thus get |{F_i}_i / Stab_G(F)| < ∞.
Let Ω̂ be the graph with a vertex for each F ∈ F containing a point of G·x and a directed edge for minimal containment as above. Then Ω̂ is a graded directed graph as above. For each n ≥ 0, let S_n be the set of vertices in Ω̂ at distance n from a minimal element. The above argument shows that G acts cofinitely on each S_n, and thus Ω̂/G is locally finite. Hence, by König's infinity lemma, either Ω̂/G contains a directed ray or Ω̂^(0)/G is finite. In the former case, Ω̂ must contain a directed ray, in which case there exists {F_i} ⊆ F with F_i ⊊ F_{i+1} for all i. Up to translating by an appropriate element of G, we can assume that x ∈ F_1. The latter case means that the set of F ∈ F such that F ∩ G·x ≠ ∅ is G–finite. But since G acts properly and cocompactly on X, any G–invariant G–finite collection of subcomplexes whose stabilizers act cocompactly has finite multiplicity, a contradiction.
3. Orthogonal complements of compact sets and the hyperclosure
We now characterise F in a CAT(0) cube complex X , without making use of a group action.
Lemma 3.1. Let A ⊆ B ⊆ X be convex subcomplexes and let a ∈ A. Then φB ({a} × B ⊥ ) ⊆
φA ({a} × A⊥ ).
Proof. Let x ∈ φB ({a} × B ⊥ ). Then every hyperplane H separating x from a separates two
parallel copies of B and thus separates two parallel copies of A, since A ⊆ B. It follows
from Lemma 1.11 that every hyperplane separating a from x crosses φA ({a} × A⊥ ), whence
x ∈ φA ({a} × A⊥ ).
Given a convex subcomplex F ⊆ X, fix a base 0–cube f ∈ F and, for brevity, let F^⊥ = φ_F({f} × F^⊥) ⊆ X. Note that f ∈ F^⊥, and so we let (F^⊥)^⊥ = φ_{F^⊥}({f} × (F^⊥)^⊥) (here, the second (F^⊥)^⊥ denotes the abstract orthogonal complement of F^⊥), which again contains f, and so we can similarly define ((F^⊥)^⊥)^⊥,
etc.
Lemma 3.2. Let F be a convex subcomplex of X. Then ((F^⊥)^⊥)^⊥ = F^⊥.
Proof. If F is a convex subcomplex, there is a parallel copy of F^⊥ based at each 0–cube of F, since F × F^⊥ is a convex subcomplex of X. Thus F ↪_∥ (F^⊥)^⊥, and by Lemma 3.1 we have F^⊥ ⊇ ((F^⊥)^⊥)^⊥. To obtain the other inclusion, we show that every parallel copy of F is contained in a parallel copy of (F^⊥)^⊥. This is clear since, letting A = F^⊥, we have that φ_A(A × A^⊥) is a convex subcomplex of X, but F ⊂ A^⊥ by the above, and thus φ_F(F × F^⊥) ⊆ φ_{F^⊥}(F^⊥ × (F^⊥)^⊥), both of which are convex subcomplexes of X. Hence F^⊥ ⊆ ((F^⊥)^⊥)^⊥, completing the proof.
3.1. Characterisation of F using orthogonal complements of compact sets. In this
section, we assume that X is locally finite, but do not need a group action.
Theorem 3.3. Let F ⊂ X be a convex subcomplex. Then F ∈ F if and only if there exists a
compact convex subcomplex C so that C ⊥ = F .
Proof. Let C be a compact convex subcomplex of X. Let H_1, . . . , H_k be the hyperplanes crossing C. Fix a basepoint x ∈ C, and suppose the H_i are labeled so that x ∈ N(H_i) for 1 ≤ i ≤ m, and x ∉ N(H_i) for i > m, for some m ≤ k. Let F = ⋂_{i=1}^{k} g_{H_1}(H_i), which contains x. Any hyperplane H crosses φ_C({x} × C^⊥) if and only if H crosses each H_i, which occurs if and only if H crosses F. Hence F = φ_C({x} × C^⊥), as required.
We now prove the converse. Let F ∈ Fn for n ≥ 1. If n = 1 and F is a combinatorial
hyperplane, F = e⊥ for some 1–cube e of X . Next, assume that n ≥ 2 and write F = gH (F 0 )
where F 0 ∈ Fn−1 and H is a combinatorial hyperplane. Induction on n gives F 0 = (C 0 )⊥ for
some compact convex subcomplex C 0 .
Let e be a 1–cube with orthogonal complement H ∈ F1 , chosen as close as possible to C 0 ,
so that d(e, C 0 ) = d(H, C 0 ). In particular, any hyperplane separating e from C 0 separates H
from C 0 . Moreover, we can and shall assume that C 0 was chosen in its parallelism class so that
d(e, C 0 ) is minimal when e, C 0 are allowed to vary in their parallelism classes.
Let C be the convex hull of (the possibly disconnected set) e ∪ C 0 .
We claim that gH (F 0 ) = {x} × C ⊥ . First, suppose that V is a hyperplane crossing {x} × C ⊥ .
Then V separates two parallel copies of C, each of which contains a parallel copy of e and one
of C 0 . Hence V crosses H and F 0 , so V crosses gH (F 0 ). Thus {x} × C ⊥ ⊆ gH (F 0 ).
Conversely, suppose V is a hyperplane crossing gH (F 0 ), i.e. crossing H and F 0 . To show
that V crosses {x} × C ⊥ , it suffices to show that V crosses every hyperplane crossing C. If W
crosses C, then either W separates e, C 0 or crosses e ∪ C 0 . In the latter case, V crosses W since
V crosses H and (C 0 )⊥ = F 0 . In the former case, since e, C 0 are as close as possible in their
parallelism classes, W separates e, C 0 only if it separates H from C 0 × (C 0 )⊥ , so W must cross
V . Hence gH (F 0 ) ⊆ {x} × C ⊥ . Since only finitely many hyperplanes V either cross e, cross C 0 ,
or separate e from C 0 , the subcomplex C is compact.
Corollary 3.4. If F ∈ F, then (F^⊥)^⊥ = F.
Proof. If F ∈ F, then F = C^⊥ for some compact C, by Theorem 3.3, and hence (F^⊥)^⊥ = ((C^⊥)^⊥)^⊥ = C^⊥ = F, by Lemma 3.2.
4. Auxiliary conditions
In this section, the group G acts geometrically on the proper CAT(0) cube complex X .
4.1. Rotation.
Definition 4.1 (Rotational). The action of G on X is rotational if the following holds. For
each hyperplane B, there is a finite-index subgroup KB ≤ StabG (B) so that for all hyperplanes
A with d(A, B) > 0, and all k ∈ KB , the carriers N (A) and N (kA) are either equal or disjoint.
Remark 4.2. For example, if G\X is (virtually) special, then G acts rotationally on X , but
one can easily make examples of non-cospecial rotational actions on CAT(0) cube complexes.
To illustrate how to apply rotation, we first prove a lemma about F2 .
Lemma 4.3 (Uniform cocompactness in F2 under rotational actions). Let G act properly, cocompactly, and rotationally on X . Then for any ball Q in X , there exists s ≥ 0, depending only
on X , and the radius of Q, so that for all A, B ∈ F1 , at most s distinct translates of gB (A) can
intersect Q.
Proof. Note that if B, gB are in the same G–orbit, and K_B ≤ Stab_G(B) witnesses the rotation of the action at B, then K_B^g does the same for gB, so we can assume that the index of K_B in Stab_G(B) is uniformly bounded by some constant ι as B varies over the (finitely many orbits of) combinatorial hyperplanes.
Next, note that it suffices to prove the claim for Q of radius 0, since the general statement
will then follow from uniform properness of X .
Finally, it suffices to fix combinatorial hyperplanes B and A and bound the number of
StabG (B)–translates of A whose projections on B contain some fixed 0–cube x ∈ B, since
only boundedly many translates of B can contain x.
We can assume that A is disjoint from B, for otherwise gB (A) = A ∩ B, and the number of
translates of A containing x is bounded in terms of G and X only.
Now suppose d(A, B) > 0. First, let {g_1, . . . , g_k} ⊂ K_B be such that the translates g_ig_B(A) are all distinct and x ∈ ⋂_{i=1}^{k} g_B(g_iA) = ⋂_{i=1}^{k} g_ig_B(A). For simplicity, we can and shall assume that g_1 = 1.
We can also assume, by multiplying our eventual bound by 2, that the gi A all lie on the same
side of B, i.e. the hyperplane B 0 whose carrier is bounded by B and a parallel copy of B does
not separate any pair of the gi A. By rotation, A, g2 A are disjoint, and hence separated by some
hyperplane V .
Since V cannot separate gB (A), gB (g2 A), we have that V separates either A or g2 A from
B. (The other possibility is that V = B but we have ruled this out above.) Up to relabelling,
we can assume the former. Then, for i ≥ 2, we have that gi V separates gi A from gi B = B.
Moreover, by choosing V as close as possible to B among hyperplanes that separate A from B and g_2A, we see that the hyperplanes {g_iV}_{i=1}^{k} have pairwise-intersecting carriers, and at least two of them are distinct. This contradicts the rotation hypothesis unless k = 1.
More generally, the above argument shows that if {g_1, . . . , g_k} ⊂ Stab_G(B) are such that the translates g_ig_B(A) are all distinct and x ∈ ⋂_{i=1}^{k} g_B(g_iA) = ⋂_{i=1}^{k} g_ig_B(A), then the number of g_i belonging to any given left coset of K_B in Stab_G(B) is uniformly bounded. Since [Stab_G(B) : K_B] ≤ ι, the lemma follows.
More generally:
Lemma 4.4. Let G act properly, cocompactly, and rotationally on X. Then for each ρ ≥ 0 there exists a constant s_0, depending only on X and ρ, so that the following holds. Let F ∈ F. Then at most s_0 distinct G–translates of F can intersect any ρ–ball in X.
Proof. As in the proof of Lemma 4.3, it suffices to bound the number of G–translates of F
containing a given 0–cube x. As in the same proof, it suffices to bound the number of StabG (B)–
translates of F containing x, where B is a combinatorial hyperplane for which x ∈ F ⊂ B.
By the first paragraph of the proof of Theorem 3.3, there exists n and combinatorial hyperplanes A_1, . . . , A_n such that F = ⋂_{i=1}^{n} g_B(A_i). If A_n is parallel to B, then F = ⋂_{i=1}^{n−1} g_B(A_i), so by choosing a smallest such collection, we have that no A_i is parallel to B.
Let g_1, . . . , g_k ∈ K_B have the property that the g_iF are all distinct and contain x. Note that g_iF = ⋂_{j=1}^{n} g_B(g_iA_j).
Now, if A_j is disjoint from B, then rotation implies that for all i, either g_iA_j = A_j or g_iA_j and A_j are disjoint. If g_iA_j ≠ A_j, there is a hyperplane V separating them, and V cannot cross or coincide with B (as in the proof of Lemma 4.3). Hence V separates A_j, say, from B. So g_iV separates g_iA_j from B, and g_iV ≠ V. By choosing V as close as possible to B, we have
(again as in Lemma 4.3) that V and g_iV cross or osculate, which contradicts rotation. Hence g_iA_j = A_j for all such i, j.
Let J be the set of j ≤ n so that A_j is disjoint from B, so that ⋂_{j∈J} g_B(A_j) is fixed by each g_i. Let J′ = {1, 2, . . . , n} − J. Note that for all j ∈ J′, the combinatorial hyperplane A_j is one of at most χ combinatorial hyperplanes that contain x, where χ is the maximal degree of a vertex in X. Moreover, g_B(A_j) = A_j ∩ B.
Since each g_i fixes ⋂_{j∈J} g_B(A_j), k must be bounded in terms of the number of translates of ⋂_{j∈J′} g_B(A_j) containing x; since we can assume that the A_j contain x, this follows.
As in the proof of Lemma 4.3, if g1 , . . . , gk ∈ StabG (B) have the property that the gi F are all
distinct and contain x, then the number of gi belonging to any particular hKB , h ∈ StabG (B)
is uniformly bounded, and the number of such cosets is bounded by ι, so the number of such
StabG (B)–translates of F containing x is uniformly bounded.
We now prove Theorem A in the special case where G acts on X rotationally.
Corollary 4.5. Let G act properly, cocompactly, and rotationally on the proper CAT(0) cube
complex X . Then F is a factor system.
Proof. By Lemma 4.4, there exists s0 < ∞ so that for all F ∈ F, at most s0 distinct G–translates
of F can contain a given point. By uniform properness of F and the proof of Lemma 2.3, there
exists R < ∞ so that each F ∈ F has the following property: fix a basepoint x ∈ F . Then for
any y ∈ F , there exists g ∈ StabG (F ) so that d(gx, y) ≤ R. Hence there exists k so that for all
F , the complex F contains at most k StabG (F )–orbits of cubes.
Conclusion in the virtually torsion-free case: If G is virtually torsion-free, then (passing
to a finite-index torsion-free subgroup) G\X is a compact nonpositively-curved cube complex
admitting a local isometry StabG (F )\F → G\X , where StabG (F )\F is a nonpositively-curved
cube complex with at most k cubes. Since there are only finitely many such complexes, and
finitely many such local isometries, the quotient G\F is finite. Since each x ∈ X is contained in boundedly many translates of each F ∈ F, and there are only finitely many orbits in F, it follows that x is contained in boundedly many elements of F, as required.
General case: Even if G is not virtually torsion-free, we can argue essentially as above, except we have to work with nonpositively-curved orbi-complexes instead of nonpositively-curved
cube complexes.
First, let Y be the first barycentric subdivision of X , so that G acts properly and cocompactly
on Y and, for each cell y of Y, we have that StabG (y) fixes y pointwise (see [BH99, Chapter
III.C.2].) Letting F 0 be the first barycentric subdivision of F , we see that F 0 is a subcomplex of
Y with the same properties with respect to the StabG (F )–action. Moreover, F 0 has at most k 0
StabG (F )–orbits of cells, where k 0 depends on dim X and k, but not on F .
The quotient G\Y is a complex of groups whose cells are labelled by finitely many different
finite subgroups, and the same is true for StabG (F )\F . Moreover, we have a morphism of
complexes of groups StabG (F )\F → G\Y which is injective on local groups. Since G acts
on X properly, the local groups in G\Y are finite. Hence there are boundedly many cells in
StabG (F )\F , each of which has boundedly many possible local groups (namely, the various
subgroups of the local groups for the cells of G\Y). Hence there are finitely many choices of
StabG (F )\F , and thus finitely many G–orbits in F, and we can conclude as above.
4.2. Weak height and essential index conditions.
Definition 4.6 (Weak height condition). Let G be a group and H ≤ G a subgroup. The subgroup H satisfies the weak height condition if the following holds. Let {g_i}_{i∈I} be an infinite subset of G so that H ∩ ⋂_{i∈J} H^{g_i} is infinite whenever J ⊂ I is a finite subset. Then there exist distinct i, j so that H ∩ H^{g_i} = H ∩ H^{g_j}.
Definition 4.7 (Noetherian Intersection of Conjugates Condition (NICC)). Let G be a group and H ≤ G a subgroup. The subgroup H satisfies the Noetherian intersection of conjugates condition (NICC) if the following holds. Let {g_i}_{i=1}^{∞} be an infinite subset of distinct elements of G so that H_n = H ∩ ⋂_{i=1}^{n} H^{g_i} is infinite for all n. Then there exists ℓ > 0 so that for all j, k ≥ ℓ, H_j and H_k are commensurable.
Definition 4.8 (Conditions for hyperplanes). Let G act on the CAT(0) cube complex X .
Then the action satisfies the weak height condition for hyperplanes or respectively NICC for
hyperplanes if, for each hyperplane B of X , the subgroup StabG (B) ≤ G satisfies the weak
height condition, or NICC, respectively.
Remark 4.9. Recall that H ≤ G has finite height if there exists n so that any collection of at
least n + 1 distinct left cosets of H has the property that the intersection of the corresponding
conjugates of H is finite. Observe that if H has finite height, then it satisfies both the weak
height condition and NICC, but that the converse does not hold.
Definition 4.10 (Essential index condition). The action of G on X satisfies the essential index condition if there exists ζ ∈ N so that for all F ∈ F we have [Stab_G(F̂) : Stab_G(F)] ≤ ζ, where F̂ is the Stab_G(F)–essential core of F.
4.3. Some examples where the auxiliary conditions are satisfied. We now briefly consider some examples illustrating the various hypotheses. Our goal here is just to illustrate the
conditions in simple cases.
4.3.1. Special groups. Stabilizers of hyperplanes in a right-angled Artin group are simply special subgroups generated by the links of vertices. Let Γ be a graph generating a right-angled Artin group A_Γ and let Λ be any induced subgraph. Then A_Λ is a special subgroup of A_Γ, and A_Λ^{g_i} has non-trivial intersection with A_Λ^{g_j} if and only if g_ig_j^{−1} commutes with some subgraph Λ′ of Λ. Further, their intersection is conjugate to the special subgroup A_{Λ′}. The weak height condition and NICC follow. The essential index condition also holds since each A_Λ acts essentially on the corresponding element of F, which is just a copy of the universal cover of the Salvetti complex of A_Λ. In fact, these considerations show that hyperplane stabilisers in RAAGs have finite height.
It is easily verified that these properties are inherited by subgroups arising from compact local
isometries to the Salvetti complex, reconfirming that (virtually) compact special cube complexes
have factor systems in their universal covers.
4.3.2. Non-virtually special lattices in products of trees. The uniform lattices in products of trees
from [Wis96, BM97, Rat07, JW09] do not satisfy the weak height condition, but they do satisfy
NICC and the essential index condition.
Indeed, let G be a cocompact lattice in Aut(T1 × T2 ), where T1 , T2 are locally finite trees. If
A, B are disjoint hyperplanes, then gB (A) is a parallel copy of some Ti , i.e. gB (A) is again a
hyperplane; otherwise, if A, B cross, then gB (A) is a single point. The essential index condition
follows immediately, as does the NICC. However, G can be chosen so that there are pairs of
parallel hyperplanes A, B so that StabG (A) ∩ StabG (B) has arbitrarily large (finite) index in
StabG (B), so the weak height condition fails.
4.3.3. Graphs of groups. Let Γ be a finite graph of groups, where each vertex group Gv acts
properly and cocompactly on a CAT(0) cube complex Xv with a factor system Fv , and each edge
group Ge acts properly and cocompactly on a CAT(0) cube complex Xe , with a factor system
Fe , so that the following conditions are satisfied, where v, w are the vertices of e:
• there are G–equivariant convex embeddings Xe → Xv , Xw
• these embeddings induce injective maps Fe → Fv , Fw .
If the action of G on the Bass-Serre tree is acylindrical, then one can argue essentially as in the
proof of [BHS15, Theorem 8.6] to prove that the resulting tree of CAT(0) cube complexes has a
factor system. Moreover, ongoing work on improving [BHS15, Theorem 8.6] indicates that one
can probably obtain the same conclusion in this setting without this acylindricity hypothesis.
Of course, one can imagine gluing along convex cocompact subcomplexes that do not belong to the factor systems of the incident vertex groups. Also, we believe that the property of being cocompactly cubulated with a factor system is preserved by taking graph products, and that one can prove this by induction on the size of the graph by splitting along link subgroups. This is the subject of current work in the hierarchically hyperbolic setting.
4.3.4. Cubical small-cancellation quotients. There are various ways of building more exotic examples of non-virtually special cocompactly cubulated groups. In [Hua16, JW17], Jankiewicz–Wise construct a group G that is cocompactly cubulated but does not virtually split. They start with a group G_0 of the type discussed in Remark 4.2 and consider a small-cancellation quotient of the free product of several copies of G_0. This turns out to satisfy strong cubical small-cancellation conditions sufficient to produce a proper, cocompact action of G on a CAT(0) cube complex. However, it appears that the small-cancellation conditions needed to achieve this are also strong enough to ensure that the NICC and essential index properties pass from G_0 to G. The key points are that G is hyperbolic relative to G_0, and each wall in G intersects each coset of G_0 in at most a single wall (Lemma 4.2 and Corollary 4.5 of [JW17]).
5. F is closed under orthogonal complementation, given a group action
We now assume that X is a locally finite CAT(0) cube complex on which the group G acts
properly and cocompactly. Let F be the hyperclosure in X and let B be a constant so that each
0–cube x of X lies in ≤ B combinatorial hyperplanes.
For convex subcomplexes D, F of X , we write F = D⊥ to mean F = φD ({f } × D⊥ ) for some
f ∈ D, though we may abuse notation, suppress the φD , and write e.g. {f } × D⊥ to mean
φD ({f } × D⊥ ) when we care about the specific point f .
Proposition 5.1 (F is closed under orthogonal complements). Let G act on X properly and
cocompactly. Suppose that one of the following holds:
• the G–action on X satisfies the weak height property for hyperplanes;
• the G–action on X satisfies the essential index condition and the NICC for hyperplanes;
• F is a factor system.
Let A be a convex subcomplex of X . Then A⊥ ∈ F. Hence StabG (A⊥ ) acts on A⊥ cocompactly.
In particular, for all F ∈ F, we have that F ⊥ ∈ F.
We first need a lemma.
Lemma 5.2. Let A ⊂ X be a convex subcomplex with diam(A) > 0 and let x ∈ A(0) . Let
H1 , . . . , Hk be all of the hyperplanes intersecting A whose carriers contain x, so that for each i,
there is a combinatorial hyperplane Hi+ associated to Hi with x ∈ Hi+ . Let Y = ⋂_{i=1}^{k} Hi+ . Let
S be the set of all combinatorial hyperplanes associated to hyperplanes crossing A. Then
A⊥ = ⋂_{H′∈S} gY (H′),
where A⊥ denotes the orthogonal complement of A at x. In particular, if A is unique in its
parallelism class, then A⊥ = {x}. Finally, if diam(A) = 0, then A⊥ = X .
Proof. If A is a single 0–cube, then A⊥ = X by definition. Hence suppose that diam(A) > 0.
Let H′ ∈ S. Since gY (H′ ∩ A) ⊆ Y ∩ A = {x}, we see that x ∈ gY (H′). Suppose that y ∈ A⊥ .
Then every hyperplane V separating y from x crosses each of the hyperplanes H′ crossing A, and
thus crosses Y , whence y ∈ gY (H′) for each H′ ∈ S. Thus A⊥ ⊆ ⋂_{H′∈S} gY (H′). On the other hand, suppose that y ∈ ⋂_{H′∈S} gY (H′). Then every hyperplane H′ separating x from y crosses every hyperplane crossing A, so y ∈ A⊥ . This completes the proof that A⊥ = ⋂_{H′∈S} gY (H′).
Finally, A is unique in its parallelism class if and only if A⊥ = {x}, by definition of A⊥ .
We can now prove the proposition:
Proof of Proposition 5.1. The proof has several stages.
Setup using Lemma 5.2: If A is a single point, then A⊥ = X , which is in F by definition.
Hence suppose diam(A) > 0, and let H1 , . . . , Hk , x ∈ A, Y ⊂ X , and S be as in Lemma 5.2, so
A⊥ = ⋂_{H′∈S} gY (H′).
Thus, to prove the proposition, it is sufficient to produce a finite collection H of hyperplanes
H′ crossing A so that
⋂_{H′∈S} gY (H′) = ⋂_{H′∈H} gY (H′).
Indeed, if there is such a collection, then we have shown A⊥ to be the intersection of finitely
many elements of Fk , whence A⊥ ∈ Fk+|H| , as required. Hence suppose for a contradiction that
for any finite collection H ⊂ S, we have
⋂_{H′∈S} gY (H′) ⊊ ⋂_{H′∈H} gY (H′).
Bad hyperplanes crossing Y : For each m, let Hm be the (finite) set of hyperplanes H′ intersecting N_m^A(x) = A ∩ N_m(x) (and hence satisfying x ∈ gY (H′)).
Consider the collection Bm of all hyperplanes W such that W crosses each element of Hm and
W crosses Y , but W fails to cross A⊥ . (This means that there exists j > m and some U ∈ Hj
so that W ∩ U = ∅.)
Suppose that there exists m so that Bn = ∅ for n > m. Then we can take Hm to be our
desired set H, and we are done. Hence suppose that Bm is nonempty for arbitrarily large m.
Note that if U ∈ Bm , then N_m^A(x) is parallel into U , but there exists j > m so that N_j^A(x) is not parallel into U . (Here N_j^A(x) denotes the j–ball in A about x.)
Elements of Bm osculating A⊥ : Suppose that U ∈ Bm , so that N_j^A(x) is not parallel into U for some j > m. Suppose that U′ is a hyperplane separating U from A⊥ .
Then U′ separates U from x so, since U intersects Y and x ∈ Y , we have that U′ intersects Y . Since gY (A) = {x} is not crossed by any hyperplanes, U′ cannot cross A. Hence U′ separates U from N_m^A(x), so N_m^A(x) is parallel into U′ (since it is parallel into U ). On the other hand, since U′ separates U from A⊥ , U′ cannot cross A⊥ , and thus fails to cross some hyperplane crossing A. Hence U′ ∈ Bm . Thus, for each m, there exists Um ∈ Bm whose carrier intersects A⊥ . Indeed, we have shown that any element of Bm as close as possible to A⊥ has this property.
Hence we have a sequence of radii rn and hyperplanes Un so that:
• Un crosses Y ;
• N (Un ) ∩ A⊥ ≠ ∅;
• N_{r_n}^A(x) is parallel into Un for all n;
• N_{r_{n+1}}^A(x) is not parallel into Un , for all n.
The above provides a sequence {Vn } of hyperplanes so that for each n:
• Vn crosses A;
• Vn crosses Um for m ≥ n;
• Vn does not cross Um for m < n.
Indeed, for each n, choose Vn to be a hyperplane crossing N_{r_{n+1}}^A(x) but not crossing Un . For
each n, let V̄n be one of the two combinatorial hyperplanes (parallel to Vn ) bounding N (Vn ).
Claim 1. For each n, the subcomplex A⊥ is parallel into V̄n .
Proof of Claim 1. Let H be a hyperplane crossing A⊥ . Then, by definition of A⊥ , H crosses
each hyperplane crossing A. But Vn crosses A, so H must also cross Vn . Thus A⊥ is parallel
into Vn .
Next, since G acts on X cocompactly, it acts with finitely many orbits of hyperplanes, so,
by passing to a subsequence (but keeping our notation), we can assume that there exists a
hyperplane V crossing A and elements gn ∈ G, n ≥ 1 so that Vn = gn V for n ≥ 1. For
simplicity, we can assume g1 = 1.
Next, we can assume, after moving the basepoint x ∈ A a single time, that x ∈ V̄ , i.e. V̄ is
among the k combinatorial hyperplanes whose intersection is Y . This assumption is justified by
the fact that F is closed under parallelism, so it suffices to prove that any given parallel copy of
A⊥ lies in F. Thus we can and shall assume Y ⊂ V̄ .
For each m, consider the inductively defined subcomplexes Z1 = V̄ and for each m ≥ 2,
Zm = gV̄ (gg1 V̄ (· · · (ggm−1 V̄ (gm V̄ )) · · · )), which is an element of F.
Claim 2. For all m ≥ 1, we have A⊥ ⊆ Zm .
Proof of Claim 2. Indeed, since A⊥ ⊂ Y by definition, and Y ⊂ V̄ , we have A⊥ ⊂ V̄ . On the
other hand, for each n, Claim 1 implies that A⊥ is parallel into gn V̄ for all n, so by induction,
A⊥ ⊂ Zm for all m ≥ 1, as required.
Claim 3. For all m ≥ 1, we have Zm ) Zm+1 .
Proof of Claim 3. For each m, the hyperplane Um crosses Y , by construction. Since Y ⊆ V̄ ,
this implies that Um crosses V̄ . On the other hand, Um does not cross V̄m+1 . This implies that
Zm ≠ Zm+1 . On the other hand, Zm+1 ⊂ Zm just by definition.
Let Km = StabG (V ) ∩ ⋂_{n=1}^{m} StabG (V )^{g_n} ; by Lemma 1.9, Km has finite index in StabG (Zm ).
Thus, since Zm ∈ F, we see that Km acts on Zm cocompactly. Claim 3 implies that no Zm is
compact, for otherwise we would be forced to have Zm = Zm+1 for some m. Since Km acts on
Zm cocompactly, it follows that Km is infinite for all m.
Thus far, we have not used any of the auxiliary hypotheses. We now explain how to derive a
contradiction under the weak finite height hypothesis.
Claim 4. Suppose that the G–action on X satisfies weak finite height for hyperplanes. Then,
after passing to a subsequence, we have Km = K2 for all m ≥ 2.
Proof of Claim 4. Let I ⊂ N be a finite set and let m = max I. Then ⋂_{n∈I} StabG (V )^{g_n} contains Km , and is thus infinite, since Km was shown above to be infinite. Hence, since StabG (V ) satisfies the weak finite height property, there exist distinct m, m′ so that StabG (V ) ∩ StabG (V )^{g_m} = StabG (V ) ∩ StabG (V )^{g_{m′}}.
Declare m ∼ m′ if StabG (V ) ∩ StabG (V )^{g_m} = StabG (V ) ∩ StabG (V )^{g_{m′}}, so that ∼ is an equivalence relation on N. If any ∼–class [m] is infinite, then we can pass to the subsequence [m] and assume that StabG (V ) ∩ StabG (V )^{g_n} = StabG (V ) ∩ StabG (V )^{g_{n′}} for all n, n′. Otherwise, if every ∼–class is finite, then there are infinitely many ∼–classes, and we can pass to a subsequence containing one element from each ∼–class. This amounts to assuming that StabG (V ) ∩ StabG (V )^{g_m} ≠ StabG (V ) ∩ StabG (V )^{g_{m′}} for all distinct m, m′, but this contradicts weak finite height, as shown above.
Hence, passing to a subsequence, we can assume that StabG (V ) ∩ StabG (V )^{g_m} = StabG (V ) ∩ StabG (V )^{g_{m′}} for all m, m′ ≥ 2, and hence Km = K2 for all m.
From Claim 4 and the fact that Km stabilises Zm for each m, we have that K2 stabilises each Zm . Moreover, K2 acts on Zm cocompactly.
Now, by Claim 2, A⊥ ⊆ ⋂_{m≥1} Zm , so x ∈ ⋂_{m≥1} Zm since x ∈ A⊥ . The fact that K2 acts on Z2 cocompactly provides a constant R so that each point of Z2 lies R–close to the orbit K2 · x. In fact, for any m ≥ 2, the K2 –invariance of Zm and the fact that x ∈ Zm implies that Zm ⊂ NR (K2 · x).
Recall that for each m, the hyperplane Um has the property that d(Um , A⊥ ) ≤ 1, and hence
d(Um , Zm0 ) ≤ 1 for all m, m0 , by Claim 2. Thus d(Um , K2 · x) ≤ R + 1 for all m. So, there exists
an increasing sequence (mi )i so that the hyperplanes Umi all belong to the same K2 –orbit, since
only boundedly many hyperplanes intersect any (R + 1)–ball. Hence there is a single hyperplane
U and, for each i, an element ki ∈ K2 so that Umi = ki U .
Now, since K2 = Kmi for all i, we have that K2 stabilises V and each of the hyperplanes
gmi V . In particular, for each i, the set of hyperplanes crossing V and gmi V is K2 –invariant,
and the same is true for the set of hyperplanes crossing V but failing to cross gmi V . However,
our construction of the set of Um implies that there exists i0 so that U crosses Vmi0 but does
not cross Vmi0 +1 . Thus the same is true for each K2 –translate of U and hence for each Umi .
This contradicts our original construction of the Um ; this contradiction shows that the claimed
sequence of Um cannot exist, whence A⊥ ∈ F.
Having proved the proposition under the weak finite height assumption, we now turn to the
other hypotheses. Let {Zm } and {Km } be as above.
Claim 5. Suppose that the G–action on X satisfies the NICC and the essential index condition.
Then there exists ` so that StabG (Zm ) = StabG (Z` ) for all m ≥ `.
Proof of Claim 5. Let I ⊂ N be a finite set and let m = max I. Then ⋂_{n∈I} StabG (V )^{g_n} contains Km , and is thus infinite, since Km was shown above to be infinite. Hence, since StabG (V ) satisfies the NICC, there exists ℓ so that Km is commensurable with Kℓ for all m ≥ ℓ.
Hence, by Lemma 2.8, we have that Ẑm and Ẑℓ are parallel for all m ≥ ℓ. Moreover, since Km ≤ Kℓ , Proposition 1.13 implies that we can choose essential cores within their parallelism classes so that Ẑm = Ẑℓ for all m ≥ ℓ.
Let L = StabG (Ẑℓ ) = StabG (Ẑm ) for all m ≥ ℓ. The essential index condition implies that StabG (Zm ) has uniformly bounded index in L as m → ∞, so by passing to a further infinite subsequence, we can assume that StabG (Zm ) = StabG (Zℓ ) for all m ≥ ℓ (since L has finitely many subgroups of each finite index).
Claim 5 implies that (up to passing to a subsequence), StabG (Zℓ ) = StabG (Zm ) preserves Zm (and acts cocompactly on Zm ) for all m ≥ ℓ.
For simplicity, we pass to a finite-index subgroup Γ of StabG (Zℓ ) = StabG (Zm ) that preserves V (and necessarily still acts cocompactly on each Zm ). Such a subgroup exists since Zm is contained in only finitely many hyperplanes, and that set of hyperplanes must be preserved under the action of StabG (Zℓ ).
Claim 6. Let ℓ ≤ m < m′. Then for all g ∈ Γ, we have that Um crosses gV and gVm but does not cross V_{m′} or gV_{m′}. Hence, for all g ∈ Γ, the hyperplane gUm crosses V and Vm but not V_{m′}.
Proof of Claim 6. By definition, Zm and gVm (Zm ) are parallel. Hence gZm and ggVm (gZm ) are
parallel. But g ∈ Γ ≤ StabG (Vm ), so gZm = Zm . Hence ggVm (Zm ) is parallel to Zm . So, a
hyperplane U crosses Zm if and only if U crosses ggVm (Zm ). Now, Um crosses V and Vm , and
thus crosses Zm and hence crosses ggVm (Zm ). Thus Um crosses gVm .
On the other hand, suppose Um crosses gV_{m′}. Then since Um crosses V (because it crosses Zm ), we have that Um crosses g−1 V = V and V_{m′}, which contradicts our choice of the Um and V_{m′} when m < m′.
To verify the final assertion, observe that by the first assertion, Um crosses V = g−1 V and g−1 Vm , since g−1 ∈ Γ, so gUm crosses V and Vm . On the other hand, Um does not cross g−1 V_{m′}, so gUm cannot cross V_{m′}.
We can now argue exactly as in the weak finite height argument, with L playing the role
of K2 , to obtain the same contradiction and conclude that A⊥ ∈ F. Specifically, just using
Γ–cocompactness of Z` , we can pass to a subsequence of the Um that all belong to a single
Γ–orbit. These hyperplanes Um all cross V , and the set of m0 such that Um crosses Vm0 is
independent of which Um in the Γ–orbit we have chosen, by Claim 6. As before, this contradicts
our construction of the sequences (Um ), (Vm ), and hence A⊥ ∈ F.
Applying the factor system assumption: If F is a factor system, then since Zm ∈ F for
all m, and Zm ) Zm+1 for all m, we have an immediate contradiction, so A⊥ ∈ F.
Conclusion: We have shown that under any of the additional hypotheses, A⊥ ∈ F when
A ⊂ X is a convex subcomplex. This holds in particular if A ∈ F.
The preceding proposition combines with earlier facts to yield:
Corollary 5.3 (Ascending and descending chains). Let G act properly and cocompactly on X ,
satisfying any of the hypotheses of Proposition 5.1. Suppose that for all N ≥ 1, there exists a 0–
cube x ∈ X so that x lies in at least N elements of F. Then there exist sequences (Fi )i≥1 , (Fi0 )i≥1
of subcomplexes in F so that all of the following hold for all i ≥ 1:
• Fi ⊊ Fi+1 ;
• Fi′ ⊋ F′_{i+1} ;
• Fi′ = Fi⊥ .
Moreover, there exists a 0–cube x that lies in each Fi and each Fi0 .
Proof. Lemma 2.9, cocompactness, and G–invariance of F provide a sequence (Fi ) in F and a point x so that x ∈ Fi for all i and either Fi ⊊ Fi+1 for all i, or Fi ⊋ Fi+1 for all i. For each i, let Fi′ = φ_{Fi}({x} × Fi⊥ ). Proposition 5.1 implies that each Fi′ ∈ F, and Lemma 3.1 implies that (Fi′ ) is an ascending or descending chain according to whether (Fi ) was descending or ascending.
Assume first that Fi ⊊ Fi+1 for all i. Now, if Fi′ = Fi⊥ = F⊥_{i+1} = F′_{i+1}, then by Corollary 3.4, we have Fi = Fi+1 , a contradiction. Hence (Fi′ ) is properly descending, i.e. Fi′ ⊋ F′_{i+1} for all i. The case where (Fi ) is descending is identical. This completes the proof.
6. Proof of Theorem A
We first establish the setup. Recall that X is a proper CAT(0) cube complex with a proper,
cocompact action by a group G. We denote the hyperclosure by F; our goal is to prove that
there exists N < ∞ so that each 0–cube of X is contained in at most N elements of F, under
any of the three additional hypotheses of Theorem A.
If there is no such N , then Corollary 5.3 implies that there exists a 0–cube x ∈ X and a sequence (Fi )i≥1 in F so that x ∈ Fi ⊊ Fi+1 for each i ≥ 1. For the sake of brevity, given any subcomplex E ∋ x, let E⊥ denote the orthogonal complement of E based at x. Corollary 5.3 also says that Fi⊥ ∈ F for all i and Fi⊥ ⊋ F⊥_{i+1} for all i. Proposition 2.7 shows that StabG (Fi⊥ ) acts on Fi⊥ cocompactly for all i.
Let U = ⋃_i Fi and let I = ⋂_i Fi⊥ , and note that U⊥ = I and I⊥ = U . In particular, U⊥⊥ = I⊥ = U and I⊥⊥ = U⊥ = I. From here, we can now prove our main theorem:
Proof of Theorem A. We have already proved the theorem under the rotation hypothesis, in
Corollary 4.5. Hence suppose that either weak finite height holds or the NICC and essential
index conditions both hold, so that Proposition 5.1 implies that U, I ∈ F.
By Corollary 3.4, we have compact convex subcomplexes D, E with U = D⊥ and I = E ⊥ .
Moreover, we can take D ⊂ I and E ⊂ U . Now, Corollary 3.4, Theorem 3.3, and Proposition 5.1
provide, for each i ≥ 1, a compact, convex subcomplex Ci , containing x and contained in Fi , so
that Ci⊥ = Fi⊥ .
Let C1′ = C1 and, for each i ≥ 2, let Ci′ = Hull(C′_{i−1} ∪ Ci ). Then Ci′ is convex, by definition, and compact, being the convex hull of the union of a pair of compact subspaces. Now, Ci′ ⊆ Fi for each i. Indeed, C1′ = C1 ⊆ F1 by construction. Now, by induction, C′_{i−1} ⊂ Fi−1 , so C′_{i−1} ⊂ Fi since Fi−1 ⊂ Fi . But Ci ⊂ Fi by construction, so Ci ∪ C′_{i−1} ⊂ Fi , whence the hull of that union, namely Ci′ , lies in Fi since Fi is convex. Hence Ci ⊆ Ci′ ⊆ Fi for i ≥ 1.
By Lemma 3.1, for each i, Fi⊥ ⊆ (Ci′ )⊥ ⊆ Ci⊥ = Fi⊥ since Ci ⊆ Ci′ . Hence (Ci′ )i≥1 is an ascending sequence of convex, compact subcomplexes, containing x, with (Ci′ )⊥ = Fi⊥ for all i.
Note that ⋂_i (Ci′ )⊥ = ⋂_i Fi⊥ = I. However, I = (⋃_i Ci′ )⊥ . But ⋃_i Ci′ cannot be compact since Ci′ ⊊ C′_{i+1}, and by Corollary 3.4, we can choose E ⊆ ⋃_i Ci′ . Thus E ⊆ C′_R for some R. But by Lemma 3.1 this means that I = E⊥ ⊇ (C′_R )⊥ = F⊥_R , a contradiction. Thus, F must have finite multiplicity, as desired.
We show now that for cube complexes that admit geometric actions, having a factor system
implies the NICC for hyperplanes and essential index conditions for hyperplanes. Thus, any
proof that F forms a factor system for all cocompact cubical groups must necessarily show that
any group acting geometrically on a CAT(0) cube complex satisfies these conditions.
Theorem 6.1. Let X be a CAT(0) cube complex admitting a geometric action by group G. If
F is a factor system, then G satisfies NICC for hyperplanes and the essential index condition.
Proof. Suppose that F is a factor system and at most N elements of F can contain any given x ∈ X (0) . Then for any F ∈ F1 , there are at most N elements of F which can contain F̂ , the StabG (F )–essential core of F . In particular, there are at most N distinct StabG (F̂ )–translates of F . Thus, [StabG (F̂ ) : StabG (F )] ≤ N , verifying the essential index condition.
To verify the NICC for hyperplanes, let H be a hyperplane and let K = StabG (H). Let {gi }_{i=1}^{∞} be a sequence of distinct elements of G so that for n ≥ 1, the subgroup K ∩ ⋂_{i=1}^{n} K^{g_i} is infinite. Consider the hyperplane H, and notice that K^{g_i} is the stabilizer of gi H. Now, consider F1 = gH (g1 H) and inductively define Fk = g_{F_{k−1}}(gH (gk H)). Since F is a factor system, the sets of G–translates of Fk−1 and gH (gk H) have finite multiplicity for all k ≥ 2, and so we can apply the argument of Lemma 1.9 and induction to conclude that StabG (Fk ) is commensurable with Gk = K ∩ ⋂_{i=1}^{k} K^{g_i}, which is infinite by assumption.
Since F is a factor system, there must be some ℓ so that for all k ≥ ℓ, Fk = Fℓ . In this case, Gk and Gℓ are commensurable for all k ≥ ℓ, and in particular Gk and Gk′ are commensurable for all k, k′ ≥ ℓ, and thus G satisfies NICC for hyperplanes.
7. Factor systems and the simplicial boundary
Corollary D follows from Theorem A, Proposition 7.1 and [Hag13, Lemma 3.32]. Specifically,
the first two statements provide a combinatorial geodesic ray representing each boundary simplex
v, and when v is a 0–simplex, [Hag13, Lemma 3.32] allows one to convert the combinatorial
geodesic ray into a CAT(0) ray. Proposition 7.1 is implicit in the proof of [DHS16, Theorem
10.1]; we give a streamlined proof here.
Proposition 7.1. Let X be a CAT(0) cube complex with a factor system F. Then each simplex σ
of ∂4 X is visible, i.e. there exists a combinatorial geodesic ray α such that the set of hyperplanes
intersecting α is a boundary set representing the simplex σ.
Remark 7.2. Proposition 7.1 does not assume anything about group actions on X , but instead
shows that the existence of an invisible boundary simplex is an obstruction to the existence of
a factor system. The converse does not hold: counterexamples can be constructed by beginning
with a single combinatorial ray, and gluing to the nth vertex a finite staircase Sn , along a single
vertex. The staircase Sn is obtained from [0, n]2 by deleting all squares that are strictly above
the diagonal joining (0, 0) to (n, n). In this case, F has unbounded multiplicity, and any factor
system must contain all elements of F exceeding some fixed threshold diameter, so the complex
cannot have a factor system.
Proof of Proposition 7.1. Let σ be a simplex of ∂4 X . Let σ 0 be a maximal simplex containing
σ, spanned by v0 , . . . , vd . The existence of σ 0 follows from [Hag13, Theorem 3.14], which says
that maximal simplices exist since X is finite-dimensional (otherwise, it could not have a factor
system). By Theorem 3.19 of [Hag13], which says that maximal simplices are visible, σ 0 is
visible, i.e. there exists a combinatorial geodesic ray γ such that the set H(γ) of hyperplanes
crossing γ is a boundary set representing σ 0 . We will prove that each 0–simplex vi is visible. It
then follows from [Hag13, Theorem 3.23] that any face of σ 0 (hence σ) is visible.
Let Y be the convex hull of γ. The set of hyperplanes crossing Y is exactly H(γ). Since
Y is convex in X , Lemma 8.4 of [BHS14], which provides an induced factor system on convex
subcomplexes of cube complexes with factor systems, implies that Y contains a factor system.
By Theorem 3.10 of [Hag13], we can write H(γ) = ⊔_{i=1}^{d} Vi , where each Vi is a minimal
boundary set representing the 0–simplex vi . Moreover, up to reordering and discarding finitely
many hyperplanes (i.e. moving the basepoint of γ) if necessary, whenever i < j, each hyperplane
H ∈ Vj crosses all but finitely many of the hyperplanes in Vi .
For each 1 ≤ i ≤ d, minimality of Vi provides a sequence of hyperplanes (V_n^i )_{n≥0} in Vi so that V_n^i separates V_{n−1}^i from V_{n+1}^i for n ≥ 1 and so that any other U ∈ Vi separates V_m^i , V_n^i for some m, n, by the proof of [Hag13, Lemma 3.7] or [CFI16, Lemma B.6] (one may have to discard finitely many hyperplanes from Vi for this to hold; this replaces γ with a sub-ray and shrinks Y).
We will show that, after discarding finitely many hyperplanes from H(γ) if necessary, every element of Vi crosses every element of Vj , whenever i ≠ j. Since every element of Vi either lies in (V_n^i )_n or separates two elements of that sequence, it follows that U and V cross whenever U ∈ Vi , V ∈ Vj and i ≠ j. Then, for any i, choose n ≥ 0 and let H = ⋂_{j≠i} V_n^j . Projecting γ to H yields a geodesic ray in Y, all but finitely many of whose dual hyperplanes belong to Vi , as required. Hence it suffices to show that V_n^i and V_m^j cross for all m, n whenever i ≠ j.
Fix j ≤ d and i < j. For each n ≥ 0, let m(n) ≥ 0 be minimal so that V_{m(n)}^j fails to cross V_n^i . Note that we may assume that this is defined: if V_n^i crosses all V_m^j , then, since V_m^j crosses all but finitely many of the hyperplanes from Vi , it crosses V_k^i for k ≫ n. Since it also crosses V_n^i , it must also cross V_r^i for all n ≤ r ≤ k. By discarding V_k^i for k ≤ n we complete the proof. Now suppose that m(n) is bounded as n → ∞. Then there exists N so that V_n^i , V_m^j cross whenever m, n ≥ N , and we are done, as before.
Hence suppose that m(n) → ∞ as n → ∞. In other words, for all m ≥ 0, there exists n ≥ 0 so that V_m^j crosses V_k^i if and only if k ≥ n. Choose M ≥ 0 and choose n maximal with m(n) < M . Then all of the hyperplanes V_{m(k)}^j with k ≤ n cross V_k^i , . . . , V_n^i but do not cross V_t^i for t < k. Hence the subcomplexes g_{V_n^i}(V_k^i ), k ≤ n are all different: g_{V_n^i}(V_k^i ) intersects V_{m(k)}^j but g_{V_n^i}(V_{k−1}^i ) does not. On the other hand, since V_k^i separates V_ℓ^i from V_n^i when ℓ < k < n, every hyperplane crossing V_n^i and V_ℓ^i crosses V_k^i , so g_{V_n^i}(V_k^i ) ∩ g_{V_n^i}(V_ℓ^i ) ≠ ∅. Thus the factor system on Y has multiplicity at least n. But since m(n) → ∞, we could choose n arbitrarily large in the preceding argument, violating the definition of a factor system.
Proof of Corollary E. If γ is a CAT(0) geodesic, then it can be approximated, up to Hausdorff
distance depending on dim X , by a combinatorial geodesic, so assume that γ is a combinatorial
geodesic ray. By Corollary D, the simplex of ∂4 X represented by γ is spanned by 0–simplices
v0 , . . . , vd with each vi represented by a combinatorial geodesic ray γi . Theorem 3.23 of [Hag13] says that X contains a cubical orthant ∏_i γi′ , where each γi′ represents vi . Hence Hull(∪i γi′ ) = ∏_i Hull(γi′ ). Up to truncating an initial subpath of γ, we have that γ is parallel into Hull(∪i γi′ )
(and thus lies in a finite neighbourhood of it). The projection of the original CAT(0) geodesic
approximated by γ to each Hull(γi′ ) is a CAT(0) geodesic representing vi . The product of these
geodesics is a combinatorially isometrically embedded (d + 1)-dimensional orthant subcomplex
of Y containing (the truncated) CAT(0) geodesic in a regular neighbourhood.
In the presence of a proper, cocompact group action, we can achieve full visibility under
slightly weaker conditions than those that we have shown suffice to obtain a factor system:
Proposition 7.3. Let X be a proper CAT(0) cube complex on which the group G acts properly
and cocompactly. Suppose that the action of G on X satisfies NICC for hyperplanes. Then each
simplex σ of ∂4 X is visible, i.e. there exists a combinatorial geodesic ray α such that the set of
hyperplanes intersecting α is a boundary set representing the simplex σ.
Proof. We adopt the same notation as in the proof of Proposition 7.1. As in that proof, if ∂4 X
contains an invisible simplex, then we have two infinite sets {Vi }i≥0 , {Hj }j≥0 of hyperplanes
with the following properties:
• for each i ≥ 1, the hyperplane Hi separates Hi−1 from Hi+1 ;
• for each j ≥ 1, the hyperplane Vj separates Vj−1 from Vj+1 ;
• there is an increasing sequence (ij ) so that for all j, Vj crosses Hi if and only if i ≤ ij .
This implies that for all i ≥ 1, the subcomplex Fi = gH0 (gH1 (· · · (gHi−1 (Hi )) · · · )) is unbounded.
Since StabG (Fi ) acts cocompactly, by Proposition 2.7, StabG (Fi ) is infinite. By Lemma 1.9, StabG (Fi ) is commensurable with Ki = ⋂_{j=1}^{i} StabG (Hj ), and so by NICC, there exists N so that Ki is commensurable with KN for all i ≥ N . Thus, after passing to a subsequence, we see that for all i, the Ki –essential core of Fi is a fixed nonempty (indeed, unbounded) convex subcomplex F̂ of H0 .
Now, for each j, the hyperplane Vj cannot cross F̂ , because F̂ lies in Fi for all i, and Vj fails to cross Hi for all sufficiently large i. Moreover, this shows that F̂ must lie in the halfspace associated to Vj that contains Vj+1 . But since this holds for all j, we have that F̂ is contained in an infinite descending chain of halfspaces, contradicting that F̂ ≠ ∅.
References
[ABD17] Carolyn Abbott, Jason Behrstock, and Matthew Gentry Durham, Largest acylindrical actions and stability in hierarchically hyperbolic groups, arXiv preprint arXiv:1705.06219 (2017).
[Ago13] Ian Agol, The virtual Haken conjecture, Doc. Math. 18 (2013), 1045–1087, With an appendix by Agol, Daniel Groves, and Jason Manning. MR 3104553
[BH99] Martin R. Bridson and André Haefliger, Metric spaces of non-positive curvature, Grundlehren der Mathematischen Wissenschaften [Fundamental Principles of Mathematical Sciences], vol. 319, Springer-Verlag, Berlin, 1999. MR 1744486 (2000k:53038)
[BH16] Jason Behrstock and Mark F Hagen, Cubulated groups: thickness, relative hyperbolicity, and simplicial boundaries, Groups Geom. Dyn. 10 (2016), no. 2, 649–707. MR 3513112
[BHS14] Jason Behrstock, Mark F Hagen, and Alessandro Sisto, Hierarchically hyperbolic spaces I: curve complexes for cubical groups, arXiv:1412.2171 (2014).
[BHS15] Jason Behrstock, Mark F Hagen, and Alessandro Sisto, Hierarchically hyperbolic spaces II: combination theorems and the distance formula, arXiv:1509.00632 (2015).
[BKS08] Mladen Bestvina, Bruce Kleiner, and Michah Sageev, Quasiflats in CAT(0) complexes, arXiv preprint arXiv:0804.2619 (2008).
[BM97] Marc Burger and Shahar Mozes, Finitely presented simple groups and products of trees, Comptes Rendus de l'Académie des Sciences-Series I-Mathematics 324 (1997), no. 7, 747–752.
[BW12] Nicolas Bergeron and Daniel T. Wise, A boundary criterion for cubulation, Amer. J. Math. 134 (2012), no. 3, 843–859. MR 2931226
[CFI16] Indira Chatterji, Talia Fernós, and Alessandra Iozzi, The median class and superrigidity of actions on CAT(0) cube complexes, Journal of Topology 9 (2016), no. 2, 349–400.
[Che00] Victor Chepoi, Graphs of some CAT(0) complexes, Advances in Applied Mathematics 24 (2000), no. 2, 125–179.
[CS11] Pierre-Emmanuel Caprace and Michah Sageev, Rank rigidity for CAT(0) cube complexes, Geom. Funct. Anal. 21 (2011), no. 4, 851–891. MR 2827012
[DHS16] Matthew G Durham, Mark F Hagen, and Alessandro Sisto, Boundaries and automorphisms of hierarchically hyperbolic spaces, arXiv preprint arXiv:1604.01061 (2016).
[Gen16] Anthony Genevois, Acylindrical action on the hyperplanes of a CAT(0) cube complex, arXiv preprint arXiv:1610.08759 (2016).
[GMRS98] Rita Gitik, Mahan Mitra, Eliyahu Rips, and Michah Sageev, Widths of subgroups, Transactions of the American Mathematical Society 350 (1998), no. 1, 321–329.
[Hag07] Frédéric Haglund, Isometries of CAT(0) cube complexes are semi-simple, arXiv preprint arXiv:0705.3386 (2007).
[Hag13] Mark F Hagen, The simplicial boundary of a CAT(0) cube complex, Algebraic & Geometric Topology 13 (2013), no. 3, 1299–1367.
[Hag14] Mark F Hagen, Weak hyperbolicity of cube complexes and quasi-arboreal groups, J. Topol. 7 (2014), no. 2, 385–418. MR 3217625
[Hua14] Jingyin Huang, Top dimensional quasiflats in CAT(0) cube complexes, arXiv:1410.8195 (2014).
[Hua16] Jingyin Huang, Commensurability of groups quasi-isometric to RAAG's, arXiv:1603.08586 (2016).
[HW15] Mark F. Hagen and Daniel T. Wise, Cubulating hyperbolic free-by-cyclic groups: the general case, Geom. Funct. Anal. 25 (2015), no. 1, 134–179. MR 3320891
[JW09] David Janzen and Daniel T Wise, A smallest irreducible lattice in the product of trees, Algebraic & Geometric Topology 9 (2009), no. 4, 2191–2201.
[JW17] Kasia Jankiewicz and Daniel T Wise, Cubulating small cancellation free products, Preprint.
[NR97] Graham Niblo and Lawrence Reeves, Groups acting on CAT(0) cube complexes, Geom. Topol. 1 (1997), approx. 7 pp. (electronic). MR 1432323
[NR98] G. A. Niblo and L. D. Reeves, The geometry of cube complexes and the complexity of their fundamental groups, Topology 37 (1998), no. 3, 621–633. MR 1604899
[OW11] Yann Ollivier and Daniel T. Wise, Cubulating random groups at density less than 1/6, Trans. Amer. Math. Soc. 363 (2011), no. 9, 4701–4733. MR 2806688
[Rat07] Diego Rattaggi, A finitely presented torsion-free simple group, Journal of Group Theory 10 (2007), no. 3, 363–371.
[Sag95] Michah Sageev, Ends of group pairs and non-positively curved cube complexes, Proc. London Math. Soc. (3) 71 (1995), no. 3, 585–617. MR 1347406
[Sag14] Michah Sageev, CAT(0) cube complexes and groups, Geometric group theory, IAS/Park City Math. Ser., vol. 21, Amer. Math. Soc., Providence, RI, 2014, pp. 7–54. MR 3329724
[Spr17] Davide Spriano, Hyperbolic HHS I: Factor systems and quasi-convex subgroups, arXiv preprint arXiv:1711.10931 (2017).
[SW05] Michah Sageev and Daniel T. Wise, The Tits alternative for CAT(0) cubical complexes, Bull. London Math. Soc. 37 (2005), no. 5, 706–710. MR 2164832
[Wis] Daniel T Wise, The structure of groups with a quasiconvex hierarchy. Preprint (2011).
[Wis96] Daniel T Wise, Non-positively curved squared complexes, aperiodic tilings, and non-residually finite groups, Princeton University, 1996.
[Wis04] Daniel T Wise, Cubulating small cancellation groups, Geometric & Functional Analysis GAFA 14 (2004), no. 1, 150–214.
[Xie05] Xiangdong Xie, The Tits boundary of a CAT(0) 2-complex, Transactions of the American Mathematical Society 357 (2005), no. 4, 1627–1661.
DPMMS, University of Cambridge, Cambridge, UK
Current address: School of Mathematics, University of Bristol, Bristol, UK
E-mail address: [email protected]
Mathematics Department, Bard College at Simon’s Rock, Great Barrington, Massachusetts,
USA
E-mail address: [email protected]
arXiv:1609.00830v1 [] 3 Sep 2016
COHEN-MACAULAY LEXSEGMENT COMPLEXES IN
ARBITRARY CODIMENSION
HASSAN HAGHIGHI, SIAMAK YASSEMI, AND RAHIM ZAARE-NAHANDI
Abstract. We characterize pure lexsegment complexes which are Cohen-Macaulay in arbitrary codimension. More precisely, we prove that any lexsegment complex is Cohen-Macaulay if and only if it is pure and its one-dimensional links are connected, and a lexsegment flag complex is Cohen-Macaulay
if and only if it is pure and connected. We show that any non-Cohen-Macaulay
lexsegment complex is a Buchsbaum complex if and only if it is a pure disconnected flag complex. For t ≥ 2, a lexsegment complex is strictly Cohen-Macaulay in codimension t if and only if it is the join of a lexsegment pure disconnected flag complex with a (t − 2)-dimensional simplex. When the Stanley-Reisner ideal of a pure lexsegment complex is not quadratic, the complex is
Cohen-Macaulay if and only if it is Cohen-Macaulay in some codimension.
Our results are based on a characterization of Cohen-Macaulay and Buchsbaum lexsegment complexes by Bonanzinga, Sorrenti and Terai.
1. Introduction
Primary significance of lexsegment ideals comes from Macaulay’s result that for
any monomial ideal there is a unique lexsegment monomial ideal with the same
Hilbert function (see [10] and [2]). Recent studies on the topic began with the work
of Bigatti [3] and Hulett [6] on extremal properties of lexsegment monomial ideals.
Aramova, Herzog and Hibi showed that for any squarefree monomial ideal there
exists a squarefree lexsegment monomial ideal with the same Hilbert function [1].
In this direction, some characterizations of pure, Cohen-Macaulay and Buchsbaum
complexes associated with squarefree lexsegment ideals were given by Bonanzinga,
Sorrenti and Terai in [5]. As a generalization of Cohen-Macaulay and Buchsbaum
complexes, CMt complexes were studied in [8]. These are pure simplicial complexes which are Cohen-Macaulay in codimension t. Naturally, one may ask for
a characterization of CMt lexsegment complexes. In this paper, using the behavior of the CMt property under the operation of join of complexes [9], we first provide
some modifications of the results of Bonanzinga, Sorrenti and Terai in [5]. Then,
we characterize CMt lexsegment simplicial complexes. Our characterizations are
mostly in terms of purity and connectedness of certain subcomplexes. In particular, it turns out that a lexsegment complex is Cohen-Macaulay if and only if it is pure
and its one dimensional links are connected while for lexsegment flag complexes,
2000 Mathematics Subject Classification. 13H10, 13D02.
Key words and phrases. Squarefree lexsegment ideal, Cohen-Macaulay complex, Buchsbaum
complex, flag complex, CMt complex.
H. Haghighi was supported in part by a grant from K. N. Toosi University of Technology.
S. Yassemi and R. Zaare-Nahandi were supported in part by a grant from the University of
Tehran.
Emails: [email protected], [email protected], [email protected].
the Cohen-Macaulay property is equivalent to purity and connectedness of the simplicial complex. The Buchsbaum property is equivalent to being Cohen-Macaulay
or a pure flag complex. A non-Buchsbaum complex is CMt , t ≥ 2, if and only
if it is the join of a Buchsbaum complex with a (t − 2)-simplex. It also appears
that any CMt lexsegment complex for which the associated Stanley-Reisner ideal
is generated in degree d ≥ 3, is indeed Cohen-Macaulay. Our proofs are heavily
based on the results in [4] and particularly, on results in [5].
2. Preliminaries and notations
Let R = k[x1 , · · · , xn ] be the ring of polynomials in n variables over a field k
with standard grading. Let Md be the set of all squarefree monomials of degree d
in R. Consider the lexicographic ordering of monomials in R induced by the order
x1 > x2 > · · · > xn . A squarefree lexsegment monomial ideal in degree d is an ideal
generated by a lexsegment L(u, v) = {w ∈ Md : u ≥ w ≥ v} for some u, v ∈ Md
with u ≥ v.
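To fix notation, here is a small worked instance of this definition (our own illustration, not taken from [1] or [5]): for n = 4 and d = 2, the lexicographic order on M2 is
x1 x2 > x1 x3 > x1 x4 > x2 x3 > x2 x4 > x3 x4 ,
so
L(x1 x3 , x2 x4 ) = {x1 x3 , x1 x4 , x2 x3 , x2 x4 },
and (L(x1 x3 , x2 x4 )) ⊂ k[x1 , . . . , x4 ] is a squarefree lexsegment ideal generated in degree 2.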
Let ∆ be a simplicial complex on [n] = {1, · · · , n} with the Stanley-Reisner ring
k[∆]. Recall that for any face F ∈ ∆, the link of F in ∆ is defined as follows:
lk ∆ (F ) = {G ∈ ∆|G ∪ F ∈ ∆, G ∩ F = ∅}.
In the sequel by a complex we will always mean a simplicial complex. When
a complex has a quadratic Stanley-Reisner ideal, that is, it is the independence
complex of a graph, then it is called a flag complex.
A complex is said to satisfy the S2 condition of Serre if k[∆] satisfies the S2
condition. Using [13, Lemma 3.2.1] and Hochster’s formula on local cohomology
modules, a pure (d−1)-dimensional complex ∆ satisfies the S2 condition if and only
if H̃0 (link∆ (F ); k) = 0 for all F ∈ ∆ with #F ≤ d − 2 (see [14, page 4]). Therefore, ∆ is S2 if and only if it is pure and link∆ (F ) is connected whenever F ∈ ∆ and
dim(link∆ (F )) ≥ 1.
Let t be an integer 0 ≤ t ≤ dim(∆) − 1. A pure complex ∆ is called CMt , or
Cohen-Macaulay in codimension t, over k if the complex lk ∆ (F ) is Cohen-Macaulay
over k for all F ∈ ∆ with #F ≥ t. It is clear that for any j ≥ i, CMi implies CMj .
For t ≥ 1, a CMt complex is said to be strictly CMt if it is not CMt−1 . A squarefree
monomial ideal is called CMt if the associated simplicial complex is CMt . Note that
from the results by Reisner [11] and Schenzel [13] it follows that CM0 is the same
as Cohen-Macaulayness and CM1 is identical with the Buchsbaum property.
A complex ∆ is said to be lexsegment if the associated Stanley-Reisner ideal
I∆ is a lexsegment ideal. Therefore, ∆ is a CMt lexsegment complex if I∆ is a
lexsegment ideal and ∆ is CMt .
3. CMt lexsegment complexes
The following result plays a significant role in the study of CMt lexsegment
complexes.
Theorem 3.1. [9, Theorem 3.1] Let ∆1 and ∆2 be two complexes of dimensions
r1 − 1 and r2 − 1, respectively. Then
(i) The join complex ∆1 ∗ ∆2 is Cohen-Macaulay if and only if ∆1 and ∆2 are
both Cohen-Macaulay.
(ii) If ∆1 is Cohen-Macaulay and ∆2 is CMt for some t ≥ 1, then ∆1 ∗ ∆2 is
CMr1 +t (independent of r2 ). This is sharp, i.e., if ∆2 is strictly CMt , then
∆1 ∗ ∆2 is strictly CMr1 +t .
Let d ≥ 2 be an integer and let u ≥ v be in Md , u = xi xi2 · · · xid with i < i2 <
· · · < id . Let ∆ be a complex on [n] such that I∆ = (L(u, v)) ⊂ k[x1 , · · · , xn ].
Let ∆1 be the simplex on [i − 1] = {1, · · · , i − 1}, and let ∆2 be the complex of
the lexsegment ideal generated by L(u, v) as an ideal in k[xi , · · · , xn ]. Then it is
immediate that ∆ = ∆1 ∗ ∆2 . Observe that ∆ is disconnected if and only if ∆1 = ∅
and ∆2 is disconnected. Similarly, ∆ is pure if and only if ∆2 is pure. Furthermore,
by Theorem 3.1 we have the following corollary.
Corollary 3.2. With the notation and assumption as above the following statements hold:
(i) The complex ∆ is Cohen-Macaulay if and only if ∆2 is Cohen-Macaulay.
In other words, to check the Cohen-Macaulay property of ∆ one may always
assume i = 1.
(ii) Assume that ∆2 is strictly CMt for some t ≥ 1. Then ∆ is strictly CMt+i .
In particular, a characterization of CMt squarefree lexsegment ideals with
i = 1 uniquely provides a characterization of CMt lexsegment ideals.
Remark 3.3. Corollary 3.2 substantially simplifies the statements and proofs of
[4] and [5].
Based on Corollary 3.2, unless explicitly specified, we will assume that i = 1.
Bonanzinga, Sorrenti and Terai [5] have given a characterization of Cohen-Macaulay squarefree lexsegment ideals in degree d ≥ 2. We give an improved
version of their result. Our proof is extracted from their proof.
Theorem 3.4 (An improved version of [5, Theorem 3.4]). Let u > v be in Md ,
I∆ = (L(u, v)). Then the following statements are equivalent:
(i) ∆ is shellable;
(ii) ∆ is Cohen-Macaulay;
(iii) ∆ is S2 ;
(iv) ∆ is pure and lk ∆ (F ) is connected for all F ∈ ∆ with dim(lk ∆ (F )) = 1.
Proof. Clearly, (iii)⇒(iv). Thus, by [5, Theorem 3.4] of Bonanzinga, Sorrenti and
Terai, we only need to prove (iv) ⇒ (i). As mentioned above, we may assume i = 1.
Checking all cases from the proof of (iii) ⇒ (iv) in [5, Theorem 3.4], it is revealed that when ∆ is pure, if u and v are not in the list (1),...,(7) in their theorem, then one of the following cases (a) (with d ≥ 3), (b), or (c), specified in their proof, may occur:
(a) lk ∆ ([n] \ {1, 2, d + 1, d + 2}) = ⟨{1, 2}, {d + 1, d + 2}⟩,
(b) lk ∆ ({n − d + 1, · · · , k̂, \widehat{k + 1}, · · · , n}) = ⟨{1, 2}, {1, 3}, · · · , {1, n − d}, {k, k + 1}⟩, where n − d + 1 ≤ k ≤ n − 1,
(c) lk ∆ ({n − d + 3, · · · , n}) = ⟨{1, 2}, · · · , {1, n − d − 1}, {n − d, n − d + 1}, {n − d, n − d + 2}, {n − d + 1, n − d + 2}⟩.
In all these cases, lk ∆ (F ) is disconnected for some F ∈ ∆ with dim(lk ∆ (F )) = 1.
Therefore, assuming (iv) above, u and v will be in the list (1),...,(7) in their theorem.
Hence, by the proof of (iv) ⇒ (i) in their theorem, ∆ is shellable.
Remark 3.5. It is known that for some flag complexes, including the independence complex of a bipartite graph or a chordal graph, the conditions S2 and Cohen-Macaulayness are equivalent [7]. Nevertheless, the condition (iv) is in general weaker than the S2 property. For example, if ∆ = ⟨{1, 2, 3, 4}, {1, 5, 6, 7}⟩, then ∆ satisfies the condition (iv) but does not satisfy the S2 property. Indeed, any one-dimensional face has a connected one-dimensional link, but lk ∆ ({1}) is disconnected of dimension 2.
For d = 2, the statement in Theorem 3.4(iv) could be relaxed as follows. The
assumption i = 1 is still in order.
Theorem 3.6. Let u > v be in M2 , I∆ = (L(u, v)). Then the following statements
are equivalent:
(i) ∆ is shellable;
(ii) ∆ is Cohen-Macaulay;
(iii) ∆ is S2 ;
(iv) ∆ is pure and connected.
Proof. We only need to check (iv) ⇒ (i). Once again, checking the proof of [5,
Theorem 3.4], the case (a) could not occur for d = 2. In case (b), it follows
that u = x1 xn−1 , v = xn−2 xn with n > 3. Then, ∆ = ⟨{1, 2}, {n − 1, n}⟩ is disconnected. In case (c), it turns out that u = x1 xn−2 , v = xn−2 xn−1 with n > 4. Then, ∆ = ⟨{1, 2}, {n−2, n}, {n−1, n}⟩ is again disconnected. Therefore, assuming
purity and connectedness of ∆, u and v will be in the list (1),...,(7) of [5, Theorem
3.4]. Hence, by the proof of (iv) ⇒ (i) of the same theorem, ∆ is shellable.
Let u > v be in Md , I∆ = (L(u, v)). Bonanzinga, Sorrenti and Terai in [5] have
shown that for d ≥ 3, ∆ is Buchsbaum if and only if it is Cohen-Macaulay. The
same proof implies that this is the case for CMt lexsegment complexes.
Proposition 3.7. Let u > v be in Md with d ≥ 3 and I∆ = (L(u, v)). Then for
any t ≥ 0, ∆ is CMt if and only if ∆ is Cohen-Macaulay.
Proof. Clearly any Cohen-Macaulay complex is CMt . Assume that ∆ is CMt . Then
as noticed in the proof of [5, Theorem 4.1], for d ≥ 3, depth k[∆] ≥ 2. Hence ∆ is S2 .
Therefore, by Theorem 3.6, ∆ is Cohen-Macaulay.
By Proposition 3.7, to check the CMt property with t ≥ 1 for squarefree lexsegment ideals, we should restrict to the case d = 2.
We now drop the assumption i = 1 and assume that u > v are in M2 , u = xi xj ,
v = xr xs with i < j and r < s. Let I∆ = (L(u, v)) ⊂ k[x1 , · · · , xn ]. Recall that if ∆1 is the simplex on [i−1] and ∆2 is the complex of the lexsegment ideal generated by L(u, v) as an ideal in k[xi , · · · , xn ], then ∆ = ∆1 ∗ ∆2 .
Theorem 3.8. Let u > v be in M2 , I∆ = (L(u, v)). Assume that ∆ is not Cohen-Macaulay. Then the following statements are equivalent:
(1) ∆ is Buchsbaum;
(2) One of the following conditions hold;
(a) u = x1 xn−2 , v = xn−2 xn−1 , n > 4;
(b) u = x1 xn−1 , v = xn−2 xn , n > 3.
(3) ∆1 = ∅ and ∆2 is pure.
Furthermore, in either of these equivalent cases, ∆ is disconnected.
Proof. The equivalence of (1) and (2) is given in [4, Theorem 2.1]. Assuming (2),
then ∆1 = ∅, and as shown in the proof of Theorem 3.6, ∆ = ∆2 is pure in both
cases (a) and (b). Thus (2) ⇒ (3). Now if ∆1 = ∅ and ∆2 is pure, as it was
observed in the proof of Theorem 3.6, the only cases where ∆ is pure but not
Cohen-Macaulay are the cases (a) and (b) above, which settles (3) ⇒ (2). For
the last statement, observe that if ∆ = ∆2 also happens to be connected, then by
Theorem 3.6, ∆ is Cohen-Macaulay. But this is contrary to the assumption. Hence
∆ is disconnected.
Theorem 3.9. Let u > v be in M2 , I∆ = (L(u, v)). Let ∆ = ∆1 ∗ ∆2 be as above.
Let t ≥ 2. Assume that ∆ is not CMt−1 . Then the following are equivalent:
(i) ∆ is CMt ;
(ii) One of the following conditions hold;
(a) u = xt xn−2 , v = xn−2 xn−1 , n > 4;
(b) u = xt xn−1 , v = xn−2 xn , n > 3.
(iii) ∆1 is of dimension t − 2 and ∆2 is pure.
(iv) ∆1 is of dimension t − 2 and ∆2 is Buchsbaum.
Proof. The equivalence of (ii), (iii) and (iv) follows by applying Theorem 3.8 to
∆2 . Assuming (iv), the statement (i) follows by Corollary 3.2. Now assume (i).
Let u = xi xj , v = xr xs . Then since ∆ is CMt , it is pure, and hence ∆2 is pure. But
since ∆ is not Cohen-Macaulay, ∆2 can not be Cohen-Macaulay. Thus by Theorem
3.8, ∆2 is Buchsbaum but not Cohen-Macaulay. Now since t ≥ 2 and ∆ is not
CMt−1 , it follows that ∆1 ≠ ∅ and i ≥ 2. Hence by Corollary 3.2, ∆ = ∆1 ∗ ∆2 is CMi but not
CMi−1 . Therefore, i = t and ∆1 is of dimension t − 2.
Acknowledgments
Part of this note was prepared during a visit of Institut de Mathématiques de
Jussieu, Université Pierre et Marie Curie by the second and the third author. This
visit was supported by the Center for International Studies & Collaborations (CISSC)
and French Embassy in Tehran in the framework of the Gundishapur project
27462PL on the Homological and Combinatorial Aspects of Commutative Algebra. The third author has been supported by research grant no. 4/1/6103011 of
University of Tehran.
References
1. A. Aramova, J. Herzog and T. Hibi, Squarefree lexsegment ideals, Math. Z. 228 (1998), 353–
378.
2. D. Bayer, The division algorithm and the Hilbert scheme, Ph.D. Thesis, Harvard University,
Boston (1982).
3. A. Bigatti, Upper bounds for the Betti numbers of a given Hilbert function, Comm. Algebra
21 (1993), 2317–2334.
4. V. Bonanzinga and L. Sorrenti, Cohen-Macaulay squarefree lexsegment ideals generated in
degree 2, Contemporary Mathematics 502 (2009), 25–31.
5. V. Bonanzinga, L. Sorrenti and N. Terai, Pure and Cohen-Macaulay simplicial complexes
associated with squarefree lexsegment ideals, Commun. Algebra 40 (2012), 4195–4214.
6. H. A. Hulett, Maximum Betti numbers of homogenous ideals with a given Hilbert function,
Comm. Algebra 21 (1993), 2335–2350.
7. H. Haghighi, S. Yassemi and R. Zaare-Nahandi, Bipartite S2 graphs are Cohen-Macaulay,
Bull. Math. Soc. Sci. Roumanie. 53 (2010), 125–132.
8. H. Haghighi, S. Yassemi and R. Zaare-Nahandi, A generalization of k-Cohen-Macaulay complexes, Ark. Mat. 50 (2012), 279–290.
9. H. Haghighi, S. Yassemi and R. Zaare-Nahandi, Cohen-Macaulay bipartite graphs in arbitrary
codimension, Amer. Math. Soc. 143 No. 5 (2015), 1981–1989
10. F. S. Macaulay, Some properties of enumeration in the theory of modular systems, Proc.
London Math. Soc. 26 (1927) 531–555.
11. G. Reisner, Cohen-Macaulay quotients of polynomial rings, Adv. Math. 21(1976), 30–49.
12. P. Schenzel, Dualisierende Komplexe in der lokalen Algebra und Buchsbaum Ringe, LNM
907, Springer, 1982.
13. P. Schenzel, On the number of faces of simplicial complexes and the purity of Frobenius,
Math. Z. 178 (1981), 125–142.
14. N. Terai, Alexander duality in Stanley-Reisner rings, in “Affine Algebraic Geometry (T. Hibi,
ed.)”, Osaka University Press, Osaka 2007, 449–462.
Hassan Haghighi, Department of Mathematics, K. N. Toosi University of Technology,
Tehran, Iran.
Siamak Yassemi, School of Mathematics, Statistics & Computer Science, University
of Tehran, Tehran Iran.
Rahim Zaare-Nahandi, School of Mathematics, Statistics & Computer Science, University of Tehran, Tehran, Iran.
arXiv:1711.04036v1 [] 10 Nov 2017
Physiological and behavioral profiling for nociceptive
pain estimation using personalized multitask learning
Daniel Lopez-Martinez1,2 , Ognjen (Oggi) Rudovic2 , Rosalind Picard2
1
Harvard-MIT Health Sciences and Technology, [email protected]
2
Affective Computing group, MIT Media Lab, Massachusetts Institute of Technology
Abstract
Pain is a subjective experience commonly measured through patient’s self report.
While there exist numerous situations in which automatic pain estimation methods
may be preferred, inter-subject variability in physiological and behavioral pain
responses has hindered the development of such methods. In this work, we address
this problem by introducing a novel personalized multitask machine learning
method for pain estimation based on individual physiological and behavioral pain
response profiles, and show its advantages in a dataset containing multimodal
responses to nociceptive heat pain.
1 Introduction
Pain is an unpleasant sensory and emotional experience associated with actual or potential tissue
damage with sensory, emotional, cognitive and social components [1]. In the clinical and research
settings, pain intensity is measured using patient’s self-reported pain rating scales such as the visual
analog scale (VAS) [2]. Unfortunately, these self-report measures only work when the subject is
sufficiently alert and cooperative, and hence they lack utility in multiple situations (e.g. during
drowsiness) and patient populations (e.g. patients with dementia or paralysis).
To circumvent these limitations, automatic methods for pain estimation based on physiological
autonomic signals [3, 4] and/or facial expressions [5, 6] have been proposed. However, inter-subject
variability in pain responses has limited the ability for the automated methods to generalize across
people. For example, autonomic responses captured in signals such as heart rate and skin conductance
have been found to be correlated only moderately with self-reported pain [7]. A large part of the
variance may be explained by inter-individual variability in autonomic reactivity, independent from
stimulus intensity, and also by inter-individual variability in brain activity within structures involved
in regulation of nociceptive autonomic responses [7]. Similarly, facial responses [8] also vary across
individuals [9, 10]: Some people show strong facial expressions for very low pain intensities, while
others show little or no expressiveness. Therefore, it is important to account for individual differences.
While several recent works have shown the advantages of personalization for pain estimation, both
from physiological signals [3] and from face images [6, 5], none of these approaches explored the
effect on pain estimation performance of clustering subjects into different clusters or profiles. Hence,
in this work we investigate the grouping of subjects into different profiles based on their unique
multimodal physiological and behavioral (facial expression) responses to pain. These profiles are
then used to define the structure of a multi-task neural network (MT-NN), where each task in the
MT-NN corresponds to a distinct profile. The advantages of this work are two-fold: (i) we show that
the proposed multimodal approach achieves better performance than single-modality approaches, and
(ii) we also show that by accounting for the different profiles using multi-task machine learning we
achieve further improvements in pain estimation compared to single-task (population level) models.
31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.
2 Dataset, Methods and Experiments
We used the publicly available BioVid Heat Pain database [11], which contains 87 participants
with balanced gender ratio. The experimental setup consisted of a thermode that was used for pain
elicitation on the right arm. Before the data recording started, each subject’s individual pain threshold
(the temperature for which the participant’s sensing changes to pain) and tolerance threshold (the
temperature at which the pain becomes unacceptable) were determined. These thresholds were used
as the temperatures for the lowest and highest pain levels together with two additional intermediate
levels, thus obtaining four equally distributed pain levels (P = {1, 2, 3, 4}). For each subject, pain
stimulation was applied 20 times for each of the 4 calibrated temperatures, for 4 seconds followed
by a recovery phase (8-12 seconds, randomized). Each recording lasted 25 min on average. The
following signals were recorded: (1) skin conductance, (2) electrocardiogram, and (3) face videos.
2.1 Signal processing and feature extraction
Our pain estimation approach is posed as a continuous regression problem. Hence, for each recording
we sample windows of duration 6 seconds with a step size of 0.5 seconds, and assign the label
corresponding to the heat applied at the beginning of the window. To build the profiles, we used
windows of duration 8 seconds, as described in Sec. 2.2. In what follows, we describe the features
extracted for each window.
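As a concrete illustration, a minimal sketch of this windowing step is given below (the sampling rate, array names and helper functions are our own assumptions, not part of the BioVid release):

```python
import numpy as np

def make_windows(n_samples, fs, win_sec=6.0, step_sec=0.5):
    """Start/end sample indices of sliding windows of win_sec seconds, step_sec apart."""
    win, step = int(win_sec * fs), int(step_sec * fs)
    starts = np.arange(0, n_samples - win + 1, step)
    return [(int(s), int(s) + win) for s in starts]

def label_windows(windows, stimulus_label):
    """Assign to each window the pain level active at its first sample."""
    return [int(stimulus_label[start]) for start, _ in windows]
```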
Skin conductance (SC): We used nonnegative deconvolution [12] to decompose the SC signal into
its tonic and phasic components, and to extract the phasic driver, a correlate of the activity of the
sudomotor nerves [13], such that SC = SCtonic + SCphasic = SCtonic + Driverphasic ∗ IRF, where IRF
is the impulse response function of the SC response (SCR). Then, for each window, we extracted
the following features: (1) the number of SCRs with onset in the window, (2) the sum of amplitudes
of all reconvolved SCRs (SCphasic ) with onset in the window, (3) average phasic driver activity, (4)
maximum phasic driver activity, (5) integrated phasic driver activity, and (6) mean tonic activity.
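A schematic version of this feature computation is shown below; it assumes that the tonic component, the phasic driver and the individual SCRs (onset indices and amplitudes) have already been obtained from a separate deconvolution step, so the inputs and their names are illustrative rather than the exact pipeline of [12]:

```python
import numpy as np

def sc_window_features(driver, tonic, scr_onsets, scr_amplitudes, w_start, w_end):
    """Six SC features for one window, given a phasic driver, a tonic component,
    and SCR onset indices/amplitudes from a (separate) deconvolution step."""
    in_win = [(o, a) for o, a in zip(scr_onsets, scr_amplitudes) if w_start <= o < w_end]
    d = driver[w_start:w_end]
    return np.array([
        len(in_win),                      # (1) number of SCRs with onset in the window
        sum(a for _, a in in_win),        # (2) sum of amplitudes of those SCRs
        d.mean(),                         # (3) average phasic driver activity
        d.max(),                          # (4) maximum phasic driver activity
        d.sum(),                          # (5) integrated phasic driver activity
        tonic[w_start:w_end].mean(),      # (6) mean tonic activity
    ])
```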
Electrocardiogram (ECG): From the ECG, we extracted the R peaks [14] and subsequently
calculated the inter-beat intervals (IBIs). Based on previous work [15], the following features were
extracted for each window: (1) the mean of the IBIs, (2) the root mean square of the successive
differences, (3) the mean of the standard deviations of the IBIs, and (4) the slope of the linear
regression of IBIs in its time series.
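The sketch below illustrates these four descriptors computed from detected R-peak times; for simplicity it uses the within-window standard deviation of the IBIs for feature (3), which is an approximation of the description above, and all names are our own:

```python
import numpy as np

def ecg_window_features(r_peak_times, w_start, w_end):
    """Mean IBI, RMSSD, IBI standard deviation and IBI linear-trend slope for one window.
    r_peak_times: NumPy array of R-peak times in seconds."""
    peaks = r_peak_times[(r_peak_times >= w_start) & (r_peak_times < w_end)]
    ibi = np.diff(peaks)                   # inter-beat intervals in seconds
    if len(ibi) < 3:
        return np.full(4, np.nan)          # not enough beats in the window
    rmssd = np.sqrt(np.mean(np.diff(ibi) ** 2))
    slope = np.polyfit(np.arange(len(ibi)), ibi, 1)[0]
    return np.array([ibi.mean(), rmssd, ibi.std(), slope])
```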
Face: We used OpenFace [16], an open source facial behaviour analysis toolkit, to extract for each
frame the following features from the video sequences: locations of 68 facial landmarks, 3D head
pose and eye gaze direction [17], and 17 facial action units (AUs) [18]. From the facial landmark
locations, we extracted a set of geometric-based features. To do so, we first registered all landmarks
using the affine transform with 4 reference points: the two eye centers, the nose center and the
mouth center. Then, as in [19], at the frame level, we calculated the euclidean distance between
each of the 41 facial landmarks (we excluded the face contour and eyebrows) and the center of
gravity of the facial landmark set. Therefore, each frame was represented by a 41D vector. Then, at
the window level, we calculated 4 statistical features for each distance (mean, standard deviation,
max, min), hence obtaining 164D geometric-based features for each window. We also calculated the
same statistical descriptors for the eye gaze coordinates (24D), head pose (24D), and intensities of 17
action units (AUs) detected by OpenFace, which are given on a scale from 0–5.
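A sketch of the per-window statistical pooling described here follows (array shapes and function names are our own assumptions; OpenFace is only used upstream to produce the per-frame inputs):

```python
import numpy as np

def geometric_frame_features(landmarks41):
    """41 Euclidean distances from each registered landmark to their center of gravity (one frame)."""
    center = landmarks41.mean(axis=0)
    return np.linalg.norm(landmarks41 - center, axis=1)

def window_stats(per_frame_features):
    """Pool an (n_frames x n_dims) array into mean/std/max/min per dimension (4 * n_dims features)."""
    x = np.asarray(per_frame_features)
    return np.concatenate([x.mean(0), x.std(0), x.max(0), x.min(0)])
```

Applying window_stats to the 41 per-frame distances yields the 164D geometric descriptor; applying it to the per-frame gaze, head-pose and AU vectors yields the remaining descriptors.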
2.2 Subject profiling with spectral clustering
To build the profiles, we extracted windows corresponding to the first 48 heat stimuli. Specifically,
we used windows of duration of 8 sec., starting immediately after the onset of the stimulus. To profile
the subjects based on their physiology, we calculated the SC and ECG features described in Sec. 2.1.
For facial behavioral profiles, we computed the facial expressiveness of the subjects, defined as the
amount of variability in the landmark coordinates within the 8 sec. windows. From each sequence, we calculated the average coordinates of each of the registered facial landmarks li = (xi , yi ), such that l̄i = (x̄i , ȳi ) = (1/F) (∑_{f=1}^{F} x_{f,i} , ∑_{f=1}^{F} y_{f,i}), where F is the number of frames in each 25 min sequence, and i ∈ [1, 68], provided by OpenFace [16]. Then, for each landmark i and each frame f = 1, . . . , N , in a given window, we calculated the distance from the landmark coordinates to the mean coordinates,
Figure 1: Subject profiling using spectral clustering. The i, j elements in the (clustered) similarity
matrices represent the distance from person i to person j in their physiological and behavioral features.
The more yellow the matrix element, the closer the corresponding subjects are.
# clusters | Cluster | Size | Males | Females | Age (mean(std))
NC | 1 | 85 | 50.59% | 49.41% | 41.20(14.36)
c=2 | 1 | 40 | 55.00% | 45.00% | 44.53(13.11)∗
c=2 | 2 | 45 | 46.67% | 53.33% | 38.24(14.78)∗
c=3 | 1 | 23 | 56.52% | 43.48% | 47.61(11.24)∗
c=3 | 2 | 35 | 48.57% | 51.43% | 38.03(14.80)
c=3 | 3 | 27 | 48.15% | 51.85% | 39.85(14.42)
Table 1: Cluster statistics for the clusters shown in Fig. 1. For each cluster, statistical significance (p-value ≤ 0.05) in terms of age composition with respect to the other clusters is indicated with ∗.
Figure 2: Example of face tracked with OpenFace [16] showing landmarks, head pose and eye gaze.
i.e., d_{i,f} = ||l_{i,f} − l̄i ||. Finally, for each window, the facial expressiveness score is obtained as the level of variability in facial landmarks, computed as l_exp = (1/N) ∑_{f=1}^{N} ∑_{i=1}^{68} d_{i,f} .
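In code, the expressiveness score for one window could be computed as follows (a direct transcription of the formula; the array layout is our own assumption):

```python
import numpy as np

def expressiveness(window_landmarks, sequence_mean_landmarks):
    """l_exp: per-frame sum of the 68 landmark distances to their per-sequence mean
    positions, averaged over the N frames of the window."""
    # window_landmarks: (N, 68, 2); sequence_mean_landmarks: (68, 2)
    d = np.linalg.norm(window_landmarks - sequence_mean_landmarks[None], axis=2)  # (N, 68)
    return float(d.sum(axis=1).mean())
```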
To obtain the subject descriptors, for each subject we computed the mean value of each of the above
11 features, within windows of each of the 4 pain levels separately. Hence, each subject s was
represented by a 44D vector ps = [ps,1 , ..., ps,44 ]. We normalized these vectors to sum to 1 across
the pain levels per subject (to compensate for the scale differences among subjects), and used them to
cluster subjects into different profiles. To this end, we used normalized spectral clustering [20, 21].
Namely, let p̂s be the normalized descriptor of subject s. First, we construct a fully connected
similarity graph and the corresponding weighted adjacency matrix W. We used the fully-connected graph, with edge weights wij = Ki,j = K(pi, pj), and the radial basis function (RBF) kernel K(pi, pj) = exp(−γ||pi − pj||²) with γ = 0.18, as the similarity measure. Then, we build the degree matrix D as the diagonal matrix with degrees d1, ..., dM on the diagonal, where di = Σ_{j=1}^{M} wij and M is the number of subjects in our dataset. Next, we compute the normalized graph Laplacian L = I − D⁻¹W and calculate the first c eigenvectors u1, ..., uc of L, where c is the desired number of clusters. Let U ∈ R^{M×c} be the matrix containing the vectors u1, ..., uc as columns. For i = 1, ..., M, let yi ∈ R^c be the vector corresponding to the i-th row of U. We cluster the points (yi)_{i=1,...,M} in R^c with the k-means algorithm into clusters C1, ..., Cc, where c was determined by visual inspection of the grouped elements of W after clustering (see Fig. 1).
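A minimal sketch of this clustering step (not the authors' code; it relies on numpy/scikit-learn and assumes the 44-D descriptors are stacked into a matrix P) could look as follows:

import numpy as np
from sklearn.cluster import KMeans

def spectral_profiles(P, c=3, gamma=0.18):
    # P: (n_subjects, 44) descriptors; returns one profile label per subject.
    P = P / P.sum(axis=1, keepdims=True)                      # normalise descriptors (the paper normalises across pain levels)
    sq = ((P[:, None, :] - P[None, :, :]) ** 2).sum(axis=2)   # pairwise squared distances
    W = np.exp(-gamma * sq)                                   # RBF similarities (fully connected graph)
    d = W.sum(axis=1)                                         # degrees
    L = np.eye(len(P)) - W / d[:, None]                       # L = I - D^{-1} W
    vals, vecs = np.linalg.eig(L)
    U = vecs[:, np.argsort(vals.real)[:c]].real               # first c eigenvectors as columns
    return KMeans(n_clusters=c, n_init=10).fit_predict(U)     # k-means on the rows of U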
2.3 Personalized multi-task neural network for pain level estimation
As in [3], we use a multi-task neural network (MT-NN) approach with shared layers and task-specific
layers. While in [3] each task corresponded to a different person, here we assign a task to each profile.
The benefits of using profiles as tasks are two-fold: (i) more data is available to tune the models and
avoid over-fitting, and (ii) in real-world applications, when a new subject arrives, only a small amount
of data will need to be acquired to assign the subject to a profile, without the need to train a new
layer in the MT-NN. Specifically, in this work our regression MT-NN consisted of one shared hidden
layer, and one task-specific layer. For all units in the MT-NN, we employed the rectified linear unit (ReLU) activation function: x_{i+1} = max(0, Mi xi + bi), where xi represents the input of the i-th layer of the network (x0 is the input feature vector), and Mi and bi are the corresponding weight matrix and bias term.
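A sketch of such an architecture in current tf.keras (the paper used TensorFlow 1.2.1 and Keras 2.0.6; the layer width below is illustrative, not taken from the paper):

import tensorflow as tf

def build_mt_nn(input_dim, n_profiles, hidden=32):
    # One shared ReLU layer, then one ReLU layer and a linear output per profile (task).
    inp = tf.keras.Input(shape=(input_dim,))
    shared = tf.keras.layers.Dense(hidden, activation="relu", name="shared")(inp)
    outputs = []
    for t in range(n_profiles):
        h = tf.keras.layers.Dense(hidden, activation="relu", name=f"profile_{t}")(shared)
        outputs.append(tf.keras.layers.Dense(1, name=f"pain_{t}")(h))
    return tf.keras.Model(inp, outputs)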
Model                         MAE     RMSE    ICC(3,1)
Kächele et al. [19]           0.99    1.16    N/A
Physiology (NC)               1.00    1.28    0.28
Video (NC)                    0.92    1.23    0.21
Multimodal (NC)               0.88    1.21    0.29
Multimodal (c=2)  Cluster 1   0.86    1.22    0.22
Multimodal (c=2)  Cluster 2   0.77    1.16    0.37
Multimodal (c=2)  All         0.82    1.19    0.30
Multimodal (c=3)  Cluster 1   0.86    1.22    0.19
Multimodal (c=3)  Cluster 2   0.73    1.11    0.42
Multimodal (c=3)  Cluster 3   0.84    1.19    0.27
Multimodal (c=3)  All         0.80∗   1.17∗   0.32
Multimodal (c=4)  All         0.77∗   1.15∗   0.31

Table 2: Model performance in terms of mean absolute error (MAE), root-mean-square error (RMSE) and intra-class correlation (ICC) using different modalities and clustering approaches (NC for no clustering (c=1), and c={2,3,4} for the clustered MT-NNs). Two-tail t-tests were performed on the "all" conditions against "multimodal (NC)" and ∗ indicates significance with p-value ≤ 0.05.
The proposed model was implemented using deep learning frameworks TensorFlow 1.2.1 [22] and
Keras 2.0.6 [23]. To optimize the network parameters, we first trained a joint network (c=1) using the
Adam optimizer [24], with mean absolute error loss. We then fixed the weights of the shared layer,
and used the learned weights of layer i = 2 to initialize the profile-specific layers, which were further
fine-tuned using the data corresponding to the subjects assigned to each profile. To regularize the
network, we applied dropout [25] and employed an early stopping strategy based on a validation set.
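The two-stage procedure could be sketched as follows (again under my own assumptions: it reuses build_mt_nn from above, restricts fine-tuning to one head at a time via loss weights, initialises the profile heads from the joint training itself rather than copying a single layer, and omits dropout):

import tensorflow as tf

def train_personalized(model, X, y, profile, n_profiles, epochs=100):
    stop = tf.keras.callbacks.EarlyStopping(patience=5, restore_best_weights=True)
    # Stage 1: joint training (the c = 1 network), all heads fit on all data.
    model.compile(optimizer="adam", loss="mae")
    model.fit(X, [y] * n_profiles, epochs=epochs, validation_split=0.1,
              callbacks=[stop], verbose=0)
    # Stage 2: freeze the shared layer and fine-tune each profile head on its own subjects.
    model.get_layer("shared").trainable = False
    for t in range(n_profiles):
        weights = [1.0 if i == t else 0.0 for i in range(n_profiles)]
        model.compile(optimizer="adam", loss="mae", loss_weights=weights)
        idx = profile == t
        model.fit(X[idx], [y[idx]] * n_profiles, epochs=epochs,
                  validation_split=0.1, callbacks=[stop], verbose=0)
    return model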
3 Results
For each recording, we used the first part, corresponding to the first 48 pain stimuli, as training
set, the second part corresponding to the following 10 stimuli as validation set, and the final part
corresponding to the final 22 stimuli as test set. The training set was used to cluster each of the
85 subjects (2 subjects were excluded due to poor landmark tracking by OpenFace) into c clusters
or profiles. Several meta-information statistics of these clusters were calculated. They are shown
in Table 1 and indicate that our clustering process does not result in any differences in the gender
composition of the clusters, but it discriminates according to the subject’s age.
Once the cluster assignment was completed, we extracted features from overlapping windows as
described in Sec. 2.1. We then balanced the training set to have equal amounts of P = 0 and P > 0
instances by downsampling the over-represented class (P = 0). As performance measures, we
used the mean absolute error (MAE), root-mean-square error (RMSE), and intra-class correlation
ICC(3,1) [26]. The results are shown in Table 2, and also compared to related work [19], which used a
multimodal early fusion of geometric and appearance based features, SC, ECG, and electromyography
(EMG). Our results indicate that similar performance is achieved with the proposed multi-modal
(unclustered) approach and the prior work. The results also indicate an overall improvement of the
multimodal approach with respect to the single-modality approaches, and a further improvement from the models using
the proposed clustered multi-task approach. Furthermore, we note that the profiling approach also
finds clusters in which pain estimation performance is better, whereas other clusters seem to be more
challenging. This can be seen from the scores of cluster 3 in the MT-NN with c = 3, where ICC
reaches 42% and achieves the best MAE and RMSE errors, thus indicating that the MT-NN model
achieves improved estimation performance on this subgroup. However, it drops in performance on
the other two clusters, which shows the need to further investigate the data of those subjects. By
comparing the models with different number c of profiles, on average we obtain similar performance,
with c = 4 performing the best in terms of MAE and RMSE, while c = 3 attains the best ICC score.
4 Conclusions
We proposed a clustered multi-task neural network model for continuous pain intensity estimation
from video and physiological signals. Each task in our model represents a cluster of subjects with
similar pain response profiles. We showed the benefit of our multimodal multi-task model with
respect to (a) single-task (population) models, and (b) single-modality approaches. We conclude that
the number of cluster profiles should be selected based on the target application and output metric.
Future work will focus on improving the profiling approach and optimizing the network topology to better exploit the commonalities among subjects within each profile.
References
[1] Amanda C. de C. Williams and Kenneth D. Craig. Updating the definition of pain. PAIN, 157(11):2420–
2423, 11 2016.
[2] Jarred Younger, Rebecca McCue, and Sean Mackey. Pain outcomes: A brief review of instruments and
techniques. Current Pain and Headache Reports, 13(1):39–43, 2009.
[3] Daniel Lopez-Martinez and Rosalind Picard. Multi-task Neural Networks for Personalized Pain Recognition
from Physiological Signals. In Seventh International Conference on Affective Computing and Intelligent
Interaction Workshops and Demos (ACIIW), San Antonio, TX, 2017.
[4] Roi Treister, Mark Kliger, Galit Zuckerman, Itay Goor Aryeh, and Elon Eisenberg. Differentiating between
heat pain intensities: The combined effect of multiple autonomic parameters. Pain, 153(9):1807–1814,
2012.
[5] Daniel Lopez Martinez, Ognjen Rudovic, and Rosalind Picard. Personalized Automatic Estimation of
Self-Reported Pain Intensity from Facial Expressions. In 2017 IEEE Conference on Computer Vision and
Pattern Recognition Workshops (CVPRW), pages 2318–2327, Hawaii, USA, 7 2017. IEEE.
[6] Dianbo Liu, Fengjiao Peng, Andrew Shea, Ognjen Rudovic, and Rosalind Picard. DeepFaceLIFT:
Interpretable Personalized Models for Automatic Estimation of Self-Reported Pain. IJCAI 2017 Workshop
on Artificial Intelligence in Affective Computing, 2017.
[7] Audrey-Anne Dubé, Marco Duquette, Mathieu Roy, Franco Lepore, Gary Duncan, and Pierre Rainville.
Brain activity associated with the electrodermal reactivity to acute heat pain. NeuroImage, 45(1):169–180,
3 2009.
[8] Kenneth M. Prkachin. Assessing pain by facial expression: Facial expression as nexus. Pain Research and
Management, 14(1):53–58, 2009.
[9] Kenneth M. Prkachin and Patricia E. Solomon. The structure, reliability and validity of pain expression:
Evidence from patients with shoulder pain. Pain, 139(2):267–274, 2008.
[10] M J L Sullivan, D A Tripp, and D Santor. Gender differences in pain and pain behavior: The role of
catastrophizing. Cognitive Therapy and Research, 24(1):121–134, 2000.
[11] Steffen Walter, Sascha Gruss, Hagen Ehleiter, Junwen Tan, Harald C Traue, Stephen Crawcour, Philipp
Werner, Ayoub Al-Hamadi, and Adriano O Andrade. The biovid heat pain database data for the advancement and systematic validation of an automated pain recognition system. In 2013 IEEE International
Conference on Cybernetics (CYBCO), pages 128–131. IEEE, 6 2013.
[12] Mathias Benedek and Christian Kaernbach. Decomposition of skin conductance data by means of
nonnegative deconvolution. Psychophysiology, 47:647–658, 2010.
[13] D.M. Alexander, C. Trengove, P. Johnston, T. Cooper, J.P. August, and E. Gordon. Separating individual
skin conductance responses in a short interstimulus-interval paradigm. Journal of Neuroscience Methods,
146(1):116–123, 7 2005.
[14] W. Engelse and C. Zeelenberg. A single scan algorithm for QRS detection and feature extraction. Computers
in Cardiology, 6:37–42, 1979.
[15] Philipp Werner, Ayoub Al-Hamadi, Robert Niese, Steffen Walter, Sascha Gruss, and Harald C. Traue.
Automatic Pain Recognition from Video and Biomedical Signals. In 2014 22nd International Conference
on Pattern Recognition, pages 4582–4587. IEEE, 8 2014.
[16] Tadas Baltrusaitis, Peter Robinson, and Louis-Philippe Morency. OpenFace: An open source facial
behavior analysis toolkit. In 2016 IEEE Winter Conference on Applications of Computer Vision (WACV),
pages 1–10. IEEE, 3 2016.
[17] Erroll Wood, Tadas Baltrusaitis, Xucong Zhang, Yusuke Sugano, Peter Robinson, and Andreas Bulling. Rendering of Eyes for Eye-Shape Registration and Gaze Estimation. In 2015 IEEE International Conference
on Computer Vision (ICCV), volume 2015 Inter, pages 3756–3764. IEEE, 12 2015.
[18] Paul Ekman and Wallace Friesen. Facial action coding system. 2002.
[19] Markus Kächele, Mohammadreza Amirian, Patrick Thiam, Philipp Werner, Steffen Walter, Günther Palm,
and Friedhelm Schwenker. Adaptive confidence learning for the personalization of pain intensity estimation
systems. Evolving Systems, 8(1):71–83, 3 2017.
[20] Jianbo Shi and Jitendra Malik. Normalized cuts and image segmentation. IEEE Transactions on Pattern
Analysis and Machine Intelligence, 22(8):888–905, 2000.
[21] Ulrike von Luxburg. A tutorial on spectral clustering. Statistics and Computing, 17(4):395–416, 12 2007.
[22] Martín Abadi, Paul Barham, Jianmin Chen, Zhifeng Chen, Andy Davis, Jeffrey Dean, Matthieu Devin,
Sanjay Ghemawat, Geoffrey Irving, Michael Isard, Manjunath Kudlur, Josh Levenberg, Rajat Monga,
Sherry Moore, Derek G Murray, Benoit Steiner, Paul Tucker, Vijay Vasudevan, Pete Warden, Martin Wicke,
Yuan Yu, and Xiaoqiang Zheng. TensorFlow: A System for Large-Scale Machine Learning. 12th USENIX
Symposium on Operating Systems Design and Implementation (OSDI 16), pages 265–283, 2016.
[23] François Chollet et al. Keras, 2015.
[24] Diederik P. Kingma and Jimmy Ba. Adam: A Method for Stochastic Optimization. In Proceedings of the
3rd International Conference on Learning Representations, 2015.
[25] Nitish Srivastava, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. Dropout:
A Simple Way to Prevent Neural Networks from Overfitting. Journal of Machine Learning Research,
15:1929–1958, 2014.
[26] Patrick E. Shrout and Joseph L. Fleiss. Intraclass correlations: Uses in assessing rater reliability. Psychological Bulletin, 86(2):420–428, 1979.
| 2 |
Mathematics Is Imprecise
Prabhakar Ragde
Cheriton School of Computer Science
University of Waterloo
Waterloo, Ontario, Canada
[email protected]
We commonly think of mathematics as bringing precision to application domains, but its relationship
with computer science is more complex. This experience report on the use of Racket and Haskell to
teach a required first university CS course to students with very good mathematical skills focusses
on the ways that programming forces one to get the details right, with consequent benefits in the
mathematical domain. Conversely, imprecision in mathematical abstractions and notation can work
to the benefit of beginning programmers, if handled carefully.
1 Introduction
Mathematics is often used to quantify and model what would otherwise be poorly-understood phenomena. However, as an activity carried out by humans for humans, it can and does take advantage of
imprecision: using ambiguous notation, omitting cases that are “similar,” and eliding details. The machines that mediate activity by humans for humans in computer science introduce an element of forced
precision. The thesis of this paper is that pedagogical attention to this relationship can enhance learning
in both disciplines, by introducing more precision to mathematics, and by careful use of imprecision in
computer science.
The University of Waterloo has the world’s largest Faculty of Mathematics, with six departments
(including a School of Computer Science), over 200 faculty members, and about 1400 undergraduate
students entering each year. These students are required to take two CS courses, and they have a choice
of three streams. Two are aimed at majors and non-majors respectively; the third is aimed at students
with high mathematical aptitude. A similar high-aptitude stream has existed for the two required math
sequences (Calculus and Algebra) for decades, but the CS advanced stream is relatively recent, starting
with a single accelerated course in 2008 and moving to a two-course sequence in 2011-2012.
The CS advanced stream currently has a target of 50-75 students per year. Admission is by instructor consent, or by scoring sufficiently high on math or programming contests at the senior high-school
level. Consequently, a significant fraction (sometimes more than half) of the students taking the advanced
stream are not CS majors (and many who are will take a second major in one of the other Math departments). Some students have considerable experience in imperative programming, while others have no
programming experience at all. Functional programming, with its low barriers to entry and its elegant
abstractions, is well-suited to provide the right sort of challenges for such a diverse population.
Our major and non-major streams use Racket [6] exclusively in the first course, with the “How To
Design Programs” (HtDP) textbook [2] and the Program By Design (PBD) methodology [5]. (The second
courses make a gradual transition to C for majors and Python for non-majors.) Because of the difficulty
of assessing placement (many non-majors would be better off with the moderate challenge of the major
course, and the advanced course also draws from both groups) and consequent student migration between
streams, the advanced stream cannot stray too far from this model, but some deviation is possible. The
rest of the curriculum ignores functional programming, so upward compatibility is not an issue.
M. Morazán and P. Achten (Eds.): Trends in Functional Programming in Education 2012 (TFPIE 2012). EPTCS 106, 2013, pp. 40–49, doi:10.4204/EPTCS.106.3. © P. Ragde
There are thus some major similarities among the first courses in all three streams, and indeed with
courses on functional programming using other languages and textbooks: starting with the manipulation
of numbers and structures with a fixed number of fields, introducing recursion with lists, and continuing
with trees. PBD emphasizes data-directed design, and the use of examples and tests to guide code
development.
In the remainder of this paper, I will describe some unusual choices that I made in the design of the
first advanced course, some techniques that seemed to find favour with students, and some issues that
remain to be overcome.
2 The roles of Racket and Haskell
Among institutions using a functional-first approach, Haskell [3] is a popular choice. Haskell is an
elegant and highly-expressive language, and its proximity to mathematics would make it a natural choice
for students in the advanced stream. Thus the reader may be surprised at the choice I made in the first
advanced course: while the first set of lectures uses Haskell exclusively, and students see it throughout
the advanced course, all of their assignment programming is done in Racket. Haskell is used as functional
pseudocode.
Conventional pseudocode, at its best, resembles untyped Pascal: imperative, with loops manipulating
arrays and pointers. In comparison, code written in a functional language is transparent enough that
it often serves the same purpose. However, there are degrees of transparency, and some functional
languages are more readable than others. Haskell, with patterns in function definitions and local bindings,
and infix notation, is rich in expressivity, and it is highly readable as long as care is taken to not make it
too terse (at least on early exposure).
However, students actually programming in Haskell (as opposed to just reading it for comprehension)
have to learn about operator precedence, and have to learn the pattern language. Mistakes in these areas
often manifest themselves as type errors, aggravated by type inference making interpretations that the
student does not yet know enough to deliberately intend or avoid, and compiler errors designed to inform
the expert. Well-written Haskell code is a joy to read; poorly-written, incorrect Haskell code can be a
nightmare for the beginner to fix.
Racket’s uniform, parenthesized syntax (inherited from Lisp and Scheme) is by contrast relatively
straightforward; the teaching language subsets implemented by the DrRacket IDE limit student errors that
produce “meaningful nonsense”; and testing is lightweight, facilitating adherence to the PBD methodology. Seeing two languages from the beginning lets students distinguish between concepts and surface
syntax (in effect providing them with a basis for generalization), while programming in just one minimizes operational confusion. When I introduce more advanced features available in full Racket (such
as pattern matching and macros), students can appreciate them (with the foreshadowing provided by
Haskell) and put them to use immediately.
Following Hutton, who in his textbook “Programming In Haskell” [4] does not even mention lazy
evaluation until the penultimate chapter, I am vague about the computational model of Haskell at the
beginning. But a precise computational model is important in debugging, and the simplified reduction
semantics that HtDP presents is quite useful, especially combined with the DrRacket tool (the Stepper)
that illustrates it on student code.
In fact, though the code I show is legal Haskell (with a few elisions, such as the use of deriving Show
or type signatures necessary to assuage the compiler), as pseudocode it should perhaps be called “Raskell,”
because, in early computational traces and later analysis of running time, I assume strict (not lazy) semantics corresponding to those of Racket.
3 Computation and proof
Here is the first program that the students see.
data Nat = Z | S Nat
plus x Z     = x
plus x (S y) = S (plus x y)
Peano arithmetic is not normally treated in a first course on computing, though it may show up in
a later course on formal logic or a deep enough treatment of Haskell to show its utility in advanced
notions of types. One reason to introduce it here is that the Algebra course my students are taking
simultaneously is not linear algebra, but “classical algebra”, which uses elementary number theory to
illustrate the process of doing mathematics. However, that course assumes the properties of integers as
a ring and rational numbers as a field (without using those terms), as does every math course before a
formal treatment of groups, rings, and fields. This gives us an opportunity to show that computers cannot
just assume these operations exist, but must implement them.
HtDP distinguishes three kinds of recursion: structural recursion, where the structure of the code
mirrors a recursive data definition (as above); accumulative recursion, where structural handling of one
or more parameters is augmented by allowing other parameters to accumulate information from earlier
in the computation (illustrated below); and generative recursion, where the arguments in a recursive
application are “generated” from the data (early examples include GCD and Quicksort).
A computational treatment of Peano arithmetic respects this hierarchy (the code above is structurally
recursive) while immediately serving notice that mathematical assumptions will be challenged and details
are important. Being precise about addition, an activity students have carried out almost as long as
they can remember, but which they likely have not examined carefully, gives a fresh perspective on
mathematics. This approach also permits me to address in a timely fashion the notion of proofs and their
importance to computer science.
The first proof they see is an example of classic ∀-introduction, where a free variable in a proved
statement can be quantified. Here is a proof of "plus x (S (S Z)) = S (S x)".

plus x (S (S Z)) = S (plus x (S Z))
                 = S (S (plus x Z))
                 = S (S x)

We can now conclude "For all Nats x, plus x (S (S Z)) = S (S x)". I describe this to the
students as “the anonymous method”; the emphasis here is another example of greater precision in mathematics than is typical at this level, where implicit for-all quantification is a source of much confusion.
(Note the computational model here, a restricted form of equational reasoning where the clauses of the
function definition are treated as rewriting rules. This meshes quite well with the reduction semantics
given for Racket.)
The anonymous method is inadequate for a proper exploration of proof, even at this point. Attempts to prove, for example, commutativity or associativity (other concepts they have taken for granted)
founder. An even simpler example is "For all Nats x, plus Z x = x". We can prove this for small
examples, such as x = S (S (S Z)):

plus Z (S (S (S Z))) = S (plus Z (S (S Z)))
                     = S (S (plus Z (S Z)))
                     = S (S (S (plus Z Z)))
                     = S (S (S Z))
At this point the student can see the proof for the case x = S (S Z), on the right hand side if one
layer of S is stripped away. In this way, we arrive at the need for and justification of structural induction
on our definition of Nat. They see induction in their Algebra sequence (immediately in the advanced
stream, after a few weeks in the regular stream) but it is not applied to “fundamental” properties of
arithmetic, which are taken for granted.
This approach falls short of full formalism, either through a proof assistant such as Coq or ACL2, or
through a classic presentation of Peano arithmetic in the context of formal logic, either of which would
be overkill for an introductory course. Instead, it uses computer science and mathematics together to
yield more insight than traditional pedagogical approaches at this level in either discipline.
Discussing proofs by induction also reinforces the idea that structural recursion, should it work for
the problem at hand, is a preferable approach, as it is easier to reason about, even informally. We look at
a non-structurally-recursive version of addition:
data Nat = Z | S Nat
add x Z     = x
add x (S y) = add (S x) y
This function uses accumulative recursion (the first parameter is an accumulator), and it is harder to
prove properties such as the one above, commutativity, or associativity. In fact, the easiest way to do this
is to prove that add is equivalent to plus, and then prove the properties for plus.
Surprisingly, this situation carries over into many early uses of accumulative recursion, such as to add
up or reverse a list. An accumulator resembles a loop variable, and the correspondence is direct in the
case of tail recursion. The conventional approach to proving correctness is to specify a loop invariant that
is then proved by induction on the number of iterations (or, in the functional case, the number of times
the recursive function is applied). But it turns out that a direct proof (by structural induction) that the
accumulatively-recursive function was equivalent to the structurally-recursive version is, in many cases,
easier and cleaner. The reason is that many of the standard proofs of loop invariants involve definitions
that use notation (such as Σ for addition) whose properties themselves require recursive definitions and
proofs.
As an example, consider adding up a list.
sumh [] acc     = acc
sumh (x:xs) acc = sumh xs (x+acc)
sumlist2 xs = sumh xs 0
An informal proof of correctness of sumlist2, based on Hoare logic, would use an invariant such as “In
every application of the form sumh ys acc, the sum of the whole list is equal to acc plus the sum of
the ys.” But there really is no better formalization of “the sum of” in this statement than the structurally
recursive definition of sumlist:
sumlist []     = 0
sumlist (x:xs) = x + sumlist xs
At which point it is easier and more straightforward to prove “For all xs, for all acc, sumh xs acc =
acc + sumlist xs” by structural induction on xs. We arrive at this only by trying to prove the more
obvious statement “For all xs, sumh xs 0 = sumlist xs” and failing, because the inductive hypothesis is not strong enough. The difficulty of finding an appropriate generalization to capture the role of the
accumulator (which gets harder with more complex code) underlines the difficulty of understanding and
informally justifying code that uses an accumulator.
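For concreteness, the induction step of that proof can be carried out as follows (a sketch, using the definitions above):

sumh (x:xs) acc = sumh xs (x+acc)          -- definition of sumh
                = (x+acc) + sumlist xs     -- inductive hypothesis, at accumulator x+acc
                = acc + (x + sumlist xs)   -- associativity and commutativity of +
                = acc + sumlist (x:xs)     -- definition of sumlist

together with the base case sumh [] acc = acc = acc + 0 = acc + sumlist []. Quantifying the statement over all acc is exactly what allows the hypothesis to be applied at the shifted accumulator value x+acc.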
The strong connection between structural recursion and structural induction makes it possible to
discuss rigorous proofs of correctness in a way that is not overwhelming (as it typically is for Hoare
logic), and this extends to most uses of accumulative recursion. Traditional invariants are easier to
work with in the absence of mutation than if it is present, but they still require more work than the
direct approach of structural induction. Strong induction, or induction on time or number of recursive
applications, can thus be deferred until generative recursion is taught.
4 Analyzing efficiency
A traditional CS1-CS2 approach defers discussion of algorithm analysis and order notation to the second course, leaving the first one to concentrate on the low-level mechanics of programming. However,
efficiency influences not only the design of imperative languages, but the ways in which elementary
programming techniques are taught. Efficiency is also the elephant in the room in a functional-first approach, though the source of the problem is different. A structurally-recursive computation where it
is natural to repeat a subexpression involving a recursive application (for example, finding the maximum of a nonempty list) leads to an exponential-time implementation, with noticeable slowdown even
on relatively small instances. The fix (moving code with repeated subexpressions to a helper function)
is awkward unless local variables are prematurely introduced, and even then, the motivation has to be
acknowledged. Accumulative recursion is also primarily motivated by efficiency.
Our major stream also postpones order notation to the second course, while reluctantly acknowledging the elephant where necessary. The advanced stream, however, introduces order notation early. An
intuitive illustration of time and space complexity is easy with our first example of unary numbers, as it
is clear from a few traces that our representation takes up a lot of room and computation with it is slower
than by hand. We more carefully exercise these ideas by moving at this point into a sequence of lectures
on representing sets of integers by both unordered and ordered lists.
Order notation shares pedagogical pitfalls with another topic commonly introduced in first year,
limits in calculus. Both concepts have precise definitions involving nested, alternating quantifiers, but
students are encouraged to manipulate them intuitively in a quasi-algebraic fashion. A typical early
assignment involves questions like “Prove that 6n² − 9n − 7 is O(n²).” As with epsilon-delta proofs,
not only do weaker students turn the crank on the form without much understanding, but questions like
this have little to do with subsequent use of the ideas. The situation is worse with order notation (more
quantifiers, discrete domains that are difficult to visualize).
The analysis of imperative programs at the first-year level is little more than adding running times
for sequential blocks and multiplying for loop repetitions; in other words, it is compositional based
on program structure. The obvious approach for recursive functions involves recurrences. But solving
recurrences is not easy, even with standard practices such as omitting inconvenient floors and ceilings,
and setting up recurrences is not straightforward, either. I have found that a compositional approach
works for many recursive functions encountered in this course, with the aid of a table.
The tabular method works for functions that use structural or accumulative recursion, as long as
the recursive application is done at most once on each “piece” of the argument corresponding to a self-referential part of the data definition. For lists, this means the “rest” of the list; for binary trees, this
means the two subtrees. All the functions they need to write in early treatment of lists and binary trees
are structurally or accumulatively recursive.
Racket functions consuming data of these forms consist of a cond at the top level, and the table has
one row for each question-answer pair (equivalently, for each pattern plus guard in a Haskell multipart
definition). The row contains entries for the number of times the question is asked (as a function of the
“size” of the argument), the cost of asking the question (nearly always constant), the number of times the
answer is evaluated, and the cost of evaluating the answer (apart from recursive applications). These are
multiplied in pairs and added to give the cost of the row, and then these costs are added up over all rows.
Here is how the table might look for sumlist (where n is the length of the list argument):
Row   #Q      time Q   #A   time A   total
1     n + 1   O(1)     1    O(1)     O(n)
2     n       O(1)     n    O(1)     O(n)
                                     O(n)
For a function with more than two cases, we typically cannot be so precise about the number of
questions and answers. Order notation once again comes to the rescue.
filter p [] = []
filter p (x:xs)
  | p x       = x : filter p xs
  | otherwise = filter p xs
Here is the tabular analysis of the running time of filter on a list of length n.
Row   #Q      time Q   #A     time A   total
1     n + 1   O(1)     1      O(1)     O(n)
2     O(n)    O(1)     O(n)   O(1)     O(n)
3     O(n)    O(1)     O(n)   O(1)     O(n)
                                       O(n)
This approach does not entirely avoid recurrences, which are necessary to explain, for example, the
exponential-time behaviour of naı̈ve list-maximum, but it limits their use.
Here we are using the imprecision of order notation in two different ways. The loss of information
about the exact running time streamlines the analysis by not carrying along irrelevant detail. We are
also working with an intuitive or fuzzy understanding in the heads of students as to the meaning of an
order-notation assertion (it is still easy, when using the tabular method, to erase the distinction between
the n2 appearing in a table entry and the actual running time that it bounds, qualified by the appropriate
constants). While this can lead them into difficulty in more pathological situations, it suffices for the
kind of analyses necessary at the first-year level.
5 Efficient representations of integers
The approach I take to the efficient representation of integers starts by arguing that the problem with
unary arithmetic stems from the use of a single data constructor with interpretation S: n ↦ n + 1. Using
two data constructors, we must decide on interpretations.

data Nat = Z | A Nat | B Nat

Effective decoding requires that the range of the two interpretations partition the positive integers. “Dealing out” the positive integers suggests an odd-even split, with interpretations A: n ↦ 2n and B: n ↦ 2n + 1.
This leads to a form of binary representation (with the rightmost bit outermost), with unique representation enforced by a rule that A should not be applied to Z (corresponding to the omission of leading
zeroes). The interpretation easily yields a structurally recursive fromNat to convert to standard numeric
representation, and its inverse toNat.
toNat 0 = Z
toNat 1 = B Z
toNat 2 = A (B Z)
toNat 3 = B (B Z)
toNat 4 = A (A (B Z))
We cover addition and multiplication in the new representation, and analyze them. This leads to an
interesting side effect. Mutual recursion is introduced in HtDP in the context of trees of arbitrary fan-out.
But it arises naturally with the linear structures used here.
A first attempt at addition might look like this:
add x Z = x
add Z y = y
add (A x) (A y) = A (add x y)
add (A x) (B y) = B (add x y)
add (B x) (A y) = B (add x y)
add (B x) (B y) = A (add1 (add x y))

add1 Z = B Z
add1 (A x) = B x
add1 (B x) = A (add1 x)
A naı̈ve analysis of add first analyzes add1, which takes O(s) time on a number of size s (number
of data constructors used in the representation). Then add takes time O(m2 ), where m is the size of the
larger argument. However, this analysis is too pessimistic. add actually takes time O(m), since the total
work done by all applications of add1 is O(m), not just one application. This is because the recursion in
add1 stops when an A is encountered, but the result of applying add1 in add is wrapped in an A.
But this argument is subtle and difficult to comprehend. It is better to replace the last line in the
definition of add with an application of an “add plus one” function.
add (B x) (B y) = A (addp x y)
We then develop addp, which has a similar structure to add, and recursively applies add. It is now easy
to see that add has running time linear in the size of the representation, because it (or addp) reduces the
size of the arguments at each step.
Another surprising benefit of this approach is that we can easily represent negative numbers simply
by introducing the new nullary constructor N, representing −1. The interpretations of A and B remain the
same, as do the representations of positive numbers; we add the rule that B cannot be applied to N. The
resulting representation of integers is isomorphic to two’s complement notation.
toInts (-1) = N
toInts (-2) = A N
toInts (-3) = B (A N)
toInts (-4) = A (A N)
toInts (-5) = B (B (A N))
The more traditional representation of two’s complement can be seen by reading right-to-left and
making the following substitutions: 0 for A, 1 for B, the left-infinite sequence of 0's for Z, and the left-infinite sequence of 1's for N.
 3 = ...011
 2 = ...010
 1 = ...01
 0 = ...0
-1 = ...11
-2 = ...10
-3 = ...101
-4 = ...100
-5 = ...1011
When we work out addition for the extended representation, we discover that the existing rules for
add stay the same, and the new ones involving N are easy to work out. Two’s complement notation is
normally mystifying to second-year students taking a computer architecture course, because it is presented as a polished technique that “just works” (that is, reuse of the logic for unsigned binary addition,
with just a little added circuitry). Here we have not only a clear explanation of how it works, but good
motivation for the development. The internal representation of numbers in both Racket and Haskell is no
longer magic.
The savings in space and time are intuitive, but when we quantify them, we can introduce and solve
exactly the recurrence relating a natural number n to the size of its representation, which is an effective
introduction of logarithms to the base 2 that does not duck issues of discretization.
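For instance (a sketch, not spelled out in the text above): writing s(n) for the number of constructors in the representation of a positive n, the definitions give s(1) = 2 and s(n) = s(n div 2) + 1 for n ≥ 2, whose exact solution is s(n) = ⌊log₂ n⌋ + 2.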
6 Efficient representations of sequences
Trees are often introduced to mirror structure in data: in HtDP, using family trees, and in our major
sequence, using phylogeny trees. An important insight is that introducing tree structure to data not
obviously structured in this fashion can yield improvements in efficiency. Unfortunately, the example
usually chosen to illustrate this, binary search trees, is not effective at the first-year level. The simplest
algorithms are elegant but degenerate to lists in the worst case; there are many versions of balanced
search trees, but the invariants are complex and the code lengthy, particularly for deletion. As a result,
first-year students only see artificial examples of balanced trees, such as the ones that can be built from
an already-sorted sequence of keys.
Of course, this material is important, and we do treat it. But the first example should be a success.
The first introduction of a tree structure to data for purposes of efficiency should result in a quantifiable
improvement, one that is not deferred to an intermediate data structures course in second year or later.
The treatment of natural numbers in the previous section provides a path to an effective introduction
of logarithmic-height binary trees. Consider the problem of representing a sequence of elements so as to
allow efficient access to the ith element. A list can be viewed as being indexed in unary, with the element
of index Z stored at the head and the tail containing the sequence of elements of index S x, stored in the
same fashion but with the common S removed from all indices. The reason it takes O(i) time to access
the ith element of a list is similar to the reason it takes O(i) time to add the unary representation of i to
another number.
Binary representation of numbers suggests storing two subsequences instead of one: the sequence of
elements of index A x, and the sequence of elements of index B x. This leads to the idea of a binary tree
where an element of index A x is accessed by looking for the element of index x in the left (“A”) subtree,
and an element of index B x is accessed by looking for the element of index x in the right (“B”) subtree.
This is just an odd-even test, as used in toNat, and the reader will recognize the concept of a binary trie.
But there is a problem in this particular application, stemming from the lack of unique representation
and our ad-hoc rule to get around it. Not all sequences of A’s and B’s are possible, since A cannot be
applied to Z. This means that roughly half the nodes (every left child) have no element stored at them,
since that element would have an index ending with A Z. We can avoid this problem by starting the
indexing at 1, or, equivalently, retaining indexing starting at 0 but “shifting” to 1-based before applying/removing A or B and then shifting back. In other words, we can replace the A-B representation with
a C-D representation, with interpretation C(n) = A(n + 1) − 1 and D(n) = B(n + 1) − 1.
This results in the interpretation C: n ↦ 2n + 1 and D: n ↦ 2n + 2. Conversion between the new C-D
representation and built-in integers is as simple as with the old A-B representation. The new representation is naturally unique (without the need for extra rules), and all sequences are possible, so there are no
empty nodes in the tree with “C” left subtrees and “D” right subtrees. It is easy to show (again, by solving
a recurrence exactly) that the tree has depth logarithmic in the total number of elements. Furthermore,
not only does access to the ith element take time O(log i) by means of very simple purely-functional
code, but standard list operations (cons, first, rest) take logarithmic time in the length of the sequence.
We have rederived the data structure known as a Braun tree [1]. The code for deletion (rest) is no more
complicated than the code for addition; indeed, there is a pleasant symmetry.
Our attention to mathematical detail in the treatment of natural numbers has paid off with an unexpected and fruitful connection to purely-functional data structures. We see that a more mathematical
treatment of fundamentals is not in conflict with core computer science content; on the contrary, it supports the content and increases accessibility by providing sensible explanations for choices.
7 Conclusions
Course evaluations indicate that students greatly appreciate the first advanced course. The use of Haskell
as pseudocode does not seem to confuse them. They can translate it into Racket when asked to do so, and
the Racket code they write on exams does not have Haskell elements creeping into it. This is probably
due to the fact that they never have to write Haskell, even as pseudocode, during the course. Haskell
intrigues them, and some students express interest in using it. I hope to develop some optional learning
materials for such students in the near future.
There is more than enough material to fill a first course with topics approached in a purely functional
manner (and one that largely emphasizes structural recursion). The only real difficulty with content is
the necessity to leave out favourite topics due to the finite length of the term.
The second advanced course, which needs to move towards mainstream computer science, is more
problematic. The advanced sequence shares some issues with the major sequence: the more complicated
semantics of mutation; the increased difficulty of testing code written in a primarily imperative language;
the confusing syntax, weak or absent abstractions, and lack of good support tools associated with popular languages. Added to these for the advanced sequence are the disappointment associated with the
comparative lack of elegance and the relatively low-level nature of problem solving typical with such
material. It is not the best advertisement for computer science.
Despite this, students appreciate the second advanced course, perhaps because all of these elements
are present and have even more impact on students in the second regular course (for majors). They also
voice some of the frustrations that I feel as instructor. The second course remains a work in progress,
with hope sustained by the fact that Racket is a good laboratory for language experimentation. With luck
I will soon be able to report on a second course which is as rewarding for students as the first one.
8 Bibliography
References
[1] W. Braun & M. Rem (1983): A logarithmic implementation of flexible arrays. Technical Report MR83/4,
Eindhoven Institute of Technology.
[2] M. Felleisen, M. Flatt, R. Findler & S. Krishnamurthi (2003): How To Design Programs. MIT Press.
[3] (2012): Haskell. Available at http://www.haskell.org.
[4] G. Hutton (2007): Programming In Haskell. Cambridge University Press.
[5] (2012): Program By Design. Available at http://www.programbydesign.org.
[6] (2012): Racket. Available at http://www.racket-lang.org.
| 6 |
An Application of the EM-algorithm to
Approximate Empirical Distributions of Financial
Indices with the Gaussian Mixtures
arXiv:1607.01033v1 [] 29 Jun 2016
Sergey Tarasenko
Abstract—In this study I briefly illustrate the application of Gaussian mixtures to approximate empirical distributions of financial indices (DAX, Dow Jones, Nikkei, RTSI, S&P 500). The resulting distributions show a very high quality of approximation, as evaluated by the Kolmogorov-Smirnov test. This motivates further study of the application of Gaussian mixtures to approximating empirical distributions of financial indices.
Keywords—financial indices, Gaussian distribution, mixtures of Gaussian distributions, Gaussian mixtures, EM-algorithm

S. Tarasenko is an independent researcher. Email: [email protected]

I. INTRODUCTION

APPROXIMATION of empirical distributions of financial indices using mixtures of Gaussian distributions (Gaussian mixtures) (eq. (1)) has been recently discussed by Tarasenko and Artukhov [1]. Here I provide a detailed explanation of the steps and methods.

  GM = Σ_{i=1}^{n} pi · N(µi, σi)                                          (1)

II. EM-ALGORITHM FOR MIXTURE SEPARATION

A. General Theory

The effective procedure for separation of mixtures was proposed by Day [2], [3] and Dempster et al. [4]. This procedure is based on maximization of the logarithmic likelihood function under the parameters p1, p2, ..., pk−1, Θ1, Θ2, ..., Θk, where k is the number of mixture components:

  Σ_{i=1}^{n} ln( Σ_{j=1}^{k} pj · f(xi; Θj) ) → max over pj, Θj           (2)

In general, the algorithms of mixture separation based on (2) are called Expectation-Maximization (EM) algorithms. The EM-algorithm consists of two steps: E (expectation) and M (maximization). This section describes the scheme used to construct the EM-algorithm.

Let gij be defined as the posterior probability of observation xi belonging to the j-th mixture component (class):

  gij = pj · f(xi; Θj) / Σ_{j=1}^{k} pj · f(xi; Θj)                        (3)

Let Θ be the vector of parameters: Θ = (p1, p2, ..., pk−1, Θ1, Θ2, ..., Θk). Next, we decompose the logarithmic likelihood function into three components:

  lnL(Θ) = Σ_{i=1}^{n} ln( Σ_{j=1}^{k} pj · f(xi; Θj) ) =                  (4)

    Σ_{j=1}^{k} Σ_{i=1}^{n} gij ln(pj)                                     (5)

  + Σ_{j=1}^{k} Σ_{i=1}^{n} gij ln f(xi; Θj)                               (6)

  − Σ_{j=1}^{k} Σ_{i=1}^{n} gij ln(gij)                                    (7)

The posterior probabilities gij are greater than or equal to 0 and satisfy Σ_{j=1}^{k} gij = 1 for any i.

For this algorithm to work, an initial value Θ̂⁰ is used to calculate initial approximations of the posterior probabilities gij⁰. This is the Expectation step. Then the values of gij⁰ are used to calculate the value of Θ̂¹ during the Maximization step.

Components (5) and (6) are maximized independently of each other. This is possible because component (5) depends only on pj (j = 1, ..., k), and component (6) depends only on Θj (j = 1, ..., k).

As a solution of optimization task (8)

  Σ_{j=1}^{k} Σ_{i=1}^{n} gij ln(pj) → max over p1, ..., pk                (8)

a value of pj^(t+1) for iteration t + 1 is calculated as:

  pj^(t+1) = (1/n) Σ_{i=1}^{n} gij^(t)                                     (9)

where t is the iteration number, t = 1, 2, ...

A solution of optimization task (10)

  Σ_{j=1}^{k} Σ_{i=1}^{n} gij ln f(xi; Θj) → max over Θ1, ..., Θk          (10)

depends on the particular type of the function f(·).

Next we consider a solution of optimization task (10) when f(·) is the Gaussian distribution.
B. Mixtures of Gaussian Distributions
Here we employ Gaussian distributions:

  N(x; µ, σ) = (1 / (σ√(2π))) exp(−(x − µ)² / (2σ²))                       (11)

Therefore, a specific formula to compute the posterior probabilities in the case of Gaussian mixtures is

  gij = exp(−(x − µ)² / (2σ²) + ln(pj) − ln(σj)) / Σ_{j=1}^{k} exp(−(x − µ)² / (2σ²) + ln(pj) − ln(σj))     (12)

TABLE I: Mixture model for DAX

  Component     Weight   Mean     Standard Deviation
  Component 1   0.152    -0.002   0.018
  Component 2   0.223     0.001   0.017
  Component 3   0.287     0.004   0.014
  Component 4   0.337     0.001   0.009
According to the EM-algorithm, the task is to find the values of the parameters Θj = (µj, σj) by solving maximization problem (14):

  lnL = Σ_{j=1}^{k} lnLj                                                   (13)

  Σ_{j=1}^{k} lnLj = Σ_{j=1}^{k} Σ_{i=1}^{n} ln( (1 / (σ√(2π))) exp(−(x − µ)² / (2σ²)) ) → max over Θ1, Θ2, ..., Θk     (14)

The solution of this maximization problem is given by eqs. (15) and (16):

  µ̂j = (1 / Σ_{i=1}^{n} gij) Σ_{i=1}^{n} gij xi                            (15)

  σ̂j = (1 / Σ_{i=1}^{n} gij) Σ_{i=1}^{n} gij (xi − µ̂j)²                    (16)

Having calculated the optimal values of the weights pj and parameters Θj (j = 1, ..., k) during a single iteration, we apply these optimal values to obtain estimates of the posterior probabilities during the Expectation step of the next iteration.

As a stop criterion, we use the difference between the values of the log-likelihood at iteration t and iteration t + 1:

  lnL^(t+1)(Θ) − lnL^(t)(Θ) < ε                                            (17)

where ε is an infinitesimally small real value.

TABLE II: Mixture model for Dow Jones (DJIA)

  Component     Weight   Mean    Standard Deviation
  Component 1   0.173    0.001   0.008
  Component 2   0.279    0.001   0.008
  Component 3   0.396    0.001   0.008
  Component 4   0.152    0.000   0.001

Fig. 1. Empirical distribution of DAX values during the period 14 April 2003 - 14 April 2004. p < 0.05, KSSTAT=0.031
III. AN APPLICATION OF THE EM-ALGORITHM TO APPROXIMATE EMPIRICAL DISTRIBUTIONS OF FINANCIAL INDICES WITH GAUSSIAN MIXTURES

In this section, I provide several examples of EM-algorithm applications to approximate empirical distributions of financial indices with Gaussian mixtures. I consider the following indices: DAX, Dow Jones Industrial, Nikkei, RTSI, and S&P 500.

In Figs. 1-5, the green line corresponds to the Gaussian distribution, the red line illustrates a Gaussian mixture and the blue lines represent components of the Gaussian mixture.
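To make the procedure concrete, the sketch below implements the E-step (12), the M-step updates (9), (15), (16) and the stop criterion (17) for one-dimensional data. This is not the author's code; the initialisation, tolerance and function names are my own assumptions.

import numpy as np

def em_gaussian_mixture(x, k=4, tol=1e-8, max_iter=500, seed=0):
    # Fit a k-component 1-D Gaussian mixture to the data x by EM.
    rng = np.random.default_rng(seed)
    n = len(x)
    p = np.full(k, 1.0 / k)                    # mixture weights
    mu = rng.choice(x, k, replace=False)       # initial means
    sigma = np.full(k, x.std())                # initial spreads
    prev_ll = -np.inf
    for _ in range(max_iter):
        # E-step: posterior responsibilities g[i, j], eq. (12)
        dens = p * np.exp(-(x[:, None] - mu) ** 2 / (2 * sigma ** 2)) / (sigma * np.sqrt(2 * np.pi))
        ll = np.log(dens.sum(axis=1)).sum()
        g = dens / dens.sum(axis=1, keepdims=True)
        # M-step: update weights, means and spreads, eqs. (9), (15), (16)
        nj = g.sum(axis=0)
        p = nj / n
        mu = (g * x[:, None]).sum(axis=0) / nj
        sigma = np.sqrt((g * (x[:, None] - mu) ** 2).sum(axis=0) / nj)
        if ll - prev_ll < tol:                 # stop criterion, eq. (17)
            break
        prev_ll = ll
    return p, mu, sigma

Applied to the log daily differences of an index, the fitted mixture can then be compared against the empirical distribution with a Kolmogorov-Smirnov test (e.g., scipy.stats.kstest with the mixture CDF supplied as a callable).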
Fig. 2. Empirical distribution of Dow Jones Industrial values during the period 14 April 2003 - 14 April 2004, p < 0.01, KSSTAT=0.023

TABLE III: Mixture model for Nikkei

  Component     Weight   Mean     Standard Deviation
  Component 1   0.167    -0.014   0.014
  Component 2   0.180     0.002   0.013
  Component 3   0.367     0.011   0.008
  Component 4   0.286    -0.002   0.005

Fig. 3. Empirical distribution of Nikkei index values during the period 14 April 2003 - 14 April 2004, p < 0.01, KSSTAT=0.027

TABLE IV: Mixture model for RTSI

  Component     Weight   Mean     Standard Deviation
  Component 1   0.062    -0.014   0.044
  Component 2   0.294    -0.005   0.019
  Component 3   0.303     0.005   0.014
  Component 4   0.341     0.011   0.011

Fig. 4. Empirical distribution of RTSI values during the period 14 April 2003 - 14 April 2004, p < 0.01, KSSTAT=0.021

TABLE V: Mixture model for S&P 500

  Component     Weight   Mean    Standard Deviation
  Component 1   0.014    0.011   0.027
  Component 2   0.331    0.000   0.009
  Component 3   0.470    0.001   0.009
  Component 4   0.186    0.000   0.001

Fig. 5. Empirical distribution of S&P 500 values during the period 14 April 2003 - 14 April 2004, p < 0.01, KSSTAT=0.024
IV. DISCUSSION AND CONCLUSION

The results presented in this study illustrate that the EM-algorithm can be effectively used to approximate empirical distributions of log daily differences of financial indices. For all five selected indices, the EM-algorithm provided a very good approximation of the empirical distribution with Gaussian mixtures.

The approximations based on Gaussian mixtures can be used to improve the application of Value-at-Risk and other methods for financial risk analysis.

This motivates further exploration of applying Gaussian mixtures and the EM-algorithm for the purpose of approximating empirical distributions of financial indices.
R EFERENCES
[1] Tarasenko, S., and Artukhov, S. (2004) Stock market pricing models.
In the Proceedings of International Conference of Young Scientists
Lomonosov 2004, p. 248-250.
[2] Day, N.E. (1969) Divisive cluster analysis and test for multivariate
normality. Session of the ISI, London, 1969.
[3] Day, N.E. (1969) Estimating the components of a mixture of normal
distributions., Biometrika, 56, N3.
[4] Dempster, A., Laird, G. and Rubin, J. (1977) Maximum likelihood from
incomplete data via EM algorithm. Journal of Royal Statistical Society,
B, 39.
| 5 |
Critical Parameters in Particle Swarm Optimisation
arXiv:1511.06248v1 [] 19 Nov 2015
J. Michael Herrmann∗, Adam Erskine, Thomas Joyce
Institute for Perception, Action and Behaviour
School of Informatics, The University of Edinburgh
10 Crichton St, Edinburgh EH8 9AB, Scotland, U.K.
∗ Corresponding author: [email protected]
Abstract
Particle swarm optimisation is a metaheuristic algorithm which finds reasonable solutions in
a wide range of applied problems if suitable parameters are used. We study the properties of the
algorithm in the framework of random dynamical systems which, due to the quasi-linear swarm
dynamics, yields analytical results for the stability properties of the particles. Such considerations
predict a relationship between the parameters of the algorithm that marks the edge between
convergent and divergent behaviours. Comparison with simulations indicates that the algorithm
performs best near this margin of instability.
1 PSO Introduction
Particle Swarm Optimisation (PSO, [1]) is a metaheuristic algorithm which is widely used to solve
search and optimisation tasks. It employs a number of particles as a swarm of potential solutions.
Each particle shares knowledge about the current overall best solution and also retains a memory of the best solution it has encountered itself previously. Otherwise the particles, after random
initialisation, obey a linear dynamics of the following form
  vi,t+1 = ω vi,t + α1 R1 (pi − xi,t) + α2 R2 (g − xi,t)
  xi,t+1 = xi,t + vi,t+1                                                   (1)
Here xi,t and vi,t , i = 1, . . . , N , t = 0, 1, 2, . . . , represent, respectively, the d-dimensional position in
the search space and the velocity vector of the i-th particle in the swarm at time t. The velocity update
contains an inertial term parameterised by ω and includes attractive forces towards the personal best
location pi and towards the globally best location g, which are parameterised by α1 and α2,
respectively. The symbols R1 and R2 denote diagonal matrices whose non-zero entries are uniformly
distributed in the unit interval. The number of particles N is quite low in most applications, usually
amounting to a few dozens.
In order to function as an optimiser, the algorithm uses a nonnegative cost function F : Rd → R,
where without loss of generality F (x∗ ) = 0 is assumed at an optimal solution x∗ . In many problems,
where PSO is applied, there are also states with near-zero costs that can be considered good solutions.
The cost function is evaluated for the state of each particle at each time step. If F (xi,t ) is better
than F (pi ), then the personal best pi is replaced by xi,t . Similarly, if one of the particles arrives at a
state with a cost less than F (g), then g is replaced in all particles by the position of the particle that
has discovered the new solution. If its velocity is non-zero, a particle will depart from the current
best location, but it may still have a chance to return guided by the force terms in the dynamics.
Numerous modifications and variants have been proposed since the algorithm’s inception [1] and
it continues to enjoy widespread usage. Ref. [2] groups around 700 PSO papers into 26 discernible
application areas. Google Scholar reveals over 150,000 results for “Particle Swarm Optimisation” in
total and 24,000 for the year 2014.
In the next section we will report observations from a simulation of a particle swarm and move
on to a standard matrix formulation of the swarm dynamics in order to describe some of the existing
analytical work on PSO. In Sect. 3 we will argue for a formulation of PSO as a random dynamical
system which will enable us to derive a novel exact characterisation of the dynamics of one-particle
system, which will then be generalised towards the more realistic case of a multi-particle swarm. In
Sect. 4 we will compare the theoretical predictions with simulations on a representative set of benchmark functions. Finally, in Sect. 5 we will discuss the assumption we have made in the theoretical
solution in Sect. 3 and address the applicability of our results to other metaheuristic algorithms and
to practical optimisation problems.
2 Swarm dynamics

2.1 Empirical properties
The success of the algorithm in locating good solutions depends on the dynamics of the particles in
the state space of the problem. In contrast to many evolution strategies, it is not straightforward to
interpret the particle swarm as following a landscape defined by the cost function. Unless the current
best positions p or g change, the particles do not interact with each other and follow an intrinsic
dynamics that does not even indirectly obtain any gradient information.
The particle dynamics depends on the parameterisation of the Eq. 1. To obtain the best result
one needs to select parameter settings that achieve a balance between the particles exploiting the
knowledge of good known locations and exploring regions of the problem space that have not been
visited before. Parameter values often need to be experimentally determined, and poor selection may
result in premature convergence of the swarm to poor local minima or in a divergence of the particles
towards regions that are irrelevant for the problem.
Empirically we can execute PSO against a variety of problem functions with a range of ω and
α1,2 values. Typically the algorithm shows performance of the form depicted in Fig. 1. The best
solutions found show a curved relationship between ω and α = α1 + α2 , with ω ≈ 1 at small α, and
α ≈ 4 at small ω. Large values of both α and ω are found to cause the particles to diverge leading
to results far from optimality, while at small values for both parameters the particles converge to
a nearby solution which sometimes is acceptable. For other cost functions similar relationships are
observed in numerical tests (see Sect. 4) unless no good solutions found due to problem complexity
or run time limits, see Sect. 5.3. For simple cost functions, such as a single well potential, there are
also parameter combinations with small ω and small α will usually lead to good results. The choice
of α1 and α2 at constant α may have an effect for some cost functions, but does not seem to have a
big effect in most cases.
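A scan of this kind can be sketched as follows; it reuses the pso_minimise helper from the sketch above, and the grid resolution, repetition count and iteration budget are illustrative stand-ins rather than the settings used for Fig. 1.

```python
import numpy as np
# assumes pso_minimise and rastrigin from the earlier sketch are available

def scan_parameters(F, omegas, alphas, repeats=5, iters=200):
    """Average best cost for each (omega, alpha) pair with alpha1 = alpha2 = alpha/2."""
    result = np.empty((len(omegas), len(alphas)))
    for i, w in enumerate(omegas):
        for j, a in enumerate(alphas):
            costs = [pso_minimise(F, omega=w, alpha1=a / 2, alpha2=a / 2,
                                  iters=iters, seed=r)[1] for r in range(repeats)]
            result[i, j] = np.mean(costs)
    return result   # low entries trace out the curved valley in the (omega, alpha) plane

# example of a coarse grid in the spirit of Fig. 1:
# grid = scan_parameters(rastrigin, np.linspace(-1.1, 1.1, 12), np.linspace(0.1, 5.0, 12))
```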
2.2 Matrix formulation
In order to analyse the behaviour of the algorithm it is convenient to use a matrix formulation by
inserting the velocity explicitly in the second equation (1).
z_{t+1} = M z_t + α1 R1 (p, p)⊤ + α2 R2 (g, g)⊤   with   z = (v, x)⊤   (2)

and

M = [ ωId    −α1 R1 − α2 R2
      ωId    Id − α1 R1 − α2 R2 ] ,   (3)
where Id is the unit matrix in d dimensions. Note that the two occurrences of R1 in Eq. 3 refer to
the same realisation of the random variable. Similarly, the two R2 ’s are the same realisation, but
different from R1 . Since the second and third term on the right in Eq. 2 are constant most of the
time, the analysis of the algorithm can focus on the properties of the matrix M . In spite of its
wide applicability, PSO has not been subject to deeper theoretical study, which may be due to the
multiplicative noise in the simple quasi-linear, quasi-decoupled dynamics. In previous studies the
effect of the noise has largely been ignored.
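As an illustration of this matrix formulation, the homogeneous part of the iteration can be simulated directly from Eq. 3. The sketch below is our own illustration for d = 1 with arbitrary parameter values; it draws fresh realisations of R1 and R2 at every step, as the algorithm does, and drops the constant terms of Eq. 2 (which corresponds to p = g = 0).

```python
import numpy as np

def step_matrix(omega, alpha1, alpha2, rng, d=1):
    """One realisation of the matrix M of Eq. 3 (block form, 2d x 2d)."""
    R1 = np.diag(rng.uniform(0.0, 1.0, d))
    R2 = np.diag(rng.uniform(0.0, 1.0, d))
    I = np.eye(d)
    top = np.hstack([omega * I, -alpha1 * R1 - alpha2 * R2])
    bottom = np.hstack([omega * I, I - alpha1 * R1 - alpha2 * R2])
    return np.vstack([top, bottom])

rng = np.random.default_rng(1)
z = np.array([0.0, 1.0])          # z = (v, x) for d = 1
for _ in range(100):              # iterate z_{t+1} = M_t z_t with a new M_t each step
    z = step_matrix(0.7, 0.7, 0.7, rng) @ z
print(np.linalg.norm(z))          # grows or shrinks depending on (omega, alpha1, alpha2)
```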
2.3 Analytical results
An early exploration of the PSO dynamics [4] considered a single particle in a one-dimensional space
where the personal and global best locations were taken to be the same. The random components were
Figure 1: Typical PSO performance as a function of its ω and α parameters. Here a 25 particle
swarm was run for pairs of ω and α values (α1 = α2 = α/2). Cost function here was the d = 10
non-continuous rotated Rastrigin function [3]. Each parameter pair was repeated 25 times and the
minimal costs after 2000 iterations were averaged.
replaced by their averages such that apart from random initialisation the algorithm was deterministic.
Varying the parameters was shown to result in a range of periodic motions and divergent behaviour
for the case of α1 + α2 ≥ 4. The addition of the random vectors was seen as beneficial as it adds
noise to the deterministic search.
Control of velocity, not requiring the enforcement of an arbitrary maximum value as in Ref. [4],
is derived in an analytical manner by [5]. Here eigenvalues derived from the dynamic matrix of a
simplified version of the PSO algorithm are used to imply various search behaviours. Thus, again the
α1 + α2 ≥ 4 case is expected to diverge. For α1 + α2 < 4 various cyclic and quasi-cyclic motions are
shown to exist for a non-random version of the algorithm.
In Ref. [6] again a single particle was considered in a one dimensional problem space, using a
deterministic version of PSO, setting R1 = R2 = 0.5. The eigenvalues of the system were determined
as functions of ω and a combined α, which leads to three conditions: The particle is shown to converge
when ω < 1, α > 0 and 2ω −α+2 > 0. Harmonic oscillations occur for ω 2 +α2 −2ωα−2ω −2α+1 < 0
and a zigzag motion is expected if ω < 0 and ω−α+1 < 0. As with the preceding papers the discussion
of the random numbers in the algorithm views them purely as enhancing the search capabilities by
adding a drunken walk to the particle motions. Their replacement by expectation values was thus
believed to simplify the analysis with no loss of generality.
We show in this contribution that the iterated use of these random factors R1 and R2 in fact
adds a further level of complexity to the dynamics of the swarm which affects the behaviour of the
algorithm in a non-trivial way. In Ref. [7] these factors were given some consideration. Regions
of convergence and divergence separated by a curved line were predicted. This line separating these
regions (an equation for which is given in Ref. [8]) fails to include some parameter settings that lead to
convergent swarms. Our analytical solution of the stability problem for the swarm dynamics explains
why parameter settings derived from the deterministic approaches are not in line with experiences
from practical tests. For this purpose we will now formulate the PSO algorithm as a random dynamical
system and present an analytical solution for the swarm dynamics in a simplified but representative
case.
3 Critical swarm conditions for a single particle
3.1 PSO as a random dynamical system
As in Refs. [4, 6] the dynamics of the particle swarm will be studied here as well in the single-particle
case. This can be justified because the particles interact only via the global best position such that,
while g is unchanged (cf. Eq. 1), single particles exhibit qualitatively the same dynamics as in the swarm. For the one-particle case we have necessarily p = g, such that shift invariance allows us to set both to zero, which leads to the following stochastic-map formulation of the PSO dynamics (2).
z_{t+1} = M z_t   (4)
Extending earlier approaches we will explicitly consider the randomness of the dynamics, i.e. instead
of averages over R1 and R2 we consider a random dynamical system with dynamical matrices M
chosen from the set
Mα,ω = { [ ωId    −αR
           ωId    Id − αR ] ,  Rij = 0 for i ≠ j and Rii ∈ [0, 1] } ,   (5)
with R being in both rows the same realisation of a random diagonal matrix that combines the effects
of R1 and R2 (1). The parameter α is the sum α1 + α2 with α1 , α2 ≥ 0 and α > 0. As the diagonal
elements of R1 and R2 are uniformly distributed in [0, 1], the distribution of the random variable
Rii = (α1/α) R1,ii + (α2/α) R2,ii in Eq. 4 is given by a convolution of two uniform random variables, namely

Pα1,α2 (r) =  α² r / (α1 α2)          if 0 ≤ r ≤ min{α1/α, α2/α}
              α / max{α1 , α2}         if min{α1/α, α2/α} < r ≤ max{α1/α, α2/α}
              α² (1 − r) / (α1 α2)     if max{α1/α, α2/α} < r ≤ 1          (6)
if the variable r ∈ [0, 1] and Pα1 ,α2 (r) = 0 otherwise. Pα1 ,α2 (r) has a tent shape for α1 = α2 and a
box shape in the limits of either α1 → 0 or α2 → 0. The case α1 = α2 = 0, where the swarm does
not obtain information about the fitness function, will not be considered here.
We expect that the multi-particle PSO is well represented by the simplified version for α2 ≫ α1
or α1 ≫ α2 , the latter case being irrelevant in practice. For α1 ≈ α2 deviations from the theory may
occur because in the multi-particle case p and g will be different for most particles. We will discuss
this as well as the effects of the switching of the dynamics at discovery of better solutions in Sect. 5.2.
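For illustration, matrices from the set Mα,ω of Eq. 5 can be sampled directly by drawing Rii = (α1/α)R1,ii + (α2/α)R2,ii, which by construction follows the convolution density of Eq. 6. The snippet below is a minimal sketch with arbitrary parameter values; the printed mean is only a rough sanity check, and a histogram of the samples would show the tent/box shapes mentioned above.

```python
import numpy as np

def sample_M(alpha1, alpha2, omega, rng, d=1):
    """Draw one matrix from the set M_{alpha,omega} of Eq. 5."""
    alpha = alpha1 + alpha2
    # R_ii = (alpha1/alpha) R1_ii + (alpha2/alpha) R2_ii, distributed as in Eq. 6
    R = np.diag((alpha1 * rng.uniform(0, 1, d) + alpha2 * rng.uniform(0, 1, d)) / alpha)
    I = np.eye(d)
    return np.block([[omega * I, -alpha * R],
                     [omega * I, I - alpha * R]])

rng = np.random.default_rng(0)
samples = [sample_M(1.0, 1.0, 0.7, rng)[0, 1] for _ in range(10000)]
# the (0,1) entry equals -alpha * R_00; for alpha1 = alpha2 its distribution is a scaled tent
print(np.mean(samples))   # roughly -alpha/2 = -1.0
```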
3.2 Marginal stability
While the swarm does not discover any new solutions, its dynamical properties are determined by
an infinite product of matrices from the set M (5). Such products have been studied for several
decades [9] and have found applications in physics, biology and economics. Here they provide a
convenient way to explicitly model the stochasticity of the swarm dynamics such that we can claim
that the performance of PSO is determined by the stability properties of the random dynamical
system (4).
Since the equation (4) is linear, the analysis can be restricted to vectors on the unit sphere in the
(v, x) space, i.e. to unit vectors
a = (x, v)⊤ / ‖(x, v)⊤‖,   (7)
where ‖ · ‖ denotes the Euclidean norm. Unless the set of matrices shares the same eigenvectors
(which is not the case here) standard stability analysis in terms of eigenvalues is not applicable.
Instead we will use means from the theory of random matrix products in order to decide whether
the set of matrices is stochastically contractive. The properties of the asymptotic dynamics can be
described based on a double Lebesgue integral over the unit sphere S 2d−1 and the set M [10, 11].
As in Lyapunov exponents, the effect of the dynamics is measured in logarithmic units in order to
account for multiplicative action.
λ(α, ω) = ∫ dνα,ω (a) ∫ dPα,ω (M ) log ‖M a‖   (8)
If λ(α, ω) is negative the algorithm will converge to p with probability 1, while for positive λ arbitrarily
large fluctuations are possible. While the measure for the inner integral (8) is given by Eq. 6, we
have to determine the stationary distribution ν on the unit sphere for the outer integral. It is given
as the solution of the integral equation
να,ω (a) = ∫ dνα,ω (b) ∫ dPα,ω (M ) δ(a, M b/‖M b‖),   a, b ∈ S^{2d−1}.   (9)
The existence of the invariant measure requires the dynamics to be ergodic, which is ensured if at least some of the elements of M have complex eigenvalues, as is the case for ω² + α²/4 − ωα − 2ω − α + 1 < 0 (see above, [6]). This condition excludes a small region in the parameter space at small values of ω, where we have to take all ergodic components into account; there are no more than two such components, which by symmetry have the same stability properties. The stationary distribution ν depends on the parameters α and ω and differs strongly from a homogeneous distribution; see Fig. 2 for a few examples in the case d = 1. Critical parameters are obtained from Eq. 8 by the relation
Figure 2: Stationary distribution να,ω (a) on the unit circle (a ∈ [0, 2π)) in the (x, v) plane for a one-particle system (4) for ω = 0.7 and α = α2 = 0.5, 1.5, 2.5, 3.5, 4.5 (the distribution with peak near π is for α = 0.5, otherwise main peaks are highest for largest α).
λ(α, ω) = 0.   (10)
Solving Eq. 10 is difficult in higher dimensions, so we rely on the linearity of the system when
considering the (d = 1)-case as representative. The curve in Fig. 3 represents the solution of Eq. 10
for d = 1 and α = α2 . For other settings of α1 and α2 the distribution of the random factors has
a smaller variance rendering the dynamics more stable such that the contour moves towards larger
parameter values (see Fig. 4). Inside the contour λ (α, ω) is negative, meaning that the state will
approach the origin with probability 1. Along the contour and in the outside region large state
fluctuations are possible. Interesting parameter values are expected near the curve where due to a
coexistence of stable and unstable dynamics (induced by different sequences of random matrices) a
theoretically optimal combination of exploration and exploitation is possible. For specific problems,
however, deviations from the critical curve can be expected to be beneficial.
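Although Eq. 10 is hard to solve analytically, λ(α, ω) can be estimated by Monte Carlo: iterate products of matrices drawn from Mα,ω, project the state back onto the unit sphere as in Eq. 7, and average log ‖M a‖ as in Eq. 8. The sketch below (d = 1, α1 = α2) reuses sample_M from the previous sketch; the iteration counts, tolerances and the bisection in α, which assumes rough monotonicity of λ in α at fixed ω, are our own choices. Its zero crossing approximates a point on the critical curve of Fig. 3.

```python
import numpy as np
# assumes sample_M from the previous sketch

def lyapunov_estimate(alpha, omega, n_steps=100000, seed=0):
    """Monte Carlo estimate of lambda(alpha, omega) in Eq. 8 for d = 1, alpha1 = alpha2."""
    rng = np.random.default_rng(seed)
    a = np.array([1.0, 0.0])              # unit vector on the circle in (v, x) space
    total = 0.0
    for _ in range(n_steps):
        Ma = sample_M(alpha / 2, alpha / 2, omega, rng) @ a
        norm = np.linalg.norm(Ma)
        total += np.log(norm)             # log growth factor, cf. Eq. 8
        a = Ma / norm                      # project back onto the unit sphere (Eqs. 7, 9)
    return total / n_steps

def critical_alpha(omega, lo=0.5, hi=6.0, tol=0.05):
    """Crude bisection for the alpha value where lambda changes sign at fixed omega."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if lyapunov_estimate(mid, omega) < 0 else (lo, mid)
    return 0.5 * (lo + hi)

# print(critical_alpha(0.7))   # should land near the curve of Fig. 3
```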
3.3 Personal best vs. global best
Due to linearity, the particle swarm update rule (1) is subject to a scaling invariance which was
already used in Eq. 7. We now consider the consequences of linearity for the case where personal best
and global best differ, i.e. p 6= g. For an interval where pi and g remain unchanged, the particle i
with personal best pi will behave like a particle in a swarm where together with x and v, pi is also
scaled by a factor κ > 0. The finite-time approximation of the Lyapunov exponent (see Eq. 8)
λ(t) = (1/t) log⟨‖(x_t , v_t )‖⟩   (11)
Figure 3: Solution of Eq. 10 representing a single particle in one dimension with a fixed best value at
g = p = 0. The curve that has higher α-values on the right (magenta) is for α1 = α2 , the other curve
(green) is for α = α2 , α1 = 0. Except for the regions near ω = ±1, where numerical instabilities can
occur, a simulation produces an indistinguishable curve. In the simulation we tracked the probability
of a particle to either reach a small region (10−6 ) near the origin or to escape beyond a radius of
106 after starting from a random location on the unit circle. Along the curve both probabilities are
equal.
will be changed by an amount of (1/t) log κ by the scaling. Although this has no effect on the asymptotic
behaviour, we will have to expect an effect on the stability of the swarm for finite times which may be
relevant for practical applications. For the same parameters, the swarm will be more stable if κ < 1
and less stable for κ > 1, provided that the initial conditions are scaled in the same way. Likewise, if
‖p‖ is increased, then the critical contour will move inwards, see Fig. 5. Note that in this figure, the low number of iterations leads to a few erroneous trials at parameter pairs outside the outer contour
which have been omitted here. We also do not consider the behaviour near α = 0 which is complex
but irrelevant for PSO. The contour (10) can be seen as the limit κ → 0 such that only an increase
of ‖p‖ is relevant for comparison with the theoretical stability result. When comparing the stability
results with numerical simulations for real optimisation problems, we will need to take into account
the effects caused by differences between p and g in a multi-particle swarm with finite runtimes.
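The finite-time effect of the scaling κ can be probed with a simple two-point experiment in the spirit of Fig. 5: simulate a single one-dimensional particle with fixed personal and global bests, scale x, v and p by κ, and record how often the trajectory escapes within a fixed number of iterations. The following sketch is our own illustration; the thresholds, trial counts and the choice p = 0.1, g = 0 merely mirror the figure caption.

```python
import numpy as np

def escape_probability(omega, alpha, p=0.1, g=0.0, kappa=1.0,
                       iters=200, trials=2000, seed=0):
    """Fraction of runs in which a single 1-d particle escapes beyond a large radius."""
    rng = np.random.default_rng(seed)
    escapes = 0
    for _ in range(trials):
        theta = rng.uniform(0, 2 * np.pi)
        x, v = kappa * np.cos(theta), kappa * np.sin(theta)   # start on a circle of radius kappa
        pk, gk = kappa * p, kappa * g                          # scale p (and g) by kappa
        for _ in range(iters):
            r1, r2 = rng.uniform(0, 1), rng.uniform(0, 1)
            v = omega * v + (alpha / 2) * r1 * (pk - x) + (alpha / 2) * r2 * (gk - x)
            x = x + v
            if abs(x) > 1e6:                                   # treat this as divergence
                escapes += 1
                break
    return escapes / trials
```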
4 Optimisation of benchmark functions
Metaheuristic algorithms are often tested in competition against benchmark functions designed to
present different problem space characteristics. The 28 functions [3] contain a mix of unimodal, basic
multimodal and composite functions. The domains of the functions in this test set are all defined to be [−100, 100]^d, where d is the dimensionality of the problem. Particles were initialised within the same domain. We use 10-dimensional problems throughout. Our implementation of PSO performed no spatial or velocity clamping. In all trials a swarm of 25 particles was used. We repeated the algorithm 100 times, on each occasion allowing 200, 2000, or 20000 iterations to pass before recording the best solution found by the swarm. For the competition 50000 fitness evaluations were allowed, which corresponds to 2000 iterations with 25 particles. Other iteration numbers were included for comparison. This protocol was carried out for pairs of ω ∈ [−1.1, 1.1] and α ∈ [0, 5]. This was repeated
for all 28 functions. The averaged solution costs as a function of the two parameters showed curved
valleys similar to that in Fig. 1 for all problems. For each function we obtain different best values
along (or near) the theoretical curve (10). There appears to be no preferable location within the
valley. Some individual functions yield best performance near ω = 1. This is not the case near ω = 0,
although the global average performance over all test functions is better in the valley near ω = 0 than
near ω = 1; see Fig. 4.
Figure 4: Best parameter regions for 200 (blue), 2000 (green), and 20000 (magenta) iterations: For
more iterations the region shifts towards the critical line. Cost averaged over 100 runs and 28 CEC
benchmark functions. The red (outer) curve represents the zero Lyapunov exponent for N = 1, d = 1,
α1 = α2 .
At medium values of ω the difference between the analytical solutions for the cases α1 = α2
and α1 = 0 is strongest, see Fig. 4. In simulations this difference shows up to a lesser extent, revealing a shortcoming of the one-particle approximation. Because in the multi-particle case p and g are often different, the resulting vector will have a smaller norm than in the one-particle case, where p = g. The case p ≠ g violates the assumption of the theory that the dynamics can be described based on unit vectors. While a particle far away from both p and g will behave as predicted from the one-particle case, at length scales smaller than ‖p − g‖ the retractive forces will tend to be reduced, such that the inertia becomes more effective and the particle is locally less stable; numerically this shows up as optimal parameters that are smaller than predicted.
5 Discussion
5.1 Relevance of criticality
Our analytical approach predicts a locus of α and ω pairings that maintain the critical behaviour
of the PSO swarm. Outside this line the swarm will diverge unless steps are taken to constrain it.
Inside, the swarm will eventually converge to a single solution. In order to locate a solution within
the search space, the swarm needs to converge at some point, so the line represents an upper bound
on the exploration-exploitation mix that a swarm manifests. For parameters on the critical line,
fluctuations are still arbitrarily large. Therefore, subcritical parameter values can be preferable if the
settling time is of the same order as the scheduled runtime of the algorithm. If, in addition, a typical
length scale of the problem is known, then the finite standard deviation of the particles in the stable
parameter region can be used to decide about the distance of the parameter values from the critical
curve. These dynamical quantities can be approximately set, based on the theory presented here,
such that a precise control of the behaviour of the algorithm is in principle possible.
The observation that the empirically optimal parameter values are distributed along the critical curve confirms the expectation that critical or near-critical behaviour is the main reason for the success of the algorithm. Critical fluctuations are a plausible tool in search problems if, apart from certain smoothness assumptions, nothing is known about the cost landscape: the majority of excursions will
exploit the smoothness of the cost function by local search, whereas the fat tails of the distribution
allow the particles to escape from local minima.
Figure 5: For p 6= g we define neutral stability as the equilibrium between divergence and convergence. Convergence means here that the particle approaches the line connecting p and g. Curves
are for a one-dimensional problem with p = 0.1 and g = 0 scaled (see Sect. 3.3) by κ = 1 (outer
curve) κ = 0.1 and κ = 0.04 (inner curve). Results are for 200 iterations and averaged over 100000
repetitions.
5.2 Switching dynamics at discovery of better solutions
Eq. 2 shows that the discovery of a better solution affects only the constant terms of the linear
dynamics of a particle, whereas its dynamical properties are governed by the linear coefficient matrices.
However, in the time step after a particle has found a new solution the corresponding force term in the
dynamics is zero (see Eq. 1) such that the particle dynamics slows down compared to the theoretical
solution which assumes a finite distance from the best position at all (finite) times. As this affects
usually only one particle at a time and because new discoveries tend to become rarer over time, this
effect will be small in the asymptotic dynamics, although it could justify the empirical optimality of
parameters in the unstable region for some test cases.
The question is nevertheless, how often these changes occur. A weakly converging swarm can still
produce good results if it often discovers better solutions by means of the fluctuations it performs
before settling into the current best position. For cost functions that are not ‘deceptive’, i.e. where
local optima tend to be near better optima, parameter values far inside the critical contour (see
Fig. 3) may give good results, while in other cases more exploration is needed.
5.3 The role of personal best and global best
A numerical scan of the (α1 , α2 ) plane shows a valley of good fitness values, which, at small fixed
positive ω, is roughly linear and described by the relation α1 +α2 = const, i.e. only the joint parameter
α = α1 + α2 matters. For large ω, and accordingly small predicted optimal α values, the valley is less
straight. This may be because the effect of the known solutions is relatively weak, so the interaction
of the two components becomes more important. In other words if the movement of the particles is
mainly due to inertia, then the relation between the global and local best is non-trivial, while at low
inertia the particles can adjust their p vectors quickly towards the g vector such that both terms
become interchangeable.
Finally, we should mention that more particles, longer runtime as well as lower search space
dimension increase the potential for exploration. They all lead to the empirically determined optimal
parameters being closer to the critical curve.
6 Conclusion
PSO is a widely used optimisation scheme which is theoretically not well understood. Existing theory
concentrates on a deterministic version of the algorithm which does not possess useful exploration
capabilities. We have studied the algorithm by means of a product of random matrices which allows
us to predict useful parameter ranges and may allow for more precise settings if a typical length scale
of the problem is known. A weakness of the current approach is that it focuses on the standard
PSO [1], which is known to include biases [12, 13] that are not necessarily justifiable, and to be outperformed on benchmark sets and in practical applications by many of the existing PSO variants.
Similar analyses are certainly possible and are expected to be carried out for some of the variants, even
though the field of metaheuristic search is often portrayed as largely inert to theoretical advances. If
the dynamics of particle swarms is better understood, the algorithms may become useful as efficient
particle filters which have many applications beyond heuristic optimisation.
Acknowledgments
This work was supported by the Engineering and Physical Sciences Research Council (EPSRC), grant
number EP/K503034/1.
References
[1] J. Kennedy and R. Eberhart. Particle swarm optimization. In Proceedings IEEE International
Conference on Neural Networks, volume 4, pages 1942–1948. IEEE, 1995.
[2] R. Poli. Analysis of the publications on the applications of particle swarm optimisation. Journal
of Artificial Evolution and Applications, 2008(3):1–10, 2008.
[3] CEC2013. http://www.ntu.edu.sg/home/EPNSugan/index files/CEC2013/CEC2013.htm.
[4] J. Kennedy. The behavior of particles. In V.W. Porto, N. Saravanan, D.Waagen, and A. E.
Eiben, editors, Evolutionary programming VII, pages 579–589. Springer, 1998.
[5] M. Clerc and J. Kennedy. The particle swarm-explosion, stability, and convergence in a multidimensional complex space. IEEE Transactions on Evolutionary Computation, 6(1):58–73, 2002.
[6] I. C. Trelea. The particle swarm optimization algorithm: convergence analysis and parameter
selection. Information Processing Letters, 85(6):317–325, 2003.
[7] M. Jiang, Y. Luo, and S. Yang. Stagnation analysis in particle swarm optimization. In Swarm
Intelligence Symposium, 2007. SIS 2007. IEEE, pages 92–99. IEEE, 2007.
[8] C. W. Cleghorn and A. P Engelbrecht. A generalized theoretical deterministic particle swarm
model. Swarm Intelligence, 8(1):35–59, 2014.
[9] H. Furstenberg and H. Kesten. Products of random matrices. Annals of Mathematical Statistics,
31(2):457–469, 1960.
[10] V. N. Tutubalin. On limit theorems for the product of random matrices. Theory of Probability
& Its Applications, 10(1):15–27, 1965.
[11] R. Z. Khas’minskii. Necessary and sufficient conditions for the asymptotic stability of linear
stochastic systems. Theory of Probability & Its Applications, 12(1):144–147, 1967.
[12] M. Clerc. Confinements and biases in particle swarm optimisation. Technical Report hal00122799, Open archive HAL, 2006.
[13] W. M. Spears, D. Green, and D. F. Spears. Biases in particle swarm optimization. International
Journal of Swarm Intelligence Research, 1(2):34–57, 2010.
| 9 |
GENERALIZED U-FACTORIZATION IN COMMUTATIVE RINGS WITH
ZERO-DIVISORS
arXiv:1312.7403v1 [] 28 Dec 2013
CHRISTOPHER PARK MOONEY
Abstract. Recently substantial progress has been made on generalized factorization techniques in integral domains, in particular τ -factorization. There has also been advances made
in investigating factorization in commutative rings with zero-divisors. One approach which
has been found to be very successful is that of U-factorization introduced by C.R. Fletcher.
We seek to synthesize work done in these two areas by generalizing τ -factorization to rings
with zero-divisors by using the notion of U-factorization.
2010 AMS Subject Classification: 13A05, 13E99, 13F15
1. Introduction
Much work has been done on generalized factorization techniques in integral domains.
There is an excellent overview in [4], where particular attention is paid to τ -factorization.
Several authors have investigated ways to extend factorization to commutative rings with
zero-divisors; see, for instance, the work of D.D. Anderson, Valdez-Leon, Aǧargün, and Chun [3, 6, 7]. One particular method is that of U-factorization, introduced by C.R. Fletcher in [11] and [12]. This method of factorization has been studied extensively by Michael Axtell and others in [9, 10, 8]. We synthesize this work into a single study of what we will call τ -U-factorization.
In this paper, we will assume R is a commutative ring with 1. Let R∗ = R − {0}, let
U(R) be the set of units of R, and let R# = R∗ − U(R) be the non-zero, non-units of
R. As in [10], we define U-factorization as follows. Let a ∈ R be a non-unit. If a =
λa1 · · · an b1 · · · bm is a factorization with λ ∈ U(R), ai , bi ∈ R# , then we will call a =
λa1 a2 · · · an ⌈b1 b2 · · · bm ⌉ a U-factorization of a if (1) ai (b1 · · · bm ) = (b1 · · · bm ) for all 1 ≤ i ≤
n and (2) bj (b1 · · · b̂j · · · bm ) ≠ (b1 · · · b̂j · · · bm ) for 1 ≤ j ≤ m, where b̂j means bj is omitted
from the product. Here (b1 · · · bm ) is the principal ideal generated by b1 · · · bm . The bi ’s in
this particular U-factorization above will be referred to as essential divisors. The ai ’s in this
particular U-factorization above will be referred to as inessential divisors. A U-factorization
is said to be trivial if there is only one essential divisor.
Note: we have added a single unit factor in front with the inessential divisors which was
not in M. Axtell’s original paper. This is added for consistency with the τ -factorization
definitions and it is evident that a unit is always inessential. We allow only one unit factor,
so it will not affect any of the finite factorization properties.
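For intuition, conditions (1) and (2) of the U-factorization definition above can be checked mechanically in a finite ring such as Zn, where a principal ideal is simply a set of multiples. The following sketch is our own illustration; it ignores the leading unit λ, works with a chosen split of the factors into inessential and essential parts, and none of the helper names come from the references.

```python
from math import prod

def ideal(a, n):
    """Principal ideal (a) in Z_n as a frozenset of multiples of a."""
    return frozenset((a * r) % n for r in range(n))

def is_U_factorization(a, inessential, essential, n):
    """Check conditions (1) and (2) of the U-factorization definition in Z_n."""
    if (prod(inessential) * prod(essential)) % n != a % n:
        return False
    B = prod(essential) % n
    # (1): every inessential a_i satisfies a_i (b_1...b_m) = (b_1...b_m)
    if any(ideal((ai * B) % n, n) != ideal(B, n) for ai in inessential):
        return False
    # (2): dropping any essential b_j changes the ideal of the remaining product
    for j in range(len(essential)):
        rest = prod(essential[:j] + essential[j + 1:]) % n
        if ideal((essential[j] * rest) % n, n) == ideal(rest, n):
            return False
    return True

print(is_U_factorization(0, [], [10, 10], 20))   # True:  0 = <10 * 10> in Z_20
print(is_U_factorization(3, [3], [3], 6))        # True:  3 = 3<3> in Z_6
```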
Remark. If a = λa1 · · · an ⌈b1 · · · bm ⌉ is a U-factorization, then for any 1 ≤ i0 ≤ m, we have
(a) = (b1 · · · bm ) ( (b1 · · · bc
i0 · · · bm ). This is immediate from the definition of U-factorization.
Date: November 6, 2017.
Key words and phrases. factorization, zero-divisors, commutative rings.
In [9], M. Axtell defines non-units a and b to be associate if (a) = (b), and a non-zero non-unit a is said to be irreducible if a = bc implies a is associate to b or c. A commutative ring R is said to be U-atomic if every non-zero non-unit has a U-factorization in which every essential
divisor is irreducible. R is said to be a U-finite factorization ring if every non-zero non-unit
has a finite number of distinct U-factorizations. R is said to be a U-bounded factorization
ring if every non-zero non-unit has a bound on the number of essential divisors in any Ufactorization. R is said to be a U-weak finite factorization ring if every non-zero non-unit
has a finite number of non-associate essential divisors. R is said to be a U-atomic idf-ring if
every non-zero non-unit has a finite number of non-associate irreducible essential divisors. R
is said to be a U-half factorization ring if R is U-atomic and every U-atomic factorization has
the same number of irreducible essential divisors. R is said to be a U-unique factorization
ring if it is a U-HFR and in addition each U-atomic factorization can be arranged so the
essential divisors correspond up to associate. In [10, Theorem 2.1], it is shown this definition
of U-UFR is equivalent to the one given by C.R. Fletcher in [11, 12].
In the second section, we begin with some preliminary definitions and results about τ factorization in integral domains as well as factorization in rings with zero-divisors. In the
third section, we state definitions for τ -U-irreducible elements and τ -U-finite factorization
properties. We also prove some preliminary results using these new definitions. In the
fourth section, we demonstrate the relationship between rings satisfying the various τ -U finite
factorization properties. Furthermore, we compare these properties with the rings satisfying
τ -finite factorization properties studied in [13]. In the final section, we investigate direct
products of rings. We introduce a relation τ× which carries many τ -U-finite factorization
properties of the component rings through the direct product.
2. Preliminary Definitions and Results
As in [6], we let a ∼ b if (a) = (b), a ≈ b if there exists λ ∈ U(R) such that a = λb, and
a ≅ b if (1) a ∼ b and (2) a = b = 0 or if a = rb for some r ∈ R then r ∈ U(R). We say a and b are associates (resp. strong associates, very strong associates) if a ∼ b (resp. a ≈ b, a ≅ b). As in [1], a ring R is said to be a strongly associate (resp. very strongly associate) ring if for any a, b ∈ R, a ∼ b implies a ≈ b (resp. a ≅ b).
Let τ be a relation on R# , that is, τ ⊆ R# × R# . We will always assume further that τ is
symmetric. Let a be a non-unit, ai ∈ R# and λ ∈ U(R), then a = λa1 · · · an is said to be a
τ -factorization if ai τ aj for all i 6= j. If n = 1, then this is said to be a trivial τ -factorization.
Each ai is said to be a τ -factor, or that ai τ -divides a, written ai |τ a.
We say that τ is multiplicative (resp. divisive) if for a, b, c ∈ R# (resp. a, b, b′ ∈ R# ), aτ b
and aτ c imply aτ bc (resp. aτ b and b′ | b imply aτ b′ ). We say τ is associate (resp. strongly
associate, very strongly associate) preserving if for a, b, b′ ∈ R# with b ∼ b′ (resp. b ≈ b′ ,
b ∼
= b′ ) aτ b implies aτ b′ . We define a τ -refinement of a τ -factorization λa1 · · · an to be a
factorization of the form
(λλ1 · · · λn ) · b11 · · · b1m1 · b21 · · · b2m2 · · · bn1 · · · bnmn
where ai = λi bi1 · · · bimi is a τ -factorization for each i. This is slightly different from the
original definition in [4] where no unit factor was allowed, and one can see they are equivalent
when τ is associate preserving. We then say that τ is refinable if every τ -refinement of a
τ -factorization is a τ -factorization. We say τ is combinable if whenever λa1 · · · an is a τ factorization, then so is each λa1 · · · ai−1 (ai ai+1 )ai+2 · · · an .
We now summarize several of the definitions given in [13]. Let a ∈ R be a non-unit.
Then a is said to be τ -irreducible or τ -atomic if for any τ -factorization a = λa1 · · · an , we
have a ∼ ai for some i. We will say a is τ -strongly irreducible or τ -strongly atomic if for
any τ -factorization a = λa1 · · · an , we have a ≈ ai for some ai . We will say that a is τ -mirreducible or τ -m-atomic if for any τ -factorization a = λa1 · · · an , we have a ∼ ai for all i.
Note: the m is for “maximal” since such an a is maximal among principal ideals generated
by elements which occur as τ -factors of a. We will say that a is τ -very strongly irreducible or
τ -very strongly atomic if a ≅ a and a has no non-trivial τ -factorizations. See [13] for more
equivalent definitions of these various forms of τ -irreducibility.
From [13, Theorem 3.9], we have the following relations where † represents the implication
requires a strongly associate ring:
τ -very strongly irred. ⟹ τ -strongly irred. ⟹ τ -irred.
τ -very strongly irred. ⟹ τ -m-irred. ⟹ τ -irred.
τ -m-irred. ⟹ τ -strongly irred.  [requires †]
3. τ -U-irreducible elements
A τ -U-factorization of a non-unit a ∈ R is a U-factorization a = λa1 a2 · · · an ⌈b1 b2 · · · bm ⌉
for which λa1 · · · an b1 · · · bm is also a τ -factorization.
Given a symmetric relation τ on R# , we say R is τ -U-refinable if for every τ -U-factorization a = λa1 · · · an ⌈b1 · · · bm ⌉ of any non-unit a ∈ R, and any τ -U-factorization bi = λ′ c1 · · · cn′ ⌈d1 · · · dm′ ⌉ of an essential divisor bi , the refinement
a = λλ′ a1 · · · an c1 · · · cn′ ⌈b1 · · · bi−1 d1 · · · dm′ bi+1 · · · bm ⌉
is again a τ -U-factorization.
Example 3.1. Let R = Z/20Z, and let τ = R# × R# .
Certainly 0 = ⌈10 · 10⌉ is a τ -U-factorization. But 10 = ⌈2 · 5⌉ is also a τ -U-factorization; however, the refinement 0 = ⌈2 · 5 · 2 · 5⌉ is not a U-factorization, since 5 becomes inessential after a τ -U-refinement. It will sometimes be important to ensure that the essential divisors of a τ -U-refinement of a τ -U-factorization's essential divisors remain essential. We will see that in a présimplifiable ring there are no inessential divisors, so for τ refinable, R will be τ -U-refinable.
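The ideal computations behind Example 3.1 can be verified with the same finite check; this reuses the ideal and is_U_factorization helpers from the earlier Zn sketch and is purely illustrative.

```python
# assumes ideal() and is_U_factorization() from the earlier Z_n sketch
n = 20
print(is_U_factorization(0, [], [10, 10], n))       # True:  0 = <10 * 10>
print(is_U_factorization(10, [], [2, 5], n))        # True:  10 = <2 * 5>
print(is_U_factorization(0, [], [2, 5, 2, 5], n))   # False: one 5 is inessential
print(ideal(2 * 2 * 5 % n, n) == ideal(0, n))       # dropping a 5 leaves the same ideal (0)
```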
As stated in [9], the primary benefit of looking at U-factorizations is the elimination of
troublesome idempotent elements that ruin many of the finite factorization properties. For
instance, even Z6 is not a BFR (a ring in which every non-unit has a bound on the number of
non-unit factors in any factorization) because we have 3 = 32 . Thus, 3 is an idempotent, so
3 = 3n for all n ≥ 1 which yields arbitrarily long factorizations. When we use U-factorization,
we see any of these factorizations can be rearranged to 3 = 3n−1 ⌈3⌉, which has only one
essential divisor.
Let α ∈ {irreducible, strongly irreducible, m-irreducible, very strongly irreducible}. Let a
be a non-unit. If a = λa1 a2 · · · an ⌈b1 b2 · · · bm ⌉ is a τ -U-factorization, then this factorization
is said to be a τ -U-α-factorization if it is a τ -U-factorization and the essential divisors bi are
τ -α for 1 ≤ i ≤ m.
One must be somewhat more careful with U-factorizations as there is a loss of uniqueness
in the factorizations. For instance, if we let R = Z6 × Z8 , then we can factor (3, 4) as
(3, 1) ⌈(3, 3)(1, 4)⌉ or (3, 3) ⌈(3, 1)(1, 4)⌉. On the bright side, we have [8, Proposition 4.1].
Theorem 3.2. Every factorization can be rearranged into a U-factorization.
Corollary 3.3. Let R be a commutative ring with 1 and τ a symmetric relation on R# .
Let α ∈ {irreducible, strongly irreducible, m-irreducible, very strongly irreducible}. For every
τ -α factorization of a non-unit a ∈ R, a = λa1 · · · an , we can rearrange this factorization
into a τ -U-α-factorization.
Proof. Let a = λa1 · · · an be a τ -α-factorization. By Theorem 3.2 we can rearrange this to
form a U-factorization. This remains a τ -factorization since τ is assumed to be symmetric.
Lastly each ai is τ -α, so the essential divisors are τ -α.
This leads us to another equivalent definition of τ -irreducible.
Theorem 3.4. Let a ∈ R be a non-unit. Then a is τ -irreducible if and only if any τ -Ufactorization of a has only one essential divisor.
Proof. (⇒) Let a be τ -irreducible. Let a = λa1 · · · an ⌈b1 · · · bm ⌉ be a τ -U-factorization.
Suppose m ≥ 2, then a = λa1 · · · an b1 · · · bm is a τ -factorization implies a ∼ ai0 for some
1 ≤ i0 ≤ n or a ∼ bi0 for some 1 ≤ i0 ≤ m. But then either
(a) = (a1 · · · an b1 · · · bm ) ⊊ (a1 · · · an b̂1 b2 · · · bm ) ⊆ (ai0 ) = (a)
or
(a) = (a1 · · · an b1 · · · bm ) = (b1 · · · bm ) ⊊ (b̂1 · · · b̂i0−1 · bi0 · bi0+1 · · · bm ) ⊆ (bi0 ) = (a),
a contradiction.
(⇐) Suppose a = λa1 · · · an . Then this can be rearranged into a U-factorization, and
hence a τ -U-factorization. By hypothesis, there can only be one essential divisor. Suppose
it is an . We have a = λa1 · · · an−1 ⌈an ⌉ is a τ -U-factorization and a ∼ an as desired.
We now define the finite factorization properties using the τ -U-factorization approach.
Let α ∈ { irreducible, strongly irreducible, m-irreducible, very strongly irreducible } and
let β ∈ {associate, strongly associate, very strongly associate }. R is said to be τ -U-α if
for all non-units a ∈ R, there is a τ -U-α-factorization of a. R is said to satisfy τ -U-ACCP
(ascending chain condition on principal ideals) if every properly ascending chain of principal
ideals (a1 ) ( (a2 ) ( · · · such that ai+1 is an essential divisor in some τ -U-factorization of ai ,
for each i terminates after finitely many principal ideals. R is said to be a τ -U-BFR if for all
non-units a ∈ R, there is a bound on the number of essential divisors in any τ -U-factorization
of a.
R is said to be a τ -U-β-FFR if for all non-units a ∈ R, there are only finitely many
τ -U-factorizations up to rearrangement of the essential divisors and β. R is said to be a τ U-β-WFFR if for all non-units a ∈ R, there are only finitely many essential divisors among
all τ -U-factorizations of a up to β. R is said to be a τ -U-α-β-divisor finite (df ) ring if
for all non-units a ∈ R, there are only finitely many essential τ -α divisors up to β in the
τ -U-factorizations of a.
R is said to be a τ -U-α-HFR if R is τ -U-α and for all non-units a ∈ R, the number of
essential divisors in any τ -U-α-factorization of a is the same. R is said to be a τ -U-α-βUFR if R is a τ -U-α-HFR and the essential divisors of any two τ -U-α-factorizations can be
rearranged to match up to β.
R is said to be présimplifiable if for every x ∈ R, x = xy implies x = 0 or y ∈ U(R).
This is a condition which has been well studied and is satisfied by any domain or local ring.
We introduce two slight modifications of this. R is said to be τ -présimplifiable if for every
x ∈ R, the only τ -factorizations of x which contain x as a τ -factor are of the form x = λx
for a unit λ. R is said to be τ -U-présimplifiable if for every non-zero non-unit x ∈ R, all
τ -U-factorizations have no non-unit inessential divisors.
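In a finite ring such as Zn the présimplifiable condition can be tested by brute force. The sketch below is our own illustration of the classical condition x = xy ⇒ x = 0 or y ∈ U(R); for τ = R# × R# this coincides with the τ -variants by Theorem 3.5 below.

```python
from math import gcd

def units(n):
    """Units of Z_n (elements coprime to n)."""
    return {y for y in range(1, n) if gcd(y, n) == 1}

def is_presimplifiable(n):
    """Check: x = x*y (mod n) implies x = 0 or y a unit, for all x, y in Z_n."""
    U = units(n)
    return all(not (x != 0 and (x * y) % n == x and y not in U)
               for x in range(n) for y in range(n))

print([n for n in range(2, 30) if is_presimplifiable(n)])
# prime powers appear (local rings are présimplifiable); Z_6 fails since 3 = 3*3 with 3 not a unit
```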
Theorem 3.5. Let R be a commutative ring with 1 and let τ be a symmetric relation on
R# . We have the following.
(1) If R is présimplifiable, then R is τ -U-présimplifiable.
(2) If R is τ -U-présimplifiable, then R is τ -présimplifiable.
That is présimplifiable ⇒ τ -U-présimplifiable ⇒ τ -présimplifiable. If τ = R# × R# , then all
are equivalent.
Proof. (1) Let R be présimplifiable, and x ∈ R# . Suppose x = λa1 · · · an ⌈b1 · · · bm ⌉ is a
τ -U-factorization. Then (x) = (b1 · · · bm ). R présimplifiable implies that all the associate
relations coincide, so in fact x ≅ b1 · · · bm implies that λa1 · · · an ∈ U(R) and hence all
inessential divisors are units.
(2) Let R be τ -U-présimplifiable, and x ∈ R such that x = λxa1 · · · an is a τ -factorization.
We claim that x = λa1 · · · an ⌈x⌉ is a τ -U-factorization. For any 1 ≤ i ≤ n, x | ai x and
(ai x)(λa1 · · · âi · · · an ) = x shows ai x | x, proving the claim. This implies λa1 · · · an ∈ U(R)
as desired.
Let τ = R# × R# and suppose R is τ -présimplifiable. Suppose x = xy, for x 6= 0, we show
y ∈ U(R). If x ∈ U(R), then multiplying through by x−1 yields 1 = x−1 x = x−1 xy = y and
y ∈ U(R) as desired. We may now assume x ∈ R# . If y = 0, then x = 0, a contradiction.
If y ∈ U(R) we are already done, so we may assume y ∈ R# . Thus xτ y, and x = xy is a
τ -factorization, so y ∈ U(R) as desired.
4. τ -U-finite factorization relations
We now would like to show the relationship between rings with various τ -U-α-finite factorization properties as well as compare these rings with the τ -α-finite factorization properties
of [13].
Theorem 4.1. Let R be a commutative ring with 1 and let τ be a symmetric relation on
R# . Consider the following statements.
(1) R is a τ -BFR.
(2) R is τ -présimplifiable and for every non-unit a1 ∈ R, there is a fixed bound on the length
of chains of principal ideals (ai ) ascending from a1 such that at each stage ai+1 |τ ai .
(3) R is τ -présimplifiable and a τ -U-BFR.
(4) For every non-unit a ∈ R , there are natural numbers N1 (a) and N2 (a) such that if
a = λa1 · · · an ⌈b1 · · · bm ⌉ is a τ -U-factorization, then n ≤ N1 (a) and m ≤ N2 (a).
Then (4) ⇒ (1) and (2) ⇒ (3). For τ refinable, (1) ⇒ (2) and for R τ -U-présimplifiable,
(3) ⇒ (4). Thus all are equivalent if R is τ -U-présimplifiable and τ is refinable.
Let ⋆ represent τ being refinable, and † represent R being τ -U-présimplifiable, then the
following diagram summarizes the theorem.
(1) ⟹ (2) [requires ⋆],   (2) ⟹ (3),   (3) ⟹ (4) [requires †],   (4) ⟹ (1).
Proof. (1) ⇒ (2) Let τ be refinable. Suppose there were a non-trivial τ -factorization x =
λxa1 · · · an with n ≥ 1. Since τ is assumed to be refinable we can continue to replace the
τ -factor x with this factorization.
x = λxa1 · · · an = (λλ)xa1 · · · an a1 · · · an = · · · = (λλλ)xa1 · · · an a1 · · · an a1 · · · an = · · ·
yields an unbounded series of τ -factorizations of increasing length.
Let a1 be a non-unit in R. Suppose N is the bound on the length of any τ -factorization
of a1 . We claim that N satisfies the requirement of (2). Let (a1 ) ( (a2 ) ( · · · be an
ascending chain of principal ideals generated by elements which satisfy ai+1 |τ ai for each i.
Say ai = λi ai+1 ai1 · · · aini for each i. Furthermore, we can assume ni ≥ 1 for each i or else
the containment would not be proper. Then we can write
a1 = λ1 a2 a11 · · · a1n1 = λ1 λ2 a3 a21 · · · a2n2 a11 · · · a1n1 = · · · .
Each remains a τ -factorization since τ is refinable and we have added at least one factor
at each step. If the chain were greater than length N we would contradict R being a τ -BFR.
(2) ⇒ (3) Let a ∈ R be a non-unit. Let N be the bound on the length of any properly ascending chain of principle ideals ascending from a such that ai+1 |τ ai . If a =
λa1 · · · an ⌈b1 · · · bm ⌉ is a τ -U-factorization, then we get an ascending chain with b1 · · · bi−1 |τ
b1 · · · bi for each i:
(a) = (b1 · · · bm ) ( (b1 · · · bm−1 ) ( (b1 · · · bm−2 ) ( · · · ( (b1 b2 ) ( (b1 ).
Hence, m ≤ N and we have found a bound on the number of essential divisors in any τ -Ufactorization of a, making R a τ -U-BFR.
(3) ⇒ (4) Let a ∈ R be a non-unit. Let Ne (a) be the bound on the number of essential
divisors in any τ -U-factorization of a. Since R is τ -U-présimplifiable, there are no inessential
τ -U-divisors of a. We can set N1 (a) = 0, and N2 (a) = Ne (a) and see that this satisfies the
requirements of the theorem.
(4) ⇒ (1) Let a ∈ R be a non-unit. Then any τ -factorization
a = λa1 · · · an can be
rearranged into a τ -U-factorization, say a = λas1 · · · asi asi+1 · · · asn . But then n = i + (n −
i) ≤ N1 (a) + N2 (a). Hence the length of any τ -factorization must be less than N1 (a) + N2 (a)
proving R is a τ -BFR as desired.
The way we have defined our finite factorization properties on only the essential divisors
causes a slight problem. Given a τ -U-factorization a = λa1 · · · an ⌈b1 · · · bm ⌉, we only know
that a ∼ b1 · · · bm . This may no longer be a τ -factorization of a, but rather only some
associate of a. This is easily remedied by insisting that our rings are strongly associate.
Lemma 4.2. Let R be a strongly associate ring with τ a symmetric relation on R# , and let
α ∈ {irreducible, strongly irreducible, m-irreducible, very strongly irreducible}. Let a ∈ R,
a non-unit. If a = λa1 a2 · · · an ⌈b1 b2 · · · bm ⌉ is a τ -U-α-factorization, then there is a unit
µ ∈ U(R) such that a = µb1 · · · bm is a τ -α-factorization.
Proof. Let a = λa1 a2 · · · an ⌈b1 b2 · · · bm ⌉ be a τ -U-α-factorization. By definition, (a) =
(b1 · · · bm ), and R strongly associate implies that a ≈ b1 · · · bm . Let µ ∈ U(R) be such
that a = µb1 · · · bm . We still have bi τ bj for all i 6= j, and bi is τ -α for every i. Hence
a = µb1 · · · bm is the desired τ -factorization, proving the lemma.
Theorem 4.3. Let R be a commutative ring with 1, and let τ be a symmetric relation on
R# . Let α ∈ {irreducible, strongly irreducible, m-irreducible, very strongly irreducible}, and
β ∈ {associate, strongly associate, very strongly associate }. We have the following.
(1) If R is τ -α, then R is τ -U-α.
(2) If R satisfies τ -ACCP, then R satisfies τ -U-ACCP.
(3) If R is a τ -BFR, then R is a τ -U-BFR.
(4) If R is a τ -β-FFR, then R is a τ -U-β-FFR.
(5) Let R be a τ -β-WFFR, then R is a τ -U-β-WFFR.
(6) Let R be a τ -α-β-divisor finite ring, then R is τ -U-α-β-divisor finite ring.
(7) Let R be a strongly associate τ -α-HFR (resp. τ -α-β-UFR), then R is τ -U-α-HFR (resp.
τ -U-α-β-UFR).
Proof. (1) This is immediate from Corollary 3.3.
(2) Suppose there were a infinite properly ascending chain of principal ideals (a1 ) ( (a2 ) (
· · · such that ai+1 is an essential divisor in some τ -U-factorization of ai , for each i. Every
essential τ -U-divisor is certainly a τ -divisor. This would contradict the fact that R satisfies
τ -ACCP.
(3) We suppose that there is a non-unit a ∈ R with τ -U-factorizations having arbitrarily
large numbers of essential τ -U-divisors. Each is certainly a τ -factorization, having at least
as many τ -factors as there are essential τ -divisors, so this would contradict the hypothesis.
(4) Every τ -U-factorization is certainly among the τ -factorizations. If the latter is finite,
then so is the former.
(5) For any given non-unit a ∈ R, every essential τ -U-divisor of a is certainly a τ -factor
of a which has only finitely many up to β. Hence there can be only finitely many essential
τ -U-factors up to β.
(6) Let a ∈ R be a non-unit. Every essential τ -U-α-divisor of a is a τ -α-factor of a.
There are only finitely many τ -α-divisors up to β, so then there can be only finitely many
τ -U-α-divisors of a up to β.
(7) We have already seen that R being τ -α implies R is τ -U-α. Let a ∈ R be a non-unit.
We suppose for a moment there are two τ -α-U-factorizations:
a = λa1 · · · an ⌈b1 · · · bm ⌉ = λ′ a′1 · · · a′n′ ⌈b′1 · · · b′m′ ⌉
such that m 6= m′ (resp. m 6= m′ or there is no rearrangement such that bi and b′i are β
for each i). Lemma 4.2 implies ∃µ, µ′ ∈ U(R) with a = µb1 · · · bm = µ′ b′1 · · · b′m′ are two
τ -α-factorizations of a, so m = m′ (resp. m = m′ and there is a rearrangement so that bi
and b′i are β for each 1 ≤ i ≤ m), a contradiction, proving R is indeed a τ -U-α-HFR (resp.
-β-UFR) as desired.
Theorem 4.4. Let R be a commutative ring with 1 and τ a symmetric relation on R# .
Let α ∈ {irreducible, strongly irreducible, m-irreducible, very strongly irreducible}, and let
β ∈ {associate, strongly associate, very strongly associate}.
(1) If R is a τ -U-α-β-UFR, then R is a τ -α-U-HFR.
(2) If R is τ -U-refinable and R is a τ -U-α-β-UFR, then R is a τ -U-β-FFR.
(3) If R is τ -U-refinable and R is a τ -U-α-HFR, then R is a τ -U-BFR.
(4) If R is a τ -U-β-FFR, then R is a τ -U-BFR.
(5) If R is a τ -U-β-FFR, then R is a τ -U-β-WFFR.
(6) If R is a τ -U-β-WFFR, then R is a τ -U-α-β-divisor finite ring.
(7) If R is τ -U-refinable and R is a τ -U-BFR, then R satisfies τ -U-ACCP.
(8) If R is τ -U-refinable and R satisfies τ -U-ACCP, then R is τ -U-α.
Proof. (1) This is immediate from definitions.
(2) Let a ∈ R be a non-unit. Let a = λa1 · · · an ⌈b1 · · · bm ⌉ be the unique τ -α-U-factorization
up to rearrangement and β. Given any other τ -U-factorization, we can τ -U-refine each essential τ -U-divisor into a τ -U-α-factorization of a. There is a rearrangement of the essential
divisors to match up to β with bi for each 1 ≤ i ≤ m. Thus the essential divisors in any τ -Ufactorization come from some combination of products of β of the m τ -U-α essential factors
in our original factorization. Hence there are at most 2m possible distinct τ -U-factorizations
up to β, making this a τ -U-β-FFR as desired.
(3) For a given non-unit a ∈ R, the number of essential divisors in any τ -U-α-factorization
is the same, say N. We claim this is a bound on the number of essential divisors of any
τ -U-factorization. Suppose there were a τ -U-factorization a = λa1 · · · an ⌈b1 · · · bm ⌉ with
m > N. For every i, bi has a τ -U-α-factorization with at least one essential divisor. Since R
is τ -U-refinable, we can τ -U-refine the factorization yielding a τ -U-α-factorization of a with
at least m τ -U-α essential factors. This contradicts the assumption that R is a τ -U-α-HFR.
(4) Let R be a τ -U-β-FFR. Let a ∈ R be a non-unit. There are only finitely many τ -Ufactorizations of a up to rearrangement and β of the essential divisors. We can simply take
the maximum of the number of essential divisors among all of these factorizations. This is
an upper bound for the number of essential divisors in any τ -U-factorization.
(5) Let R be a τ -U-β-FFR, then for any non-unit a ∈ R. Let S be the collection of essential
divisors in the finite number of representative τ -U-factorizations of a up to β. This gives us
a finite collection of elements up to β. Every essential divisor up to β in a τ -U-factorization
of a must be among these, so this collection is finite as desired.
(6) If every non-unit a ∈ R has a finite number of proper essential τ -U divisors, then
certainly there are a finite number of essential τ -α-U-divisors.
(7) Suppose R is a τ -U-BFR, but (a1 ) ( (a2 ) ( · · · is a properly ascending chain of
principal ideals such that ai+1 is an essential factor in some τ -U-factorization of ai , say
ai = λi ai1 · · · aini ⌈ai+1 bi1 · · · bimi ⌉
for each i. Furthermore, mi ≥ 1, for each i otherwise we would have (ai+1 ) = (ai ) contrary
to our assumption that our chain is properly increasing. Our assumption that R is τ -U
refinable allows us to factor a1 as follows:
a1 = λ1 a11 · · · a1n1 ⌈a2 b11 · · · b1m1 ⌉ =
λ1 λ2 a11 · · · a1n1 a21 · · · a2n2 ⌈a3 b21 · · · b2m2 b11 · · · b1m1 ⌉
and so on. At each iteration i we have at least i + 1 essential factors in our τ -U-factorization.
This contradicts the assumption that a1 should have a bound on the number of essential
divisors in any τ -U-factorization.
(8) Let a1 ∈ R be a non-unit. If a1 is τ -U-α we are already done, so there must be a
non-trivial τ -U factorization of a1 , say:
a1 = λ1 a11 · · · a1n1 ⌈a2 b11 · · · b1m1 ⌉ .
Now if all of the essential divisors are τ -U-α we are done as we have found a τ -U-αfactorization. After rearranging if necessary, we suppose that a2 is not τ -U-α. Therefore a2
has a non-trivial τ -U-factorization, say:
a2 = λ2 a21 · · · a2n1 ⌈a3 b21 · · · b2m2 ⌉ .
Because R is τ -U-refinable, this gives us a τ -U-factorization:
a1 = λ1 λ2 a11 · · · a1n1 a21 · · · a2n2 ⌈a3 b21 · · · b2m2 b11 · · · b1m1 ⌉
which cannot be τ -U-α or else we would be done. We can continue in this fashion and get
an ascending chain of principal ideals
(a1 ) ⊆ (a2 ) ⊆ · · ·
such that ai+1 is an essential τ -U-divisor of ai for each i.
Claim: This chain must be properly ascending. Suppose (ai ) = (ai+1 ) for some i. When
we look at ai = λi ai1 · · · aini ⌈ai+1 bi1 · · · bimi ⌉, we see that (ai ) = (ai+1 bi1 · · · bimi ). But then
we could remove any of the bij for any 1 ≤ j ≤ mi and still have (ai ) = (ai+1 bi1 · · · b̂ij · · · bimi ), contradicting the fact that the factorization was a τ -U-factorization, since bij is inessential. We certainly have (ai ) ⊆ (ai+1 bi1 · · · b̂ij · · · bimi ). To see that the other containment holds, (ai ) = (ai+1 ) ⇒ ai+1 = ai r for some r ∈ R, and we can simply multiply by bi1 · · · b̂ij · · · bimi on both sides to see that
ai+1 bi1 · · · b̂ij · · · bimi = ai (r bi1 · · · b̂ij · · · bimi ),
showing the other containment and proving the claim.
This is a contradiction to the fact that R satisfies τ -U-ACCP, proving we must in finitely
many steps arrive at a τ -U-α-factorization of a1 , proving R is indeed τ -U-α as desired.
The following diagram summarizes our results from Theorems 4.3 and 4.4, where ⋆ indicates that the implication requires R to be strongly associate, and † indicates that it requires R to be τ -U-refinable:

τ -α-β-UFR ⟹(⋆) τ -U-α-β-UFR ⟹ τ -U-α-HFR ⟹(†) τ -U-BFR ⟹(†) τ -U-ACCP ⟹(†) τ -U-α
τ -α-HFR ⟹(⋆) τ -U-α-HFR
τ -U-α-β-UFR ⟹(†) τ -U-β-FFR ⟹ τ -U-BFR,   τ -U-β-FFR ⟹ τ -U-β-WFFR ⟹ τ -U-α-β df ring
τ -BFR ⟹ τ -U-BFR,   τ -β-FFR ⟹ τ -U-β-FFR,   τ -β-WFFR ⟹ τ -U-β-WFFR,
τ -α-β df ring ⟹ τ -U-α-β df ring,   τ -ACCP ⟹ τ -U-ACCP,   τ -α ⟹ τ -U-α
We have left off the relations which were proven in [13, Theorem 4.1], and focused instead
on the rings satisfying the U-finite factorization properties. Examples given in [9, 10, 4, 2]
show that arrows can neither be reversed nor added to the diagram with a few exceptions.
Question 4.5. Does U-atomic imply atomic?
D.D. Anderson and S. Valdez-Leon show in [6, Theorem 3.13] that if R has a finite number
of non-associate irreducibles, then U-atomic and atomic are equivalent. This remains open
in general.
Question 4.6. Does U-ACCP imply ACCP?
We can modify M. Axtell’s proof of [9, Theorem 2.9] to add a partial converse to Theorem
4.4 (5) if τ is combinable and associate preserving. The idea is the same, but slight adjustments are required to adapt it to τ -factorizations and to allow uniqueness up to any type of
associate.
Theorem 4.7. Let β ∈ {associate, strongly associate, very strongly associate}. Let R be a
commutative ring with 1 and let τ be a symmetric relation on R# which is both combinable
and associate preserving. R is a τ -U-β-FFR if and only if R is a τ -U-β-WFFR.
Proof. (⇒) was already shown, so we need only prove the converse. (⇐) Suppose R is not
a τ -U-β-FFR. Let a ∈ R be a non-unit which has infinitely many τ -U-factorizations up to
β. Let b1 , b2 , . . . , bm be a complete list of essential τ -U-divisors of a up to β. Let
a = a1 · · · an ⌈c1 · · · ck ⌉ = a′1 · · · a′n′ ⌈d1 · · · dn ⌉
be two τ -U-factorizations of a and assume we have re-ordered the essential divisors in
both factorizations above so that the β of b1 appear first, followed by β of b2 , etc. Let
A = h(c1 ), (c2 ), . . . , (ck )i and B = h(d1 ), (d2 ), . . . , (dn )i be sequences of ideals. We call the
factorizations comparable if A is a subsequence of B or vice versa.
Suppose A is a proper subsequence of B
B = h(d1 ), . . . , (di1 ) = (c1 ), . . . , (di2 ) = (c2 ), . . . , (dik ) = (ck ), . . . , (dn )i
with n > k. Because τ is combinable and symmetric,
l
m
c
c
a = a′1 · · · a′n′ di1 di2 · · · dik (d1 · · · dc
d
·
·
·
d
·
·
·
d
)
i1 i2
ik
n
remains a τ -factorizations and [9, Lemma 1.3] ensures that this remains a U-factorization.
This yields
c
c
(a) = (d1 · · · dc
i1 di2 · · · dik · · · dn )(di1 di2 · · · dik ) = (d1 · · · dn ) = (c1 · · · ck )
= (c1 ) · · · (ck ) = (di1 ) · · · (dik ) = (di1 · · · dik ).
c
c
But then, (d1 · · · dc
i1 di2 · · · dik · · · dn ) cannot be an essential divisor, a contradiction, unless
n = k.
If n = k, then the sequences of ideals are identical, and we seek to prove this means the τ U-factorizations are the same up to β. It is certainly true for β = associate as demonstrated
in [9, Theorem 2.9]. So we have a pairing of the ci and di such that ci ∼ bj ∼ di for one of
the essential τ -U-divisors bj . We know further that ci and bj (resp. di and bj ) are β since R
is by assumption a τ -U-β-WFFR.
It is well established that β is transitive, so we can conclude that this same pairing
demonstrates that ci and di are β, not just associate. Thus the number of distinct τ -Ufactorizations up to β is less than or equal to the number of non-comparable finite sequences
of elements from the set {(b1 ), (b2 ), . . . , (bm )}.
From here we direct the reader to the proof of the second claim in [9, Theorem 2.9] where
it is shown that this set is finite.
5. Direct Products
For each i, 1 ≤ i ≤ N, let Ri be commutative rings with τi a symmetric relation on Ri# .
We define a relation τ× on R = R1 × · · · × RN which preserves many of the theorems about
direct products from [8] for τ -factorizations. Let (ai ), (bi ) ∈ R# , then (ai )τ× (bi ) if and only
if whenever ai and bi are both non-units in Ri , then ai τi bi .
For convenience we will adopt the following notation: suppose x ∈ Ri ; then x^(i) = (1R1 , · · · , 1Ri−1 , x, 1Ri+1 , · · · , 1RN ), so x appears in the ith coordinate and all other entries are the identity. Thus for any (ai ) ∈ R, we have that (ai ) = a1^(1) a2^(2) · · · aN^(N) is a τ× -factorization.
We will always move any τ× -factors which may become units in this process to the front and
collect them there.
Lemma 5.1. Let R = R1 × · · · × RN for N ∈ N. Then (ai ) ∼ (bi ) (resp. (ai ) ≈ (bi )) if and
only if ai ∼ bi (resp. ai ≈ bi ) for every i. Furthermore, (ai ) ≅ (bi ) implies ai ≅ bi for all i, and for ai , bi all non-zero, ai ≅ bi for all i ⇒ (ai ) ≅ (bi ).
Proof. See [6, Theorem 2.15].
Example 5.2. If ai0 = 0 for even one index 1 ≤ i0 ≤ N, then ai ≅ bi for all i need not imply (ai ) ≅ (bi ).
Consider the ring R = Z × Z, with τi = Z# × Z# for i = 1, 2, the usual factorization. We have 1 ≅ 1 and 0 ≅ 0 since Z is a domain; however (0, 1) = (0, 1)(0, 1) shows (0, 1) ≇ (0, 1).
Lemma 5.3. Let R = R1 × · · · × RN for N ∈ N with τi a symmetric relation on Ri# for
each i. Let α ∈ { irreducible, strongly irreducible, m-irreducible, very strongly irreducible}.
If (ai ) ∈ R is τ -α, then precisely one coordinate is not a unit.
Proof. Let a = (ai ) ∈ R be a non-unit which is τ× -α. Certainly not all coordinates can
be units, or else a ∈ U(R). Suppose for a moment there were at least two coordinates for
which ai is not a unit in Ri . After reordering, we may assume a1 and a2 are not units. Then
(1)
a = a1 (1R1 , a2 , · · · , aN ) is a τ× -factorization. But a is not even associate to either τ× -factor,
a contradiction.
Theorem 5.4. Let R = R1 × · · · × RN for N ∈ N with τi a symmetric relation on Ri# for
each i.
(1) A non-unit (ai ) ∈ R is τ× -atomic (resp. strongly atomic) if and only if ai0 is τi0 -atomic (resp. strongly atomic) for some 1 ≤ i0 ≤ N and ai ∈ U(Ri ) for all i ≠ i0 .
(2) A non-unit (ai ) ∈ R is τ× -m-atomic if and only if ai0 is τi0 -m-atomic for some 1 ≤ i0 ≤ N and ai ∈ U(Ri ) for all i ≠ i0 .
(3) A non-unit (ai ) ∈ R is τ× -very strongly atomic if and only if ai0 is τi0 -very strongly atomic and non-zero for some 1 ≤ i0 ≤ N and ai ∈ U(Ri ) for all i ≠ i0 .
Proof. (1) (⇒) Let a = (ai ) ∈ R be a non-unit which is τ× -atomic (resp. strongly atomic).
By Lemma 5.3, there is only one non-unit coordinate. Suppose after reordering if necessary
that a1 is the non-unit. If a1 were not τ1 -atomic (resp. strongly atomic), then there is a
τ1 -factorization, λ11 a11 a12 · · · a1k , for which a1 ≁ a1j (resp. a1 ≉ a1j ) for any 1 ≤ j ≤ k. But then
(ai ) = (λ11 , a2 , . . . , an ) a11(1) a12(1) · · · a1k(1)
is a τ× -factorization. Furthermore, by Lemma 5.1, (ai ) ≁ a1j(1) (resp. (ai ) ≉ a1j(1) ) for all 1 ≤ j ≤ k. This would contradict the assumption that a was τ× -atomic (resp. strongly atomic).
(⇐) Let a1 ∈ R1 , a non-unit with a1 being τ1 -atomic (resp. strongly atomic). Let
µi ∈ U(Ri ) for 2 ≤ i ≤ N. We show a = (a1 , µ2 , · · · µN ) is τ× -atomic (resp. strongly
atomic). Suppose a = (λ1 , . . . , λN )(a11 , . . . , a1N ) · · · (ak1 , . . . , akN ) is a τ× -factorization of a.
We first note aij ∈ U(Rj ) for all j ≥ 2. Furthermore, this means ai1 is not a unit in R1
for 1 ≤ i ≤ k, otherwise we would have units as factors in a τ× factorization. This means
a1 = λ1 a11 · · · ak1 is a τ1 factorization of a τ1 -atomic (resp. strongly atomic) element. Thus,
we must have a1 ∼ aj1 (resp. a1 ≈ aj1 ) for some 1 ≤ j ≤ k. Hence by Lemma 5.1, we have a ∼ (aj1 , . . . , ajN ) (resp. a ≈ (aj1 , . . . , ajN )) for some 1 ≤ j ≤ k and a is τ× -atomic (resp. strongly atomic) as desired.
(2) (⇒) Let a = (ai ) ∈ R be a non-unit which is τ× -m-atomic. By Lemma 5.3, there is
only one non-unit coordinate, say a1 after reordering if necessary. If a1 were not τ1 -m-atomic, then there is a τ1 -factorization a1 = λ11 a11 a12 · · · a1k for which a1 ≁ a1j0 for at least one 1 ≤ j0 ≤ k. But then
(ai ) = (λ11 , a2 , . . . , an ) a11(1) a12(1) · · · a1k(1)
is a τ× -factorization of a for which (by Lemma 5.1) a = (ai ) ≁ a1j0(1) . This contradicts the
hypothesis that a is τ× -m-atomic.
(⇐) Let a1 ∈ R1 , a non-unit with a1 being τ1 -m-atomic. Let µi ∈ U(Ri ) for 2 ≤ i ≤ N.
We show a = (a1 , µ2 , · · · µN ) is τ× -m-atomic. Suppose
a = (λ1 , . . . , λN )(a11 , . . . , a1N ) · · · (ak1 , . . . , akN )
is a τ× -factorization of a. We first note aij ∈ U(Rj ) for all j ≥ 2. As before, this means
a1 = λ1 a11 · · · ak1 is a τ1 factorization of a τ1 -m-atomic element. Hence a1 ∼ aj1 for each
1 ≤ j ≤ k. By Lemma 5.1 we have a ∼ (aj1 , . . . , ajN ) for all 1 ≤ j ≤ k and thus a is
τ× -m-atomic as desired.
(3) (⇒) Let a = (a1 , . . . , aN ) be a non-unit which is τ× -very strongly atomic. By Lemma 5.3, we may assume a1 is the non-unit, and aj is a unit for j ≥ 2. We suppose for a moment that a1 = 01 . But then (0, a2 , . . . , aN ) = (0, 1, . . . , 1) · (0, a2 , . . . , aN ) shows that a ≇ a, a contradiction. Lemma 5.1 shows that if a ≅ a, then ai ≅ ai for each 1 ≤ i ≤ N. Hence, if a1 were not τ1 -very strongly atomic, then there is a τ1 -factorization, λ11 a11 a12 · · · a1k , for which a1 ≇ a1j for any 1 ≤ j ≤ k. But then
(ai ) = (λ11 , a2 , . . . , an ) a11(1) a12(1) · · · a1k(1)
is a τ× -factorization. Furthermore, since every coordinate is non-zero, by Lemma 5.1 (ai ) ≇ a1j(1) for all 1 ≤ j ≤ k. This would contradict the assumption that a was τ× -very strongly atomic.
(⇐) Let a1 ∈ R1# be τ1 -very strongly atomic. Let µi ∈ U(Ri ) for 2 ≤ i ≤ N. We show a = (a1 , µ2 , · · · , µN ) is τ× -very strongly atomic. We first check a ≅ a. By definition of τ1 -very strongly atomic, a1 ≅ a1 . Certainly as units, we have µi ≅ µi for each i ≥ 2. Lastly, all of these are non-zero, so we may apply Lemma 5.1 to see that a ≅ a. Suppose a = (λ1 , . . . , λN )(a11 , . . . , a1N ) · · · (ak1 , . . . , akN ) is a τ× -factorization of a. We first note aij ∈ U(Rj ) for all j ≥ 2. As before, this means a1 = λ1 a11 · · · ak1 is a τ1 -factorization of a τ1 -very strongly atomic element. Hence a1 ≅ aj1 for some 1 ≤ j ≤ k. By Lemma 5.1 we have a ≅ (aj1 , . . . , ajN ) and thus a is τ× -very strongly atomic as desired.
Lemma 5.5. Let R = R1 × · · · × RN for N ∈ N with τi a symmetric relation on Ri# . Let
α ∈ {irreducible, strongly irreducible, m-irreducible, very strongly irreducible }. Then we
have the following.
(1) If a = λ a1 · · · an ⌈b1 · · · bm ⌉ is a τi -U-α-factorization of some non-unit a ∈ Ri , then
a(i) = λ(i) a1(i) · · · an(i) ⌈b1(i) · · · bm(i) ⌉
is a τ× -U-α-factorization.
(2) Conversely, let ai0 ∈ Ri0 be a non-unit and µi ∈ U(Ri ) for all i ≠ i0 . Let
(µ1 , µ2 , . . . , µi0 −1 , ai0 , µi0 +1 , . . . , µN ) = (λi )(a1i )(a2i ) · · · (ani ) ⌈(b1i )(b2i ) · · · (bmi )⌉
be a τ× -U-α-factorization. Then
ai0 = λi0 a1i0 · · · ani0 ⌈b1i0 · · · bmi0 ⌉
is a τi0 -U-α-factorization.
Proof. (1) Let a = λ a1 · · · an ⌈b1 · · · bm ⌉ be a τi -U-α-factorization of some non-unit a ∈ Ri . It is easy to see that a(i) = λ(i) a1(i) · · · an(i) ⌈b1(i) · · · bm(i) ⌉ is a τ× -factorization. Furthermore, bj ≠ 0 for all 1 ≤ j ≤ m, or else it would not be a τi -factorization. Hence by Theorem 5.4, bj(i) is τ× -α for each 1 ≤ j ≤ m. Thus it suffices to show that we actually have a U-factorization.
Since a = λ a1 · · · an ⌈b1 · · · bm ⌉ is a U-factorization, we know ak (b1 · · · bm ) = (b1 · · · bm ) for all 1 ≤ k ≤ n. In the other coordinates, we have (1Rj ) = (1Rj ) for all j ≠ i. Hence, we apply Lemma 5.1 and see that this implies that ak(i) (b1(i) · · · bm(i) ) = (b1(i) · · · bm(i) ) for all 1 ≤ k ≤ n. Similarly we have bj (b1 · · · b̂j · · · bm ) ≠ (b1 · · · b̂j · · · bm ), which implies bj(i) (b1(i) · · · b̂j(i) · · · bm(i) ) ≠ (b1(i) · · · b̂j(i) · · · bm(i) ), so this is indeed a U-factorization.
(2) Let
(µ1 , µ2 , . . . , µi0 −1 , ai0 , µi0 +1 . . . , µN ) = (λi )(a1i )(a2i ) · · · (ani ) ⌈(b1i )(b2i ) · · · (bmi )⌉
be a τ× -U-α-factorization. We note that aji ∈ U(Ri ) for all i ≠ i0 and all 1 ≤ j ≤ n, and bji ∈ U(Ri ) for all i ≠ i0 and all 1 ≤ j ≤ m, since they divide the unit µi . Next, every coordinate in the i0 place must be a non-unit in Ri0 , or else this factor would be a unit in R and therefore could not occur as a factor in a τ× -factorization. This tells us that
ai0 = λi0 a1i0 · · · ani0 b1i0 · · · bmi0
is a τi0 -factorization. Furthermore, (bki ) is assumed to be τ× -α for all 1 ≤ k ≤ m, and the other coordinates are units, so bki0 is τi0 -α for all 1 ≤ k ≤ m by Theorem 5.4. Again, we need only show that
ai0 = λi0 a1i0 a2i0 · · · ani0 ⌈b1i0 b2i0 · · · bmi0 ⌉
is a U-factorization. Since all the coordinates other than i0 are units, we simply apply Lemma 5.1 and see that we indeed maintain a U-factorization.
Theorem 5.6. Let R = R1 × · · · × RN for N ∈ N with τi a symmetric relation on Ri# . Let
α ∈ {irreducible, strongly irreducible, m-irreducible, very strongly irreducible}. Then R is
τ× -U-α if and only if Ri is τi -U-α for each 1 ≤ i ≤ N.
Proof. (⇒) Let a ∈ Ri0 be a non-unit. Then a(i0 ) is a non-unit in R and therefore has a τ× -U-α-factorization. Furthermore, the only possible non-unit factors in this factorization must occur in the i0 th coordinate. Thus as in Lemma 5.5 (2), we have found a τi0 -U-α-factorization of a by taking the product of the i0 th entries. This shows Ri0 is τi0 -U-α as desired.
(⇐) Let a = (ai ) ∈ R be a non-unit. For each non-unit ai ∈ Ri , there is a τi -U-α-factorization of ai , say
ai = λi ai1 · · · aini ⌈bi1 · · · bimi ⌉.
If ai ∈ U(Ri ), then ai(i) ∈ U(R) and we can simply collect these unit factors in the front, so we need not worry about these factors. This yields a τ× -U-α-factorization
a = (ai ) = ∏i=1..N λi(i) ai1(i) · · · aini(i) ⌈ ∏i=1..N bi1(i) · · · bimi(i) ⌉.
It is certainly a τ× -factorization. Furthermore, bjk ≠ 0j for 1 ≤ j ≤ N and 1 ≤ k ≤ mj , so bjk(j) is τ× -α by Theorem 5.4. It is also clear from Lemma 5.5 that this is a U-factorization, showing every non-unit in R has a τ× -U-α-factorization.
Theorem 5.7. Let R = R1 × · · · × RN for N ∈ N with τi a symmetric relation on Ri# .
Let α ∈ {irreducible, strongly irreducible, m-irreducible, very strongly irreducible} and let
β ∈ {associate, strongly associate, very strongly associate}. Then R is a τ× -U-α-β-df ring if and only if Ri is a τi -U-α-β-df ring for each 1 ≤ i ≤ N.
Proof. (⇒) Let a ∈ Ri0 be a non-unit. Suppose there were an infinite number of τi0 -U-α essential divisors of a, say {bj }∞j=1 , none of which are β. But then {bj(i0 ) }∞j=1 yields an infinite set of τ× -U-α-divisors of a(i0 ) by Lemma 5.5. Furthermore, none of them are β by Lemma 5.1.
(⇐) Let (ai ) ∈ R be a non-unit. We look at the collection of τ× -U-α essential divisors of
(ai ). Each must be of the form (λ1 , · · · , bi0 , · · · λN ) with λi ∈ U(Ri ) for each i and with bi0
τi0 -α for some 1 ≤ i0 ≤ N. But then bi0 is a τi0 -α essential divisor of ai0 . For each i between
1 and N, Ri is a τi -U-α-β-df ring, so there can be only finitely many τi -α essential divisors
of ai up to β, say N(ai ). If ai ∈ U(Ri ), then we can simply set N(ai ) = 0 since it is a unit and has no non-trivial τi -U-factorizations. Hence there can be only
N((ai )) := N(a1 ) + N(a2 ) + · · · + N(aN ) = Σi=1..N N(ai )
τ× -α essential divisors of (ai ) up to β. This proves the claim.
Corollary 5.8. Let α and β be as in the theorem. Let R = R1 × · · · × RN for N ∈ N with
τi a symmetric relation on Ri# . Then R is a τ× -U-α τ× -U-α-β-df ring if and only if Ri is
τi -U-α τi -U-α-β-df ring for each 1 ≤ i ≤ N.
Proof. This is immediate from Theorem 5.7 and Theorem 5.6.
Theorem 5.9. Let R = R1 × · · · × RN for N ∈ N with τi a symmetric relation on Ri# . Then
R is a τ× -U-BFR if and only if Ri is a τi -U-BFR for every i.
Proof. (⇒) Let a ∈ Ri0 be a non-unit. Then a(i0 ) is a non-unit in R, and hence has a bound
on the number of essential divisors in any τ× -U-factorization, say Ne (a(i0 ) ). We claim this
also bounds the number of essential divisors in any τi0 -U-factorization of a. Suppose for a
moment a = λ a1 · · · an ⌈b1 · · · bm ⌉ were a τi0 -U-factorization with m > Ne (a(i0 ) ). But then
a(i0 ) = λ(i0 ) a1(i0 ) · · · an(i0 ) ⌈b1(i0 ) · · · bm(i0 ) ⌉
is a τ× -U-factorization with more essential divisors than is allowed, a contradiction.
(⇐) Let a = (ai ) ∈ R be a non-unit. Let B(a) = max{Ne (ai ) : 1 ≤ i ≤ N}, where Ne (ai ) is a bound on the number of essential divisors in any τi -U-factorization of ai (such a bound exists since each Ri is a τi -U-BFR), and we set Ne (ai ) = 0 for ai ∈ U(Ri ). We claim that B(a)N is a bound on the number of essential divisors in any τ× -U-factorization of a. Let
τ× -U-factorization of a. Let
(ai ) = (λi )(a1i ) · · · (ani ) ⌈(b1i ) · · · (bmi )⌉
be a τ× -U-factorization. We can decompose this factorization so that each factor has at most
one non-unit entry as follows:
(ai ) = ∏i=1..N λi(i) a1i(i) · · · ∏i=1..N ani(i) · ∏i=1..N b1i(i) · · · ∏i=1..N bmi(i) .
Some of these factors may indeed be units; however, by allowing a unit factor in the front of
every τ -U-factorization, we simply combine all the units into one at the front, and maintain
a τ× -factorization. We can always rearrange this to be a τ× -U-factorization. Furthermore,
since aji is inessential, by Lemma 5.1 aji(i) is inessential. Only some of the components of the essential divisors could become inessential, for instance if one coordinate were a unit. At worst when we decompose, bji(i) remains an essential divisor for all 1 ≤ j ≤ m and for all
1 ≤ i ≤ N. But then the product of each of the ith coordinates gives a τi -U-factorization of
ai and thus is bounded by Ne (ai ), so we have m ≤ Ne (ai ) ≤ B(a) and therefore there are no
more than B(a)N essential divisors. Certainly the original factorization is no longer than
the one we constructed through the decomposition, proving the claim and completing the
proof.
Theorem 5.10. Let R = R1 × · · · × RN for N ∈ N with τi a symmetric relation on Ri# .
Let α ∈ {irreducible, strongly irreducible, m-irreducible, very strongly irreducible }. Then R
is τ× -U-α-HFR if and only if Ri is a τi -U-α-HFR for each i.
Proof. (⇒) Let a ∈ Ri0 be a non-unit. Then a(i0 ) is a non-unit in R and, by Theorem 5.6, has a τ× -U-α-factorization. Suppose there were τi0 -U-α-factorizations of a with
different numbers of essential divisors, say:
a = λa1 · · · an ⌈b1 · · · bm ⌉ = µc1 · · · cn′ ⌈d1 · · · dm′ ⌉
where m ≠ m′ . By Lemma 5.5 this yields two τ× -U-α-factorizations:
a(i0 ) = λ(i0 ) a1(i0 ) · · · an(i0 ) ⌈b1(i0 ) · · · bm(i0 ) ⌉ = µ(i0 ) c1(i0 ) · · · cn′(i0 ) ⌈d1(i0 ) · · · dm′(i0 ) ⌉.
This contradicts the hypothesis that R is a τ× -U-α-HFR.
(⇐) Let (ai ) ∈ R be a non-unit. Suppose we had two τ× -U-α factorizations
(ai ) = (λi )(a1i )(a2i ) · · · (ani ) ⌈(b1i )(b2i ) · · · (bmi )⌉ = (µi )(a′1i )(a′2i ) · · · (a′n′i ) ⌈(b′1i )(b′2i ) · · · (b′m′i )⌉.
For each i0 , if ai0 is a non-unit in Ri0 , then since each τ× -α element can only have one
coordinate which is not a unit, we can simply collect all the τ× -divisors which have the i0
coordinate a non-unit. This product forms a τi0 -U-α-factorization of ai0 and therefore the
number of essential τ× -factors with coordinate i0 a non-unit must be the same in the two
factorizations. This is true for each coordinate i0 , hence m = m′ as desired.
Theorem 5.11. Let R = R1 × · · · × RN for N ∈ N with τi a symmetric relation on Ri# .
Let α ∈ {irreducible, strongly irreducible, m-irreducible, very strongly irreducible} and let
β ∈ {associate, strongly associate}. Then R is a τ× -U-α-β-UFR if and only if Ri is a τi -U-α-β-UFR for each i.
Proof. We simply apply Lemma 5.1 to the proof of Theorem 5.10, to see that the factors can
always be rearranged to match associates of the correct type.
References
[1] D. D. Anderson, M. Axtell, S.J. Forman, and J. Stickles, When are associates unit multiples?, Rocky
Mountain J. Math. 34:3 (2004), 811–828.
[2] D.D. Anderson, D.F. Anderson, and M. Zafrullah, Factorization in integral domains, J. Pure Appl.
Algebra 69 (1990), 1–19.
[3] D.D. Anderson and S. Chun, Irreducible elements in commutative rings with zero-divisors, Rocky Mountain J. Math. 37:3 (2011), 741–744.
[4] D. D. Anderson and Andrea M. Frazier, On a general theory of factorization in integral domains, Rocky
Mountain J. Math. 41:3 (2011), 663–705.
[5] D.D. Anderson and M. Naseer, Beck’s coloring of a commutative ring, J. Algebra 159:2 (1993), 500–514.
[6] D.D. Anderson and S. Valdes-Leon, Factorization in commutative rings with zero divisors, Rocky Mountain J. of Math. 26:2 (1996), 439–480.
[7] A.G. Aǧargün and D.D. Anderson and S. Valdez-Leon, Unique factorization in commutative rings with
zero divisors, Comm. Algebra 27:4 (1999), 1967–1974.
[8] A.G. Aǧargün, D.D. Anderson and S. Valdes-Leon, Factorization in commutative rings with zero divisors,
III, Rocky Mountain J. of Math. 31:1 (2001), 1–21.
[9] M. Axtell, U-factorizations in commutative rings with zero-divisors, Comm. Algebra 30:3 (2002), 1241–
1255.
[10] M. Axtell, S. Forman, N. Roersma and J. Stickles, Properties of U-factorizations, International Journal
of Commutative Rings 2:2 (2003), 83–99.
[11] C.R. Fletcher, Unique factorization rings, Proc. Cambridge Philos. Soc. 65 (1969), 579–583.
[12] C.R. Fletcher, The structure of unique factorization rings, Proc. Cambridge Philos. Soc. 67 (1970),
535–540.
[13] C.P. Mooney, Generalized factorization in commutative rings with zero-divisors, Houston J. of Math.
To appear.
Reinhart Center, Viterbo University, 900 Viterbo Drive, La Crosse, WI 54601
E-mail address: [email protected]
Constructing Datasets
for Multi-hop Reading Comprehension Across Documents
Johannes Welbl
Pontus Stenetorp
Sebastian Riedel
arXiv:1710.06481v1 [cs.CL] 17 Oct 2017
University College London
{j.welbl,p.stenetorp,s.riedel}@cs.ucl.ac.uk
Abstract
Most Reading Comprehension methods limit
themselves to queries which can be answered
using a single sentence, paragraph, or document. Enabling models to combine disjoint
pieces of textual evidence would extend the
scope of machine comprehension methods,
but currently there exist no resources to train
and test this capability. We propose a novel
task to encourage the development of models
for text understanding across multiple documents and to investigate the limits of existing
methods. In our task, a model learns to seek
and combine evidence – effectively performing multi-hop (alias multi-step) inference. We
devise a methodology to produce datasets for
this task, given a collection of query-answer
pairs and thematically linked documents. Two
datasets from different domains are induced,1
and we identify potential pitfalls and devise
circumvention strategies. We evaluate two
previously proposed competitive models and
find that one can integrate information across
documents. However, both models struggle to
select relevant information, as providing documents guaranteed to be relevant greatly improves their performance. While the models
outperform several strong baselines, their best
accuracy reaches 42.9% compared to human
performance at 74.0% – leaving ample room
for improvement.
1
Introduction
Devising computer systems capable of answering
questions about knowledge described using text has
been a longstanding challenge in Natural Language
1
Available at http://qangaroo.cs.ucl.ac.uk
Figure 1: A sample from the W IKI H OP dataset where it
is necessary to combine information spread across multiple documents to infer the correct answer.
Processing (NLP). Contemporary end-to-end Reading Comprehension (RC) methods can learn to extract the correct answer span within a given text and
approach human-level performance (Kadlec et al.,
2016; Seo et al., 2017).
However, for existing tasks, relevant information
is often concentrated locally within a single sentence, emphasising the role of locating, matching,
and aligning information between query and support
text.2 For example, Weissenborn et al. (2017) observed that a simple binary word-in-query indicator
feature boosted the relative accuracy of a baseline
model by 27.9%.
We argue that, in order to further the ability of
machine comprehension methods to extract knowledge from text, we must move beyond a scenario
where relevant information is coherently and explicitly stated within a single document. Methods with
2
Although annotators are encouraged to pose complex
queries (Rajpurkar et al., 2016).
this capability would benefit search and Question
Answering (QA) applications where the required information cannot be found in one location. They
would also aid Information Extraction (IE) applications, such as discovering drug-drug interactions by
connecting protein interactions reported across different publications.
Figure 1 shows an example from W IKIPEDIA,
where the goal is to identify the country property
of the Hanging Gardens of Mumbai. This cannot
be inferred solely from the article about them without additional background knowledge – since the answer is not stated explicitly. However, several of
the linked articles mention the correct answer India (and other countries), but cover different topics
(e.g. Mumbai, The Arabian Sea, etc.). Finding the
answer requires multi-hop reasoning:3 figuring out
that the Hanging Gardens are located in Mumbai,
and then, from a second document, that Mumbai is a
city in India.
We define a novel RC task in which a model
should learn to answer queries by combining evidence stated across documents. We introduce a
methodology to induce datasets for this task and derive two datasets. The first, W IKI H OP, uses sets of
W IKIPEDIA articles where answers to queries about
specific properties of an entity cannot be located in
the entity’s article. In the second dataset, M ED H OP,
the goal is to establish drug-drug interactions based
on scientific findings about drugs, proteins, as well
as their interactions, found across multiple M ED LINE abstracts. For both datasets we draw upon
existing Knowledge Bases (KBs), W IKIDATA and
D RUG BANK, as ground truth, utilising distant supervision (Mintz et al., 2009) to induce the data – similar to Hewlett et al. (2016) and Joshi et al. (2017).
We establish that for 74.1% and 68.0% of the
samples, the answer can be inferred from the given
documents by a human annotator. Still, constructing multi-document datasets is challenging; we encounter and prescribe remedies for several pitfalls
associated with their assembly – for example spurious co-locations of answers and specific documents.
For both datasets we then establish several strong
baselines and evaluate the performance of two previ-
Figure 2: A bipartite graph connecting entities and documents mentioning them. The bold edges are those traversed for the first fact in the small KB on the right; yellow highlighting indicates documents in Sq and candidates in Cq . Check and cross indicate correct and false
answer candidates.
ously proposed competitive RC models (Seo et al.,
2017; Weissenborn et al., 2017). We find that one
can integrate information across documents, but neither excels at selecting relevant information from a
larger set of documents as their accuracy increases
significantly when provided only documents guaranteed to be relevant. The best model reaches 42.9%,
compared to human performance at 74.0%, indicating ample room for improvement.
In summary, our key contributions are as follows:
Firstly, proposing a cross-document multi-step RC
task, as well as a general dataset induction strategy. Secondly, assembling two datasets from different domains and identifying dataset construction
pitfalls and remedies. Thirdly, establishing multiple
baselines, including two recently proposed RC models, as well as analysing model behaviour in detail
through ablation studies.
2
Task & Dataset Construction Method
We will now formally define the multi-hop RC task,
and a generic methodology to construct multi-hop
RC datasets. Later, in Sections 3 and 4 we will
demonstrate how this method is applied in practice
by creating datasets for two different domains.
Task Formalisation A model is given a query q, a
set of supporting documents Sq , and a set of candidate answers Cq – all of which are mentioned in Sq .
The goal is to identify the correct answer a∗ ∈ Cq
by drawing on the support documents Sq . Queries
could potentially have several true answers when not
3
In contrast to Simple Question Answering (Bordes et al.,
2015a), where only a single piece of information is needed.
the end points corresponds to a correct answer to q.5
When traversing the graph starting at s, several
of the end points will be visited, though generally
not all; those visited define the candidate set Cq . If
however the correct answer a∗ is not among them we
discard the entire (q, a∗ ) pair. The documents visited
to reach the end points will define the support document set Sq . That is, Sq comprises chains of documents leading not only from the query subject to the
correct answer candidate, but also to type-consistent
false answer candidates.
With this methodology, relevant textual evidence
for (q, a∗ ) will be spread out across documents
along the chain connecting s and a∗ – ensuring
that multi-hop reasoning goes beyond resolving coreference within a single document. Note that including other type-consistent candidates alongside
a∗ as end points in the graph traversal – and thus
into the support documents – renders the task considerably more challenging (Jia and Liang, 2017).
Models could otherwise identify a∗ in the documents by simply relying on type consistency heuristics. It is worth pointing out that by introducing
false candidates we counterbalance a type consistency bias, in contrast to Hermann et al. (2015) and
Hill et al. (2015). We will next describe how we apply this generic dataset construction methodology in
two domains to create the W IKI H OP and M ED H OP
datasets.
constrained to rely on a specific set of support documents – e.g. queries about the parent of a certain
individual. However, in our setup each sample has
only one true answer among Cq and Sq .
Note that even though we will utilise background
information during dataset assembly, such information will not be available to a model: the document
set will be provided in random order and without
any metadata. While certainly beneficial, this would
distract from our goal of fostering end-to-end RC
methods that infer facts by combining separate facts
stated in text.
Dataset Assembly We will now describe a generic
method to construct datasets for the aforementioned
task, in a way such that finding the answer to a query
depends on multiple documents with distinct pieces
of relevant information. We assume that there exists
a document corpus D, together with a KB containing fact triples (s, r, o) – with subject entity s, relation r, and object entity o. For example, one such
fact could be (Hanging Gardens of Mumbai, country, India). We start with individual KB facts and
transform them into query-answer pairs by leaving
the object slot empty, i.e. q=(s, r, ?) and a∗ =o.
Next, we define a directed bipartite graph, where
vertices on one side correspond to documents in D,
and vertices on the other side are entities from the
KB – see Figure 2 for an example. A document node
d is connected to an entity e if e is mentioned in d,
though there may be further constraints when defining the graph connectivity. For a given (q, a∗ ) pair,
the candidates Cq and support documents Sq ⊆ D
are identified by traversing the bipartite graph using
breadth-first search; the documents visited will become the support documents Sq .
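A minimal sketch of this traversal is given below (our own illustration; the data-structure names entity_docs, doc_entities and the hop limit are assumptions rather than details taken from the original pipeline):

from collections import deque

def build_sample(subject, answer, end_points, entity_docs, doc_entities, max_hops=3):
    """Breadth-first traversal of the bipartite entity/document graph.

    entity_docs:  entity -> documents reachable from that entity
    doc_entities: document -> entities mentioned in that document
    end_points:   type-consistent answer candidates (the true answer among them)
    Returns (support_docs, candidates), or None if the answer was never reached.
    """
    support_docs, candidates = set(), set()
    seen = {subject}
    frontier = deque([(subject, 0)])
    while frontier:
        entity, hops = frontier.popleft()
        if hops >= max_hops:
            continue
        for doc in entity_docs.get(entity, ()):
            support_docs.add(doc)
            for neighbour in doc_entities.get(doc, ()):
                if neighbour in end_points:
                    candidates.add(neighbour)
                if neighbour not in seen:
                    seen.add(neighbour)
                    frontier.append((neighbour, hops + 1))
    if answer not in candidates:
        return None  # discard this (q, a*) pair, as described in the text
    return support_docs, candidates

Because every visited document is kept, the support set covers chains leading to false but type-consistent candidates as well as to the correct answer.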
As the traversal starting point, we use the node
belonging to the subject entity s of the query q. As
traversal end points, we use the set of all entity nodes
that are type-consistent answers to q.4 Note that
whenever there is another fact (s, r, o0 ) in the KB,
i.e. a fact producing the same q but with a different
a∗ , we will not include o0 into the set of end points
for this sample. This ensures that precisely one of
3
W IKI H OP
W IKIPEDIA contains an abundance of humancurated cross-domain information and has several structured resources such as infoboxes and
W IKIDATA (Vrandečić, 2012) associated with it.
W IKIPEDIA has thus been used for a wealth of research to build datasets posing queries about a single
sentence (Morales et al., 2016; Levy et al., 2017) or
article (Yang et al., 2015a; Hewlett et al., 2016; Rajpurkar et al., 2016). However, no attempt has been
made to construct a cross-document multi-step RC
dataset based on W IKIPEDIA.
A recently proposed RC dataset is W IKI R EAD ING (Hewlett et al., 2016), where W IKIDATA tuples (item, property, answer) are aligned with
4
To determine entities which are type-consistent with the
relation r of the query, we consider all entities in the KB which
are observed as object in a fact with r as relation type – including the correct answer.
5
Here we rely on a closed-world assumption; that is, we assume that the facts in the KB state all true facts.
the W IKIPEDIA articles regarding their item. The
tuples define a slot filling task with the goal of predicting the answer, given an article and property.
One problem with using W IKI R EADING as an extractive RC dataset is that 54.4% of the samples
do not state the answer explicitly in the given article (Hewlett et al., 2016). However, we observed
that some of the articles accessible by following hyperlinks from the given article often state the answer,
alongside other plausible false answer candidates.
3.1
property country, this would be the set
{France, Russia, . . . }.
Graph traversal is executed up to a maximum
chain length of 3 documents. To not pose unreasonable computational constraints to RC models, examples with more than 64 different support documents
or 100 candidates are discarded, resulting in a loss
of ≈ 1% of the data.
DATA
3.2
Mitigating Dataset Biases
Dataset creation is always fraught with the risk of
inducing unintended errors and biases (Chen et al.,
2016; Schwartz et al., 2017). As Hewlett et al.
(2016) only carried out limited analysis of their
W IKI R EADING dataset, we present an analysis of
the downstream effects we observe on W IKI H OP.
Assembly
We now apply the methodology from Section 2 to
create a multi-hop dataset with W IKIPEDIA as document corpus and W IKIDATA as structured knowledge triples.6 In this setup, (item, property,
answer) W IKIDATA triples correspond to (s, r, o)
triples, and the item and property of each sample together form our query q – e.g. “(Hanging Gardens of Mumbai, country, ?)”. Similar to Yang et al.
(2015a) we only use the first paragraph of each article, since relevant information is more likely to be
stated in the beginning. Starting with all samples in
W IKI R EADING, we first remove samples where the
answer is stated explicitly in the W IKIPEDIA article
about the item.7
The bipartite graph is structured as follows:
(1) for edges from articles to entities: all articles
mentioning an entity e are connected to e; (2) for
edges from entities to articles: each entity e is only
connected to the W IKIPEDIA article about the entity.
Traversing the graph is then equivalent to iteratively
following hyperlinks to new articles about the anchor text entities.
For a given query-answer pair, the item entity
is chosen as starting point for the graph traversal. A traversal will always pass through the article about the item, since this is the only document connected from there. The end point set includes the correct answer alongside other type consistent candidate expressions, which are determined
by considering all facts belonging to W IKI R EAD ING training examples, selecting those triples with
the same property as in q and keeping their
answer expressions. As an example, for the W IKI -
Candidate Frequency Imbalance A first observation is that there is a significant bias in the answer
distribution of W IKI R EADING. For example, in the
majority of the samples the property country has
the United States of America as answer. A simple
majority class baseline would thus prove successful,
but tell us little about multi-hop reasoning. To combat this issue, we subsampled the dataset to ensure
that examples of any one particular answer candidate make up no more than 0.1% of the dataset, and
omitted articles about the United States.
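One way to realise such a cap (an illustrative sketch only; the paper does not spell out the exact sampling procedure) is to fix a per-answer budget relative to the original dataset size:

import random
from collections import Counter

def cap_answer_frequency(samples, max_fraction=0.001, seed=0):
    """Keep at most max_fraction * len(samples) examples per answer string."""
    rng = random.Random(seed)
    shuffled = samples[:]
    rng.shuffle(shuffled)
    budget = max(1, int(max_fraction * len(shuffled)))
    kept, counts = [], Counter()
    for sample in shuffled:
        if counts[sample['answer']] < budget:
            kept.append(sample)
            counts[sample['answer']] += 1
    return kept

# Note: the budget is computed against the original size, so the share of a very
# frequent answer in the reduced set can end up slightly above max_fraction.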
Document-Answer Correlations A problem
unique to our multi-document setting is the possibility of spurious correlations between candidates
and documents induced by the graph traversal
method. In fact, if we were not to address this
issue, a model specifically designed to exploit these
regularities could achieve 74.6% accuracy (detailed
in Section 6).
Concretely, we observed that certain documents
frequently co-occur with the correct answer, independently of the query. For example, if the article about London is present among Sq in a country
query, the answer is likely to be United Kingdom, independent of the query type or entity in question. If
the article about FIFA appears, the answer is likely
to be association football. Appendix A contains a
list with several additional examples.
We designed a statistic to measure this effect
and then used it to sub-sample the dataset. The
statistic counts how often a candidate c is observed
6 We use the W IKIDATA triples present in W IKI R EADING.
7 We thus use a disjoint subset of W IKI R EADING compared to Levy et al. (2017) to construct W IKI H OP.
al., 2012; Segura-Bedmar et al., 2013). However,
as shown by Peng et al. (2017), cross-sentence relation extraction increases the number of available
relations. It is thus likely that cross-document interactions would further improve recall, which is of
particular importance considering interactions that
are never stated explicitly – but rather need to be
inferred from separate pieces of evidence. The
promise of multi-hop methods is finding and combining individual observations that can suggest previously unobserved DDIs, aiding the process of
making scientific discoveries.
DDIs are caused by Protein-Protein Interaction (PPI) chains, forming biomedical pathways.
If we consider PPI chains across documents,
we find examples like in Figure 3. Here the
first document states that the drug Leuprolide
causes GnRH receptor-induced synaptic potentiations, which can be blocked by the protein
Progonadoliberin-1. The last document states that
another drug, Triptorelin, is a superagonist of the
same protein. It is therefore likely to increase the
potency of Leuprolide, describing a way in which
the two drugs interact. Besides the true interaction
there is also a false candidate Urofollitropin that,
although mentioned together with GnRH receptor
within one document, provides no textual evidence
indicating interactions with Leuprolide.
Figure 3: Sample from the M ED H OP dataset.
as the correct answer when a certain document is
present in Sq across training set samples. More formally, for a given document d and answer candidate c, let cooccurrence(d, c) denote the total count
of how often c co-occurs with d in a sample where
c is also the correct answer. We use this statistic
to filter the dataset, by discarding samples with at
least one document-candidate pair (d, c) for which
cooccurrence(d, c) > 20. This successfully mitigated the dataset bias, empirically supported by the
experiments in Section 6.
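The statistic and the filter can be computed along the following lines (a sketch with our own field names; the original implementation is not part of the paper):

from collections import Counter

def cooccurrence(train_samples):
    """cooccurrence(d, c): how often c is the correct answer while d is in S_q."""
    counts = Counter()
    for sample in train_samples:
        for doc_id in sample['support_doc_ids']:
            counts[(doc_id, sample['answer'])] += 1
    return counts

def filter_samples(samples, counts, threshold=20):
    """Drop samples with any document-candidate pair exceeding the threshold."""
    kept = []
    for sample in samples:
        flagged = any(counts[(d, c)] > threshold
                      for d in sample['support_doc_ids']
                      for c in sample['candidates'])
        if not flagged:
            kept.append(sample)
    return kept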
4
M ED H OP
Following the same general methodology, we next
construct a second dataset for the domain of molecular biology – a field that has been undergoing exponential growth in the number of publications (Cohen
and Hunter, 2004). The promise of applying NLP
methods to cope with this increase has led to research efforts in IE (Hirschman et al., 2005; Kim et
al., 2011) and QA for biomedical text (Hersh et al.,
2007; Nentidis et al., 2017). There is a plethora of
manually curated structured resources (Ashburner et
al., 2000; The UniProt Consortium, 2017) which can
either serve as ground truth or to induce training data
using distant supervision for NLP systems (Craven
and Kumlien, 1999; Bobic et al., 2012). Existing
RC datasets are either severely limited in size (Hersh
et al., 2007) or cover a very diverse set of query
types (Nentidis et al., 2017), complicating the application of neural models that have seen successes
for other domains (Wiese et al., 2017).
A task that has received significant attention is
detecting Drug-Drug Interactions (DDIs) (Gurulingappa et al., 2012). Existing DDI efforts have focused on explicit mentions of interactions in single sentences (Gurulingappa et al., 2012; Percha et
4.1
Assembly
We construct M ED H OP using D RUG BANK (Law
et al., 2014) as structured knowledge resource and
research paper abstracts from M EDLINE as documents. There is only a single relation type for
D RUG BANK facts, interacts with, that connects
pairs of drugs – an example of a M ED H OP query
would thus be “(Leuprolide, interacts with, ?)”. We
start by processing the 2016 M EDLINE release using the preprocessing pipeline employed for the
BioNLP 2011 Shared Task (Stenetorp et al., 2011).
We restrict the set of entities in the bipartite graph
to drugs in D RUG BANK and human proteins in
S WISS -P ROT (Bairoch et al., 2004). That is, the
graph has drugs and proteins on one side, and M ED LINE abstracts on the other.
The edge structure is as follows: (1) there is an
edge from a document to all proteins mentioned in it.
(2) there is an edge between a document and a drug,
if this document also mentions a protein known to
be a target for the drug according to D RUG BANK.
This edge is bidirectional, i.e. can be traversed both
ways, since there is no canonical document describing each drug – thus one can “hop” to any document
mentioning the drug and its target. (3) there is an
edge from a protein p to a document mentioning p,
but only if the document also mentions another protein p0 which is known to interact with p according to
R EACTOME (Fabregat et al., 2016). Given our distant supervision assumption, these additionally constraining requirements err on the side of precision.
As a mention, similar to Percha et al. (2012), we
consider any exact match of a name variant of a drug
or human protein in D RUG BANK or S WISS -P ROT.
For a given DDI (drug1 , interacts with, drug2 ), we
then select drug1 as the starting point for the graph
traversal. As possible end points we consider any
other drug, apart from drug1 and those interacting
with drug1 other than drug2 . Similar to W IKI H OP,
we exclude samples with more than 64 support documents and impose a maximum document length of
300 tokens plus title.8
[Figure 4 plots the proportion of the dataset (y-axis) against the number of support documents per sample (x-axis) for W IKI H OP and M ED H OP.]
Figure 4: Support documents per training sample.
            Train    Dev     Test    Total
W IKI H OP  43,738   5,129   2,451   51,318
M ED H OP   1,620    342     546     2,508
Table 1: Dataset sizes for our respective datasets.
5
Dataset Analysis
Table 1 shows the dataset sizes. Note that W IKI H OP
inherits the train, development, and test set splits
from W IKI R EADING – i.e. the full dataset creation,
filtering, and sub-sampling pipeline is executed on
each set individually. Also note that sub-sampling
according to document-answer collocation (Section 3.2) significantly reduces the size of W IKI H OP
from ≈528, 000 training samples to ≈44, 000. Figure 4 illustrates the distribution of the number of
support documents per sample. W IKI H OP shows a
Poisson-like behaviour – most likely due to structural regularities in W IKIPEDIA– whereas M ED H OP
exhibits a bimodal distribution, in line with our observation that certain drugs and proteins have far
more interactions and studies associated with them.
Document Sub-sampling The bipartite graph for
M ED H OP is orders of magnitude more densely connected than for W IKI H OP. This can lead to potentially large support document sets Sq , to a degree
where it becomes computationally infeasible for a
majority of existing RC models. After the traversal
has finished, we thus subsample documents by first
adding a set of documents that connects the drug
in the query with its answer. We then iteratively
add documents to connect false candidates until we
reach the limit of 64 documents – while ensuring
that all candidates have the same number of paths
through the bipartite graph.
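A rough sketch of this greedy selection (our own reconstruction from the description above; the chain data structures are assumptions):

def subsample_documents(answer_chain, candidate_chains, max_docs=64):
    """Greedily select support documents for one M ED H OP sample.

    answer_chain:     list of document ids connecting the query drug to the answer
    candidate_chains: candidate -> list of document-id chains supporting it
    One chain per candidate is added per round, so candidates accumulate
    paths at the same rate until the document budget is exhausted.
    """
    selected = set(answer_chain)          # always keep a chain to the true answer
    next_chain = {cand: 0 for cand in candidate_chains}
    added = True
    while added:
        added = False
        for cand, chains in candidate_chains.items():
            i = next_chain[cand]
            if i >= len(chains):
                continue
            chain = set(chains[i])
            if len(selected | chain) <= max_docs:
                selected |= chain
                added = True
            next_chain[cand] += 1         # move on to this candidate's next chain
    return selected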
Mitigating Candidate Frequency Imbalance
Some drugs interact with more drugs than others
– Aspirin for example interacts with 743 other
drugs, but Isotretinoin with only 34. This leads
to similar candidate frequency imbalance issues
as with W IKI H OP – but due to its smaller size
M ED H OP is difficult to sub-sample. Nevertheless
we can successfully combat this issue by masking
entity names, detailed in Section 6.2.
8
5.1 Qualitative Analysis
To establish the quality of the data and analyse potential distant supervision errors, we sample and annotate 100 samples from each development set.
W IKI H OP Table 2 lists characteristics along with
the proportion of samples that exhibit them. For
45%, the true answer either uniquely follows from
multiple texts directly or is suggested as likely. For
26%, more than one candidate is plausibly supported by the documents, including the correct an-
Same restriction as the journal PLoS ONE.
swer. This is often due to hypernymy, where
the appropriate level of granularity for the answer is difficult to predict – e.g. (west suffolk,
administrative entity, ?)
with candidates
suffolk and england. This is a direct consequence
of including type-consistent false answer candidates
from W IKIDATA, which can lead to questions with
several true answers. For 9% of cases a single document suffices; these samples contain a linked document that states enough information about item and
answer together. Finally, although our task is significantly more complex than most previous tasks
where distant supervision has been applied, the distant supervision assumption is only violated for 20%
of the samples – a proportion similar to previous
work (Riedel et al., 2010). These cases can either
be due to conflicting information between W IKI DATA and W IKIPEDIA (8%), e.g. when the date of
birth for a person differs between W IKIDATA and
what is stated in the W IKIPEDIA article, or because
the answer is consistent but cannot be inferred from
the support documents (12%). When answering 100
questions, the annotator knew the answer prior to
reading the documents for 9%, and produced the
correct answer after reading the document sets for
74% of the cases.
Unique multi-step answer.             36%
Likely multi-step unique answer.       9%
Multiple plausible answers.           15%
Ambiguity due to hypernymy.           11%
Only single document required.         9%
Answer does not follow.               12%
W IKIDATA/W IKIPEDIA discrepancy.      8%
Table 2: Qualitative analysis of W IKI H OP samples.
velopment and test set.9 Annotators were shown the
query-answer pair as a fact, and the chain of relevant
documents leading to the answer – similar to our
qualitative analysis of M ED H OP. They were then
instructed to answer (1) whether they knew the fact
before, (2) whether the fact follows from the texts
(with options “fact follows”, “fact is likely”, and
“fact does not follow”), and (3), whether a single or
several of the documents are required. Each sample was shown to three annotators and a majority
vote was used to aggregate the annotations. Annotators were familiar with the fact 4.6% of the time;
prior knowledge of the fact is thus not likely to be a
confounding effect on the other judgements. Interannotator agreement as measured with Fleiss’ kappa
is 0.253 in (2), and 0.281 in (3) – indicating a fair
overall agreement. Overall, 9.5% of samples have
no clear majority in (2).
Among the samples with a majority judgement,
59.8% are cases where the fact “follows”, for 14.2%
the fact was judged as “likely”, and as “not follow”
for 25.9%. This again provides good justification
for the distant supervision strategy employed during
dataset construction.
Among the samples with a majority vote for (2)
of either “follows” or “likely”, 55.9% were marked
with majority vote as requiring multiple documents
to infer the fact, and 44.1% as requiring only a single document. The latter number is larger than initially expected, given the construction of samples
through graph traversal. However, when inspecting
cases judged as “single” more closely, we observed
that many indeed provide a clear hint about the correct answer within one document, but without stating it explicitly. For example, for the fact (witold ci-
M ED H OP Since both document complexity and
number of documents per sample was significantly
larger compared to W IKI H OP (see Figure 4) it was
not feasible to ask an annotator to read all support
documents for 100 samples. We thus opted to verify the dataset quality by providing only the subset
of documents relevant to support the correct answer,
i.e. those traversed along the path reaching the answer. The annotator was asked if the answer to the
query “follows”, “is likely”, or “does not follow”,
given the relevant documents. 68% of the cases were
considered as “follows” or as “is likely”. The majority of cases violating the distant supervision assumption were errors due to the lack of a necessary
PPI in one of the connecting documents.
5.2
Crowdsourced Human Annotation
We asked human annotators on Amazon Mechanical Turk to evaluate samples of the W IKI H OP de-
9
While desirable, crowdsourcing the same annotations for
M ED H OP is not feasible since it requires specialist knowledge.
chy, country of citizenship, poland) with documents
d1 : Witold Cichy (born March 15, 1986 in Wodzisaw
lski) is a Polish footballer[...] and d2 : Wodzisaw
lski[...] is a town in Silesian Voivodeship, southern
Poland[...], the information provided in d1 suffices
for a human given the background knowledge that
Polish is an attribute related to Poland, removing the
need for d2 to infer the answer.
6 Experiments
candidate with the highest TF-IDF similarity score:
arg max_{c∈Cq} [ max_{s∈Sq} TF-IDF(q + c, s) ]    (1)
This section describes experiments on W IKI H OP
and M ED H OP with the goal of establishing the performance of several baseline models, including recent neural RC models. We empirically demonstrate
the importance of mitigating dataset biases, probe
whether multi-step behaviour is beneficial for solving the task, and investigate if RC models can learn
to perform lexical abstraction.
arg max_{c∈Cq} [ max_{d∈Sq} cooccurrence(d, c) ]    (2)
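Both lexical baselines therefore reduce to the same argmax over the candidate set; a compact sketch (our own wording of Equations 1 and 2, with a generic scoring callable standing in for the TF-IDF or co-occurrence score):

def predict(candidates, support_docs, score):
    """Return the candidate whose best-scoring support document is highest.

    score(candidate, doc) may be, e.g., the TF-IDF similarity of query+candidate
    against doc (Eq. 1) or the training-set count cooccurrence(doc, candidate) (Eq. 2).
    """
    return max(candidates, key=lambda c: max(score(c, d) for d in support_docs))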
Extractive RC models: FastQA & BiDAF In
our experiments we evaluate two recently proposed
LSTM-based extractive QA models: the Bidirectional Attention Flow model (BiDAF, Seo et al.
(2017)), and FastQA (Weissenborn et al., 2017),
which have shown robust performance across several extractive QA datasets. These models predict
an answer span within a single document. In order
to apply them in a multi-document setting, we concatenate all d ∈ Sq and add document separator tokens. During training, the first answer mention in the
concatenated document serves as the gold span; at
test time we measure accuracy based on exact string
matching.
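In practice this amounts to something like the following (illustrative only; the separator token and helper names are our assumptions):

SEP = " @@SEP@@ "  # assumed document separator token

def make_context(support_docs):
    """Concatenate all support documents into a single pseudo-document."""
    return SEP.join(support_docs)

def first_gold_span(context, answer):
    """Character offsets of the first answer mention, used as the training target."""
    start = context.find(answer)
    return None if start < 0 else (start, start + len(answer))

def exact_match_accuracy(predicted_strings, answers):
    """Test-time metric: exact string match between predicted span text and answer."""
    return sum(p == a for p, a in zip(predicted_strings, answers)) / len(answers)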
The order of the concatenated support documents
is randomised. In a preliminary experiment we
trained models using various document order permutations, but found that the performance did not
change significantly. We thus conclude that the specific order chosen does not have a major impact on
the experiments we conducted.
For BiDAF, the default hyperparameters from the
implementation of Seo et al. (2017) are used,11 with
pretrained GloVe (Pennington et al., 2014) embeddings. However, we restrict the maximum document length to 8,192 tokens and hidden size to 20,
and train for 5,000 iterations with batchsize 16 in order to fit the model into memory.12 For FastQA we
use the implementation provided by the authors, also
6.1 Models
Random Selects a random candidate; note that the
number of candidates differs between samples.
Max-mention Predicts the most frequently mentioned candidate in the support documents Sq of a
sample – randomly breaking ties.
Majority-cand.-per-query-type Predicts the candidate c ∈ Cq that was most frequently observed as
the true answer in the training set, given the query
type of q. For W IKI H OP, the query type is the property p of the query, for M ED H OP there is only the
single query type – interacts with.
TF-IDF Retrieval-based models are known to be
strong QA baselines if candidate answers are provided (Clark et al., 2016; Welbl et al., 2017). They
search for individual documents based on keywords
in the question, but typically do not combine information across documents. The purpose of this baseline is to see if it is possible to identify the correct
answer from a single document alone through lexical correlations. The model forms its prediction as
follows: For each candidate c, the concatenation of
the query q with c is fed as an OR query into the
whoosh text retrieval engine.10 It then predicts the
10
(1)
Document-cue During dataset construction we
observed that certain document-answer pairs appear
more frequently than others, to the effect that the
correct candidate is often indicated solely by the
presence of certain documents in Sq . This baseline
captures how easy it is for a model to exploit these
informative document-answer co-occurrences. It
predicts the candidate with highest score across Cq :
s∈Sq
11
https://github.com/allenai/bi-att-flow
Given the multi-document nature of our datasets we have
an increased number of tokens per sample, thus the additional
memory requirements.
12
https://pypi.python.org/pypi/Whoosh/
8
ing with a single support document is not enough to
build a strong predictive model for both datasets.
The Document-cue baseline can predict more than
a third of samples correctly, for both datasets, even
after sub-sampling frequent document-answer pairs
in W IKI H OP. The relative strength of this and
other baselines proves to be an important issue when
designing multi-hop datasets, which we addressed
through the measures described in Section 3.2. In
Table 4 we compare the two relevant baselines on
W IKI H OP before and after applying filtering measures. The absolute strength of these baselines before filtering shows how vital addressing this issue
is: 74.6% accuracy could be reached through exploiting the cooccurrence(d, c) statistic alone. This
underlines the paramount importance of investigating and addressing dataset biases that otherwise
would confound seemingly strong RC model performance on a given dataset. The relative drop demonstrates that the measures undertaken can successfully mitigate the issue. A downside to this aggressive filtering is a significantly reduced dataset size,
which renders it infeasible for smaller datasets like
M ED H OP.
Among the two neural RC models, BiDAF is overall strongest across both datasets – this is in contrast
to the reported results for SQuAD where their performance is nearly indistinguishable. This is possibly due to the iterative latent interactions in the
BiDAF architecture: we hypothesise that these are
of increased importance for our task, where information is distributed across documents. Overall it
is worth emphasising that both FastQA and BiDAF,
which are extractive QA models, do not rely on the
candidate options Cq at all – unlike the other baselines – but predict the answer by extracting a span
from the support documents.
In the masked setup all baseline models reliant on
lexical cues fail in face of the randomised answer expressions, since the same answer option has different placeholders in different examples. Especially
on M ED H OP, where dataset sub-sampling is not a
viable option, masking proves to be a valuable alternative, effectively circumventing spurious statistical
correlations that RC models can learn to exploit.
Both neural RC models are able to largely retain
or even improve their strong performance when answers are masked: they are able to leverage the con-
with pre-trained GloVe embeddings, no characterembeddings, no maximum support length, hidden
size 50, and batch size 64 for 50 epochs.
6.2
Lexical Abstraction: Candidate Masking
The presence of lexical regularities among answers is a problem in RC dataset assembly – a
phenomenon already observed by Hermann et al.
(2015).
When comprehending a text, the correct answer
should become clear from its context – rather than
from an intrinsic property of the answer expression.
To evaluate the ability of models to rely on context
alone, we created masked versions of the datasets:
we replace any candidate expression randomly using 100 unique placeholder tokens, e.g. ”Mumbai
is the most populous city in MASK7.” Masking is
consistent within one sample, but generally different
for the same expression if it appears in another sample. This not only removes answer frequency cues,
it also removes statistical correlations between frequent answer strings and support documents. Models consequently cannot base their prediction on intrinsic properties of the answer expression, but have
to rely on the context surrounding the mentions.
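A sketch of this masking step (placeholder format, helper names, and the per-sample seeding are ours):

import random
import re

def mask_candidates(sample, num_placeholders=100, seed=0):
    """Replace every candidate mention with a per-sample placeholder token.

    The mapping is consistent within one sample but re-randomised across samples,
    so the same expression generally receives different placeholders elsewhere.
    Assumes len(candidates) <= num_placeholders.
    """
    rng = random.Random(f"{seed}-{sample['id']}")
    ids = rng.sample(range(num_placeholders), len(sample['candidates']))
    mapping = {cand: f"MASK{i}" for cand, i in zip(sample['candidates'], ids)}

    def mask(text):
        # replace longer candidate strings first so substrings do not clobber them
        for cand in sorted(mapping, key=len, reverse=True):
            text = re.sub(re.escape(cand), mapping[cand], text)
        return text

    masked_docs = [mask(doc) for doc in sample['support_docs']]
    return masked_docs, mapping[sample['answer']], mapping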
6.3
Results & Discussion
Table 3 shows the experimental outcomes for W IK I H OP and M ED H OP, together with results for the
masked setting; we will first discuss the former. A
first observation is that candidate mention frequency
does not produce better predictions than a random
guess. Predicting the answer most frequently observed at training time achieves strong results: as
much as 38.8% and 58.4% on the two datasets. That
is, a simple frequency statistic together with answer
type constraints alone is a relatively strong predictor,
and the strongest overall for the “unmasked” version
of M ED H OP.
The TF-IDF retrieval baseline clearly performs
better than random for W IKI H OP, but is not very
strong overall. That is, the question tokens are helpful to detect relevant documents, but exploiting only
this information compares poorly to the other baselines. On the other hand, as no co-mention of an
interacting drug pair occurs within any single document in M ED H OP, the TF-IDF baseline performs
worse than random. We conclude that lexical match9
Model                        W IKI H OP   W IKI H OP Masked   M ED H OP   M ED H OP Masked
Random                       11.5         12.2                13.9        14.1
Max-mention                  10.6         13.9                9.5         9.2
Majority-cand.-per-q.type    38.8         12.0                58.4        10.4
TF-IDF                       25.6         14.4                9.0         8.8
Document-cue                 36.7         7.4                 44.9        15.2
FastQA                       25.7         35.8                23.1        31.3
BiDAF                        42.9         54.5                47.8        33.7
Table 3: Test accuracies for the W IKI H OP and M ED H OP datasets.
Model             Unfiltered   Filtered
Document-cue      74.6         36.7
Maj. candidate    41.2         38.8
TF-IDF            43.8         25.6
Train set size    527,773      43,738
Table 4: Accuracy comparison for simple baseline models on W IKI H OP before and after filtering.
Model          WH     WH-GC   MH     MH-GC
BiDAF          42.9   57.9    47.8   86.4
BiDAF mask     54.5   81.2    33.7   99.3
FastQA         25.7   44.5    23.1   54.6
FastQA mask    35.8   65.3    31.3   51.8
Table 5: Test accuracy when only using relevant documents. GC: gold chain; WH: W IKI H OP; MH: M ED H OP.

Model     WH     WH-rem   MH     MH-rem
BiDAF     54.5   44.6     33.7   30.4
FastQA    35.8   38.0     31.3   28.6
Table 6: Test accuracy (masked) when only documents containing answer candidates are given (rem). WH: W IKI H OP; MH: M ED H OP.

6.4
Using only relevant documents
We conducted further experiments to examine the
RC models when presented with only the relevant
documents in Sq , i.e. the chain of documents leading to the correct answer. This allows us to investigate the hypothetical performance of the models
if they were able to select and read only relevant
documents: Table 5 summarises these results. Models improve greatly in this gold chain setup with up
to 81.2% on W IKI H OP in the masked setting for
BiDAF. This demonstrates that RC models are capable of identifying the answer when few or no plausible false candidates are mentioned, which is particularly evident for M ED H OP where documents tend
to discuss only single drug candidates. On the other
hand, it also shows that the models’ answer selection
process is not robust to the introduction of unrelated
documents with type consistent candidates. Lastly,
the results indicate that learning to intelligently select relevant documents before RC may be among
the most promising directions for future model development.
text around the candidate expressions. To understand differences in model behaviour between W IK I H OP and M ED H OP , it is worth noting that drug
mentions in M ED H OP are normalised to a unique
single-word identifier, and performance drops under masking. In contrast, for the open-domain setting of W IKI H OP, a reduction of the answer vocabulary to 100 single-token expressions clearly helps
the model in selecting a candidate span, compared
to the multi-token candidate expressions in the unmasked setting. Overall, although both neural RC
models now clearly outperform the other baselines,
they still have large room for improvement compared to human performance at 74% for W IKI H OP.
6.5
Removing relevant documents
To investigate whether the neural RC models can
draw upon information requiring multi-step inference we designed an experiment where we discard
all documents from Sq that do not contain a candidate mention, including the first documents traversed. Table 6 shows the results: we can observe that performance drops across the board for
BiDAF. There is a significant drop of 3.3% on M ED H OP, but the drop for W IKI H OP is drastic at 10% –
demonstrating that BiDAF is able to leverage cross-document information. FastQA shows a slight increase of 2.2% for W IKI H OP and a decrease of 2.7%
on M ED H OP. While inconclusive, it is clear that
FastQA with fewer latent interactions than BiDAF
has problems integrating cross-document information.
7
ensures that multi-step inference goes beyond resolving co-reference.
Compositional Knowledge Base Inference
Combining multiple facts is common for structured
knowledge resources which formulate facts using
first-order logic. KB inference methods include
Inductive Logic Programming (Quinlan, 1990;
Pazzani et al., 1991; Richards and Mooney, 1991)
and probabilistic relaxations to logic like Markov
Logic (Richardson and Domingos, 2006; Schoenmackers et al., 2008). These approaches suffer
from limited coverage and inefficient inference,
though efforts to circumvent sparsity have been
undertaken (Schoenmackers et al., 2008; Schoenmackers et al., 2010). A more scalable approach
to composite rule learning is the Path Ranking
Algorithm (PRA) (Lao and Cohen, 2010; Lao et al.,
2011), which performs random walks to identify
salient paths between entities. Gardner et al. (2013)
circumvent the sparsity problems in PRA by introducing synthetic links via dense latent embeddings.
Several other multi-fact inference methods based
on dense representations have been proposed, using
composition functions such as vector addition (Bordes et al., 2014), RNNs (Neelakantan et al., 2015;
Das et al., 2016), and memory networks (Jain,
2016). Another approach is the Neural Theorem
Prover (Rocktäschel and Riedel, 2017), which
uses dense rule and symbol embeddings to learn a
differentiable backward chaining algorithm.
All these previous approaches centre around
learning how to combine facts from a KB, i.e. in a
structured form with pre-defined schema. That is,
they work as part of a pipeline, and either rely on
output of a previous IE step (Banko et al., 2007), or
on direct human annotation (Bollacker et al., 2008)
which tends to be costly and biased in coverage.
However, recent neural RC methods (Seo et al.,
2017; Shen et al., 2017a) have demonstrated that
end-to-end language understanding approaches can
infer answers directly from text – sidestepping intermediate query parsing and IE steps. Our work
aims to evaluate whether end-to-end multi-step RC
models can indeed operate on raw text documents
only – while performing the kind of inference most
commonly associated with logical inference methods operating on structured knowledge.
Related Datasets End-to-end text-based QA has witnessed a surge in interest with the advent of large-scale datasets, which have been assembled based on Freebase (Berant et al., 2013; Bordes et al., 2015b), Wikipedia (Yang et al., 2015b; Rajpurkar et al., 2016; Hewlett et al., 2016), web search queries (Nguyen et al., 2016), news articles (Hermann et al., 2015; Onishi et al., 2016), books (Hill et al., 2015; Paperno et al., 2016), science exams (Welbl et al., 2017), and trivia (Boyd-Graber et al., 2012; Dunn et al., 2017). Besides TriviaQA (Joshi et al., 2017), all these datasets are confined to single documents, and RC typically does not require a combination of multiple independent facts. In contrast, WikiHop and MedHop are specifically designed for cross-document RC and multi-step inference. There exist other multi-hop RC resources, but they are either very limited in size, such as the FraCaS test suite, or based on synthetic language (Weston et al., 2015). Fried et al. (2015) have demonstrated that exploiting information from other related documents based on lexical semantic similarity is beneficial for re-ranking answers in open-domain non-factoid QA. Their method is related to ours, but the document connections are based on the lexical semantic similarity between words, whereas in our approach they are based on the relation between specific entities. TriviaQA partly involves multi-step reasoning, but the complexity largely stems from parsing compositional questions. Our datasets centre around compositional inference from comparatively simple queries and the cross-document setup ensures that multi-step inference goes beyond resolving co-reference.
Text-Based Multi-Step Reading Comprehension
A rich collection of neural network models tailored
towards multi-step RC has been developed over the
last few years. Memory networks (Weston et al.,
2014; Sukhbaatar et al., 2015; Kumar et al., 2016)
define a generic model class that iteratively attends
over memory items defined via text, and they show
promising performance on synthetic tasks requiring
multi-step reasoning (Weston et al., 2015). One
common characteristic of neural multi-hop models
is their rich structure that enables matching and interaction between question, context, answer candidates and combinations thereof (Peng et al., 2015;
Weissenborn, 2016; Xiong et al., 2016; Liu and
Perez, 2017), which often is iterated over several
times (Sordoni et al., 2016; Mark et al., 2016; Seo
et al., 2016; Hu et al., 2017) and may contain trainable stopping mechanisms (Graves, 2016; Shen et
al., 2017b). All these methods show promise in
single-document RC, and by design should be capable of integrating multiple facts across documents.
However, thus far they have not been evaluated for a
cross-document multi-step RC task – as in this work.
Learning Search Expansion Other research addresses expanding the document set available to a QA system, either in the form of web navigation (Nogueira and Cho, 2016), or via query reformulation techniques, which often use neural reinforcement learning (Narasimhan et al., 2016; Nogueira and Cho, 2017; Buck et al., 2017). While related, this work ultimately aims at reformulating queries to better acquire evidence documents, and not at answering queries through combining facts.

8
Conclusions

We have introduced a new cross-document multi-hop RC task, devised a generic dataset derivation strategy and applied it to two separate domains. The resulting datasets test RC methods in their ability to perform composite reasoning – something thus far limited to models operating on structured knowledge resources. In our experiments we found that contemporary RC models can leverage cross-document information, but a sizeable gap to human performance remains. Finally, we identified the selection of relevant document sets as the most promising direction for future research.

Acknowledgments

We would like to thank Tim Dettmers, Pasquale
Minervini, Jeff Mitchell, and Sebastian Ruder for
several helpful comments and feedback regarding an
earlier draft of this paper. This work was supported
by an Allen Distinguished Investigator Award, a
Marie Curie Career Integration Award and an Engineering and Physical Sciences Research Council
scholarship.
References
Michael Ashburner, Catherine A. Ball, Judith A. Blake,
David Botstein, Heather Butler, J. Michael Cherry, Allan P. Davis, Kara Dolinski, Selina S. Dwight, Janan T.
Eppig, et al. 2000. Gene ontology: tool for the unification of biology. Nature Genetics, 25(1):25.
Amos Bairoch, Brigitte Boeckmann, Serenella Ferro, and
Elisabeth Gasteiger. 2004. Swiss-prot: Juggling between evolution and stability. Briefings in Bioinformatics, 5(1):39–55.
Michele Banko, Michael J. Cafarella, Stephen Soderland,
Matt Broadhead, and Oren Etzioni. 2007. Open information extraction from the web. In Proceedings of the
20th International Joint Conference on Artifical Intelligence, IJCAI’07, pages 2670–2676, San Francisco,
CA, USA. Morgan Kaufmann Publishers Inc.
Jonathan Berant, Andrew Chou, Roy Frostig, and Percy
Liang. 2013. Semantic parsing on freebase from
question-answer pairs. In Proceedings of the 2013
Conference on Empirical Methods in Natural Language Processing, pages 1533–1544.
Tamara Bobic, Roman Klinger, Philippe Thomas, and
Martin Hofmann-Apitius. 2012. Improving distantly
supervised extraction of drug-drug and protein-protein
interactions. In Proceedings of the Joint Workshop on
Unsupervised and Semi-Supervised Learning in NLP,
pages 35–43, Avignon, France, April. Association for
Computational Linguistics.
Kurt Bollacker, Colin Evans, Praveen Paritosh, Tim
Sturge, and Jamie Taylor. 2008. Freebase: a collaboratively created graph database for structuring human
knowledge. In SIGMOD 08 Proceedings of the 2008
ACM SIGMOD international conference on Management of data, pages 1247–1250.
Antoine Bordes, Sumit Chopra, and Jason Weston.
2014. Question answering with subgraph embeddings.
CoRR, abs/1406.3676.
Antoine Bordes, Nicolas Usunier, Sumit Chopra, and
Jason Weston. 2015a. Large-scale simple question answering with memory networks.
CoRR,
abs/1506.02075.
Antoine Bordes, Nicolas Usunier, Sumit Chopra, and
Jason Weston. 2015b. Large-scale simple question answering with memory networks.
CoRR,
abs/1506.02075.
Jordan Boyd-Graber, Brianna Satinoff, He He, and Hal
Daumé, III. 2012. Besting the quiz master: Crowdsourcing incremental classification games. In Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and
Computational Natural Language Learning, EMNLPCoNLL ’12, pages 1290–1301, Stroudsburg, PA,
USA. Association for Computational Linguistics.
Christian Buck, Jannis Bulian, Massimiliano Ciaramita,
Andrea Gesmundo, Neil Houlsby, Wojciech Gajewski,
and Wei Wang. 2017. Ask the right questions: Active question reformulation with reinforcement learning. CoRR, abs/1705.07830.
Danqi Chen, Jason Bolton, and Christopher D. Manning.
2016. A thorough examination of the cnn/daily mail
reading comprehension task. In Proceedings of the
54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages
2358–2367, Berlin, Germany, August. Association for
Computational Linguistics.
Peter Clark, Oren Etzioni, Tushar Khot, Ashish Sabharwal, Oyvind Tafjord, Peter Turney, and Daniel
Khashabi. 2016. Combining retrieval, statistics, and
inference to answer elementary science questions. In
Proceedings of the Thirtieth AAAI Conference on Artificial Intelligence, AAAI’16, pages 2580–2586. AAAI
Press.
Kevin Bretonnel Cohen and Lawrence Hunter. 2004.
Natural language processing and systems biology. Artificial intelligence methods and tools for systems biology, pages 147–173.
Mark Craven and Johan Kumlien. 1999. Constructing
biological knowledge bases by extracting information
from text sources. In Proceedings of the Seventh International Conference on Intelligent Systems for Molecular Biology, pages 77–86. AAAI Press.
Rajarshi Das, Arvind Neelakantan, David Belanger, and
Andrew McCallum. 2016. Chains of reasoning over
entities, relations, and text using recurrent neural networks. arXiv preprint arXiv:1607.01426.
Matthew Dunn, Levent Sagun, Mike Higgins, V. Ugur
Güney, Volkan Cirik, and Kyunghyun Cho. 2017.
Searchqa: A new q&a dataset augmented with context
from a search engine. CoRR, abs/1704.05179.
Antonio Fabregat, Konstantinos Sidiropoulos, Phani
Garapati, Marc Gillespie, Kerstin Hausmann, Robin
Haw, Bijay Jassal, Steven Jupe, Florian Korninger,
Sheldon McKay, Lisa Matthews, Bruce May, Marija Milacic, Karen Rothfels, Veronica Shamovsky,
Marissa Webber, Joel Weiser, Mark Williams, Guanming Wu, Lincoln Stein, Henning Hermjakob, and Peter D’Eustachio. 2016. The reactome pathway knowledgebase. Nucleic Acids Research, 44(D1):D481–
D487.
Daniel Fried, Peter Jansen, Gustave Hahn-Powell, Mihai
Surdeanu, and Peter Clark. 2015. Higher-order lexical semantic models for non-factoid answer reranking.
Transactions of the Association of Computational Linguistics, 3:197–210.
Matt Gardner, Partha Pratim Talukdar, Bryan Kisiel, and
Tom M. Mitchell. 2013. Improving learning and inference in a large knowledge-base using latent syntactic
cues. In EMNLP, pages 833–838. ACL.
Alex Graves. 2016. Adaptive computation time for recurrent neural networks. CoRR, abs/1603.08983.
Harsha Gurulingappa, Abdul Mateen Rajput, Angus
Roberts, Juliane Fluck, Martin Hofmann-Apitius, and
Luca Toldo. 2012. Development of a benchmark corpus to support the automatic extraction of drug-related
adverse effects from medical case reports. Journal of
Biomedical Informatics, 45(5):885 – 892. Text Mining and Natural Language Processing in Pharmacogenomics.
Karl Moritz Hermann, Tomas Kocisky, Edward Grefenstette, Lasse Espeholt, Will Kay, Mustafa Suleyman,
and Phil Blunsom. 2015. Teaching machines to read
and comprehend. In Advances in Neural Information
Processing Systems, pages 1693–1701.
William Hersh, Aaron Cohen, Lynn Ruslen, and Phoebe
Roberts. 2007. Trec 2007 genomics track overview.
In NIST Special Publication.
Daniel Hewlett, Alexandre Lacoste, Llion Jones, Illia
Polosukhin, Andrew Fandrianto, Jay Han, Matthew
Kelcey, and David Berthelot. 2016. Wikireading:
A novel large-scale language understanding task over
wikipedia. arXiv preprint arXiv:1608.03542.
Felix Hill, Antoine Bordes, Sumit Chopra, and Jason Weston. 2015. The goldilocks principle: Reading children’s books with explicit memory representations.
CoRR, abs/1511.02301.
Lynette Hirschman, Alexander Yeh, Christian Blaschke,
and Alfonso Valencia. 2005. Overview of biocreative:
critical assessment of information extraction for biology. BMC Bioinformatics, 6(1):S1, May.
Minghao Hu, Yuxing Peng, and Xipeng Qiu. 2017.
Mnemonic reader for machine comprehension. CoRR,
abs/1705.02798.
Sarthak Jain. 2016. Question answering over knowledge
base using factual memory networks. In Proceedings
of NAACL-HLT, pages 109–115.
Robin Jia and Percy Liang. 2017. Adversarial examples for evaluating reading comprehension systems. In Empirical Methods in Natural Language Processing (EMNLP).
Mike Mintz, Steven Bills, Rion Snow, and Daniel Jurafsky. 2009. Distant supervision for relation extraction
without labeled data. In Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and
the 4th International Joint Conference on Natural Language Processing of the AFNLP, pages 1003–1011,
Suntec, Singapore, August. Association for Computational Linguistics.
Alvaro Morales, Varot Premtoon, Cordelia Avery, Sue
Felshin, and Boris Katz. 2016. Learning to answer
questions from wikipedia infoboxes. In Proceedings
of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 1930–1935, Austin,
Texas, November. Association for Computational Linguistics.
Karthik Narasimhan, Adam Yala, and Regina Barzilay.
2016. Improving information extraction by acquiring
external evidence with reinforcement learning. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, EMNLP 2016,
Austin, Texas, USA, November 1-4, 2016, pages 2355–
2365.
Arvind Neelakantan, Benjamin Roth, and Andrew McCallum. 2015. Compositional vector space models for knowledge base completion. arXiv preprint
arXiv:1504.06662.
Anastasios Nentidis, Konstantinos Bougiatiotis, Anastasia Krithara, Georgios Paliouras, and Ioannis Kakadiaris. 2017. Results of the fifth edition of the bioasq
challenge. In BioNLP 2017, pages 48–57, Vancouver,
Canada,, August. Association for Computational Linguistics.
Tri Nguyen, Mir Rosenberg, Xia Song, Jianfeng Gao,
Saurabh Tiwary, Rangan Majumder, and Li Deng.
2016.
MS MARCO: A human generated machine reading comprehension dataset.
CoRR,
abs/1611.09268.
Rodrigo Nogueira and Kyunghyun Cho. 2016. Webnav:
A new large-scale task for natural language based sequential decision making. CoRR, abs/1602.02261.
Rodrigo Nogueira and Kyunghyun Cho. 2017. Taskoriented query reformulation with reinforcement
learning. CoRR, abs/1704.04572.
Takeshi Onishi, Hai Wang, Mohit Bansal, Kevin Gimpel,
and David A. McAllester. 2016. Who did what: A
large-scale person-centered cloze dataset. In Proceedings of the 2016 Conference on Empirical Methods in
Natural Language Processing, EMNLP 2016, Austin,
Texas, USA, November 1-4, 2016, pages 2230–2235.
Mandar Joshi, Eunsol Choi, Daniel S. Weld, and Luke
Zettlemoyer. 2017. Triviaqa: A large scale distantly
supervised challenge dataset for reading comprehension. In Proceedings of the 55th Annual Meeting of
the Association for Computational Linguistics, Vancouver, Canada, July. Association for Computational
Linguistics.
Rudolf Kadlec, Martin Schmid, Ondrej Bajgar, and Jan
Kleindienst. 2016. Text understanding with the attention sum reader network. CoRR, abs/1603.01547.
Jin-Dong Kim, Yue Wang, Toshihisa Takagi, and Akinori
Yonezawa. 2011. Overview of genia event task in
bionlp shared task 2011. In Proceedings of BioNLP
Shared Task 2011 Workshop.
Ankit Kumar, Ozan Irsoy, Peter Ondruska, Mohit Iyyer,
Ishaan Gulrajani James Bradbury, Victor Zhong, Romain Paulus, and Richard Socher. 2016. Ask me
anything: Dynamic memory networks for natural language processing. International Conference on Machine Learning.
Ni Lao and William W Cohen. 2010. Relational retrieval using a combination of path-constrained random walks. Machine learning, 81(1):53–67.
Ni Lao, Tom Mitchell, and William W Cohen. 2011.
Random walk inference and learning in a large scale
knowledge base. In Proceedings of the Conference on
Empirical Methods in Natural Language Processing,
pages 529–539. Association for Computational Linguistics.
Vivian Law, Craig Knox, Yannick Djoumbou, Tim Jewison, An Chi Guo, Yifeng Liu, Adam Maciejewski, David Arndt, Michael Wilson, Vanessa Neveu,
Alexandra Tang, Geraldine Gabriel, Carol Ly, Sakina
Adamjee, Zerihun T. Dame, Beomsoo Han, You Zhou,
and David S. Wishart. 2014. Drugbank 4.0: shedding new light on drug metabolism. Nucleic Acids Research, 42(D1):D1091–D1097.
Omer Levy, Minjoon Seo, Eunsol Choi, and Luke Zettlemoyer. 2017. Zero-shot relation extraction via reading
comprehension. In Proceedings of the 21st Conference on Computational Natural Language Learning
(CoNLL 2017), pages 333–342, Vancouver, Canada,
August. Association for Computational Linguistics.
Fei Liu and Julien Perez. 2017. Gated end-to-end memory networks. In Proceedings of the 15th Conference
of the European Chapter of the Association for Computational Linguistics, EACL 2017, Valencia, Spain,
April 3-7, 2017, Volume 1: Long Papers, pages 1–10.
Neumann Mark, Pontus Stenetorp, and Sebastian Riedel. 2016. Learning to reason with adaptive computation. In Interpretable Machine Learning for Complex Systems at the 2016 Conference on Neural Information Processing Systems (NIPS), Barcelona, Spain, December.
Denis Paperno, Germán Kruszewski, Angeliki Lazaridou, Quan Ngoc Pham, Raffaella Bernardi, Sandro
Pezzelle, Marco Baroni, Gemma Boleda, and Raquel
Fernández. 2016. The lambada dataset: Word prediction requiring a broad discourse context. arXiv
preprint arXiv:1606.06031.
M. J. Pazzani, C. A. Brunk, and G. Silverstein. 1991.
A knowledge-intensive approach to learning relational
concepts. In Proc. of the Eighth International Workshop on Machine Learning, pages 432–436, Evanston,
IL.
Baolin Peng, Zhengdong Lu, Hang Li, and Kam-Fai
Wong. 2015. Towards neural network-based reasoning. CoRR, abs/1508.05508.
Nanyun Peng, Hoifung Poon, Chris Quirk, Kristina
Toutanova, and Wen-tau Yih. 2017. Cross-sentence
n-ary relation extraction with graph lstms. Transactions of the Association for Computational Linguistics,
5:101–115.
Jeffrey Pennington, Richard Socher, and Christopher D.
Manning. 2014. Glove: Global vectors for word representation. In Empirical Methods in Natural Language Processing (EMNLP), pages 1532–1543.
Bethany Percha, Yael Garten, and Russ B Altman. 2012.
Discovery and explanation of drug-drug interactions
via text mining. In Pacific symposium on biocomputing, page 410. NIH Public Access.
J. R. Quinlan. 1990. Learning logical definitions from
relations. MACHINE LEARNING, 5:239–266.
P. Rajpurkar, J. Zhang, K. Lopyrev, and P. Liang. 2016.
Squad: 100,000+ questions for machine comprehension of text. In Empirical Methods in Natural Language Processing (EMNLP).
B. L. Richards and R. J. Mooney. 1991. First-order
theory revision. In Proc. of the Eighth International Workshop on Machine Learning, pages 447–
451, Evanston, IL.
Matthew Richardson and Pedro Domingos.
2006.
Markov logic networks. Mach. Learn., 62(1-2):107–
136, February.
Sebastian Riedel, Limin Yao, and Andrew McCallum.
2010. Modeling relations and their mentions without labeled text. In Proceedings of the 2010 European
Conference on Machine Learning and Knowledge Discovery in Databases: Part III, ECML PKDD’10,
pages 148–163, Berlin, Heidelberg. Springer-Verlag.
Tim Rocktäschel and Sebastian Riedel. 2017. End-toend differentiable proving. In Advances in Neural
Information Processing Systems 31: Annual Conference on Neural Information Processing Systems 2017,
December 4-9, 2017, Long Beach, California, United
States, volume abs/1705.11040.
Stefan Schoenmackers, Oren Etzioni, and Daniel S.
Weld. 2008. Scaling textual inference to the web.
In EMNLP ’08: Proceedings of the Conference on
Empirical Methods in Natural Language Processing,
pages 79–88, Morristown, NJ, USA. Association for
Computational Linguistics.
Stefan Schoenmackers, Oren Etzioni, Daniel S. Weld,
and Jesse Davis. 2010. Learning first-order horn
clauses from web text. In Proceedings of the 2010
Conference on Empirical Methods in Natural Language Processing, EMNLP ’10, pages 1088–1098,
Stroudsburg, PA, USA. Association for Computational
Linguistics.
Roy Schwartz, Maarten Sap, Ioannis Konstas, Leila
Zilles, Yejin Choi, and Noah A. Smith. 2017. The
effect of different writing tasks on linguistic style: A
case study of the roc story cloze task. In Proceedings
of the 21st Conference on Computational Natural Language Learning (CoNLL 2017), pages 15–25, Vancouver, Canada, August. Association for Computational
Linguistics.
Isabel Segura-Bedmar, Paloma Martı́nez, and Marı́a Herrero Zazo. 2013. Semeval-2013 task 9 : Extraction of
drug-drug interactions from biomedical texts (ddiextraction 2013). In Second Joint Conference on Lexical and Computational Semantics (*SEM), Volume 2:
Proceedings of the Seventh International Workshop on
Semantic Evaluation (SemEval 2013), pages 341–350,
Atlanta, Georgia, USA, June. Association for Computational Linguistics.
Min Joon Seo, Hannaneh Hajishirzi, and Ali Farhadi.
2016. Query-regression networks for machine comprehension. CoRR, abs/1606.04582.
Min Joon Seo, Aniruddha Kembhavi, Ali Farhadi, and
Hannaneh Hajishirzi. 2017. Bidirectional attention
flow for machine comprehension. In The International
Conference on Learning Representations (ICLR).
Yelong Shen, Po-Sen Huang, Jianfeng Gao, and Weizhu
Chen. 2017a. Reasonet: Learning to stop reading in
machine comprehension. In Proceedings of the 23rd
ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, KDD ’17, pages
1047–1055, New York, NY, USA. ACM.
Yelong Shen, Po-Sen Huang, Jianfeng Gao, and Weizhu
Chen. 2017b. Reasonet: Learning to stop reading in
machine comprehension. In Proceedings of the 23rd
ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, KDD ’17, pages
1047–1055, New York, NY, USA. ACM.
Alessandro Sordoni, Phillip Bachman, and Yoshua Bengio. 2016. Iterative alternating neural attention for
machine reading. CoRR, abs/1606.02245.
Pontus Stenetorp, Goran Topić, Sampo Pyysalo, Tomoko
Ohta, Jin-Dong Kim, and Jun’ichi Tsujii. 2011.
Bionlp shared task 2011: Supporting resources. In
Proceedings of BioNLP Shared Task 2011 Workshop,
pages 112–120, Portland, Oregon, USA, June. Association for Computational Linguistics.
Sainbayar Sukhbaatar, Jason Weston, Rob Fergus, et al.
2015. End-to-end memory networks. In Advances in
neural information processing systems, pages 2440–
2448.
The UniProt Consortium. 2017. Uniprot: the universal protein knowledgebase. Nucleic Acids Research,
45(D1):D158–D169.
Denny Vrandečić. 2012. Wikidata: A new platform
for collaborative data collection. In Proceedings of
the 21st International Conference on World Wide Web,
WWW ’12 Companion, pages 1063–1064, New York,
NY, USA. ACM.
Dirk Weissenborn, Georg Wiese, and Laura Seiffe. 2017.
Fastqa: A simple and efficient neural architecture for
question answering. CoRR, abs/1703.04816.
Dirk Weissenborn. 2016. Separating answers from
queries for neural reading comprehension. CoRR,
abs/1607.03316.
Johannes Welbl, Nelson F. Liu, and Matt Gardner. 2017.
Crowdsourcing multiple choice science questions. In
Proceedings of the Third Workshop on Noisy Usergenerated Text, Copenhagen, Denmark. Association
for Computational Linguistics.
Jason Weston, Sumit Chopra, and Antoine Bordes. 2014.
Memory networks. arXiv preprint arXiv:1410.3916.
Jason Weston, Antoine Bordes, Sumit Chopra, and
Tomas Mikolov. 2015. Towards ai-complete question answering: A set of prerequisite toy tasks. CoRR,
abs/1502.05698.
Georg Wiese, Dirk Weissenborn, and Mariana L. Neves.
2017. Neural question answering at bioasq 5b. CoRR,
abs/1706.08568.
Caiming Xiong, Victor Zhong, and Richard Socher.
2016. Dynamic coattention networks for question answering. CoRR, abs/1611.01604.
Yi Yang, Wen-tau Yih, and Christopher Meek. 2015a.
Wikiqa: A challenge dataset for open-domain question
answering. In Proceedings of the 2015 Conference on
Empirical Methods in Natural Language Processing,
pages 2013–2018, Lisbon, Portugal, September. Association for Computational Linguistics.
Yi Yang, Wen-tau Yih, and Christopher Meek. 2015b.
Wikiqa: A challenge dataset for open-domain question
answering. In Proceedings of the 2015 Conference on
Empirical Methods in Natural Language Processing,
pages 2013–2018, Lisbon, Portugal, September. Association for Computational Linguistics.
A
Appendix: Candidate and Document statistics

Figure 5 shows a histogram with the number of candidates per sample in WikiHop, and the distribution shows a slow but steady decrease. For MedHop, the vast majority of samples has 9 candidates, which is due to the way documents are selected up until a maximum of 64 documents is reached. Very few samples have less than 9 candidates, and samples would have far more false candidates if more than 64 support documents were included.

Figure 5: Histogram for the number of candidates per sample in WikiHop.

B
Appendix: Document-Cue examples

Table 7 shows examples of answers and articles which frequently appear together in WikiHop before filtering.

C
Appendix: Gold Chain Examples

Table 8 shows examples of document gold chains in WikiHop. Note that their lengths differ, with a maximum of 3 documents.

D
Appendix: Document Lengths

Figure 6 shows the distribution of document lengths for both datasets. Note that the document lengths in WikiHop correspond to the lengths of the first paragraphs of Wikipedia articles. MedHop on the other hand reflects the length of research paper abstracts, which are generally longer.

Figure 6: Histogram for document lengths in WikiHop and MedHop.

E
Appendix: Query Types

Table 9 gives an overview over the 25 most frequent query types in WikiHop and their relative proportion in the dataset. Overall, the distribution across the 277 query types follows a power law.
Answer a | Wikipedia article d | Count | Prop.
united states of america | A U.S. state is a constituent political entity of the United States of America. | 68,233 | 12.9%
united kingdom | England is a country that is part of the United Kingdom. | 54,005 | 10.2%
taxon | In biology, a species (abbreviated sp., with the plural form species abbreviated spp.) is the basic unit of biological classification and a taxonomic rank. | 40,141 | 7.6%
taxon | A genus (pl. genera) is a taxonomic rank used in the biological classification | 38,466 | 7.3%
united kingdom | The United Kingdom of Great Britain and Northern Ireland, commonly known as the United Kingdom (UK) or Britain, is a sovereign country in western Europe. | 31,071 | 5.9%
taxon | Biology is a natural science concerned with the study of life and living organisms, including their structure, function, growth, evolution, distribution, identification and taxonomy. | 27,609 | 5.2%
united kingdom | Scotland [...] is a country that is part of the United Kingdom and covers the northern third of the island of Great Britain. | 25,456 | 4.8%
united kingdom | Wales [...] is a country that is part of the United Kingdom and the island of Great Britain. | 21,961 | 4.2%
united kingdom | London [...] is the capital and most populous city of England and the United Kingdom, as well as the most populous city proper in the European Union. | 21,920 | 4.2%
... | ... | ... | ...
united states of america | Nevada (Spanish for "snowy"; see pronunciations) is a state in the Western, Mountain West, and Southwestern regions of the United States of America. | 18,215 | 3.4%
... | ... | ... | ...
italy | The comune [...] is a basic administrative division in Italy, roughly equivalent to a township or municipality. | 8,785 | 1.7%
... | ... | ... | ...
human settlement | A town is a human settlement larger than a village but smaller than a city. | 5,092 | 1.0%
... | ... | ... | ...
people's republic of china | Shanghai [...] often abbreviated as Hu or Shen, is one of the four direct-controlled municipalities of the People's Republic of China. | 3,628 | 0.7%

Table 7: Examples with largest cooccurrence(d, c) statistic, before filtering. The Count column states cooccurrence(d, c); the last column states the corresponding relative proportion of training samples (total 527,773).
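For readers who want to reproduce a statistic of this kind, the sketch below tallies how often a support document co-occurs with a correct answer across training samples. The sample fields and the decision to count every (document, answer) pair once per sample are assumptions for illustration, not the exact filtering procedure used for the dataset.

```python
from collections import Counter

def cooccurrence_counts(training_samples):
    """Count (document, answer) co-occurrences over the training set.

    Hypothetical sketch: `training_samples` is an iterable of dicts with
    "supports" (list of document texts) and "answer" (the correct candidate).
    """
    counts = Counter()
    for sample in training_samples:
        answer = sample["answer"]
        for doc in sample["supports"]:
            counts[(doc, answer)] += 1
    return counts

# The most frequent pairs (cf. Table 7) expose document-cue shortcuts:
# for (doc, answer), count in cooccurrence_counts(train).most_common(10):
#     print(count, answer, doc[:60])
```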
Query: (the big broadcast of 1937, genre, ?)
Answer: musical film
Text 1: The Big Broadcast of 1937 is a 1936 Paramount Pictures production directed by Mitchell Leisen,
and is the third in the series of Big Broadcast movies. The musical comedy stars Jack Benny, George Burns,
Gracie Allen, Bob Burns, Martha Raye, Shirley Ross [...]
Text 2: Shirley Ross (January 7, 1913 March 9, 1975) was an American actress and singer, notable for her
duet with Bob Hope, ”Thanks for the Memory” from ”The Big Broadcast of 1938”[...]
Text 3: The Big Broadcast of 1938 is a Paramount Pictures musical film featuring W.C. Fields and Bob
Hope. Directed by Mitchell Leisen, the film is the last in a series of ”Big Broadcast” movies[...]
Query: (cmos, subclass of, ?)
Answer: semiconductor device
Text 1: Complementary metal-oxide-semiconductor (CMOS) [...] is a technology for constructing integrated circuits. [...] CMOS uses complementary and symmetrical pairs of p-type and n-type metal oxide
semiconductor field effect transistors (MOSFETs) for logic functions. [...]
Text 2: A transistor is a semiconductor device used to amplify or switch electronic signals[...]
Query: (raik dittrich, sport, ?)
Answer: biathlon
Text 1: Raik Dittrich (born October 12, 1968 in Sebnitz) is a retired East German biathlete who won two
World Championships medals. He represented the sports club SG Dynamo Zinnwald [...]
Text 2: SG Dynamo Zinnwald is a sector of SV Dynamo located in Altenberg, Saxony[...] The main sports
covered by the club are biathlon, bobsleigh, luge, mountain biking, and Skeleton (sport)[...]
Query: (minnesota gubernatorial election, office contested, ?)
Answer: governor
Text 1: The 1936 Minnesota gubernatorial election took place on November 3, 1936. Farmer-Labor Party
candidate Elmer Austin Benson defeated Republican Party of Minnesota challenger Martin A. Nelson.
Text 2: Elmer Austin Benson [...] served as the 24th governor of Minnesota, defeating Republican Martin
Nelson in a landslide victory in Minnesota’s 1936 gubernatorial election.[...]
Query: (ieee transactions on information theory, publisher, ?)
Answer: institute of electrical and electronics engineers
Text 1: IEEE Transactions on Information Theory is a monthly peer-reviewed scientific journal published by the IEEE Information Theory Society [...] the journal allows the posting of preprints [...]
Text 2: The IEEE Information Theory Society (ITS or ITSoc), formerly the IEEE Information Theory
Group, is a professional society of the Institute of Electrical and Electronics Engineers (IEEE) [...]
Query: (country of citizenship, louis-philippe fiset, ?)
Answer: canada
Text1: Louis-Philippe Fiset [...] was a local physician and politician in the Mauricie area [...]
Text2: Mauricie is a traditional and current administrative region of Quebec. La Mauricie National Park is
contained within the region, making it a prime tourist location. [...]
Text3: La Mauricie National Park is located near Shawinigan in the Laurentian mountains, in the Mauricie
region of Quebec, Canada [...]
Table 8: Examples of document gold chains in W IKI H OP. Article titles are boldfaced, the correct answer is underlined.
Query Type | Proportion in Dataset
instance of | 10.71 %
located in the administrative territorial entity | 9.50 %
occupation | 7.28 %
place of birth | 5.75 %
record label | 5.27 %
genre | 5.03 %
country of citizenship | 3.45 %
parent taxon | 3.16 %
place of death | 2.46 %
inception | 2.20 %
date of birth | 1.84 %
country | 1.70 %
headquarters location | 1.52 %
part of | 1.43 %
subclass of | 1.40 %
sport | 1.36 %
member of political party | 1.29 %
publisher | 1.16 %
publication date | 1.06 %
country of origin | 0.92 %
languages spoken or written | 0.92 %
date of death | 0.90 %
original language of work | 0.85 %
followed by | 0.82 %
position held | 0.79 %

Top 25 | 72.77 %
Top 50 | 86.42 %
Top 100 | 96.62 %
Top 200 | 99.71 %

Table 9: The 25 most frequent query types in WikiHop alongside their proportion in the training set.
BALANCED ALLOCATION: PATIENCE IS NOT A VIRTUE∗
arXiv:1602.08298v2 [] 22 Jan 2018
JOHN AUGUSTINE† , WILLIAM K. MOSES JR.‡ , AMANDA REDLICH§ , AND ELI UPFAL¶
Abstract.
Load balancing is a well-studied problem, with balls-in-bins being the primary framework. The
greedy algorithm Greedy[d] of Azar et al. places each ball by probing d > 1 random bins and placing
the ball in the least loaded of them. With high probability, the maximum load under Greedy[d]
is exponentially lower than the result when balls are placed uniformly randomly. Vöcking showed
that a slightly asymmetric variant, Left[d], provides a further significant improvement. However, this
improvement comes at an additional computational cost of imposing structure on the bins.
Here, we present a fully decentralized and easy-to-implement algorithm called FirstDiff[d] that
combines the simplicity of Greedy[d] and the improved balance of Left[d]. The key idea in FirstDiff[d]
is to probe until a different bin size from the first observation is located, then place the ball. Although
the number of probes could be quite large for some of the balls, we show that FirstDiff[d] requires
only at most d probes on average per ball (in both the standard and the heavily-loaded settings).
Thus the number of probes is no greater than either that of Greedy[d] or Left[d]. More importantly,
we show that FirstDiff[d] closely matches the improved maximum load ensured by Left[d] in both the
standard and heavily-loaded settings. We further provide a tight lower bound on the maximum load
up to O(log log log n) terms. We additionally give experimental data that FirstDiff[d] is indeed as
good as Left[d], if not better, in practice.
Key words. Load balancing, FirstDiff, Balanced allocation, Randomized algorithms, Task allocation
AMS subject classifications. 60C05, 60J10, 68R05
1. Introduction. Load balancing is the study of distributing loads across multiple entities such that the load is minimized across all the entities. This problem
arises naturally in many settings, including the distribution of requests across multiple servers, in peer-to-peer networks when requests need to be spread out amongst
the participating nodes, and in hashing. Much research has focused on practical
implementations of solutions to these problems [12, 6, 13].
Our work builds on several classic algorithms in the theoretical balls-in-bins
model. In this model, m balls are to be placed sequentially into n bins and each
ball probes the load in random bins in order to make its choice. Here we give a new
algorithm, FirstDiff[d], which performs as well as the best known algorithm, Left[d],
while being significantly easier to implement.
The allocation time for a ball is the number of probes made to different bins before
placement. The challenge is to balance the allocation time versus the maximum bin
load. For example, using one probe per ball, i.e. placing each ball uniformly at random, the maximum load of any bin when m = n will be ln n/ln ln n (1 + o(1)) (with high probability1) and total allocation time of n probes [11]. On the other hand, using d probes per ball and placing the ball in the lightest bin, i.e. Greedy[d], first studied by Azar et al. [2], decreases the maximum load to ln ln n/ln d + O(1) with allocation time of nd. In other words, using d ≥ 2 choices improves the maximum load exponentially, at a linear allocation cost.
∗ An earlier version of this paper appeared in [1]. The previous version had only expectation
upper bounds for average number of probes and no lower bounds. This version has lower bounds on
maximum load and high probability upper bounds for average number of probes, as well as cleaner
proofs for the same.
† Department of Computer Science & Engineering, Indian Institute of Technology Madras, Chennai, India. ([email protected]). Supported by the IIT Madras New Faculty Seed Grant, the
IIT Madras Exploratory Research Project, and the Indo-German Max Planck Center for Computer
Science (IMPECS).
‡ Department of Computer Science & Engineering, Indian Institute of Technology Madras, Chennai, India. ([email protected]).
§ Department of Mathematics, Bowdoin College, ME, USA ([email protected]). This material
is based upon work supported by the National Science Foundation under Grant No. DMS-0931908
while the author was in residence at the Institute for Computational and Experimental Research in
Mathematics in Providence, RI, during the Spring 2014 semester.
¶ Department of Computer Science, Brown University, RI, USA ([email protected]).
Vöcking [15] introduced a slightly asymmetric algorithm, Left[d], which quite surprisingly guaranteed (w.h.p.) a maximum load of ln ln n/(d ln φd) + O(1) (where φd is a constant between 1.61 and 2 when d ≥ 2) using the same allocation time of nd probes as Greedy[d] when m = n. This analysis of maximum load for Greedy[d] and Left[d] was extended to the heavily-loaded case (when m ≫ n) by Berenbrink et al. [3].
However, Left[d] in [15] and [3] utilizes additional processing. Bins are initially sorted
into groups and treated differently according to group membership. Thus practical
implementation, especially in distributed settings, requires significant computational
effort in addition to the probes themselves.
Our Contribution. We present a new algorithm, FirstDiff[d]. This algorithm requires no pre-sorting of bins; instead FirstDiff[d] uses real-time feedback to adjust its
number of probes for each ball2 .
The natural comparison is with the classic Greedy[d] algorithm; FirstDiff[d] uses
the same number of probes, on average, as Greedy[d] but produces a significantly
smaller maximum load. In fact, we show that the maximum load is as small as that of
Left[d] when m = n. Furthermore, it is comparable to Left[d] when heavily loaded. For
both the m = n and heavily loaded cases, FirstDiff[d] has much lower computational
overhead than Left[d].
This simpler implementation makes FirstDiff[d] especially suitable for practical
applications; it is amenable to parallelization, for example, and requires no central
control or underlying structure. Some applications have a target maximum load and
aim to minimize the necessary number of probes. From this perspective, our algorithm
again improves on Greedy[d]: the maximum load of FirstDiff[ln d] is comparable to that
of Greedy[d], and uses exponentially fewer probes per ball.
Theorem 3.1 Use FirstDiff[d], where the maximum number of probes allowed per ball is 2^{2d/3}, to allocate n balls into n bins. The average number of probes required per ball is at most d on expectation and w.h.p. Furthermore, the maximum load of any bin is at most log log n/(0.66d) + O(1) with high probability when d ≥ 4 and n ≥ max(2, n0), where n0 is the smallest value of n such that for all n > n0, 36 log n (72e log n/(5n))^4 ≤ 1/n^2.
Theorem 4.1 Use FirstDiff[d], where the maximum number of probes allowed per ball is 2^{d/2.17}, to allocate m balls into n bins. When m ≥ 72(nλ log n + n), where λ is taken from Lemma 4.5, n ≥ n0, where n0 is the smallest value of n that satisfies 0.00332n(λ log n + 1)d/2^{d/2.17} ≥ log n, and d ≥ 6, it takes at most d probes on average to place every ball on expectation and with high probability. Furthermore, for an absolute constant c,
Pr[Max. load of any bin > m/n + log log n/(0.46d) + c log log log n] ≤ c(log log n)^{−4}.
1 We use the phrase “with high probability” (or w.h.p. in short) to denote probability of the form 1 − O(n^{−c}) for some suitable c > 0. Furthermore, every log in this paper is to base 2 unless otherwise mentioned.
2 Thus we are concerned with the average number of probes per ball throughout this paper.
Our technique for proving that the average number of probes is bounded is novel to the best of our knowledge. As the number of probes required by each ball is
dependent on the configuration of the balls-in-bins at the time the ball is placed,
the naive approach to computing its expected value quickly becomes too conditional.
Instead, we show that this conditioning can be eliminated by carefully overcounting
the number of probes required for each configuration, leading to a proof that is then
quite simple. The heavily-loaded case is significantly more complex than the m = n
case; however the basic ideas remain the same.
The upper bound on the maximum load is proved using the layered induction
technique. However, because FirstDiff[d] is a dynamic algorithm, the standard recursion used in layered induction must be altered. We use coupling and some more
complex analysis to adjust the standard layered induction to this context.
We furthermore provide a tight lower bound on the maximum load for a broad
class of algorithms which use variable probing.
Theorem 5.1 Let Alg[k] be any algorithm that places m balls into n bins, where
m ≥ n, sequentially one by one and satisfies the following conditions:
1. At most k probes are used to place each ball.
2. For each ball, each probe is made uniformly at random to one of the n bins.
3. For each ball, each probe is independent of every other probe.
The maximum load of any bin after placing all m balls using Alg[k] is at least m/n + ln ln n/ln k − Θ(1) with high probability.
We use the above theorem to provide a lower bound on the maximum load of
FirstDiff[d], which is tight up to O(log log log n) terms.
Theorem 5.2 The maximum load of any bin after placing m balls into n bins using FirstDiff[d], where the maximum number of probes allowed per ball is 2^{Θ(d)}, is at least m/n + ln ln n/Θ(d) − Θ(1) with high probability.
Related Work. Several other algorithms in which the number of probes performed
by each ball is adaptive in nature have emerged in the past, such as work done by
Czumaj and Stemann [5] and by Berenbrink et al. [4].
Czumaj and Stemann [5] present an interesting “threshold” algorithm. First they
define a process Adaptive-Allocation-Process, where each load value has an associated
threshold and a ball is placed when the number of probes that were made to find a bin
of a particular load exceeded the associated threshold. Then, by carefully selecting the
thresholds for the load values, they develop M-Threshold, where each ball probes bins
until it finds one whose load is within some predetermined threshold. The bounds
on maximum load and on average allocation time are better than our algorithm’s,
but the trade-off is that computing the required threshold value often depends on the
knowledge (typically unavailable in practical applications) of the total number of balls
that will ever be placed. Furthermore their proofs are for m = n and don’t extend
easily to when m > n.
More recently, Berenbrink et al. [4] develop a new threshold algorithm, Adaptive,
which is similar to M-Threshold but where the threshold value used for a given ball
being placed depends on the number of balls placed thus far. They analyze this algorithm when m ≥ n and also extend the analysis of M-Threshold from [5] to the
m ≥ n case. They show that both algorithms have good bounds on maximum load
and average allocation time, but again this comes at the trade-off of requiring some
sort of global knowledge when placing the balls. In the case of Adaptive, each ball
must know what its order in the global placement of balls is, and in the case of M-
Threshold, each ball must know the total number of balls that will ever be placed.
Our algorithm is unique in that it requires no such global knowledge at all; it is able
to make decisions based on the probed bins’ load values alone.
Definitions. In the course of this paper we will use several terms from probability
theory, which we define below for convenience.
Consider two Markov chains At and Bt over time t ≥ 0 with state spaces S1 and
S2 respectively. A coupling (cf. [8]) of At and Bt is a Markov chain (At , Bt ) over time
t ≥ 0 with state space S1 × S2 such that At and Bt maintain their original transition
probabilities.
Consider two vectors u, v ∈ Z^n. Let u′ and v′ be permutations of u and v respectively such that u′_i ≥ u′_{i+1} and v′_i ≥ v′_{i+1} for all 1 ≤ i ≤ n − 1. We say u majorizes v (or v is majorized by u) when
Σ_{j=1}^{i} u′_j ≥ Σ_{j=1}^{i} v′_j, ∀ 1 ≤ i ≤ n.
For a given allocation algorithm C which places balls into n bins, we define the load
vector ut ∈ (Z∗ )n of that process after t balls have been placed as follows: the ith
index of ut denotes the load of the ith bin (we can assume a total order on the bins
according to their IDs). Note that ut , t ≥ 0, is a Markov chain.
Consider two allocation algorithms C and D that allocate m balls. Let the load
vectors for C and D after t balls have been placed using the respective algorithms be
ut and v t respectively. We say that C majorizes D (or D is majorized by C) if there
is a coupling between C and D such that ut majorizes v t for all 0 ≤ t ≤ m.
Berenbrink et al. [3] provide an illustration of the above ideas being applied in
the load balancing context.
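To illustrate the definition, here is a direct, minimal transcription of the majorization check for two load vectors; it is only an illustration, not code from [3] or from this paper.

```python
def majorizes(u, v):
    """Return True if load vector u majorizes load vector v (equal lengths)."""
    assert len(u) == len(v)
    u_sorted = sorted(u, reverse=True)
    v_sorted = sorted(v, reverse=True)
    prefix_u = prefix_v = 0
    for x, y in zip(u_sorted, v_sorted):
        prefix_u += x
        prefix_v += y
        if prefix_u < prefix_v:
            return False
    return True

# Example: after 4 balls, the load vector (3, 1, 0) majorizes (2, 1, 1),
# but (2, 1, 1) does not majorize (3, 1, 0).
assert majorizes([3, 1, 0], [2, 1, 1])
assert not majorizes([2, 1, 1], [3, 1, 0])
```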
We also use Theorem 2.1 from Janson [7] in order to achieve high probability concentration bounds on geometric random variables. We first set up the terms in the theorem and then restate it below. Let X1, . . . , Xn be n ≥ 1 geometric random variables with parameters p1, . . . , pn respectively. Define p∗ = min_i p_i, X = Σ_{i=1}^{n} X_i, and µ = E[X] = Σ_{i=1}^{n} 1/p_i. Now we have the following lemma.
Lemma 1.1 (Theorem 2.1 in [7]). For any p1, . . . , pn ∈ (0, 1] and any Λ ≥ 1, Pr(X ≥ Λµ) ≤ e^{−p∗µ(Λ−1−ln Λ)}.
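As a quick numerical sanity check (not part of the original analysis), the bound of Lemma 1.1 can be evaluated for the parameters used later in the proof of Lemma 3.2, namely Λ = 1.01, µ = n log k and p∗ = 1/k; the concrete values of n and k below are illustrative.

```python
import math

def janson_tail(p_star, mu, lam):
    """Lemma 1.1 upper bound: Pr(X >= lam*mu) <= exp(-p_star*mu*(lam - 1 - ln(lam)))."""
    return math.exp(-p_star * mu * (lam - 1.0 - math.log(lam)))

n, k = 10**6, 16  # illustrative; k = 2^(2d/3) for d = 6
bound = janson_tail(p_star=1.0 / k, mu=n * math.log2(k), lam=1.01)
print(bound)  # roughly 4e-6 here, i.e. O(1/n), matching the Phase One claim
```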
Organization of Paper. The structure of this paper is as follows. In Section 2, we
define the model formally and present the FirstDiff[d] algorithm. We then analyze the
algorithm when m = n in Section 3 and give a proof that the total number of probes
used by FirstDiff[d] to place n balls is nd with high probability, while the maximum bin load is still upper bounded by log log n/(0.66d) + O(1) with high probability. We provide the analysis of the algorithm when m > n in Section 4, namely that the number of probes is on average d per ball with high probability and the maximum bin load is upper bounded by m/n + log log n/(0.46d) + O(log log log n) with probability close to 1. We
provide a matching lower bound for maximum bin load tight up to the O(log log log n)
term for algorithms with variable number of probes and FirstDiff[d] in particular in
Section 5. In Section 6, we give experimental evidence that our FirstDiff[d] algorithm
indeed results in a maximum load that is comparable to Left[d] when m = n. Finally,
we provide some concluding remarks and scope for future work in Section 7.
2. The FirstDiff[d] Algorithm. The idea behind this algorithm is to use probes
more efficiently. In the standard d-choice model, effort is wasted in some phases. For
example, early on in the distribution, most bins have size 0 and there is no need to
search before placing a ball. On the other hand, more effort in other phases would
lead to significant improvement. For example, if .9n balls have been distributed, most
bins already have size at least 1 and thus it is harder to avoid creating a bin of size
2. FirstDiff[d] takes this variation into account by probing until it finds a difference,
then making its decision.
This algorithm uses probes more efficiently than other, fixed-choice algorithms,
while still having a balanced outcome. Each ball probes at most 2^{Θ(d)} bins (where d ≥ 6 and by extension 2^{Θ(d)} > 2) uniformly at random until it has found two bins with different loads (or a bin with zero load) and places the ball in the least loaded of the probed bins (or the zero loaded bin). If all 2^{Θ(d)} probed bins are equally loaded,
the ball is placed (without loss of generality) in the last probed bin. The pseudocode
for FirstDiff[d] is below. Note that we use the Θ() to hide a constant value. The exact
values are different for m = n and m ≫ n and are 2/3 and 1/2.17 respectively.
Algorithm 1 FirstDiff[d]
(Assume 2^{Θ(d)} > 2. The following algorithm is executed for each ball.)
1: Repeat 2^{Θ(d)} times
2:
Probe a new bin chosen uniformly at random
3:
if The probed bin has zero load then
4:
Place the ball in the probed bin and exit
5:
if The probed bin has load that is different from those probed before then
6:
Place the ball in the least loaded bin (breaking ties arbitrarily) and exit
7: Place the ball in the last probed bin
As we can see, the manner in which a ball can be placed using FirstDiff[d] can be
classified as follows:
1. The first probe was made to a bin with load zero.
2. All probes were made to bins of the same load.
3. One or more probes were made to bins of larger load followed by a probe to
a bin of lesser load.
4. One or more probes were made to bins of lesser load followed by a probe to
a bin of larger load.
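For concreteness, below is a small, self-contained simulation of FirstDiff[d] next to Greedy[d] in the balls-in-bins model. It is an illustrative sketch rather than the experimental code used later in the paper; the probe cap 2^(2d/3) follows the m = n setting of Theorem 3.1, and ties are broken by taking the first least-loaded probed bin.

```python
import random

def greedy(n, m, d):
    """Greedy[d]: probe d uniform bins, place the ball in the least loaded one."""
    load = [0] * n
    for _ in range(m):
        probes = [random.randrange(n) for _ in range(d)]
        load[min(probes, key=lambda b: load[b])] += 1
    return max(load)

def first_diff(n, m, d):
    """FirstDiff[d] with probe cap 2^(2d/3): probe until a zero-load bin or a bin
    whose load differs from the first probe, then place in the least loaded probed bin."""
    cap = 2 ** (2 * d // 3)
    load = [0] * n
    total_probes = 0
    for _ in range(m):
        probed = []
        for _ in range(cap):
            b = random.randrange(n)
            probed.append(b)
            total_probes += 1
            if load[b] == 0 or load[b] != load[probed[0]]:
                break
        load[min(probed, key=lambda b: load[b])] += 1
    return max(load), total_probes / m

if __name__ == "__main__":
    n = 100_000
    print("Greedy[6]    max load:", greedy(n, n, 6))
    print("FirstDiff[6] max load, avg probes:", first_diff(n, n, 6))
```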
3. Analysis of FirstDiff[d] when m = n.
Theorem 3.1. Use FirstDiff[d], where the maximum number of probes allowed per ball is 2^{2d/3}, to allocate n balls into n bins. The average number of probes required per ball is at most d on expectation and w.h.p. Furthermore, the maximum load of any bin is at most log log n/(0.66d) + O(1) with high probability when d ≥ 4 and n ≥ max(2, n0), where n0 is the smallest value of n such that for all n > n0, 36 log n (72e log n/(5n))^4 ≤ 1/n^2.
Proof. First, we show that an upper bound on the average number of probes per
ball is d on expectation and w.h.p. Subsequently, we show that the maximum load at
the end of placing all n balls is as desired w.h.p.
3.1. Proof of Number of Probes.
Lemma 3.2. The number of probes required to place m = n balls into n bins using
FirstDiff[d], where the maximum number of probes allowed per ball is 2^{2d/3}, is at most nd
on expectation and with high probability when d ≥ 4.
Proof. Let k be the maximum number of probes allowed to be used by FirstDiff[d]
per ball, i.e. k = 2^{2d/3}. We show that the total number of probes required to place
all balls does not exceed 1.5n log k w.h.p. and thus nd probes are required to place
all balls.
Let the balls be indexed from 1 to n in the order in which they are placed. Our
analysis proceeds in two phases. For a value of T that will be fixed subsequently, the
first T + 1 balls are analyzed in the first phase and remaining balls are analyzed in
the second. Consider the ball indexed by t, 1 ≤ t ≤ n. Let Xt be the random variable
denoting the number of probes it takes for FirstDiff[d] to place ball t.
Phase One: t ≤ T + 1. We couple FirstDiff[d] with the related process that probes
until it finds a difference in bin loads or runs out of probes, without treating empty
bins as special; in other words, the FirstDiff[d] algorithm without lines 3 and 4. One
additional rule for the related process is that if an empty bin is probed first, then after
the process finishes probing, the ball will be placed in that first probed bin, i.e. the
empty bin. Note that this is a valid coupling; if an empty bin is probed then under
both FirstDiff[d] and this process the ball is placed in an empty bin, and if no empty
bin is probed the two processes are exactly the same. Let Yt be the number of probes
required by this related process to place ball t in the configuration where there are
t − 1 bins of load 1 and n − t + 1 bins of load 0. Notice that for any configuration
of balls in bins, Xt ≤ Yt ; furthermore, the configuration after placement under both
FirstDiff[d] and this new process is the same. You can see this by a simple sequence
of couplings.
First, choose some arbitrary configuration with nαi bins of size i for i = 0, 1, 2, . . ..
That configuration will be probed until bins of two different sizes are discovered,
i.e. until the set probed intersects two distinct αi and αj. Couple this with the configuration that has Σ_{i=1}^{n} nα_i bins of size 1 and the rest of size 0. This configuration
requires more probes than the original configuration; it continues until the set probed
intersects α0 and α6=0 . Second, note that the configuration with t − 1 bins of size 1
and n − t + 1 bins of size 0 requires even more probes than this one. This is because
restricting the bins to size 1 can only decrease the number of empty bins. Finally,
note that the ball’s placement under either FirstDiff[d] or this new process leads to
the same configuration at time t + 1 (up to isomorphism); if FirstDiff[d] places a ball
in an empty bin, so does this process.
We first derive the expected value of Yt . The expected number of probes used
by FirstDiff[d] is upper bounded by the expected number of probes until a size-0 bin
appears, i.e. the expected number of probes used by the FirstDiff[d] algorithm without
line 1. This is of course n/(n − t + 1). The overall expected number of probes for the
first T + 1 steps is
E[Y] = E[Σ_{t=1}^{T+1} Y_t]
     = Σ_{t=1}^{T+1} E[Y_t]
     ≤ Σ_{t=1}^{T+1} n/(n − t + 1)
     = n Σ_{i=n−T}^{n} 1/i
     ∼ n(log n − log(n − T))
     = n log(n/(n − T)).
Now we will find T such that the expected number of probes in phase one, E[Y ],
is n log k, i.e.
n log(n/(n − T)) = n log k.
Solving, we get T = n(1 − 1/k). Now, recall that we want a high probability bound
on the number of probes required to place each ball in Phase One when running
FirstDiff[d], i.e. Σ_{t=0}^{n(1−1/k)} X_t. Recall that X_t ≤ Y_t, and as such a high probability bound on Σ_{t=0}^{n(1−1/k)} Y_t suffices. We can now use Lemma 1.1 with Λ = 1.01, µ = n log k, and p∗ = 1/k.
Pr(Σ_{t=0}^{n(1−1/k)} X_t ≥ 1.01n log k) ≤ Pr(Σ_{t=0}^{n(1−1/k)} Y_t ≥ 1.01n log k)
    ≤ e^{−(1/k)·(n log k)·(1.01−1−ln 1.01)}
    ≤ O(1/n)   (since k ≪ n)
Phase Two: t > T + 1. Rather than analyzing in detail, we use the fact that the
number of probes for each ball is bounded by k, i.e. Xt ≤ k, ∀t > T + 1. So the
number of probes overall in this phase is at most
k(n − T − 1) = k(n − n(1 − 1/k) − 1) = n − k.
So the total number of probes w.h.p. is 1.01n log k + n − k ≤ 1.5n log k (when
k = 2^{2d/3} and d ≥ 4). When k = 2^{2d/3}, an upper bound on the number of probes to
place all n balls is nd probes on expectation and w.h.p., as desired.
3.2. Proof of Maximum Load.
Lemma 3.3. The maximum load in any bin after using FirstDiff[d], where the maximum number of probes allowed per ball is 2^{2d/3}, to allocate n balls into n bins is at most log log n/(0.66d) + O(1) with high probability when d ≥ 4 and n ≥ max(2, n0), where n0 is the smallest value of n such that for all n > n0, 36 log n (72e log n/(5n))^4 ≤ 1/n^2.
Proof. While the proof follows along the lines of the standard layered induction
argument [8, 14], we have to make a few non-trivial adaptations to fit our context
where the number of probes is not fixed.
Let k be the maximum number of probes allowed to be used by FirstDiff[d] per
ball, i.e. k = 2^{2d/3}. Define vi as the fraction of bins of load at least i after n balls are
placed. Define ui as the number of balls of height at least i after n balls are placed.
It is clear that vi ∗ n ≤ ui .
We wish to show that Pr(Max. load ≥ log log n/log k + γ) ≤ 1/n^c for some constants γ ≥ 1 and c ≥ 1. Set i∗ = log log n/log k + 11 and γ = 15. Equivalently, we wish to show that Pr(v_{i∗+4} > 0) ≤ 1/n^c for some constant c ≥ 1.
In order to aid us in this proof, let us define a non-increasing series of numbers β11, β12, . . . , βi∗ as upper bounds on v11, v12, . . . , vi∗. Let us set β11 = 1/11.
Now,
Pr(v_{i∗+4} > 0) = Pr(v_{i∗+4} > 0 | v_{i∗} ≤ β_{i∗}) Pr(v_{i∗} ≤ β_{i∗}) + Pr(v_{i∗+4} > 0 | v_{i∗} > β_{i∗}) Pr(v_{i∗} > β_{i∗})
    ≤ Pr(v_{i∗+4} > 0 | v_{i∗} ≤ β_{i∗}) + Pr(v_{i∗} > β_{i∗})
    = Pr(v_{i∗+4} > 0 | v_{i∗} ≤ β_{i∗}) + Pr(v_{i∗} > β_{i∗} | v_{i∗−1} ≤ β_{i∗−1}) Pr(v_{i∗−1} ≤ β_{i∗−1}) + Pr(v_{i∗} > β_{i∗} | v_{i∗−1} > β_{i∗−1}) Pr(v_{i∗−1} > β_{i∗−1})
(1) ≤ Pr(v_{i∗+4} > 0 | v_{i∗} ≤ β_{i∗}) + Σ_{i=12}^{i∗} Pr(v_i > β_i | v_{i−1} ≤ β_{i−1}) + Pr(v_11 > β_11)
Here, Pr(v11 > β11 ) = 0. It remains to find upper bounds for the remaining two
terms in the above equation.
We now derive a recursive relationship between the βi ’s for i ≥ 11. βi+1 acts as
an upper bound for the fraction of bins of height at least i + 1 after n balls are placed.
In order for a ball placed to land up at height at least i + 1, one of 3 conditions must
occur:
• All k probes are made to bins of height at least i.
• Several probes are made to bins of height at least i and one is made to a bin
of height at least i + 1.
• One probe is made to a bin of height at least i and several probes are made
to bins of height at least i + 1.
Thus the probability that a ball is placed at height at least i + 1, conditioning on
vj ≤ βj for j ≤ i + 1 at that time, is
≤ β_i^k + β_i β_{i+1}(1 + β_i + β_i^2 + . . . + β_i^{k−2}) + β_i β_{i+1}(1 + β_{i+1} + β_{i+1}^2 + . . . + β_{i+1}^{k−2})
≤ β_i^k + β_i β_{i+1}((1 − β_i^{k−1})/(1 − β_i) + (1 − β_{i+1}^{k−1})/(1 − β_{i+1}))
≤ β_i^k + β_11 β_{i+1} · 2 · 1/(1 − β_11)
≤ β_i^k + 2β_{i+1}/10
Let v_{i+1}(t) be the fraction of bins with load at least i + 1 after the t-th ball (1 ≤ t ≤ n) is placed in a bin. Let t∗ = min[argmin_t v_{i+1}(t) > β_{i+1}, n], i.e. t∗ is the first t such that v_{i+1}(t) > β_{i+1}, or n if there is no such t. The probability that t∗ < n is bounded by the probability that a binomial random variable B(n, β_i^k + 2β_{i+1}/10) is greater than β_{i+1} n.
Fix β_{i+1} = (10/3) β_i^k ≥ 2n(β_i^k + 2β_{i+1}/10)/n. Then using a Chernoff bound, we can say that with high probability, t∗ = n or v_{i+1} ≤ β_{i+1}, so long as e^{−n(β_i^k + 2β_{i+1}/10)/3} = O(1/n^c) for some constant c ≥ 1.
Now, so long as β_{i+1} ≥ 18 log n/n, e^{−n(β_i^k + 2β_{i+1}/10)/3} = O(1/n^c). Notice that at i = i∗, the value of β_i dips below 18 log n/n. This can be seen by solving the recurrence with log β_{11} = −log 11 and log β_{i+1} = log(10/3) + k log β_i:

log β_{i∗} = log(10/3)(1 + k + k² + · · · + k^{log_k log n − 1}) + k^{log_k log n}(−log 11)
= log(10/3) · (k^{log_k log n} − 1)/(k − 1) − (log n)(log 11)
≤ (log n)(log(10/3) − log 11)
≤ −1.7 log n

Therefore, as it is, β_{i∗} ≤ 1/n^{1.7} ≤ 18 log n/n when n ≥ 2. In order to keep the value of β_i at least at 18 log n/n, we set

β_{i+1} = max((10/3) β_i^k, 18 log n/n).    (2)
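To illustrate how quickly this recurrence collapses to its floor, the following Python fragment (our own illustration, not part of the paper; the natural logarithm stands in for log, and the sample values of n and d are arbitrary) iterates β_{i+1} = max((10/3)β_i^k, 18 log n/n) starting from β_{11} = 1/11.

import math

n, d = 2 ** 20, 6
k = 2 ** (2 * d / 3)          # probe budget of FirstDiff[d] in this section
floor = 18 * math.log(n) / n
beta, i = 1.0 / 11, 11
while beta > floor:
    beta = max((10.0 / 3) * beta ** k, floor)
    i += 1
print(f"beta_i reaches the 18*log(n)/n floor at i = {i} (floor = {floor:.3e})")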
With the values of βi defined, we proceed to bound Pr(vi > βi |vi−1 ≤ βi−1 ), ∀12 ≤
i ≤ i∗ . For a given i,
Pr(vi > βi |vi−1 ≤ βi−1 ) = Pr(nvi > nβi |vi−1 ≤ βi−1 )
≤ Pr(ui > nβi |vi−1 ≤ βi−1 )
We upper bound the above inequality using the following idea. Let Y_r be an indicator variable set to 1 when the following two conditions are met: (i) the r-th ball placed is of height at least i, and (ii) v_{i−1} ≤ β_{i−1}. Y_r is set to 0 otherwise. Now, for all 1 ≤ r ≤ n, the probability that Y_r = 1 is upper bounded by β_{i−1}^k + (2/10)β_i ≤ (3/10)β_i + (2/10)β_i ≤ β_i/2. Therefore, the probability that the number of balls of height at least i exceeds nβ_i is upper bounded by Pr(B(n, β_i/2) > nβ_i), where B(·, ·) is a binomial random variable with given parameters.
Recall the Chernoff bound: for 0 < δ ≤ 1, Pr(X ≥ (1 + δ)µ) ≤ e^{−µδ²/3}, where X is the sum of independent Poisson trials and µ is the expectation of X. If we set δ = 1, then we have

Pr(v_i > β_i | v_{i−1} ≤ β_{i−1}) ≤ Pr(B(n, β_i/2) > nβ_i)
≤ e^{−n(β_i/2)/3}
≤ e^{−n(18 log n/n)/6}    (since β_i ≥ 18 log n/n, ∀i ≤ i∗)
≤ 1/n³
Thus we have

Σ_{j=12}^{i∗} Pr(v_j > β_j | v_{j−1} ≤ β_{j−1}) ≤ log log n / n³
⟹ Σ_{j=12}^{i∗} Pr(v_j > β_j | v_{j−1} ≤ β_{j−1}) ≤ 1/(2n²)    (since n ≥ 2)    (3)
Finally, we need to upper bound Pr(v_{i∗+4} > 0 | v_{i∗} ≤ β_{i∗}). Consider a particular bin of load at least i∗. Now the probability that a ball will fall into that bin is

≤ (1/n)·(β_{i∗}^{k−1} + 2β_{i∗+1}·(1/(1 − β_{i∗+1})))
≤ (1/n)·(β_{i∗}^{k−1} + 2β_{i∗+1}·(1/(1 − β_{11})))
≤ (1/n)·(β_{i∗}^{k−1} + (22/10)β_{i∗})    (since β_i is a non-increasing function)
≤ (1/n)·(32/10)·β_{i∗}    (since k ≥ 2 and β_{i∗} ≤ 1)

Now, we upper bound the probability that 4 balls fall into a given bin of load at least i∗ and then use a union bound over all the bins of load at least i∗ to show that the probability that the fraction of bins of load at least i∗ + 4 exceeds 0 is negligible. First, the probability that 4 balls fall into a given bin of load at least i∗ is

≤ Pr(B(n, (1/n)·(32/10)·β_{i∗}) ≥ 4)
≤ (n choose 4)·((1/n)·(32/10)·β_{i∗})^4
≤ (e·n·(1/n)·(32/10)·β_{i∗}·(1/4))^4
≤ ((32/10)·(e β_{i∗}/4))^4
Taking the union bound across all possible β_{i∗} n bins, we have the following inequality:

Pr(v_{i∗+4} > 0 | v_{i∗} ≤ β_{i∗}) ≤ (β_{i∗} n) · ((32/10)·(e β_{i∗}/4))^4
⟹ Pr(v_{i∗+4} > 0 | v_{i∗} ≤ β_{i∗}) ≤ (18 log n) · ((32/10)·(18e log n/(4n)))^4
⟹ Pr(v_{i∗+4} > 0 | v_{i∗} ≤ β_{i∗}) ≤ 1/(2n²)    (since n ≥ n0)    (4)

Putting together equations (1), (3), and (4), we get

Pr(v_{i∗+4} > 0) ≤ 1/(2n²) + 1/(2n²) ≤ 1/n².

Thus

Pr(Max. Load ≥ log log n/log k + 15) = Pr(v_{i∗+4} > 0) ≤ 1/n².
From Lemma 3.2 and Lemma 3.3, we immediately arrive at Theorem 3.1.
4. Analysis of FirstDiff[d] when m ≫ n.
Theorem 4.1. Use FirstDiff[d], where the maximum number of probes allowed per ball is 2^{d/2.17}, to allocate m balls into n bins. When m ≥ 72(nλ log n + n) where λ is taken from Lemma 4.5, n ≥ n0 where n0 is the smallest value of n that satisfies 0.00332n(λ log n + 1)d/2^{d/2.17} ≥ log n, and d ≥ 6, it takes at most d probes on average to place every ball on expectation and with high probability. Furthermore, for an absolute constant c,

Pr(Max. load of any bin > m/n + log log n/(0.46d) + c log log log n) ≤ c(log log n)^{−4}.
Proof. First we show that the average number of probes per ball is at most d on
expectation and w.h.p. We then show the maximum load bound holds w.h.p.
4.1. Proof of Number of Probes.
Remark: The earlier version of this paper [1] had a different proof in this subsection.
The overall idea of overcounting the number of probes remains the same but the
specific argument of how to justify and go about such an overcounting is changed and
cleaner now. More specifically, we have replaced Lemmas 4.2, 4.3, and 4.4 with the
argument that follows the header “Overcounting method” and concludes at the header
“Expectation bound”. Also note that Lemma 4.6 from our earlier version is no longer
required due to the way we’ve constructed our argument.
The main difficulty with analyzing the number of probes comes from the fact that
the number of probes needed for each ball depends on where each of the previous balls
were placed. Intuitively, if all the previous balls were placed such that each bin has
the same number of balls, the number of probes will be 2^{d/2.17}. On the other hand, if a
significant number of bins are at different load levels, then, the ball will be placed with
very few probes. One might hope to prove that the system always displays a variety of
12
J. AUGUSTINE, W. K. MOSES JR., A. REDLICH, AND E. UPFAL
loads, but unfortunately, the system (as we verified experimentally) oscillates between
being very evenly loaded and otherwise. Therefore, we have to take a slightly more
nuanced approach that takes into account that the number of probes cycles between
high (i.e. as high as 2^{d/2.17}) when the loads are even and as low as 2 when there is
more variety in the load.
Lemma 4.2. When m ≥ 72(nλ log n + n) where λ is taken from Lemma 4.5, n ≥ n0 where n0 is the smallest value of n that satisfies 0.00332n(λ log n + 1)d/2^{d/2.17} ≥ log n, and d ≥ 6, using FirstDiff[d], where the maximum number of probes allowed per ball is 2^{d/2.17}, takes at most md probes on expectation and with high probability to place the m balls in n bins.
Proof. Let the maximum number of probes allowed per ball using FirstDiff[d] be
k, i.e. k = 2^{d/2.17}. Throughout this proof, we will assume that the maximum load
of any bin is at most m/n + λ log n, which holds with high probability owing to
Lemma 4.5. The low probability event that the maximum load exceeds m/n + λ log n
will contribute very little to the overall number of probes because the probability
that any ball exceeds a height of m/n + λ log n will be an arbitrarily small inverse
polynomial in n. Therefore, such a ball will contribute o(1) probes to the overall
number of probes even when we liberally account k probes for each such ball (as long
as k ≪ n). Let m balls be placed into n bins; we assume m ≥ 72(nλ log n + n).
In order to prove the lemma, we proceed in three stages. In the first stage, we
consider an arbitrary sequence of placing m balls into the n bins. We develop a
method that allows us to overcount the number of probes required to place the ball at
each step of this placement. In the second stage, we proceed to calculate the expected
number of probes required to place the balls. Finally in the third stage, we show how
to get a high probability bound on the number of probes required to place each ball.
Overcounting method
First, we couple the process of FirstDiff[d] with a similar process where the zero bin
condition to place balls is not used. In this similar process, if a zero bin is probed first,
more probes are made until either a bin with a different load is probed or until all k
probes are made. Then, the ball is placed in the first bin probed (the zero bin). In
case the first bin probed is not a zero bin, then the process acts exactly as FirstDiff[d].
It is clear that this process will take more probes than FirstDiff[d] while still making
the same placements as FirstDiff[d]. Thus, any upper bounds on number of probes
obtained for this process apply to FirstDiff[d]. For the remainder of this proof, we
analyze this process.
Now we describe the method we use to overcount the number of probes. Consider
an arbitrary sequence of placing m balls into n bins. We will describe a method
to associate each configuration that arises from such a placement with a “canonical”
configuration, which we define later in this proof. Each such canonical configuration
requires more probes to place the ball than the actual configuration. We ensure
that the mapping of actual configurations to canonical configurations is a one-to-one
mapping. Thus, by counting the number of probes required to place a ball in every
possible canonical configuration, we overcount the number of probes required to place
every ball.
Imagine coupling according to probe sequences. That is, consider all [n]^k × [n]^k × · · · × [n]^k = [n]^{km} possible sequences of probes. Each timestep corresponds to a
particular length-k sequence of bin labels [n], which direct the bins to be probed.
Note that the probe sequences are each equally likely, and that they fully determine
the placement.
For each probe sequence S, let X S be the sequence of configurations generated by
S. Let xS be the sequence of numbers-of-probes-used at each timestep of X S . (Note
that S gives k potential probes at each timestep, but the entries of xS are often less
than k, as often the ball is placed without using the maximum number of probes.)
For convenience, we drop the superscript S subsequently and use X and x to refer to
the vectors.
Now we give a few definitions. Consider a particular bin with balls placed in
it, one on top of another. If exactly ℓ − 1 balls are placed below a given ball, we
say that ball is at height ℓ or level ℓ. Balls of the same height are said to be of the
same level. We say a level contains b balls if there are b balls at that level. A level
containing n balls is said to be complete. A level containing at least one ball but less
than n balls is said to be incomplete. For a given configuration, consider the highest
complete level ℓ. For the given configuration, we define a plateau as any level ≥ ℓ
with at least one ball at that level. Intuitively, the plateaus for the configuration are
the highest complete level and any higher incomplete levels. Notice that it is possible
that a given configuration may only have one plateau when there are no incomplete
levels. Further, notice that for a given configuration if two plateaus exist at levels ℓ
and ℓ + 2 with number of balls b1 and b2 , it implies that there exists a plateau at level
ℓ + 1 with number of balls in [b2 , b1 ].
Consider a particular configuration Xi in the sequence X. Call its number of
plateaus p. Consider the plateaus in increasing order of their levels and call them
ℓ1 , ℓ2 , . . . ℓp . Call the number of balls at each level ℓi , bi .
We now define the canonical configuration Cℓ,b . This is the configuration with no
balls at level greater than ℓ, b balls at level ℓ, and n balls at every level less than ℓ.
We will associate each configuration Xi with a set of canonical configurations Ci .
For each plateau of Xi at level ℓj , include the canonical configuration Cℓj ,bj in the
set.
Note that for any probe sequence (not just the specific sequence S), the number of
probes utilized by Xi (e.g. xi for the sequence S) is less than or equal to the number
of probes utilized by any of the configurations in Ci . Therefore the expected number
of probes used to place a ball in configuration Xi is less than or equal to the expected
number of probes used to place a ball in any of the configurations in Ci . We now
describe a way to select one particular configuration out of Ci to associate with each
configuration Xi . We choose this configuration such that the mapping between all
configurations Xi, 1 ≤ i ≤ m, and the selected canonical configurations will be a one-to-one mapping. Furthermore, we show that every selected canonical configuration
will be unique from the others and thus the set of all selected canonical configurations
will not be a multiset. Thus, by counting the number of probes required to place balls
in every possible canonical configuration, we overcount the number of probes required
to place balls in any sequence S.
Look at the canonical configurations associated with configurations Xi , 1 ≤ i ≤ m
over some entire sequence. Let C0 refer to the set of canonical configurations before
any ball is placed and let it be the empty set. The set of canonical configurations Ci−1
differs from Ci in at most three configurations. Consider that i − 1 balls have been
placed and there are now p plateaus with levels ℓi , 1 ≤ i ≤ p and corresponding bi ,
1 ≤ i ≤ p values.
• If the i-th ball is placed at level ℓ_j, level ℓ_{j+1} exists (≥ 1 balls are present at level ℓ_{j+1}), and b_{j+1} ≠ n − 1, then C_i = (C_{i−1} \ {C_{ℓ_{j+1}, b_{j+1}}}) ∪ {C_{ℓ_{j+1}, b_{j+1}+1}}.
• If level ℓ_{j+1} exists and b_{j+1} = n − 1, then C_i = (C_{i−1} \ {C_{ℓ_j, b_j}, C_{ℓ_{j+1}, b_{j+1}}}) ∪ {C_{ℓ_{j+1}, n}}.
• If level ℓ_{j+1} does not exist, then C_i = C_{i−1} ∪ {C_{ℓ_j + 1, 1}}.
Notice that in every scenario, there is exactly one configuration that is added to
Ci−1 to get Ci . We denote the newly-added configuration as the selected canonical
configuration of Ci .
Now, for a given sequence of configurations Xi , 1 ≤ i ≤ m it is clear that each
selected canonical configuration is uniquely chosen. We show in the following lemma,
Lemma 4.3, that every selected canonical configuration in the sequence is unique and
different from the other selected canonical configurations. In other words, the set of
selected canonical configurations will not be a multiset. Furthermore, it has earlier
been established that for any configuration and one of its canonical configurations, it
takes at least as many probes to place a ball in the latter as it does in the former.
Thus, by calculating the number of probes it would take to place balls in all possible canonical configurations ∪_{i=1}^{m} C_i, we can overcount the number of probes required to place all balls into bins.
Lemma 4.3. The set of selected canonical configurations for any sequence of configurations Xi , 1 ≤ i ≤ m will not be a multiset.
Proof. Consider an arbitrary sequence and within it an arbitrary configuration Xi
for some 1 ≤ i ≤ m. To reach this configuration, a ball was placed previously in some
level ℓ − 1 and extended level ℓ from b balls to b + 1 balls, 0 ≤ b ≤ n − 1. The selected
canonical configuration for Xi will be Cℓ,b+1 . Since balls can only be added and never
deleted, once a level is extended to some b + 1 number of balls, placing another ball
can never extend that same level to b + 1 balls. That level can only be henceforth
extended to a larger number of balls up to n balls. Thus a given configuration Cℓ,b+1
will never appear twice in the set of selected canonical configurations.
Expectation bound
As mentioned earlier, we assume the maximum load of any bin is m/n+ λ log n. Thus,
for any given sequence of placements, the final configuration never has any balls at
level m/n + λ log n + 1. Thus, when calculating the number of probes taken to place
all balls, we need only consider the number of probes required to place a ball in every
canonical configuration Cℓ,b , 0 ≤ ℓ ≤ m/n + λ log n, 0 ≤ b ≤ n − 1.
For a given canonical configuration Cℓ,b let Yℓ,b be a random variable denoting
the number of probes required to place the ball using FirstDiff[d] without the zero
bin condition. Let Y be a random variable denoting the total number of probes
required to place a ball in each of the possible canonical configurations. Thus Y = Σ_{ℓ=1}^{m/n + λ log n} Σ_{b=0}^{n−1} Y_{ℓ,b}.
For a given configuration Cℓ,b , a ball is placed when either it first hits bins of level
ℓ several times and then a bin of level ℓ − 1, it hits bins of level ℓ − 1 several times and
then a bin of level ℓ, or it makes k probes. Thus, using geometric random variables,
we see that E[Y_{ℓ,b}] = min(b/(n − b) + (n − b)/b, k). For the first n/k and last n/k − 1 canonical configurations for a given level, let us give away the maximum number of probes, i.e. Y_{ℓ,b} = k for any ℓ and for 0 ≤ b ≤ n/k − 1 and n − n/k + 1 ≤ b ≤ n − 1.
We now want to calculate the expected number of probes for the middle canonical
configurations.
Therefore

Σ_{ℓ=0}^{m/n+λ log n} Σ_{b=n/k}^{n−n/k} E[Y_{ℓ,b}]
= (m/n + λ log n + 1)(Σ_{b=n/k}^{n−n/k} (n − b)/b + Σ_{b=n/k}^{n−n/k} b/(n − b))
= (m/n + λ log n + 1)(n Σ_{b=n/k}^{n−n/k} 1/b − Σ_{b=n/k}^{n−n/k} 1 + n Σ_{b=n/k}^{n−n/k} 1/(n − b) − Σ_{b=n/k}^{n−n/k} 1)
= (m/n + λ log n + 1)(n Σ_{b=n/k}^{n−n/k} 1/b + n Σ_{y=n/k}^{n−n/k} 1/y − 2n + 4n/k)
= (m/n + λ log n + 1) · 2n (Σ_{b=n/k}^{n−n/k} 1/b − 1 + 2/k)
≈ 2(m + nλ log n + n)(log(n − n/k) − log(n/k) − 1 + 2/k)
= 2(m + nλ log n + n)(log(k − 1) − 1 + 2/k)
High probability bound
Now, we may apply Lemma 1.1 with Λ = 1.01, µ taken from above, and p∗ = 1/k:

Pr(Σ_{ℓ=0}^{m/n+λ log n} Σ_{b=n/k}^{n−n/k} Y_{ℓ,b} > 2.02(m + nλ log n + n)(log(k − 1) − 1 + 2/k))
≤ e^{−(1/k)·2(m+nλ log n+n)(log(k−1)−1+2/k)·(1.01−1−ln 1.01)}
≤ O(1/n)    (since k > 2 and n ≥ n0)
Therefore, with high probability, the total number of probes

Y = Σ_{ℓ=0}^{m/n+λ log n} Σ_{b=0}^{n−1} Y_{ℓ,b}
≤ Σ_{ℓ=0}^{m/n+λ log n} Σ_{b=0}^{n/k−1} Y_{ℓ,b} + Σ_{ℓ=0}^{m/n+λ log n} Σ_{b=n/k}^{n−n/k} Y_{ℓ,b} + Σ_{ℓ=0}^{m/n+λ log n} Σ_{b=n−n/k+1}^{n−1} Y_{ℓ,b}
≤ (m/n + λ log n + 1)·k·(n/k) + 2.02(m + nλ log n + n)(log(k − 1) − 1 + 2/k) + (m/n + λ log n + 1)·k·(n/k − 1)
≤ 2.14(m + nλ log n + n) log k
≤ 2.17m log k    (since m ≥ 72(nλ log n + n))
Thus when m balls are placed into n bins, an upper bound on both the expected
total probes and the total probes with high probability is 2.17m log k. Therefore on
expectation and with high probability, the number of probes per ball is at most d
since k = 2^{d/2.17}.
4.2. Proof of Maximum Load.
Lemma 4.4. Use FirstDiff[d], where the maximum number of probes allowed per ball is 2^{d/2.17}, to allocate m balls into n bins. For any m, for an absolute constant c,

Pr(Max. load of any bin > m/n + log log n/(0.46d) + c log log log n) ≤ c(log log n)^{−4}.
Proof. This proof follows along the lines of that of Theorem 2 from [14]. In order
to prove Lemma 4.4, we make use of a theorem from [10] which gives us an initial,
loose, bound on the gap Gt between the maximum load and average load for an
arbitrary m. We then use a lemma to tighten this gap. We use one final lemma to
show that if this bound on the gap holds after all m balls are placed, then it will hold
at any time prior to that.
First, we establish some notation. Let k be the maximum number of probes permitted to be made per ball by FirstDiff[d], i.e. k = 2^{d/2.17}. After placing nt balls, let us define the load vector X^t as representing the difference between the load of each bin and the average load (as in [3, 10]). Without loss of generality we order the individual values of the vector in non-increasing order of load difference, i.e. X_1^t ≥ X_2^t ≥ · · · ≥ X_n^t. So X_i^t is the load in the i-th most loaded bin minus t. For convenience, denote X_1^t (i.e. the gap between the heaviest load and the average) as G^t.
Initial bound on gap
We now give an upper bound for the gap between the maximum loaded bin and the average load after placing some arbitrary number of balls nt. In other words, we show Pr(G^t ≥ x) is negligible for some x. This x will be our initial bound on the gap G^t.

Lemma 4.5. For arbitrary constant c, after placing an arbitrary nt balls into bins under FirstDiff[d], there exist some constants a and b such that Pr(G^t ≥ c log n/a) ≤ bn/n^c. Thus there exists a constant λ that gives Pr(G^t ≥ λ log n) ≤ 1/n^c for a desired c value.
In order to prove Lemma 4.5, we need two additional facts. The first is the following
basic observation:
Lemma 4.6. FirstDiff[d] is majorized by Greedy[2] when d ≥ 2.
Proof. Let the load vectors for FirstDiff[d] and Greedy[2] after t balls have been
placed using the respective algorithms be ut and v t respectively. Now we follow
the standard coupling argument (refer to Section 5 in [3] for an example). Couple
FirstDiff[d] with Greedy[2] by letting the bins probed by Greedy[2] be the first 2 bins
probed by FirstDiff[d]. We know that FirstDiff[d] makes at least 2 probes when d ≥ 2.
It is clear that FirstDiff[d] will always place a ball in a bin with load less than or equal
to that of the bin chosen by Greedy[2]. This ensures that if majorization was preserved
prior to the placement of the ball, then the new load vectors will continue to preserve
majorization; again, see [3] for a detailed example. Initially, u0 is majorized by v 0
since both vectors are the same. Using induction, it can be seen that if ut is majorized
by v t at the time the tth ball was placed, it would continue to be majorized at time
t + 1, 0 ≤ t ≤ m − 1. Therefore, FirstDiff[d] is majorized by Greedy[2] when d ≥ 2.
The other fact is the following theorem about Greedy[d] taken from [10] (used
similarly in [14] as Theorem 3).
Theorem 4.7 ([10]). Let Y^t be the load vector generated by Greedy[d]. Then for every d > 1 there exist positive constants a and b such that for all n and all t, E[Σ_i e^{a|Y_i^t|}] ≤ bn.
We are now ready to prove Lemma 4.5.
Proof of Lemma 4.5. Combining Lemma 4.6 with Theorem 4.7 tells us that, if X^t is the load vector generated by FirstDiff[d],

E[Σ_i e^{a|X_i^t|}] ≤ bn.

Clearly, Pr(G^t ≥ c log n/a) = Pr(e^{aG^t} ≥ n^c). Observe that Σ_i e^{a|X_i^t|} ≥ e^{aG^t}. Then

Pr(G^t ≥ c log n/a) = Pr(e^{aG^t} ≥ n^c)
≤ E[e^{aG^t}]/n^c    (by Markov's inequality)
≤ bn/n^c    (by Theorem 4.7 and Lemma 4.6)

and the lemma is proved.
Reducing the gap
Lemma 4.5 gives an initial bound on G^t of order log n. The next step is to reduce it to our desired gap value. For this reduction, we use a modified version of Lemma 2 from [14], with a similar but more involved proof. We now give the modified lemma and prove it.

Lemma 4.8. For every k, there exists a universal constant γ such that the following holds: for any t, ℓ, L such that 1 ≤ ℓ ≤ L ≤ n^{1/4}, L = Ω(log log n) and Pr(G^t ≥ L) ≤ 1/2,

Pr(G^{t+L} ≥ log log n/log k + ℓ + γ) ≤ Pr(G^t ≥ L) + 16bL³/e^{aℓ} + 1/n²,

where a and b are the constants from Theorem 4.7.
Proof. This proof consists of many steps. We first observe that Lemma 4.8 follows directly from Lemma 4.5 for sufficiently small n. We then use layered induction
to bound the proportion of bins of each size for larger n. This in turn allows us to
compute our desired bound on the probability of a large gap occurring.
Proof of Lemma 4.8 for smaller values of n
Define n1 to be the minimum value of n such that L ≥ 2 (recall L = Ω(log log n)). Define n2 to be the minimum value of n such that (18 log n)·(18e n^{1/4} log n/n)^4 ≤ 1/(2n²). Define n3 to be the minimum value of n such that n ≥ 54 log n. Define the absolute constant n0 = max(n1, n2, n3).
Notice that, when n ≤ n0, Lemma 4.5 implies that Lemma 4.8 holds with γ = O(log n0). If n ≤ n0, then

Pr(G^{t+L} ≥ log log n/log k + ℓ + γ) ≤ Pr(G^{t+L} ≥ γ).

Consider the right hand side of Lemma 4.8:

Pr(G^t ≥ L) + 16bL³/exp(aℓ) + 1/n² ≥ n^{−2},

so it will be sufficient to prove the inequality

Pr(G^{t+L} ≥ γ) ≤ n^{−2}.
Since there are no conditions on t in Lemma 4.5, we may rewrite it as

Pr(G^{t+L} ≥ λ log n) ≤ n^{−c}.

Let c = 2 and compute the constant λ accordingly. Set γ = λ log n0 ≥ λ log n. Then

n^{−2} ≥ Pr(G^{t+L} ≥ λ log n) ≥ Pr(G^{t+L} ≥ γ),

and we are done.
Rewriting initial probability inequality
We now prove Lemma 4.8 assuming n > n0. Start by rewriting the probability in terms of Pr(G^t ≥ L):

Pr(G^{t+L} ≥ log log n/log k + ℓ + γ) = Pr(G^{t+L} ≥ log log n/log k + ℓ + γ | G^t ≥ L) Pr(G^t ≥ L) + Pr(G^{t+L} ≥ log log n/log k + ℓ + γ | G^t < L) Pr(G^t < L)
≤ Pr(G^t ≥ L) + Pr(G^{t+L} ≥ log log n/log k + ℓ + γ | G^t < L)

To prove the lemma, then, it is enough to show that Pr(G^{t+L} ≥ log log n/log k + ℓ + γ | G^t < L) ≤ 16bL³/e^{aℓ} + 1/n².
Bins’ loads
Define v_i to be the fraction of bins of load at least t + L + i after (t + L)n balls are placed. Let us set i∗ = log log n/log k + ℓ and set γ = 4. Using this new notation, we want to show that Pr(G^{t+L} ≥ i∗ + 4 | G^t < L) is negligible. This can be thought of as showing that the probability of the fraction of bins of load at least t + L + i∗ + 4 exceeding 0 after (t + L)n balls are placed, conditioned on the event that G^t < L, is negligible.
Suppose we have a non-increasing series of numbers β_0, β_1, . . . , β_i, . . . that are upper bounds for v_0, v_1, . . . , v_i, . . .. Then we know that

Pr(v_{i∗+4} > 0) = Pr(v_{i∗+4} > 0 | v_{i∗} ≤ β_{i∗}) Pr(v_{i∗} ≤ β_{i∗}) + Pr(v_{i∗+4} > 0 | v_{i∗} > β_{i∗}) Pr(v_{i∗} > β_{i∗})
≤ Pr(v_{i∗+4} > 0 | v_{i∗} ≤ β_{i∗}) + Pr(v_{i∗} > β_{i∗})
≤ Pr(v_{i∗+4} > 0 | v_{i∗} ≤ β_{i∗}) + Σ_{j=ℓ+1}^{i∗} Pr(v_j > β_j | v_{j−1} ≤ β_{j−1}) + Pr(v_ℓ > β_ℓ)    (successively expanding and bounding the Pr(v_{i∗} > β_{i∗}) term and its derivatives)

Conditioning both sides on G^t < L, we have

Pr(v_{i∗+4} > 0 | G^t < L) ≤ Pr(v_{i∗+4} > 0 | v_{i∗} ≤ β_{i∗}, G^t < L) + Σ_{i=ℓ+1}^{i∗} Pr(v_i > β_i | v_{i−1} ≤ β_{i−1}, G^t < L) + Pr(v_ℓ > β_ℓ | G^t < L)    (5)
It remains to find appropriate βi values. We use a layered induction approach to show
that vi ’s don’t exceed the corresponding βi ’s with high probability. This then allows
us to upper bound each of the 3 components of equation 5.
Base case of layered induction
In order to use layered induction, we need a base case. Let us set β_ℓ = 1/(8L³), for the ℓ in the statement of the lemma. Now,

Pr(v_ℓ > β_ℓ | G^t < L) = Pr((v_ℓ > 1/(8L³)) ∩ (G^t < L)) / Pr(G^t < L)
≤ 2 · Pr(v_ℓ > 1/(8L³))    (since, by the statement of the lemma, Pr(G^t < L) ≥ 1/2)
≤ 2 · 8bL³/e^{aℓ}    (applying Markov's inequality and using Theorem 4.7)
≤ 16bL³/e^{aℓ}

Therefore we have the third term of Equation (5) bounded:

Pr(v_ℓ > β_ℓ | G^t < L) ≤ 16bL³/e^{aℓ}    (6)
Recurrence relation for layered induction
We now define the remaining βi values recursively. Note that for all i ≥ ℓ, βi ≤ βℓ .
Let ui be defined as the number of balls of height at least t + L + i after (L + t)n balls
are placed.
Initially there were nt balls in the system. Then we threw another nL balls into
the system. Remember that t+L is the average load of a bin after nL balls are further
placed. Because we condition on Gt < L, we have it that any ball of height i, i ≥ 1,
must have been one of the nL balls placed.
Therefore the number of bins of load t + L + i + 1 after (t + L)n balls are placed is
upper bounded by the number of balls of height at least t+ L + i + 1. So vi+1 n ≤ ui+1 .
In order to upper bound vi+1 , we can upper bound ui+1 .
Recall the algorithm places a ball in a bin of load t + L + i if it probes k times
and sees a bin of load t + L + i each time; or if it probes j < k times and sees a bin
of load t + L + i each time, then probes a bin of load ≥ t + L + i + 1; or if it probes
j < k times and sees a bin of load at least t + L + i + 1 each time (where the load of
the bin probed each time is the same), then probes a bin of load t + L + i. Thus the
probability that a ball will end up at height at least t + L + i + 1 is
k−2
2
≤ βik + βi βi+1 1 + βi + βi2 + . . . + βik−2 + βi βi+1 1 + βi+1 + βi+1
+ . . . + βi+1
!
k−1
1 − βi+1
1 − βik−1
k
≤ βi + βi βi+1
+
1 − βi
1 − βi+1
1
≤ βik + βl βi+1 2 ∗
1 − βl
2βi+1
≤ βik +
8L3 − 1
Let v_{i+1}(f) be the fraction of bins with load at least t + L + i + 1 after the (tn + f)-th ball, 1 ≤ f ≤ nL, is placed in a bin.
Let f∗ = min[argmin_f v_{i+1}(f) > β_{i+1}, nL], i.e. f∗ is the first f such that v_{i+1}(f) > β_{i+1}, or nL if there is no such f. By our preceding argument, the probability that f∗ < nL is bounded by the probability that a binomial random variable B(nL, β_i^k + 2β_{i+1}/(8L³ − 1)) is greater than β_{i+1} nL.
Fix

β_{i+1} = 2L·(8L³ − 1)/(8L³ − 4L − 1)·β_i^k ≥ 2nL(β_i^k + 2β_{i+1}/(8L³ − 1))/n.

Then using a Chernoff bound, we can say that with high probability, f∗ = nL or v_{i+1} ≤ β_{i+1}, so long as e^{−nL(β_i^k + 2β_{i+1}/(8L³−1))/3} = O(1/n^c) for some constant c ≥ 1.
Now, so long as β_{i+1} ≥ 18 log n/n, e^{−nL(β_i^k + 2β_{i+1}/(8L³−1))/3} = O(1/n^c). In other words, this upper bound holds for the placement of all nt + nL balls.
We now show that, according to the previous recurrence relation, β_{i∗} dips below 18 log n/n. We later propose a modified recurrence relation which sets the value of β_i to the maximum of the value obtained from the recurrence and 18 log n/n. This ensures that β_{i∗} = 18 log n/n. This upper bound will be used later in the argument. We have, from the value of β_ℓ and the above discussion,

log β_ℓ = −3 log(2L)  and  log β_{i+1} = k log β_i + log(2L) + log((8L³ − 1)/(8L³ − 4L − 1)).

Solving the recursion for log β_{ℓ+log_k log n}, we get

log β_{ℓ+log_k log n} = ((k^{log_k log n} − 1)/(k − 1)) log(2L(8L³ − 1)/(8L³ − 4L − 1)) − 3k^{log_k log n} log(2L)
≤ k^{log_k log n}((−3k + 4) log(2L) + log((8L³ − 1)/(8L³ − 4L − 1)))
≤ k^{log_k log n}((−6 + 4) log(2L) + log((8L³ − 1)/(8L³ − 4L − 1)))
≤ k^{log_k log n}((−1.5) log(2L))    (when L ≥ 2)
≤ 2^{log_2 log n}((−1.5) log(2L))
≤ (−1.5)(log n)

Therefore, β_{i∗} ≤ n^{−1.5} when L ≥ 2. Since n ≥ n1, we have L ≥ 2. Thus β_{i∗} < 18 log n/n, as desired.
Now, we need to bound Pr(v_i > β_i | v_{i−1} ≤ β_{i−1}, G^t < L) for all i from ℓ + 1 to i∗. Let us set β_{i+1} = max(2L·(8L³ − 1)/(8L³ − 4L − 1)·β_i^k, 18 log n/n).
Using the values of β_i generated above, we prove that for all i such that ℓ + 1 ≤ i ≤ i∗, Pr(v_i > β_i | v_{i−1} ≤ β_{i−1}, G^t < L) ≤ 1/n³.
For a given i,
Pr(vi > βi |vi−1 ≤ βi−1 , Gt < L) = Pr(nvi > nβi |vi−1 ≤ βi−1 , Gt < L)
≤ Pr(ui > nβi |vi−1 ≤ βi−1 , Gt < L)
We now upper bound the above inequality using the following idea. Let Y_r be an indicator variable set to 1 when all three of the following conditions are met: (i) the (nt + r)-th ball placed is of height at least t + L + i, (ii) v_{i−1} ≤ β_{i−1}, and (iii) G^t < L. Y_r is set to 0 otherwise. Now for all 1 ≤ r ≤ nL, the probability that Y_r = 1 is upper bounded by β_{i−1}^k + 2β_i/(8L³ − 1) ≤ ((8L³ − 1)/(8L³ − 4L − 1))β_{i−1}^k ≤ β_i/(2L). Since we condition on G^t < L, the balls of height at least t + L or more come only from the nL balls placed. Therefore, the probability that the number of balls of height at least t + L + i exceeds nβ_i is upper bounded by Pr(B(nL, β_i/(2L)) > nβ_i), where B(·, ·) is a binomial random variable with given parameters.
According to Chernoff's bound, for 0 < δ ≤ 1, Pr(X ≥ (1 + δ)µ) ≤ e^{−µδ²/3}, where X is the sum of independent Poisson trials and µ is the expectation of X. If we set δ = 1, then we have

Pr(v_i > β_i | v_{i−1} ≤ β_{i−1}, G^t < L) ≤ Pr(B(nL, β_i/(2L)) > nβ_i)
≤ e^{−n(β_i/2)/3}
≤ e^{−n(18 log n/n)/6}    (since β_i ≥ 18 log n/n, ∀i ≤ i∗)
≤ 1/n³
Thus we bound the middle term in Equation (5):

Σ_{j=ℓ+1}^{i∗} Pr(v_j > β_j | v_{j−1} ≤ β_{j−1}, G^t < L) ≤ log log n / n³
⟹ Σ_{j=ℓ+1}^{i∗} Pr(v_j > β_j | v_{j−1} ≤ β_{j−1}, G^t < L) ≤ 1/(2n²)    (since n ≥ n1)    (7)
Top layers of layered induction
Finally, we need to upper bound the first term in Equation 5,
Pr(vi∗ +4 > 0|vi∗ ≤ βi∗ , Gt < L). Consider a bin of load at least i∗ . We will upper
bound the probability that a ball falls into this specific bin. Regardless of how the
probes are made for that ball, one of them must be made to that specific bin. Thus
we have a formula similar to our original recursion, but with a factor of 1/n.
Therefore the probability that a ball will fall into that bin is

≤ (1/n)β_{i∗}^{k−1} + (1/n)β_{i∗+1}(1 + β_{i∗} + β_{i∗}² + · · · + β_{i∗}^{k−2}) + (1/n)β_{i∗+1}(1 + β_{i∗+1} + β_{i∗+1}² + · · · + β_{i∗+1}^{k−2})
≤ (1/n)·(β_{i∗}^{k−1} + 2β_{i∗+1}·1/(1 − β_{i∗+1}))
≤ (1/n)·(β_{i∗}^{k−1} + 2β_{i∗}·1/(1 − β_{i∗}))
≤ (1/n)·(β_{i∗}^{k−1} + 2n/(n − 18 log n)·β_{i∗})
≤ (1/n)·((3n − 18 log n)/(n − 18 log n))·β_{i∗}    (since k ≥ 2 and β_{i∗} ≤ 1)
≤ (4/n)·β_{i∗}    (since n > n3)
n
Now, we upper bound the probability that 4 balls fall into a given bin of load
at least βi∗ and then use a union bound over all the bins of height at least βi∗ to
show that the probability that the fraction of bins of load at least βi∗ +4 exceeds 0 is
negligible.
First, the probability that 4 balls fall into a given bin of load at least βi∗ is
4
≤ Pr(B(nL, ( · βi∗ )) ≥ 4)
n
4
nL
4
· βi∗
≤
n
4
4
4
1
≤ e · nL · ( · βi∗ ) ·
n
4
4
≤ (eLβi∗ )
Taking the union bound across all possible βi∗ n bins, we have the following inequality
4
Pr(vi∗ +4 > 0|vi∗ ≤ βi∗ , Gt < L) ≤ (βi∗ n) · (eLβi∗ )
4
18eL log n
t
∗
∗
∗
=⇒ Pr(vi +4 > 0|vi ≤ βi , G < L) ≤ (18 log n) ·
n
1
(since n ≥ n2 )
=⇒ Pr(vi∗ +4 > 0|vi∗ ≤ βi∗ , Gt < L) ≤
2n2
(8)
Putting together equations (5), (6), (7), and (8), we get

Pr(v_{i∗+4} > 0 | G^t < L) ≤ 16bL³/e^{aℓ} + 1/(2n²) + 1/(2n²)
≤ 16bL³/e^{aℓ} + 1/n²

Thus

Pr(G^{t+L} ≥ log log n/log k + ℓ + 4 | G^t < L) = Pr(v_{i∗+4} > 0 | G^t < L)
≤ 16bL³/e^{aℓ} + 1/n²

Finally

Pr(G^{t+L} ≥ log log n/log k + ℓ + 4) ≤ Pr(G^t ≥ L) + Pr(G^{t+L} ≥ log log n/log k + ℓ + 4 | G^t < L)
≤ Pr(G^t ≥ L) + 16bL³/e^{aℓ} + 1/n²

Hence Lemma 4.8 is proved.
By Lemma 4.5, we know that at some arbitrary time t, the gap will be O(log n) with high probability. Now applying Lemma 4.8 once with L = O(log n) and ℓ = O(log log n) with appropriately chosen constants, we get Pr(G^{t+L} ≥ log log n/log k + O(log log n) + γ) ≤ O((log log n)^{−4}). Applying the lemma again with L = O(log log n) and ℓ = O(log log log n) with appropriately chosen constants, we get Pr(G^t > log log n/log k + c log log log n) ≤ c/(log log n)^4 when time t = ω(log n).
We now show that as more balls are placed, the probability that the gap exceeds a particular value increases. This is true by Lemma 4 from [14]:

Lemma 4.9 ([14]). For t ≥ t′, G^{t′} is stochastically dominated by G^t. Thus E[G^{t′}] ≤ E[G^t] and, for every z, Pr(G^{t′} ≥ z) ≤ Pr(G^t ≥ z).

Although the setting is different in [14], their proof of Lemma 4.9 applies here as well. Thus, knowing that the gap exceeds the desired value with probability at most O((log log n)^{−4}) when time t = ω(log n) implies that for all values of t′ < t, the gap exceeds the desired value with at most the same probability. Substituting k = 2^{d/2.17} in Pr(G^t > log log n/log k + c log log log n) ≤ c/(log log n)^4 and restating the inequality in terms of the max. load after m balls have been thrown yields the lemma statement.
This concludes the proof of Lemma 4.4.
Putting together Lemma 4.2 and Lemma 4.4, we get Theorem 4.1.
5. Lower Bound on Maximum Bin Load. We now provide a lower bound on the maximum load of any bin after using FirstDiff[d], as well as after using other algorithms that use a variable number of probes and belong to the Class 1 type defined by Vöcking [15]. Class 1 algorithms are those where, for each ball, the locations are
chosen uniformly and independently at random from the bins available. We first give
a general theorem for this type of algorithm and then apply it to FirstDiff[d].
Theorem 5.1. Let Alg[k] be any algorithm that places m balls into n bins, where
m ≥ n, sequentially one by one and satisfies the following conditions:
1. At most k probes are used to place each ball.
2. For each ball, each probe is made uniformly at random to one of the n bins.
3. For each ball, each probe is independent of every other probe.
The maximum load of any bin after placing all m balls using Alg[k] is at least m/n + ln ln n/ln k − Θ(1) with high probability.
Proof. We show that Greedy[k] is majorized by Alg[k], i.e. Greedy[k] always performs better than Alg[k] in terms of load balancing. Thus any lower bound that
applies to the max. load of any bin after using Greedy[k] must also apply to Alg[k].
Let the load vectors for Greedy[k] and Alg[k] after t balls have been placed using
the respective algorithms be ut and v t respectively. We use induction on the number
of balls placed to prove our claim of majorization. Initially, no ball is placed and by
default u0 is majorized by v 0 . Assume that ut−1 is majorized by v t−1 . We now use the
standard coupling argument to prove the induction hypothesis. For the placement of
the t-th ball, let Alg[k] use w_t probes. Couple Greedy[k] with Alg[k] by letting the first w_t bins probed by Greedy[k] be the same bins probed by Alg[k]. Greedy[k] will always make at least w_t probes and thus possibly probes less heavily loaded bins than
those probed by Alg[k]. Since Greedy[k] places a ball into the least loaded bin it finds,
it will place a ball into a bin with load at most the same as the one chosen by Alg[k].
Therefore ut is majorized by v t . Thus by induction, we see that ut is majorized by v t
for all 0 ≤ t ≤ m. Therefore Greedy[k] is majorized by Alg[k].
It is known that the max. load of any bin after the placement of m balls into n bins (m ≥ n) using Greedy[k] is at least m/n + ln ln n/ln k − Θ(1) with high probability [3].
Therefore, the same lower bound also applies to Alg[k].
Now we are ready to prove our lower bound on the max. load of any bin after
using FirstDiff[d].
Theorem 5.2. The maximum load of any bin after placing m balls into n bins using FirstDiff[d], where the maximum number of probes allowed per ball is 2^{Θ(d)}, is at least m/n + ln ln n/Θ(d) − Θ(1) with high probability.
Proof. We see that FirstDiff[d] uses at most 2^{Θ(d)} probes and satisfies the requirements of Theorem 5.1. Thus by substituting k = 2^{Θ(d)}, we get the desired bound.
Table 1
Experimental results for the maximum load for n balls and n bins based on 100 experiments for
each configuration. Note that the maximum number of probes per ball in FirstDiff[d], denoted as k,
is chosen such that the average number of probes per ball is fewer than d.
n
28
Greedy[d]
2...11%
3...87%
4... 2%
d = 2, k = 3
Left[d]
FirstDiff[d]
2...43%
2...81%
3...57%
3...19%
Greedy[d]
2...88%
3...12%
d = 3, k = 10
Left[d]
FirstDiff[d]
2...100%
2...100%
Greedy[d]
2...100%
d = 4, k = 30
Left[d]
FirstDiff[d]
2...100%
2...100%
2...10%
3...90%
2...12%
3...88%
2...96%
3... 4%
2...100%
2...93%
3... 7%
2...100%
2...100%
2...100%
3...100
2...31%
3...69%
2...100%
3...100%
2...49%
3...51%
2...100%
3...98%
4... 2%
2...100%
2...100%
3...100%
3...100%
3...100%
4...100%
3...96%
4... 4%
2...100%
2...100%
3...100%
3...100%
3...100%
4...100%
3...37%
4...63%
212
3...99%
4... 1%
3...100%
216
3...63%
4...37%
2...100%
220
3...100%
2...100%
224
3...100%
6. Experimental Results. We experimentally compare the performance of
FirstDiff[d] with Left[d] and Greedy[d] in Table 1. Similar to the experimental results in [15], we run all three algorithms in different configurations of bins and d
values. Let k be the maximum number of probes allowed to be used by FirstDiff[d]
per ball. For each value of d ∈ [2, 4], we choose a corresponding value of k such
that the average number of probes required by each ball in FirstDiff[d] is at most d.
For each configuration, we run each algorithm 100 times and note the percentage of
times the maximum loaded bin had a particular value. It is of interest to note that
FirstDiff[d], despite using on average fewer than d probes per ball, appears to perform better than both Greedy[d] and Left[d] in terms of maximum load.
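As a rough illustration of the kind of experiment summarized in Table 1, the following Python sketch simulates Greedy[d] and our informal reading of the FirstDiff probing rule (probe bins uniformly at random until a probed bin's load differs from the first probe's load, stop early if an empty bin is probed first, cap at k probes, then place the ball in the least loaded bin probed). The exact stopping and tie-breaking details of the paper's algorithm are defined in its earlier sections, so this is only an approximate reproduction; k should be chosen per d as in Table 1 (e.g. k = 10 for d = 3).

import random

def greedy(n, m, d, rng=random):
    """Greedy[d]: probe d bins u.a.r., place the ball in the least loaded one."""
    loads = [0] * n
    for _ in range(m):
        probes = [rng.randrange(n) for _ in range(d)]
        loads[min(probes, key=lambda i: loads[i])] += 1
    return max(loads)

def first_diff(n, m, k, rng=random):
    """Our reading of the FirstDiff rule with probe budget k (illustrative only)."""
    loads = [0] * n
    total_probes = 0
    for _ in range(m):
        first = rng.randrange(n)
        probed = [first]
        total_probes += 1
        if loads[first] != 0:
            while len(probed) < k:
                j = rng.randrange(n)
                probed.append(j)
                total_probes += 1
                if loads[j] != loads[first]:
                    break
        loads[min(probed, key=lambda i: loads[i])] += 1
    return max(loads), total_probes / m

if __name__ == "__main__":
    n = 2 ** 12
    print("Greedy[2] max load:", greedy(n, n, 2))
    print("FirstDiff (k = 10) max load, avg probes:", first_diff(n, n, 10))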
7. Conclusions and Future Work. In this paper, we have introduced a novel
algorithm called FirstDiff[d] for the well-studied load balancing problem. This algorithm combines the benefits of two prominent algorithms, namely, Greedy[d] and
Left[d]. FirstDiff[d] generates a maximum load comparable to that of Left[d], while
being as fully decentralized as Greedy[d]. From another perspective, we observe that
FirstDiff[log d] and Greedy[d] result in a comparable maximum load, while the number
of probes used by FirstDiff[log d] is exponentially smaller than that of Greedy[d]. In
other words, we exhibit an algorithm that performs as well as an optimal algorithm, with significantly lower computational requirements. We believe that our work has
opened up a new family of algorithms that could prove to be quite useful in a variety
of contexts spanning both theory and practice.
A number of questions arise out of our work. From a theoretical perspective, we
are interested in developing a finer-grained analysis of the number of probes; experimental results suggest the number of probes used to place the ith ball depends on
the congruence class of i modulo n. From an applied perspective, we are interested in
understanding how FirstDiff[d] would play out in real world load balancing scenarios
like cloud computing, where the environment (i.e. the servers, their interconnections,
etc.) and the workload (jobs, applications, users, etc.) are likely to be a lot more
heterogeneous and dynamic.
Acknowledgements. We are thankful to Anant Nag for useful discussions and
developing a balls-in-bins library [9] that was helpful for our experiments. We are
also grateful to Thomas Sauerwald for his helpful thoughts when he visited Institute
for Computational and Experimental Research in Mathematics (ICERM) at Brown
University. Finally, John Augustine and Amanda Redlich are thankful to ICERM for
having hosted them as part of a semester long program.
REFERENCES
[1] J. Augustine, W. K. Moses Jr., A. Redlich, and E. Upfal, Balanced allocation: patience
is not a virtue, in Proceedings of the Twenty-Seventh Annual ACM-SIAM Symposium on
Discrete Algorithms, Society for Industrial and Applied Mathematics, 2016, pp. 655–671.
[2] Y. Azar, A. Z. Broder, A. R. Karlin, and E. Upfal, Balanced allocations, SIAM journal
on computing, 29 (1999), pp. 180–200.
[3] P. Berenbrink, A. Czumaj, A. Steger, and B. Vöcking, Balanced allocations: The heavily
loaded case, SIAM Journal on Computing, 35 (2006), pp. 1350–1385.
[4] P. Berenbrink, K. Khodamoradi, T. Sauerwald, and A. Stauffer, Balls-into-bins with
nearly optimal load distribution, in Proceedings of the 25th ACM symposium on Parallelism
in algorithms and architectures, ACM, 2013, pp. 326–335.
[5] A. Czumaj and V. Stemann, Randomized allocation processes, Random Structures & Algorithms, 18 (2001), pp. 297–331.
[6] S. Fu, C.-Z. Xu, and H. Shen, Randomized load balancing strategies with churn resilience in
peer-to-peer networks, Journal of Network and Computer Applications, 34 (2011), pp. 252–
261.
[7] S. Janson, Tail bounds for sums of geometric and exponential variables, Technical report,
(2014).
26
J. AUGUSTINE, W. K. MOSES JR., A. REDLICH, AND E. UPFAL
[8] M. Mitzenmacher and E. Upfal, Probability and computing: Randomized algorithms and
probabilistic analysis, Cambridge University Press, 2005.
[9] A. Nag, Problems in Balls and Bins Model, master’s thesis, Indian Institute of Technology
Madras, India, 2014.
[10] Y. Peres, K. Talwar, and U. Wieder, The (1+ β)-choice process and weighted balls-into-bins,
in Proceedings of the twenty-first annual ACM-SIAM symposium on Discrete Algorithms,
Society for Industrial and Applied Mathematics, 2010, pp. 1613–1619.
[11] M. Raab and A. Steger, “Balls into bins” – a simple and tight analysis, in Randomization and
Approximation Techniques in Computer Science, Springer, 1998, pp. 159–170.
[12] H. Shen and C.-Z. Xu, Locality-aware and churn-resilient load-balancing algorithms in structured peer-to-peer networks, Parallel and Distributed Systems, IEEE Transactions on, 18
(2007), pp. 849–862.
[13] X.-J. Shen, L. Liu, Z.-J. Zha, P.-Y. Gu, Z.-Q. Jiang, J.-M. Chen, and J. Panneerselvam,
Achieving dynamic load balancing through mobile agents in small world p2p networks,
Computer Networks, 75 (2014), pp. 134–148.
[14] K. Talwar and U. Wieder, Balanced allocations: A simple proof for the heavily loaded case,
in Automata, Languages, and Programming, Springer, 2014, pp. 979–990.
[15] B. Vöcking, How asymmetry helps load balancing, Journal of the ACM (JACM), 50 (2003),
pp. 568–589.
arXiv:1401.0062v4 [] 23 Jun 2016
Bernoulli 22(4), 2016, 2301–2324
DOI: 10.3150/15-BEJ729
The combinatorial structure of beta negative
binomial processes
CREIGHTON HEAUKULANI¹ and DANIEL M. ROY²
¹Department of Engineering, University of Cambridge, Trumpington Street, Cambridge, CB2 1PZ, United Kingdom. E-mail: [email protected]
²Department of Statistical Sciences, University of Toronto, 100 St. George Street, Toronto, ON, M5S 3G3, Canada. E-mail: [email protected]
We characterize the combinatorial structure of conditionally-i.i.d. sequences of negative binomial
processes with a common beta process base measure. In Bayesian nonparametric applications,
such processes have served as models for latent multisets of features underlying data. Analogously, random subsets arise from conditionally-i.i.d. sequences of Bernoulli processes with a
common beta process base measure, in which case the combinatorial structure is described by
the Indian buffet process. Our results give a count analogue of the Indian buffet process, which
we call a negative binomial Indian buffet process. As an intermediate step toward this goal,
we provide a construction for the beta negative binomial process that avoids a representation
of the underlying beta process base measure. We describe the key Markov kernels needed to
use a NB-IBP representation in a Markov Chain Monte Carlo algorithm targeting a posterior
distribution.
Keywords: Bayesian nonparametrics; Indian buffet process; latent feature models; multisets
1. Introduction
The focus of this article is on exchangeable sequences of multisets, that is, set-like objects
in which repetition is allowed. Let Ω be a complete, separable metric space equipped with
its Borel σ-algebra A and let Z+ := {0, 1, 2, . . .} denote the non-negative integers. By a
point process on (Ω, A), we mean a random measure X on (Ω, A) such that X(A) is a
Z+ -valued random variable for every A ∈ A. Because (Ω, A) is Borel, we may write
X = Σ_{k≤κ} δ_{γ_k}    (1.1)
for a random element κ in Z̄+ := Z+ ∪ {∞} and some – not necessarily distinct – random
elements γ1 , γ2 , . . . in Ω. We will take the point process X to represent the multiset of
its unique atoms γk with corresponding multiplicities X{γk }. We say X is simple when
X{γk } = 1 for all k ≤ κ, in which case X represents a set.
In statistical applications, latent feature models associate each data point yn in a
dataset with a latent point process Xn from an exchangeable sequence of simple point
processes, which we denote by (Xn )n∈N := (X1 , X2 , . . .). The unique atoms among the
sequence (Xn )n∈N are referred to as features, and a data point is said to possess those
features appearing in its associated point process. We can also view these latent feature
models as generalizations of mixture models that allow data points to belong to multiple,
potentially overlapping clusters [2, 10]. For example, in an object recognition task, a
model for a dataset consisting of street camera images could associate each image with
a subset of object classes – for example, “trees”, “cars”, and “houses”, etc. – appearing
in the images. In a document modeling task, a model for a dataset of news articles could
associate each document with a subset of topics – for example, “politics”, “Europe”, and
“economics”, etc. – discussed in the documents. Recent work in Bayesian nonparametrics
utilizing exchangeable sequences of simple point processes have focused on the Indian
buffet process (IBP) [7, 10], which characterizes the marginal distribution of the sequence
(Xn )n∈N when they are conditionally-i.i.d. Bernoulli processes, given a common beta
process base measure [11, 24].
If the point processes (Xn )n∈N are no longer constrained to be simple, then data
points can contain multiple copies of features. For example, in the object recognition
task, an image could be associated with two cars, two trees, and one house. In the
document modeling task, an article could be associated with 100 words from the politics
topic, 200 words from the Europe topic, and 40 words from the economics topic. In this
article, we describe a count analogue of the IBP called the negative binomial Indian buffet
processes (NB-IBP), which characterizes the marginal distribution of (Xn )n∈N when it
is a conditionally i.i.d. sequence of negative binomial processes [3, 28], given a common
beta process base measure. This characterization allows us to describe new Markov Chain
Monte Carlo algorithms for posterior inference that do not require numerical integrations
over representations of the underlying beta process.
1.1. Results
e0 be a non-atomic, finite measure on Ω, and let Π be a Poisson (point)
Let c > 0, let B
process on Ω × (0, 1] with intensity
e0 (ds).
(ds, dp) 7→ cp−1 (1 − p)c−1 dpB
(1.2)
As this intensity is non-atomic and merely σ-finite, ΠPwill have an infinite number of
∞
atoms almost surely (a.s.), and so we may write Π = j=1 δ(γj ,bj ) for some a.s. unique
random elements b1 , b2 , . . . in (0, 1] and γ1 , γ2 , . . . in Ω. From Π, construct the random
measure
B :=
∞
X
j=1
b j δγj ,
(1.3)
The combinatorial structure of beta negative binomial processes
3
which is a beta process [11]. The construction of B ensures that the random variables
B(A1 ), . . . , B(Ak ) are independent for every finite, disjoint collection A1 , . . . , Ak ∈ A, and
B is said to be completely random or equivalently, have independent increments [14]. We
review completely random measures in Section 2.
The conjugacy of the family of beta distributions with various other exponential families carries over to beta processes and randomizations by probability kernels lying in these
same exponential families. The beta process is therefore a convenient choice for further
randomizations, or in the language of Bayesian nonparametrics, as a prior stochastic
process. For example, previous work has focused on the (simple) point process that takes
each atom γj with probability bj for every j ≥ 1, which is, conditioned on B, called a
Bernoulli process (with base measure B) [24]. In this article, we study the point process
X := Σ_{j=1}^{∞} ζ_j δ_{γ_j},    (1.4)

where the random variables ζ_1, ζ_2, . . . are conditionally independent given B and

ζ_j | b_j ∼ NB(r, b_j),    j ∈ N,    (1.5)
for some parameter r > 0. Here, NB(r, p) denotes the negative binomial distribution with
parameters r > 0, p ∈ (0, 1], whose probability mass function (p.m.f.) is
NB(z; r, p) := ((r)_z / z!) p^z (1 − p)^r,    z ∈ Z+,    (1.6)
where (a)n := a(a + 1) · · · (a + n − 1) with (a)0 := 1 is the nth rising factorial. Note that,
conditioned on B, the point process X is the (fixed component) of a negative binomial
process [3, 28]. Unconditionally, X is the ordinary component of a beta negative binomial
process, which we formally define in Section 2.
Conditioned on B, construct a sequence of point processes (Xn )n∈N that are i.i.d.
copies of X. In this case, (Xn )n∈N is an exchangeable sequence of beta negative binomial
processes, and our primary goal is to characterize the (unconditional) distribution of the
sequence. This task is non-trivial because the construction of the point process X in
equation (1.4) is not finitary in the sense that no finite subset of the atoms of B determines X with probability one. In the case of conditionally-i.i.d. Bernoulli processes, the
unconditional distributions of the measures remain in the class of Bernoulli processes,
and so a finitary construction is straightforwardly obtained with Poisson (point) processes. Then the distribution of the sequence, which Thibaux and Jordan [24] showed is
characterized by the IBP, may be derived immediately from the conjugacy between the
classes of beta and Bernoulli processes [11, 13, 24]. While conjugacy also holds between
the classes of beta and negative binomial processes [3, 28], the unconditional law of the
point process X is no longer that of a negative binomial process; instead, it is the law of
a beta negative binomial process.
Existing constructions for beta negative binomial processes truncate the number of
atoms in the underlying beta process and typically use slice sampling to remove the
error introduced by this approximation asymptotically [3, 19, 23, 28]. In this work, we
instead provide a construction for the beta negative binomial process directly, avoiding
a representation of the underlying beta process. To this end, note that while the beta
process B has a countably infinite number of atoms a.s., it can be shown that B is still an
a.s. finite measure [11]. It follows as an easy consequence that the point process X is a.s.
finite as well and, therefore, has an a.s. finite number of atoms, which we represent with a
Poisson process. The atomic masses are then characterized by the digamma distribution,
introduced by Sibuya [21], which has p.m.f. (for parameters r, θ > 0) given by
digamma(z; r, θ) :=
1
(r)z
z −1 ,
ψ(r + θ) − ψ(θ) (r + θ)z
z ≥ 1,
(1.7)
where ψ(a) := Γ′ (a)/Γ(a) denotes the digamma function. In Section 3, we prove the
following:
Theorem 1.1. Let Y be a Poisson process on (Ω, A) with finite intensity

ds ↦ c[ψ(c + r) − ψ(c)] B̃0(ds),    (1.8)

that is, Y = Σ_{k=1}^{κ} δ_{γ_k} for a Poisson random variable κ with mean c[ψ(c + r) − ψ(c)] B̃0(Ω) and i.i.d. random variables (γ_k)_{k∈N}, independent from κ, each with distribution B̃0/B̃0(Ω). Let (ζ_k)_{k∈N} be an independent collection of i.i.d. digamma(r, c) random variables. Then

X =_d Σ_{k=1}^{κ} ζ_k δ_{γ_k},    (1.9)

where X is the beta negative binomial process defined in equation (1.4).
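Because the construction in Theorem 1.1 is finitary, it can be simulated directly. The following Python sketch is our own illustration, not part of the paper: it assumes Ω = [0, 1] with B̃0 = T · Uniform[0, 1], and it draws digamma(r, c) masses from the p.m.f. in equation (1.7) truncated at a large cap and renormalized, which is only an approximation of the exact distribution.

import numpy as np
from scipy.special import digamma as psi

def digamma_pmf(r, theta, z_max=10_000):
    """Truncated, renormalized p.m.f. of the digamma(r, theta) distribution (eq. 1.7)."""
    z = np.arange(1, z_max + 1)
    log_w = np.cumsum(np.log(r + z - 1) - np.log(r + theta + z - 1)) - np.log(z)
    w = np.exp(log_w)
    return w / w.sum()

def sample_bnbp(c, r, T, rng=None, z_max=10_000):
    """Draw the (location, count) atoms of X via the Theorem 1.1 construction."""
    rng = rng or np.random.default_rng()
    kappa = rng.poisson(c * T * (psi(c + r) - psi(c)))      # number of atoms
    locations = rng.uniform(0.0, 1.0, size=kappa)           # i.i.d. draws from B0_tilde / T
    counts = rng.choice(np.arange(1, z_max + 1), size=kappa, p=digamma_pmf(r, c, z_max))
    return list(zip(locations.tolist(), counts.tolist()))

# Example usage: atoms = sample_bnbp(c=2.0, r=1.0, T=3.0)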
With this construction and conjugacy (the relevant results are reproduced in Section 4),
characterizing the distribution of (Xn )n∈N is straightforward. However, in applications
we are only interested in the combinatorial structure of the sequence (Xn )n∈N , that is,
the pattern of sharing amongst the atoms while ignoring the locations of the atoms
themselves. More precisely, for every n ∈ N, let Hn := Zn+ \ {0n } be the set of all lengthn sequences of non-negative integers, excluding the all-zero sequence. Elements in Hn
are called histories, and can be thought of as representations of non-empty multisets
of [n] := {1, . . . , n}. For every h ∈ Hn , let Mh be the number of elements s ∈ Ω such
that Xj {s} = h(j) for all j ≤ n. By the combinatorial structure of a finite subsequence
X[n] := (X1 , . . . , Xn ), we will mean the collection (Mh )h∈Hn of counts, which together can
be understood as representations of multisets of histories. These counts are combinatorial
in the following sense: Let φ: (Ω, A) → (Ω, A) be a Borel automorphism on (Ω, A), that
is, a measurable permutation of Ω whose inverse is also measurable, and define the
transformed processes Xjφ := Xj ◦ φ−1 , for every j ≤ n, where each atom s is repositioned
to φ(s). The collection (Mh )h∈Hn is invariant to this transformation, and it is in this
sense that they only capture the combinatorial structure. In Section 4, we prove the
following.
The combinatorial structure of beta negative binomial processes
5
Theorem 1.2. The probability mass function of (M_h)_{h∈Hn} is

P{M_h = m_h : h ∈ Hn}
= ((cT)^{Σ_{h∈Hn} m_h} / Π_{h∈Hn} m_h!) · exp(−cT[ψ(c + nr) − ψ(c)]) · Π_{h∈Hn} [ (Γ(S(h))Γ(c + nr)/Γ(c + nr + S(h))) Π_{j=1}^{n} (r)_{h(j)}/h(j)! ]^{m_h},    (1.10)

where S(h) := Σ_{j≤n} h(j), for every h ∈ Hn, and T := B̃0(Ω) > 0.
As one would expect, equation (1.10) is reminiscent of the p.m.f. for the IBP, and
indeed the collection (Mh )h∈Hn is characterized by what we call the negative binomial
Indian buffet process, or NB-IBP. Let beta-NB(r, α, β) denote the beta negative binomial
distribution (with parameters r, α, β > 0), that is, we write Z ∼ beta-NB(r, α, β) if there
exists a beta random variable p ∼ beta(α, β) such that Z|p ∼ NB(r, p). In the NB-IBP, a
sequence of customers enters an Indian buffet restaurant:
• The first customer
– selects Poisson(cγ[ψ(c + r) − ψ(c)]) distinct dishes, taking digamma(r, c) servings
of each dish, independently.
• For n ≥ 1, the (n + 1)st customer
– takes beta-NB(r, Sn,k , c + nr) servings of each previously sampled dish k; where
Sn,k is the total number of servings taken of dish k by the first n customers;
– selects Poisson(cγ[ψ(c + (n + 1)r) − ψ(c + nr)]) new dishes to taste, taking
digamma(r, c + nr) servings of each dish, independently.
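A minimal simulator of this generative process is sketched below in Python (our own illustration, not the paper's code). It assumes γ denotes the total mass B̃0(Ω), draws digamma masses from the truncated p.m.f. (1.7), and maps the paper's NB(r, p) parametrization onto NumPy's negative_binomial by passing success probability 1 − p.

import numpy as np
from scipy.special import digamma as psi

def sample_digamma(r, theta, rng, z_max=10_000):
    z = np.arange(1, z_max + 1)
    log_w = np.cumsum(np.log(r + z - 1) - np.log(r + theta + z - 1)) - np.log(z)
    w = np.exp(log_w)
    return int(rng.choice(z, p=w / w.sum()))

def sample_beta_nb(r, alpha, beta, rng):
    p = rng.beta(alpha, beta)
    # Paper's NB(r, p) puts weight p^z (1 - p)^r on z; NumPy counts "failures",
    # so its success probability is 1 - p here.
    return int(rng.negative_binomial(r, 1.0 - p))

def nb_ibp(num_customers, r, c, gamma, rng=None):
    """servings[t][k] = number of servings of dish k taken by customer t + 1."""
    rng = rng or np.random.default_rng()
    total = []            # running totals S_{t,k} per dish
    servings = []
    for t in range(1, num_customers + 1):
        row = {}
        for k, s in enumerate(total):                     # previously sampled dishes
            row[k] = sample_beta_nb(r, s, c + (t - 1) * r, rng)
        new = rng.poisson(c * gamma * (psi(c + t * r) - psi(c + (t - 1) * r)))
        for _ in range(new):                              # new dishes
            total.append(0)
            row[len(total) - 1] = sample_digamma(r, c + (t - 1) * r, rng)
        for k, z in row.items():
            total[k] += z
        servings.append(row)
    return servings

# Example usage: draws = nb_ibp(num_customers=5, r=1.0, c=2.0, gamma=3.0)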
The interpretation here is that, for every h ∈ Hn , the count Mh is the number of
dishes k such that, for every j ≤ n, customer j took h(j) servings of dish k. Then the
sum S(h) in equation (1.10) is the total number of servings taken of dish k by the first
n customers. Because the NB-IBP is the combinatorial structure of a conditionally i.i.d.
process, its distribution, given in Theorem 1.2, must be invariant to every permutation
of the customers. We can state this property formally as follows.
Theorem 1.3 (Exchangeability). Let π be a permutation of [n] := {1, . . . , n}, and, for
h ∈ Hn , note that the composition h ◦ π ∈ Hn is given by (h ◦ π)(j) = h(π(j)), for every
j ≤ n. Then
(M_h)_{h∈Hn} =_d (M_{h◦π})_{h∈Hn}.    (1.11)
The exchangeability of the combinatorial structure and its p.m.f. in equation (1.10)
allows us to develop Gibbs sampling techniques analogous to those originally developed
for the IBP [7, 17]. In particular, because the NB-IBP avoids a representation of the
beta process underlying the exchangeable sequence (Xn )n∈N , these posterior inference
algorithms do not require numerical integration over representations of the beta process.
We discuss some of these techniques in Section 5.
2. Preliminaries
Here, we review completely random measures and formally define the negative binomial
and beta negative binomial processes. We provide characterizations via Laplace functionals and conclude the section with a discussion of related work.
2.1. Completely random measures
Let M(Ω, A) denote the space of σ-finite measures on (Ω, A) equipped with the σ-algebra generated by the projection maps µ 7→ µ(A) for all A ∈ A. A random measure
ξ on (Ω, A) is a random element in M(Ω, A), and we say that ξ is completely random
or has independent increments when, for every finite collection of disjoint, measurable
sets A1 , . . . , An ∈ A, the random variables ξ(A1 ), . . . , ξ(An ) are independent. Here, we
briefly review completely random measures; for a thorough treatment, the reader should
consult Kallenberg [12], Chapter 12, or the classic text by Kingman [14]. Every completely
random measure ξ can be written as a sum of three independent parts
\[ \xi = \bar{\xi} + \sum_{s\in A} \vartheta_s\,\delta_s + \sum_{(s,p)\in\eta} p\,\delta_s \quad \text{a.s.}, \tag{2.1} \]
called the diffuse, fixed, and ordinary components, respectively, where:
1. ξ̄ is a non-random, non-atomic measure;
2. A ⊆ Ω is a non-random countable set whose elements are referred to as the fixed
atoms and whose masses ϑ1 , ϑ2 , . . . are independent random variables in R+ (the nonnegative real numbers);
3. η is a Poisson process on Ω × (0, ∞) whose intensity Eη is σ-finite and has diffuse
projections onto Ω, that is, the measure (Eη)(· × (0, ∞)) on Ω is non-atomic.
In this article, we will only study purely-atomic completely random measures, which
therefore have no diffuse component. It follows that we may characterize the law of ξ by
(1) the distributions of the atomic masses in the fixed component, and (2) the intensity
of the Poisson process underlying the ordinary component.
2.2. Definitions
By a base measure on (Ω, A), we mean a σ-finite measure B on (Ω, A) such that B{s} ≤ 1
for all s ∈ Ω. For the remainder of the article, fix a base measure B0 . We may write
\[ B_0 = \tilde B_0 + \sum_{s\in A} \bar b_s\,\delta_s \tag{2.2} \]
for some non-atomic measure \tilde B_0, a countable set A ⊆ Ω, and constants \bar b_1, \bar b_2, . . . in (0, 1]. (Note that we have relaxed the condition on \tilde B_0, relative to the Introduction, to be merely σ-finite.) As discussed in the Introduction, a convenient model for random base measures
are beta processes, a class of completely random measures introduced by Hjort [11]. For
the remainder of the article, let c: Ω → R+ be a measurable function, which we call a
concentration function (or parameter when it is constant).
Definition 2.1 (Beta process). A random measure B on (Ω, A) is a beta process
with concentration function c and base measure B0 , written B ∼ BPL (c, B0 ), when it is
purely atomic and completely random, with a fixed component
\[ \sum_{s\in A} \vartheta_s\,\delta_s, \qquad \vartheta_s \overset{\text{ind}}{\sim} \mathrm{beta}\bigl(c(s)\bar b_s,\, c(s)(1-\bar b_s)\bigr), \tag{2.3} \]
and an ordinary component with intensity measure
\[ (ds, dp) \mapsto c(s)\, p^{-1}(1-p)^{c(s)-1}\, dp\,\tilde B_0(ds). \tag{2.4} \]
It is straightforward to show that a beta process is itself a base measure with probability
one. This definition of the beta process generalizes the version given in the introduction
to a non-homogeneous process with a fixed component. Likewise, we generalize our earlier
definition of a negative binomial process to include an ordinary component.
Definition 2.2 (Negative binomial process). A point process X on (Ω, A) is a
negative binomial process with parameter r > 0 and base measure B0 , written X ∼
NBP(r, B0 ), when it is purely atomic and completely random, with a fixed component
\[ \sum_{s\in A} \vartheta_s\,\delta_s, \qquad \vartheta_s \overset{\text{ind}}{\sim} \mathrm{NB}(r, \bar b_s), \tag{2.5} \]
and an ordinary component with intensity measure
\[ (ds, dp) \mapsto r\,\delta_1(dp)\,\tilde B_0(ds). \tag{2.6} \]
The fixed component in this definition was given by Broderick et al. [3] and Zhou et
al. [28] (and by Thibaux [25] for the case r = 1). Here, we have additionally defined an
ordinary component, following intuitions from Roy [20].
The law of a random measure is completely characterized by its Laplace functional,
and this representation is often simpler to manipulate: From Campbell’s theorem, or a
version of the Lévy–Khinchin formula for Borel spaces, one can show that the Laplace
functional of X is
\[ f \mapsto \mathbb{E}[e^{-X(f)}] = \exp\Bigl(-\int (1-e^{-f(s)})\,r\,\tilde B_0(ds)\Bigr)\prod_{s\in A}\Bigl(\frac{1-\bar b_s}{1-\bar b_s e^{-f(s)}}\Bigr)^{r}, \tag{2.7} \]
where f ranges over non-negative measurable functions and X(f) := \int f(s)\,X(ds).
Finally, we define beta negative binomial processes via their conditional law.
Definition 2.3 (Beta negative binomial process). A random measure X on (Ω, A)
is a beta negative binomial process with parameter r > 0, concentration function c, and
base measure B0 , written
X ∼ BNBP(r, c, B0 ),
if there exists a beta process B ∼ BPL (c, B0 ) such that
X|B ∼ NBP(r, B).
(2.8)
This characterization was given by Broderick et al. [3] and can be seen to match
a special case of the model in Zhou et al. [28] (see the discussion of related work in
Section 2.3). It is straightforward to show that a beta negative binomial process is also
completely random, and that its Laplace functional is given by
\[ \mathbb{E}[e^{-X(f)}] = \exp\Bigl(-\int\Bigl[1-\Bigl(\frac{1-p}{1-pe^{-f(s)}}\Bigr)^{r}\Bigr]c(s)\,p^{-1}(1-p)^{c(s)-1}\,dp\,\tilde B_0(ds)\Bigr) \times \prod_{s\in A}\int\Bigl(\frac{1-p}{1-pe^{-f(s)}}\Bigr)^{r}\mathrm{beta}\bigl(p;\,c(s)\bar b_s,\,c(s)(1-\bar b_s)\bigr)\,dp, \tag{2.9} \]
for f : Ω → R+ measurable, where we note that the factors in the product term take the
form of the Laplace transform of the beta negative binomial distribution.
2.3. Related work
The term “negative binomial process” has historically been reserved for processes with
negative binomial increments – a class into which the process we study here does not fall –
– and these processes have been long-studied in probability and statistics. We direct the
reader to Kozubowski and Podgórski [15] for references.
One way to construct a process with negative binomial increments is to rely upon the
fact that a negative binomial distribution is a gamma mixture of Poisson distributions. In
particular, similarly to the construction by Lo [16], consider a Cox process X directed by
a gamma process G with finite non-atomic intensity. So constructed, X has independent
increments with negative binomial distributions. Like the beta process (with a finite
intensity underlying its ordinary component), the gamma process has, with probability
one, a countably infinite number of atoms but a finite total mass, and so the Cox process
X is a.s. finite as well. Despite similarities, a comparison of Laplace functionals shows that
the law of X is not that of a beta negative binomial process. Using an approach directly
analogous to the derivation of the IBP in [10], Titsias [26] characterizes the combinatorial
structure of a sequence of point processes that, conditioned on G, are independent and
identically distributed to the Cox process X. See Section 4 for comments. This was
the first count analogue of the IBP; the possibility of a count analogue arising from
beta negative binomial processes was first raised by Zhou et al. [28], who described the
distribution of the number of new dishes sampled by each customer. Recent work by
Zhou, Madrid and Scott [29], independent of our own and proceeding along different
lines, describes a combinatorial process related to the NB-IBP (following a re-scaling of
the beta process intensity).
Finally, we note that another negative binomial process without negative binomial
increments was defined on Euclidean space by Barndorff-Nielsen and Yeo [1] and extended
to general spaces by Grégoire [9] and Wolpert and Ickstadt [27]. These measures are
generally Cox processes on (Ω, A) directed by random measures of the form
\[ ds \mapsto \int_{\mathbb{R}_+} \nu(t, ds)\,G(dt), \]
where G is again a gamma process, this time on R+ , and ν is a probability kernel from
Ω to R+ , for example, the Gaussian kernel.
3. Constructing beta negative binomial processes
Before providing a finitary construction for the beta negative binomial process, we make
a few remarks on the digamma distribution. For the remainder of the article, define
λr,θ := ψ(θ + r) − ψ(θ) for some r, θ > 0. Following a representation by Sibuya [21],
we may relate the digamma and beta negative binomial distributions as follows: Let
Z ∼ digamma(r, θ) and define W := Z − 1, the latter of which has p.m.f.
\[ \mathbb{P}\{W = w\} = (\theta\lambda_{r,\theta})^{-1}\,\frac{w+r}{w+1}\,\mathrm{beta\text{-}NB}(w;\,r,1,\theta), \qquad w \in \mathbb{Z}_+. \tag{3.1} \]
Deriving the Laplace transform of the law of W is straightforward, and because E[e^{-tW}] = e^t E[e^{-tZ}], one may verify that the Laplace transform of the digamma distribution is given by
\[ \Psi_{r,\theta}(t) := \mathbb{E}[e^{-tZ}] = 1 - \lambda_{r,\theta}^{-1}\int\Bigl[1-\Bigl(\frac{1-p}{1-pe^{-t}}\Bigr)^{r}\Bigr]p^{-1}(1-p)^{\theta-1}\,dp. \tag{3.2} \]
The form of equation (3.1) suggests the following rejection sampler, which was first proposed by Devroye [6], Proposition 2, Remark 1: Let r > 0 and let (U_n)_{n∈N} be an i.i.d. sequence of uniformly distributed random numbers. Let
\[ (Y_n)_{n\in\mathbb{N}} \overset{\text{i.i.d.}}{\sim} \mathrm{beta\text{-}NB}(r, 1, \theta), \]
and define η := \inf\{n \in \mathbb{N} : \max\{r,1\}\cdot U_n < \tfrac{Y_n+r}{Y_n+1}\}. Then
\[ Y_\eta + 1 \sim \mathrm{digamma}(r, \theta), \qquad \mathbb{E}\eta = \frac{\max\{r,1\}}{\theta[\psi(r+\theta)-\psi(\theta)]}; \qquad \mathbb{E}\eta < \max\{r, r^{-1}\}. \]
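As a quick numerical illustration (ours, not from the paper), the expected number of proposals can be tabulated directly from its closed form and compared with the stated bound max{r, 1/r}; the parameter grid below is chosen arbitrarily.

```python
import numpy as np
from scipy.special import digamma as psi

# E[eta] = max{r,1} / (theta * (psi(theta + r) - psi(theta))) for the rejection
# sampler above; the claim is that it never exceeds max{r, 1/r}.
for r in [0.25, 1.0, 4.0]:
    for theta in [0.1, 1.0, 10.0, 100.0]:
        e_eta = max(r, 1.0) / (theta * (psi(theta + r) - psi(theta)))
        print(f"r={r:5.2f} theta={theta:7.1f}  E[eta]={e_eta:6.3f}  bound={max(r, 1/r):6.3f}")
```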
With digamma random variables, we provide a finitary construction for the beta negative binomial process. The following result generalizes the statement given by Theorem 1.1 (in the Introduction) to a non-homogeneous process, which also has a fixed
component.
Theorem 3.1. Let r > 0, and let ϑ := (ϑs )s∈A be a collection of independent random
variables with
\[ \vartheta_s \sim \mathrm{beta\text{-}NB}\bigl(r,\,c(s)\bar b_s,\,c(s)(1-\bar b_s)\bigr), \qquad s \in A. \tag{3.3} \]
Let Y be a Poisson process on (Ω, A), independent from ϑ, with (finite) intensity
\[ ds \mapsto c(s)[\psi(c(s)+r)-\psi(c(s))]\,\tilde B_0(ds). \tag{3.4} \]
Write Y = \sum_{k=1}^{\kappa}\delta_{\gamma_k} for some random element κ in Z_+ and a.s. unique random elements γ_1, γ_2, . . . in Ω, and put \mathcal{F} := σ(κ, γ_1, γ_2, . . .). Let (ζ_j)_{j∈N} be a collection of random variables that are independent from ϑ and are conditionally independent given \mathcal{F}, and let
\[ \zeta_j \mid \mathcal{F} \sim \mathrm{digamma}(r, c(\gamma_j)), \qquad j \in \mathbb{N}. \tag{3.5} \]
Then
\[ X = \sum_{s\in A}\vartheta_s\,\delta_s + \sum_{j=1}^{\kappa}\zeta_j\,\delta_{\gamma_j} \sim \mathrm{BNBP}(r, c, B_0). \tag{3.6} \]
Proof. We have
\[ \mathbb{E}^{\mathcal{F}}[e^{-X(f)}] = \prod_{s\in A}\mathbb{E}[e^{-\vartheta_s f(s)}] \times \prod_{j=1}^{\kappa}\mathbb{E}^{\mathcal{F}}[e^{-\zeta_j f(\gamma_j)}], \tag{3.7} \]
for every f : Ω → R_+ measurable. For s ∈ Ω, write g(s) = Ψ_{r,c(s)}(f(s)) for the Laplace transform of the digamma distribution evaluated at f(s), where Ψ_{r,θ}(t) is given by equation (3.2). We may then write
\[ \prod_{j=1}^{\kappa}\mathbb{E}^{\mathcal{F}}[e^{-\zeta_j f(\gamma_j)}] = \prod_{j=1}^{\kappa} g(\gamma_j). \tag{3.8} \]
Then by the chain rule of conditional expectation, complete randomness, and Campbell's theorem,
\[ \mathbb{E}[e^{-X(f)}] = \prod_{s\in A}\mathbb{E}[e^{-\vartheta_s f(s)}] \times \exp\Bigl(-\int_{\Omega}(1-g(s))\,c(s)\,\lambda_{r,c(s)}\,\tilde B_0(ds)\Bigr) \tag{3.9} \]
\[ = \prod_{s\in A}\int\Bigl(\frac{1-p}{1-pe^{-f(s)}}\Bigr)^{r}\mathrm{beta}\bigl(p;\,c(s)\bar b_s,\,c(s)(1-\bar b_s)\bigr)\,dp \times \exp\Bigl(-\int_{(0,1]\times\Omega}\Bigl[1-\Bigl(\frac{1-p}{1-pe^{-f(s)}}\Bigr)^{r}\Bigr]c(s)\,p^{-1}(1-p)^{c(s)-1}\,dp\,\tilde B_0(ds)\Bigr), \tag{3.10} \]
which is the desired form of the Laplace functional.
A finitary construction for conditionally-i.i.d. sequences of negative binomial processes
with a common beta process base measure now follows from known conjugacy results. In
particular, for every n ∈ N, let X[n] := (X1 , . . . , Xn ). The following theorem characterizes
the conjugacy between the (classes of) beta and negative binomial processes and follows
from repeated application of the results by Kim [13], Theorem 3.3 or Hjort [11], Corollary 4.1. This result, which is tailored to our needs, is similar to those already given by
Broderick et al. [3] and Zhou et al. [28], and generalizes the result given by Thibaux [25]
for the case r = 1.
Theorem 3.2 (Hjort [11], Zhou et al. [28]). Let B ∼ BPL (c, B0 ) and, conditioned on
B, let (Xn )n∈N be a sequence of i.i.d. negative binomial processes with parameter r > 0
and base measure B. Then for every n ∈ N,
\[ B \mid X_{[n]} \sim \mathrm{BP_L}\Bigl(c_n,\ \frac{c}{c_n}B_0 + \frac{1}{c_n}S_n\Bigr), \tag{3.11} \]
where S_n := \sum_{i=1}^{n} X_i and c_n(s) := c(s) + S_n\{s\} + nr, for s ∈ Ω.
Remark 3.1. It follows immediately that, for every n ∈ N, the law of Xn+1 conditioned
on X1 , . . . , Xn is given by
\[ X_{n+1} \mid X_{[n]} \sim \mathrm{BNBP}\Bigl(r,\ c_n,\ \frac{c}{c_n}B_0 + \frac{1}{c_n}S_n\Bigr). \tag{3.12} \]
We may therefore construct this exchangeable sequence of beta negative binomial
processes with Theorem 3.1.
4. Combinatorial structure
We now characterize the combinatorial structure of the exchangeable sequence X_{[n]} in the case when c > 0 is constant and B_0 (= \tilde B_0) is non-atomic. In order to make this precise, we
introduce a quotient of the space of sequences of integer-valued measures. Let n ∈ N and
for any pair U := (U1 , . . . , Un ) and V := (V1 , . . . , Vn ) of (finite) sequences of integer-valued
measures, write U ∼ V when there exists a Borel automorphism φ on (Ω, A) satisfying
Uj = Vj ◦ φ−1 for every j ≤ n. It is easy to verify that ∼ is an equivalence relation. Let
[[U ]] denote the equivalence class containing U . The quotient space induced by ∼ is itself
a Borel space, and can be related to the Borel space of sequences of Z+ -valued measures
by coarsening the σ-algebra to that generated by the functionals
\[ M_h(U_1, \ldots, U_n) := \#\{s \in \Omega : \forall j \le n,\ U_j\{s\} = h(j)\}, \qquad h \in H_n, \tag{4.1} \]
where #A denotes the cardinality of A, and Hn := Zn+ \ {0n } is the space of histories
defined in the Introduction. The collection (Mh )h∈Hn of multiplicities (of histories) corresponding to X[n] , also defined in the Introduction, then satisfies Mh = Mh (X[n] ) for
every h ∈ Hn . The collection (Mh )h∈Hn thus identifies a point in the quotient space
induced by ∼. Our aim is to characterize the distribution of (Mh )h∈Hn , for every n ∈ N.
Let ℏ ∈ H_n, and define H_{n+1}^{(ℏ)} := \{h ∈ H_{n+1} : \forall j \le n,\ h(j) = ℏ(j)\} to be the collection of histories in H_{n+1} that agree with ℏ on the first n entries. Then note that
\[ M_ℏ = \sum_{h\in H_{n+1}^{(ℏ)}} M_h, \qquad ℏ \in H_n, \tag{4.2} \]
that is, the multiplicities (M_h)_{h∈H_{n+1}} at stage n + 1 completely determine the multiplicities (M_ℏ)_{ℏ∈H_n} at all earlier stages. It follows that
\[ \mathbb{P}\{M_h = m_h : h \in H_{n+1}\} = \mathbb{P}\{M_ℏ = m_ℏ : ℏ \in H_n\} \times \mathbb{P}\{M_h = m_h : h \in H_{n+1} \mid M_ℏ = m_ℏ : ℏ \in H_n\}, \tag{4.3} \]
where m_ℏ = \sum_{h\in H_{n+1}^{(ℏ)}} m_h for ℏ ∈ H_n. The structure of equation (4.3) suggests an inductive proof for Theorem 1.2.
4.1. The law of Mh for h ∈ H1
Note that H1 is isomorphic to N and that the collection (Mh )h∈H1 counts the number of
atoms of each positive integer mass. It follows from Theorem 1.1 and a transfer argument
[12], Propositions 6.10, 6.11 and 6.13, that there exists:
1. a Poisson random variable κ with mean cT λ_{r,c}, where T := \tilde B_0(Ω) < ∞;
2. an i.i.d. collection of a.s. unique random elements γ1 , γ2 , . . . in Ω;
3. an i.i.d. collection (ζj )j∈N of digamma(r, c) random variables;
all mutually independent, such that
\[ X_1 = \sum_{j=1}^{\kappa} \zeta_j\,\delta_{\gamma_j} \quad \text{a.s.} \]
It follows that
\[ M_h = \#\{j \le \kappa : \zeta_j = h(1)\} \quad \text{a.s., for } h \in H_1, \tag{4.4} \]
and κ = \sum_{h\in H_1} M_h a.s. Therefore,
\[ \mathbb{P}\{M_h = m_h : h \in H_1\} = \mathbb{P}\Bigl\{\kappa = \sum_{h\in H_1} m_h\Bigr\}\,\mathbb{P}\Bigl\{M_h = m_h : h \in H_1 \Bigm| \kappa = \sum_{h\in H_1} m_h\Bigr\}. \tag{4.5} \]
Because ζ1 , ζ2 , . . . are i.i.d., the collection (Mh )h∈H1 has a multinomial distribution conditioned on its sum κ. Namely, Mh counts the number of times, in κ independent trials,
that the multiplicity h(1) arises from a digamma(r, c) distribution. In particular,
\[ \mathbb{P}\Bigl\{M_h = m_h : h \in H_1 \Bigm| \kappa = \sum_{h\in H_1} m_h\Bigr\} = \frac{\bigl(\sum_{h\in H_1} m_h\bigr)!}{\prod_{h\in H_1}(m_h!)}\prod_{h\in H_1}\bigl[\mathrm{digamma}(h(1);\,r,c)\bigr]^{m_h}. \tag{4.6} \]
It follows that
\[ \mathbb{P}\{M_h = m_h : h \in H_1\} = \frac{(cT\lambda_{r,c})^{\sum_{h\in H_1} m_h}}{\prod_{h\in H_1}(m_h!)}\,\exp(-cT\lambda_{r,c})\prod_{h\in H_1}\bigl[\mathrm{digamma}(h(1);\,r,c)\bigr]^{m_h}. \tag{4.7} \]
4.2. The conditional law of Mh for h ∈ Hn+1
Let S_n := \sum_{j=1}^{n} X_j. Recall that s(ℏ) := \sum_{j\le n} ℏ(j) for ℏ ∈ H_n. We may write
\[ S_n = \sum_{ℏ\in H_n}\sum_{j=1}^{M_ℏ} s(ℏ)\,\delta_{\omega_{ℏ,j}}, \tag{4.8} \]
for some collection ω := (ωℏ,j )ℏ∈Hn ,j∈N of a.s. distinct random elements in Ω. It follows
from Remark 3.1, Theorem 1.1, and a transfer argument that there exists:
1. a Poisson random variable κ with mean cT λr,c+nr ;
2. an i.i.d. collection of a.s. unique random elements γ1 , γ2 , . . . in Ω, a.s. distinct also
from ω;
3. an i.i.d. collection (ζj )j∈N of digamma(r, c + nr) random variables;
4. for each ℏ ∈ Hn , an i.i.d. collection (ϑℏ,j )j∈N of random variables satisfying
ϑℏ,j ∼ beta-NB(r, s(ℏ), c + nr)
for j ∈ N;
all mutually independent and independent of X[n] , such that
\[ X_{n+1} = \sum_{ℏ\in H_n}\sum_{j=1}^{M_ℏ} \vartheta_{ℏ,j}\,\delta_{\omega_{ℏ,j}} + \sum_{j=1}^{\kappa} \zeta_j\,\delta_{\gamma_j} \quad \text{a.s.} \tag{4.9} \]
Conditioned on X[n] , the first and second terms on the right-hand side correspond to the
fixed and ordinary components of Xn+1 , respectively. Let
\[ H_{n+1}^{(0)} := \{h \in H_{n+1} : h(j) = 0,\ j \le n\} \tag{4.10} \]
be the set of histories h for which h(n + 1) is the first non-zero element. Then, with
probability one,
\[ M_h = \#\{j \le \kappa : \zeta_j = h(n+1)\} \quad \text{for } h \in H_{n+1}^{(0)}, \tag{4.11} \]
and
\[ M_h = \#\{j \le M_ℏ : \vartheta_{ℏ,j} = h(n+1)\} \quad \text{for } ℏ \in H_n \text{ and } h \in H_{n+1}^{(ℏ)}. \tag{4.12} \]
By the stated independence of the variables above, we have
\[ \mathbb{P}\{M_h = m_h : h \in H_{n+1} \mid M_ℏ = m_ℏ : ℏ \in H_n\} = \mathbb{P}\{M_h = m_h : h \in H_{n+1}^{(0)}\}\prod_{ℏ\in H_n}\mathbb{P}\{M_h = m_h : h \in H_{n+1}^{(ℏ)} \mid M_ℏ = m_ℏ\}. \tag{4.13} \]
Let H_{n+1}^{+} := \bigcup_{ℏ\in H_n} H_{n+1}^{(ℏ)}. For every ℏ ∈ H_n, the random variables ϑ_{ℏ,1}, ϑ_{ℏ,2}, . . . are i.i.d., and therefore, conditioned on M_ℏ, the collection (M_h)_{h∈H_{n+1}^{(ℏ)}} has a multinomial distribution. In particular, the product term in equation (4.13) is given by
\[ \prod_{ℏ\in H_n}\mathbb{P}\{M_h = m_h : h \in H_{n+1}^{(ℏ)} \mid M_ℏ = m_ℏ\} = \frac{\prod_{ℏ\in H_n}(m_ℏ!)}{\prod_{h\in H_{n+1}^{+}}(m_h!)}\prod_{h\in H_{n+1}^{+}}\bigl[\mathrm{beta\text{-}NB}\bigl(h(n+1);\,r,\,S(h)-h(n+1),\,c+nr\bigr)\bigr]^{m_h}. \]
The p.m.f. of the beta negative binomial distribution is given by
\[ \mathrm{beta\text{-}NB}(z;\,r,\alpha,\beta) = \frac{(r)_z}{z!}\,\frac{B(z+\alpha,\,r+\beta)}{B(\alpha,\beta)}, \qquad z \in \mathbb{Z}_+, \tag{4.14} \]
for positive parameters r, α, and β, where B(α, β) := Γ(α)Γ(β)/Γ(α + β) denotes the beta function. We have that κ = \sum_{h\in H_{n+1}^{(0)}} M_h a.s., and therefore
\[ \mathbb{P}\{M_h = m_h : h \in H_{n+1}^{(0)}\} = \mathbb{P}\Bigl\{\kappa = \sum_{h\in H_{n+1}^{(0)}} m_h\Bigr\} \times \mathbb{P}\Bigl\{M_h = m_h : h \in H_{n+1}^{(0)} \Bigm| \kappa = \sum_{h\in H_{n+1}^{(0)}} m_h\Bigr\}. \tag{4.15} \]
Because ζ_1, ζ_2, . . . are i.i.d., conditioned on the sum κ, the collection (M_h)_{h∈H_{n+1}^{(0)}} has a multinomial distribution, and so
\[ \mathbb{P}\Bigl\{M_h = m_h : h \in H_{n+1}^{(0)} \Bigm| \kappa = \sum_{h\in H_{n+1}^{(0)}} m_h\Bigr\} = \frac{\bigl(\sum_{h\in H_{n+1}^{(0)}} m_h\bigr)!}{\prod_{h\in H_{n+1}^{(0)}}(m_h!)}\prod_{h\in H_{n+1}^{(0)}}\bigl[\mathrm{digamma}(h(n+1);\,r,\,c+nr)\bigr]^{m_h}. \tag{4.16} \]
It follows that
\[ \begin{aligned} \mathbb{P}\{M_h &= m_h : h \in H_{n+1} \mid M_ℏ = m_ℏ : ℏ \in H_n\} \\ &= \frac{(cT\lambda_{r,c+nr})^{\sum_{h\in H_{n+1}^{(0)}} m_h}}{\bigl(\sum_{h\in H_{n+1}^{(0)}} m_h\bigr)!}\,\exp(-cT\lambda_{r,c+nr}) \\ &\quad\times \frac{\prod_{ℏ\in H_n}(m_ℏ!)}{\prod_{h\in H_{n+1}^{+}}(m_h!)}\prod_{h\in H_{n+1}^{+}}\bigl[\mathrm{beta\text{-}NB}\bigl(h(n+1);\,r,\,S(h)-h(n+1),\,c+nr\bigr)\bigr]^{m_h} \\ &\quad\times \frac{\bigl(\sum_{h\in H_{n+1}^{(0)}} m_h\bigr)!}{\prod_{h\in H_{n+1}^{(0)}}(m_h!)}\prod_{h\in H_{n+1}^{(0)}}\bigl[\mathrm{digamma}(h(n+1);\,r,\,c+nr)\bigr]^{m_h}. \end{aligned} \tag{4.17} \]
Proof of Theorem 1.2. The proof is by induction. The p.m.f. P{Mh = mh : h ∈ H1 }
is given by equation (4.7), which agrees with equation (1.10) for the case n = 1. The
conditional p.m.f. P{Mh = mh : h ∈ Hn+1 |Mℏ = mℏ : ℏ ∈ Hn } is given by equation (4.17).
By the inductive hypothesis, the p.m.f. P{Mℏ = mℏ : ℏ ∈ Hn } is given by equation (1.10).
Then by equation (4.3), we have
\[ \begin{aligned} \mathbb{P}\{M_h = m_h : h \in H_{n+1}\} &= \frac{(cT)^{\sum_{ℏ\in H_n} m_ℏ}\,(cT\lambda_{r,c+nr})^{\sum_{h\in H_{n+1}^{(0)}} m_h}}{\prod_{h\in H_{n+1}^{+}}(m_h!)\,\prod_{h\in H_{n+1}^{(0)}}(m_h!)}\exp\Bigl(-cT\sum_{j=1}^{n+1}\lambda_{r,c+(j-1)r}\Bigr) \\ &\quad\times \prod_{h\in H_{n+1}^{+}}\Bigl[B\bigl(S(h)-h(n+1),\,c+nr\bigr)\prod_{j=1}^{n}\frac{(r)_{h(j)}}{h(j)!}\times\mathrm{beta\text{-}NB}\bigl(h(n+1);\,r,\,S(h)-h(n+1),\,c+nr\bigr)\Bigr]^{m_h} \\ &\quad\times \prod_{h\in H_{n+1}^{(0)}}\bigl[\mathrm{digamma}(h(n+1);\,r,\,c+nr)\bigr]^{m_h}. \end{aligned} \tag{4.18} \]
In the first product term on the right-hand side of equation (4.18), note that, for every h ∈ H_{n+1}^{+},
\[ \mathrm{beta\text{-}NB}\bigl(h(n+1);\,r,\,S(h)-h(n+1),\,c+nr\bigr)\;B\bigl(S(h)-h(n+1),\,c+nr\bigr)\prod_{j=1}^{n}\frac{(r)_{h(j)}}{h(j)!} = B\bigl(S(h),\,c+(n+1)r\bigr)\prod_{j=1}^{n+1}\frac{(r)_{h(j)}}{h(j)!}. \]
In the second product term, note that
\[ \begin{aligned} \prod_{h\in H_{n+1}^{(0)}}\bigl[\mathrm{digamma}(h(n+1);\,r,\,c+nr)\bigr]^{m_h} &= \prod_{h\in H_{n+1}^{(0)}}\Bigl[\lambda_{r,c+nr}^{-1}\,\frac{(r)_{h(n+1)}}{h(n+1)!}\,B\bigl(h(n+1),\,c+(n+1)r\bigr)\Bigr]^{m_h} \\ &= \lambda_{r,c+nr}^{-(\sum_{h\in H_{n+1}^{(0)}} m_h)}\prod_{h\in H_{n+1}^{(0)}}\Bigl[B\bigl(h(n+1),\,c+(n+1)r\bigr)\prod_{j=1}^{n+1}\frac{(r)_{h(j)}}{h(j)!}\Bigr]^{m_h}, \end{aligned} \]
where for the last equality, we have used the fact that h(j) = 0 for every j ≤ n and h ∈ H_{n+1}^{(0)}. Note that \sum_{ℏ\in H_n} m_ℏ + \sum_{h\in H_{n+1}^{(0)}} m_h = \sum_{h\in H_{n+1}} m_h. Then equation (4.18) is equal to
\[ \frac{(cT)^{\sum_{h\in H_{n+1}} m_h}}{\prod_{h\in H_{n+1}}(m_h!)}\exp\Bigl(-cT\sum_{j=1}^{n+1}[\psi(c+jr)-\psi(c+(j-1)r)]\Bigr) \times \prod_{h\in H_{n+1}}\Bigl[B\bigl(S(h),\,c+(n+1)r\bigr)\prod_{j=1}^{n+1}\frac{(r)_{h(j)}}{h(j)!}\Bigr]^{m_h}. \tag{4.19} \]
Noting that \sum_{j=1}^{n+1}[\psi(c+jr)-\psi(c+(j-1)r)] = \psi(c+(n+1)r)-\psi(c), we obtain the expression in equation (1.10) for n + 1, as desired.
By construction, equation (1.10) defines the finite-dimensional marginal distributions of the stochastic process (M_h)_{h∈H_∞} with index set H_∞ := \bigcup_{n∈N} H_n. The exchangeability result given by Theorem 1.3 then follows from the exchangeability of the sequence X_{[n]}.
5. Applications in Bayesian nonparametrics
In Bayesian latent feature models, we assume that there exists a latent set of features
and that each data point possesses some (finite) subset of the features. The features then
determine the distribution of the observed data. In a nonparametric setting, exchangeable
sequences of simple point processes can serve as models for the latent sets of features.
Similarly, exchangeable sequences of point processes, like those that can be constructed
from beta negative binomial processes, can serve as models of latent multisets of features.
In particular, atoms are features and their (integer-valued) masses indicate multiplicity.
In this section, we develop posterior inference procedures for exchangeable sequences of
beta negative binomial processes.
5.1. Representations as random arrays/matrices
A convenient way to represent the combinatorial structure of an exchangeable sequence
of point processes is via an array/matrix W of non-negative integers, where the rows
correspond to point processes and columns correspond to atoms appearing among the
point processes. Informally, given an enumeration of the set of all atoms appearing in
X[n] , the entry Wi,j associated with the ith row and jth column is the multiplicity/mass
of the atom labeled j in the ith point process Xi .
More carefully, fix n ∈ N and let (Mh )h∈Hn be the combinatorial structure of a sequence
X1 , . . . , Xn of conditionally i.i.d. negative binomial processes, given a shared beta process
base measure with concentration parameter c > 0 and non-atomic base measure \tilde B_0 of finite mass T. Let κ := \sum_{h∈H_n} M_h be the number of unique atoms among X_{[n]}. Then W
is an n × κ array of non-negative integers such that, for every h ∈ Hn , there are exactly
Mh columns of W equal to h, where h is thought of as a length-n column vector. Note
that W will have no columns when κ = 0.
All that remains is to order the columns of W . Every total order on Hn induces a
unique ordering of the columns of W . Titsias [26] defined a unique ordering in this way,
analogous to the left-ordered form defined by Griffiths and Ghahramani [10] for the IBP.
In particular, for h, h′ ∈ H_n, let ⪯ denote the lexicographic order given by: h ⪯ h′ if and only if h = h′ or h(η) < h′(η), where η is the first coordinate where h and h′ differ. We say W is left-ordered when its columns are ordered according to ⪯. Because there is a
bijection between combinatorial structures (Mh )h∈Hn and their unique representations
by left-ordered arrays, the probability mass function of W is given by equation (1.10).
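As an illustration (ours, not from the paper), the left-ordered representative of a count array can be computed by sorting its columns by the lexicographic order just defined; the direction of the sort is a labeling convention, and here we sort in decreasing order to mirror the Griffiths–Ghahramani convention for binary matrices.

```python
import numpy as np

def left_ordered(W):
    """Column-sort the n x kappa count array W by the lexicographic order on
    histories defined above (decreasing; the direction is our choice)."""
    W = np.asarray(W)
    if W.shape[1] == 0:
        return W.copy()
    order = sorted(range(W.shape[1]), key=lambda j: tuple(W[:, j]), reverse=True)
    return W[:, order]
```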
Other orderings have been introduced in the literature: If we permute the columns of
W uniformly at random, then W is the analogue of the uniform random labeling scheme
described by Broderick, Pitman and Jordan [4] for the IBP. Note that the number of
distinct ways of ordering the κ columns is given by the multinomial coefficient
\[ \frac{\kappa!}{\prod_{h\in H_n} M_h!}, \tag{5.1} \]
where the denominator arises from the fact that there are Mh indistinguishable columns
for every history h ∈ Hn . The following result is then immediate:
Theorem 5.1. Let W be a uniform random labeling of (M_h)_{h∈H_n} described above, let w ∈ Z_+^{n×k} be an array of non-negative integers with n rows and k ≥ 0 non-zero columns, and for every j ≤ k, let s_j := \sum_{i=1}^{n} w_{i,j} be the sum of column j. Then
\[ \mathbb{P}\{W = w\} = \frac{(cT)^k}{k!}\,\exp\bigl(-cT[\psi(c+nr)-\psi(c)]\bigr)\prod_{j=1}^{k}\Biggl[\frac{\Gamma(s_j)\,\Gamma(c+nr)}{\Gamma(s_j+c+nr)}\prod_{i=1}^{n}\frac{(r)_{w_{i,j}}}{w_{i,j}!}\Biggr]. \tag{5.2} \]
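For numerical work one typically evaluates equation (5.2) on the log scale. The following sketch (ours) computes log P{W = w} with SciPy's log-gamma and digamma functions; the rising factorial is written as (r)_z = Γ(r + z)/Γ(r).

```python
import numpy as np
from scipy.special import gammaln, digamma as psi

def log_pmf_W(w, r, c, T):
    """log P{W = w} from equation (5.2) for an n x k array w of non-negative
    integers with no all-zero columns."""
    w = np.asarray(w, dtype=float)
    n, k = w.shape
    s = w.sum(axis=0)                              # column sums s_j
    out = k * np.log(c * T) - gammaln(k + 1)       # (cT)^k / k!
    out -= c * T * (psi(c + n * r) - psi(c))       # exponential factor
    out += np.sum(gammaln(s) + gammaln(c + n * r) - gammaln(s + c + n * r))
    # log (r)_{w_ij} = gammaln(r + w_ij) - gammaln(r); log w_ij! = gammaln(w_ij + 1)
    out += np.sum(gammaln(r + w) - gammaln(r) - gammaln(w + 1))
    return out
```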
An array representation makes it easy to visualize some properties of the model. For
example, in Figure 1 we display several simulations from the NB-IBP with varying values
of the parameters T, c, and r. The columns are displayed in the order of first appearance,
and are otherwise ordered uniformly at random. (A similar ordering was used by Griffiths
and Ghahramani [10] to introduce the IBP.) The relationship of the model to the values of
T and c is similar to the characteristics described by Ghahramani, Griffiths and Sollich
[7] for the IBP, with the parameter r providing flexibility with respect to the counts in
the array. In particular, the total number of features, κ, is Poisson distributed with mean
cT [ψ(c + nr) − ψ(c)], which increases with T , c, and r. From the NB-IBP, we know that
the expected number of features for the first (and therefore, by exchangeability, every)
row is T . Because of the ordering we have chosen here, the rows are not exchangeable,
despite the sequence X[n] being exchangeable. (In contrast, a uniform random labeling
W is row exchangeable and, conditioned on κ, column exchangeable.) Finally, note that
the mean of the digamma(r, c) distribution exists for c > 1 and is given by
\[ \frac{r}{(c-1)\,(\psi(r+c)-\psi(c))}, \tag{5.3} \]
which increases with r and decreases with c. This is the expected multiplicity of each
feature for the first row, which again, by exchangeability, must hold for every row. We
may therefore summarize the effects of changing each of these parameters (as we hold
the others constant) as follows:
• Increasing the mass parameter T increases both the expected total number of features and the expected number of features per row, while leaving the expected multiplicities of the features unchanged.
• Increasing the concentration parameter c increases the expected total number of
features and decreases the expected multiplicities of the features, while leaving the
expected number of features per row unchanged.
• Increasing the parameter r increases both the expected total number of features
and the expected multiplicities of the features, while leaving the expected number
of features per row unchanged.
These effects can be seen in the first, second, and third rows of Figure 1, respectively. We
note that r has a weak effect on the expected total number of features (seen in the third
row of Figure 1), and c has a weak effect on the expected multiplicities of the features
(seen in the second row of Figure 1). The model may therefore be effectively tuned with T
and c determining the size and density of the array, and r determining the multiplicities.
The most appropriate model depends on the application at hand, and in Section 5.3 we
discuss how these parameters may be inferred from data.
Figure 1. Simulated Z+ -valued arrays from the NB-IBP. Dots are positive entries, the magnitudes of which determine the size of the dot. The total mass parameter T is varied along the
top row; the concentration parameter c is varied along the middle row; the negative binomial
parameter r is varied along the bottom row. See the text for a summary of how these parameters
affect the expected number of features in total, features per row, and feature multiplicities.
5.2. Examples
Latent feature models with associated multiplicities and unbounded numbers of features
have found several applications in Bayesian nonparametric statistics, and we now provide
some examples. In these applications, the features represent latent objects or factors
underlying a dataset comprised of n groups of measurements y1 , . . . , yn , where each group
yi is comprised of Di measurements yi = (yi,1 , . . . , yi,Di ). In particular, Wi,j denotes the
number of instances of object/factor j in group i.
These nonparametric latent feature representations lend themselves naturally to mixture models with an unbounded number of components. For example, consider a variant
of the models by Sudderth et al. [22] and Titsias [26] for a dataset of n street camera
images where the latent features are interpreted as object classes that may appear in
the images, such as “building”, “car”, “road”, etc. The count Wi,j models the relative
number of times object class j appears in image i. For every i ≤ n, image y_i consists of D_i local patches y_{i,1}, . . . , y_{i,D_i} detected in the image, which are (collections of) continuous variables representing, for example, color, hue, location in the image, etc. Let κ be the number of columns of W, that is, the number of features. The local patches in image i are modeled as conditionally i.i.d. draws from a mixture of S_i = \sum_{j=1}^{\kappa} W_{i,j} Gaussian distributions, where W_{i,j} of these components are associated with feature j. For k = 1, 2, . . . ,
let Θ_i^{(j,k)} := (m_i^{(j,k)}, Σ_i^{(j,k)}) denote the mean and covariance of the Gaussian components associated with feature j for image i. Let z_{i,d} = (j, k) when y_{i,d} is assigned to component k ≤ W_{i,j} associated with feature j ≤ κ. Conditioned on Θ := (Θ_i^{(j,k)})_{i≤n, j≤κ, k≤W_{i,j}} and the assignments z := (z_{i,d})_{i≤n, d≤D_i}, the distribution of the measurements admits a conditional density
\[ p(y \mid W, \Theta, z) = \prod_{i=1}^{n}\prod_{d=1}^{D_i}\mathcal{N}\bigl(y_{i,d};\,m_i^{z_{i,d}},\,\Sigma_i^{z_{i,d}}\bigr). \tag{5.4} \]
To share statistical strength across images, the parameters Θ_i^{(j,k)} are given a hierarchical Bayesian prior:
\[ \Theta_i^{(j,k)} \mid \Theta^{(j)} \overset{\text{i.i.d.}}{\sim} \nu(\Theta^{(j)}) \quad \text{for every } i \text{ and } k, \tag{5.5} \]
\[ \Theta^{(j)} \overset{\text{i.i.d.}}{\sim} \nu_0 \quad \text{for every } j. \tag{5.6} \]
A typical choice for ν(·) is the family of Gaussian–inverse-Wishart distributions with
feature-specific parameters Θ(j) drawn i.i.d. from a distribution ν0 . Finally, for every image i ≤ n, conditioned on W , the assignment variables zi,1 , . . . , zi,Di for the local patches
in image i are assumed to form a multivariate Pólya urn scheme, arising from repeated
draws from a Dirichlet-distributed probability vector over {(j, k) : j ≤ κ, k ≤ Wi,j }. The
parameters for the Dirichlet distributions are tied in a similar fashion to Θ. The interpretation here is that local patch d in image i is assigned to one of the Si instances of
the latent objects appearing in the image. The number of object instances to which a
patch may be assigned is specific to the image, but components across all images that
correspond to the same feature will be similar.
Latent feature representations are also a natural choice for factor analysis models.
Canny [5] and Zhou et al. [28] proposed models for text documents in terms of latent
features representing topics. More carefully, let yi,v be the number of occurrences of word
v in document i. Conditioned on W and a collection of non-negative topic-word weights
Θ := (θj,v )j≤κ,v≤V , the word counts are assumed to be conditionally i.i.d. and
\[ y_{i,v} \mid W, \Theta \sim \mathrm{Poisson}\Bigl(\sum_{j=1}^{\kappa} W_{i,j}\,\theta_{j,v}\Bigr). \tag{5.7} \]
In other words, the expected number of occurrences of word v in document i is a linear
sum of a small number of weighted factors. The features here are interpreted as topics:
words v such that θj,v is large are likely to appear many times. There are a total of
κ topics that are shared across the documents. The topic-word weights Θ are typically
chosen to be i.i.d. Gamma random variates, although there may be reason to prefer
priors with dependency enforcing further sparsity. This general setup has been applied
to other types of data including, for example, recommendations [8], where yi,v represents
the rating a Netflix user i assigns to a film v.
5.3. Conditional distributions
Let W be a uniform random labeling of an NB-IBP as described in Section 5.1. In the
applications described above, computing the posterior distribution of W is the first step
towards most other inferential goals. Existing inference schemes use stick-breaking representations, that is, they represent (a truncation of) the beta process underlying W . This
approach has some advantages, including that the entries of W are then conditionally independent negative binomial random variables. On the other hand, the random variables
representing the truncated beta process, as well as the truncation level itself, must be
marginalized away using auxiliary variable methods or other techniques [3, 19, 23, 28].
Here, we take advantage of the structure of the NB-IBP and do not represent the beta
process. The result is a set of Markov (proposal) kernels analogous to those originally
derived for the IBP [7, 10].
The models described in Section 5.2 associate every feature with a latent parameter.
Therefore, conditioned on the number of columns κ, let Θ = (θ1 , . . . , θκ ) be an i.i.d. sequence drawn from some non-atomic distribution νΘ , and assume that the data y admits
a conditional density p(y|W, Θ). We will associate the jth column of W with Θj , and so
the pair (W, Θ) can be seen as an alternative representation for an exchangeable sequence
X[n] of beta negative binomial processes. By Bayes' rule, the posterior distribution admits a conditional density
p(W, Θ|y) ∝ p(y|W, Θ) × p(W, Θ),
(5.8)
where p(W, Θ) is a density for the joint distribution of (W, Θ). We describe two Markov
kernels that leave this distribution invariant. Combined, these kernels give a Markov
chain Monte Carlo (MCMC) inference procedure for the desired posterior.
The first kernel resamples individual elements Wi,j , conditioned on the remaining
elements of the array (collectively denoted by W−(i,j) ), the data y, and the parameters
Θ. By Bayes’ rule, and the independence of Θ and W given κ, we have
P{Wi,j = z|y, W−(i,j) , Θ}
∝ p(y|{Wi,j = z}, W−(i,j) , Θ) × P{Wi,j = z|W−(i,j) }.
(5.9)
Recall that the array W is row-exchangeable, and so, in the language of the NB-IBP, we may associate the ith row with the final customer at the buffet. The count W_{i,j} is the number of servings the customer takes of dish j, which has been served S_j^{(-i)} := \sum_{i'\ne i} W_{i',j} times previously. When S_j^{(-i)} > 0, we have
\[ W_{i,j} \mid W_{-(i,j)} \sim \mathrm{beta\text{-}NB}\bigl(r,\ S_j^{(-i)},\ c+(n-1)r\bigr). \tag{5.10} \]
Therefore, we can simulate from the unnormalized, unbounded discrete distribution in
equation (5.9) using equation (5.10) as a Metropolis–Hastings proposal, or we could use
inverse transform sampling where the normalization constant is approximated by an
importance sampling estimate.
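A minimal sketch (ours) of the Metropolis–Hastings variant for a single non-singleton entry: propose from the conditional prior (5.10), so that the prior terms cancel and the acceptance ratio reduces to a likelihood ratio. The callable `loglik(W, Theta, y)` is an assumed, model-specific log-likelihood; all names are ours.

```python
import numpy as np

def sample_beta_nb(r, a, b, rng):
    # beta-NB(r, a, b): p ~ Beta(a, b), then NB(r, p) via the gamma-Poisson mixture.
    p = rng.beta(a, b)
    return rng.poisson(rng.gamma(shape=r, scale=p / (1.0 - p)))

def mh_update_entry(W, i, j, r, c, n, Theta, y, loglik, rng):
    """One MH step for W[i, j] with proposal (5.10); requires S_j^{(-i)} > 0."""
    s_minus = W[:, j].sum() - W[i, j]
    assert s_minus > 0, "use the singleton-column move for new dishes"
    z_new = sample_beta_nb(r, s_minus, c + (n - 1) * r, rng)
    W_new = W.copy()
    W_new[i, j] = z_new
    log_alpha = loglik(W_new, Theta, y) - loglik(W, Theta, y)   # prior/proposal cancel
    return W_new if np.log(rng.uniform()) < log_alpha else W
```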
Following Meeds et al. [17], the second kernel resamples the number, positions, and values of those singleton columns j′ such that S_{j′}^{(-i)} = 0. Simultaneously, we propose a
corresponding change to the sequence of latent parameters Θ, preserving the relative
ordering with the columns of W . This corresponding change to Θ cancels out the effect
of the κ! term appearing in the p.m.f. of the array W . Let Ji be the number of singleton
columns, that is, let
\[ J_i = \#\{j \le \kappa : W_{i,j} > 0 \text{ and } S_j^{(-i)} = 0\}, \tag{5.11} \]
which we note may be equal to zero. Because we are treating the customer associated
with row i as the final customer at the buffet, Ji may be interpreted as the number of
new dishes sampled by the final customer, in which case, we know that
Ji ∼ Poisson(cT [ψ(c + nr) − ψ(c + (n − 1)r)]).
(5.12)
We therefore propose a new array W ∗ by removing the Ji singleton columns from the
array and inserting Ji∗ new singleton columns at positions drawn uniformly at random, where
Ji∗ is sampled from the (marginal) distribution of Ji given in equation (5.12). Like those
columns that were removed, each new column has exactly one non-zero entry in the ith
row: We draw each non-zero entry independently and identically from a digamma(r, c +
(n − 1)r) distribution, which matches the distribution of the number of servings the last
customer takes of each newly sampled dish.
Finally, we form a new sequence of latent parameters Θ* by removing those entries from Θ associated with the J_i columns that were removed from W and inserting J_i* new entries, drawn i.i.d. from ν_Θ, at the same locations corresponding to the J_i* newly introduced columns. Let κ* := κ − J_i + J_i*, and note that there were \binom{\kappa^*}{J_i^*} possible ways to insert the new columns. Therefore, the proposal density is
\[ q(W^*, \Theta^* \mid W, \Theta) = \binom{\kappa^*}{J_i^*}^{-1}\mathrm{Poisson}\bigl(J_i^*;\,cT[\psi(c+nr)-\psi(c+(n-1)r)]\bigr) \times \prod_{j\le\kappa^*}^{*}\mathrm{digamma}\bigl(W^*_{i,j};\,r,\,c+(n-1)r\bigr)\prod_{\theta\in\Theta^*\setminus\Theta}\nu_\Theta(\theta), \tag{5.13} \]
where the starred product runs over the J_i* newly inserted columns.
With manipulations similar to those in the proof of Theorem 1.2, it is straightforward
to show that a Metropolis–Hastings kernel accepts a proposal (W ∗ , Θ∗ ) with probability
min{1, α∗ }, where
\[ \alpha^* = \frac{p(y \mid W^*, \Theta^*)}{p(y \mid W, \Theta)}. \tag{5.14} \]
Combined with appropriate Metropolis–Hastings moves that shuffle the columns of W
and resample the latent parameters Θ, we obtain a Markov chain whose stationary distribution is the conditional distribution of W and Θ given the data y.
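For completeness, a sketch (ours) of the singleton-column move. It assumes the `sample_digamma` helper from the Section 3 sketch, a model-specific callable `loglik(W, Theta, y)`, and a callable `nu_theta_rvs(rng)` drawing a fresh feature parameter from ν_Θ; new columns are inserted at uniformly random positions as in the text, and the move is accepted with probability min{1, α*} from (5.14).

```python
import numpy as np
from scipy.special import digamma as psi

def singleton_move(W, Theta, i, r, c, T, n, y, loglik, nu_theta_rvs, sample_digamma, rng):
    """One MH move resampling row i's singleton columns (all helper callables assumed)."""
    col_sums_minus = W.sum(axis=0) - W[i, :]
    keep = ~((W[i, :] > 0) & (col_sums_minus == 0))      # drop row-i singleton columns
    W_keep = W[:, keep]
    Theta_keep = [Theta[j] for j in np.flatnonzero(keep)]

    lam = c * T * (psi(c + n * r) - psi(c + (n - 1) * r))
    J_new = rng.poisson(lam)
    kappa_star = W_keep.shape[1] + J_new
    positions = rng.choice(kappa_star, size=J_new, replace=False) if J_new else np.array([], dtype=int)

    W_star = np.zeros((W.shape[0], kappa_star), dtype=W.dtype)
    Theta_star = [None] * kappa_star
    old_slots = np.setdiff1d(np.arange(kappa_star), positions)
    W_star[:, old_slots] = W_keep                        # kept columns keep their relative order
    for slot, j in zip(old_slots, range(W_keep.shape[1])):
        Theta_star[slot] = Theta_keep[j]
    for slot in positions:                               # new singleton columns for row i
        W_star[i, slot] = sample_digamma(r, c + (n - 1) * r, rng)
        Theta_star[slot] = nu_theta_rvs(rng)

    log_alpha = loglik(W_star, Theta_star, y) - loglik(W, Theta, y)
    return (W_star, Theta_star) if np.log(rng.uniform()) < log_alpha else (W, Theta)
```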
Another benefit of the characterization of the distribution of W in (5.1) is that numerically integrating over the real-valued concentration, mass, and negative binomial parameters c, T, and r, respectively, is straightforward with techniques such as slice sampling [18]. In the particular case when T is given a gamma prior distribution, say T ∼ gamma(α, β) for some positive parameters α and β, the conditional distribution again falls into the class of gamma distributions. In particular, the conditional density is
\[ p(T \mid W, \kappa) \propto T^{\alpha+\kappa-1}\exp\bigl(-cT[\psi(c+nr)-\psi(c)] - \beta T\bigr) \tag{5.15} \]
\[ \propto \mathrm{gamma}\bigl(T;\ \alpha+\kappa,\ \beta + c[\psi(c+nr)-\psi(c)]\bigr). \tag{5.16} \]
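The conjugate update in (5.16) is a one-liner; a sketch (ours) with NumPy, using the rate parameterization of the gamma distribution.

```python
import numpy as np
from scipy.special import digamma as psi

def resample_T(kappa, n, r, c, alpha, beta, rng=np.random.default_rng()):
    """Draw T from its conditional (5.16): gamma with shape alpha + kappa and
    rate beta + c * (psi(c + n*r) - psi(c)).  NumPy's gamma takes a scale parameter."""
    rate = beta + c * (psi(c + n * r) - psi(c))
    return rng.gamma(shape=alpha + kappa, scale=1.0 / rate)
```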
Acknowledgements
We thank Mingyuan Zhou for helpful feedback and for pointing out the relation of our
work to that of Sibuya [21]. We also thank Yarin Gal and anonymous reviewers for
feedback on drafts. This research was carried out while C. Heaukulani was supported by
the Stephen Thomas studentship at Queens’ College, Cambridge, with funding also from
the Cambridge Trusts, and while D.M. Roy was a research fellow of Emmanuel College,
Cambridge, with funding also from a Newton International Fellowship through the Royal
Society.
References
[1] Barndorff-Nielsen, O. and Yeo, G.F. (1969). Negative binomial processes. J. Appl.
Probab. 6 633–647. MR0260001
[2] Broderick, T., Jordan, M.I. and Pitman, J. (2013). Cluster and feature modeling from
combinatorial stochastic processes. Statist. Sci. 28 289–312. MR3135534
[3] Broderick, T., Mackey, L., Paisley, J. and Jordan, M.I. (2014). Combinatorial clustering and the beta-negative binomial process. IEEE Trans. Pattern Anal. Mach. Intell.
37 290–306. Special issue on Bayesian nonparametrics.
[4] Broderick, T., Pitman, J. and Jordan, M.I. (2013). Feature allocations, probability
functions, and paintboxes. Bayesian Anal. 8 801–836. MR3150470
[5] Canny, J. (2004). Gap: A factor model for discrete data. In Proceedings of the 27th Annual
International ACM SIGIR Conference on Research and Development in Information
Retrieval, Sheffield, United Kingdom.
[6] Devroye, L. (1992). Random variate generation for the digamma and trigamma distributions. J. Stat. Comput. Simul. 43 197–216. MR1389440
[7] Ghahramani, Z., Griffiths, T.L. and Sollich, P. (2007). Bayesian nonparametric latent
feature models. In Bayesian Statistics 8. Oxford Sci. Publ. 201–226. Oxford: Oxford
Univ. Press. MR2433194
[8] Gopalan, P., Ruiz, F.J.R., Ranganath, R. and Blei, D.M. (2014). Bayesian nonparametric Poisson factorization for recommendation systems. In Proceedings of the 17th
International Conference on Artificial Intelligence and Statistics, Reykjavik, Iceland.
[9] Grégoire, G. (1984). Negative binomial distributions for point processes. Stochastic Process. Appl. 16 179–188. MR0724064
[10] Griffiths, T.L. and Ghahramani, Z. (2006). Infinite latent feature models and the Indian
buffet process. In Advances in Neural Information Processing Systems 19, Vancouver,
Canada.
[11] Hjort, N.L. (1990). Nonparametric Bayes estimators based on beta processes in models
for life history data. Ann. Statist. 18 1259–1294. MR1062708
[12] Kallenberg, O. (2002). Foundations of Modern Probability, 2nd ed. Probability and Its
Applications (New York). New York: Springer. MR1876169
[13] Kim, Y. (1999). Nonparametric Bayesian estimators for counting processes. Ann. Statist.
27 562–588. MR1714717
[14] Kingman, J.F.C. (1967). Completely random measures. Pacific J. Math. 21 59–78.
MR0210185
[15] Kozubowski, T.J. and Podgórski, K. (2009). Distributional properties of the negative
binomial Lévy process. Probab. Math. Statist. 29 43–71. MR2553000
[16] Lo, A.Y. (1982). Bayesian nonparametric statistical inference for Poisson point processes.
Z. Wahrsch. Verw. Gebiete 59 55–66. MR0643788
[17] Meeds, E., Ghahramani, Z., Neal, R.M. and Roweis, S.T. (2007). Modeling dyadic
data with binary latent factors. In Advances in Neural Information Processing Systems
20, Vancouver, Canada.
[18] Neal, R.M. (2003). Slice sampling. Ann. Statist. 31 705–767. With discussions and a
rejoinder by the author. MR1994729
[19] Paisley, J., Zaas, A., Woods, C.W., Ginsburg, G.S. and Carin, L. (2010). A stickbreaking construction of the beta process. In Proceedings of the 27th International
Conference on Machine Learning, Haifa, Israel.
[20] Roy, D.M. (2014). The continuum-of-urns scheme, generalized beta and Indian buffet
processes, and hierarchies thereof. Preprint. Available at arXiv:1501.00208.
[21] Sibuya, M. (1979). Generalized hypergeometric, digamma and trigamma distributions.
Ann. Inst. Statist. Math. 31 373–390. MR0574816
[22] Sudderth, E.B., Torralba, A., Freeman, W.T. and Willsky, A.S. (2005). Describing
visual scenes using transformed Dirichlet processes. In Advances in Neural Information
Processing Systems 18, Vancouver, Canada.
[23] Teh, Y.W., Görür, D. and Ghahramani, Z. (2007). Stick-breaking construction for the
Indian buffet process. In Proceedings of the 11th International Conference on Artificial
Intelligence and Statistics, San Juan, Puerto Rico.
[24] Thibaux, R. and Jordan, M.I. (2007). Hierarchical beta processes and the Indian buffet
process. In Proceedings of the 11th International Conference on Artificial Intelligence
and Statistics, San Juan, Puerto Rico.
[25] Thibaux, R.J. (2008). Nonparametric Bayesian models for machine learning. Ph.D. thesis,
EECS Department, Univ. California, Berkeley. MR2713095
[26] Titsias, M. (2007). The infinite gamma-Poisson feature model. In Advances in Neural
Information Processing Systems 20.
[27] Wolpert, R.L. and Ickstadt, K. (1998). Poisson/gamma random field models for spatial
statistics. Biometrika 85 251–267. MR1649114
[28] Zhou, M., Hannah, L., Dunson, D. and Carin, L. (2012). Beta-negative binomial process
and Poisson factor analysis. In Proceedings of the 29th International Conference on
Machine Learning, Edinburgh, United Kingdom.
[29] Zhou, M., Madrid, O. and Scott, J.G. (2014). Priors for random count matrices derived
from a family of negative binomial processes. Preprint. Available at arXiv:1404.3331v2.
Received June 2014 and revised March 2015
arXiv:1509.02612v4 [] 11 Mar 2016
ROOTS OF UNITY IN ORDERS
H. W. LENSTRA, JR. AND A. SILVERBERG
Communicated by John Cremona
Abstract. We give deterministic polynomial-time algorithms that, given an order, compute the
primitive idempotents and determine a set of generators for the group of roots of unity in the
order. Also, we show that the discrete logarithm problem in the group of roots of unity can be
solved in polynomial time. As an auxiliary result, we solve the discrete logarithm problem for
certain unit groups in finite rings. Our techniques, which are taken from commutative algebra,
may have further potential in the context of cryptology and computer algebra.
1. Introduction
An order is a commutative ring whose additive group is isomorphic to Zn for some non-negative
integer n. The present paper contains algorithms for computing the idempotents and the roots of
unity of a given order.
In algorithms, we specify an order A by listing a system of “structure constants” aijk ∈ Z with
i, j, k ∈ {1, 2, . . . , n}; these determine the multiplication in A in the sense that for some Z-basis e_1, e_2, . . . , e_n of the additive group of A, one has e_i e_j = \sum_{k=1}^{n} a_{ijk} e_k for all i, j. The elements of A
are then represented by their coordinates with respect to that basis.
An idempotent of a commutative ring R is an element e ∈ R with e2 = e, and we denote by id(R)
the set of idempotents. An idempotent e ∈ id(R) is called primitive if e 6= 0 and for all e′ ∈ id(R)
one has ee′ ∈ {0, e}; let prid(R) denote the set of primitive idempotents of R.
Orders A have only finitely many idempotents, but they may have more than can be listed
by a polynomial-time algorithm; however, if one knows prid(A), then one implicitly knows id(A),
since there is a bijection from the set of subsets of prid(A) to id(A) that sends W ⊂ prid(A) to e_W = \sum_{e\in W} e ∈ id(A). For prid(A) we have the following result.
Theorem 1.1. There is a deterministic polynomial-time algorithm (Algorithm 6.1) that, given an
order A, lists all primitive idempotents of A.
A root of unity in a commutative ring R is an element of finite order of the group R∗ of invertible
elements of R; we write µ(R) for the set of roots of unity in R, which is a subgroup of R∗ .
As with idempotents, orders A have only finitely many roots of unity, but possibly more than can
be listed by a polynomial-time algorithm, and to control µ(A) we shall use generators and relations.
If S is a finite system of generators for an abelian group G, then by a set of defining relations for
S we mean a system of generators for the kernel of the surjective group homomorphism Z^S → G, (m_s)_{s∈S} \mapsto \prod_{s∈S} s^{m_s}.
2010 Mathematics Subject Classification. 16H15 (primary), 11R54, 13A99 (secondary).
Key words and phrases. orders; algorithms; roots of unity; idempotents.
This material is based on research sponsored by DARPA under agreement number FA8750-13-2-0054 and by the
Alfred P. Sloan Foundation. The U.S. Government is authorized to reproduce and distribute reprints for Governmental
purposes notwithstanding any copyright notation thereon. The views and conclusions contained herein are those of
the authors and should not be interpreted as necessarily representing the official policies or endorsements, either
expressed or implied, of DARPA or the U.S. Government.
Theorem 1.2. There is a deterministic polynomial-time algorithm (Algorithm 13.2) that, given an
order A, produces a set S of generators of µ(A), as well as a set of defining relations for S.
Theorem 1.2, which provides a key ingredient in an algorithm for lattices with symmetry that
was recently developed by the authors [6, 7], is our main result, and its proof occupies most of the
paper. It makes use of several techniques from commutative algebra that so far have found little
employment in an algorithmic context. A sketch appeared in Proposition 4.7 of [6].
We shall also obtain a solution to the discrete logarithm problem in µ(A) and all its subgroups,
and more generally in all subgroups of the group µ(A ⊗Z Q), which is still finite. Note that A ⊗Z Q
is a ring containing A as a subring, and that a Z-basis for A is a Q-basis for the additive group of
A ⊗Z Q. If one replaces µ(A) by µ(A ⊗Z Q) in Theorem 1.2, then it remains true, and in fact it
becomes much easier to prove (Proposition 3.5). Our solution to the discrete logarithm problem in
µ(A ⊗Z Q) and all of its subgroups, in particular in µ(A), reads as follows.
Theorem 1.3. There is a deterministic polynomial-time algorithm that, given an order A, a finite
system T of elements of µ(A ⊗_Z Q), and an element ζ ∈ A ⊗_Z Q, decides whether ζ belongs to the subgroup ⟨T⟩ ⊂ µ(A ⊗_Z Q) generated by T, and if so finds (m_t)_{t∈T} ∈ Z^T with ζ = \prod_{t∈T} t^{m_t}.
We shall prove Theorem 1.3 in section 7, as a consequence of the results on µ(A ⊗Z Q) in section 3
and a number of formal properties of “efficient presentations” of abelian groups that are developed
in section 7.
A far-reaching generalization of Theorem 1.3, in which µ(A ⊗Z Q) is replaced by the full unit
group (A ⊗Z Q)∗ , is proven in [8].
Of the many auxiliary results that we shall use, there are two that have independent interest.
The first concerns the discrete logarithm problem in certain unit groups of finite rings, and it reads
as follows.
Theorem 1.4. There is a deterministic polynomial-time algorithm that, given a finite commutative
ring R and a nilpotent ideal I ⊂ R, produces a set S of generators of the subgroup 1 + I ⊂ R∗ , as
well as a set of defining relations for S. Also, there is a deterministic polynomial-time algorithm
that, given R and I as before, as well as a finite system T of elements of 1 + I and an element
ζ ∈ R, decides whether ζ belongs to the subgroup ⟨T⟩ ⊂ 1 + I, and if so finds (m_t)_{t∈T} ∈ Z^T with ζ = \prod_{t∈T} t^{m_t}.
The proof of this theorem is given in section 11. It depends on the resemblance of 1 + I to the
additive group I, in which the discrete logarithm problem is easy.
The second result that we single out for special mention is of a purely theoretical nature. Let R
be a commutative ring. For the purposes of this paper, commutative rings have an identity element
1 (which is 0 if and only if the ring is the 0 ring). We call R connected if #id(R) = 2 or, equivalently,
if id(R) = {0, 1} and R 6= {0}. A polynomial f ∈ R[X] is called separable (over R) if f and its
formal derivative f ′ generate the unit ideal in R[X]. For example, f = X 2 − X is separable because
(f ′ )2 − 4f = 1.
Theorem 1.5. Let R be a connected commutative ring, and let f ∈ R[X] be separable. Then f 6= 0
and #{r ∈ R : f (r) = 0} ≤ deg(f ).
For the elementary proof, see section 8.
While, technically, one must admit that Theorem 1.5 plays only a modest role in the paper, it
does convey an important message, namely that zeroes of polynomials that are separable are easier
to control than zeroes of other polynomials. Thus, X 2 − X is separable over any R, while X m − 1
(for m ∈ Z>0 ) is separable if and only if m · 1 ∈ R∗ , a condition that for a non-zero order and m > 1
is never satisfied; accordingly, Theorem 1.1 is much easier to prove than Theorem 1.2.
We next provide an overview of the algorithms that underlie Theorems 1.1 and 1.2. In both cases,
one starts by reducing the problem, in a fairly routine manner, to the special case in which each
element of A is a zero of some separable polynomial in Q[X]; for the rest of the introduction we
assume that the latter condition is satisfied. Then the Q-algebra E = A ⊗Z Q can be written as the
product of finitely many algebraic number fields E/m, with m ranging over the finite set Spec(E)
of prime ideals of E; hence prid(E) is in bijection with Spec(E). The image of A ⊂ E under the
map E → E/m may be identified with the ring A/(m ∩ A), so that A becomes a subring of the product ring B = \prod_{m∈Spec(E)} A/(m ∩ A); this is also an order, and it is “close” to A in the sense
that the abelian group B/A is finite. The ring B has many idempotents, in the sense that id(B)
equals all of id(E), and #prid(B) = #Spec(E). To determine which subsets W ⊂ prid(B) give rise
to idempotents that lie in A, we define a certain graph Γ(A) with vertex set Spec(E) such that the
connected components of Γ(A) correspond exactly to the primitive idempotents of A. This leads to
Theorem 1.1.
To prove Theorem 1.2, one likewise starts from B, generators for µ(B) being easily found by
standard algorithms from algebraic number theory. However, there is no standard way of computing
µ(A) = µ(B) ∩ A, which is the intersection of a multiplicative group and an additive group, and
we must proceed in an indirect way. For a prime number p, denote by µ(A)p the group of roots of
unity in A that are of p-power order, and likewise µ(B)p . Then µ(A) is generated by its subgroups
µ(A)p = µ(B)p ∩ A, with p ranging over the set of primes dividing #µ(B); all these p are “small”.
It will now suffice to fix p and determine generators for µ(A)p . To this end, we introduce the
intermediate order A ⊂ C ⊂ B defined by C = A[1/p] ∩ B. The finite abelian group B/C is of order
coprime to p, and it turns out that this makes it relatively easy to determine µ(C)p = µ(B)p ∩ C;
in fact, one of the results (Proposition 8.1(b)) leading up to Theorem 1.5 stated above shows that
this can be done by exploiting the graph Γ(C) that we encountered in the context of idempotents.
The passage to µ(A)p = µ(C)p ∩ A is of an entirely different nature, as C/A is of order a power of
p. It is here that we have to invoke Theorem 1.4 for certain finite rings R that are of p-power order.
It is important to realize that the only reason that an intersection such as µ(A) = µ(B)∩A is hard
to compute is that µ(B), though finite, may be large—testing each element of µ(B) for membership
in A will not lead to a polynomial-time algorithm. By contrast, the exponent of each group µ(B)p
is small (Lemma 3.3(iv)), so results stating that certain subgroups of µ(B)p are cyclic—of which
there are several in the paper—are valuable in obtaining a polynomial bound for the runtime of our
algorithm.
2. Definitions and examples
From now on, when we say commutative Q-algebra we will mean a commutative Q-algebra that
is finite-dimensional as a Q-vector space. See [1, 3] for background on commutative rings and linear
algebra.
Definition 2.1. If A is an order whose additive group is isomorphic to Zn , we call n the rank of
A.
If the number of idempotents in R is finite, then each idempotent is the sum of a unique subset
of prid(R), and one has #id(R) = 2^{#prid(R)}.
Definition 2.2. A commutative ring R is called connected if #{x ∈ R : x2 = x} = 2.
Definition 2.3. If R is a commutative ring, let Spec(R) denote the set of prime ideals of R.
Although we do not use it, we point out that a commutative ring R is connected if and only if
R 6= 0 and R cannot be written as a product of 2 non-zero rings. The definition is motivated by
the fact that a commutative ring R is connected if and only if Spec(R) is connected. (A topological
space is connected if and only if it has exactly 2 open and closed subsets.)
Notation 2.4. If G is a group and p is a prime number, define
G_p = \{g ∈ G : g^{p^r} = 1 \text{ for some } r ∈ Z_{≥0}\}.
Definition 2.5. Suppose R is a commutative ring. A polynomial f ∈ R[X] is separable over R if
R[X]f + R[X]f′ = R[X], where if f = \sum_{i=0}^{t} a_i X^i then f′ = \sum_{i=1}^{t} i a_i X^{i-1}.
One can show that if f is a monic polynomial over a commutative ring R, then f is separable
over R if and only if its discriminant is a unit in R.
Definition 2.6. Suppose E is a commutative Q-algebra. If α ∈ E, then α is separable over Q if
there exists a separable polynomial f ∈ Q[X] such that f (α) = 0. Let Esep denote the set of y ∈ E
that are separable over Q. We say E is separable over Q if Esep = E.
We note that Esep is a commutative Q-algebra (see for example Theorem 1.1 of [8]).
Definition 2.7. Suppose R is a commutative ring. An element x ∈ R is called nilpotent if there
exists n ∈ Z>0 such that xn = 0. An ideal I of R is called nilpotent if there exists n ∈ Z>0 such
that I^n = 0, where I^n is the product of I with itself n times. The set of nilpotent elements of R is an ideal, called the nilradical and denoted √0 or √0_R.
Examples 2.8. The polynomial X 2 − X is separable over every ring. A linear polynomial aX + b
is separable over R if and only if the R-ideal generated by a and b is R. If m ∈ Z≥0 , then the
polynomial X m − 1 is separable over R if and only if m · 1 is a unit in R.
Example 2.9. Suppose f (X) ∈ Z[X] is a monic polynomial of degree n. Then the ring Z[X]/(f ) is
an order of rank n. We remark that the map e 7→ gcd(e, f ) is a bijection from the set of idempotents
of Z[X]/(f ) to {g ∈ Z[X] : g is monic, g|f, and R(g, f /g) = ±1}, where R(g, f /g) is the resultant
of g and f /g.
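As an illustration of this remark (ours, not from the paper), one can enumerate the qualifying divisors of a small monic polynomial with SymPy; the function and variable names below are ours, and the sketch assumes f is monic with integer coefficients.

```python
import sympy as sp
from itertools import product

def idempotent_divisors(f, x):
    """List the monic divisors g | f over Z with |Res(g, f/g)| = 1; by the remark
    above these correspond bijectively to the idempotents of Z[X]/(f)."""
    F = sp.Poly(f, x, domain='ZZ')
    _, factors = F.factor_list()                      # irreducible factors with multiplicities
    out = []
    for exps in product(*[range(m + 1) for _, m in factors]):
        g = sp.Poly(1, x, domain='ZZ')
        for (p, _), e in zip(factors, exps):
            g = g * p**e
        h = sp.exquo(F, g)                            # h = f / g (exact division)
        if abs(sp.resultant(g.as_expr(), h.as_expr(), x)) == 1:
            out.append(g.as_expr())
    return out

# Example: f = X^2 - X = X(X - 1); Z[X]/(f) has 4 idempotents, hence 4 qualifying divisors.
x = sp.symbols('x')
print(idempotent_divisors(x**2 - x, x))               # expect 1, x, x - 1, x**2 - x (up to order)
```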
Example 2.10. If G is a finite group of order 2n with a fixed element u of order 2, then Z⟨G⟩ = Z[G]/(u + 1) is a connected order of rank n, and µ(Z⟨G⟩) = G (see Remark 16.3 of [7]).
Example 2.11. If n ∈ Z>0 and A = {(ai )ni=1 ∈ Zn : ai ≡ aj mod 2 for all i, j} with componentwise
addition and multiplication, then A is a connected order, µ(A) = {(±1, . . . , ±1)}, and #µ(A) = 2n .
For large n, computing a set of generators for µ(A) is feasible, even when listing all elements of µ(A)
is not.
Example 2.12. Suppose A = Z[ζp ], where p is a prime and ζp is a primitive p-th root of unity in
C. Then A has rank p − 1. If p > 2, then µ(A) = ⟨ζ_p⟩ × ⟨−1⟩.
3. Finite Q-algebras
The following two results are from commutative algebra. These results and basic algorithms for
commutative Q-algebras are given in [8].
Proposition 3.1. If E is a commutative Q-algebra, then the map
\[ E_{\mathrm{sep}} \oplus \sqrt{0} \xrightarrow{\ \sim\ } E, \qquad (x, y) \mapsto x + y \]
is an isomorphism of Q-vector spaces, and the natural map E → \prod_{m∈Spec(E)} E/m induces an isomorphism of Q-algebras
\[ E_{\mathrm{sep}} \xrightarrow{\ \sim\ } \prod_{m\in\mathrm{Spec}(E)} E/m. \]
In algorithms, we specify a commutative Q-algebra E by listing a system of structure constants
aijk ∈ Q that determines the multiplication in E with respect to some Q-basis, just as we did for
orders in the introduction.
Algorithm 3.2. There is a deterministic polynomial-time algorithm that, given a commutative Q-algebra E, computes a Q-basis for Esep ⊂ E, a Q-basis for √0, the map E ≅ Esep ⊕ √0 that is the inverse to the first isomorphism from Proposition 3.1, all m ∈ Spec(E), the fields E/m, and the natural maps E → E/m.
Lemma 3.3. If E is a commutative Q-algebra, then:
(i) µ(E) = µ(Esep) ≅ ⊕_{m∈Spec(E)} µ(E/m);
(ii) µ(E) is finite;
(iii) each µ(E/m) is a finite cyclic group;
(iv) if µ(E) has an element of order p^k with p a prime, then ϕ(p^k) ≤ dimQ(E), where ϕ is Euler's ϕ-function.
Proof. Part (i) holds by Proposition 3.1 and the fact that X^r − 1 is separable over Q for all r ∈ Z>0. If µ(E) has an element of prime power order p^k, then Q(ζ_{p^k}) ⊂ E/m for some m, where ζ_{p^k} is a primitive p^k-th root of unity. Thus ϕ(p^k) ≤ [E/m : Q] ≤ dimQ(E). Since each E/m is a number field, µ(E/m) is cyclic.
Algorithm 3.4. The algorithm takes as input a commutative Q-algebra E and produces a set of generators S of µ(E) as well as a set R of defining relations for S.
(i) For each n ∈ Spec(E), use the algorithm in [4] to find all zeroes of X^r − 1 over E/n, for r = 1, 2, . . . , 2[E/n : Q]^2, let ζn ∈ (E/n)* be an element of maximal order among the zeroes found, and let k(n) be its order.
(ii) For each n ∈ Spec(E), use linear algebra to compute the unique element ηn ∈ Esep that under the second isomorphism from Proposition 3.1 maps to (1, . . . , 1, ζn, 1, . . . , 1) ∈ ∏_m µ(E/m) (with ζn in the n-th position). Output S = {ηn ∈ µ(E) : n ∈ Spec(E)} and R = {(0, . . . , 0, k(n), 0, . . . , 0) ∈ Z^{Spec(E)} : n ∈ Spec(E)}.
Proposition 3.5. Algorithm 3.4 produces correct output and runs in polynomial time.
Proof. If the number field E/n contains a primitive r-th root of unity, then it contains the r-th
cyclotomic field, which has degree ϕ(r) over Q; hence ϕ(r) ≤ [E/n : Q] and r ≤ 2ϕ(r)^2 ≤ 2[E/n : Q]^2.
Together with Lemma 3.3(i), this implies that the algorithm is correct. It runs in polynomial time
by [4].
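The elementary bound r ≤ 2ϕ(r)^2 used here follows from ϕ(r) ≥ √(r/2). A quick numeric sanity check of the inequality (our own script, not part of the algorithm):

from sympy import totient

assert all(r <= 2 * totient(r)**2 for r in range(1, 10001))
print("r <= 2*phi(r)^2 holds for 1 <= r <= 10000")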
Algorithm 3.6. The algorithm takes as input a commutative Q-algebra E, an element γ ∈ E, and a set S = {ηn ∈ µ(E) : n ∈ Spec(E)} of generators for µ(E) as computed by Algorithm 3.4. It tests whether γ ∈ µ(E), and if so, finds (an)_{n∈Spec(E)} ∈ Z^{Spec(E)} with γ = ∏_{n∈Spec(E)} ηn^{an}.
(i) Use linear algebra to test if γ ∈ Esep. If not, terminate with "no" (that is, γ ∉ µ(E)).
(ii) Otherwise, for each n ∈ Spec(E) compute the image γn of γ in E/n, and let ζn (as in Algorithm 3.4) be the image of ηn in E/n. Try a = 0, 1, 2, . . . , #µ(E/n) − 1 until γn = ζn^a, and let an = a. If for some n no an exists, terminate with "no".
(iii) Otherwise, output (an )n∈Spec(E) .
That Algorithm 3.6 produces correct output and runs in polynomial time follows from Lemma 3.3, since µ(E/n) = ⟨ζn⟩.
4. Orders
From now on, suppose that A is an order. Let
E = AQ = A ⊗Z Q,
Asep = A ∩ Esep .
Since Esep /Asep ⊂ E/A = AQ /A is a torsion group, one has Esep = (Asep )Q .
Lemma 4.1. We have id(Esep ) = id(E), id(Asep ) = id(A), and µ(Asep ) = µ(A).
Proof. This holds because the polynomials X^2 − X and X^r − 1 are separable over Q for all r ∈ Z>0.
Algorithm 4.2. The algorithm takes as input an order A and it computes the Q-algebras E and
Esep ⊂ E, as well as the order Asep = A ∩ Esep , giving a Z-basis for Asep expressed both in the given
Z-basis of A and in the Q-basis for Esep .
(i) We use the given Z-basis for A as a Q-basis for E, with the same structure constants.
(ii) Let π1 : A → Esep and π2 : A → √0 be the compositions of the inclusion A ⊂ E with the map E ≅ Esep ⊕ √0 from Algorithm 3.2 followed by the natural projections to Esep and √0, respectively. Using Algorithm 3.2, compute a Q-basis for Esep and the rational matrices describing π1 and π2. Applying the kernel algorithm in §14 of [5] to an integer multiple of the matrix for π2, compute a Z-basis for Asep = ker(π2) expressed in the given Z-basis for A. Applying π1 to this Z-basis, one obtains the same Z-basis expressed in the Q-basis for Esep.
Algorithm 4.2 is clearly correct and polynomial time.
5. Graphs attached to rings
Lemma 5.1. Suppose that R is a commutative ring, S is a finite set of ideals of R that are not R itself, and suppose that ∩_{a∈S} a = {0}. Identify R with its image in ∏_{a∈S} R/a. Suppose that e = (ea)_{a∈S} ∈ {0, 1}^S ⊂ ∏_{a∈S} R/a. Then e ∈ R if and only if ea = eb in {0, 1} for all a, b ∈ S such that a + b ≠ R.
Proof. First suppose e ∈ R. Suppose a, b ∈ S and a + b ≠ R. Choose e′a ∈ {0, 1} ⊂ R whose image in R/a is ea = e + a, and choose e′b ∈ {0, 1} ⊂ R whose image in R/b is eb = e + b. Then e′a ≡ e mod a and e′b ≡ e mod b, so e′a ≡ e ≡ e′b mod (a + b). Since a + b ≠ R we have 1 ∉ a + b. Thus, e′a = e′b in {0, 1}, as desired.
Conversely, suppose that ea = eb in {0, 1} for all a, b ∈ S with a + b ≠ R. Let T = {a ∈ S : ea = 1} and U = {b ∈ S : eb = 0}. Then S = T ⊔ U. Pick a ∈ T and b ∈ U. By our assumption, a + b = R. Thus, there exist x_{a,b} ∈ a and y_{a,b} ∈ b such that 1 = x_{a,b} + y_{a,b}. It follows that y_{a,b} ≡ 1 mod a and y_{a,b} ≡ 0 mod b. For all a ∈ T, define za = ∏_{b∈U} y_{a,b} ∈ R. Then za ≡ 1 mod a and za ≡ 0 modulo each b ∈ U. Define e′ = 1 − ∏_{a∈T}(1 − za) ∈ R. Then e′ ≡ 1 modulo each a ∈ T, and e′ ≡ 0 modulo each b ∈ U. Thus, e′ ≡ ea mod a for each a ∈ S, so e′ = e.
We say that D is an order in a separable Q-algebra if D is an order and DQ = D ⊗Z Q is separable.
Definition 5.2. Suppose that D is an order in a separable Q-algebra DQ . For m, n ∈ Spec(DQ )
with m ≠ n, let
n(D, m, n) = #(D/((m ∩ D) + (n ∩ D))),
and let Γ(D) denote the graph on Spec(DQ ) defined by connecting distinct vertices m, n ∈ Spec(DQ )
by an edge if and only if n(D, m, n) > 1.
Lemma 5.3. n(D, m, n) ∈ Z>0 .
Proof. Let R = D/((m ∩ D) + (n ∩ D)). Then n(D, m, n) = #R. Letting −Q = − ⊗Z Q, we have
RQ = DQ /((mQ ∩ DQ ) + (nQ ∩ DQ )) = DQ /(m + n) = 0
so R is torsion. Since R is finitely generated as an abelian group, it is finite, so n(D, m, n) ∈ Z>0 .
Example 5.4. Let f ∈ Z[X] be monic. Then D = Z[X]/(f ) is an order in a separable Q-algebra
if and only if f is squarefree. Suppose f is squarefree. Then DQ = Q[X]/(f ), and Spec(DQ )
is in bijection with the set of monic irreducible factors g of f in Z[X], each g corresponding to
m = (g)/(f ). If g, h correspond to m, n, respectively, then n(D, m, n) = |R(g, h)|, with R denoting
the resultant.
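The following SymPy sketch (our own illustration, with our own helper names) computes this graph for D = Z[X]/(f) with f squarefree: the vertices are the monic irreducible factors of f, and two factors g, h are joined by an edge of weight |Res(g, h)| whenever that weight exceeds 1. With f = X^12 − 1 it should reproduce the weighted graph Γ(A) of Example 13.4 below.

from itertools import combinations
from sympy import symbols, Poly, factor_list, resultant

X = symbols('X')

def gamma(f):
    """Vertices and weighted edges of Gamma(D) for D = Z[X]/(f), f monic squarefree."""
    _, factors = factor_list(Poly(f, X))
    verts = [p.as_expr() for p, _ in factors]        # f squarefree: all exponents 1
    edges = {}
    for g, h in combinations(verts, 2):
        w = abs(resultant(g, h, X))
        if w > 1:
            edges[(g, h)] = w
    return verts, edges

vertices, edges = gamma(X**12 - 1)
for (g, h), w in edges.items():
    print(f"n(D, ({g}), ({h})) = {w}")
# nine edges appear, with weights 2, 2, 2, 3, 3, 4, 4, 4 and 9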
Suppose D is an order in a separable Q-algebra. It is natural to ask whether the decomposition DQ ≅ ∏_{m∈Spec(DQ)} DQ/m (Proposition 3.1) gives rise to a decomposition of the order D. This depends on the idempotents that are present in D. The graph Γ(D) tells us which idempotents occur in D (see Lemma 5.1 and Proposition 5.7).
Notation 5.5. Suppose that D is an order in a separable Q-algebra. If W ⊂ Spec(DQ), define
eW = (em)_{m∈Spec(DQ)} ∈ id(∏_{m∈Spec(DQ)} DQ/m) = {0, 1}^{Spec(DQ)}
by em = 1 if m ∈ W and em = 0 if m ∉ W.
Algorithm 5.6. The algorithm takes an order D in a separable Q-algebra and computes the graph
Γ(D), its connected components, and its weights n(D, m, n) for all m, n ∈ Spec(DQ ).
(i) Use Algorithm 3.2 to compute Spec(DQ ) and the maps DQ → DQ /m for m ∈ Spec(DQ ).
(ii) For each m ∈ Spec(DQ ) compute m∩D = ker(D → DQ /m) by applying the kernel algorithm
in §14 of [5].
(iii) For all m ≠ n ∈ Spec(DQ), apply the image algorithm in §14 of [5] to compute a Z-basis of
image((m ∩ D) ⊕ (n ∩ D) → D) = (m ∩ D) + (n ∩ D)
expressed in a Z-basis of D, and compute n(D, m, n) as the absolute value of the determinant
of the matrix whose columns are those basis vectors.
(iv) Use the numbers n(D, m, n) to obtain the graph Γ(D) and its connected components.
The algorithm runs in polynomial time by well-known graph algorithms (see for example [2]).
Proposition 5.7. Suppose that D is an order in a separable Q-algebra.
(i) Suppose e = (em)_{m∈Spec(DQ)} ∈ id(∏_m DQ/m) = {0, 1}^{Spec(DQ)}. Then the following are equivalent:
(a) e ∈ D,
(b) em = en whenever m and n are connected in Γ(D),
(c) em = en whenever m and n are in the same connected component of Γ(D).
(ii) Let Ω denote the set of connected components of the graph Γ(D) and recall eW from Notation 5.5. Then W ↦ eW gives a bijection
Ω ≅ prid(D) ⊂ D ⊂ ∏_{m∈Spec(DQ)} DQ/m.
Proof. Apply Lemma 5.1 with R = D and S = {m ∩ D : m ∈ Spec(DQ)}. We have ∩_{a∈S} a = ∩_m (m ∩ D) = {0} since D injects into ∏_m DQ/m. Identifying id(∏ DQ/m) with {0, 1}^S, Lemma 5.1 implies that if e = (em)_{m∈Spec(DQ)} ∈ id(∏ DQ/m), then e ∈ D if and only if em = en for all m, n ∈ Spec(DQ) that are connected in Γ(D). It follows that for each e = (em)_m ∈ id(D) the components em are constant (0 or 1) on each connected component of Γ(D). Part (i) now follows.
It also follows that there is a bijection
{subsets of Ω} → id(D)
defined by T ↦ Σ_{W∈T} eW with inverse e = (em)_m ↦ {W ∈ Ω : em = 1 for all m ∈ W}. Under this bijection, prid(D) corresponds to Ω, and this gives the bijection in (ii).
Remark 5.8. In particular, by Proposition 5.7(ii) an order D is connected if and only if Γ(D) is
connected.
6. Finding idempotents
The set of idempotents of an order may be too large to compute, but the set of primitive idempotents is something that we are able to efficiently compute.
Algorithm 6.1. Given an order A, the algorithm outputs the set of primitive idempotents of A.
(i) Use Algorithm 4.2 to compute Asep .
(ii) Use Algorithm 5.6 to compute the graph Γ(Asep ) and its connected components.
(iii) For each connected component W of Γ(Asep), with eW ∈ {0, 1}^{Spec(E)} ⊂ ∏_{m∈Spec(E)} E/m as in Notation 5.5, use the inverse of the square matrix with Q-coefficients that gives the natural map Esep ≅ ∏_{m∈Spec(E)} E/m of Proposition 3.1 to lift eW to Esep. Output these lifts.
It follows from Proposition 5.7(ii) that the lift eW to Esep is in Asep, and that Algorithm 6.1 gives the desired output prid(A). It is clear that it runs in polynomial time.
7. Discrete logarithms
In this section, we suppose that G is a multiplicatively written abelian group with elements
represented by finite bitstrings. All algorithms in the present section have G as part of their input.
Thus, saying that they are polynomial-time means that their runtime is bounded by a polynomial
function of the length of the parameters specifying G plus the length of the rest of the input. We
suppose that polynomial-time algorithms for the group operations and for equality testing in G are
available.
Definition 7.1. We say ⟨S|R⟩ is an efficient presentation for G if S is a finite set, and we have a map f = fS : S → G satisfying:
(a) f(S) generates G, i.e., the map gS : Z^S → G, (bs)_{s∈S} ↦ ∏_{s∈S} f(s)^{bs}, is surjective,
(b) R ⊂ Z^S is a finite set of generators for ker(gS),
(c) we have a polynomial-time algorithm that on input γ ∈ G finds an element of gS^{-1}(γ) (i.e., finds (cs)_{s∈S} ∈ Z^S such that γ = ∏_{s∈S} f(s)^{cs}).
Notation 7.2. Suppose ⟨S|R⟩ is an efficient presentation for G. Define
ρ : Z^R → Z^S,  ρ((mr)_{r∈R}) = Σ_{r∈R} mr r.
Suppose T is a finite set and we have a map fT : T → G. By abuse of notation we usually suppress the maps fS and fT, write s for fS(s) and t for fT(t), and write ⟨T⟩ for ⟨fT(T)⟩. Define
gT : Z^T → ⟨T⟩,  (bt)_{t∈T} ↦ ∏_{t∈T} t^{bt}.
Define h = hT : Z^T → Z^S by using (c) to write each t ∈ T as t = ∏_{s∈S} s^{c_{s,t}} and defining
h((bt)_{t∈T}) = (Σ_{t∈T} bt c_{s,t})_{s∈S} ∈ Z^S
so that gT = gS ∘ h.
For the remainder of this section we suppose that an efficient presentation hS|Ri for an abelian
group G is given.
Algorithm 7.3. The algorithm takes as input G, an efficient presentation hS|Ri for G, and a finite
set T with a map T → G, and outputs a finite set U = UT of generators for ker(gT ).
(i) Define h − ρ : ZT × ZR → ZS by (h − ρ)(x, y) = h(x) − ρ(y). Use the kernel algorithm in
§14 of [5] to compute a finite set V of generators for ker(h − ρ).
(ii) Compute the image U of V under the projection map ZT × ZR ։ ZT , (x, y) 7→ x.
Theorem 7.4. Algorithm 7.3 produces correct output and runs in polynomial time.
Proof. We have:
x ∈ ker(gT) ⟺ h(x) ∈ ker(gS) = im(ρ)
⟺ ∃y ∈ Z^R such that h(x) = ρ(y)
⟺ ∃y ∈ Z^R such that (h − ρ)(x, y) = 0
⟺ ∃y ∈ Z^R such that (x, y) ∈ ⟨V⟩
⟺ x ∈ proj(⟨V⟩) = ⟨proj(V)⟩ = ⟨U⟩.
Algorithm 7.5. The algorithm takes as input G, an efficient presentation ⟨S|R⟩ for G, a finite set T with a map T → G, and an element γ ∈ G, and decides whether γ ∈ ⟨T⟩, and if it is, produces an element of gT^{-1}(γ) (i.e., finds (ct)_{t∈T} ∈ Z^T such that γ = ∏_{t∈T} t^{ct}).
(i) Apply Algorithm 7.3 with T ∪ {γ} in place of T to find a finite set of generators U_{T∪{γ}} ⊂ Z^{T∪{γ}} for ker(g_{T∪{γ}}), where
g_{T∪{γ}} : Z^{T∪{γ}} = Z^T × Z^{{γ}} → G,  (x, n) ↦ gT(x)γ^n.
(ii) Map the elements u ∈ U_{T∪{γ}} ⊂ Z^{T∪{γ}} = Z^T × Z^{{γ}} to their Z^{{γ}}-components u(γ) ∈ Z. If Σ_{u∈U_{T∪{γ}}} u(γ)Z ≠ Z then γ ∉ ⟨T⟩; if 1 = Σ_{u∈U_{T∪{γ}}} nu u(γ) with (nu)_{u∈U_{T∪{γ}}} ∈ Z^{U_{T∪{γ}}}, then γ ∈ ⟨T⟩ and the Z^T-component of −Σ_{u∈U_{T∪{γ}}} nu u ∈ Z^{T∪{γ}} = Z^T × Z^{{γ}} is in gT^{-1}(γ).
Algorithm 7.6. The algorithm takes as input G, an efficient presentation ⟨S|R⟩ for G, and a finite set T with a map T → G, and outputs an efficient presentation ⟨T|UT⟩ for ⟨T⟩.
(i) Apply Algorithm 7.3 to obtain a set UT of relations.
(ii) Output the presentation ⟨T|UT⟩.
Theorem 7.7. Algorithms 7.5 and 7.6 produce correct output and run in polynomial time. In particular, if one has an efficient presentation for G, and T is a finite set with a map T → G, then ⟨T|UT⟩ is an efficient presentation for ⟨T⟩.
Proof. We have:
γ ∈ ⟨T⟩ ⟺ ∃x ∈ Z^T such that γ = gT(x)
⟺ ∃x ∈ Z^T such that (−x, 1) ∈ ker(g_{T∪{γ}} : Z^T × Z → G) = ⟨U_{T∪{γ}}⟩
⟺ 1 ∈ im(proj : ⟨U_{T∪{γ}}⟩ ⊂ Z^T × Z → Z)
⟺ ∃(nu)_{u∈U_{T∪{γ}}}, ∃x ∈ Z^T such that Σ_u nu u = (−x, 1),
where proj is projection onto the second component.
Algorithm 7.8. The algorithm takes as input G, an efficient presentation ⟨S|R⟩ for G, finite sets T and T′, and maps fT : T → G and fT′ : T′ → G, and outputs a finite set of generators for the kernel of the composition Z^T → G → G/⟨T′⟩, where Z^T → G is the map gT.
(i) Apply Algorithm 7.3 to the finite set T ⊔ T′ and the map T ⊔ T′ → G obtained from fT and fT′, to obtain generators for the kernel of the map
Z^T × Z^{T′} = Z^{T⊔T′} → G,  (x, y) ↦ gT(x)gT′(y)^{-1}.
(ii) Project these generators to their Z^T-component.
Theorem 7.9. Algorithm 7.8 produces correct output and runs in polynomial time.
Proof. We have:
x ∈ ker(Z^T → G/⟨T′⟩) ⟺ gT(x) ∈ ⟨T′⟩ = im(gT′)
⟺ ∃y ∈ Z^{T′} such that gT(x) = gT′(y)
⟺ ∃y ∈ Z^{T′} such that (x, y) ∈ ker(Z^T × Z^{T′} → G)
⟺ x ∈ proj(ker(Z^T × Z^{T′} → G)),
where proj denotes projection onto the Z^T-component.
Proof of Theorem 1.3. One starts by computing E = A ⊗Z Q, using the same structure constants as for A. Algorithm 3.4 produces a presentation for µ(E), and by Algorithm 3.6 this is an efficient presentation. Given T and ζ as in Theorem 1.3, one can test whether ζ ∈ µ(E) by Algorithm 3.6. Now Theorem 1.3 is obtained from Algorithm 7.5, with G = µ(E) and γ = ζ.
8. Separable polynomials over connected rings
Proposition 8.1(b) will be used to prove Proposition 10.5 below.
Proposition 8.1. Suppose R is a connected commutative ring, f ∈ R[X], and R[X]f + R[X]f ′ =
R[X]. Then:
(a) if r, s ∈ R and f (r) = f (s) = 0, then r − s ∈ {0} ∪ R∗ ;
(b) if S is a non-zero ring and ϕ : R → S is a ring homomorphism, then the restriction of ϕ
to {r ∈ R : f (r) = 0} is injective;
(c) f ≠ 0 and #{r ∈ R : f(r) = 0} ≤ deg(f).
Proof. Suppose f(r) = f(s) = 0. Write f = (X − r)g and 1 = hf + kf′ with g, h, k ∈ R[X]. Then g(r) = f′(r) ∈ R*. Since g(s) ≡ g(r) mod (r − s)R we can write g(s) = g(r) + (r − s)t with t ∈ R. Thus, 0 = f(s) = (s − r)g(s) = (s − r)(g(r) + (r − s)t), so
(8.1)  (s − r)g(r) = t(s − r)^2.
Thus, t · (s − r) · g(r)^{-1} = (t · (s − r) · g(r)^{-1})^2, an idempotent. If t · (s − r) · g(r)^{-1} = 0, then by (8.1) we have (s − r)g(r) = 0, and thus r − s = 0 since g(r) ∈ R*. If t · (s − r) · g(r)^{-1} = 1, then r − s ∈ R*. This gives (a).
For (b), suppose r, s ∈ R, r ≠ s, and f(r) = f(s) = 0. By (a) we have r − s ∈ R*. Since ϕ(1) = 1 ≠ 0, we have ϕ(r − s) ≠ 0.
For (c), let m be a maximal ideal of R. Then R → R/m induces a map
{r ∈ R : f (r) = 0} → {u ∈ R/m : (f mod m)(u) = 0}
that is injective by (b). Since R/m is a field and f mod m ∈ (R/m)[X] is non-zero, we have
#{r ∈ R : f (r) = 0} ≤ deg(f mod m) ≤ deg(f ).
Corollary 8.2. Suppose R is a connected commutative ring, m ∈ Z>0 , and m · 1 ∈ R∗ . Then
{ζ ∈ R : ζ m = 1} is a cyclic subgroup of R∗ whose order divides m.
Proof. Applying Proposition 8.1 with f = X^m − 1 gives that the subgroup has order dividing m. Applying Proposition 8.1 with f = X^d − 1 for each divisor d of m gives that this abelian subgroup has at most d elements of order dividing d, and thus is cyclic.
9. From µ(E) to µ(B)
Fix an order A. Recall that E = AQ = A ⊗Z Q and Asep = A ∩ Esep . For m ∈ Spec(E), the image
of Asep in E/m may be identified with Asep /(m ∩ Asep ); it is a ring of which the additive group is a
finitely generated subgroup of the Q-vector space E/m, so it is an order. We now write
(9.1)  B = ∏_{m∈Spec(E)} Asep/(m ∩ Asep).
This is an order in ∏_{m∈Spec(E)} E/m. We identify Asep with its image in B under the map
Esep ≅ ∏_{m∈Spec(E)} E/m
and identify B with a subring of Esep using the same map. One has
Asep ⊂ B ⊂ Esep.
Since the abelian group B/Asep is both torsion and finitely generated, it is finite, and one has
BQ = Esep . The graph Γ(B) consists of the vertices m ∈ Spec(E) and no edges.
Proposition 9.1. There is a deterministic polynomial-time algorithm that, given an order A, computes a Z-basis for Asep /(m ∩ Asep ) in E/m for every m ∈ Spec(E), a Z-basis for B in Esep , and
the index (B : Asep ).
Proof. One simply computes a Z-basis for Asep as in Algorithm 4.2, and a Z-basis for the image
of the map Asep ⊂ Esep → E/m using the image algorithm in §14 of [5], for each m ∈ Spec(E).
Combining these bases for all m and applying the inverse of the second isomorphism in Proposition
3.1 one finds a Z-basis for B in Esep . The index (B : Asep ) is the absolute value of the determinant
of any matrix expressing a Z-basis for Asep in a Z-basis for B.
Proposition 9.2. For each order A and each m ∈ Spec(E) the group µ(Asep /(m ∩ Asep )) is finite
cyclic. Also, there is a deterministic polynomial-time algorithm that, given A and m, computes a
generator θm of µ(Asep /(m ∩ Asep )), its order, the complete prime factorization of its order, and, for
each prime number p a generator θm,p for µ(Asep /(m ∩ Asep ))p .
Proof. The first statement follows from Lemma 3.3(iii). For θm one can take the first power of the
generator ζm of µ(E/m) found in Algorithm 3.4 that belongs to Asep /(m ∩ Asep ), i.e., for which all
coordinates on a Z-basis of Asep /(m ∩ Asep ) (which is a Q-basis of E/m) are integers. The order
of θm is then easy to write down, and since the prime numbers dividing that order are, by Lemma
3.3(iv), bounded by 1 + rankZ(A), it is also easy to factor into primes. If p^k is a prime power exactly dividing order(θm), one can take θm,p = θm^{order(θm)/p^k}.
Proposition 9.3. There is a deterministic polynomial-time algorithm that, given an order A, determines all prime factors p of #µ(B), with B as in (9.1), as well as an efficient presentation for
µ(B) and, for each p, an efficient presentation for µ(B)p .
Proof. This follows directly from Proposition 9.2 and the isomorphisms
µ(B) ≅ ∏_{m∈Spec(E)} µ(Asep/(m ∩ Asep))   and   µ(B)p ≅ ∏_{m∈Spec(E)} µ(Asep/(m ∩ Asep))p,
in the same way as for µ(E) in Section 3.
10. From µ(B)p to µ(C)p
Let A, E, Asep, and B be as in the previous section, and fix a prime number p. Let
(10.1)  C = Asep[1/p] ∩ B.
We have Asep ⊂ C ⊂ B ⊂ Esep, so C is an order with CQ = Esep, and
C = {x ∈ B : p^i x ∈ Asep for some i ∈ Z≥0}.
The group C/Asep is finite of p-power order, and the group B/C is finite of order prime to p. The orders of these two groups can be computed quickly from the order of B/Asep computed in Proposition 9.1. We emphasize that C depends on p.
Let t = (B : C). Then C/Asep = t(B/Asep), so C = tB + Asep, which is the image of the map B ⊕ Asep → B, (x, y) ↦ tx + y. Thus one can find a Z-basis for C from the image algorithm in §14 of [5].
Proposition 10.1. Suppose that A is an order and p is a prime. Suppose m, n ∈ Spec(E) with m ≠ n. Then:
(i) C/((m ∩ C) + (n ∩ C)) is the non-p-component of Asep/((m ∩ Asep) + (n ∩ Asep));
(ii) m and n are connected in Γ(C) if and only if n(Asep, m, n) ∉ p^{Z≥0}.
Proof. For Z = Asep, B, and C, write Z̃ for the finite abelian group Z/((m ∩ Z) + (n ∩ Z)) (cf. Lemma 5.3). Let p^r = (C : Asep) and t = (B : C). Then gcd(p^r, t) = 1. Since Γ(B) has no edges, we have (m ∩ B) + (n ∩ B) = B, so B̃ = 0. For Z1, Z2 ∈ {Asep, C, B} and d ∈ Z with dZ1 ⊂ Z2, let [d] : Z̃1 → Z̃2 denote the map induced by multiplication by d on Z1; we consider the maps [1] : Ãsep → C̃, [1] : C̃ → B̃, [p^r] : C̃ → Ãsep, and [t] : B̃ → C̃. (These are well-defined since Asep ⊂ C ⊂ B, p^r C ⊂ Asep, and tB ⊂ C.)
Since B̃ = 0, the composition of [1] : C̃ → B̃ with [t] : B̃ → C̃ shows that tC̃ = 0. If x ∈ C̃ and p^r x = 0, then since gcd(p^r, t) = 1 we have x = 0. Thus the composition of [p^r] : C̃ → Ãsep with [1] : Ãsep → C̃ is an injection, and hence an automorphism α of the finite abelian group C̃. It follows that [1] : Ãsep → C̃ is surjective and [p^r] : C̃ → Ãsep is injective. Further, letting Ãsep[p^r] denote the kernel of multiplication by p^r in Ãsep, we have
ker([1] : Ãsep → C̃) = ker([p^r] ∘ [1] : Ãsep → Ãsep) = Ãsep[p^r].
This gives a short exact sequence
0 → Ãsep[p^r] → Ãsep → C̃ → 0,
split by p^r α^{-1} : C̃ → Ãsep, with C̃ killed by t. Thus C̃ is the non-p-component of Ãsep, proving (i).
We have n(Asep, m, n) ∉ p^{Z≥0} if and only if Ãsep is not a p-group, i.e., if and only if C̃ ≠ 0 (by (i)). But C̃ ≠ 0 if and only if m and n are connected in Γ(C). This gives (ii).
One could compute Γ(C) by applying Algorithm 5.6 with D = C. Thanks to Proposition 10.1
we can compute Γ(C) without actually computing C, as follows.
Algorithm 10.2. The algorithm takes an order A and the numbers n(Asep , m, n), and computes
the graph Γ(C) and its connected components.
(i) Connect two vertices m and n if and only if n(Asep, m, n) ∉ p^{Z≥0}.
(ii) Output the associated graph and the connected components.
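For illustration (not part of the paper's algorithms; helper names are ours), here is a SymPy sketch of Algorithm 10.2 for A = Z[X]/(X^12 − 1), cf. Example 13.4 below: starting from the pairwise resultants of the irreducible factors, keep only edges whose weight is not a power of p and read off the connected components.

from itertools import combinations
from sympy import symbols, Poly, factor_list, resultant

X = symbols('X')

def is_p_power(n, p):              # is n in {1, p, p^2, ...}?
    while n % p == 0:
        n //= p
    return n == 1

def gamma_C(f, p):
    _, factors = factor_list(Poly(f, X))
    verts = [g.as_expr() for g, _ in factors]
    parent = {v: v for v in verts}
    def find(v):                   # tiny union-find over the vertices
        while parent[v] != v:
            v = parent[v]
        return v
    for g, h in combinations(verts, 2):
        w = abs(resultant(g, h, X))
        if w > 1 and not is_p_power(w, p):      # edge of Gamma(C)
            parent[find(g)] = find(h)
    comps = {}
    for v in verts:
        comps.setdefault(find(v), []).append(v)
    return list(comps.values())

print(gamma_C(X**12 - 1, 2))       # 3 components of 2 vertices each
print(gamma_C(X**12 - 1, 3))       # 2 components of 3 vertices each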
Definition 10.3. If W ⊂ Spec(E), let CW denote the image of C in the quotient ∏_{m∈W} Asep/(m ∩ Asep) of B.
Lemma 10.4. Let Ω denote the set of connected components of the graph Γ(C). Then the natural map F : C → ∏_{W∈Ω} CW is an isomorphism.
Proof. The map F is injective, since
C ⊂ B = ∏_{W∈Ω} ∏_{m∈W} Asep/(m ∩ Asep).
If fW : C ↠ CW is the natural map, eW is as defined in Notation 5.5 with D = C, and x = (fW(cW))_{W∈Ω} is an arbitrary element of ∏_{W∈Ω} CW, then F(Σ_{W∈Ω} cW eW) = x, so F is surjective. The result now follows from Proposition 5.7(ii).
Proposition 10.5. Suppose A is an order and p is a prime number. Recall C as defined in (10.1).
Fix a subset W ⊂ Spec(E) for which the induced subgraph of Γ(C) is connected. Then:
(i) the ring CW is connected,
(ii) the natural map µ(CW )p → µ(C{m} )p is injective for all m ∈ W ,
(iii) the group µ(CW )p is cyclic,
(iv) if W ′ is a non-empty subset of W , then the natural map µ(CW )p → µ(CW ′ )p is injective.
Proof. Part (i) follows from Lemma 5.1.
Let BW = ∏_{m∈W} Asep/(m ∩ Asep). We have
id(CW[1/p]) ⊂ id(∏_{m∈W} E/m) = id(BW).
Recall B from (9.1). Since (B : C) is coprime to p, so is (BW : CW). Suppose e ∈ id(CW[1/p]). Then e ∈ id(BW) and there exists m ∈ Z − pZ such that me ∈ CW (e.g., m = (BW : CW)). Further, there exists k ∈ Z≥0 such that p^k e ∈ CW. Since m and p^k are coprime, we have e ∈ CW. Thus, id(CW[1/p]) = id(CW) = {0, 1}, so CW[1/p] is connected. Now by Corollary 8.2 with R = CW[1/p] and m = #µ(CW[1/p])p, the group µ(CW[1/p])p is cyclic, so its subgroup µ(CW)p is cyclic as well, which is (iii). Also, by Proposition 8.1(b) with R = CW[1/p] and f = X^m − 1, the map µ(CW[1/p])p → µ(CW′[1/p])p is injective for each non-empty W′ ⊂ W. This implies (iv). With W′ = {m} one obtains (ii).
Remark 10.6. If A is a connected order in a separable Q-algebra and p is a prime number that
does not divide #(B/A), then µ(A)p is cyclic. This follows from Proposition 10.5(iii); C = A since
E = Esep and p ∤ #(B/A), and one can take C = CW since A is connected.
By Proposition 10.5(ii,iii), if W is a connected component of Γ(C), then the natural map
µ(CW )p → µ(A/(m ∩ A))p
is injective for all m ∈ W , and µ(CW )p is cyclic. This gives an efficient algorithm for computing
µ(CW )p , and thus a set of generators for µ(C)p , as follows.
Algorithm 10.7. Given an order A and a prime p, the algorithm finds an efficient presentation for
µ(C)p .
(i) Apply the algorithm in Proposition 9.2 to compute a generator of the cyclic group µ(Asep/(m ∩ Asep))p for each m ∈ Spec(E).
(ii) Apply Algorithm 10.2 to compute Γ(C) and its connected components W .
(iii) For each W , do the following:
(a) Apply the image algorithm in §14 of [5] to compute a basis for the order
CW = image(C → ∏_{m∈W} E/m).
(b) Pick m1 ∈ W with #µ(Asep/(m1 ∩ Asep))p minimal.
(c) Choose
W1 = {m1} ⊂ W2 = {m1, m2} ⊂ . . . ⊂ W
such that #Wi = i for all i ≥ 1, Wi = Wi−1 ∪ {mi} for all i ≥ 2, and each mi is connected in Γ(C) to some mj with j < i.
(d) For i = 1, 2, . . . compute each µ(CWi)p, and a generator for it, in succession by using that µ(CW1)p = µ(Asep/(m1 ∩ Asep))p is given, and for i > 1 listing all ordered pairs in µ(CWi−1)p × µ(Asep/(mi ∩ Asep))p and testing whether they are in CWi, and using that
µ(CWi)p = CWi ∩ (µ(CWi−1)p × µ(Asep/(mi ∩ Asep))p).
This gives a generator of µ(CW)p for each W in the set Ω of connected components of Γ(C). Let ζW ∈ ∏_{V∈Ω} µ(CV)p be the element with this generator as its W-th component, and all other components 1.
(iv) View the set S = {ζW : W ∈ Ω} in µ(C)p via the isomorphism µ(C)p ≅ ∏_{W∈Ω} µ(CW)p of Lemma 10.4, let R = {order(ζW)·(W-th basis vector) : W ∈ Ω}, and output ⟨S|R⟩.
Proposition 10.8. Algorithm 10.7 gives correct output and runs in polynomial time.
Proof. By Lemma 10.4 we have C ≅ ∏_W CW. Thus, µ(C)p ≅ ⊕_W µ(CW)p, so the output of the algorithm is a set of generators for µ(C)p. We have
CWi ⊂ CWi−1 × C{mi},   C{mi} = Asep/(mi ∩ Asep).
Thus,
µ(CWi)p ⊂ µ(CWi−1)p × µ(Asep/(mi ∩ Asep))p.
By Proposition 10.5, the group µ(CWi)p injects into each factor, and each factor is cyclic of prime power order. Each factor has size polynomial in the size of the algorithm's inputs (given an order of rank n and an element of order p^k, we have ϕ(p^k) ≤ n by Lemma 3.3, so p^k ≤ 2n). By Proposition 10.5(ii) the natural map µ(CWi)p → µ(Asep/(m1 ∩ Asep))p is injective, for all i. As i gets larger, the groups µ(CWi)p get smaller or stay the same. Thus one can list all ordered pairs, and then efficiently test whether they are in CWi. It follows from the above that the algorithm runs in polynomial time. The presentation ⟨S|R⟩ is efficient by Algorithm 7.6 and Proposition 9.3, since µ(C)p ⊂ µ(B)p.
Remark 10.9. A more intelligent algorithm for step (iii)(d) is to use that each µ(CWi )p is cyclic
(by Proposition 10.5(iii)), and that µ(CWi )p ⊂ µ(CWi−1 )p , as follows. Starting with i = 1 and
incrementing i, proceed as follows in place of step (d). If µ(CWi−1 )p is trivial, stop. Otherwise, take
an element a1 ∈ µ(CWi−1 )p of order p and for each of the p − 1 elements b1 ∈ µ(Asep /(mi ∩ Asep ))p
of order p test whether (a1 , b1 ) ∈ CWi . If there are none, stop (the group is trivial for that Wi ). If
there is such a pair (a1 , b1 ) ∈ µ(CWi ), if #µ(CWi )p = p then stop with (a1 , b1 ) as generator, and
otherwise take each a2 ∈ µ(CWi−1 )p that is a p-th root of a1 and for each of the p possible choices of
elements b2 ∈ µ(Asep /(mi ∩ Asep ))p that are a p-th root of b1 , test whether (a2 , b2 ) ∈ CWi . As soon
as such is found, if #µ(CWi )p = p2 then stop with (a2 , b2 ) as generator, and otherwise continue this
process. Injecting into each component implies one only needs to check ordered pairs with the same
order in each component. Since #µ(CWi )p divides #µ(CWi−1 )p , one only needs to go up to elements
of order #µ(CWi−1)p. The number of trials is < p · log_p(#µ(CWi−1)p), since there are at most p choices each time, and there are log_p(#µ(CWi−1)p) steps. The final (aj, bj) found is a generator for µ(CWi)p.
11. Nilpotent ideals in finite rings
Suppose R is a finite commutative ring and I is a nilpotent ideal of R. Algorithm 11.3 below solves the discrete logarithm problem in the multiplicative group 1 + I, using the finite filtration
1 + I ⊃ 1 + I^2 ⊃ 1 + I^4 ⊃ · · · ⊃ 1,
the fact that the map x ↦ 1 + x is an isomorphism from the additive group I^{2^i}/I^{2^{i+1}} to the multiplicative group (1 + I^{2^i})/(1 + I^{2^{i+1}}), and the fact that the discrete logarithm problem is easy in these additive groups.
We specify a finite commutative ring by giving a presentation for its additive group, i.e., a finite set
of generators and a finite set of relations, and for every pair of generators their product is expressed
as a Z-linear combination of the generators.
The following result can be shown using standard methods.
Proposition 11.1. There is a deterministic polynomial-time algorithm that, given a finite commutative ring R and 2 ideals I1 and I2 of R such that I2 ⊂ I1 , computes an efficient presentation of
the finite abelian group I1 /I2 .
Lemma 11.2. Suppose R is a finite commutative ring, I is an ideal of R such that I ⊂ √0R, and for each i ∈ Z≥0 the set Bi is a subset of I^{2^i} such that Bi ∪ I^{2^{i+1}} generates the additive group I^{2^i}. Let B = ∪_{i≥0} Bi. Then 1 + I = ⟨1 + b : b ∈ B⟩ (as a multiplicative group).
Proof. Since I is nilpotent, 1 + I^{2^i} is a multiplicative group for all i ∈ Z≥0. We have
I^{2^i}/I^{2^{i+1}} ≅ (1 + I^{2^i})/(1 + I^{2^{i+1}})
via x ↦ 1 + x. Since Bi ∪ I^{2^{i+1}} generates the additive group I^{2^i}, we have that Bi + I^{2^{i+1}} generates I^{2^i}/I^{2^{i+1}}. If I^{2^{k+1}} = 0, then Bk generates I^{2^k} and 1 + Bk generates the multiplicative group 1 + I^{2^k}. It now follows that 1 + B generates 1 + I.
It now follows that 1 + B generates 1 + I.
√
Algorithm 11.3. Given a finite commutative ring R, an ideal I of R such that I ⊂ 0, for each
i
i+1
i
i ∈ Z≥0 a subset Bi of I 2 such that Bi ∪ I 2
generates the additive group I 2Q, with all but finitely
B
manySBi = ∅, and x ∈ I, the algorithm computes (mb )b∈B ∈ Z with 1 + x = b∈B (1 + b)mb , where
B = i≥0 Bi , as follows.
(i) Let x0 = x. For i = 0, 1, . . . use Proposition 11.1 to find (mb )b∈Bi ∈ ZBi such that
X
i+1
i
i+1
mb b mod I 2
(in I 2 /I 2 ).
xi ≡
b∈Bi
i+1
Define xi+1 ∈ I 2
by
1 + xi+1 = (1 + xi )
Y
(1 + b)−mb .
b∈Bi
As soon as xi+1 = 0, terminate, setting mb = 0 for all b ∈ Bj with j > i and outputting
(mb )b∈B ∈ ZB .
Proposition 11.4. Algorithm 11.3 is a deterministic algorithm that produces correct outputs in
polynomial time.
Proof. Since I is a nilpotent ideal, there exists j ∈ Z≥0 such that I^{2^j} = 0. Then xj = 0 and the algorithm gives
1 + x = 1 + x0 = ∏_{b∈∪_{i<j} Bi} (1 + b)^{mb} = ∏_{b∈B} (1 + b)^{mb},
as desired.
Lemma 11.5. There is a deterministic polynomial-time algorithm that, given a finite commutative ring R, an ideal I of R such that I ⊂ √0, and for each i ∈ Z≥0 a subset Bi of I^{2^i} such that Bi ∪ I^{2^{i+1}} generates the additive group I^{2^i}, computes a Z-basis for the kernel of the map Z^B → 1 + I, (mb)_{b∈B} ↦ ∏_b (1 + b)^{mb}, where B = ∪_{i≥0} Bi.
Proof. Let Cj = ∪_{k≥j} Bk. We proceed by induction on decreasing j. We have ⟨1 + Cj⟩ = 1 + I^{2^j} (applying Lemma 11.2 with I^{2^j} in place of I). Assume we already have defining relations for 1 + Cj, i.e., we have generators for the kernel of Z^{Cj} → 1 + I^{2^j}, (mb)_{b∈Cj} ↦ ∏_{b∈Cj}(1 + b)^{mb}, and would like to find defining relations for 1 + Cj−1. Proposition 11.1 gives an algorithm for finding a basis for the kernel of Z^{Bj−1} → I^{2^{j−1}}/I^{2^j}, (nb)_{b∈Bj−1} ↦ Σ_{b∈Bj−1} nb b + I^{2^j}, in polynomial time. For each defining relation (nb)_{b∈Bj−1} for Bj−1 + I^{2^j} we have Σ_{b∈Bj−1} nb b ≡ 0 mod I^{2^j}, so ∏_{b∈Bj−1}(1 + b)^{nb} ≡ 1 mod (1 + I^{2^j}). Algorithm 11.3 gives a polynomial-time algorithm to find (mb′)_{b′∈Cj} ∈ Z^{Cj} such that ∏_{b∈Bj−1}(1 + b)^{nb} = ∏_{b′∈Cj}(1 + b′)^{mb′} ∈ 1 + I^{2^j}. Then ((nb)_{b∈Bj−1}, (−mb′)_{b′∈Cj}) is in the kernel of the map Z^{Cj−1} → 1 + I^{2^{j−1}}, and these relations along with the defining relations for 1 + Cj form a set of defining relations for 1 + Cj−1.
Theorem 11.6. There is a deterministic polynomial-time algorithm that, given a finite commutative ring R and an ideal I of R such that I ⊂ √0, produces an efficient presentation ⟨1 + B|R⟩ for 1 + I.
Proof. Apply the algorithm in Proposition 11.1 to obtain for each i ∈ Z≥0 a set Bi ⊂ I^{2^i} such that Bi ∪ I^{2^{i+1}} generates the additive group I^{2^i}. Since I is nilpotent, we can take Bi = ∅ for all but finitely many i. By Lemma 11.2 the set B = ∪_{i≥0} Bi has the property that 1 + B generates 1 + I. Defining relations R are given by Lemma 11.5, and part (c) of Definition 7.1 holds by Proposition 11.4.
Theorem 1.4 now follows from Theorem 11.6 and Algorithm 7.6.
Remark 11.7. Suppose R is a finite commutative ring, I ⊂ R is a nilpotent ideal, and R′ is a
subring of R. Let I ′ = I ∩ R′ . The algorithm in Theorem 11.6 gives efficient presentations for the
multiplicative groups 1 + I and 1 + I ′ . We can apply Algorithm 7.8 with G = 1 + I ⊂ R∗ , and T ′ a
set of generators for 1 + I ′ , and T a set of generators for some subgroup of 1 + I. In the next section
we will apply this to our setting.
Example 11.8. Let R = Z/p^2 Z and I = √0R = pZ/p^2 Z. Then I^2 = 0, and 1 + I is the order-p subgroup of (Z/p^2 Z)* ≅ Z/pZ × Z/(p − 1)Z. The map 1 + I ≅ Z/pZ, 1 + x ↦ x/p, is a group isomorphism, so the discrete logarithm problem is easy in 1 + I.
Example 11.9. Let R = Z/p^4 Z and I = √0R = pZ/p^4 Z. Then I^4 = 0. Here, the map 1 + I → Z/p^3 Z, 1 + x ↦ x/p, is not a group homomorphism. The discrete logarithm problem is easy in 1 + I not because 1 + I is (isomorphic to) an additive group, but because there is a filtration of additive groups, namely, (1 + I)/(1 + I^2) ≅ I/I^2 and (1 + I^2)/(1 + I^4) ≅ I^2/I^4 = I^2.
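In this setting Algorithm 11.3 can be written out in a few lines of plain Python (our own illustrative sketch, with our own variable names): take B0 = {p} and B1 = {p^2}, so that 1 + I = ⟨1 + p, 1 + p^2⟩, and peel off one filtration step at a time.

def dlog_1_plus_I(x, p):
    """Express 1 + x as (1+p)^m0 * (1+p^2)^m1 in (Z/p^4 Z)*, for x in I = pZ/p^4Z."""
    mod = p**4
    assert x % p == 0
    # step i = 0: work in I/I^2, i.e. modulo p^2
    m0 = (x // p) % p
    y = (1 + x) * pow(1 + p, -m0, mod) % mod          # y now lies in 1 + I^2
    # step i = 1: work in I^2/I^4 = I^2
    m1 = ((y - 1) // p**2) % (p**2)
    assert (1 + x) % mod == pow(1 + p, m0, mod) * pow(1 + p**2, m1, mod) % mod
    return m0, m1

p = 5
for x in range(0, p**4, p):                           # every element of I
    dlog_1_plus_I(x, p)                               # the assert checks the decomposition
print("1 + x = (1+p)^m0 * (1+p^2)^m1 holds for all x in I, p =", p)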
12. From µ(C)p to µ(A)p
Let A be an order and let p be a prime. Recall C from (10.1) and let
f = {x ∈ C : xC ⊂ Asep},
which is the largest ideal of C that is contained in A. We shall see that C/f is a finite ring, and it has Asep/f as a subring. Suppose we are given a set M ⊂ C* such that µ(C)p = ⟨M⟩. Let
I = Σ_{ζ∈M} (ζ − 1)(C/f),   I′ = I ∩ (Asep/f).
Define
g1 : Z^M ↠ µ(C)p,   (aζ)_{ζ∈M} ↦ ∏_{ζ∈M} ζ^{aζ},
let g2 : µ(C)p → 1 + I be the natural map ζ ↦ ζ + f, let ĝ : µ(C)p → (1 + I)/(1 + I′) denote the composition of g2 with the quotient map, define g : Z^M → 1 + I by g = g2 ∘ g1, and define
(12.1)  ψ : Z^M → (1 + I)/(1 + I′)
by ψ = ĝ ∘ g1.
Proposition 12.1. With notation as above,
(i) I is a nilpotent ideal of C/f, i.e., I ⊂ √0_{C/f};
(ii) I′ is a nilpotent ideal of Asep/f;
(iii) C/f is a finite ring of p-power order;
(iv) µ(A)p is the kernel of the map ĝ;
(v) µ(A)p is the image of ker(ψ) under the map g1.
Proof. Since C/Asep is killed by p^r for some r ∈ Z≥0, we have p^r ∈ f, so p ∈ √0_{C/f}, so p is in every prime ideal of C/f. Suppose ζ ∈ µ(C)p. Then the image of ζ in every field of characteristic p is 1. Thus, ζ − 1 is in every prime ideal of C/f, so ζ − 1 ∈ √0_{C/f}. By the definition of I we have I ⊂ √0_{C/f}, and (i) and (ii) follow.
Since p^r ∈ f we have p^r C ⊂ f, so C/f is a quotient of C/p^r C, which is a finite ring of p-power order. This gives (iii).
Part (iv) follows directly from the definitions, and then (v) follows from (iv).
Algorithm 12.2. The algorithm takes as input an order A, a prime p, and a finite set of generators M for µ(C)p, and computes a finite set of generators for µ(A)p.
(i) Compute the finite abelian group C/Asep and
Hom(C, C/Asep) ≅ (C/Asep) ⊕ (C/Asep) ⊕ · · · ⊕ (C/Asep)
(with rankZ(C) summands C/Asep), and compute f as the kernel of the group homomorphism Asep → Hom(C, C/Asep) sending x ∈ Asep to the map y ↦ xy + Asep. Next compute the finite rings Asep/f ⊂ C/f. This entire step can be done using standard algorithms for finitely generated abelian groups.
(ii) Apply the algorithm in Theorem 11.6 with R = C/f and the I of this section to obtain an efficient presentation for 1 + I.
(iii) Apply the algorithm in Theorem 11.6 with R = Asep/f and I′ in place of I to obtain a finite set T′ of generators for 1 + I′.
(iv) Apply Algorithm 7.8 with G = 1 + I, the efficient presentation from step (ii), T = M, and T′ from step (iii) to obtain a finite set of generators S′ for ker(Z^T → G/⟨T′⟩).
(v) Take the image of S′ under the map g1 : Z^M → µ(C)p.
Theorem 12.3. Algorithm 12.2 produces correct output and runs in polynomial time.
Proof. Since C/f and Asep /f are finite commutative rings, and I and I ′ are nilpotent, Theorem 11.6
is applicable in steps (ii) and (iii). The map Z^M = Z^T → G/⟨T′⟩ = (1 + I)/(1 + I′) in step (iv) is
our map ψ from (12.1). By Proposition 12.1(v), step (v) produces generators for µ(A)p .
13. Finding roots of unity
Algorithm 13.1. Given an order A, the algorithm outputs a finite set of generators for µ(A).
(i) Use Algorithm 3.2 to compute Esep , all m ∈ Spec(E), the fields E/m, and the natural maps
E → E/m.
(ii) Apply Algorithm 4.2 to compute Asep = A ∩ Esep .
(iii) Apply the algorithm in Proposition 9.1 to compute for each m ∈ Spec(E) the subring Asep/(m ∩ Asep) of
Esep /m.
(iv) Apply the algorithm in Proposition 9.2 to compute, for each m ∈ Spec(E), a generator θm
for µ(Asep /(m ∩ Asep )), its order, the prime factorization of its order, and for each prime p
dividing its order a generator θm,p of µ(Asep /(m ∩ Asep ))p .
(v) For each prime p dividing the order of at least one of the groups µ(Asep /(m ∩ Asep )), do the
following:
(a) Use the image algorithm in §14 of [5] to compute a Z-basis for C = Asep [1/p] ∩ B (as
discussed in §10 above, just before Proposition 10.1).
(b) Apply Algorithm 10.7 to compute an efficient presentation for µ(C)p .
(c) Apply Algorithm 12.2 to compute generators for µ(A)p .
(vi) Generators for these groups µ(A)p form a set of generators for µ(A).
That Algorithm 13.1 produces correct output and runs in polynomial time follows immediately.
We can now obtain a deterministic polynomial-time algorithm that, given an order A, determines
an efficient presentation for µ(A).
Algorithm 13.2. The algorithm takes an order A and produces an efficient presentation for µ(A).
(i) Apply the algorithm in Proposition 9.3 to obtain an efficient presentation hS|Ri for µ(B).
(ii) Apply Algorithm 13.1 to obtain a finite set of generators for µ(A).
(iii) Apply Algorithm 7.6 with G = µ(B) to obtain an efficient presentation for µ(A).
Example 13.3. Let A = Z[X]/(X^4 − 1). Then with p = 2:
B = C = Z[X]/(X − 1) × Z[X]/(X + 1) × Z[X]/(X^2 + 1) ≅ Z × Z × Z[i],
and (C : A) = 8. We identify X with (1, −1, i) ∈ Z × Z × Z[i]. Then
µ(A)2 = µ(A) ⊂ µ(B) = µ(C)2 = ⟨(−1, 1, 1), (1, −1, 1), (1, 1, i)⟩.
We have
f = 4Z × 4Z × 2Z[i],
of index 64 in C, and
C/f = Z/4Z × Z/4Z × Z[i]/2Z[i] = Z/4Z × Z/4Z × F2[ε]
with ε = 1 + i. The index 8 subring of C/f generated by (1, −1, 1 + ε) is A/f. Alternatively,
A/f = (Z/4Z)[Y]/(2Y, Y^2)
where Y = X − 1 = (0, 2, ε) ∈ A/f. With M = {(−1, 1, 1), (1, −1, 1), (1, 1, i)} we have
I = (2Z/4Z) × (2Z/4Z) × (εF2[ε]) = √0_{C/f},
I^2 = 0, and
I′ = I ∩ (A/f) = √0_{A/f} = {0, 2, Y, Y + 2}.
With ψ as in (12.1), we have ψ(a, b, c) = a + b + c + 2Z ∈ Z/2Z and
ker(ψ) = {(a, b, c) ∈ Z^M : a + b + c is even} = Z·(2, 0, 0) + Z·(1, 1, 0) + Z·(1, 0, 1).
Algorithm 13.1 outputs
µ(A) = µ(A)2 = ⟨−X^2⟩ × ⟨−X^3⟩ = ⟨X, −1⟩ ≅ Z/2Z × Z/4Z.
Example 13.4. Let A = Z[X]/(X^12 − 1). Then
E = Q[X]/(X^12 − 1) ≅ Q × Q × Q(ζ3) × Q(i) × Q(ζ3) × Q(ζ12)
and
B = Z[X]/(X − 1) × Z[X]/(X + 1) × Z[X]/(X^2 + X + 1) × Z[X]/(X^2 + 1) × Z[X]/(X^2 − X + 1) × Z[X]/(X^4 − X^2 + 1) ↪ E.
We have for the discriminants of the orders:
|∆B| = 1 · 1 · 3 · 4 · 3 · 12^2,   |∆A| = 12^12,
so
#(B/A) = √(|∆A|/|∆B|) = 2^9 · 3^4.
Thus if p = 2 then (C : A) = 2^9, while if p = 3 then (C : A) = 3^4. The graph Γ(B) consists of 6 vertices with no edges. The graph Γ(A) has the same six vertices, with the numbers n(A, m, n) as edge weights:
(X − 1)—(X + 1) with weight 2, (X − 1)—(X^2 + 1) with weight 2, (X + 1)—(X^2 + 1) with weight 2, (X − 1)—(X^2 + X + 1) with weight 3, (X + 1)—(X^2 − X + 1) with weight 3, (X^2 + X + 1)—(X^2 − X + 1) with weight 4, (X^2 + X + 1)—(X^4 − X^2 + 1) with weight 4, (X^2 − X + 1)—(X^4 − X^2 + 1) with weight 4, and (X^2 + 1)—(X^4 − X^2 + 1) with weight 9.
Suppose p = 2. Then Γ(C) retains only the edges of odd weight, namely (X − 1)—(X^2 + X + 1), (X + 1)—(X^2 − X + 1), and (X^2 + 1)—(X^4 − X^2 + 1), so it has 3 connected components of two vertices each.
We have µ(C)2 = ∏ µ(CW)2 with the product running over the 3 connected components W. The components {(X − 1), (X^2 + X + 1)} and {(X + 1), (X^2 − X + 1)} give µ(CW)2 = {±1}, while the remaining one gives µ(CW)2 = ⟨−X^3⟩. This gives −X^3, −1 ∈ µ(A)2.
Suppose p = 3. Then Γ(C) retains only the edges whose weight is not a power of 3; its two connected components are {(X − 1), (X + 1), (X^2 + 1)} and {(X^2 + X + 1), (X^2 − X + 1), (X^4 − X^2 + 1)}, each forming a triangle.
We have µ(C)3 = ∏ µ(CW)3 with the product running over the 2 connected components W. The component {(X − 1), (X + 1), (X^2 + 1)} has µ(CW)3 = {1}, while for the other component one has that µ(CW)3 is generated by the image of X^4, and this gives X^4 ∈ µ(A)3.
Continuing the algorithm by hand is more complicated than in the previous example. However, we note that here A is the order Z⟨G⟩ defined in [7] with G = ⟨−1⟩ × ⟨X⟩ ≅ Z/2Z × Z/12Z, and it follows from Remark 16.3 of [7] that µ(A) = G = ⟨−1⟩ × ⟨X⟩.
References
[1] M. F. Atiyah and I. G. Macdonald, Introduction to commutative algebra, Addison-Wesley Publishing Co., Reading, MA, 1969.
[2] J. Hopcroft and R. Tarjan, Algorithm 447: efficient algorithms for graph manipulation, Communications of the
ACM, 16, no. 6 (1973) 372–378.
[3] S. Lang, Algebra, Third edition, Graduate Texts in Mathematics 211, Springer-Verlag, New York, 2002.
[4] A. K. Lenstra, Factoring polynomials over algebraic number fields, in Computer algebra (London, 1983), Lect.
Notes in Comp. Sci. 162, Springer, Berlin, 1983, 245–254.
[5] H. W. Lenstra, Jr., Lattices, in Algorithmic number theory: lattices, number fields, curves and cryptography, Math. Sci. Res. Inst. Publ. 44, Cambridge Univ. Press, Cambridge, 2008, 127–181, http://library.msri.org/books/Book44/files/06hwl.pdf.
[6] H. W. Lenstra, Jr. and A. Silverberg, Revisiting the Gentry-Szydlo Algorithm, in Advances in Cryptology—
CRYPTO 2014, Lect. Notes in Comp. Sci. 8616, Springer, Berlin, 2014, 280–296.
[7] H. W. Lenstra, Jr. and A. Silverberg, Lattices with symmetry, to appear in Journal of Cryptology,
https://eprint.iacr.org/2014/1026.
[8] H. W. Lenstra, Jr. and A. Silverberg, Algorithms for commutative algebras over the rational numbers,
http://arxiv.org/abs/1509.08843.
Mathematisch Instituut, Universiteit Leiden, The Netherlands
E-mail address: [email protected]
Department of Mathematics, University of California, Irvine, CA 92697
E-mail address: [email protected]
Published as a conference paper at ICLR 2018
Few-shot Autoregressive Density Estimation: Towards Learning to Learn Distributions
S. Reed, Y. Chen, T. Paine, A. van den Oord, S. M. A. Eslami, D. Rezende, O. Vinyals, N. de Freitas
{reedscot,yutianc,tpaine}@google.com
arXiv:1710.10304v4 [] 28 Feb 2018
Abstract
Deep autoregressive models have shown state-of-the-art performance in density
estimation for natural images on large-scale datasets such as ImageNet. However, such models require many thousands of gradient-based weight updates and
unique image examples for training. Ideally, the models would rapidly learn visual concepts from only a handful of examples, similar to the manner in which
humans learn across many vision tasks. In this paper, we show how 1) neural
attention and 2) meta learning techniques can be used in combination with autoregressive models to enable effective few-shot density estimation. Our proposed
modifications to PixelCNN result in state-of-the art few-shot density estimation on
the Omniglot dataset. Furthermore, we visualize the learned attention policy and
find that it learns intuitive algorithms for simple tasks such as image mirroring on
ImageNet and handwriting on Omniglot without supervision. Finally, we extend
the model to natural images and demonstrate few-shot image generation on the
Stanford Online Products dataset.
1 Introduction
Contemporary machine learning systems are still far behind humans in their ability to rapidly learn
new visual concepts from only a few examples (Lake et al., 2013). This setting, called few-shot
learning, has been studied using deep neural networks and many other approaches in the context of
discriminative models, for example Vinyals et al. (2016); Santoro et al. (2016). However, comparatively little attention has been devoted to the task of few-shot image density estimation; that is, the
problem of learning a model of a probability distribution from a small number of examples. Below
we motivate our study of few-shot autoregressive models, their connection to meta-learning, and
provide a comparison of multiple approaches to conditioning in neural density models.
Why autoregressive models?
Autoregressive neural networks are useful for studying few-shot density estimation for several reasons. They are fast and stable to train, easy to implement, and have tractable likelihoods, allowing
us to quantitatively compare a large number of model variants in an objective manner. Therefore we
can easily add complexity in orthogonal directions to the generative model itself.
Autoregressive image models factorize the joint distribution into per-pixel factors:
P(x|s; θ) = ∏_{t=1}^{N} P(x_t | x_{<t}, f(s); θ)   (1)
where θ are the model parameters, x ∈ RN are the image pixels, s is a conditioning variable, and f
is a function encoding this conditioning variable. For example in text-to-image synthesis, s would
be an image caption and f could be a convolutional or recurrent encoder network, as in Reed et al.
(2016). In label-conditional image generation, s would be the discrete class label and f could simply
convert s to a one-hot encoding possibly followed by an MLP.
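A minimal numpy sketch of the chain-rule factorization in equation (1), with a stand-in predictor in place of a real PixelCNN (everything here is illustrative and the helper names are ours):

import numpy as np

rng = np.random.default_rng(0)
N = 16                                    # number of pixels in the flattened image
x = rng.integers(0, 2, size=N)            # target binary "image"
s = rng.integers(0, 2, size=N)            # a single support "image" as conditioning

def predict_bernoulli(prefix, s):
    """Stand-in for P(x_t = 1 | x_<t, f(s)): any function of the prefix and s."""
    bias = 0.25 + 0.5 * s[len(prefix)]     # toy conditioning on the support pixel
    return np.clip(bias + 0.1 * (np.mean(prefix) - 0.5 if prefix else 0.0), 0.05, 0.95)

log_p = 0.0
for t in range(N):                         # one factor per pixel, in raster order
    p1 = predict_bernoulli(list(x[:t]), s)
    log_p += np.log(p1 if x[t] == 1 else 1.0 - p1)
print("log P(x | s) =", log_p)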
A straightforward approach to few-shot density estimation would be to simply treat samples from
the target distribution as conditioning variables for the model. That is, let s correspond to a few data
examples illustrating a concept. For example, s may consist of four images depicting bears, and the
task is then to generate an image x of a bear, or to compute its probability P (x|s; θ).
A learned conditional density model that conditions on samples from its target distribution is in fact
learning a learning algorithm, embedded into the weights of the network. This learning algorithm is
executed by a feed-forward pass through the network encoding the target distribution samples.
Why learn to learn distributions?
If the number of training samples from a target distribution is tiny, then using standard gradient
descent to train a deep network from scratch or even fine-tuning is likely to result in memorization
of the samples; there is little reason to expect generalization. Therefore what is needed is a learning
algorithm that can be expected to work on tiny training sets. Since designing such an algorithm has
thus far proven to be challenging, one could try to learn the algorithm itself. In general this may
be impossible, but if there is shared underlying structure among the set of target distributions, this
learning algorithm can be learned from experience as we show in this paper.
For our purposes, it is instructive to think of learning to learn as two nested learning problems, where
the inner learning problem is less constrained than the outer one. For example, the inner learning
problem may be unsupervised while the outer one may be supervised. Similarly, the inner learning
problem may involve only a few data points. In this latter case, the aim is to meta-learn a model that
when deployed is able to infer, generate or learn rapidly using few data s.
A rough analogy can be made to evolution: a slow and expensive meta-learning process, which
has resulted in life-forms that at birth already have priors that facilitate rapid learning and inductive
leaps. Understanding the exact form of the priors is an active, very challenging, area of research
(Spelke & Kinzler, 2007; Smith & Gasser, 2005). From this research perspective, we can think of
meta-learning as a potential data-driven alternative to hand engineering priors.
The meta-learning process can be undertaken using large amounts of computation and data. The
output is however a model that can learn from few data. This facilitates the deployment of models
in resource-constrained computing devices, e.g. mobile phones, to learn from few data. This may
prove to be very important for protection of private data s and for personalisation.
Few-shot learning as inference or as a weight update?
A sample-conditional density model Pθ (x|s) treats meta-learning as inference; the conditioning
samples s vary but the model parameters θ are fixed. A standard MLP or convolutional network can
parameterize the sample encoding (i.e. meta-learning) component, or an attention mechanism can
be used, which we will refer to as PixelCNN and Attention PixelCNN, respectively.
A very different approach to meta-learning is taken by Ravi & Larochelle (2016) and Finn et al.
(2017a), who instead learn unconditional models that adapt their weights based on a gradient step
computed on the few-shot samples. This same approach can also be taken with PixelCNN: train an
unconditional network P_{θ′}(x) that is implicitly conditioned by a previous gradient ascent step on log P_θ(s); that is, θ′ = θ + α∇_θ log P_θ(s). We will refer to this as Meta PixelCNN.
In Section 2 we connect our work to previous attentive autoregressive models, as well as to work on
gradient based meta-learning. In Section 3 we describe Attention PixelCNN and Meta PixelCNN
in greater detail. We show how attention can improve performance in the few-shot density
estimation problem by enabling the model to easily transmit texture information from the support
set onto the target image canvas. In Section 4 we compare several few-shot PixelCNN variants on
simple image mirroring, Omniglot and Stanford Online Products. We show that both gradient-based
and attention-based few-shot PixelCNN can learn to learn simple distributions, and both achieve
state-of-the-art likelihoods on Omniglot.
2 Related work
Learning to learn or meta-learning has been studied in cognitive science and machine learning for
decades (Harlow, 1949; Thrun & Pratt, 1998; Hochreiter et al., 2001). In the context of modern deep
networks, Andrychowicz et al. (2016) learned a gradient descent optimizer by gradient descent, itself
parameterized as a recurrent network. Chen et al. (2017) showed how to learn to learn by gradient
descent in the black-box optimization setting.
Ravi & Larochelle (2017) showed the effectiveness of learning an optimizer in the few-shot learning
setting. Finn et al. (2017a) advanced a simplified yet effective variation in which the optimizer is
not learned but rather fixed as one or a few steps of gradient descent, and the meta-learning problem
reduces to learning an initial set of base parameters θ that can be adapted to minimize any task
loss Lt by a single step of gradient descent, i.e. θ0 = θ − α∇Lt (θ). This approach was further
shown to be effective in imitation learning including on real robotic manipulation tasks (Finn et al.,
2017b). Shyam et al. (2017) train a neural attentive recurrent comparator function to perform oneshot classification on Omniglot.
Few-shot density estimation has been studied previously using matching networks (Bartunov &
Vetrov, 2016) and variational autoencoders (VAEs). Bornschein et al. (2017) apply variational inference to memory addressing, treating the memory address as a latent variable. Rezende et al.
(2016) develop a sequential generative model for few-shot learning, generalizing the Deep Recurrent Attention Writer (DRAW) model (Gregor et al., 2015). In this work, our focus is on extending
autoregressive models to the few-shot setting, in particular PixelCNN (van den Oord et al., 2016).
Autoregressive (over time) models with attention are well-established in language tasks. Bahdanau
et al. (2014) developed an attention-based network for machine translation. This work inspired a
wave of recurrent attention models for other applications. Xu et al. (2015) used visual attention to
produce higher-quality and more interpretable image captioning systems. This type of model has
also been applied in motor control, for the purpose of imitation learning. Duan et al. (2017) learn a
policy for robotic block stacking conditioned on a small number of demonstration trajectories.
Gehring et al. (2017) developed convolutional machine translation models augmented with attention
over the input sentence. A nice property of this model is that all attention operations can be batched
over time, because one does not need to unroll a recurrent net during training. Our attentive PixelCNN is similar in high-level design, but our data is pixels rather than words, and 2D instead of 1D,
and we consider image generation rather than text generation as our task.
3 Model
3.1 Few-shot learning with Attention PixelCNN
In this section we describe the model, which we refer to as Attention PixelCNN. At a high level,
it works as follows: at the point of generating every pixel, the network queries a memory. This
memory can consist of anything, but in this work it will be a support set of images of a visual
concept. In addition to global features derived from these support images, the network has access to
textures via support image patches. Figure 2 illustrates the attention mechanism.
In previous conditional PixelCNN works, the encoding f (s) was shared across all pixels. However,
this can be sub-optimal for several reasons. First, at different points of generating the target image
x, different aspects of the support images may become relevant. Second, it can make learning
difficult, because the network will need to encode the entire support set of images into a single
global conditioning vector, fed to every output pixel. This single vector would need to transmit
information across all pairs of salient regions in the supporting images and the target image.
Figure 1: Sampling from Attention PixelCNN. Support images are overlaid in red to indicate the attention weights. The support sets can be viewed as small training sets, illustrating the connection between sample-conditional density estimation and learning to learn distributions.
To overcome this difficulty, we propose to replace the simple encoder function f(s) with a context-sensitive attention mechanism ft(s, x<t). It produces an encoding of the context that depends on
the image generated up until the current step t. The weights are shared over t.
We will use the following notation. Let the target image be x ∈ R^{H×W×3} and the support set images be s ∈ R^{S×H×W×3}, where S is the number of supports.

To capture texture information, we encode all supporting images with a shallow convolutional network, typically only two layers. Each hidden unit of the resulting feature map will have a small receptive field, e.g. corresponding to a 10 × 10 patch in a support set image. We encode these support images into a set of spatially-indexed key and value vectors.

Figure 2: The PixelCNN attention mechanism.

After encoding the support images in parallel, we reshape the resulting S × K × K × 2P feature maps to squeeze out the spatial dimensions, resulting in an SK^2 × 2P matrix.
p = f_patch(s) = reshape(CNN(s), [SK^2 × 2P])   (2)
p^key = p[:, 0:P],   p^value = p[:, P:2P]   (3)
where CNN is a shallow convolutional network. We take the first P channels as the patch key vectors p^key ∈ R^{SK^2×P} and the second P channels as the patch value vectors p^value ∈ R^{SK^2×P}. Together these form a queryable memory for image generation.
To query this memory, we need to encode both the global context from the support set s as well
as the pixels x<t generated so far. We can obtain these features simply by taking any layer of a
PixelCNN conditioned on the support set:
q_t = PixelCNN_L(f(s), x_<t),    (4)
where L is the desired layer of hidden unit activations within the PixelCNN network. In practice we
use the middle layer.
To incorporate the patch attention features into the pixel predictions, we build a scoring function using q and p^key. Following the design proposed by Bahdanau et al. (2014), we compute a normalized matching score α_tj between query pixel q_t and supporting patch p^key_j as follows:
e_tj = v^T tanh(q_t + p^key_j)    (5)
α_tj = exp(e_tj) / Σ_{k=1}^{SK²} exp(e_tk)    (6)
The resulting attention-gated context function can be written as:
f_t(s, x_<t) = Σ_{j=1}^{SK²} α_tj p^value_j    (7)
which can be substituted into the objective in equation 1. In practice we combine the attention
context features ft (s, x<t ) with global context features f (s) by channel-wise concatenation.
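For readers who prefer code to equations, the scoring and gating steps of equations 5-7 can be written as a short NumPy sketch; this is an illustration of the math, not the authors' implementation, and the learned vector v is assumed to have the same dimensionality as the query.

import numpy as np

def attention_context(q_t, keys, values, v):
    # q_t: (P,) query features for the current pixel (equation 4)
    # keys, values: (N, P) patch memory with N = S*K*K (equations 2-3)
    # v: (P,) learned scoring vector
    scores = np.tanh(q_t[None, :] + keys) @ v        # e_tj   (equation 5)
    scores = scores - scores.max()                   # numerical stability
    alpha = np.exp(scores) / np.exp(scores).sum()    # alpha_tj (equation 6)
    context = alpha @ values                         # f_t(s, x_<t) (equation 7)
    return context, alpha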
This attention mechanism can also be straightforwardly applied to the multiscale PixelCNN architecture of Reed et al. (2017). In that model, pixel factors P (xt |x<t , ft (s, x<t )) are simply replaced
by pixel group factors P (xg |x<g , fg (s, x<g )), where g indexes a set of pixels and < g indicates all
pixels in previous pixel groups, including previously-generated lower resolutions.
We find that a few simple modifications to the above design can significantly improve performance.
First, we can augment the supporting images with a channel encoding relative position within the
image, normalized to [−1, 1]. One channel is added for x-position, another for y-position. When
patch features are extracted, position information is thus encoded, which may help the network
assemble the output image. Second, we add a 1-of-K channel for the supporting image label, where
K is the number of supporting images. This provides patch encodings information about which
global context they are extracted from, which may be useful e.g. when assembling patches from
multiple views of an object.
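A sketch of this augmentation, assuming NumPy arrays and a channels-last layout (the layout used in the original implementation is not specified here):

import numpy as np

def augment_support(support):
    # support: (K, H, W, 3) support images for one episode
    K, H, W, _ = support.shape
    ys, xs = np.meshgrid(np.linspace(-1, 1, H), np.linspace(-1, 1, W),
                         indexing="ij")
    pos = np.broadcast_to(np.stack([xs, ys], axis=-1), (K, H, W, 2))
    one_hot = np.broadcast_to(np.eye(K)[:, None, None, :], (K, H, W, K))
    # Result: (K, H, W, 3 + 2 + K) -- RGB, x/y position, 1-of-K image label
    return np.concatenate([support, pos, one_hot], axis=-1)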
3.2 FEW-SHOT LEARNING WITH META PIXELCNN
As an alternative to explicit conditioning with attention, in this section we propose an implicitly-conditioned version using gradient descent. This is an instance of what Finn et al. (2017a) called model-agnostic meta learning, because it works in the same way regardless of the network architecture. The conditioning pathway (i.e. flow of information from supports s to the next pixel xt)
introduces no additional parameters. The objective to minimize is as follows:
L(x, s; θ) = − log P(x; θ′), where θ′ = θ − α∇θ Linner(s; θ)    (8)
A natural choice for the inner objective would be Linner (s; θ) = − log P (s; θ). However, as shown
in Finn et al. (2017b) and similar to the setup in Neu & Szepesvári (2012), we actually have considerable flexibility here to make the inner and outer objectives different.
Any learnable function of s and θ could potentially learn to produce gradients that increase
log P(x; θ′). In particular, this function does not need to compute log likelihood, and does not even
need to respect the causal ordering of pixels implied by the chain rule factorization in equation 1.
Effectively, the model can learn to learn by maximum likelihood without likelihoods.
As input features for computing Linner(s, θ), we use the L-th layer of spatial features q = PixelCNN_L(s, θ) ∈ R^{S×H×W×Z}, where S is the number of support images (acting as the batch dimension) and Z is the number of feature channels used in the PixelCNN. Note that this is the
same network used to model P (x; θ).
The features q are fed through a convolutional network g (whose parameters are also included in θ)
producing a scalar, which is treated as the learned inner loss Linner . In practice, we used α = 0.1,
and the encoder had three layers of stride-2 convolutions with 3 × 3 kernels, followed by L2 norm
of the final layer features. Since these convolutional weights are part of θ, they are learned jointly
with the generative model weights by minimizing equation 8.
Algorithm 1 Meta PixelCNN training
1: θ: Randomly initialized model parameters
2: p(s, x): Distribution over support sets and target outputs.
3: while not done do                                   ▷ Training loop
4:   Sample a batch of M support sets and target outputs {si, xi}_{i=1}^M ∼ p(s, x)
5:   for all si, xi do
6:     qi = PixelCNN_L(si, θ)                          ▷ Compute support set embedding as L-th layer features
7:     θ′_i = θ − α∇θ g(qi, θ)                         ▷ Adapt θ using Linner(si, θ) = g(qi, θ)
8:   θ = θ − β∇θ Σ_i −log P(xi; θ′_i)                  ▷ Update parameters using maximum likelihood
Algorithm 1 describes the training procedure for Meta PixelCNN. Note that in the outer loop step
(line 8), the distribution parametrized by θ′_i is not explicitly conditioned on the support set images,
but implicitly through the weight adaptation from θ in line 7.
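The following first-order sketch of one training step mirrors Algorithm 1; grad, inner_loss, and nll are placeholders for an autodiff gradient routine, the learned loss g(q, θ), and the PixelCNN negative log-likelihood, none of which are defined here.

def meta_pixelcnn_step(theta, batch, grad, alpha, beta, inner_loss, nll):
    # theta: dict of parameter arrays; batch: list of (support, target) pairs
    outer_grads = {k: 0.0 for k in theta}
    for support, target in batch:
        # Inner step (line 7): one gradient step on the learned loss.
        g_in = grad(lambda th: inner_loss(support, th), theta)
        theta_adapted = {k: theta[k] - alpha * g_in[k] for k in theta}
        # Outer step (line 8): maximum-likelihood gradient at the adapted
        # parameters; a full implementation also differentiates through the
        # inner update rather than using this first-order approximation.
        g_out = grad(lambda th: nll(target, th), theta_adapted)
        for k in theta:
            outer_grads[k] = outer_grads[k] + g_out[k]
    return {k: theta[k] - beta * outer_grads[k] for k in theta}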
4 EXPERIMENTS
In this section we describe experiments on image flipping, Omniglot, and Stanford Online Products.
In all experiments, the support set encoder f (s) has the following structure: in parallel over support
images, a 5 × 5 conv layer, followed by a sequence of 3 × 3 convolutions and max-pooling until the
spatial dimension is 1. Finally, the support image encodings are concatenated and fed through two
fully-connected layers to get the support set embedding.
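In outline, and using placeholder callables for the layers (the layer sizes below are not specified by the paper), the encoder f(s) amounts to the following shape bookkeeping:

def encode_support_set(support, conv5, conv3_pool_blocks, fc1, fc2):
    # support: (S, H, W, C) support images, processed in parallel
    h = conv5(support)                      # 5x5 convolution per image
    for block in conv3_pool_blocks:
        h = block(h)                        # 3x3 conv + max-pool, repeated
                                            # until the spatial size is 1x1
    per_image = h.reshape(h.shape[0], -1)   # one code per support image
    flat = per_image.reshape(-1)            # concatenate the S codes
    return fc2(fc1(flat))                   # two fully-connected layers -> f(s)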
4.1 IMAGENET FLIPPING
As a diagnostic task, we consider the problem of image flipping as few-shot learning. The “support
set” contains only one image and is simply the horizontally-flipped target image. A trivial algorithm
exists for this problem, which of course is to simply copy pixel values directly from the support to
the corresponding target location. We find that Attention PixelCNN did indeed learn to solve the task; interestingly, however, the baseline conditional PixelCNN and Meta PixelCNN did not.
We trained the model on ImageNet (Deng et al., 2009) images resized to 48×48 for 30K steps using
RMSProp with learning rate 1e−4 . The network was a 16-layer PixelCNN with 128-dimensional
feature maps at each layer, with skip connections to a 256-dimensional penultimate layer before
pixel prediction. The baseline PixelCNN is conditioned on the 128-dimensional encoding of the
flipped image at each layer; f (s) = f (x0 ), where x0 is the mirror image of x. The Attention
PixelCNN network is exactly the same for the first 8 layers, and the latter 8 layers are conditioned
also on attention features ft (s, x<t ) = ft (x0 , x<t ) as described in section 3.1.
[Figure 3 panels: source images and samples without attention, then source images and samples with attention.]
Figure 3: Horizontally flipping ImageNet images. The network using attention learns to mirror, while the network without attention does not.
Figure 3 shows the qualitative results for several validation set images. We observe that the baseline
model without attention completely fails to flip the image or even produce a similar image. With
attention, the model learns to consistently apply the horizontal flip operation. However, it is not entirely perfect: one can observe slight mistakes on the upper and left borders. This makes sense
because in those regions, the model has the least context to predict pixel values. We also ran the
experiment on 24 × 24 images; see figure 6 in the appendix. Even in this simplified setting, neither the baseline conditional PixelCNN nor Meta PixelCNN learned to flip the image.
Quantitatively, we also observe a clear difference between the baseline and the attention model. The
baseline achieves 2.64 nats/dim on the training set and 2.65 on the validation set. The attention
model achieves 0.89 and 0.90 nats/dim, respectively. During sampling, Attention PixelCNN learns
a simple copy operation in which the attention head proceeds in right-to-left raster order over the
input, while the output is written in left-to-right raster order.
4.2 OMNIGLOT
In this section we benchmark our model on Omniglot (Lake et al., 2013), and analyze the learned
behavior of the attention module. We trained the model on 26 × 26 binarized images and a 45 − 5
split into training and testing character alphabets as in Bornschein et al. (2017).
To avoid over-fitting, we used a very small network architecture. It had a total of 12 layers with 24
planes each, with skip connections to a penultimate layer with 32 planes. As before, the baseline
model conditioned each pixel prediction on a single global vector computed from the support set.
The attention model is the same for the first half (6 layers), and for the second half it also conditions
on attention features.
The task is set up as follows: the network sees several images of a character from the same alphabet,
and then tries to induce a density model of that character. We evaluate the likelihood on a held-out
example image of that same character from the same alphabet.
All PixelCNN variants achieve state-of-the-art likelihood results (see table 1). Attention PixelCNN
significantly outperforms the other methods, including PixelCNN without attention, across 1, 2, 4
                            Number of support set examples
Model                       1             2             4             8
Bornschein et al. (2017)    0.128(−−)     0.123(−−)     0.117(−−)     −−(−−)
Gregor et al. (2016)        0.079(0.063)  0.076(0.060)  0.076(0.060)  0.076(0.057)
Conditional PixelCNN        0.077(0.070)  0.077(0.068)  0.077(0.067)  0.076(0.065)
Attention PixelCNN          0.071(0.066)  0.068(0.064)  0.066(0.062)  0.064(0.060)
Table 1: Omniglot test(train) few-shot density estimation NLL in nats/dim. Bornschein et al. (2017) refers to Variational Memory Addressing and Gregor et al. (2016) to ConvDRAW.
and 8-shot learning. PixelCNN and Attention PixelCNN models are also fast to train: 10K iterations
with batch size 32 took under an hour using NVidia Tesla K80 GPUs.
We also report new results of training a ConvDRAW (Gregor et al., 2016) on this task. While the likelihoods are significantly worse than those of Attention PixelCNN, they are otherwise state-of-the-art, and qualitatively the samples look as good. We include ConvDRAW samples on Omniglot for comparison in the appendix section 6.2.
PixelCNN Model              NLL test(train)
Conditional PixelCNN        0.077(0.067)
Attention PixelCNN          0.066(0.062)
Meta PixelCNN               0.068(0.065)
Attention Meta PixelCNN     0.069(0.065)
Table 2: Omniglot NLL in nats/pixel with four support examples. Attention Meta PixelCNN is a model combining attention with gradient-based weight updates for few-shot learning.
Meta PixelCNN also achieves state-of-the-art likelihoods, only outperformed by Attention PixelCNN (see Table 2). Naively combining attention and meta learning does not seem to help. However, there are likely more effective ways to combine them, such as varying the inner loss function or using multiple meta-gradient steps, which we leave as future work.
[Figure 4 panels: supports, PixelCNN samples, Attention PixelCNN samples, Meta PixelCNN samples.]
Figure 4: Typical Omniglot samples from PixelCNN, Attention PixelCNN, and Meta PixelCNN.
Figure 1 shows several key frames of the attention model sampling Omniglot. Within each column,
the left part shows the 4 support set images. The red overlay indicates the attention head read
weights. The red attention pixel is shown over the center of the corresponding patch to which it
attends. The right part shows the progress of sampling the image, which proceeds in raster order.
We observe that as expected, the network learns to attend to corresponding regions of the support
set when drawing each portion of the output image. Figure 4 compares results with and without
attention. Here, the difference in likelihood clearly correlates with improvement in sample quality.
4.3 STANFORD ONLINE PRODUCTS
In this section we demonstrate results on natural images from online product listings in the Stanford
Online Products Dataset (Song et al., 2016). The data consists of sets of images showing the same
product gathered from eBay product listings. There are 12 broad product categories. The training
set has 11,318 distinct objects and the testing set has 11,316 objects.
The task is, given a set of 3 images of a single object, induce a density model over images of
that object. This is a very challenging problem because the target image camera is arbitrary and
unknown, and the background may also change dramatically. Some products are shown cleanly
with a white background, and others are shown in a usage context. Some views show the entire
product, and others zoom in on a small region.
For this dataset, we found it important to use a multiscale architecture as in Reed et al. (2017).
We used three scales: 8 × 8, 16 × 16 and 32 × 32. The base scale uses the standard PixelCNN
architecture with 12 layers and 128 planes per layer, with 512 planes in the penultimate layer. The
upscaling networks use 18 layers with 128 planes each. In Attention PixelCNN, the second half of
the layers condition on attention features in both the base and upscaling networks.
[Figure 5 panels: source images with samples from the attention model (With Attn) and the non-attentive model (No Attn).]
Figure 5: Stanford online products. Samples from Attention PixelCNN tend to match textures and colors from the support set, which is less apparent in samples from the non-attentive model.
Figure 5 shows the result of sampling with the baseline PixelCNN and the attention model. Note
that in cases where fewer than 3 images are available, we simply duplicate other support images.
We observe that the baseline model can sometimes generate images of the right broad category,
such as bicycles. However, it usually fails to learn the style and texture of the support images. The
attention model is able to more accurately capture the objects, in some cases starting to copy textures
such as the red character depicted on a white mug.
Interestingly, unlike on the other datasets, we do not observe a quantitative benefit in terms of test likelihood from the attention model: the baseline model and the attention model achieve 2.15 and 2.14 nats/dim on the validation set, respectively. While likelihood appears to be a useful training objective, and when combined with attention it can yield compelling samples, this suggests that quantitative criteria besides likelihood may be needed for evaluating few-shot visual concept learning.
5 CONCLUSIONS
In this paper we adapted PixelCNN to the task of few-shot density estimation. Comparing to several
strong baselines, we showed that Attention PixelCNN achieves state-of-the-art results on Omniglot
and also promising results on natural images. The model is very simple and fast to train. By looking
at the attention weights, we see that it learns sensible algorithms for generation tasks such as image
mirroring and handwritten character drawing. With Meta PixelCNN, we also showed that recently proposed methods for gradient-based meta learning can be used for few-shot density estimation, achieving state-of-the-art likelihoods on Omniglot.
REFERENCES
Marcin Andrychowicz, Misha Denil, Sergio Gomez, Matthew W Hoffman, David Pfau, Tom Schaul,
Brendan Shillingford, and Nando de Freitas. Learning to learn by gradient descent by gradient
descent. 2016.
Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. Neural machine translation by jointly
learning to align and translate. arXiv preprint arXiv:1409.0473, 2014.
Sergey Bartunov and Dmitry P. Vetrov. Fast adaptation in generative models with generative matching networks. arXiv preprint arXiv:1612.02192, 2016.
Jörg Bornschein, Andriy Mnih, Daniel Zoran, and Danilo J. Rezende. Variational memory addressing in generative models. 2017.
Yutian Chen, Matthew W. Hoffman, Sergio Gomez Colmenarejo, Misha Denil, Timothy P. Lillicrap,
and Nando de Freitas. Learning to learn for global optimization of black box functions. In ICML,
2017.
Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. Imagenet: A large-scale
hierarchical image database. In CVPR, pp. 248–255, 2009.
Yan Duan, Marcin Andrychowicz, Bradly Stadie, Jonathan Ho, Jonas Schneider, Ilya Sutskever, Pieter Abbeel, and Wojciech Zaremba. One-shot imitation learning. arXiv preprint arXiv:1703.07326, 2017.
Chelsea Finn, Pieter Abbeel, and Sergey Levine. Model-agnostic meta-learning for fast adaptation
of deep networks. 2017a.
Chelsea Finn, Tianhe Yu, Tianhao Zhang, Pieter Abbeel, and Sergey Levine. One-shot visual imitation learning via meta-learning. arXiv preprint arXiv:1709.04905, 2017b.
Jonas Gehring, Michael Auli, David Grangier, Denis Yarats, and Yann N Dauphin. Convolutional
sequence to sequence learning. arXiv preprint arXiv:1705.03122, 2017.
Karol Gregor, Ivo Danihelka, Alex Graves, Danilo J. Rezende, and Daan Wierstra. Draw: A recurrent neural network for image generation. In Proceedings of The 32nd International Conference
on Machine Learning, pp. 1462–1471, 2015.
Karol Gregor, Frederic Besse, Danilo J. Rezende, Ivo Danihelka, and Daan Wierstra. Towards
conceptual compression. In Advances In Neural Information Processing Systems, pp. 3549–3557,
2016.
Harry F Harlow. The formation of learning sets. Psychological review, 56(1):51, 1949.
Sepp Hochreiter, A Steven Younger, and Peter R Conwell. Learning to learn using gradient descent.
In ICANN, pp. 87–94. Springer, 2001.
Brenden M Lake, Ruslan R Salakhutdinov, and Josh Tenenbaum. One-shot learning by inverting a
compositional causal process. In NIPS, pp. 2526–2534, 2013.
Gergely Neu and Csaba Szepesvári. Apprenticeship learning using inverse reinforcement learning
and gradient methods. arXiv preprint arXiv:1206.5264, 2012.
Sachin Ravi and Hugo Larochelle. Optimization as a model for few-shot learning. 2016.
Sachin Ravi and Hugo Larochelle. Optimization as a model for few-shot learning. In ICLR, 2017.
Scott Reed, Zeynep Akata, Xinchen Yan, Lajanugen Logeswaran, Bernt Schiele, and Honglak Lee.
Generative adversarial text-to-image synthesis. In ICML, pp. 1060–1069, 2016.
Scott E. Reed, Aäron van den Oord, Nal Kalchbrenner, Sergio Gómez, Ziyu Wang, Dan Belov, and
Nando de Freitas. Parallel multiscale autoregressive density estimation. In ICML, 2017.
Danilo J. Rezende, Ivo Danihelka, Karol Gregor, Daan Wierstra, et al. One-shot generalization
in deep generative models. In Proceedings of The 33rd International Conference on Machine
Learning, pp. 1521–1529, 2016.
Adam Santoro, Sergey Bartunov, Matthew Botvinick, Daan Wierstra, and Timothy Lillicrap. Meta-learning with memory-augmented neural networks. In ICML, 2016.
Pranav Shyam, Shubham Gupta, and Ambedkar Dukkipati. Attentive recurrent comparators. In
ICML, 2017.
Linda Smith and Michael Gasser. The development of embodied cognition: Six lessons from babies.
Artificial life, 11(1-2):13–29, 2005.
Hyun Oh Song, Yu Xiang, Stefanie Jegelka, and Silvio Savarese. Deep metric learning via lifted
structured feature embedding. In IEEE Conference on Computer Vision and Pattern Recognition
(CVPR), 2016.
Elizabeth S Spelke and Katherine D Kinzler. Core knowledge. Developmental science, 10(1):89–96,
2007.
Sebastian Thrun and Lorien Pratt. Learning to learn. Springer Science & Business Media, 1998.
Aäron van den Oord, Nal Kalchbrenner, Oriol Vinyals, Lasse Espeholt, Alex Graves, and Koray
Kavukcuoglu. Conditional image generation with PixelCNN decoders. In NIPS, 2016.
Oriol Vinyals, Charles Blundell, Tim Lillicrap, Daan Wierstra, et al. Matching networks for one
shot learning. In NIPS, 2016.
Kelvin Xu, Jimmy Ba, Ryan Kiros, Kyunghyun Cho, Aaron Courville, Ruslan Salakhudinov, Rich
Zemel, and Yoshua Bengio. Show, attend and tell: Neural image caption generation with visual
attention. In International Conference on Machine Learning, pp. 2048–2057, 2015.
6 APPENDIX
6.1 ADDITIONAL SAMPLES
[Figure 6 panels: PixelCNN, Attention PixelCNN, and Meta PixelCNN samples.]
Figure 6: Flipping 24×24 images, comparing global-conditional, attention-conditional and gradient-conditional (i.e. MAML) PixelCNN.
6.2 QUALITATIVE COMPARISON TO CONVDRAW
Although all PixelCNN variants outperform the previous state-of-the-art in terms of likelihood, prior
methods can still produce high quality samples, in some cases clearly better than the PixelCNN samples. Of course, there are other important factors in choosing a model that may favor autoregressive
models, such as training time and scalability to few-shot density modeling on natural images. Also,
the Attention PixelCNN has only 286K parameters, compared to 53M for the ConvDRAW. Still, it
is notable that likelihood and sample quality lead to conflicting rankings of several models.
The conditional ConvDRAW model used for these experiments is a modification of the models introduced in Gregor et al. (2015) and Rezende et al. (2016), where the support set images are first encoded with 4 convolution layers without any attention mechanism and then concatenated to the ConvLSTM state at every DRAW step (we used 12 DRAW steps for this paper). The model was trained using the same protocol as the PixelCNN experiments.
[Figure 7 panels: support set examples, Attention PixelCNN samples (test NLL = 0.065 nats/dim), and ConvDRAW samples (test NLL = 0.076 nats/dim).]
Figure 7: Comparison to ConvDRAW in 4-shot learning.
Capri: A Control System for Approximate Programs
arXiv:1706.00767v1 [] 2 Jun 2017
Swarnendu Biswas, Yan Pei, Donald S. Fussell, and Keshav Pingali
University of Texas at Austin (USA)
[email protected], [email protected], [email protected], [email protected]
Abstract
Approximate computing trades off accuracy of results for resources such as energy or computing time. There is a
large and rapidly growing literature on approximate computing that has focused mostly on showing the benefits of
approximation. However, we know relatively little about how to control approximation in a disciplined way.
This document briefly describes our published work on controlling approximation for non-streaming programs that
have a set of “knobs” that can be dialed up or down to control the level of approximation of different components in the
program. The proposed system, Capri, solves this control problem as a constrained optimization problem. Capri uses
machine learning to learn cost and error models for the program, and uses these models to determine, for a desired
level of approximation, knob settings that optimize metrics such as running time or energy usage. Experimental results
with complex benchmarks from different problem domains demonstrate the effectiveness of this approach.
This report outlines improvements and extensions to the existing Capri system to address its limitations, including a
complete rewrite of the software, and discusses directions for follow up work. The document also includes instructions
and guidelines for using the new Capri infrastructure.
1 Introduction
There is growing interest in approximate computing as a way of reducing the energy and time required to execute
applications [2, 4, 49, 55, 63]. In conventional computing, programs are usually treated as implementations of
mathematical functions, so there is a precise output that must computed for a given input. In many problem domains,
it is sufficient to produce some approximation of this output; for example, when rendering a scene in graphics, it is
acceptable to take computational short-cuts if human beings cannot tell the difference in the rendered scene.
In this paper, we focus on a class of approximate programs that we call tunable approximate programs. Intuitively,
these programs have one or more knobs or parameters that can be changed to vary the fidelity of the produced output.
These knobs might control the number of iterations performed by a loop [8, 44], determine the precision with which
floating-point computations are performed [46, 51], or switch between precise and approximate hardware [16]; for the
purposes of this paper, the source of approximation does not matter so long as the fidelity of the output is changed by
adjusting the knobs.
There is now a fairly large literature on this subject, some of which is surveyed in Section 2. Most of this work
addresses what we call the forward problem in this paper: they show that for some programs, particular techniques such
as skipping loop iterations or tasks, within limits, degrade output quality in an acceptable way while reducing energy
or running time. Other work has focused on type systems and static analyses to ensure that computational short-cuts
do not affect portions of the program that may be critical to correctness such as control-flow decisions or memory
management [11, 34, 49].
However, exploiting approximation effectively requires the solution to what we call the inverse problem in this
paper: given a program with knobs that control execution parameters like the number of the iterations executed by a
loop and a lower bound on output quality, how do we set the knobs optimally to minimize energy or running time? This
is a classical optimal control problem. What makes the problem particularly difficult is that for most programs, optimal
values of knob settings are very dependent on the values of inputs, as we show in Section 3, so auto-tuning, the standard
parameter optimization technique used in computer systems, is not useful.
This paper describes our published work on solving the inverse problem for tunable approximate programs [56].
Roughly speaking, given a permissible error for the output, we want to set the knobs to minimize computational costs,
such as running time or energy, while meeting the error constraint. The work describes a solution to the proactive
control problem for non-streaming programs that consist of components controlled by one or more knobs and in which
the error and cost behaviors are substantially different for different inputs. Our approach is to treat the control problem
as a constrained optimization problem in which an objective function such as energy is minimized, subject to constraints
such as a lower bound on the acceptable output quality. The major challenge is that this formulation requires us to
know the objective and constraint functions, but in general these are complex functions that we do not know and cannot
write down in closed form. We deal with this by modeling these functions using machine learning techniques. The
resulting Capri control system [56], which is an example of open-loop control [3], is fairly successful in controlling
approximation in a principled way in complex applications from several domains including machine learning, image
processing and graph analytics.
This paper extends the scope of our published work [56], by first highlighting limitations of the existing control
system, such as, potential lack of scalability, and neglecting the prediction error from cost and error models. We discuss
follow up work to Capri that addresses these issues. A requisite for extending Capri is to reimplement the system in a
scalable and modular fashion. This paper discusses our new implementation in detail to acquaint potential users with
the internals of the Capri control system. We present approximation results with the new Capri implementation.
2 Related Work
Approximation opportunities in software and hardware. Loop perforation [55] explores skipping iterations during
loop execution. Rinard explores randomly discarding tasks in parallel applications [43]. Rinard [44] and Campanoni
et al. [10] explore relaxing synchronization in parallel applications. Karthik et al. explore different algorithmic
level approximation schemes on a video summarization algorithm [58]. Samadi et al. develop methods to recognize
patterns in programs that provide approximation opportunities [47]. These techniques could be used to provide knobs
automatically and thus complement our work.
A distortion model using linear regression was used by Rinard to demonstrate the feasibility of their approximation
techniques [43]. The results in this paper (Section 4.4) show that linear regression is not useful for modeling quality
and cost.
Researchers have proposed several hardware designs for exploiting approximate computing [15, 16, 33, 41, 50, 54].
Our techniques can be useful in choosing how to most efficiently map programs onto such hardware and thus increase
the effectiveness of such approaches.
Reactive control of streaming applications. In this problem, the system is presented with a stream of inputs in
which successive inputs are assumed to be correlated with each other, and results from processing one input can be used
to tune the computation for succeeding inputs. The Green System [4] periodically monitors QoS values and recalibrates
using heuristics whenever the QoS is lower than a specified level. PowerDial [25] leverages feedback control theory
for recalibration. Argo [20] is an autotuning system for adapting application performance to changes in multicore
resources. SAGE [48] exploits this approach on GPU platforms. Fang et al. use simulated annealing to adjust the knob
settings [17]. The problem considered in this paper is fundamentally different since it involves proactive control of an
application with a single input rather than reactive control for a stream of inputs. However, the techniques described in
this paper may be applicable to reactive control as well.
Auto-tuning. Auto-tuning explores a space of exact implementations to optimize a cost metric like running time; in
contrast, the control problem defined in this paper deals with both error and cost dimensions. Several papers [2, 14]
have extended the PetaBricks [1] auto-tuning system to include an error bound. Ding et al. group training inputs into
clusters based on user-provided features, and auto-tuning is used to find optimal knob settings for each cluster for given
error bounds [14]. For a new input, optimal knob settings for the same error bounds are determined by classifying
the input into one of the clusters and using the predetermined knob settings for that cluster. Auto-tuning is used by
Precimonious [46] to lower precision of floating point types to improve performance for a particular accuracy constraint.
The main difference between our approach and auto-tuning approaches is that our approach builds error and cost
models that can be used to control knobs for any error constraint presented during the online phase, without requiring
re-training. Since auto-tuning approaches do not build models, they do not have the ability to generalize their results
from the constraints they were trained for to other constraints. Note that the clustering-classification approach can be
combined with our approach by clustering the training inputs and building a different model for each cluster.
Programming language support. EnerJ [49] proposes a type system to separate exact and approximate data in
the program. Rely [11] uses static analysis techniques to quantify the errors in programs on approximate hardware.
Ringenburg et al. [45] developed tools for debugging approximate programs. None of these tools deal with controlling
the tradeoff of error versus cost.
Error guarantees. Zhu et al. formulated a randomized program transformation which trades off expected error versus
performance as an optimization problem [63]. However, their formulation assumes very small variations of errors across
inputs, an assumption violated in all of our complex real-world benchmark applications. They also assume the existence
of an a priori error bound for each approximation in the program and that the error propagation is bounded by a linear
function. These assumptions make it hard to apply this approach to real-world applications. For example, we know
of no non-trivial error bounds for our benchmarks. Chisel [34] extends Rely [11] to use integer linear programming
(ILP) to optimize the selection of instructions/data executed/stored in approximate hardware. The ILP constraints are
generated by static analysis, which propagates errors through the program. While they consider input reliability, i.e. the
probability that an input contains errors, they do not deal with input sensitivity of the error function. Moreover, their
error propagation method requires that the error function be differentiable and their static analysis technique cannot
deal with input-dependent loops, which are common in our benchmarks and many other applications.
ApproxHadoop [21] applies statistical sampling theory to Hadoop tasks for controlling input sampling and task
dropping. While statistical sampling theory gives nice error guarantees, the application of this technique is restricted.
Mahajan et al. [29] uses neural networks to predict whether to invoke approximate accelerators or execute precise code
for a quality constraint.
Analytic properties of programs. Several techniques exist to verify whether a program is Lipschitz-continuous [12].
Smooth interpretation [13] can smooth out irregular features of a program. Given the input variability exhibited in our
applications, analytic properties usually provide very loose error bounds and are not helpful for setting knobs.
3 Problem Formulation
We describe the formulation of the proactive control problem we use in this paper, justifying it by describing other
reasonable formulations and explaining why we do not use them. To keep notation simple, we consider a program that
can be controlled with two knobs K1 and K2 that take values from finite sets κ1 and κ2 respectively. We write K1 : κ1
and K2 : κ2 to denote this, and use k1 and k2 to denote particular settings of these knobs. The formulation generalizes
to programs with an arbitrary number of knobs in an obvious way.
It is convenient to define the following functions.
• Output: In general, the output value of the tunable program is a function of the input value i, and knob settings k1
and k2 . Let f (i, k1 , k2 ) be this function.
• Error/quality degradation: Let fe (i, k1 , k2 ) be the magnitude of the output error or quality degradation for input i
and knob settings k1 and k2 .
• Cost: Let fc (i, k1 , k2 ) be the cost of computing the output for input i with knob settings k1 and k2 . This can be
the running time, energy or other execution metric to be optimized.
We formulate the control problem as an optimization problem in which the error is bounded for the particular input
of interest. This optimization problem is difficult to solve, so we formulate a different problem in which the expected
error over all inputs is less than the given error bound, with some probability. This gives the implementation flexibility
in finding low-cost solutions.
One way to formulate the control problem informally is the following: given an input value and a bound on the
output error, find knob settings that (i) meet the error bound and (ii) minimize the cost. This can be formulated as the
following constrained optimization problem.
[Figure 1 and Figure 2: log-scale scatter plots of output error (roughly 1e−04 to 1e+00) against cost in ms (roughly 1e+06 to 1e+08) for GEM.]
Figure 1: Cost vs. error for GEM. Each dot represents one knob setting for one input. Different colors represent different inputs.
Figure 2: Pareto-optimal curves for GEM benchmark. Different lines represent different inputs.
Problem Formulation 1. Given:
• a program with knobs K1 :κ1 and K2 :κ2 , and
• a set of possible inputs I.
For input i ∈ I and error bound ε > 0, find k1 ∈ κ1, k2 ∈ κ2 such that
• fc(i, k1, k2) is minimized
• fe(i, k1, k2) ≤ ε
In the literature, the constraint fe(i, k1, k2) ≤ ε is said to define the feasible region, and values of (k1, k2) that
satisfy this constraint for a given input are said to lie within the feasible region for that input. The function fc (i, k1 , k2 )
is the objective function, and a solution to the optimization problem is a point that lies within the feasible region and
minimizes the objective function.
For most tunable programs, this is a very complex optimization problem since the Pareto-optimal knob settings vary
greatly for different inputs [56]. To get a sense of this complexity, consider the GEM benchmark, a graph partitioner for
social network graphs [60] studied in more detail in Sections 4.4 and 6.1. Figure 1 shows the results of running GEM
with a variety of inputs and different knob settings, and measuring the cost (running time of the program) and error of
the output of the resulting programs. In this figure, each point represents the cost and error for a single input graph and
knob settings combination; points that correspond to the same input graph are colored identically. It can be seen that
even for a single input graph, there are many knob combinations that produce the same output error, and that these
combinations have widely different costs.
For a given input graph and output error, we are interested in minimizing cost, so only the leftmost point for
each such combination is of interest. Figure 2 shows these Pareto-optimal points for each input graph. Since these
Pareto-optimal curves are very different for different inputs, it is difficult to produce the Pareto-optimal knob settings
for a given input and output error without exploring much of the space of knob settings for a given input, which is
intractable for non-trivial systems.
One way to simplify the control problem is to require only that the expected output error over all inputs be less
than some specified bound ε. Since some inputs may be more likely to be presented to the system than others, each
input can be associated with a probability that is the likelihood that input is presented to the system. This lets us give
more weight to more likely inputs, as is done in Valiant’s probably approximately correct (PAC) theory of machine
learning [59]. Since the cost function is still a function of the actual input, knob settings for a given value of ε will in general be different for different inputs, but the output error will be within the given error bound only in an average sense. In our approach, we consider a variation of this optimization problem, inspired by Valiant’s work [59], in which
we are also given a probability π with which the error bound must be met. Intuitively, values of π less than 1 give the
control system a degree of slack in meeting the error constraint, permitting the system to find lower cost solutions. This
control problem can be formulated as an optimization problem as follows.
[Figure 3 diagram: in the offline phase, a profiler runs the tunable program on training inputs under the programmer-provided error and cost metrics, and a model builder learns the error and cost models; in the online phase, the controller uses these models, the input, and (ε, π) to set the knobs of the tunable program. Blue boxes are provided by programmers.]
Figure 3: Overview of the Capri control system
Problem Formulation 2. Given:
• a program with knobs K1 : κ1 and K2 : κ2,
• a set of possible inputs I, and
• a probability function p such that for any i ∈ I, p(i) is the probability of getting input i.
For an input i ∈ I, error bound ε > 0, and a probability 1 ≥ π > 0 with which this error bound must be met, find k1 ∈ κ1, k2 ∈ κ2 such that
• fc(i, k1, k2) is minimized
• Σ_{(j∈I) ∧ (fe(j,k1,k2) ≤ ε)} p(j) ≥ π
If the term Σ_{(j∈I) ∧ (fe(j,k1,k2) ≤ ε)} p(j) (denoted by Pe(ε, k1, k2)) is greater than or equal to π, then (k1, k2) is in the feasible region for error bound ε. For future reference, we call this the fitness of knob setting (k1, k2) for error ε; intuitively, the greater the fitness of a knob setting, the more likely it is that it satisfies the error bound for the given ensemble of inputs. In the rest of the paper, we refer to Problem Formulation 2 as the “control problem.”
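Written out directly, the fitness is a single sum; the sketch below is a transcription of the definition above, assuming the error function can be evaluated (or has been profiled) for every input in the ensemble.

def fitness(knobs, eps, inputs, input_prob, error_fn):
    # knobs: tuple of knob settings (k1, k2, ...); eps: error bound
    # input_prob(i): probability p(i); error_fn(i, knobs): measured error fe
    return sum(input_prob(i) for i in inputs if error_fn(i, knobs) <= eps)

# A knob setting is in the feasible region for (eps, pi) exactly when
# fitness(knobs, eps, inputs, input_prob, error_fn) >= pi.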
4 Capri: Proactive Control for Approximate Programs
For the complex applications we are interested in, the error function fe (i, k1 , k2 ) and the cost function fc (i, k1 , k2 )
are non-linear functions of the inputs, and it is difficult if not impossible to derive closed-form expressions for them.
Therefore, we use machine learning techniques to build proxies for these functions offline, using a suitable collection of
training inputs. Figure 3 is an overview of the control system, which we call Capri. For a given program, the system
must be provided with a set of training inputs, and metrics for the error/quality of the output and the cost. The offline
portion of the system runs the program on these inputs using a variety of knob settings, and learns the functions fe and
fc. These models are inputs to the controller in the online portion of the system; given an input and values of ε and π,
the controller solves the control problem to estimate optimal knob settings. In the following, we describe the important
modules of the Capri control system.
4.1 Error Model
The error model is a proxy for the fitness function Pe(ε, k1, k2) and is used to determine whether a knob setting is in the feasible region. Intuitively, a knob setting is in the feasible region if the inputs for which the error is between 0 and ε have a combined probability mass greater than or equal to π. We use Bayesian networks [38] to determine this. A
Bayesian network is a directed acyclic graph (DAG) in which each node represents a random variable in the model and
each edge represents the dependence relationship between the variables corresponding to the nodes of its end points.
There are several ways to model the error probability distribution using a Bayesian network. We use a simple model
in which each of the knobs and the error is modeled as a random variable and the output error depends on all of the
knobs. The disadvantage of this simple model is that the size of the table for the output error is exponential in the
number of knobs (see Section 5.1); however, it works well for the applications we have investigated. Our system allows
new models for error to be plugged in easily into the overall framework (Section 5.1).
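As a minimal stand-in for such a plug-in model (not the Bayesian network used by Capri itself), one can store an empirical error histogram per knob combination during training and read the error probability off it:

from collections import defaultdict

def build_error_table(training_runs):
    # training_runs: iterable of (knob_setting_tuple, observed_error) pairs
    table = defaultdict(list)
    for knobs, err in training_runs:
        table[knobs].append(err)
    return {k: sorted(v) for k, v in table.items()}

def prob_error_below(table, knobs, eps):
    # Empirical estimate of P(error <= eps | knobs) over the training inputs.
    errs = table.get(knobs, [])
    return 0.0 if not errs else sum(e <= eps for e in errs) / len(errs)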
4.2 Cost Model
The cost model is the proxy for fc (i, k1 , k2 ). We model both the running time and total energy. For most algorithms, the
running time can vary substantially for different inputs; after all, even for simple algorithms like matrix multiplication,
the running time is a function of the input size. For complex irregular algorithms like the ones considered in this
proposal, running time will depend not only on the input size but also on other features of the input. For example, the
running time of a graph clustering algorithm is affected by the number of vertices and edges in the graph as well as the
number of clusters. Therefore, the running time is usually a complex function of input features and knob settings. Our
system currently requires the user to specify what these features are.
We use M5 [42], which builds tree-based models, to model the cost function fc . Input features and knob settings
define a multidimensional space; the tree model divides this space into a set of subspaces, and constructs a linear model
in each subspace. The division into sub-spaces is done automatically by M5, which is a major advantage of using
this system. Intuitively, this model can approximate cost well because the running time does not usually exhibit sharp
discontinuities with respect to knob settings.
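The sketch below uses a plain regression tree from scikit-learn as a stand-in for M5, assuming scikit-learn is available; M5 additionally fits a linear model in each leaf, which this simplification omits.

import numpy as np
from sklearn.tree import DecisionTreeRegressor

def fit_cost_model(input_features, knob_settings, measured_costs):
    # input_features: (N, F) user-specified features; knob_settings: (N, K)
    # measured_costs: (N,) profiled running times or energies
    X = np.hstack([input_features, knob_settings])
    return DecisionTreeRegressor(min_samples_leaf=5).fit(X, measured_costs)

def predict_cost(model, feature_vec, knobs):
    x = np.concatenate([feature_vec, knobs])[None, :]
    return float(model.predict(x)[0])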
4.3 Controller
The control algorithm must search the space of knob settings to find optimal knob settings, using the error and
cost models as proxies for fe and fc respectively. Our system is implemented so that new search strategies can be
incorporated seamlessly. This lets us evaluate model accuracy separately from search accuracy.
We evaluated two search algorithms: exhaustive search and Precimonious search [46]. If the error and cost models
are not expensive to evaluate and each knob has a finite number of settings, we can use exhaustive search. We sweep
over the entire space of knob settings, and for each knob setting, use the error model to determine if that knob setting is
in the feasible region. The cost model is then used to find a minimal cost point in the feasible region. In a large search
space, heuristics-based search is an effective way to trade-off search cost for quality of the result. Precimonious search,
which is based on the delta-debugging algorithm [22], is one such strategy. The algorithm starts with all knobs set to
the highest values, and attempts to lower these settings iteratively. Precimonious can quickly prune the search space but
the solution it finds may be a local minimum. Other search strategies can be implemented easily within Capri.
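For reference, the exhaustive strategy is a direct sweep over the knob cross-product, keeping the cheapest setting whose predicted fitness clears π; fitness_fn and cost_fn stand for the learned error and cost models, specialized to the current input's features.

import itertools

def exhaustive_control(knob_domains, eps, pi, fitness_fn, cost_fn):
    # knob_domains: one finite list of candidate values per knob
    best, best_cost = None, float("inf")
    for knobs in itertools.product(*knob_domains):
        if fitness_fn(knobs, eps) < pi:
            continue                  # outside the (estimated) feasible region
        c = cost_fn(knobs)
        if c < best_cost:
            best, best_cost = knobs, c
    return best                       # None if no setting is predicted feasible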
4.4 Results
We evaluate the control system on the following five complex applications: (i) GEM, a graph partitioner for social
networks [60], (ii) Ferret [5], a content-similarity based image search engine, (iii) ApproxBullet [40], a 3D physics game
engine, (iv) SGDSVM [8], a library for support vector machines, and (v) OpenOrd [31], a library for two-dimensional
graph layouts. We modified the code for these applications to permit control of approximations. These applications
provide between two and five knobs that allow tuning tradeoffs between a user-specified quality metric in each case and
the execution time or energy consumption. In addition, we did a blind test of the system using an unmodified radar
processing application [24], written by Hank Hoffmann at the University of Chicago.
Evaluation of the cost and error models. For each benchmark, we collected a set of inputs. To evaluate the error
and the cost models, the inputs were randomly partitioned into training and testing subsets. We evaluated our control
system for ε ranging from 0.0 to 1.0 and π ranging from 0.1 to 1.0. Training is done offline, so training time is not as
important as the accuracy of the cost and error models. Training time obviously increases with the number of training
inputs, but even for Ferret, which has the largest training set, it takes only 0.927 seconds to train the error model and
7
Ferret
GEM
●●
●
●
OpenOrd
●●●
●
0.75
●
●
●
●
●
●
●
●
●
● ● ●● ●
●
●
●●
●●
●
●
●●
●
● ●
●●
●●
●●●
●
● ● ●●
● ●
●
●●●● ●
●
●
●
● ●● ●
●
●●●
●
●
●●●
●
●●
●
●
●
●●
●
●
●
●
●
●
●
●●
●●
●
●
●●
●
●●
●
●●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
● ●●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●● ●
●
●
●
●
●
●
●
●
●●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●●
●
●
●
●
●
●
●●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●●
●
●
●
●●
●
●
●
●
●
●
●●●
●
● ●●●
● ●
●●
●●
●●
●
●
●
●
●
●
●
●●
●
●
●
●
●
●
●
●
●
●●
●
●
●●
●
●
●●
●
●
●
●
●
●
●
●
●
●
●
●●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●●
●
●
●●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●●
●
●
●
●●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
● ●
●
●
●
● ●
●
●
●●
●
●
●
●
●
●
●●
● ●
●
●●
● ● ●●
●
●
●
● ●●●
●
●
●
●
● ●
●
●
●
●
●
●
●●●● ●●
●
●
●
●
●
●
●
●
●
● ●● ● ●●
●
●
●
●
●
[Figure 4: Accuracy of cost and error models. Top row: Normalized Predicted Runtime vs. Normalized Actual Runtime for the Bullet, Ferret, GEM, OpenOrd, Radar, and SGD benchmarks. Bottom row: Predicted Fitness vs. Measured Fitness for the same benchmarks.]
Training the cost model took 133.366 seconds (about 2 minutes); this does not take into account the time to run the application programs.
Evaluating the accuracy of the cost model for a given application is straightforward: we sweep the space of test
inputs and knob settings, and for each point in this space, we compare the running time predicted by the cost model with
the actual execution time. The top charts in Figure 4 show the results for the applications in our test suite. In each graph,
the x-axis is the predicted running time and the y-axis is the measured running time. If the cost model is perfect, all
points should lie on the y=x line. Figure 4 shows that this is more or less true for Ferret and Radar. For GEM and SGD,
the predicted time is usually less than the actual execution time, and for Bullet and OpenORD, the over-predictions and
under-predictions are more or less evenly distributed. Radar implements a regular algorithm in which running time
depends on the size of the input. In contrast, GEM and OpenORD implement complex graph algorithms, so they are
more irregular in their behavior.
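The evaluation sweep described above can be outlined in Python as follows; cost_model.predict, run_program, features, knob_space, and test_inputs are placeholder names for the corresponding pieces of our infrastructure, not the actual Capri interfaces.

import itertools
import time

def sweep_cost_model(cost_model, test_inputs, knob_space, run_program, features):
    # Collect (predicted, actual) running-time pairs over all test inputs and knob settings.
    points = []
    for inp, knobs in itertools.product(test_inputs, knob_space):
        predicted = cost_model.predict(features(inp), knobs)
        start = time.perf_counter()
        run_program(inp, knobs)                 # execute the tunable program at this setting
        actual = time.perf_counter() - start
        points.append((predicted, actual))      # a perfect cost model puts every point on y = x
    return points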
Estimating the accuracy of the error model has to be more indirect since the model does not make error predictions for
individual inputs but only for an ensemble of inputs I. The error model is a proxy for the fitness function Pe(ε, k1, k2).
This proxy is constructed during the training phase by letting I be the set of training inputs. One way to evaluate the
accuracy of this proxy is to construct another proxy by letting I be the set of test inputs. If the model is accurate, these
two proxy functions, which we call the predicted fitness and measured fitness, will be equal.
The bottom charts of Figure 4 show the results of this experiment. The x and y axes in each graph are the predicted
and measured fitness respectively. We sweep over the space of (discretized) error values and knob settings, and for
each point in this space, we evaluate the two proxy functions and plot the point in the graph. We see that the error
model is very accurate for Bullet, Ferret and Radar, and less so for the other three benchmarks. For GEM and SGD,
most of the points lie above the y=x line, which means that the predicted fitness is usually less than the actual fitness.
Therefore, the feasible region determined by using the model may be smaller than the actual feasible region.
It is important to note that since the error and cost models are used only to rank knob settings in the feasible region,
more accurate models do not necessarily give better solutions to the control problem.
Optimizing run-time performance. Speedup is defined as the ratio of the running time with the knobs set for maximum quality to the running time at a particular knob setting. Table 1 shows speedups for each application for ε values between 0 and 0.5 and π values between 0.5 and 1.0. Each entry gives the average speedup over all test inputs for
π \ ε   0.0     0.1     0.2     0.3     0.4     0.5

Bullet
1.0     1.0     1.0     1.0     1.0     1.0     1.0
0.9     1.0     1.0     1.0     1.0     1.0     1.0
0.8     1.0     1.0     1.0     1.0     1.0     24.3
0.7     1.0     1.0     1.0     1.0     9.0     96.6
0.6     1.0     1.0     1.0     1.7     96.6    141.3
0.5     1.0     1.0     1.0     39.9    115.4   204.6

Ferret
1.0     1.0     1.0     1.0     1.0     1.1     1.1
0.9     1.1     1.4     1.4     1.6     1.7     1.9
0.8     1.1     1.4     1.6     1.6     1.9     1.9
0.7     1.1     1.4     1.6     1.7     1.9     2.0
0.6     1.2     1.4     1.6     1.7     1.9     2.0
0.5     1.2     1.4     1.6     1.7     2.0     2.0

GEM
1.0     NA      NA      1.3     1.4     1.6     1.9
0.9     NA      1.1     1.5     1.9     2.2     2.5
0.8     NA      1.2     1.7     2.1     2.4     2.6
0.7     NA      1.3     1.7     2.1     2.5     2.7
0.6     NA      1.4     2.0     2.2     2.6     2.7
0.5     NA      1.7     2.3     2.5     2.7     3.0

OpenOrd
1.0     NA      2.0     2.4     6.3     6.3     5.9
0.9     NA      2.9     6.5     6.0     5.9     8.4
0.8     NA      5.2     6.4     8.7     8.7     8.5
0.7     NA      6.3     6.1     8.5     8.8     8.5
0.6     NA      6.0     8.5     8.7     8.5     8.5
0.5     NA      8.5     8.5     8.6     8.4     8.4

SGD
1.0     NA      2.5     5.6     14.1    31.3    77.4
0.9     NA      11.8    30.3    43.0    56.3    94.9
0.8     NA      39.7    52.5    103.7   165.0   184.0
0.7     NA      73.3    101.4   139.9   176.1   271.7
0.6     1.0     97.5    136.7   207.0   302.0   395.3
0.5     1.0     104.6   161.1   259.1   302.0   418.4
Table 1: Speedups of the tuned programs for a subset of constraint space.
the knob settings found by the control algorithm based on exhaustive search, given (ε, π) constraints in the intervals
specified by the row and column indices.
Speedups depend on the application and the (ε, π) constraints. For each application, the top-left corner of the
constraint space is the “hard” region since the error must be low with high probability. The knob settings must be at or
close to maximum, and speedup will be limited. Table entries marked “NA” show where the control system was unable
to find any feasible solution for these hard constraints. In contrast, the bottom-right corner of the constraint space is the
“easier” region, so one would expect higher speedups. This is seen in all benchmarks. Overall, we see that controlling
the knobs in these applications can yield significant speedups in running time.
Effectiveness in finding optimal knob settings. While Table 1 shows speedups obtained from the knob settings
found by the control algorithm in different regions of the constraint space, it does not show how well these constraints
were actually met. To provide context, we have evaluated this both for our method and for a similar method using linear
regression to model both error and running time (linear regression can be seen as the simplest non-trivial model one can
build for these values). For each (ε, π) combination, we evaluated the quality of the achieved control.
Overall, the control system using the Bayes model for error and the M5 model for cost performs quite well for all
inputs and regions of the constraint space: for most points, it finds solutions and the cost difference from the oracle’s
solution is within 40%. The only noticeable problem is in SGD. A closer study showed that the feasible region found
by the Bayes error model is smaller than it should be and did not contain some low-cost points found by the oracle
control. This can be attributed to the fact that the predicted fitness function for SGD is somewhat conservative, as seen
in Figure 4.
In contrast, the control system based on linear regression performs quite poorly. No solutions are found in most
parts of the space, and even when solutions are found, the cost of the solutions is very sub-optimal.
Performance of the Radar processing application. We also performed a blind test of the system using a radar processing application [24]. Unlike the five applications described above, this code was already instrumented with knobs, so we used it out of the box. Using our machine-learning-based control scheme,
we were able to obtain speedups over a base fixed system configuration comparable to those obtained by hand tuning.
In contrast, models using linear regression were unable to find solutions in most of the constraint space.
Optimizing energy consumption. We note that a major advantage of our approach is that it can be used to optimize
not just running time but any metric for which a reasonable cost model can be constructed. In this section, we show
the results of applying the system to optimizing energy consumption for the same benchmarks. We measured energy
on an Intel Xeon E5-2630 CPU with 16 GB of memory. We used the Intel RAPL (Running Average Power Limit)
interface and PAPI to measure the energy consumption. This machine does not support DRAM counters, so what is
being measured is the CPU package energy consumption.
Table 2 shows the power savings obtained for our benchmarks for ε values between 0 and 0.5 and π values between 0.5 and 1.0. Each entry gives the average power savings over all test inputs for the knob settings found by our control algorithm given (ε, π) constraints in the intervals specified by the row and column indices. As expected, savings are
greater when the constraints are looser.
π \ ε   0.0     0.1     0.2     0.3     0.4     0.5

Bullet
1.0     1.0     1.0     1.0     1.0     1.0     1.0
0.9     1.0     1.0     1.1     1.0     1.0     1.0
0.8     1.0     1.1     1.0     1.0     1.0     1.0
0.7     1.0     1.0     1.0     1.0     1.0     1.0
0.6     1.0     1.0     1.0     1.0     1.0     1.2
0.5     1.0     1.0     1.0     1.0     1.2     1.3

Ferret
1.0     NA      1.0     1.0     1.0     1.1     1.1
0.9     1.0     1.5     1.7     1.8     1.8     1.8
0.8     1.1     1.6     1.8     1.8     1.8     1.8
0.7     1.1     1.7     1.8     1.8     1.8     1.8
0.6     1.4     1.8     1.8     1.8     1.8     1.8
0.5     1.4     1.8     1.8     1.8     1.8     1.8

GEM
1.0     NA      NA      1.3     1.5     1.7     1.9
0.9     NA      NA      1.7     2.0     2.1     2.3
0.8     NA      1.1     1.8     2.1     2.3     2.5
0.7     NA      1.2     1.8     2.3     2.5     2.5
0.6     NA      1.5     2.1     2.3     2.5     2.8
0.5     NA      1.7     2.3     2.5     2.8     2.8

OpenOrd
1.0     NA      2.4     6.1     7.2     7.2     8.9
0.9     NA      6.0     7.1     7.2     8.9     8.9
0.8     NA      6.0     7.2     8.9     8.9     8.9
0.7     NA      7.1     7.2     8.9     8.9     8.9
0.6     NA      7.2     8.9     8.9     8.9     8.9
0.5     NA      8.9     8.9     8.9     8.9     8.9

SGD
1.0     NA      21.6    59.5    83.3    108.3   107.3
0.9     NA      51.0    98.0    149.2   168.7   262.7
0.8     NA      91.0    192.5   266.0   265.0   319.0
0.7     NA      112.7   193.6   265.0   338.2   319.0
0.6     1.0     110.2   193.6   345.1   341.8   410.2
0.5     1.0     129.9   254.2   345.1   420.2   410.2

Radar
1.0     1.0     1.0     1.0     1.1     1.1     1.1
0.9     1.0     1.0     1.0     1.1     1.1     1.1
0.8     1.0     1.0     1.0     1.1     1.1     1.1
0.7     1.0     1.0     1.0     1.1     1.1     1.1
0.6     1.0     1.0     1.0     1.3     1.3     1.3
0.5     1.0     1.0     1.0     1.3     1.3     1.3
Table 2: Energy savings of the tuned programs for a subset of constraint space.
5 Extending Capri
The Capri system is an example of an open-loop control system, which uses a model of the system (in our case, the
tunable program) to determine optimal knob settings before the application is executed. However, the Capri control
system suffers from the following drawbacks.
5.1 Scaling to Large Tunable Programs
The open-loop control system described in Section 4 works well for programs that are a few hundred lines long and
have five or six knobs. This holds true especially for the Bayesian network-based error model that was used as a proxy
function for fe , as discussed in Section 4.1. There are many ways to model the error probability distribution using a
Bayesian network. The original Capri work used a simple model, where the output error E depends directly on the
settings of all of the knobs. Although this is simple, the size of the table for the output error is exponential in the number
of knobs. This was not a problem for the applications studied in the original Capri paper. However, the control system
may suffer from poor performance with applications that provide several (∼ 100s) knobs and therefore have a large
space of knob settings.
There are possible ways to improve the performance of a Bayesian network-based cost model (or based on any
machine learning model), and to scale the open-loop controller. In the following, we discuss a few opportunities:
• Reducing the size of the knob space: This can be achieved by (i) reducing the number of knobs that need to be
controlled simultaneously, a process that we call knob orthogonalization, and (ii) reducing the number of settings
for each knob.
The first step is to exploit phase behavior in long-running programs [52, 53]. For example, a Barnes-Hut n-body
code executes the following phases repeatedly: (i) build the spatial decomposition tree, (ii) compute the mass
and center of gravity of each spatial partition, (iii) compute force on each particle, and (iv) update position and
velocity of each particle. At any given point in the execution, the program is executing only one of these phases,
so the overall control problem can be decomposed into a set of smaller control problems, one for each phase,
thereby reducing the number of knobs that need to be controlled simultaneously.
The next step is to reduce the number of knobs by exploiting the 90/10 rule, which says that in most programs,
more than 90% of the execution time is spent in less than 10% of the code. By ignoring knobs outside such “hot”
regions, it may be possible to obtain most of the benefits of optimal control without the effort of controlling every
knob in a program. In Barnes-Hut for example, more than 90% of the time is spent in the force computation
phase, so it may be possible to ignore knobs in all other phases, at least for controlling computation time and
energy.
Once the number of knobs that need to be considered simultaneously has been minimized, reducing the number
of knob settings that need to be considered by the control system can be accomplished by using a mixture of
coarse-grain and fine-grain knobs. If the program output changes relatively slowly with the value of a particular
control variable, a coarse-grain knob with relatively few settings can be used to set the value of that variable,
reducing the size of the search space for optimal knob settings. Profiling with test data can be used to determine
the relative sensitivity of the output to particular control variables.
• Reducing search time for optimal knob settings: Our current control algorithm sweeps the knob space to find
optimal knob settings for a given input and desired quality guarantees. Although exhaustive search has worked
well for the small-scale applications we have considered so far, it obviously does not scale to large numbers
of knobs, so we will develop intelligent search algorithms to find optimal knob settings efficiently in a large
knob space. As mentioned in Section 4, we have experimented with the heuristic search strategy used in the
Precimonious system [46]. The results showed, as one might expect, that Precimonious was significantly faster
but found sub-optimal knob settings compared to our current exhaustive search strategy. In particular, for
the OpenOrd application, Precimonious search got stuck in a local minimum that was sub-optimal. We will
investigate search techniques that trade off computing time for solution quality.
• Scalable error models: The Bayesian error model in Capri has the virtue of being simple, but it does not scale
since the size of the conditional probability distribution tables increases exponentially with the number of knobs.
Abstractly, the error model is a function f (v1 , v2 , ..., vn , eb ) that maps a knob setting (v1 , v2 , ..., vn ) and an
error bound eb to the corresponding probability defined in Problem formulation 2. The simple Bayesian model
explicitly stores the probability for all combinations of knob settings and error bounds. In our studies, we have
found that the probability function changes quite slowly as the knob settings are changed. Therefore, we might
be able to usefully approximate the probability table by partitioning the knob space into subspaces and using
a simple model like a linear model within each subspace. This is what a tool like M5 will do automatically if
it is given the same training data as the Bayesian error model. We are investigating these model compression
techniques.
• Clustering inputs: Instead of building a single error model and cost model to handle all inputs, we can use
clustering techniques [6, 35] to cluster the inputs into a set of classes where in each class, the error and cost
behaviors are similar. For each class, we can build a separate quality and cost model using our approach. At
runtime, a given input is first classified and then the corresponding models are used to set the knobs. This may
improve both the accuracy and the scalability of learning and querying the quality and cost models, since the
complexities of the models can be reduced by considering a subset of input scenarios. Clustering has been used
successfully for auto-tuning in the Petabricks system [1]. Automatic feature extraction and selection techniques
may be useful for this problem; for example, they have been quite successful in the audio domain [32, 36].
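As an illustration of this last idea, the sketch below clusters training inputs by their feature vectors and builds one cost model per cluster; scikit-learn's KMeans is used as a stand-in clustering algorithm, and train_cost_model and input_features are hypothetical helpers rather than part of the existing Capri code.

import numpy as np
from sklearn.cluster import KMeans

def build_per_cluster_models(training_inputs, training_runs, n_clusters,
                             train_cost_model, input_features):
    # Cluster inputs by their features, then train one model per cluster.
    X = np.array([input_features(inp) for inp in training_inputs])
    clusterer = KMeans(n_clusters=n_clusters, n_init=10).fit(X)
    models = {}
    for c in range(n_clusters):
        members = [run for run, label in zip(training_runs, clusterer.labels_) if label == c]
        models[c] = train_cost_model(members)
    return clusterer, models

def models_for_input(clusterer, models, new_input, input_features):
    # At run time, classify the input and use the matching model.
    c = clusterer.predict(np.array([input_features(new_input)]))[0]
    return models[c]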
5.2 Closed-loop Control
Open-loop control systems cannot adapt during execution to compensate for model error. Such errors can be significant;
for the SGD benchmark for example, the Capri control system does not find some low-cost points found by the oracle
control because the Bayesian error model is overly conservative, as seen in Figure 4.
The need to compensate for modeling errors, particularly in the context of complex systems, presents an opportunity
for closed-loop control. In this approach, a function of the current system state and/or output is fed back as an input
to the control system so that system behavior can be optimized for subsequent computations. Closed-loop control
systems are generally applicable to a large class of iterative and streaming applications that have a notion of “progress.”
For an iterative application, each iteration represents progress, and provides the control system with an opportunity
for correcting the difference between the current value of a system variable and the desired “setpoint.” For streaming
computations, the application processes a sequence of inputs and produces a sequence of outputs, so the processing of
successive inputs represents progress. Closed-loop control systems are well-studied in control theory, and systematic
techniques for designing controllers with provably desirable properties are well understood, especially for linear, time-invariant systems. These techniques have proved to be adequate for simple cruise control systems in cars, autopilots in
aircraft, audio amplifiers, and basic process control systems in manufacturing.
Recently, there has been a surge in using closed-loop control to build adaptive software and hardware systems for
complex applications. However, there are several challenges: 1) building reasonable initial approximate cost and quality
models, 2) finding effective run-time metrics strongly correlated with cost and error/quality, 3) low overhead profiling
of these run-time metrics, and 4) updating knob settings, and cost and quality models efficiently. These ideas have
been explored in recent papers [4, 18, 19, 23, 26, 27, 30, 37] on adapting traditional control theory for use in computer
applications. Some of these systems consider a combination of system knobs, e.g. the number of cores used and their
clock rate, in addition to application knobs of the sort we use in Capri for open-loop control. However, existing systems
typically use a separate PID (proportional-integral-derivative) controller [3] for each type of knob and employ ad-hoc
techniques to combine these into an overall system. PID controllers have the advantage of not needing system models
but because of this, they cannot ensure optimal control; in addition, composing these controllers in ad hoc ways limits
the degree to which overall system behavioral properties can be guaranteed. They share these properties with systems
based on reinforcement learning [57]. Such ad-hoc techniques are ill-suited for several emerging classes of application
contexts such as exascale applications, which can execute for several days and which require tuning of additional
system-level knobs related to resource allocation such as load balancing and allocating cores [26].
We propose to extend our model-based open-loop control framework to provide a systematic approach for designing
closed-loop controllers that integrate the use of system and application knobs to achieve predictable, desirable system
behavior. To that end, we will extend the strategy used in the established area of Model-Predictive Control (MPC) for
traditional control systems [9, 61]. Traditional MPC systems are used to design relatively complex process control
systems for industrial plants, and can be more effective than simple PID controllers. Unlike PID controllers, they are
based on specific, closed-form dynamics models of the processes being controlled. Based on these models, an explicit
closed form objective function describing the desired system behavior as well as explicit closed-form constraints on
the range of behaviors allowed can be expressed. This results in a formulation of the control problem as a constrained
optimization problem, where the behavioral objective function is optimized subject to the specified constraints. Note
that this is similar to our formulation for the open-loop problem. In traditional MPC systems, these functions are
closed-form continuous functions to be optimized over an infinite time period. To make the problem computationally
tractable, a finite time horizon is imposed and the optimal trajectory of the control settings over that time horizon
is computed. Since in the real control system, knobs are set at discrete time intervals, the setting computed by the
optimizer for the first time interval is used by the controller. The optimization step is then repeated for the given
horizon, and the first step of the resulting trajectory actually used, and so on. Of course, this comes at the cost of
more expensive computation per time step than for PID controllers. In principle, the traditional MPC method can be
extended to non-linear systems, although the resulting nonlinear optimization problems may be too expensive to solve for real-time use with traditional methods.
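The receding-horizon structure we have in mind can be sketched as follows; optimize_over_horizon, apply_knobs, observe_progress, and update_models are hypothetical stand-ins for components that remain to be built, so this is an outline of the proposed controller rather than an implementation.

def mpc_control_loop(models, constraints, horizon, steps,
                     optimize_over_horizon, apply_knobs,
                     observe_progress, update_models):
    state = None
    for t in range(steps):                         # one step per iteration or streamed input
        # Solve the constrained optimization over the next `horizon` steps
        # using the current (approximate) cost and quality models.
        trajectory = optimize_over_horizon(models, state, constraints, horizon)
        apply_knobs(trajectory[0])                 # only the first setting is actually used
        feedback = observe_progress()              # run-time metrics correlated with cost/quality
        models = update_models(models, feedback)   # optional online model refinement
        state = feedback
    return models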
In the complex systems we wish to consider, we know of no closed-form models for the cost and quality functions,
but we can model these using machine learning techniques as described in Section 4. These will constitute the initial
approximate cost and quality models for the proposed closed-loop control system. We are currently working on finding
effective run-time metrics strongly correlated with cost and error/quality for the applications discussed in Section 4. We
are analyzing Simultaneous Localization And Mapping (SLAM) applications to determine sensitivity to platform knobs
as well as the best application knobs to use in trading off accuracy for computation time and energy savings [37]. We
are also exploring the incorporation of our MPC-based controllers into systems like APEX [26] for controlling exascale
computations (the current APEX system uses a simple proportional controller to control the number of cores assigned to
a computation, for example). We believe these kinds of real-world applications are a rich source of interesting problems
for our proposed extensions.
We are also working on methods for incrementally updating optimal knob settings using the models and feedback
information about the state of the computation. If this is done at each iteration of a streaming computation, we can
see that this fits the model of MPC control with a time horizon for optimization of a single time-step. The final step
will be to incorporate model updates into the system, as is done in approaches like Kalman filters in traditional control
theory [3]. In our current open-loop control system, we do not take into account the results of previous computations to
refine the models constructed during the initial training. For online control, it may be desirable to incorporate some kind
of model refinement so that subsequent optimization steps improve in quality. A related goal is to develop techniques
for guaranteeing that our systems converge to desired behaviors using this approach. Finally, we will develop techniques
for multi-time-step optimization to provide better results on appropriate problems such as recognizing and tracking the
motion of objects with multi-modal sensors.
6 Making Capri Extensible
Section 5 discusses possible extensions to Capri [56]. We are actively working on tailoring the existing Capri implementation to integrate these extensions. This section describes our new Capri implementation in detail, and uses applications from three varied domains to demonstrate its effectiveness and to regression-test it.
6.1 Applications
We evaluate the new Capri system on three complex applications: (i) GEM, the graph partitioner for social networks [60]
which was introduced earlier, (ii) a radar processing application [24] written by researchers at the University of Chicago,
and (iii) SLAMBench, an open source tool designed to assist in the development of simultaneous localisation and
mapping (SLAM) algorithms [37]. The code for GEM was modified by us to permit control of approximation, while
Radar and SLAMBench were already set up for control.
Error/Quality definition. To compute the error/quality of the output, we require the user to provide a distance
function that quantifies the difference between an approximate execution and a reference execution for a given input.
The reference execution can be the exact execution if such a thing exists or the best execution in the knob space for that
input. The error is defined as a normalized version of this distance
Error = (d − dmin )/(dmax − dmin )
where d, dmax and dmin represent the distance for a execution, the maximum distance and the minimum distance over
the knob space for the same input. The distance function is application-specific.
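A direct transcription of this normalization is shown below; d, d_min, and d_max are the quantities defined above.

def normalized_error(d, d_min, d_max):
    # Degenerate case: every execution in the knob space is equally distant.
    if d_max == d_min:
        return 0.0
    return (d - d_min) / (d_max - d_min)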
6.1.1 GEM
GEM [60] is a graph clustering algorithm for social networks.
Knobs: There are two components; both use a weighted kernel k-means algorithm and have a knob controlling the
number of iterations. Each knob can be set to one of 40 levels. All input graphs are partitioned into 100 clusters in our
experiments.
Error metric: The output of GEM is the cluster assignment of each node in the graph. There is a standard way to
measure the quality of graph clustering, using the notion of a normalized cut, which is defined as follows:
\sum_{k=1}^{N} \sum_{i=1, i \neq k}^{N} edges(C_k, C_i) / edges(C_k)
where N is the number of clusters, edges(Ck , Ci ) denotes the number of edges between cluster k and cluster i, and
edges(Ck) denotes the number of edges inside cluster k.
The distance function computes the difference of the normalized cut given two clustering assignments. The reference
execution is the execution achieving the smallest normalized cut.
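For concreteness, the normalized cut can be computed from an undirected edge list and a node-to-cluster assignment as sketched below; this illustrates the metric only and is not GEM's actual implementation.

from collections import defaultdict

def normalized_cut(edges, cluster_of):
    cut = defaultdict(int)        # cut[(k, i)]: edges between clusters k and i
    internal = defaultdict(int)   # internal[k]: edges inside cluster k
    for u, v in edges:
        cu, cv = cluster_of[u], cluster_of[v]
        if cu == cv:
            internal[cu] += 1
        else:
            cut[(cu, cv)] += 1
            cut[(cv, cu)] += 1
    clusters = set(cluster_of.values())
    # Terms with no internal edges are skipped here to avoid division by zero.
    return sum(cut[(k, i)] / internal[k]
               for k in clusters for i in clusters
               if i != k and internal[k] > 0)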
Input features for modeling cost: the number of vertices in the graph, the number of edges and the number of
clusters.
6.1.2 Radar
We used a radar processing application [24] developed by Hank Hoffmann at the University of Chicago. Unlike the
other applications, this code was already instrumented with knobs, so we used it out of the box as a blind test for our
system. This code is a pipeline with four stages. The first stage (LPF) is a low-pass filter to eliminate high-frequency
noise. The second stage (BF) does beam-forming which allows a phased array radar to concentrate in a particular
direction. The third stage (PC) performs pulse compression, which concentrates energy. The final stage is a constant
false alarm rate detection (CFAR), which identifies targets.
Knobs: The application supports four knobs. The first two knobs change the decimation ratios in the finite impulse
response filters that make up the LPF stage. The third knob changes the number of beams used in the beam former. The
fourth knob changes the range resolution. The application can have 512 separate configurations using these four knobs.
Error metrics: The signal-to-noise ratio (SNR) is used to measure the quality of the detection. The reference
execution is the one achieving the highest SNR.
Input features for modeling cost: No input features are used in this application.
6.1.3 SLAMBench
SLAMBench is an open source tool designed to assist in the development of simultaneous localisation and mapping
(SLAM) algorithms, and evaluation of platforms for implementing those algorithms [37]. It runs on the Linux operating
system, and has been used on X86 and ARM along with various GPUs, from high-end to mobile devices. SLAMBench
combines a framework for quantifying quality-of-result with instrumentation of execution time and energy consumption.
It contains a KinectFusion [39] implementation in C++, OpenMP, OpenCL and CUDA. It offers a platform for a broad
spectrum of future research in jointly exploring the design space of algorithmic and implementation-level optimizations.
Knobs: The application supports several algorithmic-level knobs [7], such as volume resolution, iterative closest
point (ICP) threshold, etc. To minimize the search space over the set of all possible knob combinations, we vary only
those knobs that seem to have a high correlation with the run-time performance and tracking. We used the following
four algorithmic parameters as knobs:
• Compute size ratio - The fractional depth image resolution used as input. As an example, a value of 8 means that
the raw frame is resized to one-eighth resolution.
• ICP threshold - The threshold for the iterative closest point (ICP) algorithm used during the tracking phase in
KinectFusion.
• µ distance - The output volume of KFusion is defined as a truncated signed distance function (TSDF) [39]. Every
volume element (voxel) of the volume contains the best likelihood distance to the nearest visible surface, up to a
truncation distance denoted by the parameter µ.
• Volume resolution - The resolution of the scene being reconstructed. As an example, a 64x64x64 voxel grid
captures less detail than a 256x256x256 voxel grid.
Error metrics: The KinectFusion algorithm reports the absolute trajectory error (ATE) in meters after processing
an input. The ATE measures accuracy, and represents the precision of the computation. Acceptable values are in the
range of a few centimeters.
Input features for modeling cost: An input in SLAMBench is a trajectory, which is a sequence of depth images.
We have defined the following features that can be extracted from a given trajectory: mean and standard deviation of the
depth values in a frame, and mean and standard deviation of differences in depth values between successive frames. The
first two features track the variation among pixels in a single frame, while the second pair aims to capture the variation in depth values across two successive images; in other words, they try to capture the "burstiness" between successive frames.
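One way to compute these four features, assuming a trajectory is supplied as a list of two-dimensional depth images and that per-frame statistics are averaged over the trajectory, is sketched below.

import numpy as np

def trajectory_features(depth_frames):
    frames = [np.asarray(f, dtype=float) for f in depth_frames]
    mean_depth = np.mean([f.mean() for f in frames])
    std_depth = np.mean([f.std() for f in frames])
    diffs = [b - a for a, b in zip(frames, frames[1:])]   # successive-frame differences
    mean_diff = np.mean([d.mean() for d in diffs]) if diffs else 0.0
    std_diff = np.mean([d.std() for d in diffs]) if diffs else 0.0
    return [mean_depth, std_depth, mean_diff, std_diff]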
6.2 Implementation
Environment. Capri has been implemented and tested with Python v3.5.
M5 model. The Cubist1 application implements the M5 [42] machine learning model. Training a Cubist/M5 model
requires a schema file which lists the independent and the dependent variables, and a file containing data points that is
used for training. After training, Cubist/M5 generates a set of piecewise-linear rule-based models that balance the need
for accurate prediction against the requirements of intelligibility. Cubist/M5 models generally give better results than
those produced by simple techniques such as multivariate linear regression, while also being easier to understand than
the more complex neural networks. Cubist/M5 scales well to hundreds of attributes.
Capri uses the Cubist2 package available for the R programming language. We have written a Python package
wrapper for interfacing with the Cubist R package. Our new Capri implementation is modular, which makes it easy to
replace the M5 model with other machine learning models.
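The train/predict flow through the wrapper looks roughly as follows; cubist_wrapper and its functions are illustrative names only, not the actual module shipped with Capri.

import cubist_wrapper   # hypothetical Python wrapper around the Cubist R package

def train_cost_model(schema_file, data_file):
    # Cubist/M5 is trained from a schema listing the independent and dependent
    # variables and a file of training points, as described above.
    return cubist_wrapper.train(schema=schema_file, data=data_file)

def predict_cost(model, input_features, knob_setting):
    # Query the trained piecewise-linear model for one input/knob combination.
    return cubist_wrapper.predict(model, list(input_features) + list(knob_setting))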
Source structure. The Capri source is divided into the following directories:
• lib - Contains the source for Cubist R module, and a Python wrapper for interfacing with Capri.
• scripts - Contains scripts for helping with running applications and parsing the output results.
• src - This directory contains Python modules that implement the control algorithm in Capri.
1 http://rulequest.com/cubist-info.html
2 https://cran.r-project.org/web/packages/Cubist/index.html
Running Capri. Capri can be run with Python version ≥ 3.5, and requires the following Python packages: numpy,
psutil, overrides, matplotlib, and ordered_set. These packages can be installed by invoking the following command if
required:
pip3 install −−upgrade numpy psutil overrides matplotlib ordered_set
Since each application has a unique set of knobs and a different range of values, a user of the Capri system needs to list the details of the knobs and their value ranges in a configuration file. The Capri source
provides configuration files for several applications that we have used. The following snippet shows an example for the
GEM application:
[FIXED]
PBS = (1.0;0.05;-0.05)
EBS = (0.05;1;+0.05)
TRAIN_RATIO = 0.75
ACCURATE_KNOBS = {’iter1’: ’40’, ’iter2’: ’40’}
[KNOBS]
NUM_FIRST_ITER = (1;40;+1) # iter1
NUM_SECOND_ITER = (1;40;+1) # iter2
The configuration section FIXED lists experimental settings that are common to all applications. PBS and EBS stand for
the acceptable probability and error bounds, as discussed in the final problem formulation 2 in Section 3. TRAIN_RATIO
specifies the proportion of the experimental data to be used in training the M5 models; the rest of the input data is used
for prediction. ACCURATE_KNOBS specifies the knob configurations that compute the most accurate output, which is
used to compute the “golden value” (i.e., the most accurate output) and scale the error.
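Assuming the (start; stop; step) reading of these triples, the ranges can be expanded with a few lines of Python; the file name gem.ini below is hypothetical.

import configparser

def expand_range(triple):
    start, stop, step = (float(x) for x in triple.strip('()').split(';'))
    values, v = [], start
    while (step > 0 and v <= stop + 1e-9) or (step < 0 and v >= stop - 1e-9):
        values.append(round(v, 6))
        v += step
    return values

config = configparser.ConfigParser()
config.read('gem.ini')                                     # hypothetical configuration file
error_bounds = expand_range(config['FIXED']['EBS'])        # 0.05, 0.10, ..., 1.0
probability_bounds = expand_range(config['FIXED']['PBS'])  # 1.0, 0.95, ..., 0.05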
Given a configuration file for an application, we have automated all the steps involved in running the Capri control
system with the application. Executing the control system involves four steps:
• run - Run the application with different knob settings to generate experimental data to be then used for offline
machine learning and for prediction. This task does not depend on other tasks, and can be run independently.
Note that running this task over all possible knob settings can take a long time (i.e., several hours to several days
depending on the application).
• stats - Process a set of experimental results to collect statistics. This task depends on output generated by a prior
run task.
• predict - Train the M5 models and compute the feasible region for a given constraint of error bound (ε) and
probability bound (π) (Section 3). This task depends on the run task.
• result - Find the optimal knob setting that minimizes the objective function and meets the constraints set in
Problem formulation 2. It also generates plots and speedups to help compare the performance of the Capri control
system. This task depends on the stats and the predict tasks.
In the following, we show a sample invocation of the Capri control system.
capri −−bench=gem −−input=all −−outputDir=gem−full −−tasks=run,stats,predict,result
To know more about different options to Capri, use
capri −h
Extending Capri with new applications. It is straightforward to add support for new applications to Capri. A user
of the Capri system needs to provide the following information:
• Implement an application-specific module under apps in the src directory. The Capri user should implement how
to compute the cost and the error for the application. Please refer to existing applications for reference.
Benchmark     #Total    #Train (75%)    #Test (25%)    Source
GEM           43        33              10             [28, 62]
Radar         128       96              32             synthetic
SLAMBench     12        9               3              [37]
Table 3: Inputs for benchmarks. Inputs are randomly divided into training set and testing set.
[Figure 5: Accuracy of the cost model with the new implementation of Capri. Panels (a) GEM, (b) Radar application, and (c) SLAMBench plot actual cost against predicted cost.]
• Provide a configuration file for the application.
• Use Capri to run the application with different knob settings, and then use the controller to predict optimal knobs
for any given performance metric and quality constraints.
7 Evaluation
In this section, we describe results using our new implementation of the Capri control system with GEM, Radar, and
SLAMBench. For each benchmark, we collected a set of inputs as shown in Table 3. We would have liked to have
more training inputs for GEM, and we are currently investigating other sources of new inputs for SLAMBench.
To evaluate the error and cost models, inputs were randomly partitioned into a training and a test suite based on the
TRAIN_RATIO.
7.1 Evaluation of the Cost and Error Models
We regression test our new implementation of Capri by comparing the performance of the M5 cost and error models
with prior results (Section 4.4). Figure 5 shows the accuracy of the M5 cost model. As in Figure 4, the black line
represents the y=x line which captures perfect prediction behavior. The green line shows linear regression for the given
data points. From Figures 4 and 5, it is obvious that the behavior of the new M5 cost model closely matches the earlier
result.
Prior published work on Capri used a Bayesian network for modeling error [56]. Unlike prior work, our reimplementation uses M5 for modeling error. Figure 6 shows the accuracy of the M5 error model. Evaluating the accuracy
of the error model is more involved than the cost model, since the error bound needs to be met probabilistically over
an ensemble of inputs. We simulate that by tracking the proportion of the inputs for which the Capri control system’s
predictions meet the given error bound. The black and green lines in the figure have the same meaning as in Figure 5.
From Figures 4 and 6, we see that predictions with an M5 model are a reasonable match for predictions with a Bayesian network. Fitness predictions for SLAMBench are wayward; we believe this is due to a lack of sufficient training data and a poor choice of the error function (based on absolute trajectory error). We are investigating ways to fix this
problem with SLAMBench. In particular, we are looking into how to use the RGB-D SLAM dataset from TUM3 and to
generate new trajectories.
3 http://vision.in.tum.de/data/datasets/rgbd-dataset
[Figure 6: Accuracy of the error model with the new implementation of Capri. Panels (a) GEM and (b) Radar application plot actual fitness against predicted fitness; panel (c) SLAMBench plots actual error against predicted error.]
π \ ε   0.0    0.1    0.2    0.3    0.4    0.5

GEM
1.0     NA     NA     1.3    1.7    2.2    2.3
0.9     NA     1.1    2.1    2.8    3.2    3.7
0.8     NA     1.6    2.4    3.2    3.6    4.0
0.7     NA     1.8    2.8    3.5    3.9    4.4
0.6     NA     2.3    3.2    3.6    4.0    4.8
0.5     NA     2.6    3.4    3.9    4.1    4.9

Radar
1.0     1.0    1.0    1.1    1.3    1.4    1.4
0.9     1.0    1.0    1.1    1.3    1.4    1.4
0.8     1.0    1.0    1.1    1.3    1.4    1.4
0.7     1.0    1.0    1.1    1.3    1.4    1.4
0.6     1.0    1.0    1.1    1.3    1.4    1.4
0.5     1.0    1.0    1.1    1.3    1.4    1.4

SLAMBench
1.0     NA     NA     NA     1.6    1.6    1.5
0.9     NA     NA     1.2    1.6    1.6    1.5
0.8     NA     1.2    1.5    1.5    1.5    1.5
0.7     NA     1.2    1.5    1.5    1.5    1.5
0.6     NA     1.3    1.5    1.5    1.5    1.5
0.5     NA     1.3    1.5    1.5    1.5    1.5
Table 4: Speedups of the tuned programs for a subset of constraint space.
7.2 Speedups
The original Capri work shows that the time for control is relatively small compared to the time taken by the applications to run. In our reevaluation, we have not measured the proportion of time taken by the control algorithm relative to the applications; instead, we evaluate speedup to sanity-check the performance of the new Capri implementation. Speedup is defined as the ratio of the running time with the knobs set for maximum quality to the running time at a particular knob setting.
Table 4 shows speedups for each application for ε values between 0 and 0.5 and π values between 0.5 and 1.0 (we show only a portion of the overall constraint space for simplicity). Each entry gives the average speedup over all test inputs for the knob settings found by the control algorithm based on exhaustive search, given (ε, π) constraints in the intervals specified by the row and column indices.
Speedups depend on the application and the (ε, π) constraints. For each application, the top-left corner of the
constraint space is the “hard” region since the error must be low with high probability. The knob settings must be at or
close to maximum, and speedup will be limited. Table entries marked “NA” show where the control system was unable
to find any feasible solution for these hard constraints. In contrast, the bottom-right corner of the constraint space is
the “easier” region, so one would expect higher speedups. This is seen with all the applications. Overall, we see that
controlling the knobs in these applications can yield significant speedups in running time.
7.3 Inversions
The cost and error models in Capri are used only to rank knob settings in the feasible region, so more accurate models
do not necessarily give better solutions to the control problem even if the predictions of the machine learning models
are close to accurate. We say an inversion has occurred for a given constraint of ε and π when the knob setting predicted by Capri does not match the knob setting identified with an oracle. We evaluated the number of inversions that occurred with the M5 model in Capri by comparing whether the predicted knob settings matched the knob settings found using an oracle for a given ε and π constraint. Table 5 shows the proportion of inversions that occurred with the different applications. We denote an inversion with T; otherwise the entry contains F. The table
shows that the machine learning models in Capri perform reasonably well.
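The check itself is straightforward, as the sketch below shows; capri_best_knobs and oracle_best_knobs are hypothetical functions returning the knob setting chosen by Capri's models and by exhaustive oracle search, respectively.

def inversion_table(error_bounds, probability_bounds, capri_best_knobs, oracle_best_knobs):
    table = {}
    for eps in error_bounds:
        for pi in probability_bounds:
            predicted = capri_best_knobs(eps, pi)
            oracle = oracle_best_knobs(eps, pi)
            if predicted is None and oracle is None:
                table[(eps, pi)] = 'NA'    # no feasible knob setting for this constraint
            else:
                table[(eps, pi)] = 'T' if predicted != oracle else 'F'
    return table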
π \ ε   0.0   0.1   0.2   0.3   0.4   0.5

GEM
1.0     NA    NA    F     T     T     T
0.9     NA    F     F     F     T     F
0.8     NA    F     F     T     T     F
0.7     NA    F     T     F     T     F
0.6     NA    F     T     F     F     T
0.5     NA    F     T     T     F     F

Radar
1.0     F     F     F     F     F     F
0.9     F     F     F     F     F     F
0.8     F     F     F     F     F     F
0.7     F     F     F     F     F     F
0.6     F     F     F     F     F     F
0.5     F     F     F     F     F     F

SLAMBench
1.0     NA    NA    NA    F     F     T
0.9     NA    NA    F     F     F     T
0.8     NA    T     F     T     T     T
0.7     NA    T     F     T     T     T
0.6     NA    T     F     T     T     T
0.5     NA    F     T     T     T     T
Table 5: Inversion of the tuned programs for a subset of constraint space.
8 Conclusion
Although there is a large body of work on using approximate computing to reduce computation time as well as power
and energy requirements, little is known about how to control approximate programs in a principled way. Previous work
on approximate computing has focused either on showing the feasibility of approximation or on controlling streaming
programs in which error estimates for one input can be used to reactively control error for subsequent inputs.
In this paper, we addressed the problem of controlling tunable approximate programs, which have one or more
knobs that can be changed to vary the fidelity of the output of the approximate computation. We showed how the
proactive control problem for tunable programs can be formulated as an optimization problem, and then gave an
algorithm for solving this control problem by using error and cost models generated using machine learning techniques.
Our experimental results show that this approach performs well on controlling tunable approximate programs.
We extend prior published work, called Capri, to make the new control system scale to hundreds of knobs and to provide optimal control for streaming programs. For controlling streaming programs, we propose a closed-loop control system based on model-predictive control. We also showed initial results with our new implementation of Capri, which we used to regression-test the system.
References
[1] J. Ansel, C. Chan, Y. L. Wong, M. Olszewski, Q. Zhao, A. Edelman, and S. Amarasinghe. PetaBricks: A Language
and Compiler for Algorithmic Choice. In Proceedings of the 30th ACM SIGPLAN Conference on Programming
Language Design and Implementation, PLDI ’09, pages 38–49, New York, NY, USA, 2009. ACM.
[2] J. Ansel, Y. L. Wong, C. Chan, M. Olszewski, A. Edelman, and S. Amarasinghe. Language and Compiler Support
for Auto-Tuning Variable-Accuracy Algorithms. In Proceedings of the 9th Annual IEEE/ACM International
Symposium on Code Generation and Optimization, CGO ’11, pages 85–96, Washington, DC, USA, 2011. IEEE
Computer Society.
[3] K. J. Åström and R. M. Murray. Feedback Systems: An Introduction for Scientists and Engineers. Princeton
University Press, Princeton, NJ, USA, 2008.
[4] W. Baek and T. M. Chilimbi. Green: A Framework for Supporting Energy-Conscious Programming using
Controlled Approximation. In Proceedings of the 31st ACM SIGPLAN Conference on Programming Language
Design and Implementation, PLDI ’10, pages 198–209, New York, NY, USA, 2010. ACM.
[5] C. Bienia. Benchmarking Modern Multiprocessors. PhD thesis, Princeton, NJ, USA, Jan. 2011. AAI3445564.
[6] C. M. Bishop. Pattern Recognition and Machine Learning (Information Science and Statistics). Springer-Verlag
New York, Inc., Secaucus, NJ, USA, 2006.
[7] B. Bodin, L. Nardi, M. Z. Zia, H. Wagstaff, G. Sreekar Shenoy, M. Emani, J. Mawer, C. Kotselidis, A. Nisbet,
M. Lujan, B. Franke, P. H. Kelly, and M. O’Boyle. Integrating Algorithmic Parameters into Benchmarking and
Design Space Exploration in 3D Scene Understanding. In Proceedings of the 2016 International Conference on
Parallel Architectures and Compilation, PACT ’16, pages 57–69, New York, NY, USA, 2016. ACM.
[8] L. Bottou. Large-Scale Machine Learning with Stochastic Gradient Descent. In Y. Lechevallier and G. Saporta,
editors, Proceedings of the 19th International Conference on Computational Statistics (COMPSTAT’2010), pages
177–186, Heidelberg, 2010. Physica-Verlag HD.
[9] E. F. Camacho and C. A. Bordons. Model Predictive Control in the Process Industry. Springer-Verlag New York,
Inc., Secaucus, NJ, USA, 1997.
[10] S. Campanoni, G. Holloway, G.-Y. Wei, and D. Brooks. HELIX-UP: Relaxing Program Semantics to Unleash
Parallelization. In Proceedings of the 13th Annual IEEE/ACM International Symposium on Code Generation and
Optimization, CGO ’15, pages 235–245, Washington, DC, USA, 2015. IEEE Computer Society.
[11] M. Carbin, S. Misailovic, and M. C. Rinard. Verifying Quantitative Reliability for Programs That Execute on
Unreliable Hardware. In Proceedings of the 2013 ACM SIGPLAN International Conference on Object Oriented
Programming Systems Languages & Applications, OOPSLA ’13, pages 33–52, New York, NY, USA, 2013. ACM.
[12] S. Chaudhuri, S. Gulwani, and R. Lublinerman. Continuity and Robustness of Programs. Communications of the
ACM, 55(8):107–115, Aug. 2012.
[13] S. Chaudhuri and A. Solar-Lezama. Smooth Interpretation. In Proceedings of the 31st ACM SIGPLAN Conference
on Programming Language Design and Implementation, PLDI ’10, pages 279–291, New York, NY, USA, 2010.
ACM.
[14] Y. Ding, J. Ansel, K. Veeramachaneni, X. Shen, U.-M. O’Reilly, and S. Amarasinghe. Autotuning Algorithmic
Choice for Input Sensitivity. In Proceedings of the 36th ACM SIGPLAN Conference on Programming Language
Design and Implementation, PLDI ’15, pages 379–390, New York, NY, USA, 2015. ACM.
[15] H. Esmaeilzadeh, A. Sampson, L. Ceze, and D. Burger. Architecture Support for Disciplined Approximate Programming. In Proceedings of the Seventeenth International Conference on Architectural Support for Programming
Languages and Operating Systems, ASPLOS XVII, pages 301–312, New York, NY, USA, 2012. ACM.
[16] H. Esmaeilzadeh, A. Sampson, L. Ceze, and D. Burger. Neural Acceleration for General-Purpose Approximate
Programs. In Proceedings of the 2012 45th Annual IEEE/ACM International Symposium on Microarchitecture,
MICRO-45, pages 449–460, Washington, DC, USA, 2012. IEEE Computer Society.
[17] S. Fang, Z. Du, Y. Fang, Y. Huang, Y. Chen, L. Eeckhout, O. Temam, H. Li, Y. Chen, and C. Wu. Performance
Portability Across Heterogeneous SoCs Using a Generalized Library-Based Approach. ACM Transactions on
Architecture and Code Optimization, 11(2):21:1–21:25, June 2014.
[18] A. Farrell and H. Hoffmann. MEANTIME: Achieving Both Minimal Energy and Timeliness with Approximate
Computing. In 2016 USENIX Annual Technical Conference (USENIX ATC 16), pages 421–435, Denver, CO, June
2016. USENIX Association.
[19] A. Filieri, H. Hoffmann, and M. Maggio. Automated Design of Self-adaptive Software with Control-Theoretical
Formal Guarantees. In Proceedings of the 36th International Conference on Software Engineering, ICSE 2014,
pages 299–310, New York, NY, USA, 2014. ACM.
[20] D. Gadioli, G. Palermo, and C. Silvano. Application Autotuning to Support Runtime Adaptivity in Multicore
Architectures. In SAMOS XV, 2015.
[21] I. Goiri, R. Bianchini, S. Nagarakatte, and T. D. Nguyen. ApproxHadoop: Bringing Approximations to MapReduce
Frameworks. In Proceedings of the Twentieth International Conference on Architectural Support for Programming
Languages and Operating Systems, ASPLOS ’15, pages 383–397, New York, NY, USA, 2015. ACM.
[22] R. Hildebrandt and A. Zeller. Simplifying Failure-Inducing Input. In Proceedings of the 2000 ACM SIGSOFT
International Symposium on Software Testing and Analysis, ISSTA ’00, pages 135–145, New York, NY, USA,
2000. ACM.
[23] H. Hoffmann. JouleGuard: Energy Guarantees for Approximate Applications. In Proceedings of the 25th
Symposium on Operating Systems Principles, SOSP ’15, pages 198–214, New York, NY, USA, 2015. ACM.
[24] H. Hoffmann, A. Agarwal, and S. Devadas. Selecting Spatiotemporal Patterns for Development of Parallel
Applications. IEEE Transactions on Parallel and Distributed Systems, 23(10):1970–1982, Oct. 2012.
[25] H. Hoffmann, S. Sidiroglou, M. Carbin, S. Misailovic, A. Agarwal, and M. Rinard. Dynamic Knobs for Responsive
Power-Aware Computing. In Proceedings of the Sixteenth International Conference on Architectural Support for
Programming Languages and Operating Systems, ASPLOS XVI, pages 199–212, New York, NY, USA, 2011.
ACM.
[26] K. Huck, A. Porterfield, N. Chaimov, H. Kaiser, A. Malony, T. Sterling, and R. Fowler. An Autonomic Performance
Environment for Exascale. Supercomputing frontiers and innovations, 2(3), 2015.
[27] C. Imes, D. H. K. Kim, M. Maggio, and H. Hoffmann. POET: A Portable Approach to Minimizing Energy Under
Soft Real-time Constraints. In 21st IEEE Real-Time and Embedded Technology and Applications Symposium,
pages 75–86, Apr. 2015.
[28] J. Leskovec. Stanford Large Network Dataset Collection(SNAP). http://snap.stanford.edu/data/.
[29] D. Mahajan, A. Yazdanbakhsh, J. Park, B. Thwaites, and H. Esmaeilzadeh. Prediction-Based Quality Control for
Approximate Accelerators. In Second Workshop on Approximate Computing Across the System Stack, WACAS,
2015.
[30] D. Mahajan, A. Yazdanbaksh, J. Park, B. Thwaites, and H. Esmaeilzadeh. Towards Statistical Guarantees in
Controlling Quality Tradeoffs for Approximate Acceleration. In 2016 ACM/IEEE 43rd Annual International
Symposium on Computer Architecture (ISCA), pages 66–77, June 2016.
[31] S. Martin, W. M. Brown, R. Klavans, and K. W. Boyack. OpenOrd: An Open-Source Toolbox for Large Graph
Layout. volume 7868, 2011.
[32] I. Mierswa and K. Morik. Automatic Feature Extraction for Classifying Audio Data. Machine Learning,
58(2-3):127–149, Feb. 2005.
[33] J. S. Miguel, M. Badr, and N. E. Jerger. Load Value Approximation. In Proceedings of the 47th Annual IEEE/ACM
International Symposium on Microarchitecture, MICRO-47, pages 127–139, Washington, DC, USA, 2014. IEEE
Computer Society.
[34] S. Misailovic, M. Carbin, S. Achour, Z. Qi, and M. C. Rinard. Chisel: Reliability- and Accuracy-Aware
Optimization of Approximate Computational Kernels. In Proceedings of the 2014 ACM International Conference
on Object Oriented Programming Systems Languages & Applications, OOPSLA ’14, pages 309–328, New York,
NY, USA, 2014. ACM.
[35] T. M. Mitchell. Machine Learning. McGraw-Hill, Inc., New York, NY, USA, first edition, 1997.
[36] S. T. Monteiro. Automatic Hyperspectral Data Analysis: A machine learning approach to high dimensional
feature extraction. VDM Verlag Dr. Müller, 2010.
[37] L. Nardi, B. Bodin, M. Z. Zia, J. Mawer, A. Nisbet, P. H. J. Kelly, A. J. Davison, M. Luján, M. F. P. O’Boyle,
G. Riley, N. Topham, and S. Furber. Introducing SLAMBench, a performance and accuracy benchmarking
methodology for SLAM. In IEEE International Conference on Robotics and Automation (ICRA), May 2015.
arXiv:1410.2167.
[38] R. E. Neapolitan. Learning Bayesian Networks. Prentice Hall, 2003.
[39] R. A. Newcombe, S. Izadi, O. Hilliges, D. Molyneaux, D. Kim, A. J. Davison, P. Kohli, J. Shotton, S. Hodges,
and A. Fitzgibbon. KinectFusion: Real-Time Dense Surface Mapping and Tracking. In IEEE ISMAR. IEEE, Oct.
2011.
[40] M. A. Otaduy and M. C. Lin. CLODs: Dual Hierarchies for Multiresolution Collision Detection. In Proceedings
of the 2003 Eurographics/ACM SIGGRAPH Symposium on Geometry Processing, SGP ’03, pages 94–101,
Aire-la-Ville, Switzerland, Switzerland, 2003. Eurographics Association.
[41] K. V. Palem. Energy Aware Computing Through Probabilistic Switching: A Study of Limits. IEEE Transactions
on Computers, 54(9):1123–1137, Sept. 2005.
[42] J. R. Quinlan. Learning With Continuous Classes. pages 343–348. World Scientific, 1992.
[43] M. Rinard. Probabilistic Accuracy Bounds for Fault-Tolerant Computations That Discard Tasks. In Proceedings
of the 20th Annual International Conference on Supercomputing, ICS ’06, pages 324–334, New York, NY, USA,
2006. ACM.
[44] M. C. Rinard. Using Early Phase Termination To Eliminate Load Imbalances At Barrier Synchronization Points.
In Proceedings of the 22Nd Annual ACM SIGPLAN Conference on Object-oriented Programming Systems and
Applications, OOPSLA ’07, pages 369–386, New York, NY, USA, 2007. ACM.
[45] M. Ringenburg, A. Sampson, I. Ackerman, L. Ceze, and D. Grossman. Monitoring and Debugging the Quality of
Results in Approximate Programs. In Proceedings of the Twentieth International Conference on Architectural
Support for Programming Languages and Operating Systems, ASPLOS ’15, pages 399–411, New York, NY, USA,
2015. ACM.
[46] C. Rubio-González, C. Nguyen, H. D. Nguyen, J. Demmel, W. Kahan, K. Sen, D. H. Bailey, C. Iancu, and
D. Hough. Precimonious: Tuning Assistant for Floating-Point Precision. In Proceedings of the International
Conference on High Performance Computing, Networking, Storage and Analysis, SC ’13, pages 27:1–27:12, New
York, NY, USA, 2013. ACM.
[47] M. Samadi, D. A. Jamshidi, J. Lee, and S. Mahlke. Paraprox: Pattern-Based Approximation for Data Parallel
Applications. In Proceedings of the 19th International Conference on Architectural Support for Programming
Languages and Operating Systems, ASPLOS ’14, pages 35–50, New York, NY, USA, 2014. ACM.
[48] M. Samadi, J. Lee, D. A. Jamshidi, A. Hormati, and S. Mahlke. SAGE: Self-Tuning Approximation for Graphics
Engines. In Proceedings of the 46th Annual IEEE/ACM International Symposium on Microarchitecture, MICRO-46,
pages 13–24, New York, NY, USA, 2013. ACM.
[49] A. Sampson, W. Dietl, E. Fortuna, D. Gnanapragasam, L. Ceze, and D. Grossman. EnerJ: Approximate Data
Types for Safe and General Low-Power Computation. In Proceedings of the 32Nd ACM SIGPLAN Conference
on Programming Language Design and Implementation, PLDI ’11, pages 164–174, New York, NY, USA, 2011.
ACM.
[50] A. Sampson, J. Nelson, K. Strauss, and L. Ceze. Approximate Storage in Solid-State Memories. In Proceedings
of the 46th Annual IEEE/ACM International Symposium on Microarchitecture, MICRO-46, pages 25–36, New
York, NY, USA, 2013. ACM.
[51] E. Schkufza, R. Sharma, and A. Aiken. Stochastic Optimization of Floating-Point Programs with Tunable Precision.
In Proceedings of the 35th ACM SIGPLAN Conference on Programming Language Design and Implementation,
PLDI ’14, pages 53–64, New York, NY, USA, 2014. ACM.
[52] T. Sherwood, E. Perelman, G. Hamerly, and B. Calder. Automatically Characterizing Large Scale Program
Behavior. In Proceedings of the 10th International Conference on Architectural Support for Programming
Languages and Operating Systems, ASPLOS X, pages 45–57, New York, NY, USA, 2002. ACM.
[53] T. Sherwood, E. Perelman, G. Hamerly, S. Sair, and B. Calder. Discovering and Exploiting Program Phases. IEEE
Micro, 23(6):84–93, Nov. 2003.
[54] M. Shoushtari, A. BanaiyanMofrad, and N. Dutt. Exploiting Partially-Forgetful Memories for Approximate
Computing. Embedded Systems Letters, IEEE, Mar. 2015.
[55] S. Sidiroglou-Douskos, S. Misailovic, H. Hoffmann, and M. Rinard. Managing Performance vs. Accuracy
Trade-offs With Loop Perforation. In Proceedings of the 19th ACM SIGSOFT Symposium and the 13th European
Conference on Foundations of Software Engineering, ESEC/FSE ’11, pages 124–134, New York, NY, USA, 2011.
ACM.
[56] X. Sui, A. Lenharth, D. S. Fussell, and K. Pingali. Proactive Control of Approximate Programs. In Proceedings of
the Twenty-First International Conference on Architectural Support for Programming Languages and Operating
Systems, ASPLOS ’16, pages 607–621, New York, NY, USA, 2016. ACM.
[57] R. S. Sutton and A. G. Barto. Introduction to Reinforcement Learning. MIT Press, Cambridge, MA, USA, first
edition, 1998.
[58] K. Swaminathan, C.-C. Lin, A. Vega, A. Buyuktosunoglu, P. Bose, and S. Pankanti. A Case for Approximate
Computing in Real-Time Mobile Cognition. In Second Workshop on Approximate Computing Across the System
Stack, WACAS, 2015.
[59] L. G. Valiant. A Theory of the Learnable. Communications of the ACM, 27(11):1134–1142, Nov. 1984.
[60] J. J. Whang, X. Sui, and I. S. Dhillon. Scalable and Memory-Efficient Clustering of Large-Scale Social Networks.
In Proceedings of the 2012 IEEE 12th International Conference on Data Mining, ICDM ’12, pages 705–714,
Washington, DC, USA, 2012. IEEE Computer Society.
[61] D. I. Wilson and B. R. Young. The Seduction of Model Predictive Control. Electrical & Automation Technology,
pages 27–28, Dec. 2006. ISSN: 1177-2123.
[62] R. Zafarani and H. Liu. Social Computing Data Repository at ASU, 2009. http://socialcomputing.
asu.edu.
[63] Z. A. Zhu, S. Misailovic, J. A. Kelner, and M. Rinard. Randomized Accuracy-Aware Program Transformations
For Efficient Approximate Computations. In Proceedings of the 39th Annual ACM SIGPLAN-SIGACT Symposium
on Principles of Programming Languages, POPL ’12, pages 441–454, New York, NY, USA, 2012. ACM.
22
| 6 |
Day-Ahead Solar Forecasting Based on Multi-level
Solar Measurements
Mohana Alanazi, Mohsen Mahoor, Amin Khodaei
Department of Electrical and Computer Engineering
University of Denver
Denver, USA
[email protected], [email protected], [email protected]
Abstract—The growing proliferation in solar deployment,
especially at distribution level, has made the case for power
system operators to develop more accurate solar forecasting
models. This paper proposes a solar photovoltaic (PV) generation
forecasting model based on multi-level solar measurements and
utilizing a nonlinear autoregressive with exogenous input
(NARX) model to improve the training and achieve better
forecasts. The proposed model consists of four stages of data
preparation, establishment of fitting model, model training, and
forecasting. The model is tested under different weather
conditions. Numerical simulations exhibit the acceptable
performance of the model when compared to forecasting results
obtained from two-level and single-level studies.
Keywords—Solar generation forecast, nonlinear autoregressive
with exogenous input (NARX).
NOMENCLATURE
Pactual     Actual solar generation
Pforecast   Forecasted solar generation
P̄actual     Average actual solar generation
N           Number of samples
EC          Calculated error at customer level
EF          Calculated error at feeder level
ES          Calculated error at substation level
I. INTRODUCTION
SOLAR FORECASTING plays a key role in the planning,
control and operation of power systems. Although this
viable generation technology is making fast inroads in
electricity grids, solar forecasting is still facing various
challenges, due to the inherent variability and uncertainty in
solar photovoltaic (PV) generation [1], [2].
Numerous factors, including but not limited to the dropping
cost of solar technology, environmental concerns, and the state
and governmental incentives, have paved the way for a rapid
growth of solar generation. More than 2 GW of solar PV
was installed in the U.S. alone in the summer of 2016, which is
43% higher compared to the installed capacity in the same
timeframe in 2015, to achieve an accumulated capacity of 31.6
GW [3]. Accordingly, the solar forecasting problem has attracted
more attention as a means to properly incorporate solar generation into
power system planning, operation, and control.
Many research studies have been carried out on the solar forecasting
problem, and several approaches have been suggested to improve
forecasting results [4]-[8]. In [9], a short-term one-hour-ahead
Global Horizontal Irradiance (GHI) forecasting framework is
developed using machine learning and pattern recognition
models. This model reduces normalized mean absolute error
and the normalized root mean square error compared to the
commonly-used persistence method by 16% and 25%,
respectively. In [10], an intelligent approach for wind and
solar forecasting is proposed based on linear predictive coding
and digital image processing. It is shown that the model can
outperform conventional methods and neural networks.
Ensemble methods are quite popular in statistics and machine
learning, as they reap the benefit of multiple predictors to
achieve not only an aggregated, but also a better and reliable
decision. A survey on using ensemble methods for wind
power forecasting and solar irradiance forecasting is presented
in [11]. The paper concludes that ensemble forecasting
methods in general outperform non-ensemble ones. A
comprehensive review focusing on the state-of-the-art
methods applied to solar forecasting is conducted in [12]. A
variety of topics, including the advantages of probabilistic
forecast methods over deterministic ones and the current
computational approaches for renewable forecasting, are
discussed in this paper.
Historical data are of great importance for solar forecasting.
By leveraging historical data, the solar PV generation can be
forecasted for various time horizons as discussed in [13]. This
study investigates least-square support vector machines,
artificial neural network (ANN), and hybrid statistical models
based on least square support vector machines with wavelet
decomposition. In addition, a variety of measures, including
the root mean square error, mean bias error and mean absolute
error, are employed to evaluate the performance of the
aforementioned methods. The hybrid method based on least-square support vector machines and wavelet decomposition
surpasses the other methods. A new two-stage approach for
online forecasting of solar PV generation is proposed in [14].
This approach leverages a clear sky model to achieve a
statistical normalization. Normalized power output is further
forecasted by using two adaptive linear time series models;
autoregressive and autoregressive with exogenous input.
Various types of ANN, including but not limited to recurrent
neural network, feed-forward neural network, and radial basis
function neural network are employed for solar forecasting.
Fig. 1 Multi-level solar PVs installed at different locations (customer, feeder, and substation levels).
The ANNs not only can process complex and nonlinear
time series forecast problems, but also can learn and figure out
the relationship between the input and the target output. On
the basis of ANN, a statistical method for solar PV generation
forecasting is proposed in [15]. One of the lessons learned
from this paper is that neural networks can be well-trained to
enhance forecast accuracy. In [16], by levering stationary data
and employing post-processing steps, a feed-forward neural
network-based method for day ahead solar forecasting is
studied. A comprehensive review of solar forecasting by using
different ANNs is provided in [17].
Hybrid models are considered highly effective for solar
forecasting in a way that they reinforce capabilities of each
individual method. Hybrid models reap the benefits of two or
more forecasting methods with the objective of achieving a
better forecast result [18]-[21]. In [22], authors present a
hybrid model consisting of various forecasting methods for a
48-hour-ahead solar forecasting in North Portugal. This study
advocates that the hybrid model attains a significant
improvement compared to statistical models. Another hybrid
short-term model to forecast solar PV generation is studied in
[23]. This hybrid model is formed on the basis of both group
method of data handling and least-square support vector
machine, where the performance of the hybrid model
significantly outperforms the other two methods.
The existing literature in this research area lacks studies on
multi-level data measurements for day-ahead solar PV
generation forecasting. Leveraging the multi-level solar
measurements to provide a more accurate forecasting for the
solar PV generation is the primary objective of this paper. The
solar PV generation, which is measured at various locations
including customer, feeder, and substation, is utilized for day-ahead solar forecasting with the objective of enhancing the
forecast accuracy. These multi-level measurements could play
an instrumental role in enhancing solar forecasting in terms of
reaching lower error values. The proposed forecasting model,
which will be further discussed in detail in this paper,
consists of four stages and takes advantage of multiple
datasets related to specific locations.
The rest of the paper is organized as follows: Section II
discusses outline and the architecture of the proposed
forecasting model. Numerical simulations are presented in
Section III. Discussions and conclusions drawn from the
studies are provided in Section IV.
II. FORECASTING MODEL OUTLINE AND ARCHITECTURE
Fig. 1 depicts the three levels of solar PV measurements:
customer, feeder, and substation. The proposed model aims to
outperform the forecast applied at each solar measurement
level. The forecast in each level is performed using a
nonlinear autoregressive neural network (NARNN). The mean absolute
percentage error (MAPE) is accordingly calculated as in (3) for
each level, and denoted as EC, EF, and ES for customer, feeder,
and substation, respectively. This model aims to reduce the
forecasting error to be less than the minimum of EC, EF, and
ES. Fig. 2 depicts the three datasets, which are processed under
different stages and explained in the following.
Fig. 2 The flowchart of the multi-level solar PV generation forecasting (per-level NARNN forecasts and errors EC, EF, ES; data preparation; establishment of the fitting model with maximum R²; NARX training and forecasting; and data post-processing, repeated until En < min(EC, EF, ES)).
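To make the control flow of Fig. 2 concrete, the following minimal sketch (in Python) outlines the selection logic. It is an illustration rather than the authors' implementation; the callables forecast_narnn, fit_best_model, forecast_narx, and error_fn (e.g., the MAPE of (3)) are hypothetical placeholders for the stages described above.

def multi_level_forecast(levels, forecast_narnn, fit_best_model,
                         forecast_narx, error_fn):
    """levels: dict mapping 'customer'/'feeder'/'substation' to
    (history, actual_next_day) arrays of hourly PV generation."""
    # Benchmark errors EC, EF, ES from the single-level NARNN forecasts.
    benchmark = {name: error_fn(actual, forecast_narnn(history))
                 for name, (history, actual) in levels.items()}
    e_min = min(benchmark.values())                  # min(EC, EF, ES)
    # Best fitting model (maximum R^2) plus preprocessed data feed the NARX.
    best_fit = fit_best_model(levels)
    prediction, actual = forecast_narx(levels, best_fit)
    e_new = error_fn(actual, prediction)
    # Accept the multi-level forecast only if it beats every single-level error.
    return prediction if e_new < e_min else None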
A. Data Preprocessing and Adjustment
The data used in the simulation represent the total solar PV
generation. The data preparation includes removing the offset, normalization, removing nighttime values, and stationarization. More detail about data preprocessing can be found in [16]. The purpose of data preparation is to ensure the quality of the dataset before it is input to the forecasting model. This step also includes simulating the maximum solar PV generation under clear sky conditions using the System Advisor Model (SAM) provided by the National Renewable Energy Laboratory (NREL) [24]. The
maximum solar irradiance, along with different meteorological inputs under clear sky conditions, is fed to SAM in order to simulate the maximum solar PV generation. Fig. 3 presents the
flowchart for data preparation.
Fig. 3 The flowchart for data preparation (benchmark data, clear sky solar PV generation, data preprocessing, and stationarity check).
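As an illustration of these preparation steps, the sketch below (a simplified stand-in, not the authors' code) removes the offset, normalizes by the SAM-simulated clear-sky generation, drops nighttime hours, and applies first-order differencing as a simple stationarization step; the synthetic clear-sky profile in the usage example is only for demonstration.

import numpy as np

def preprocess(hourly_pv, clear_sky_pv, offset=0.0):
    """hourly_pv, clear_sky_pv: 1-D arrays of hourly generation (kW)."""
    pv = np.asarray(hourly_pv, dtype=float) - offset     # remove offset
    cs = np.asarray(clear_sky_pv, dtype=float)
    daytime = cs > 0.0                                   # drop nighttime values
    normalized = pv[daytime] / cs[daytime]               # normalize by clear sky
    stationary = np.diff(normalized)                     # simple stationarization
    return stationary, daytime

# Example with a synthetic two-day clear-sky profile:
hours = np.arange(48)
clear_sky = np.clip(np.sin((hours % 24 - 6) / 12 * np.pi), 0, None) * 1000
measured = clear_sky * 0.8 + np.random.default_rng(0).normal(0, 20, 48)
series, mask = preprocess(measured, clear_sky)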
B. Fitting Model
By using NARNN, the fitting model is created for each
level. In this respect, the NARNN model utilizes a large set of
historical data in order to train the model and then forecast the
output. It is applied to the three datasets, including customer,
feeder, and substation in order to establish the three fitting
models. The best fitting model among the three is selected
using the coefficient of determination R². The coefficient of determination measures the proportion of variance explained by the fitted model. It can be expressed mathematically as in (1), where P̄actual is the average of the actual data over the number of samples. R² ranges from 0 to 1, where 0 indicates that the fitting model has no predictive power, and 1 means that the NARNN predicts the fit without any error. Therefore, the best fitting model among the three is the one with the maximum R².
R^2 = 1 - \frac{\sum_{t=1}^{N} \left( P(t)_{actual} - P(t)_{forecast} \right)^2}{\sum_{t=1}^{N} \left( P(t)_{actual} - \overline{P}_{actual} \right)^2}    (1)
C. Forecasting
NARX is a time series model that predicts the output using
historical values y(t) as well as inputs x(t). The NARX model
is presented in (2), where d is the number of considered
historical values. The fitting model is fed as an input to NARX
along with the previously preprocessed data. The NARX is
trained and the output is forecasted. Fig. 4 depicts the
architecture of the NARX. The goal is to forecast a day-ahead
solar PV generation with a new error En, which is less than the
minimum of the three errors, as expressed by the condition in the flowchart.

y(t) = f\left( x(t-1), \ldots, x(t-d),\; y(t-1), \ldots, y(t-d) \right)    (2)
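The lag structure of (2) can be emulated by any regressor over the stacked lagged features [x(t−1), …, x(t−d), y(t−1), …, y(t−d)]. The sketch below uses scikit-learn's MLPRegressor only as an approximate stand-in for the NARX neural network used in the paper, with placeholder data.

import numpy as np
from sklearn.neural_network import MLPRegressor

def make_lagged_features(x, y, d):
    """Rows [x(t-d..t-1), y(t-d..t-1)] with targets y(t), as in (2)."""
    X, T = [], []
    for t in range(d, len(y)):
        X.append(np.concatenate([x[t - d:t], y[t - d:t]]))
        T.append(y[t])
    return np.array(X), np.array(T)

rng = np.random.default_rng(1)
x = rng.random(240)                                   # exogenous input (placeholder)
y = 0.6 * np.roll(x, 1) + 0.3 * rng.random(240)       # target series (placeholder)
X, T = make_lagged_features(x, y, d=24)
model = MLPRegressor(hidden_layer_sizes=(20,), max_iter=2000, random_state=0)
model.fit(X[:-24], T[:-24])                           # train on history
day_ahead = model.predict(X[-24:])                    # 24 hourly values ahead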
D. Data Post-processing
The output from the forecasting model is post-processed by
denormalizing, adding nighttime values, and calculating the
final solar output as explained in detail in [16]. MAPE and the
root mean square error (RMSE) are calculated as in (3) and
(4), respectively.
MAPE = \frac{1}{N} \sum_{t=1}^{N} \frac{\left| P(t)_{actual} - P(t)_{forecast} \right|}{P(t)_{actual}}    (3)

RMSE = \sqrt{\frac{1}{N} \sum_{t=1}^{N} \left( P(t)_{actual} - P(t)_{forecast} \right)^{2}}    (4)

Fig. 4 The architecture of the NARX (lagged inputs x(t-1), ..., x(t-d) and y(t-1), ..., y(t-d) feed a hidden layer of neurons and an output layer producing y(t)).
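The error metrics of (3) and (4) written out directly (a straightforward transcription, not the authors' code):

import numpy as np

def mape(actual, forecast):
    actual, forecast = np.asarray(actual, float), np.asarray(forecast, float)
    return 100.0 * np.mean(np.abs(actual - forecast) / actual)   # percent, Eq. (3)

def rmse(actual, forecast):
    actual, forecast = np.asarray(actual, float), np.asarray(forecast, float)
    return np.sqrt(np.mean((actual - forecast) ** 2))            # Eq. (4)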
III. NUMERICAL SIMULATIONS
The hourly solar PV generation of three levels including
customer (C), feeder (F), and substation (S), for a specific area
in Denver, Colorado are utilized to perform forecasting. The
data used in this model are available in [25]. The customer
level data are considered as the aggregated customers’ solar
PVs generation for a selected area. The feeder level data are
the aggregated solar PVs generation for each feeder, in which
four feeders are considered in this study. Finally, the
substation level data are the solar PV generation measured at
the substation level. In order to demonstrate the merits of the
proposed model, the following four cases with various
weather conditions are investigated:
Case 1: Forecast using NARNN for each level without data
processing.
Case 2: Forecast using NARX with three-level measurements
and data processing.
Case 3: Forecast using NARX with two-level measurements
and data processing.
Case 4: Forecast using NARX with single-level measurement
and data processing.
Case 1: In this case, by leveraging NARNN, day-ahead solar
PV generation is forecasted for all three levels, while ignoring
data processing. The calculated MAPE and RMSE for the
customer, feeder, and substation levels with different weather
conditions are listed in Table I. As highlighted in Table I, the
customer level forecast has achieved the minimum MAPE as
well as RMSE for the selected weather conditions. This case is
considered as a base case, in which the calculated values are
utilized in order to demonstrate the effectiveness of using the
three-level measurements for forecasting. The objective in the
following case is to apply the proposed model in order to get a
new error that is less than the minimum achieved under this
case.
TABLE I
CASE 1: MAPE AND RMSE FOR THE CONSIDERED DATASETS UNDER DIFFERENT WEATHER CONDITIONS
Dataset        Sunny                 Cloudy                Partly Cloudy
level          RMSE (kW)  MAPE (%)   RMSE (kW)  MAPE (%)   RMSE (kW)  MAPE (%)
Customer        44.58       4.47      20.54       6.04      36.11       4.09
Feeder          48.81       8.29      23.12       7.34      38.59       6.73
Substation      70.77      10.41      43.65      10.13      53.03      10.37
Case 2: In this case, three-level measurements are
preprocessed in order to ensure the quality of the training data
fed to the forecasting model. This case includes three
forecasting stages: establishing the fitting model from each
measurement level using the NARNN, training the NARX
model using the previously preprocessed datasets, and
forecasting the solar PV generation using the three-level
measurements and the fitting model as input. The fitting
model with the minimum MAPE and the maximum R2 is
selected as input to NARX. Table II exhibits how well the
fitting model is established in terms of R2 and MAPE for the
three-level measurements under different weather conditions.
As highlighted in Table II, the fitting model established by
using the customer level measurement outperforms the ones
established by using the feeder and substation measurements.
In order to show the merit of using three-level measurements
for the same location, the three measurements along with the
best fitting model are fed as inputs to NARX to forecast the
solar PV generation. The forecast is simulated for the same
selected days in Case 1. Table III exhibits the MAPE and
RMSE for the selected days. The forecast errors in this case
are less than the minimum achieved in Case 1. Figs. 5, 6, and 7
depict the forecasted and actual solar PV generation for the
considered sunny, cloudy, and partly cloudy days,
respectively.
TABLE II
THE FITTING MODEL MAPE AND R² FOR THE CONSIDERED LEVELS UNDER DIFFERENT WEATHER CONDITIONS
Dataset        Sunny                Cloudy               Partly Cloudy
level          MAPE (%)   R²        MAPE (%)   R²        MAPE (%)   R²
Customer        2.39      0.9987     2.29      0.9956     3.49      0.99
Feeder          3.78      0.996      2.80      0.9943     4.02      0.988
Substation      5.95      0.9873     4.98      0.9821     5.37      0.986
Case 3: In this case only two measurements at customer and
feeder levels are used for forecasting. The preprocessed data
along with the best selected fitting model are fed to NARX.
The forecasting performance of this case is shown in Table III. On the sunny day, this case has reduced the MAPE compared to Case 1 by 47%. In cloudy and partly cloudy weather conditions, Case 3 has reduced the MAPE compared to Case 1 by 61% and 19%, respectively.
TABLE III
THE MAPE (%) FOR DIFFERENT CASE STUDIES
Weather         Minimum MAPE        MAPE (NARX,        MAPE (NARX,       MAPE (NARX,
Condition       (NARNN, without     three-level        two-level         single-level
                data processing)    processed data)    processed data)   processed data)
Sunny                4.47               1.67               2.38              3.14
Cloudy               6.04               2.10               2.36              2.44
Partly Cloudy        4.09               2.69               3.30              3.39

Fig. 5 Actual and forecasted solar PV generation in a sunny day

Case 4: To exhibit the effectiveness of the three-level
measurements, Case 2 is repeated, but only one measurement
(customer level) is included as an input to NARX. Similar to
the previous case, the best fitting model based on MAPE and
R2 is fed to NARX along with preprocessed customer level
measurement. Table III shows the forecast error using NARX
with single-level measurement comparing to NARX with
three-level measurements, two-level measurements, and the
minimum forecast error among the single-level measurements
using NARNN without data processing. The single-level
measurement considerably improves the results over the
NARNN method; however, it does not achieve as good a solution
as the two previous cases with three- and two-level measurements.
Fig. 6 Actual and forecasted solar PV generation in a cloudy day
Fig. 7 Actual and forecasted solar PV generation in a partly cloudy day
As shown in Table III, the model minimizes the forecast
error to outperform the minimum error reported at customer
level. The proposed model has reduced the error compared to
the minimum error in Case 1 by 63%, 65%, and 34% for
sunny, cloudy, and partly cloudy weather conditions,
respectively. Moreover, the merit of using three-level
measurements is shown by comparing the forecast error using
the proposed model with applying two-level measurements to
the model as in Case 3. The MAPE is reduced by 30%, 11%,
and 18% for sunny, cloudy, and partly cloudy weather
conditions, respectively. The three-level measurement model also
outperforms Case 4, in which only the single-level measurement
is included. The three-level measurement model has
reduced the MAPE by 47%, 14%, and 21% for sunny, cloudy,
and partly cloudy weather conditions, respectively. The
previous cases have shown that forecasting performance is
greatly impacted by the historical data used to train the model.
Multiple historical data for a specific location along with an
appropriate data processing will improve the training step and
minimize the forecasting error.
IV. CONCLUSION
In this paper, a day-ahead solar PV generation forecast
model based on multi-level measurements was proposed. The
proposed model demonstrated an improvement in forecasting
accuracy by reducing the MAPE from 14% to 47% for various
weather conditions, compared to the case when only single-level measurements were included. It was further seen that the
data preprocessing was an important step to ensure the quality
of the data before it was used in the training process. The
numerical studies revealed that training the forecasting model
without data preprocessing might adversely impact the
forecasting accuracy. The proposed preprocessing model
could potentially reduce the MAPE by 34% to 65%. It was
further shown that the three-level measurements help achieve
a better forecasting accuracy compared to two-level
measurements. The proposed model can be further enhanced
by including multiple meteorological parameters such as cloud
cover, solar irradiance, and temperature along with three-level
measurements as inputs to NARX.
REFERENCES
[1] H. Sangrody, et al., "Weather forecasting error in solar energy forecasting," IET Renewable Power Generation, 2017.
[2] S. Watetakarn and S. Premrudeepreechacharn, "Forecasting of solar irradiance for solar power plants by artificial neural network," Smart Grid Technologies - Asia (ISGT ASIA), 2015 IEEE Innovative, 2015.
[3] S. Kann, J. Baca, M. Shiao, C. Honeyman, A. Perea, and S. Rumery, "US Solar Market Insight - Q3 2016 - Executive Summary," GTM Research, Wood Mackenzie Business and the Solar Energy Industries Association.
[4] S. Akhlaghi, H. Sangrody, M. Sarailoo, and M. Rezaeiahari, "Efficient operation of residential solar panels with determination of the optimal tilt angle and optimal intervals based on forecasting model," IET Renewable Power Generation, 2017.
[5] H. Sangrody, M. Sarailoo, A. Shokrollahzade, F. Hassanzadeh, and E. Foruzan, "On the Performance of Forecasting Models in the Presence of Input Uncertainty," North American Power Symposium (NAPS), Morgantown, West Virginia, USA, 2017.
[6] J. Remund, R. Perez, and E. Lorenz, "Comparison of solar radiation forecasts for the USA," in Proc. of the 23rd European PV Conference, Valencia, Spain, 2008.
[7] G. Reikard, S. E. Haupt, and T. Jensen, "Forecasting ground-level irradiance over short horizons: Time series, meteorological, and time-varying parameter models," Renewable Energy, vol. 112, pp. 474-485, Nov. 2017.
[8] G. Reikard, "Predicting solar radiation at high resolutions: A comparison of time series forecasts," Solar Energy, vol. 83, no. 3, pp. 342-349, March 2009.
[9] C. Feng, et al., "Short-term Global Horizontal Irradiance Forecasting Based on Sky Imaging and Pattern Recognition," IEEE General Meeting, Chicago, IL, July 2017.
[10] A. A. Moghaddam and A. Seifi, "Study of forecasting renewable energies in smart grids using linear predictive filters and neural networks," IET Renewable Power Generation, vol. 5, no. 6, pp. 470-480, Dec. 2011.
[11] Y. Ren, P. Suganthan, and N. Srikanth, "Ensemble methods for wind and solar power forecasting - A state-of-the-art review," Renewable and Sustainable Energy Reviews, vol. 50, pp. 82-91, 2015.
[12] R. Banos, F. Manzano-Agugliaro, F. Montoya, C. Gil, A. Alcayde, and J. Gómez, "Optimization methods applied to renewable and sustainable energy: A review," Renewable and Sustainable Energy Reviews, vol. 15, no. 4, pp. 1753-1766, 2011.
[13] M. G. De Giorgi, P. M. Congedo, M. Malvoni, and D. Laforgia, "Error analysis of hybrid photovoltaic power forecasting models: A case study of mediterranean climate," Energy Conversion and Management, vol. 100, pp. 117-130, 2015.
[14] P. Bacher, H. Madsen, and H. A. Nielsen, "Online short-term solar power forecasting," Solar Energy, vol. 83, no. 10, pp. 1772-1783, 2009.
[15] C. Chen, S. Duan, T. Cai, and B. Liu, "Online 24-h solar power forecasting based on weather type classification using artificial neural network," Solar Energy, vol. 85, no. 11, pp. 2856-2870, 2011.
[16] M. Alanazi and A. Khodaei, "Day-ahead Solar Forecasting Using Time Series Stationarization and Feed-Forward Neural Network," North American Power Symposium (NAPS), Denver, CO, USA, 2016.
[17] A. Mellit and S. A. Kalogirou, "Artificial intelligence techniques for photovoltaic applications: A review," Prog. Energy Combust. Sci., vol. 34, no. 5, pp. 574-632, Oct. 2008.
[18] M. Alanazi, M. Mahoor, and A. Khodaei, "Two-stage hybrid day-ahead solar forecasting," North American Power Symposium (NAPS), Morgantown, WV, USA, 2017.
[19] J. Wu and C. K. Chan, "Prediction of hourly solar radiation using a novel hybrid model of ARMA and TDNN," Solar Energy, vol. 85, no. 5, pp. 808-817, May 2011.
[20] M. Bouzerdoum, A. Mellit, and A. M. Pavan, "A hybrid model (SARIMA–SVM) for short-term power forecasting of a small-scale grid-connected photovoltaic plant," Solar Energy, vol. 98, pp. 226-235, Dec. 2013.
[21] R. Marquez, et al., "Hybrid solar forecasting method uses satellite imaging and ground telemetry as inputs to ANNs," Solar Energy, vol. 92, pp. 176-188, June 2013.
[22] J. M. Filipe, R. J. Bessa, J. Sumaili, R. Tome, and J. N. Sousa, "A hybrid short-term solar power forecasting tool," in Intelligent System Application to Power Systems (ISAP), 2015 18th International Conference on, 2015, pp. 1-6.
[23] M. De Giorgi, M. Malvoni, and P. Congedo, "Comparison of strategies for multi-step ahead photovoltaic power forecasting models based on hybrid group method of data handling networks and least square support vector machine," Energy, vol. 107, pp. 360-373, 2016.
[24] "System Advisor Model (SAM)." [Online]. Available: https://sam.nrel.gov/.
[25] S. Pfenninger and I. Staffell, "Long-term patterns of European PV output using 30 years of validated hourly reanalysis and satellite data," Energy, vol. 114, pp. 1251-1265, Nov. 2016.
Virtual Observatory: From Concept to Implementation
S.G. Djorgovski1,2 and R. Williams2
1 Division of Physics, Mathematics, and Astronomy
2 Center for Advanced Computing Research
California Institute of Technology
Pasadena, CA 91125, USA
Abstract.
We review the origins of the Virtual Observatory (VO) concept,
and the current status of the efforts in this field. VO is the response of the
astronomical community to the challenges posed by the modern massive and
complex data sets. It is a framework in which information technology is harnessed to organize, maintain, and explore the rich information content of the
exponentially growing data sets, and to enable a qualitatively new science to
be done with them. VO will become a complete, open, distributed, web-based
framework for astronomy of the early 21st century. A number of significant efforts worldwide are now striving to convert this vision into reality. The technological and methodological challenges posed by the information-rich astronomy
are also common to many other fields. We see a fundamental change in the way
all science is done, driven by the information technology revolution.
1. The Challenge and the Opportunity
Like all other sciences, and indeed most fields of the modern human endeavor
(commerce, industry, security, entertainment, etc.), astronomy is being deluged
by an exponential growth in the volume and complexity of data. The volume
of information gathered in astronomy is estimated to be doubling every 1.5
years or so (Szalay & Gray 2000), i.e., with the same exponent as the Moore’s
law. This is not an accident: the same technology which Moore’s law describes
(roughly, VLSI) has also given us most astronomical detectors (e.g., CCDs) and
data systems. The current (∼ early 2005) data gathering rate in astronomy is
estimated to be ∼ 1 TB/day, and the content of astronomical archives is now
several hundred TB (Brunner et al. 2002). Note that both the data volume and
the data rate are growing exponentially. Multi-PB data sets are on the horizon.
In addition to the growth in data volume, there has been also a great
increase in data complexity, and generally also quality and homogeneity. The sky
is now being surveyed at a full range of wavelengths, from radio to γ-rays, with
individual surveys producing data sets measured in tens of TB, detecting many
millions or even billions of sources, and measuring tens or hundreds of parameters
for each source. There is also a bewildering range of targeted observations, many
of which carve out multi-dimensional slices of the parameter space.
Yet, our understanding of the universe is clearly not doubling every year
and a half. It seems that we are not yet exploiting the full information content
of these remarkable (and expensive) data sets. There is something of a technological and methodological bottleneck in our path from bits to knowledge.
A lot of valuable data from ground-based observations is not yet archived or
documented properly. A lot of data is hard to find and access in practice, even
if it is available in principle. A multitude of good archives, data depositories,
and digital libraries do exist, and form an indispensable part of the astronomical
research environment today. However, even those functional archives are like an
archipelago of isolated islands in the web, which can be accessed individually
one at a time, and from which usually only modest amounts of data can be
downloaded to the user’s machine where the analysis is done.
Even if one could download the existing multi-TB data sets in their full
glory (a process which would take a long, long time even for the well-connected
users), there are no data exploration and analysis tools readily available, which
would enable actual science with these data to be done in a reasonable and
practical amount of time.
We are thus facing an embarrassment of richness: a situation where we
cannot effectively use the tremendous – and ever growing – amounts of valuable data already in hand. Fortunately, this is a problem where technological
solutions do exist or can be developed on a reasonably short time scale.
2. The Genesis of the Virtual Observatory Concept
The Virtual Observatory (VO) concept is the astronomical community’s response to these challenges and opportunities. VO is an emerging, open, web-based, distributed research environment for astronomy with massive and complex data sets. It assembles data archives and services, as well as data exploration and analysis tools. It is technology-enabled, but science-driven, providing
excellent opportunities for collaboration between astronomers and computer science (CS) and IT professionals and statisticians. It is also an example of a new
type of a scientific organization, which is inherently distributed, inherently multidisciplinary, with an unusually broad spectrum of contributors and users.
The concept was defined in the 1990’s through many discussions and workshops, e.g., during the IAU Symposium 179, and at a special session at the 192nd
meeting of the AAS (Djorgovski & Beichman 1998). Precursor ideas include,
e.g., efforts on the NASA’s early ADS system, and its ESA counterpart, ESIS,
as well as the development of many significant data and literature archives in
the same period: ADS itself, Simbad and other services at CDS, NED, data
archives from the HST and other space missions, Digital Sky project (by T.
Prince et al.), etc. As many modern digital sky surveys (e.g., DPOSS, SDSS,
2MASS, etc.) started producing terabytes of data, the challenges and opportunities of information-rich astronomy became apparent (see, e.g., Djorgovski et
al. 1997, Szalay & Brunner 1998, Williams et al. 1999, or Szalay et al. 2000).
Two grassroots workshops focused on the idea of VO were held in 1999, at JHU
(organized by A. Szalay, R. Hanisch, et al.), and at NOAO (organized by D. De
Young, S. Strom, et al.).
The early developments culminated in a significant endorsement of the NVO
concept by the U.S. National Academy’s “astronomy decadal survey” (McKee,
Taylor, et al. 2000). This was then explored further in a White Paper (2001),
Figure 1. A conceptual outline of a VO. The user communicates with a portal
that provides data discovery, access, and federation services, which operate on
a set of interconnected data archives and compute resources, available through
standardized web services. User-selected or generated data sets are then fed
into a selection of data exploration and analysis tools, in a way which should
be seamlessly transparent to the user.
and in other contributions to the first major conference on the subject (Brunner,
Djorgovski, & Szalay 2001), from which emerged the architectural concept of
services, service descriptions, and a VO-provided registry. The report of the
U.S. National Virtual Observatory Science Definition Team (2002) provided the
most comprehensive scientific description of the concept and the background up
to that point.
More international conferences followed (e.g., Banday et al. 2001, Quinn &
Gorski 2004), and a good picture of this emerging field can be found in papers
contained in their proceedings. VO projects have been initiated world-wide,
with a good and growing international collaboration between various efforts;
more information and links can be found on their websites. 1 2
Finally, VO can be seen as a connecting tissue of the entire astronomical
system of observatories, archives, and compute services (Fig. 2; Djorgovski
2002). Effectiveness of any observation is amplified, and the scientific potential
increased, as the new data are folded in the system and made available for
additional studies, follow-up observations, etc.
3. Scientific Roles and Benefits of a VO
The primary role of a VO is to facilitate data discovery (what is already known
of some object, set of objects, a region on the sky, etc.), data access (in an
easy and standardized fashion), and data federation (e.g., combining data from
1 The U.S. National Virtual Observatory (NVO) project website: http://us-vo.org
2 The International Virtual Observatory Alliance (IVOA) website: http://ivoa.net
Figure 2.
A systemic view of the VO as a complete astronomical research
environment, connecting archives of both ground-based and space-based observations, and providing the tools for their federation and exploration. Analysis of archived observations – some of which may be even real-time data
– then leads to follow-up observations, which themselves become available
within the VO matrix.
different surveys). The next, and perhaps even more important role, is to provide
an effective set of data exploration and analysis tools, which scale well to data
volumes in a multi-TB regime, and can deal with the enormous complexity
present in the data. (By “data” we mean both the products of observations,
and products of numerical simulations.)
While any individual function envisioned for the VO can be accomplished
using existing tools, e.g., federating a couple of massive data sets, exploring
them in a search for particular type of objects, or outliers, or correlations, in
most cases such studies would be too time-consuming and impractical; and many
scientists would have to solve the same issues repeatedly. VO would thus serve
as an enabler of science with massive and complex data sets, and as an efficiency
amplifier. The goal is to enable some qualitatively new and different science,
and not just the same as before, but with a larger quantity of data. We will
need to learn to ask different kinds of questions, which we could not hope to
answer with the much smaller and information-poor data sets in the past.
Looking back at the history of astronomy we can see that technological
revolutions lead to bursts of scientific growth and discovery. For example, in the
1960’s, we saw the rise of radio astronomy, powered by the developments in electronics (which were much accelerated by the radar technology of the World War
II and the cold war). This has led to the discovery of quasars and other powerful
active galactic nuclei, pulsars, the cosmic microwave background (which firmly
established the Big Bang cosmology), etc. At the same time, the access to space
opened the fields of X-ray and γ-ray astronomy, with an equally impressive range
of fundamental new discoveries: the very existence of the cosmic X-ray sources
and the cosmic X-ray background, γ-ray bursts (GRBs), and other energetic phenomena. Then, over the past 15 years or so, we saw a great progress powered
by the advent of solid-state detectors (CCDs, IR arrays, bolometers, etc.), and
cheap and ubiquitous computing, with discoveries of extrasolar planets, brown
dwarfs, young and forming galaxies at high redshifts, the cosmic acceleration
(the dark energy), the solution of the mystery of GRBs, and so on. We are now
witnessing the next phase of the IT revolution, which will likely lead to another
golden age of discovery in astronomy. VO is the framework to effect this process.
In astronomy, observational discoveries are usually made either by opening
a new domain of the parameter space (e.g., radio astronomy, X-ray astronomy,
etc.), by pushing further along some axis of the observable parameter space
(e.g., deeper in flux, higher in angular or temporal resolution, etc.), by expanding the coverage of the parameter space and thus finding rare types of objects or
phenomena which would be missed in sparse observations, or by making connections between different types of observations (for example, optical identification
of radio sources leading to the discovery of quasars). Surveys are often a venue
which leads to such discoveries; see, e.g., Harwit (1998) for a discussion. In
a more steady mode of research, application of well understood physics, constrained by observations, leads to understanding of various astronomical objects
and phenomena; e.g., stellar structure and evolution.
This implies two kinds of discovery strategies: covering a large volume of
the parameter space, with many sources, measurements, etc., as is done very well
by massive sky surveys; and connecting as many different types of observations
as possible (e.g., in a multi-wavelength, multi-epoch, or multi-scale manner), so
that the potential for discovery increases as the number of connections, i.e., as
the number of the federated data sets, squared. Both approaches are naturally
suited for the VO.
4. Technological and Methodological Challenges
There are many non-trivial technological and methodological problems posed by
the challenges of data abundance. We note two important trends, which seem
to particularly distinguish the new, information-rich science from the past:
• Most data will never be seen by humans. This is a novel experience for
scientists, but the sheer volume of TB-scale data sets (or larger) makes
it impractical to do even a most cursory examination of all data. This
implies a need for reliable data storage, networking, and database-related
technologies, standards, and protocols.
• Most data and data constructs, and patterns present in them, cannot be
comprehended by humans directly. This is a direct consequence of a growth
in complexity of information, mainly its multidimensionality. This requires
the use or development of novel data mining (DM) or knowledge discovery
in databases (KDD) and data understanding (DU) technologies, hyperdimensional visualization, etc. The use of AI/machine-assisted discovery
may become a standard scientific practice.
This is where the qualitative differences in the way science is done in the
21st century will come from; the changes are not just quantitative, based on the
data volumes alone. Thus, a modern scientific discovery process can be outlined
as follows:
1. Data gathering: raw data streams produced by various measuring devices. Instrumental effects are removed and calibrations applied in the
domain-specific manner, usually through some data reduction pipeline
(DRP).
2. Data farming: storage and archiving of the raw and processed data,
metadata, and derived data products, including issues of optimal database
architectures, indexing, searchability, interoperability, data fusion, etc.
While much remains to be done, these challenges seem to be fairly well
understood, and much progress is being made.
3. Data mining: including clustering analysis, automated classification,
outlier or anomaly searches, pattern recognition, multivariate correlation
searches, and scientific visualization, all of them usually in some highdimensional parameter space of measured attributes or imagery. This is
where the key technical challenges are now.
4. Data understanding: converting the analysis results into the actual
knowledge. The problems here are essentially methodological in nature.
We need to learn how to ask new types of questions, enabled by the increases in the data volume, complexity, and quality, and the advances
provided by IT. This is where the scientific creativity comes in.
For example, a typical VO experiment may involve federation of several
major digital sky surveys (in the catalog domain), over some large area of the
sky. Each survey may contain ∼ 10^8–10^9 sources, and measure ∼ 10^2 attributes
for each source (various fluxes, size and shape parameters, flags, etc.). Each
input catalog would have its own limits and systematics. The resulting data set
would be a somewhat heterogeneous parameter space of N ∼ 10^9 data vectors in
D ∼ 10^2–10^3 dimensions. An exploration of such a data set may require a
clustering analysis (e.g., how many different types of objects are there? which
object belongs to which class, with what probability?), a search for outliers
(are there rare or unusual objects which do not belong to any of the major
classes?), a search for multivariate correlations (which may connect only some
subsets of measured parameters), etc. For some examples and discussion, see,
e.g., Djorgovski et al. (2001ab, 2002).
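To make the flavor of such an analysis concrete, the toy sketch below (Python with scikit-learn, which is merely one possible toolkit and not a VO standard) clusters a synthetic high-dimensional "catalog" and flags candidate outliers; real federated catalogs would, of course, be far larger and messier.

import numpy as np
from sklearn.cluster import KMeans
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)
# 10,000 synthetic "sources" with 20 measured attributes, drawn from two clusters.
catalog = np.vstack([rng.normal(0.0, 1.0, size=(5000, 20)),
                     rng.normal(4.0, 1.0, size=(5000, 20))])

labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(catalog)
outliers = IsolationForest(random_state=0).fit_predict(catalog)   # -1 marks outliers

print("cluster sizes:", np.bincount(labels))
print("flagged outliers:", int((outliers == -1).sum()))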
The primary challenge is posed by the large size of data volume, and –
especially – large dimensionality. The existing clustering analysis algorithms
do not scale well with the data volume N, or dimensionality D. At best, the
processing time is proportional to N log N, but for some methods it can be
∼ N^α, where α ≥ 2. The curse of hyperdimensionality is even worse, with
typical scaling as ∼ D^β, where β ≥ 2; most off-the-shelf applications can deal
with D < 10. Thus, the computational cost itself may be prohibitive, and novel
approaches and algorithms must be developed.
In addition, there are many possible complications: data heterogeneity, different flux limits, errors which depend on other quantities, etc. In the parameter
space itself, clustering may not be well represented by multivariate Gaussian
clouds (a standard approach), and distributions can have power-law or exponential tails (this greatly complicates the search for anomalies and rare events);
and so on. As we search for outliers in these new rich surveys, the requirement
to eliminate noise and artifacts grows. These are much more significant in an
outlier search than when computing clustering properties and averages.
The difficulties of data federation are exacerbated if data is in different formats and delivery mechanisms, increasing both manual labor and the possibility
of mistakes. Data with inadequate metadata description can be misleading – for
example mistaking different equinoxes for proper motion because the equinox
was not stated. Another difficulty can be that delivery of the data is optimized
for either browsing or for bulk access: it is difficult if the user wants one, but the
other is the only option. The Virtual Observatory has already provided many
well-adopted standards that were built for data federation. The VOTable standard, for example (Ochsenbein, Williams, et al. 2001), carries rich metadata
about tables, groups of tables, and the data dictionaries.
The second, and perhaps even more critical part of the curse of hyperdimensionality is the visualization of these highly-dimensional data parameter
spaces. Humans are biologically limited to visualize patterns and scenes in 2 or
3 dimensions, and while some clever tricks have been developed to increase the
maximum visualizable dimensions, in practice it is hard to push much beyond
D = 4 or 5. Mathematically, we understand the meaning of clustering and correlations in an arbitrary number of parameter space dimensions, but how can
we actually visualize such structures? Yet, recognizing and comprehending such
complex data constructs may lead to some crucial new astrophysical insights.
This is an essential part of the intuitive process of scientific discovery, and critical to data understanding and knowledge extraction. Effective and powerful
data visualization, applied in the parameter space itself, must be an essential
part of the interactive clustering analysis.
In many situations, scientifically informed input is needed in designing and
applying the clustering algorithms. This should be based on a close, working collaboration between astronomers and computer scientists and statisticians. There
are too many unspoken assumptions, historical background knowledge specific
to astronomy, and opaque jargon; constant communication and interchange of
ideas are essential.
5. The Virtual Observatory Implemented
The objective of the Virtual Observatory is to improve and unify access to
astronomical data and services for primarily professional astronomers, but also
for the general public and students. Figure 3 gives an overview. The top bar
of the figure represents this objective: discovery of data and services, reframing
and analysing that data through computation, publishing and dissemination of
results, and increasing scientific output through collaboration and federation.
The IVOA does not specify or recommend any specific portal or library by
which users can access VO data, but some examples of these portals and tools
are shown in the grey box.
Figure 3.
Internationally adopted architecture for the VO. Services are
split into three kinds: fetching data, computing services, and registry (publishing and discovery). Services are implemented in simple way (web forms)
and as sophisticated SOAP services. The VO does not recommend or endorse
a particular portal for users, but rather encourages variety.
Different vertical arrows represent the different service types and XML
formats by which these portals interface to the IVOA-compliant services. In
the IVOA architecture, we have divided the available services into three broad
classes:
1. Data Services, for relatively simple services that provide access to data.
2. Compute Services, where the emphasis is on computation and federation
of data.
3. Registry Services, to allow services and other entities to be published and
discovered.
These services are implemented at various levels of sophistication, from a
stateless, text-based request-response, up to an authenticated, self-describing
service that uses high-performance computing to build a structured response
from a structured request. In the VO, it is intended that services can be used
not just individually, but also concatenated in a distributed workflow, where the
output of one is the input of another.
The registry services facilitate publication and discovery of services. If a
data center (or individual) puts a new dataset online, with a service to provide
access to it, the next step would be to publish that fact to a VO-compliant
registry. One way to do this is to fill in forms expressing who, where, and how
for the service. In due course, registries harvest each other (copy new records)
and so the new dataset service will be known to other VO-registries. When
another person searches a registry (by keyword, author, sky region, wavelength,
etc), they will discover the published services. In this way the VO advances
information diffusion to a more efficient and egalitarian system.
In the VO architecture, there is nobody deciding what is good data and
what is bad data, (although individual registries may impose such criteria if they
wish). Instead, we expect that good data will rise to prominence organically, as
it does on the World Wide Web. We note that while the web has no publishing
restrictions, it is still an enormously useful resource; and we hope the same
paradigm will make the VO registries useful.
Each registry has three kinds of interface: publish, query, and harvest.
People can publish to a registry by filling in web forms in a web portal, thereby
defining services, data collections, projects, organizations, and other entities.
The registry may also accept queries in a one or more languages (for example an
IVOA standard Query Language), and thereby discover entities that satisfy the
specified criteria. The third interface, harvesting, allows registries to exchange
information between themselves, so that a query executed at one registry may
discover a resource that was published at another.
Registry services expect to label each VO resource through a universal identifier, that can be recognized by the initial string ivo://. Resources can contain
links to related resources, as well as external links to the literature, especially
to the Astronomical Data System. The IVOA registry architecture is compliant
with digital library standards for metadata harvesting and metadata schema,
with the intention that IVOA-compliant resources can appear as part of every
University library.
Data services range from simple to sophisticated, and return tabular, image,
or other data. At the simplest level (conesearch), the request is a cone on the
sky (direction/angular radius), and the response is a list of “objects” each of
which has a position that is within the cone. Similar services (SIAP, SSAP) can
return images and spectra associated with sky regions, and these services may
also be able to query on other parameters of the objects.
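As an illustration (not part of any IVOA document), a cone search is just an HTTP GET carrying RA, DEC, and a search radius SR in degrees, returning a VOTable; the endpoint below is a placeholder, and astropy's VOTable parser is one convenient, but by no means required, way to read the response.

import io
import requests
from astropy.io.votable import parse

SERVICE_URL = "https://example.org/vo/conesearch"     # placeholder endpoint

def cone_search(ra_deg, dec_deg, radius_deg):
    params = {"RA": ra_deg, "DEC": dec_deg, "SR": radius_deg}
    response = requests.get(SERVICE_URL, params=params, timeout=60)
    response.raise_for_status()
    votable = parse(io.BytesIO(response.content))     # VOTable XML -> table set
    return votable.get_first_table().to_table()       # an astropy Table of sources

# sources = cone_search(180.0, 2.0, 0.1)              # 0.1 deg cone around (RA, Dec)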
The OpenSkyQuery protocol drives a data service that allows querying of a
relational database or a federation of databases. In this case, the request is written in a specific XML abstraction of SQL that is part of ADQL (Astronomical
Data Query Language).
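For flavor, a query of this kind, shown here in ADQL's SQL-like string form rather than the XML encoding mentioned above, and with invented table and column names, might read:

# An illustrative ADQL selection: the 100 brightest sources within 0.1 degrees
# of a given sky position ("mycatalog", "ra", "dec", "rmag" are made up).
ADQL_QUERY = """
SELECT TOP 100 ra, dec, rmag
FROM mycatalog
WHERE 1 = CONTAINS(POINT('ICRS', ra, dec),
                   CIRCLE('ICRS', 180.0, 2.0, 0.1))
ORDER BY rmag
"""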
The IVOA architecture will also support queries written at a more semantic
level, including queries to the registry and through data services. To achieve this,
the IVOA is developing a structured vocabulary called UCD (Unified Content
Descriptor) to define the semantic type of a quantity.
The IVOA expects to develop standards for more sophisticated services,
for example for federating and mining catalogs, image processing and source
detection, spectral analysis, and visualization of complex datasets. These services will be implemented in terms of industry-standard mechanisms, working
in collaboration with the grid community.
Members of the IVOA are collaborating with a number of IT groups that
are developing workflow software, meaning a linked set of distributed services
with a dataflow paradigm. The objective is to reuse component services to build
complex applications, where the services are insulated from each other through
well-defined protocols, and therefore easier to maintain and debug. IVOA members also expect to use such workflows in the context of Virtual Data, meaning
a data product that is dynamically generated only when it is needed, and yet a
cache of precomputed data can be used when relevant.
In the diagram above, the lowest layer is the actual hardware, but above
that are the existing data centers, who implement and/or deploy IVOA standard services. Grid middleware is used for high-performance computing, data
transfer, authentication, and service environments. Other software components
include relational databases, services to replicate frequently used collections, and
data grids to manage distributed collections.
A vital part of the IVOA architecture is VOSpace so that users can store
data within the VO. VOSpace stores files and DB tables on the greater internet,
and has a good security model so that legitimate data is secure and illegitimate
data disallowed. VOSpace avoids the need to recover results to the desktop for
storage or to keep them inside the service that generated them. Using VOSpace
establishes access rights and privacy over intermediate results and allows users
to manage their storage remotely.
6. Examples of Some Prototype Services
There are several deployed applications available at the NVO web site 3 . A registry portal allows the user to find source catalogs, image and spectral services,
data sets and other astronomical resources registered with the NVO. OpenSkyQuery provides sophisticated selection and cross-match services from uploaded
(user) data with numerous catalogues. There are spectrum services for search,
plot, and retrieving SDSS, 2dF, and other archives. The WESIX service asks
the user to upload an image to a source-extraction code, then cross-correlates
the objects found with selected survey catalogs. There is also information about
publishing to the NVO and what that means.
As noted above, these are all implemented with Web Services. This means
that users can effectively scale up their usage of NVO services: when the user
finds the utility of a remote service that can be used by clicking on a human-oriented web page, there is often a further requirement to scale up by scripting
the usage – a machine-oriented interface. We have ensured in the VO architecture that there is always a straightforward programming interface behind the web
page.
3 NVO Applications: http://us-vo.org/apps
In the following, we examine in more detail the DataScope service.
Figure 4.
The DataScope service “publish and discover” paradigm. After
a new data resource is published to a VO-compliant registry (1), the different
registries harvest each other (2). When a query comes to DataScope (3), the
new resource can be seen in federation with others (4).
Using NVO DataScope scientists can discover and explore hundreds of data
resources available in the Virtual Observatory. Users can immediately discover
what is known about a given region of the sky: they can view survey images from
the radio through the X-ray, explore observations from multiple archives, find
recent articles describing analysis of data in the region, find known interesting or
peculiar objects and survey datasets that cover the region. There is a summary
of all of the available data. Users can download images and tables for further
analysis on their local machines, or they can go directly to a growing set of VO
enabled analysis tools, including Aladin, OASIS, VOPlot and VOStat.
As illustrated in Figure 4, DataScope provides a dynamic, simple to use
explorer for VO data, protocols and analysis tools. Developed by Tom McGlynn
at NASA/GSFC and collaborators at STScI and NCSA, the DataScope uses the
distributed VO registry and VO access protocols to link to archives and catalogs
around the world.
There are web sites that provide rapid collection and federation of multiwavelength imaging, catalog and observation data. This sort of interface has
been built before (NED, Astrobrowse, Skyview, VirtualSky, etc). In these excellent and competent systems, data may be harvested and processed in advance,
and there may be a lot of effort by devoted human curators. There may be links
to remote resources – but no guarantee that anything will actually be found
“under the link”.
However, the DataScope is different from these web sites for two major
reasons. First, the “Publish and Discover” paradigm means that DataScope
is always up-to-date. When the sky position is given by the user, DataScope
probes a collection of services to get relevant data, and that collection is fetched
dynamically by querying the NVO Resource Registry. Therefore, when a new
data service is created and published to the Registry, that service is immediately
visible to the scientific community as part of Datascope.
Second, the DataScope uses standards. The NVO has defined standard
service types for querying catalog and image servers. This replaces the old
system, where each service implementor would choose an idiosyncratic interface,
meaning that the maker of a federation service would need to learn and program
each data service individually.
7. Taking a Broader View: Information-Intensive Science for the 21st Century
The modern scientific methodology originated in the 17th century, and a healthy
interplay of analytical and experimental work has been driving the scientific
progress ever since. But then, in mid-20th century, something new came along:
computing as a new way of doing science, primarily through numerical simulations of phenomena too complex to be analytically tractable. Simulations are thus more than just a substitute for analytical theory: there are many phenomena in the physical universe for which simulations (incorporating, of course, the right physics and equations of motion) are the only way in which they can be described and predicted. Recall that even the simplest Newtonian mechanics can be solved exactly only for the 2-body problem; for N ≥ 3 bodies, numerical solutions are necessary. Other examples in astronomy include star and galaxy formation, dynamics and evolution of galaxies and large-scale structure, stellar explosions, anything involving turbulence, etc. Simulations relate to, can stimulate, or can be explained by both analytical theory and experiments or observations.
While numerical simulations and other computational means of solving complex
systems of equations continue to thrive, there is now a new and growing role of
scientific computing, which is data-driven.
Data- or information-driven computing, which spans all of the aspects of modern scientific work described above, and more, is now becoming the dominant form of scientific computing, and an essential component of gathering, storing, preserving, accessing, and, most of all, analyzing massive amounts of complex data, and extracting knowledge from them. It is fundamentally changing the way in which science is done in the 21st century.
Computationally driven and enabled science also plays an important societal role: it is empowering an unprecedented pool of talent. With distributed
scientific frameworks like VO, which provide open access to data and tools for
their exploration, anyone, anywhere, with a decent internet connection can do first-rate science, learn about what others are doing, and communicate their
results. This should be a major boon for countries without expensive scientific
facilities, and individuals at small or isolated institutions. The human talent is
distributed geographically much more broadly than money or other resources.
8.
Concluding Comments
The VO concept is rapidly spreading in the astronomical community worldwide.
Ultimately, it should become “invisible”, and taken for granted: it would be the
operating framework for astronomical research, a semantic web of astronomy.
There is an already effective, world-wide collaboration between various national and trans-national VO efforts in place. The fundamental cyber-infrastructure of interoperable data archives, standard formats, protocols, etc., and a
number of useful prototype services are well under way. The next stage of technological challenges is in the broad area of data exploration and understanding
(DM/KDD/DU). We are confident that continuing productive collaborations
among astronomers, statisticians, and CS/IT scientists and professionals will
bring forth a powerful new toolbox for astronomy with massive and complex
data sets.
Just as technology derives from progress in science, progress in science, especially experimental/observational, is driven by progress in technology.
This positive feedback loop will continue, as the IT revolution unfolds. Practical CS/IT solutions cannot be developed in a vacuum; having real-life testbeds,
and functionality driven by specific application demands is essential. Recall
that the WWW originated as a scientific application. Today, grid technology is
being developed by physicists, astronomers, and other scientists. The needs of
information-driven science are broadly applicable to the information-intensive economy in general, as well as to other domains (entertainment, media, security, education, etc.). Who knows what world-changing technology, perhaps even on par with the WWW itself, would emerge from the synergy of computationally enabled science and science-driven information technology?
Acknowledgments. We thank the numerous friends and collaborators
who developed the ideas behind the Virtual Observatory, and made it a reality; they include Charles Alcock, Robert Brunner, Dave De Young, Francoise
Genova, Jim Gray, Bob Hanisch, George Helou, Wil O’Mullane, Ray Plante,
Tom Prince, Arnold Rots, Alex Szalay, and many, many others, too numerous
to list here, for which we apologize. We also acknowledge a partial support from
the NSF grants AST-0122449, AST-0326524, AST-0407448, and DMS-0101360,
NASA contract NAG5-9482, and the Ajax Foundation.
References
Banday, A., et al. (editors), 2001, Mining the Sky, ESO Astrophysics Symposia, Berlin:
Springer Verlag
Brunner, R., Djorgovski, S.G., Prince, T., & Szalay, A. 2002, in: Handbook of Massive
Data Sets, eds. J. Abello et al., Dordrecht: Kluwer Academic Publ., p. 931
Brunner, R., Djorgovski, S.G., & Szalay, A. (editors), 2001, Virtual Observatories of
the Future, ASPCS, 225, San Francisco: Astronomical Society of the Pacific
Djorgovski, S.G., et al. 1997, in: Applications of Digital Image Processing XX, ed. A.
Tescher, Proc. SPIE, 3164, 98-109 (astro-ph/9708218)
Djorgovski, S.G., & Beichman, C. 1998, BAAS, 30, 912
Djorgovski, S.G., et al. 2001a, in: Astronomical Data Analysis, eds. J.-L. Starck & F.
Murtagh, Proc. SPIE 4477, 43 (astro-ph/0108346)
Djorgovski, S.G., et al. 2001b, in: Mining the Sky, eds. A.J. Banday et al., ESO Astrophysics Symposia, Berlin: Springer Verlag, p. 305 (astro-ph/0012489)
Djorgovski, S.G., et al. 2002, in: Statistical Challenges in Astronomy III, eds. E. Feigelson & J. Babu, New York: Springer Verlag, p. 125 (astro-ph/020824)
Djorgovski, S.G. 2002, in: Small Telescopes in the New Millenium. I. Perceptions, Productivity, and Priorities, ed. T. Oswalt, Dordrecht: Kluwer, p. 85
(astro-ph/0208170)
Harwit, M. 1998, in: Proc. IAU Symp. 179, New Horizons from Multi-Wavelength
Surveys, eds. B. McLean et al., Dordrecht: Kluwer, p. 3
McKee, C., Taylor, J., et al. 2000, Astronomy and Astrophysics in the New Millennium, National Academy of Science, Astronomy and Astrophysics Survey Committee, Washington D.C.: National Academy Press, available at
http://www.nap.edu/books/0309070317/html/
NVO Science Definition Team report, available at http://nvosdt.org
NVO White Paper “Toward a National Virtual Observatory: Science Goals, Technical
Challenges, and Implementation Plan”, 2001, in: Virtual Observatories of the
Future, ASPCS, 225, San Francisco: Astronomical Society of the Pacific, p. 353,
available at http://www.arXiv.org/abs/astro-ph/0108115
Ochsenbein, F., Williams, R.D., et al. 2001, VOTable Format Definition, International Virtual Observatory Alliance, available at http://www.ivoa.net/Documents/latest/VOT.html
Quinn, P., & Gorski, K. (editors), Toward an International Virtual Observatory, ESO
Astrophysics Symposia, Berlin: Springer Verlag
Szalay, A. & Brunner, R. 1998, in: Proc. IAU Symp. 179, New Horizons from MultiWavelength Surveys, eds. B. McLean et al., Dordrecht: Kluwer, p. 455
Szalay, A., et al. 2000, Proc. ACM SIGMOD Intl. Conf. on Management of Data,
ACM SIGMOD Record, 29, 451, also Microsoft Technical Report MS-TR-99-30,
available at http://research.microsoft.com/∼Gray/
Szalay, A., & Gray, J. 2001, Science, 293, 2037
Williams, R.D., Bunn, J., Moore, R. & Pool, J.C.T. 1999, Interfaces to Scientific Data
Archives, Report of the EU-US Workshop, Future Generation Computer Systems, 16 (1): VII-VIII
Williams, R.D. 2003, Grids and the Virtual Observatory, in: Grid Computing: Making
The Global Infrastructure a Reality, eds. F. Berman, A. Hey, & G. Fox, New
York: Wiley, p.837
| 5 |
Non-constant bounded holomorphic functions of hyperbolic
numbers – Candidates for hyperbolic activation functions
arXiv:1306.1653v1 [] 7 Jun 2013
* Eckhard Hitzer (University of Fukui)
Abstract– The Liouville theorem states that bounded holomorphic complex functions
are necessarily constant. Holomorphic functions fulfill the so-called Cauchy-Riemann (CR) conditions. The CR conditions mean that a complex z-derivative is independent of the direction. Holomorphic functions are ideal for activation functions of complex neural networks, but the Liouville theorem makes them useless. Yet recently the use of hyperbolic numbers led to the construction of hyperbolic number neural networks.
We will describe the Cauchy-Riemann conditions for hyperbolic numbers and show that
there exists a new interesting type of bounded holomorphic functions of hyperbolic
numbers, which are not constant. We give examples of such functions. They therefore
substantially expand the available candidates for holomorphic activation functions for
hyperbolic number neural networks.
Keywords: Hyperbolic numbers, Liouville theorem, Cauchy-Riemann conditions,
bounded holomorphic functions
1
Introduction
For the sake of mathematical clarity, we first carefully review the notion of holomorphic functions in the two number systems of complex and hyperbolic numbers.
The Liouville theorem states that bounded holomorphic complex functions f : C → C are necessarily constant [1]. Holomorphic functions are functions that fulfill the so-called Cauchy-Riemann (CR) conditions. The CR conditions mean that a complex z-derivative
df(z)/dz,   z = x + iy ∈ C, x, y ∈ R, i^2 = −1,   (1)
is independent of the direction with respect to which the incremental ratio, that defines the derivative, is taken [5]. Holomorphic functions would be ideal for activation functions of complex neural networks, but the Liouville theorem means that careful measures need to be taken in order to avoid poles (where the function becomes infinite).
Yet recently the use of hyperbolic numbers
z = x + h y,   h^2 = 1, x, y ∈ R, h ∉ R,   (2)
led to the construction of hyperbolic number neural networks. We will describe the generalized Cauchy-Riemann conditions for hyperbolic numbers and show that there exist bounded holomorphic functions of hyperbolic numbers, which are not constant. We give a new example of such a function. They are therefore excellent candidates for holomorphic activation functions for hyperbolic number neural networks [2, 3].
In [3] it was shown that hyperbolic number neural networks allow one to control the angle of the decision boundaries (hyperplanes) of the real and the unipotent h-part of the output. But Buchholz argued in [4], p. 114, that
“Contrary to the complex case, the hyperbolic logistic function is bounded. This is due to the absence of singularities. Thus, in general terms, this seems to be a suitable activation function. Concretely, the following facts, however, might be of disadvantage. The real and imaginary part have different squashing values. Both component functions do only significantly differ from zero around the lines¹ x = y (x > 0) and −x = y (x < 0).”
¹ Note that we slightly correct the two formulas of Buchholz, because we think it necessary to delete e1 in Buchholz’ original x = y e1 (x > 0), etc.
Complex numbers are isomorphic to the Clifford geometric algebra Cl0,1, which is generated by a single vector e1 of negative square e1^2 = −1, with algebraic basis {1, e1}. The isomorphism C ≅ Cl0,1 is realized by mapping i ↦ e1.
Hyperbolic numbers are isomorphic to the Clifford geometric algebra Cl1,0, which is generated by a single vector e1 of positive square e1^2 = +1, with algebraic basis {1, e1}. The isomorphism between hyperbolic numbers and Cl1,0 is realized by mapping h ↦ e1.
2
Complex variable functions
We follow the treatment given in [5]. We assume a complex function given by an absolutely convergent power series
w = f (z) = f (x + iy) = u(x, y) + iv(x, y),
(3)
where u, v : R2 → R are real functions of the real
variables x, y. Since u, v are obtained in an algebraic way from the complex number z = x + iy, they
cannot be arbitrary functions but must satisfy certain conditions. There are several equivalent ways
to obtain these conditions. Following Riemann, we
state that a function w = f (z) = u(x, y) + iv(x, y)
is a function of the complex variable z if its derivative is independent of the direction (in the complex
plane) with respect to which the incremental ratio is
taken. This requirement leads to two partial differential equations, named after Cauchy and Riemann
(CR), which relate u and v.
One method for obtaining these equations is the
following. We consider the expression w = u(x, y) +
iv(x, y) only as a function of z, but not of z̄, i.e. the
derivative with respect to z̄ shall be zero. First we
perform the bijective substitution
x = (1/2)(z + z̄),   y = −(i/2)(z − z̄),   (4)
based on z = x + iy, z̄ = x − iy. For computing the derivative w,z̄ = dw/dz̄ with the help of the chain rule we need the derivatives of x and y of (4):
x,z̄ = 1/2,   y,z̄ = (1/2) i.   (5)
Using the chain rule we obtain
w,z̄ = u,x x,z̄ + u,y y,z̄ + i(v,x x,z̄ + v,y y,z̄) = (1/2)u,x + (1/2)i u,y + i((1/2)v,x + (1/2)i v,y) = (1/2)[u,x − v,y + i(v,x + u,y)] = 0 (required).   (6)
Requiring that both the real and the imaginary part of (6) vanish we obtain the Cauchy-Riemann conditions
u,x = v,y,   u,y = −v,x.   (7)
Functions of a complex variable that fulfill the CR conditions are functions of x and y, but they are only functions of z, not of z̄.
It follows from (7), that both u and v fulfill the Laplace equation
u,xx = v,yx = v,xy = −u,yy ⇔ u,xx + u,yy = 0,   (8)
and similarly
v,xx + v,yy = 0.   (9)
The Laplace equation is a simple example of an elliptic partial differential equation. The general theory of solutions to the Laplace equation is known as potential theory. The solutions of the Laplace equation are called harmonic functions and are important in many fields of science, notably the fields of electromagnetism, astronomy, and fluid dynamics, because they can be used to accurately describe the behavior of electric, gravitational, and fluid potentials. In the study of heat conduction, the Laplace equation is the steady-state heat equation [6].
Liouville’s theorem [1] states, that any bounded holomorphic function f : C → C, which fulfills the CR conditions is constant. Therefore for complex neural networks it is not very meaningful to use holomorphic functions as activation functions. If they are used, special measures need to be taken to avoid poles in the complex plane. Instead separate componentwise (split) real scalar functions for the real part gr : R → R, u(x, y) ↦ gr(u(x, y)), and for the imaginary part gi : R → R, v(x, y) ↦ gi(v(x, y)), are usually adopted. Therefore a standard split activation function in the complex domain is given by
g(u(x, y) + iv(x, y)) = gr(u(x, y)) + i gi(v(x, y)).   (10)
3
Hyperbolic numbers
Hyperbolic numbers are also known as split-complex numbers. They form a two-dimensional commutative algebra. The canonical hyperbolic system of numbers is defined [5] by
z = x + h y,   h^2 = 1, x, y ∈ R, h ∉ R.   (11)
The hyperbolic conjugate is defined as
z̄ = x − h y.   (12)
Taking the hyperbolic conjugate corresponds in the isomorphic algebra Cl1,0 to taking the main involution (grade involution), which maps 1 ↦ 1, e1 ↦ −e1.
The hyperbolic invariant (corresponding to the Lorentz invariant in physics for y = ct), or modulus, is defined as
z z̄ = (x + h y)(x − h y) = x^2 − y^2,   (13)
which is not positive definite.
Hyperbolic numbers are fundamentally different from complex numbers. Complex numbers and quaternions are division algebras: every non-zero element has a unique inverse. Hyperbolic numbers do not always have an inverse, but instead there are idempotents and divisors of zero.
We can define the following idempotent basis
n1 = (1/2)(1 + h),   n2 = (1/2)(1 − h),   (14)
which fulfills
n1^2 = (1/4)(1 + h)(1 + h) = (1/4)(2 + 2h) = n1,   n2^2 = n2,   n1 + n2 = 1,
n1 n2 = (1/4)(1 + h)(1 − h) = (1/4)(1 − 1) = 0,   n̄1 = n2,   n̄2 = n1.   (15)
The inverse basis transformation is simply
1 = n1 + n2 ,
h = n1 − n2 .
(16)
Setting
z = x + hy = ξn1 + ηn2 ,
(17)
we get the corresponding coordinate transformation
x = (1/2)(ξ + η),   y = (1/2)(ξ − η),   (18)
as well as the inverse coordinate transformation
ξ = x + y ∈ R,
η = x − y ∈ R.
(19)
Figure 1: The hyperbolic number plane [9] with horizontal x-axis and vertical yh-axis, showing: (a) Hyperbolas with modulus z z̄ = −1 (green). (b) Straight
lines with modulus z z̄ = 0 ⇔ x2 = y 2 (red), i.e. divisors of zero. (c) Hyperbolas with modulus z z̄ = 1
(blue).
The hyperbolic conjugate becomes, due to (15), in
the idempotent basis
z̄ = ξn̄1 + ηn̄2 = ηn1 + ξn2 .
(20)
In the idempotent basis, using (20) and (15), the hyperbolic invariant becomes multiplicative
z z̄ = (ξn1 + ηn2 )(ηn1 + ξn2 )
= ξη(n1 + n2 ) = ξη = x2 − y 2 .
(21)
In the following we consider the product and quotient of two hyperbolic numbers z, z′ both expressed in the idempotent basis {n1, n2}
z z′ = (ξn1 + ηn2)(ξ′n1 + η′n2) = ξξ′ n1 + ηη′ n2,   (22)
due to (15), and
z/z′ = (ξn1 + ηn2)/(ξ′n1 + η′n2) = z z̄′/(z′ z̄′) = (ξn1 + ηn2)(η′n1 + ξ′n2)/[(ξ′n1 + η′n2)(η′n1 + ξ′n2)] = (ξη′ n1 + ηξ′ n2)/(ξ′η′) = (ξ/ξ′) n1 + (η/η′) n2.   (23)
Because of (23) it is not possible to divide by z′ if ξ′ = 0, or if η′ = 0. Moreover, the product of a hyperbolic number with ξ = 0 (on the n2 axis) times a hyperbolic number with η = 0 (on the n1 axis) is
(ξn1 + 0n2)(0n1 + ηn2) = ξη n1 n2 = 0.   (24)
We repeat that in (24) the product is zero, even though the factors are non-zero. The numbers ξn1, ηn2 along the n1, n2 axes are therefore called divisors of zero. The divisors of zero have no inverse. The hyperbolic plane with the diagonal lines of divisors of zero (b), and the pairs of hyperbolas with constant modulus z z̄ = 1 (c), and z z̄ = −1 (a) is shown in Fig. 1.
4
Hyperbolic number functions
We assume a hyperbolic number function given by an absolutely convergent power series
w = f(z) = f(x + hy) = u(x, y) + hv(x, y),   h^2 = 1, h ∉ R,   (25)
where u, v : R^2 → R are real functions of the real variables x, y. An example of a hyperbolic number function is the exponential function
e^z = e^(x+hy) = e^x e^(hy) = e^x (cosh y + h sinh y) = u(x, y) + hv(x, y),   (26)
with
u(x, y) = e^x cosh y,   v(x, y) = e^x sinh y.   (27)
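The idempotent-basis arithmetic above is easy to check by machine; the following short Python sketch (ours, added for illustration) stores a hyperbolic number as the pair (ξ, η) from (19), multiplies componentwise as in (22), and exhibits the divisors of zero of (24).

    # Hyperbolic numbers z = x + h*y stored in the idempotent basis (xi, eta) = (x + y, x - y).
    def to_idempotent(x, y):
        return (x + y, x - y)                         # eq. (19)

    def to_cartesian(xi, eta):
        return ((xi + eta) / 2.0, (xi - eta) / 2.0)   # eq. (18)

    def mul(z, w):
        return (z[0] * w[0], z[1] * w[1])             # componentwise product, eq. (22)

    def modulus(z):
        return z[0] * z[1]                            # z z_bar = xi*eta = x^2 - y^2, eq. (21)

    on_n1_axis = to_idempotent(1.0, 1.0)      # 1 + h = 2*n1, so eta = 0
    on_n2_axis = to_idempotent(1.0, -1.0)     # 1 - h = 2*n2, so xi = 0
    print(mul(on_n1_axis, on_n2_axis))        # (0.0, 0.0): divisors of zero, eq. (24)
    print(modulus(to_idempotent(3.0, 2.0)))   # 5.0 = 3^2 - 2^2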
Since u, v are obtained in an algebraic way from the
hyperbolic number z = x + hy, they cannot be arbitrary functions but must satisfy certain conditions.
There are several equivalent ways to obtain these conditions. A function w = f (z) = u(x, y) + hv(x, y) is
a function of the hyperbolic variable z, if its derivative is independent of the direction (in the hyperbolic
plane) with respect to which the incremental ratio is
taken. This requirement leads to two partial differential equations, so called generalized Cauchy-Riemann
(GCR) conditions, which relate u and v.
To obtain the GCR conditions we consider the expression w = u(x, y) + hv(x, y) only as a function of z, but not of z̄ = x − hy, i.e. the derivative with respect to z̄ shall be zero. First we perform the bijective substitution
x = (1/2)(z + z̄),   y = (1/2)h(z − z̄),   (28)
based on z = x + hy, z̄ = x − hy. For computing the derivative w,z̄ = dw/dz̄ with the help of the chain rule we need the derivatives of x and y of (28):
x,z̄ = 1/2,   y,z̄ = −(1/2)h.   (29)
Using the chain rule we obtain
w,z̄ = u,x x,z̄ + u,y y,z̄ + h(v,x x,z̄ + v,y y,z̄) = (1/2)u,x − (1/2)h u,y + h((1/2)v,x − (1/2)h v,y) = (1/2)[u,x − v,y + h(v,x − u,y)] = 0 (required).   (30)
Requiring that both the real and the h-part of (30) vanish we obtain the GCR conditions
u,x = v,y,   u,y = v,x.   (31)
Functions of a hyperbolic variable that fulfill the GCR conditions are functions of x and y, but they are only functions of z, not of z̄. Such functions are called (hyperbolic) holomorphic functions.
It follows from (31), that u and v fulfill the wave equation
u,xx = v,yx = v,xy = u,yy ⇔ u,xx − u,yy = 0,   (32)
and similarly
v,xx − v,yy = 0.   (33)
The wave equation is an important second-order linear partial differential equation for the description of waves – as they occur in physics – such as sound waves, light waves and water waves. It arises in fields like acoustics, electromagnetics, and fluid dynamics. The wave equation is the prototype of a hyperbolic partial differential equation [7].
Let us compute the partial derivatives u,x, u,y, v,x, v,y for the exponential function e^z of (26):
u,x = e^x cosh y,   v,y = e^x cosh y = u,x,   u,y = e^x sinh y,   v,x = e^x sinh y = u,y.   (34)
We clearly see that the partial derivatives (34) fulfill the GCR conditions (31) for the exponential function e^z, as expected by its definition (26). The exponential function e^z is therefore a manifestly holomorphic hyperbolic function, but it is not bounded.
In the case of holomorphic hyperbolic functions the GCR conditions do not imply a Liouville type theorem like for holomorphic complex functions. This can most easily be demonstrated with a counter example
f(z) = u(x, y) + h v(x, y),   u(x, y) = v(x, y) = 1/(1 + e^(−x) e^(−y)).   (35)
The function u(x, y) is pictured in Fig. 2.
Figure 2: Function u(x, y) = 1/(1 + e^(−x) e^(−y)). Horizontal axis −3 ≤ x ≤ 3, from left corner into paper plane −3 ≤ y ≤ 3. Vertical axis 0 ≤ u ≤ 1. (Figure produced with [8].)
Let us verify that the function f of (35) fulfills the GCR conditions. Using the chain rule we obtain
u,x = −1/(1 + e^(−x) e^(−y))^2 · (−e^(−x) e^(−y)) = e^(−x) e^(−y)/(1 + e^(−x) e^(−y))^2,   (36)
where we repeatedly applied the chain rule for differentiation. Similarly we obtain
u,y = v,x = v,y = e^(−x) e^(−y)/(1 + e^(−x) e^(−y))^2.   (37)
The GCR conditions (31) are therefore clearly fulfilled, which means that the hyperbolic function f(z) of (35) is holomorphic. Since the exponential function e^(−x) has a range of (0, ∞), the product e^(−x) e^(−y) also has values in the range of (0, ∞). Therefore the function 1 + e^(−x) e^(−y) has values in (1, ∞), and the components of the function f(z) of (35) have values
0 < 1/(1 + e^(−x) e^(−y)) < 1.   (38)
We especially have
lim_(x,y→−∞) 1/(1 + e^(−x) e^(−y)) = 0,   (39)
and
lim_(x,y→∞) 1/(1 + e^(−x) e^(−y)) = 1.   (40)
The function (35) is representative for how to turn any real neural node activation function r(x) into a holomorphic hyperbolic activation function via
f(x) = r(x + y)(1 + h).   (41)
We note that in [3, 4] another holomorphic hyperbolic activation function was studied, namely
f′(z) = 1/(1 + e^(−z)),   (42)
but compare the quote from [4], p. 114, given in the introduction. The split activation function used in [2]
f′′(x, y) = 1/(1 + e^(−x)) + h · 1/(1 + e^(−y)),   (43)
is clearly not holomorphic, because the real part u = 1/(1 + e^(−x)) depends only on x and not on y, and the h-part v = 1/(1 + e^(−y)) depends only on y and not on x, thus the GCR conditions (31) can not be fulfilled.
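The holomorphy and boundedness claims for (35) and the general recipe (41) can be probed numerically. The sketch below (an added illustration, not part of the paper) checks the GCR conditions (31) by central finite differences; the sample points and step size are arbitrary choices.

    import math

    def gcr_residuals(u, v, x, y, eps=1e-5):
        """Central-difference check of the GCR conditions u_x = v_y and u_y = v_x."""
        ux = (u(x + eps, y) - u(x - eps, y)) / (2 * eps)
        uy = (u(x, y + eps) - u(x, y - eps)) / (2 * eps)
        vx = (v(x + eps, y) - v(x - eps, y)) / (2 * eps)
        vy = (v(x, y + eps) - v(x, y - eps)) / (2 * eps)
        return abs(ux - vy), abs(uy - vx)

    # Hyperbolic logistic activation of eq. (35): u = v = 1/(1 + exp(-x-y)), bounded in (0, 1).
    logistic = lambda x, y: 1.0 / (1.0 + math.exp(-x - y))
    print(gcr_residuals(logistic, logistic, 0.3, -1.2))   # both residuals are ~1e-10

    # General recipe of eq. (41): f(z) = r(x + y)(1 + h) for any real activation r.
    u = lambda x, y: math.tanh(x + y)
    v = lambda x, y: math.tanh(x + y)
    print(gcr_residuals(u, v, -0.7, 2.0))                 # again ~0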
5
Geometric interpretation of multiplication of hyperbolic numbers
In order to geometrically interpret the product of two complex numbers, it proves useful to introduce polar coordinates in the complex plane. Similarly, for the geometric interpretation of the product of two hyperbolic numbers, we first introduce hyperbolic polar coordinates for z = x + hy with radial coordinate
ρ = √|z z̄| = √|x^2 − y^2| .   (44)
The hyperbolic polar coordinate transformation [5] is then given as
1. x^2 > y^2, x > 0: θ = artanh(y/x), z = ρ e^(hθ), i.e. the quadrant in the hyperbolic plane of Fig. 1 limited by the diagonal idempotent lines, and including the positive x-axis (to the right).
2. x^2 > y^2, x < 0: θ = artanh(y/x), z = −ρ e^(hθ), i.e. the quadrant in Fig. 1 including the negative x-axis (to the left).
3. x^2 < y^2, y > 0: θ = artanh(x/y), z = hρ e^(hθ), i.e. the quadrant in Fig. 1 including the positive y-axis (top).
4. x^2 < y^2, y < 0: θ = artanh(x/y), z = −hρ e^(hθ), i.e. the quadrant in Fig. 1 including the negative y-axis (bottom).
The product of a constant hyperbolic number (assuming ax^2 > ay^2, ax > 0)
a = ax + h ay = ρa e^(hθa),   ρa = √(ax^2 − ay^2),   θa = artanh(ay/ax),   (45)
with a hyperbolic number z (assuming x^2 > y^2, x > 0) in hyperbolic polar coordinates is
a z = ρa e^(hθa) ρ e^(hθ) = ρa ρ e^(h(θ+θa)).   (46)
The geometric interpretation is a scaling of the modulus ρ → ρa ρ and a hyperbolic rotation (movement along a hyperbola) θ → θ + θa.
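As an added numerical illustration of (46), the following lines multiply two hyperbolic numbers from the x^2 > y^2, x > 0 quadrant once in Cartesian form and once via hyperbolic polar coordinates, using e^(hθ) = cosh θ + h sinh θ; the sample values are arbitrary.

    import math

    def h_mul(a, b):
        # (x + h y)(x' + h y') = (x x' + y y') + h (x y' + x' y), since h^2 = 1
        return (a[0] * b[0] + a[1] * b[1], a[0] * b[1] + a[1] * b[0])

    def to_polar(z):
        # valid in the quadrant x^2 > y^2, x > 0 (case 1 above)
        x, y = z
        return math.sqrt(x * x - y * y), math.atanh(y / x)

    def from_polar(rho, theta):
        # rho * e^(h theta) = rho (cosh theta + h sinh theta)
        return (rho * math.cosh(theta), rho * math.sinh(theta))

    a, z = (2.0, 1.0), (3.0, -1.0)
    rho_a, th_a = to_polar(a)
    rho_z, th_z = to_polar(z)
    print(h_mul(a, z))                              # Cartesian product: (5.0, 1.0)
    print(from_polar(rho_a * rho_z, th_a + th_z))   # polar form of eq. (46): also (5.0, 1.0)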
In the physics of Einstein’s special relativistic
space-time [11, 12], the hyperbolic rotation θ → θ+θa
corresponds to a Lorentz transformation from one
inertial frame with constant velocity tanh θ to another inertial frame with constant velocity tanh(θ +
θa ). Neural networks based on hyperbolic numbers
(dimensionally extended to four-dimensional spacetime) should therefore be ideal to compute with electromagnetic signals, including satellite transmission.
6
Conclusion
We have compared complex numbers and hyperbolic numbers, as well as complex functions and hyperbolic functions. We saw that according to Liouville’s theorem bounded complex holomorphic functions are necessarily constant, but non-constant bounded hyperbolic holomorphic functions exist. One such function has already been studied in [3, 4]. We have studied a promising example of a hyperbolic holomorphic function
f(z) = (1 + h)/(1 + e^(−x−y)),   (47)
in some detail. The distinct notions of idempotents and divisors of zero, special to hyperbolic numbers, were introduced. After further introducing hyperbolic polar coordinates, a geometric interpretation of the hyperbolic number multiplication was given.
Hyperbolic neural networks therefore offer, compared to complex neural networks, the advantage of suitable bounded non-constant hyperbolic holomorphic activation functions. It would certainly be of interest to study convergence, accuracy and decision boundaries of hyperbolic neural networks with the activation function (35), similar to [3, 4].
Acknowledgment
I want to acknowledge God [13]:
In the beginning was the Word2 , and the
Word was with God, and the Word was
God. He was with God in the beginning.
Through him all things were made; without
him nothing was made that has been made.
In him was life, and that life was the light
of all mankind.
I want to thank my dear family, as well as T. Nitta
and Y. Kuroe.
References
[1] K. Guerlebeck et al, Holomorphic Functions in
the Plane and n-dimensional Space, Birkhauser,
2008, chp. 7.3.3.
[2] S. Buchholz, G. Sommer, A hyperbolic multilayer perceptron, Proceedings of the International Joint Conference on Neural Networks,
Como, Italy, vol. 2, 129/133 (2000).
[3] T. Nitta, S. Buchholz, On the Decision Boundaries of Hyperbolic Neurons, Proceedings of
the International Joint Conference on Neural Networks, IJCNN’08-HongKong, June 1-6,
2973/2979(2008).
[4] S. Buchholz, PhD Thesis, A Theory of Neural
Computation with Clifford Algebras, University
of Kiel, 2005.
[5] F. Catoni et al, The Mathematics of Minkowski
Space-Time, Birkhauser, 2008.
[6] Laplace’s equation, Wikipedia, accessed 24 August 2011, http://en.wikipedia.org/wiki/
Laplace’s_equation
2 Greek term: logos. Note: A Greek philosopher named
Heraclitus first used the term Logos around 600 B.C. to designate the divine reason or plan which coordinates a changing
universe. [14]
[7] Wave equation, Wikipedia, accessed 24 August
2011, http://en.wikipedia.org/wiki/Wave_
equation
[8] Online 3D function grapher, http://www.livephysics.com/ptools/online-3d-function-grapher.php?
[9] Split-complex number, Wikipedia, accessed
29 August 2011, http://en.wikipedia.org/
wiki/Split-complex_number
[10] Notes of collaboration with H. Ishi, Feb. 2011,
p. 15.
[11] C. Doran and A. Lasenby, Geometric Algebra for
Physicists, Cambridge University Press, Cambridge (UK), 2003.
[12] E. Hitzer, Relativistic Physics as Application of
Geometric Algebra, in K. Adhav (ed.), Proceedings of the International Conference on Relativity 2005 (ICR2005), University of Amravati, India, January 2005, 71/90(2005).
[13] The Bible, New International Version (NIV),
The Gospel according to John, chapter 1, verses
1-4, http://www.biblegateway.com/
[14] Strong’s Bible lexicon entry G3056 for logos, available online at Blue Letter Bible.
http://www.blueletterbible.org/lang/
lexicon/lexicon.cfm?Strongs=G3056&t=KJV
| 9 |
ON THE GROUP OF AUTOMORPHISMS OF THE BRANDT λ0 -EXTENSION OF
A MONOID WITH ZERO
arXiv:1609.06085v1 [] 20 Sep 2016
OLEG GUTIK
Abstract. The group of automorphisms of the Brandt λ0 -extension Bλ0 (S) of an arbitrary monoid S
with zero is described. In particular we show that the group of automorphisms Aut(Bλ0 (S)) of Bλ0 (S) is
isomorphic to a homomorphic image of the group defined on the Cartesian product Sλ × Aut(S) × H1λ
with the following binary operation:
[ϕ, h, u] · [ϕ′ , h′ , u′ ] = [ϕϕ′ , hh′ , ϕu′ · uh′ ],
where Sλ is the group of all bijections of the cardinal λ, Aut(S) is the group of all automorphisms of
the semigroup S and H1λ is the direct λ-power of the group of units H1 of the monoid S.
1. Introduction and preliminaries
Further we shall follow the terminology of [2, 21].
Given a semigroup S, we shall denote the set of idempotents of S by E(S). A semigroup S with
the adjoined unit (identity) [zero] will be denoted by S 1 [S 0 ] (cf. [2]). Next, we shall denote the unit
(identity) and the zero of a semigroup S by 1S and 0S , respectively. Given a subset A of a semigroup
S, we shall denote by A∗ = A \ {0S }.
If S is a semigroup, then we shall denote the subset of idempotents in S by E(S). If E(S) is closed under multiplication in S, then we shall refer to E(S) as a band (or the band of S). If the band E(S) is a non-empty subset of S, then the semigroup operation on S determines the following partial order ≤ on E(S): e ≤ f if and only if ef = fe = e. This order is called the natural partial order on E(S).
If h : S → T is a homomorphism (or a map) from a semigroup S into a semigroup T and if s ∈ S,
then we denote the image of s under h by (s)h.
Let S be a semigroup with zero and λ a cardinal > 1. We define the semigroup operation on the set Bλ(S) = (λ × S × λ) ∪ {0} as follows:
(α, a, β) · (γ, b, δ) = (α, ab, δ) if β = γ, and (α, a, β) · (γ, b, δ) = 0 if β ≠ γ,
and (α, a, β) · 0 = 0 · (α, a, β) = 0 · 0 = 0, for all α, β, γ, δ ∈ λ and a, b ∈ S. If S = S 1 then the
semigroup Bλ (S) is called the Brandt λ-extension of the semigroup S [4]. Obviously, if S has zero then
J = {0} ∪ {(α, 0S , β) : 0S is the zero of S} is an ideal of Bλ (S). We put Bλ0 (S) = Bλ (S)/J and the
semigroup Bλ0 (S) is called the Brandt λ0 -extension of the semigroup S with zero [8].
If I is a trivial semigroup (i.e. I contains only one element), then we denote the semigroup I with
the adjoined zero by I 0 . Obviously, for any λ > 2, the Brandt λ0 -extension of the semigroup I 0 is
isomorphic to the semigroup of λ×λ-matrix units and any Brandt λ0 -extension of a semigroup with
zero which also contains a non-zero idempotent contains the semigroup of λ×λ-matrix units. We shall
denote the semigroup of λ×λ-matrix units by Bλ. The semigroup of 2×2-matrix units with adjoined identity B21 plays an important role in Graph Theory and is called the Perkins semigroup. In the paper [20]
Perkins showed that the semigroup B21 is not finitely based. More details on the word problem of the
Perkins semigroup via different graphs may be found in the works of Kitaev and his coauthors (see
[17, 18]).
Date: January 22, 2018.
2010 Mathematics Subject Classification. 20M15, 20F29 .
Key words and phrases. Semigroup, group of automorphisms, monoid, extension.
We always consider the Brandt λ0 -extension only of a monoid with zero. Obviously, for any monoid
S with zero we have B10 (S) = S. Note that every Brandt λ-extension of a group G is isomorphic to the
Brandt λ0 -extension of the group G0 with adjoined zero. The Brandt λ0 -extension of the group with
adjoined zero is called a Brandt semigroup [2, 21]. A semigroup S is a Brandt semigroup if and only if
S is a completely 0-simple inverse semigroup [1, 19] (cf. also [21, Theorem II.3.5]). We shall say that
the Brandt λ0 -extension Bλ0 (S) of a semigroup S is finite if the cardinal λ is finite.
In the paper [14] Gutik and Repovš established homomorphisms of the Brandt λ0 -extensions of
monoids with zeros. They also described a category whose objects are ingredients in the constructions
of the Brandt λ0 -extensions of monoids with zeros. Here they introduced finite, compact topological
Brandt λ0 -extensions of topological semigroups and countably compact topological Brandt λ0 -extensions
of topological inverse semigroups in the class of topological inverse semigroups, and established the structure of such extensions and non-trivial continuous homomorphisms between such topological Brandt
λ0 -extensions of topological monoids with zero. There they also described a category whose objects
are ingredients in the constructions of finite (compact, countably compact) topological Brandt λ0-extensions of topological monoids with zeros. These investigations were continued in [10] and [9], which established countably compact topological Brandt λ0-extensions of topological monoids with zeros and pseudocompact topological Brandt λ0-extensions of semitopological monoids with zeros, together with their corresponding categories. Some other topological aspects of topologizations, embeddings and completions
of the semigroup of λ×λ-matrix units and Brandt λ0 -extensions as semitopological and topological
semigroups were studied in [3, 5, 7, 11, 12, 13, 15, 16].
In this paper we describe the group of automorphisms of the Brandt λ0 -extension Bλ0 (S) of an arbitrary
monoid S with zero.
2. Automorphisms of the Brandt λ0 -extension of a monoid with zero
We observe that if f : S → S is an automorphism of the semigroup S without zero then it is obvious
that the map fb: S 0 → S 0 defined by the formula
(s)fb = (s)f if s ≠ 0S , and (s)fb = 0S if s = 0S ,
is an automorphism of the semigroup S 0 with adjoined zero 0S . Also the automorphism f : S → S of
the semigroup S can be extended to an automorphism fB : Bλ0 (S) → Bλ0 (S) of the Brandt λ0 -extension
Bλ0 (S) of the semigroup S by the formulae:
(α, s, β)fB = (α, (s)f, β),
for all α, β ∈ λ
and (0)fB = 0. We remark that so determined extended automorphism is not unique.
The following theorem describes all automorphisms of the Brandt λ0 -extension Bλ0 (S) of a monoid S.
Theorem 1. Let λ > 1 be a cardinal and let Bλ0 (S) be the Brandt λ0-extension of a monoid S with zero.
Let h : S → S be an automorphism and suppose that ϕ : λ → λ is a bijective map. Let H1 be the group
of units of S and u : λ → H1 a map. Then the map σ : Bλ0 (S) → Bλ0 (S) defined by the formulae
(1)
((α, s, β))σ = ((α)ϕ, (α)u · (s)h · ((β)u)−1, (β)ϕ)
and
(0)σ = 0,
is an automorphism of the semigroup Bλ0 (S). Moreover, every automorphism of Bλ0 (S) can be constructed
in this manner.
Proof. A simple verification shows that σ is an automorphism of the semigroup Bλ0 (S).
Let σ : Bλ0 (S) → Bλ0 (S) be an isomorphism. We fix an arbitrary α ∈ λ.
Since σ : Bλ0 (S) → Bλ0 (S) is the automorphism and the idempotent (α, 1S , α) is maximal with the
respect to the natural partial order on E(Bλ0 (S)), Proposition 3.2 of [14] implies that ((α, 1S , α))σ =
(α ′ , 1S , α ′ ) for some α ′ ∈ λ.
Since (β, 1S , α)(α, 1S , α) = (β, 1S , α) for any β ∈ λ, we have that
((β, 1S , α))σ = ((β, 1S , α))σ · (α ′ , 1S , α ′ ),
and hence
((β, 1S , α))σ = ((β)ϕ, (β)u, α ′),
for some (β)ϕ ∈ λ and (β)u ∈ S. Similarly, we get that
((α, 1S , β))σ = (α ′ , (β)v, (β)ψ),
for some (β)ψ ∈ λ and (β)v ∈ S. Since (α, 1S , β)(β, 1S , α) = (α, 1S , α), we have that
(α ′ , 1S , α ′ ) = ((α, 1S , α))σ = (α ′ , (β)v, (β)ψ) · ((β)ϕ, (β)u, α ′) = (α ′ , (β)v · (β)u, α ′ ),
and hence (β)ϕ = (β)ψ = β ′ ∈ λ and (β)v · (β)u = 1S . Similarly, since (β, 1S , α) · (α, 1S , β) = (β, 1S , β),
we see that the element
((β, 1S , β))σ = ((β, 1S , α)(α, 1S , β))σ = (β ′ , (β)v · (β)u, β ′ )
is a maximal idempotent of the subsemigroup Sβ ′ ,β ′ of Bλ0 (S), and hence we have that (β)v · (β)u = 1S .
This implies that the elements (β)v and (β)u are mutually invertible in H1 , and hence (β)v = ((β)u)−1.
If (γ)ϕ = (δ)ϕ for γ, δ ∈ λ then
0 ≠ (α′, 1S , (γ)ϕ) · ((δ)ϕ, 1S , α′) = ((α, 1S , γ))σ · ((δ, 1S , α))σ,
and since σ is an automorphism, we have that
(α, 1S , γ) · (δ, 1S , α) ≠ 0
and hence γ = δ. Thus ϕ : λ → λ is a bijective map.
Therefore for s ∈ S \ {0S } we have
((γ, s, δ))σ = ((γ, 1S , α) · (α, s, α) · (α, 1S , δ))σ =
= ((γ, 1S , α))σ · ((α, s, α))σ · ((α, 1S , δ))σ =
= ((γ)ϕ, (γ)u, α ′) · (α ′ , (s)h, α ′ ) · (α ′ , ((δ)u)−1, (δ)ϕ)=
= ((γ)ϕ, (γ)u · (s)h · ((δ)u)−1 , (δ)ϕ).
Also, since 0 is zero of the semigroup Bλ0 (S) we conclude that (0)σ = 0.
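Formula (1) can be verified by brute force on a small instance. The sketch below (our illustration, not part of the paper) takes λ = 2 and S the two-element group {1, g} with adjoined zero, so that H1 = {1, g}, and checks that the map σ built from one particular choice of ϕ, h and u is a bijective homomorphism of Bλ0 (S); the specific ϕ, h, u are just examples.

    from itertools import product

    # S = {0, 1, g} with g*g = 1: the two-element group with adjoined zero; H1 = {1, g}.
    def s_mul(a, b):
        if a == "0" or b == "0":
            return "0"
        return "1" if a == b else "g"

    LAM = [0, 1]                                     # lambda = 2
    ELEMS = ["ZERO"] + [(a, s, b) for a in LAM for s in ["1", "g"] for b in LAM]

    def b_mul(p, q):                                 # multiplication in B_lambda^0(S)
        if p == "ZERO" or q == "ZERO":
            return "ZERO"
        (a, s, b), (c, t, d) = p, q
        if b != c or s_mul(s, t) == "0":
            return "ZERO"
        return (a, s_mul(s, t), d)

    # Ingredients of Theorem 1 (example choices): a bijection phi of lambda,
    # an automorphism h of S, and a map u from lambda into the group of units H1.
    phi = {0: 1, 1: 0}
    h = {"0": "0", "1": "1", "g": "g"}               # the identity automorphism of S
    u = {0: "g", 1: "1"}
    inv = {"1": "1", "g": "g"}                       # in this H1 every element is its own inverse

    def sigma(p):                                    # formula (1)
        if p == "ZERO":
            return "ZERO"
        a, s, b = p
        return (phi[a], s_mul(s_mul(u[a], h[s]), inv[u[b]]), phi[b])

    assert sorted(map(str, (sigma(p) for p in ELEMS))) == sorted(map(str, ELEMS))   # bijective
    assert all(sigma(b_mul(p, q)) == b_mul(sigma(p), sigma(q)) for p, q in product(ELEMS, repeat=2))
    print("sigma is an automorphism of B_2^0(S)")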
Theorem 1 implies the following corollary:
Corollary 1. Let λ > 1 be cardinal and let Bλ (G) be the Brandt semigroup. Let h : G → G be an
automorphism and suppose that ϕ : λ → λ is a bijective map. Let u : λ → G be a map. Then the map
σ : Bλ (G) → Bλ (G) defined by the formulae
((α, s, β))σ = ((α)ϕ, (α)u · (s)h · ((β)u)−1, (β)ϕ)
and
(0)σ = 0,
is an automorphism of the Brandt semigroup Bλ (G). Moreover, every automorphism of Bλ (G) can be
constructed in this manner.
Also, we observe that Corollary 1 implies the following well known statement:
Corollary 2. Let λ > 1 be cardinal and ϕ : λ → λ a bijective map. Then the map σ : Bλ → Bλ defined
by the formulae
((α, β))σ = ((α)ϕ, (β)ϕ) and (0)σ = 0,
is an automorphism of the semigroup of λ×λ-matrix units Bλ . Moreover, every automorphism of Bλ
can be constructed in this manner.
The following example implies that the condition that semigroup S contains the identity is essential.
Example 1. Let λ be any cardinal > 2. Let S be the zero-semigroup of cardinality > 3 and let 0S be the zero of S. It is easy to see that every bijective map σ : Bλ0 (S) → Bλ0 (S) such that (0)σ = 0 is an automorphism of the Brandt λ0-extension of S.
Remark. By Theorem 1 we have that every automorphism σ : Bλ0 (S) → Bλ0 (S) of the Brandt λ0-extension of an arbitrary monoid S with zero can be identified with the ordered triple [ϕ, h, u], where h : S → S is an automorphism of S, ϕ : λ → λ is a bijective map and u : λ → H1 is a map, where H1 is the group of units of S.
Lemma 1. Let λ > 1 be a cardinal, S be a monoid with zero and let Bλ0 (S) be the Brandt λ0-extension of S. Then the composition of arbitrary automorphisms σ = [ϕ, h, u] and σ′ = [ϕ′, h′, u′] of the Brandt λ0-extension of S is given in the following way:
[ϕ, h, u] · [ϕ′, h′, u′] = [ϕϕ′, hh′, ϕu′ · uh′].
Proof. By Theorem 1 for every (α, s, β) ∈ Bλ0 (S) we have that
(α, s, β)(σσ′) = ((α)ϕ, (α)u · (s)h · ((β)u)−1, (β)ϕ)σ′ = (((α)ϕ)ϕ′, ((α)ϕ)u′ · ((α)u · (s)h · ((β)u)−1)h′ · (((β)ϕ)u′)−1, ((β)ϕ)ϕ′),
and since h′ is an automorphism of the monoid S we get that this is equal to
(((α)ϕ)ϕ′, ((α)ϕ)u′ · ((α)u)h′ · ((s)h)h′ · (((β)u)h′)−1 · (((β)ϕ)u′)−1, ((β)ϕ)ϕ′) = ((α)(ϕϕ′), (α)(ϕu′ · uh′) · ((s)h)h′ · ((β)(ϕu′ · uh′))−1, (β)(ϕϕ′)).
This completes the proof of the requested equality.
Theorem 2. Let λ > 1 be a cardinal, S be a monoid with zero and let Bλ0 (S) be the Brandt λ0-extension of S. Then the group of automorphisms Aut(Bλ0 (S)) of Bλ0 (S) is isomorphic to a homomorphic image of the group defined on the Cartesian product Sλ × Aut(S) × H1λ with the following binary operation:
(2)
[ϕ, h, u] · [ϕ′ , h′ , u′] = [ϕϕ′ , hh′ , ϕu′ · uh′ ],
where Sλ is the group of all bijections of the cardinal λ, Aut(S) is the group of all automorphisms of
the semigroup S and H1λ is the direct λ-power of the group of units H1 of the monoid S. Moreover, the
inverse element of [ϕ, h, u] in the group Aut(Bλ0 (S)) is defined by the formula:
[ϕ, h, u]−1 = ϕ−1 , h−1 , ϕ−1 u−1 h−1 .
Proof. First, we show that the binary operation defined by formula (2) is associative. Let [ϕ, h, u],
[ϕ′ , h′ , u′] and [ϕ′′ , h′′ , u′′ ] be arbitrary elements of the Cartesian product Sλ × Aut(S) × H1λ . Then we
have that
[ϕ, h, u] · [ϕ′ , h′ , u′ ] · [ϕ′′ , h′′ , u′′ ] = [ϕϕ′ , hh′ , ϕu′ · uh′ ] · [ϕ′′ , h′′ , u′′] =
= [ϕϕ′ ϕ′′ , hh′ h′′ , ϕϕ′ u′′ · (ϕu′ · uh′ )h′′ ] =
= [ϕϕ′ ϕ′′ , hh′ h′′ , ϕϕ′ u′′ · ϕu′ h′′ · uh′ h′′ ]
and
[ϕ, h, u] · ([ϕ′ , h′ , u′] · [ϕ′′ , h′′ , u′′ ]) = [ϕ, h, u] · [ϕ′ ϕ′′ , h′ h′′ , ϕ′ u′′ · u′ h′′ ] =
= [ϕϕ′ ϕ′′ , hh′ h′′ , ϕ(ϕ′ u′′ · u′ h′′ ) · uh′ h′′ ] =
= [ϕϕ′ ϕ′′ , hh′ h′′ , ϕϕ′ u′′ · ϕu′ h′′ · uh′ h′′ ],
and hence so defined operation is associative.
Theorem 1 implies that formula (1) determines a map F from the Cartesian product Sλ ×Aut(S)×H1λ
onto the group of automorphisms Aut(Bλ0 (S)) of the Brandt λ0 -extension Bλ0 (S) of the monoid S,
and hence the associativity of binary operation (2) implies that the map F is a homomorphism from
Sλ × Aut(S) × H1λ onto the group Aut(Bλ0 (S)).
Next we show that [1Sλ , 1Aut(S) , 1H1λ ] is a unit element with respect to the binary operation (2), where 1Sλ , 1Aut(S) and 1H1λ are the units of the groups Sλ , Aut(S) and H1λ , respectively. Then we have that
[ϕ, h, u] · 1Sλ , 1Aut(S) , 1H1λ = ϕ1Sλ , h1Aut(S) , ϕ1H1λ · u1Aut(S) =
= ϕ, h, ϕ1H1λ · u1Aut(S) =
= ϕ, h, 1H1λ · u =
= [ϕ, h, u]
and
1Sλ , 1Aut(S) , 1H1λ · [ϕ, h, u] = 1Sλ ϕ, 1Aut(S) h, 1Sλ u · 1H1λ h = [ϕ, h, u],
because every automorphism h ∈ Aut(S) acts on the group H1λ by the natural way as a restriction of
global automorphism of the semigroup S on every factor, and hence we get that 1H1λ h = 1H1λ .
Also, similar arguments imply that
[ϕ, h, u] · [ϕ, h, u]−1 = [ϕ, h, u] · ϕ−1 , h−1 , ϕ−1 u−1 h−1 =
= ϕϕ−1 , hh−1 , (ϕϕ−1 )u−1h−1 · uh−1 =
= ϕϕ−1 , hh−1 , (1Sλ )u−1 h−1 · uh−1 =
= ϕϕ−1 , hh−1 , u−1h−1 · uh−1 =
= 1Sλ , 1Aut(S) , 1H1λ
and
[ϕ, h, u]−1 · [ϕ, h, u] = ϕ−1 , h−1 , ϕ−1 u−1 h−1 · [ϕ, h, u]=
= ϕ−1 ϕ, h−1 h, ϕ−1 u · ϕ−1 u−1 h−1 h =
= ϕ−1 ϕ, h−1 h, ϕ−1 u · ϕ−1 u−1 =
= 1Sλ , 1Aut(S) , 1H1λ .
This implies that the elements [ϕ−1 , h−1 , ϕ−1 u−1 h−1 ] and [ϕ, h, u] are invertible in Sλ × Aut(S) × H1λ,
and hence the set Sλ × Aut(S) × H1λ with the binary operation (2) is a group.
Let Id : Bλ0 (S) → Bλ0 (S) be the identity automorphism of the semigroup Bλ0 (S). Then by Theorem 1
there exist some automorphism h : S → S, a bijective map ϕ : λ → λ and a map u : λ → H1 into the
group H1 of units of S such that
(α, s, β) = (α, s, β)Id = ((α)ϕ, (α)u · (s)h · ((β)u)−1, (β)ϕ),
for all α, β ∈ λ and s ∈ S ∗ . Since Id : Bλ0 (S) → Bλ0 (S) is the identity automorphism we conclude that
(α)ϕ = α for every α ∈ λ. Also, for every s ∈ S ∗ we get that s = (α)u · (s)h · ((β)u)−1 for all α, β ∈ λ,
and hence we obtain that
1S = (α)u · (1S )h · ((β)u)−1 = (α)u · ((β)u)−1
for all α, β ∈ λ. This implies that (α)u = (β)u = u
e is a fixed element of the group H1 for all α, β ∈ λ.
We define
n
o
λ
−1
ker N = [ϕ, h, u
e] ∈ Sλ × Aut(S) × H1 : ϕ : λ → λ is an idemtity map, u
e(s)he
u
= s for any s ∈ S .
It is obvious that the equality u
e(s)he
u−1 = s implies that (s)h = u
e−1 se
u for all s ∈ S. The previous
arguments implies that [ϕ, h, u
e] ∈ ker N if and only if [ϕ, h, u
e]F is the unit of the group Aut(Bλ0 (S)),
and hence ker N is a normal subgroup of Sλ × Aut(S) × H1λ . This implies that the quotient group
(Sλ × Aut(S) × H1λ )/ ker N is isomorphic to the group Aut(Bλ0 (S)).
References
[1] A. H. Clifford, Matrix representations of completely simple semigroups, Amer. J. Math. 64 (1942), 327–342.
[2] A. H. Clifford and G. B. Preston, The Algebraic Theory of Semigroups, Vols. I and II, Amer. Math. Soc. Surveys 7,
Providence, R.I., 1961 and 1967.
[3] S. Bardyla and O. Gutik, On a semitopological polycyclic monoid, Algebra Discr. Math. 21:2 (2016), 163–183.
[4] O. V. Gutik, On Howie semigroup, Mat. Metody Phis.-Mekh. Polya. 42:4 (1999), 127–132 (in Ukrainian).
[5] O. Gutik, On closures in semitopological inverse semigroups with continuous inversion, Algebra Discr. Math. 18:1
(2014), 59–85.
[6] O. V. Gutik and K. P. Pavlyk, Topological Brandt λ-extensions of absolutely H-closed topological inverse semigroups,
Visnyk Lviv Univ., Ser. Mekh.-Math. 61 (2003), 98–105.
[7] O. V. Gutik and K. P. Pavlyk, On topological semigroups of matrix units, Semigroup Forum 71:3 (2005), 389–400.
[8] O. V. Gutik and K. P. Pavlyk, On Brandt λ0 -extensions of semigroups with zero, Mat. Metody Phis.-Mekh. Polya.
49:3 (2006), 26–40.
[9] O. Gutik and K. Pavlyk, On pseudocompact topological Brandt λ0 -extensions of semitopological monoids, Topological
Algebra Appl. 1 (2013), 60–79.
[10] O. Gutik, K. Pavlyk, and A. Reiter, Topological semigroups of matrix units and countably compact Brandt λ0 extensions, Mat. Stud. 32:2 (2009), 115–131.
[11] O. V. Gutik, K. P. Pavlyk, and A. R. Reiter, On topological Brandt semigroups, Mat. Metody Fiz.-Mekh. Polya 54:2
(2011), 7–16 (in Ukrainian); English version in: J. Math. Sci. 184:1 (2012), 1–11.
[12] O. Gutik and O. Ravsky, On feebly compact inverse primitive (semi)topological semigroups, Mat. Stud. 44:1 (2015),
3–26.
[13] O. V. Gutik and O. V. Ravsky, Pseudocompactness, products and Brandt λ0 -extensions of semitopological monoids,
Mat. Metody Fiz.-Mekh. Polya 58:2 (2015), 20–37.
[14] O. Gutik and D. Repovš, On Brandt λ0 -extensions of monoids with zero, Semigroup Forum 80:1 (2010), 8–32.
[15] J. Jamalzadeh and Gh. Rezaei, Countably compact topological semigroups versus Brandt extensions and paragroups,
Algebras Groups Geom. 27:2 (2010), 219–228.
[16] J. Jamalzadeh and Gh. Rezaei, Brandt extensions and primitive topologically periodic inverse topological semigroups,
Bull. Iran. Math. Soc. 39:1 (2013), 87–95.
[17] S. Kitaev and V. Lozin, Words and Graphs, Monographs in Theor. Comput. Sc. An EATCS Series. Springer, Cham,
2015.
[18] S. Kitaev and S. Seif, Word problem of the Perkins semigroup via directed acyclic graphs, Order 25:3 (2008), 177–194.
[19] W. D. Munn, Matrix representations of semigroups, Proc. Cambridge Phil. Soc. 53 (1957), 5–12.
[20] P. Perkins, Bases for equational theories of semigroups, J. Algebra 11:2 (1969), 298–314.
[21] M. Petrich, Inverse Semigroups, John Wiley & Sons, New York, 1984.
Faculty of Mathematics, National University of Lviv, Universytetska 1, Lviv, 79000, Ukraine
E-mail address: o [email protected], [email protected]
| 4 |
arXiv:1712.06393v2 [] 28 Dec 2017
Graph-based Transform Coding with Application
to Image Compression
Giulia Fracastoro, Dorina Thanou, Pascal Frossard
December 29, 2017
Abstract
In this paper, we propose a new graph-based coding framework and
illustrate its application to image compression. Our approach relies on
the careful design of a graph that optimizes the overall rate-distortion
performance through an effective graph-based transform. We introduce
a novel graph estimation algorithm, which uncovers the connectivities
between the graph signal values by taking into consideration the coding
of both the signal and the graph topology in rate-distortion terms. In
particular, we introduce a novel coding solution for the graph by treating
the edge weights as another graph signal that lies on the dual graph.
Then, the cost of the graph description is introduced in the optimization
problem by minimizing the sparsity of the coefficients of its graph Fourier
transform (GFT) on the dual graph. In this way, we obtain a convex
optimization problem whose solution defines an efficient transform coding
strategy. The proposed technique is a general framework that can be
applied to different types of signals, and we show two possible application
fields, namely natural image coding and piecewise smooth image coding.
The experimental results show that the proposed method outperforms
classical fixed transforms such as DCT, and, in the case of depth map
coding, the obtained results are even comparable to the state-of-the-art
graph-based coding methods that are specifically designed for depth map images.
1
Introduction
In the last years, the new field of signal processing on graphs has gained increasing attention [1]. Differently from classical signal processing, this new emerging
field considers signals that lie on irregular domains, where the signal values
are defined on the nodes of a weighted graph and the edge weights reflect the
pairwise relationship between these nodes. Particular attention has been given
to the design of flexible graph signal representations, opening the door to new
structure-aware transform coding techniques, and eventually to more efficient
signal and image compression frameworks. As an illustrative example, an image
can be represented by a graph, where the nodes are the image pixels and the
edge weights capture the similarity between adjacent pixels. Such a flexible representation makes it possible to go beyond traditional transform coding by moving from
classical fixed transforms such as the discrete cosine transform (DCT) [2] to
graph-based transforms that are better adapted to the actual signal structure,
such as the graph Fourier transform (GFT) [3]. Hence, it is possible to obtain
a more compact representation of an image, as the energy of the image signal is concentrated in the lowest frequencies. This provides a strong advantage
compared to the classical DCT transform especially when the image contains
arbitrarily shaped discontinuities. In this case, the DCT transform coefficients
are not necessarily sparse and contain many high frequency coefficients with
high energy. The GFT, on the other hand, may lead to sparse representations
and eventually more efficient compression.
However, one of the biggest challenges in graph-based signal compression remains the design of the graph and the corresponding transform. A good graph
for effective transform coding should lead to easily compressible signal coefficients, at the cost of a small overhead for coding the graph. Most graph-based
coding techniques focus mainly on images, and they construct the graph by considering pairwise similarities among pixel intensities [4,5] or using a lookup table
that stores the most popular GFTs [6]. It has been shown that these methods
could provide a significant gain in the coding of piecewise smooth images. Instead, in the case of natural images, the cost required to describe the graph often
outweighs the coding gain provided by the adaptive graph transform, and often
leads to unsatisfactory results. The problem of designing a graph transform
stays critical and may actually represent the major obstacle towards effective
compression of signals that live on an irregular domain.
In this work, we build on our previous work [7], and introduce a new graph-based signal compression scheme and apply it to image coding. First, we propose
a novel graph-based compression framework that takes into account the coding
of the signal values as well as the cost of transmitting the graph. Second, we
introduce an innovative way for coding the graph by treating its edge weights
as a graph signal that lies on the dual graph. We then compute the graph
Fourier transform of this signal and code its quantized transform coefficients.
The choice of the graph is thus posed as a rate-distortion optimization problem.
The cost of coding the signal is captured by minimizing the smoothness of the
graph signal on the adapted graph. The transmission cost of the graph itself
is controlled by penalizing the sparsity of the graph Fourier coefficients of the
edge weight signal that lies on the dual graph. The solution of our optimization
problem is a graph that provides an effective tradeoff between the sparsity of
the signal transform and the graph coding cost.
We apply our method to two different types of signals, namely natural images
and piecewise smooth images. Experimental results on natural images confirm
that the proposed algorithm can efficiently infer meaningful graph topologies,
which eventually lead to improved coding results compared to non-adaptive
methods based on classical transforms such as the DCT. Moreover, we show that
our method can significantly improve the classical DCT on piecewise smooth
images, and it even leads to results comparable to the state-of-the-art graph-based depth image coding solutions. However, in contrast to these dedicated
algorithms, it is important to underline that our framework is quite generic and
can be applied to very different types of signals.
The outline of the paper is as follows. We first discuss related work in Section II. We then introduce some preliminary definitions on graphs in Section III.
Next, we present the proposed graph construction problem in Section IV. The
application of the proposed graph construction algorithm to image coding and
the entire compression framework are described in Section V. Then, the experimental results on natural images and piecewise smooth images are presented in
Section VI and VII, respectively. Finally we draw some conclusions in Section
VIII.
2
Related work
In this section, we first provide a brief overview of transform coding. Then, we
focus on graph-based coding and learning methods that are closely related to
the framework proposed in this work.
2.1
Transform coding
Lossy image compression usually employs a 2D transform to produce a new
image representation that lies in the transform domain [8]. Usually, the obtained transform coefficients are approximately uncorrelated and most of the
information is contained in only a few of them. It is proved that the Karhunen-Loève transform (KLT) can optimally decorrelate a signal that has Gaussian entries [9]. However, since the KLT is based on the eigendecomposition of the covariance matrix, this matrix or the transform itself has to be sent to the receiver. For this reason, the KLT is not practical in most circumstances [8]. The most common transform in image compression is the DCT [2], which employs a fixed set of basis vectors. It is known that the DCT is asymptotically equivalent to the KLT for signals that can be modelled as a first-order autoregressive
process [10]. Nevertheless, this model fails to capture the complex and nonstationary behavior that is typically present in natural images. In the light of
the above, transform design is still an active research field and in the last years
many signal adaptive transforms have been presented. In this paper, we focus
on a specific type of adaptive transforms, namely graph-based transforms.
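The asymptotic equivalence of the DCT and the KLT for first-order autoregressive signals mentioned above can be illustrated numerically. The sketch below (ours, for illustration only) builds the covariance matrix of an AR(1) process with correlation rho, computes its eigenvectors (the KLT basis) and compares them with an explicitly constructed DCT-II basis; the block size and rho are arbitrary choices.

    import numpy as np

    N, rho = 8, 0.95
    # Covariance of a first-order AR process: C[i, j] = rho^|i - j|
    C = rho ** np.abs(np.subtract.outer(np.arange(N), np.arange(N)))

    # KLT basis: eigenvectors of C, ordered by decreasing eigenvalue.
    eigvals, eigvecs = np.linalg.eigh(C)
    klt = eigvecs[:, ::-1]

    # Orthonormal DCT-II basis, built explicitly to avoid extra dependencies.
    k, n = np.meshgrid(np.arange(N), np.arange(N), indexing="ij")
    dct = np.sqrt(2.0 / N) * np.cos(np.pi * (2 * n + 1) * k / (2 * N))
    dct[0, :] /= np.sqrt(2.0)

    # |<dct_k, klt_k>| close to 1 means the k-th basis vectors agree up to sign.
    alignment = np.abs(np.sum(dct * klt.T, axis=1))
    print(np.round(alignment, 3))   # values close to 1 when rho is close to 1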
2.2
Graph-based image coding
In the last years, graph signal processing has been applied to different image
coding applications, especially for piecewise smooth images. In [4,5], the authors
propose a graph-based coding method where the graph is defined by considering
pairwise similarities among pixel intensities. Another efficient graph construction method for piecewise smooth images has been proposed in [6], where the
authors use a lookup table that stores the most popular graphs. Then, for
each signal, they perform an exhaustive search choosing the best GFT in rate-distortion terms. Furthermore, a new graph transform, called signed graph
Fourier transform, has been presented in [11]. This transform is targeted for
compression of depth images and its underlying graph contains negative edges
that describe negative correlations between pixel pairs.
Recently, a number of methods using a graph-based approach have also been
proposed for transform coding of inter and intra predicted residual blocks in
video compression. A novel graph-based method for intra-frame coding has been
presented in [12], which introduces a new generalized graph Fourier transform. A
graph-based method for inter predicted video coding has been introduced in [13],
where the authors design a set of simplified graph templates capturing the basic
statistical characteristics of inter predicted residual blocks. Furthermore, a few
separable graph-based transforms for residual coding have also been introduced.
In [14], for example, the authors propose a new class of graph-based separable
transforms for intra and inter predictive video coding. The proposed transform
is based on two separate line graphs, where the edge weights are optimized using
a graph learning problem. Another graph-based separable transform for inter
predictive video coding has been presented in [15]. In this case, the proposed
transform, called symmetric line graph transform, has symmetric eigenvectors
and therefore it can be efficiently implemented.
Finally, a few graph-based methods have also been presented for natural
image compression. In [16], a new technique of graph construction targeted for
image compression is proposed. This method employs innovative edge metrics,
quantization and edge prediction techniques. Moreover, in [17], a new class of
transforms called graph template transforms has been introduced for natural
image compression, focusing in particular on texture images. Finally, a method
for designing sparse graph structures that capture principal gradients in image
code blocks is proposed in [18]. However, in all these methods, it is still not
clear how to define a graph whose corresponding transform provides an effective
tradeoff between the sparsity of the transform coefficients and the graph coding
cost.
2.3
Graph construction
Several attempts to learn the structure and in particular a graph from data observations have been recently proposed, but not necessarily from a compression
point of view. In [19–21], the authors formulate the graph learning problem as
a precision matrix estimation with generalized Laplacian constraints. The same
method is also used in [14, 15], where the authors use a graph learning problem
in order to find the generalized graph Laplacian that best approximates residual video data. Moreover, in [22, 23], a sparse combinatorial Laplacian matrix
is estimated from the data samples under a smoothness prior. Furthermore,
in [17], the authors use a graph template to impose on the graph Laplacian a
sparsity pattern and approximate the empirical inverse covariance based on that
template.
Even if all the methods presented above contain some constraints on the
sparsity of the graph, none of them explicitly takes into account the real cost
of representing and coding the graph. In addition, most of them do not really
target images. Instead, in this paper, we go beyond prior art and we fill this
gap by defining a new graph construction problem that takes into account the
graph coding cost. Moreover, we show how our generic framework can be used
for image compression.
3 Basic definitions on graphs
For any graph G = (V, E) where V and E represent respectively the node and
edge sets with |V| = N and |E| = M , we define the weighted adjacency matrix
W ∈ RN ×N where Wij is the weight associated to the edge (i, j) connecting
nodes i and j. For undirected graphs with no self loops, W is symmetric and
has null diagonal. The graph Laplacian is defined as L = D − W , where D is
a diagonal matrix whose i-th diagonal element Dii is the sum of the weights
of all the edges incident to node i. Since L is a real symmetric matrix, it is
diagonalizable by an orthogonal matrix
L = ΨΛΨT ,
where Ψ ∈ RN ×N is the eigenvector matrix of L that contains the eigenvectors
as columns, and Λ ∈ RN ×N is the diagonal eigenvalue matrix, with eigenvalues
sorted in ascending order.
In the next sections, we will use also an alternative definition of the graph
Laplacian L that uses the incidence matrix $B \in \mathbb{R}^{N \times M}$ [24], which is defined as follows:

$$B_{ie} = \begin{cases} 1, & \text{if } e = (i, j) \\ -1, & \text{if } e = (j, i) \\ 0, & \text{otherwise,} \end{cases}$$

where an orientation is chosen arbitrarily for each edge. Let $\hat{W} \in \mathbb{R}^{M \times M}$ be a diagonal matrix where $\hat{W}_{ee} = W_{ij}$ if $e = (i, j)$. Then, we can define the graph Laplacian L as

$$L = B \hat{W} B^T. \qquad (1)$$
It is important to underline that the graph Laplacian obtained using (1) is independent of the edge orientation chosen in G.
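As an illustration of the two equivalent Laplacian definitions, the following minimal Python/NumPy sketch builds B and the diagonal weight matrix from a small, arbitrarily chosen adjacency matrix (the 4-node graph is purely illustrative, not from the paper) and checks that (1) coincides with L = D − W:

```python
import numpy as np

def incidence_and_weights(W):
    """Incidence matrix B (N x M) and edge-weight vector w from a symmetric
    weighted adjacency matrix W, with an arbitrary orientation i -> j, i < j."""
    N = W.shape[0]
    edges = [(i, j) for i in range(N) for j in range(i + 1, N) if W[i, j] != 0]
    B = np.zeros((N, len(edges)))
    w = np.zeros(len(edges))
    for e, (i, j) in enumerate(edges):
        B[i, e], B[j, e] = 1.0, -1.0   # +1 at the tail, -1 at the head of edge e
        w[e] = W[i, j]
    return B, w

# Small illustrative graph (4 nodes, 4 edges)
W = np.array([[0, 1, 0, 2],
              [1, 0, 3, 0],
              [0, 3, 0, 1],
              [2, 0, 1, 0]], dtype=float)
B, w = incidence_and_weights(W)
L_from_incidence = B @ np.diag(w) @ B.T   # Eq. (1): L = B W_hat B^T
L_classic = np.diag(W.sum(axis=1)) - W    # L = D - W
assert np.allclose(L_from_incidence, L_classic)
```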
3.1 Graph Fourier Transform
A graph signal x ∈ RN in the vertex domain is a real-valued function defined on
the nodes of the graph G, such that xi , i = 1, . . . , N is the value of the signal at
node i ∈ V [1]. For example, for an image signal we can consider an associated
graph where the nodes of the graph are the pixels of the image. Then, the
smoothness of x on G can be measured using the Laplacian L [25]:

$$x^T L x = \frac{1}{2}\sum_{i=1}^{N}\sum_{j=1}^{N} W_{ij}\,(x_i - x_j)^2. \qquad (2)$$
Eq. (2) shows that a graph signal x is considered to be smooth if strongly
connected nodes have similar signal values. This equation also shows the importance of the graph. In fact, with a good graph representation the discontinuities should be penalized by low edge weights, in order to obtain a smooth
representation of the signal. Finally, the eigenvectors of the Laplacian are used
to define the graph Fourier transform (GFT) [1] of the signal x as follows:
x̂ = ΨT x.
The graph signal x can be easily retrieved from x̂ by inversion, namely x = Ψx̂.
Analogously to the Fourier transform in the Euclidean domain, the GFT is used
to describe the graph signal in the Fourier domain.
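As a small, hedged illustration of the definitions in this section (the graph and signal below are toy values chosen by us, not taken from the paper), the following NumPy sketch computes the GFT and its inverse and verifies the smoothness identity (2):

```python
import numpy as np

# Toy weighted graph and graph signal (illustrative values only)
W = np.array([[0, 1, 0, 2],
              [1, 0, 3, 0],
              [0, 3, 0, 1],
              [2, 0, 1, 0]], dtype=float)
x = np.array([0.9, 1.1, 1.0, 0.2])

L = np.diag(W.sum(axis=1)) - W
lam, Psi = np.linalg.eigh(L)          # eigenvalues ascending, eigenvectors as columns

x_hat = Psi.T @ x                     # forward GFT
x_rec = Psi @ x_hat                   # inverse GFT
assert np.allclose(x, x_rec)

# Smoothness measure (2): x^T L x = 1/2 * sum_ij W_ij (x_i - x_j)^2
N = len(x)
pairwise = 0.5 * sum(W[i, j] * (x[i] - x[j]) ** 2 for i in range(N) for j in range(N))
assert np.isclose(x @ L @ x, pairwise)
```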
3.2 Comparison between KLT and GFT
As we have said in Section 2, the KLT is the transform that optimally decorrelates a signal that has Gaussian entries. In this section, we discuss the connection of the graph Fourier transform with the KLT, showing that the GFT can
be seen as an approximation of the KLT.
Let us consider a signal x ∈ RN that follows a Gaussian Markov Random
Field (GMRF) model with respect to a graph G, with a mean µ and a precision
matrix Q. Notice that the GMRF is a very generic model, where the precision
matrix can be defined with much freedom, as long as its non-zero entries encode
the partial correlations between random variables, and as long as their locations
correspond to the edges of the graph. It has been proved that, if the precision
matrix Q of the GMRF model corresponds to the Laplacian L, then the KLT
of the signal x is equivalent to the GFT [26].
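This equivalence can be checked numerically: if the precision matrix is (a regularized version of) the graph Laplacian, the covariance matrix shares its eigenvectors, so the KLT basis coincides with the GFT basis up to ordering and sign. The sketch below is our illustration of this fact (random toy graph; the small εI term is added only to make the precision matrix invertible), not part of the authors' method:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 8
A = np.triu(rng.random((N, N)), 1)
W = A + A.T                              # random symmetric weighted adjacency
L = np.diag(W.sum(axis=1)) - W

eps = 1e-3                               # regularization so the precision is invertible
Q = L + eps * np.eye(N)                  # GMRF precision matrix built from the Laplacian
Sigma = np.linalg.inv(Q)                 # corresponding covariance matrix

lam, Psi_gft = np.linalg.eigh(L)         # GFT basis and Laplacian eigenvalues
# Every GFT basis vector is also an eigenvector of the covariance (eigenvalue 1/(lam+eps)),
# so the KLT basis coincides with the GFT basis up to ordering and sign.
assert np.allclose(Sigma @ Psi_gft, Psi_gft @ np.diag(1.0 / (lam + eps)))
```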
As shown before, the graph Laplacian has a very specific structure where
the non-zero components correspond to the edges of the graph, and, for this
reason, it is a sparse matrix, since typically |E| ≪ N². Since the precision
matrix in general does not have such fixed structure, we now study the KLT
of a signal whose model is a GMRF with a generic precision matrix Q. In this
case, the GFT does not correspond to the KLT anymore and the GFT should
be considered as an approximation of the KLT, where the precision matrix
is forced to follow this specific structure. In order to find the GFT that best
approximates the KLT, we introduce a maximum likelihood estimation problem,
using an approach similar to the one presented in [20]. The density function of
a GMRF has the following form [27]:

$$p(x) = (2\pi)^{-\frac{N}{2}} (\det Q)^{\frac{1}{2}} \exp\left(-\frac{1}{2}(x-\mu)^T Q (x-\mu)\right).$$

The log-likelihood function can then be computed as follows:

$$\log \mathcal{L}(Q, \mu \,|\, x) = \log(\det Q)^{\frac{1}{2}} - \frac{1}{2}(x-\mu)^T Q (x-\mu). \qquad (3)$$

Given $x_1, \ldots, x_n$ observations of the signal x, we find the Laplacian matrix L that best approximates Q by solving the following problem:

$$\max_{L \in \Gamma} \; \log \mathcal{L}(L, \mu \,|\, x_1, \ldots, x_n), \qquad (4)$$

where Γ denotes the set of valid Laplacian matrices. Then, by using (3), the problem in (4) can be written as

$$\max_{L \in \Gamma} \; \log(\det{}^{*} L)^{\frac{1}{2}} - \frac{1}{2}\operatorname{tr}\left((X-\mu)^T L (X-\mu)\right), \qquad (5)$$
where X is the matrix whose columns are the observation vectors $x_1, x_2, \ldots, x_n$ and $\det^*$ is the pseudo-determinant (since L is singular). The optimization
problem in (5) defines the graph whose GFT best approximates the KLT. The
advantage of using the GFT instead of the KLT is that we force the precision
matrix to follow the specific sparse structure defined by the Laplacian. In this
way, the transform matrix can be transmitted to the decoder in a more compact
way. In the next section, we will highlight the connection between the proposed
graph construction problem and the maximum likelihood estimation problem
presented in (5).
4 Graph-transform optimization
Graph-based compression methods use a graph representation of the signal
through its GFT, in order to obtain a data-adaptive transform that captures the
main characteristics of the signals. The GFT coefficients are then encoded, instead of the original signal values themselves. In general, a signal that is smooth
on a graph has its energy concentrated in the low frequency coefficients of the
GFT, hence it is easily compressible. To obtain good compression performance,
the graph should therefore be chosen such that it leads to a smooth representation of the signal. At the same time, it should also be easy to encode, since it
has to be transmitted to the decoder for signal reconstruction. Often, the cost of
the graph representation outweighs the benefits of using an adaptive transform
for signal representation. In order to find a good balance between graph signal
representation benefits and coding costs, we introduce a new graph construction
approach that takes into consideration the above mentioned criteria.
We first pose the problem of finding the optimal graph as a rate-distortion
optimization problem defined as
$$\min_{L \in \mathbb{R}^{N \times N}} \; D(L) + \gamma\left(R_c(L) + R_G(L)\right), \qquad (6)$$

where D(L) is the distortion between the original signal and the reconstructed one and is defined as follows:

$$D(L) = \|u - \tilde{u}(L)\|^2,$$
where u and ũ(L) are respectively the original and the reconstructed signal via
its graph transform on L. The total coding rate is composed of two representation costs, namely the cost of the signal transform coefficients Rc (L) and the
cost of the graph description RG (L). Each of these terms depends on the graph
characterized by L and on the coding scheme. We describe them in more detail
in the rest of the section.
4.1 Distortion approximation
The distortion D(L) is defined as follows
$$D(L) = \|u - \tilde{u}(L)\|^2 = \|\hat{u}(L) - \hat{u}_q(L)\|^2,$$
where u and ũ(L) are respectively the original and the reconstructed signal, and
û(L) and ûq (L) are respectively the transform coefficients and the quantized
transform coefficients. The equality holds due to the orthonormality of the
GFT. Considering a uniform scalar quantizer with the same step size q for all
the transform coefficients, if q is small the expected value of the distortion D(L)
can be approximated as follows [28]:

$$D = \frac{q^2 N}{12}.$$
With this high-resolution approximation, the distortion depends only on the
quantization step size and it does not depend on the chosen L [6]. For simplicity,
in the rest of the paper we adopt this assumption. Therefore, the optimization problem (6) reduces to finding the graph that minimizes the rate terms.
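The high-resolution approximation above is easy to verify empirically. The following sketch (stand-in Gaussian coefficients and an arbitrary small step size, both our assumptions for illustration) compares the measured quantization error energy with q²N/12:

```python
import numpy as np

rng = np.random.default_rng(1)
N, q = 256, 0.05                         # block size and quantization step (assumed values)
u_hat = rng.standard_normal(N)           # stand-in transform coefficients
u_hat_q = q * np.round(u_hat / q)        # uniform scalar quantizer with step q
D_measured = np.sum((u_hat - u_hat_q) ** 2)
D_model = q ** 2 * N / 12                # high-resolution approximation of the distortion
print(D_measured, D_model)               # the two values should be close for small q
```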
4.2 Rate approximation of the transform coefficients
We can evaluate the cost of the transform coefficients Rc (L) by evaluating the
smoothness of the signal on the graph described by L. We use the approximation
proposed in [6], [5], namely
$$R_c(L) = u^T L u = u^T \left( \sum_{l=0}^{N-1} \lambda_l(L)\, \psi_l(L)\, \psi_l(L)^T \right) u = \sum_{l=0}^{N-1} \lambda_l(L)\, \left(u^T \psi_l(L)\right)\left(\psi_l(L)^T u\right) = \sum_{l=0}^{N-1} \lambda_l\, \hat{u}_l^2(L), \qquad (7)$$
where λl and ψl are respectively the l-th eigenvalue and l-th eigenvector of L.
Therefore, Rc (L) is an eigenvalue-weighted sum of squared transform coefficients. It assumes that the coding rate decreases when the smoothness of the
signal over the graph defined by L increases. In addition, (7) relates the measure
of the signal smoothness with the sparsity of the transform coefficients. The approximation in (7) does not take into account the coefficients that correspond to λ0 = 0 (i.e., the DC coefficients). Thus, (7) does not capture the variable
cost of DC coefficients in cases where the graph contains a variable number of
connected components. However, in our work we ignore this cost as we impose
that the graph is connected.
It is also interesting to point out that there is a strong connection between
(7) and (5). In fact, if we suppose that µ = 0 and if we consider u as the
only observation of the signal x, then the second term of the log-likelihood in
(5) is equal to −Rc (L). For this reason, we can say that the solution of our
optimization problem can be seen as an approximation of the KLT.
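The identity in (7) can be checked with a few lines of NumPy on a random toy graph and signal (illustrative values only):

```python
import numpy as np

rng = np.random.default_rng(2)
N = 16
A = np.triu(rng.random((N, N)), 1)
W = A + A.T                              # random symmetric weighted adjacency
L = np.diag(W.sum(axis=1)) - W
lam, Psi = np.linalg.eigh(L)

u = rng.standard_normal(N)
u_hat = Psi.T @ u
# Eq. (7): the quadratic form equals the eigenvalue-weighted energy of the GFT coefficients
assert np.isclose(u @ L @ u, np.sum(lam * u_hat ** 2))
```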
4.3 Rate approximation of the graph description
The graph description cost RG (L) depends on the method that is used to code
the graph. Generally, a graph could have an arbitrary topology. However, in
order to reduce the graph transmission cost, we choose to use a fixed incidence
matrix B for the graph and to vary only the edge weights. Therefore, the graph
can be defined simply by a vector w ∈ R^M, where w_e with 1 ≤ e ≤ M is the weight of the edge e. Then, by using (1), we can define the graph Laplacian as L = B diag(w) B^T.
In order to compress the edge weight vector w, we propose to treat it as a
graph signal that lies on the dual graph Gd . Given a graph G, we define its dual
graph Gd as an unweighted graph where each node of Gd represents an edge of
G and two nodes of Gd are connected if and only if their corresponding edges in
G share a common endpoint. An example of a dual graph is shown in Fig. 1.
We choose to use this graph representation for the edge weight signal w because consecutive edges of G often have similar weights, since the signals often have smooth regions or smooth transitions between regions. The latter is generally
true in case of images. In this way, the dual graph can provide a smooth
representation of w. We can define the graph Laplacian matrix Ld ∈ RM ×M of
the dual graph Gd and the corresponding eigenvector and eigenvalue matrices
Ψd ∈ R^{M×M} and Λd ∈ R^{M×M} such that Ld = Ψd Λd Ψd^T. We highlight that, since Gd is an unweighted graph, it is independent of the choice of L and, as a consequence, Λd and Ψd are also independent of L.
Since w can be represented as a graph signal, we can compute its GFT
ŵ ∈ RM as
ŵ = ΨTd w.
Therefore, we can use ŵ to describe the graph G and we evaluate the cost of the
graph description by measuring the coding cost of ŵ. It has been shown that
the total bit budget needed to code a vector is proportional to the number of
non-zero coefficients [29], thus we approximate the cost of the graph description
by measuring the sparsity of ŵ as follows:

$$R_G(L) = \|\hat{w}\|_1 = \|\Psi_d^T w\|_1. \qquad (8)$$
Figure 1: An example of a graph (a) and its corresponding dual graph (b). The edges in the first graph (labeled with lower case letters) become the nodes of the corresponding dual graph.

We highlight that we use two different types of approximations for Rc (L) and RG (L), even if both of them are treated as graph signals. This is due to the
fact that the two signals have different characteristics. In the case of an image
signal u, we impose that the signal is smooth over G, building the graph G with
this purpose. Instead for w, even if we suppose that consecutive edges usually
have similar values, we have no guarantees that w is smooth on Gd , since Gd is
fixed and it is not adapted to the image signal. Therefore, in the second case
using a sparsity constraint is more appropriate for capturing the characteristics
of the edge weight signal w.
To be complete, we finally note that the dual graph has already been used in
graph learning problems in the literature. In particular, in [30] the authors propose a method for joint denoising and contrast enhancement of images using the
graph Laplacian operator, where the weights of the graph are defined through
an optimization problem that involves the dual graph. Moreover, [31] presents
a graph-based dequantization method by jointly optimizing the desired graph signal and the similarity graph, where the weights of the graph are treated as
another graph signal defined on the dual graph. The approximation of RG (L)
presented in (8) may look similar to the one used in [31]. The main difference
between the two formulations is that in (8) we minimize the sparsity of w in the
GFT domain in order to code the signal w in a lossy manner; instead, in [31], the authors
minimize the differences between neighboring edges in order to optimize the
graph structure without actually coding it.
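A minimal sketch of the dual-graph construction described above (toy edge list and weights chosen only for illustration): each edge of G becomes a node of Gd, and two such nodes are connected when the corresponding edges share an endpoint.

```python
import numpy as np

def dual_graph_adjacency(edges):
    """Unweighted adjacency matrix of the dual graph G_d: one node per edge of G,
    two nodes connected iff the corresponding edges of G share a common endpoint."""
    M = len(edges)
    A_d = np.zeros((M, M))
    for a in range(M):
        for b in range(a + 1, M):
            if set(edges[a]) & set(edges[b]):
                A_d[a, b] = A_d[b, a] = 1.0
    return A_d

# Edges of a small graph G (arbitrary orientation) and their weights w
edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]
w = np.array([0.9, 0.8, 0.2, 0.3, 0.7])

A_d = dual_graph_adjacency(edges)
L_d = np.diag(A_d.sum(axis=1)) - A_d       # Laplacian of the fixed dual graph
_, Psi_d = np.linalg.eigh(L_d)
w_hat = Psi_d.T @ w                        # GFT of the edge-weight signal on G_d
```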
4.4 Graph construction problem
By using (1), (7) and (8), our graph construction problem (6) is reduced to the
following optimization problem
$$\min_{w \in \mathbb{R}^M} \; u^T B\,\mathrm{diag}(w)\,B^T u + \alpha \|\Psi_d^T w\|_1, \qquad (9)$$

where α is a weighting constant parameter that allows us to balance the contribution of the two terms.
Figure 2: Block diagram of the proposed coding method for an input image u. The graph learning problem (11) is solved to obtain w∗; its dual-domain coefficients ŵ∗ = Ψd^T w∗ are quantized with each candidate step ∆i; the best ∆i is chosen by solving (12); the selected ŵ∗_{r,∆i} and the quantized image coefficients ûq are finally entropy coded into the bitstream.
Building on the rate-distortion formulation of (9), we design the graph by
solving the following optimization problem
$$\min_{w \in \mathbb{R}^M} \; u^T B\,\mathrm{diag}(w)\,B^T u + \alpha \|\Psi_d^T w\|_1 - \beta \mathbf{1}^T \log(w) \quad \text{s.t.} \quad w \le \mathbf{1}, \qquad (10)$$
where α and β are two positive regularization parameters and 1 denotes the
constant one vector. The inequality constraint has been added to guarantee
that all the weights are in the range (0, 1], which is the same range of the most
common normalized weighting functions [32]. Then, the logarithmic term has
been added to penalize low weight values and to avoid the trivial solution. In
addition, this term guarantees that wm > 0, ∀m, so that the graph is always
connected. A logarithmic barrier is often employed in graph learning problems
[23]. In particular, it has further been shown that a graph with Gaussian weights
can be seen as the result of a graph learning problem with a specific logarithmic
barrier on the edge weights [23].
The problem in (10) can be cast as a convex optimization problem with a
unique minimizer. To solve this problem, we write the first term in the following
form
$$u^T B\,\mathrm{diag}(w)\,B^T u = \operatorname{tr}\left((B^T u u^T B)\,\mathrm{diag}(w)\right) = \operatorname{vec}(B^T u u^T B)^T \operatorname{vec}(\mathrm{diag}(w)) = \operatorname{vec}(B^T u u^T B)^T M_{\mathrm{diag}}\, w,$$

where tr(·) denotes the trace of a matrix, vec(·) is the vectorization operator, and $M_{\mathrm{diag}} \in \mathbb{R}^{M^2 \times M}$ is a matrix that converts the vector w into vec(diag(w)). Then, we can rewrite problem (10) as

$$\min_{w \in \mathbb{R}^M} \; \operatorname{vec}(B^T u u^T B)^T M_{\mathrm{diag}}\, w + \alpha \|\Psi_d^T w\|_1 - \beta \mathbf{1}^T \log(w) \quad \text{s.t.} \quad w \le \mathbf{1}. \qquad (11)$$
The problem in (11) is a convex problem with respect to the variable w and can
be solved efficiently via interior-point methods [33].
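As an illustration of how (11) can be solved in practice, the sketch below uses CVXPY (our choice for the example; the paper only states that interior-point methods are used). It exploits the fact that the first term is linear in w, since u^T B diag(w) B^T u = Σ_e w_e ((B^T u)_e)²; the function name and signature are ours.

```python
import numpy as np
import cvxpy as cp

def learn_edge_weights(u, B, Psi_d, alpha, beta):
    """Sketch of problem (11): learn the edge weights w for a fixed topology.
    u: (N,) block signal, B: (N, M) incidence matrix,
    Psi_d: (M, M) dual-graph GFT basis, alpha/beta: positive regularization weights."""
    M = B.shape[1]
    w = cp.Variable(M)
    c = (B.T @ u) ** 2                      # u^T B diag(w) B^T u = c^T w (linear in w)
    objective = c @ w + alpha * cp.norm1(Psi_d.T @ w) - beta * cp.sum(cp.log(w))
    problem = cp.Problem(cp.Minimize(objective), [w <= 1])
    problem.solve()                          # requires a solver with exponential-cone support
    return w.value
```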
5 Graph-based image compression
We now describe how the graph construction problem of the previous section
can be applied to block-based image compression. It is important to underline
that the main goal of this section is to present an application of our framework.
Therefore, we do not present an optimization of the full coding process, but we
mainly focus on the transform block.
As pointed out in the previous sections, given an image block u we have
two different types of information to transmit to the decoder: the transform
coefficients of the image signal û and the description of the graph ŵ. The image
coefficients û are quantized and then coded using an entropy coder. Under the
assumption of high bitrate, the optimal entropy-constrained quantizer is the
uniform quantizer [34]. Moreover, it has been proved that, under the assumption
that all the transform coefficients follow the same probability distribution, the
transform code is optimized when the quantization steps of all coefficients are
equal [35]. For these reasons, we quantize the image transform coefficients û
using a uniform quantizer with the same step size q for all the coefficients.
Then, since we assume that the non-zero coefficients are concentrated in the low
frequencies, we code the quantized coefficients until the last non-zero coefficient
using an adaptive bitplane arithmetic encoder [36] and we transmit the position
of the last significant coefficient.
The graph itself is transmitted by its GFT coefficients vector ŵ, which is
quantized and then transmitted to the decoder using an entropy coder. In order
to reduce the cost of the graph description, we reduce the number of elements in ŵ by taking into account only the first M̃ ≪ M coefficients, which usually are the most significant ones, and setting the other M − M̃ coefficients to zero. The reduced signal ŵr ∈ R^M̃ is quantized using the same step size for all its coefficients and then coded with the same entropy coder used for the image signal.
Given an image signal, we first solve the optimization problem in (11) obtaining the optimal solution w∗ . To transmit w∗ to the decoder, we first compute
its GFT coefficients ŵ∗ and the reduced vector ŵr∗ , then we quantize ŵr∗ and
code it using the entropy coder described above. It is important to underline
that, since we perform a quantization of ŵr∗ , the reconstructed signal w̃∗ is not
strictly equal to the original w∗ and its quality depends on the quantization step
size used. The graph described by w̃∗ is then used to define the GFT transform
for the image signal.
Since it is important to find the best tradeoff between the quality of the graph
and its transmission cost, for each block in an image we test different quantization step sizes {∆i }1≤i≤Q for a given graph represented by ŵr∗ . To choose the
best quantization step size, we use the following rate-distortion problem
$$\min_{i} \; D(\Delta_i) + \gamma\left(R_c(\Delta_i) + R_G(\Delta_i)\right), \qquad (12)$$

where RG(∆i) is the rate of ŵ∗_{r,∆i}, the coefficient vector ŵ∗r quantized with ∆i, and D(∆i) and Rc(∆i) are respectively the distortion and the rate of the reconstructed image signal obtained using the graph transform described by ŵ∗_{r,∆i}. We point out that the choice of ∆i depends on the quantization step size
q used for the image transform coefficients û. In fact, at high bitrate (small q)
we expect to have a smaller ∆i and thus a more precise graph, instead at low
bitrate (large q) we will have a larger ∆i that corresponds to a coarser graph
approximation. We also underline that, in (12), we evaluate the actual distortion
and rate without using the approximation introduced previously in (6), (7),
(8). The actual coding methods described above are used to compute the rates
Rc (∆i ) and RG (∆i ). The principal steps of the proposed image compression
method are summarized in Fig. 2.
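The following sketch mimics the step-size selection of (12). It is only schematic: the actual scheme measures the rates produced by the arithmetic coders, whereas here the rates are replaced by a crude proxy (the number of non-zero quantized coefficients), and the weights reconstructed from the quantized dual-domain coefficients are simply clipped to keep the Laplacian valid.

```python
import numpy as np

def choose_graph_quantizer(u, B, Psi_d, w_star, deltas, q, gamma):
    """Schematic version of (12): pick the step size Delta_i for the graph description."""
    best_cost, best_delta = np.inf, None
    for delta in deltas:
        # quantize the graph description in the dual-graph GFT domain
        w_hat_q = delta * np.round((Psi_d.T @ w_star) / delta)
        w_rec = np.clip(Psi_d @ w_hat_q, 1e-6, 1.0)       # keep the weights in (0, 1]
        # graph transform induced by the reconstructed weights
        L = B @ np.diag(w_rec) @ B.T
        _, Psi = np.linalg.eigh(L)
        u_hat_q = q * np.round((Psi.T @ u) / q)           # quantized image coefficients
        u_rec = Psi @ u_hat_q
        D = np.sum((u - u_rec) ** 2)                      # actual distortion
        R_proxy = np.count_nonzero(u_hat_q) + np.count_nonzero(w_hat_q)  # rate proxy
        cost = D + gamma * R_proxy
        if cost < best_cost:
            best_cost, best_delta = cost, delta
    return best_delta
```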
6 Experimental results on natural images
In this section, we evaluate the performance of our illustrative graph-based
encoder for natural images. We first describe the general experimental settings,
then we present the obtained experimental results.
6.1 Experimental setup
First of all, we subdivide the image into non-overlapping 16×16 pixel blocks.
For each block, we define the edge weights using the graph learning problem
described in the previous sections. The chosen topology of the graph is a 4-connected grid: this is the most common graph topology for graph-based image compression, since its number of edges is not too high, and thus the coding cost is limited. In a 4-connected square grid with N nodes, we have M = 2√N(√N − 1) edges. In all our experiments on natural images, we use Q = 8 possible quantization step sizes ∆i for ŵr and we set M̃ = 64, which is the length of the reduced coefficient vector ŵr. In order to set the value of the
parameter α in (11), we first have to perform a block classification. In fact, we
recall that the parameter α in (11) is related to the l1 -norm of ŵ, where ŵ are
the GFT coefficients of the signal w that lies on the dual graph. As we have
explained previously, the motivation for using the dual graph is that consecutive
edges usually have similar values. However, this statement is not always true,
but it depends on the characteristics of the block. In smooth blocks nearly all
the edges will have similar values. Instead, in piecewise smooth blocks there
could be a small percentage of edges whose consecutive ones have significantly
different values. Finally, in textured blocks this percentage may even increase in
a significant way. For this reason, we perform a priori a block classification using
a structure tensor analysis, as done in [18]. The structure tensor is a matrix
derived from the gradient of an image patch, and it is commonly used in many
image processing algorithms, such as edge detection [37], corner detection [38,39]
and feature extraction [40]. Let µ1 and µ2 be the two eigenvalues of the structure
tensor, where µ1 ≥ µ2 ≥ 0. We classify the image blocks in the following way (a sketch of this classification is given after the list):
• Class 1: smooth blocks, if µ1 ≈ µ2 ≈ 0;
• Class 2: blocks with a dominant principal gradient, if µ1 ≫ µ2 ≈ 0;
• Class 3: blocks with a more complex structure, if µ1 and µ2 are both large.
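The sketch below illustrates the structure-tensor classification on a single block; the thresholds are illustrative placeholders chosen by us, not the values used in the experiments.

```python
import numpy as np

def classify_block(block, t_smooth=1.0, t_ratio=10.0):
    """Classify a block into class 1, 2 or 3 from the eigenvalues of its
    structure tensor (thresholds t_smooth and t_ratio are assumed, for illustration)."""
    gy, gx = np.gradient(block.astype(float))     # image gradients along rows/columns
    T = np.array([[np.sum(gx * gx), np.sum(gx * gy)],
                  [np.sum(gx * gy), np.sum(gy * gy)]])
    mu2, mu1 = np.linalg.eigvalsh(T)              # eigenvalues in ascending order: mu1 >= mu2 >= 0
    if mu1 < t_smooth:
        return 1                                  # smooth block
    if mu1 > t_ratio * max(mu2, 1e-12):
        return 2                                  # dominant principal gradient
    return 3                                      # more complex structure
```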
Fig. 3 shows an example of block classification. For each block class, we have
set the values of parameters α and β by fine tuning. We set α = 100 for blocks that belong to the first class, α = 500 for blocks that belong to the second class and α = 800 for blocks that belong to the third class. For all three classes, we set the same value for the other optimization parameter, i.e., β = 1.

Figure 3: Block classification of the image Lena. Class 1 contains smooth blocks, class 2 blocks with a dominant principal gradient, and class 3 consists of blocks with a more complex structure.

Table 1: Bjontegaard average gain in PSNR for natural images w.r.t. DCT

| Image   | Class 1 (Learned / Gaussian) | Class 2 (Learned / Gaussian) | Class 3 (Learned / Gaussian) | Total (Learned / Gaussian) |
| Lena    | 0.11 / 0.11 | 0.70 / 0.49 | 0.51 / 0.32 | 0.31 / 0.23 |
| Boat    | 0.09 / 0.07 | 0.33 / 0.21 | 0.46 / 0.34 | 0.28 / 0.21 |
| Peppers | 0.18 / 0.14 | 0.87 / 0.45 | 0.88 / 0.57 | 0.47 / 0.31 |
| House   | 0.05 / 0.05 | 0.68 / 0.59 | 0.61 / 0.49 | 0.36 / 0.31 |
| Couple  | 0.26 / 0.29 | 1.17 / 0.75 | 0.98 / 0.81 | 0.66 / 0.55 |
| Stream  | 0.17 / 0.19 | 0.50 / 0.30 | 0.31 / 0.22 | 0.31 / 0.23 |
We compare the performance of the proposed method to a baseline coding
scheme built on the classical DCT transform. In order to obtain comparable
results, we code the transform coefficients û of the image signal using the same
entropy coder for the graph-based method and for the DCT-based encoder. In
the first case, in addition to the bitrate of û, we count the bitrate due to the
transmission of ŵi∗ and log Q additional bits per block to transmit the chosen
quantization step size ∆i for ŵr . For both methods, we vary the quantization
step size q of the transform coefficients to vary the encoding rates. In addition,
in our method, for each block, we compare the RD-cost of the GFT and the
one of the DCT. Then, we eventually code the block with the transform that
has the lowest RD-cost and we use 1 additional bit per block to signal if we are
using the GFT or the DCT.
In order to show the advantages of the proposed graph construction problem,
we compare our method with a classical graph construction technique that uses
a Gaussian weight function [32] to define the edge weights
$$W_{ij} = e^{-\frac{(u_i - u_j)^2}{\sigma^2}},$$

where σ is a Gaussian parameter that we define as σ = 0.15 max_{i,j} |ui − uj|. In order to have comparable results, we use the coding scheme described in Sec. 5 also for the Gaussian graph.
Figure 4: Rate-distortion curves (PSNR in dB vs. bitrate in bpp) for the image Peppers, shown for class 1, class 2, class 3 and for the whole image, comparing the DCT, the learned graph and the Gaussian graph.
6.2 Results
The experiments are performed on six classical grayscale images (House, Lena,
Boat, Peppers, Stream, Couple) [41]. This dataset contains different types of
natural images, for example some of them have smooth regions (e.g. House
and Peppers), others instead are more textured (e.g. Boat, Lena, Couple and
Stream). In Table 1, we show the obtained performance results in terms of
average gain in PSNR compared to DCT, evaluated through the Bjontegaard
metric [42]. Moreover, in Fig. 4 we show the rate-distortion curves for the
image Peppers. Instead, in Fig. 5 we show a visual comparison between the
DCT and the proposed method for the image Peppers. We see that, in the
second and third classes, the proposed method outperforms DCT providing an
average PSNR gain of 0.6 dB for blocks in the second class and 0.64 dB for
blocks in the third class. It should be pointed out that there is not a significant
difference in performance between the second class and the third one. This
is probably due to the fact that the proposed graph construction method is
able to adapt the graph and its description cost to the characteristics of each
block. Instead, in the first class, which corresponds to smooth blocks, the gain
is nearly 0, as DCT in this case is already optimal. Finally, we notice that, in
the classes where the DCT is not optimal, the learned graph always outperforms
the Gaussian graph.
Figure 5: Visual comparison for a detail of the image Peppers at 0.6 bpp (original image, DCT, proposed method).
7 Experimental results on piecewise smooth images
In this section, we evaluate the performance of the proposed method on piecewise
smooth images, comparing our method with classical DCT and the state-of-the-art graph-based coding method of [6]. We first describe the specific experimental
setting used for this type of signals, then we present the obtained results.
7.1 Experimental setup
We choose as piecewise smooth signals six depth maps taken from [43,44]. Similarly to the case of natural images, we split them into non-overlapping 16×16
pixel blocks and the chosen graph topology is a 4-connected grid. In addition,
we keep for Q the same setting as the one used for natural images. Then,
to define the parameters α and β we again subdivide the image blocks into
classes using the structure tensor analysis. In [6], the authors have identified
three block classes for piecewise smooth images: smooth blocks, blocks with
weak boundaries (e.g., boundaries between different parts of the same foreground/background) and blocks with sharp boundaries (e.g., boundaries between foreground and background). In our experiments, since we have observed
that the first two classes have a similar behavior, we decided to consider only
two different classes:
• Class 1: smooth blocks and blocks with weak edges, if µ1 ≈ µ2 ≈ 0.
• Class 2: blocks with sharp edges, if µ1 ≫ 0,
where µ1 and µ2 , with µ1 ≥ µ2 ≥ 0, are the two eigenvalues of the structure
tensor. An example of block classification is shown in Fig. 6.

Figure 6: Block classification of the image Teddy (class 1 and class 2 blocks).

As done for natural images, for each class we set parameters α and β by fine tuning. For
the first class, we set α = 40 and β = 0.02. For the second class, we set α = 400
and β = 1.
With this type of signals, we have observed that the coefficients ŵ∗ of the
learned graph are very sparse, as shown in Fig. 7. For this reason, we decided to
modify the coding method used for ŵ∗. As done for natural images, we reduce the number of elements in ŵ by taking into account only the first M̃ coefficients (in this case we set M̃ = 256). Then, we use an adaptive binary arithmetic
encoder to transmit a significance map that signals the non-zero coefficients. In
this way, we can use an adaptive bitplane arithmetic encoder to code only the
values of the non-zero coefficients. This allows a strong reduction of the number
of coefficients that we have to transmit to the decoder.
Similarly to the case of natural images, we compare our method to a transform coding method based on the classical DCT. However, in the specific case
of depth map coding it has been shown that graph-based methods significantly
outperform the classical DCT. For this reason, we also propose a comparison
with a graph-based coding scheme that is specifically designed for piecewise
smooth images. The method presented in [6] achieves the state-of-the-art performance in graph-based depth image coding. This method uses a table-lookup
based graph transform: the most popular GFTs are stored in a lookup table,
and then for each block an exhaustive search is performed to choose the best
GFT in rate-distortion terms. In this way, the side information that has to be
sent to the decoder is only the table index. Moreover, the method in [6] incorporates a number of coding tools, including a multiresolution coding scheme and
edge-aware intra-prediction. Since in our case we are interested in evaluating
the performance of the transform, we only focus on the transform part and we
use as reference method a simplified version of the method in [6] that is similar to the one used in [45]. The simplified version of [6] that we implemented
employs 16×16 blocks and it does not make use of edge-aware prediction and
multiresolution coding.

Table 2: Bjontegaard average gain in PSNR compared to DCT.

| Image    | Class 1 | Class 2 | Total |
| Teddy    | 0.59 | 6.12 | 4.82 |
| Cones    | 0.70 | 8.37 | 6.88 |
| Art      | 0.56 | 8.66 | 7.62 |
| Dolls    | 0.55 | 8.59 | 5.57 |
| Moebius  | 0.82 | 7.36 | 5.52 |
| Reindeer | 0.45 | 8.34 | 5.75 |

Table 3: Bjontegaard average gain in PSNR between the proposed method and the reference method.

| Image    | Class 1 | Class 2 | Total |
| Teddy    | -0.87 | -0.12 | -0.38 |
| Cones    | -1.21 |  1.17 |  0.54 |
| Art      | -0.89 |  0.86 |  0.49 |
| Dolls    | -0.78 |  1.16 |  0.26 |
| Moebius  | -1.14 | -0.47 | -0.76 |
| Reindeer | -0.51 |  0.02 | -0.33 |

Since the transform used in [6] is based on a lookup
table, we use 40 training depth images to build the table as suggested in [6]. In
the training phase, we identify the most common graph transforms. As a result,
the obtained lookup table contains 718 transforms. Then, in the coding phase
each block is coded using one of the transforms contained in the lookup table
or the DCT. The coding method used for the table index is the same as in [6].
Instead for the transform coefficients û, in order to have comparable results, we
use the coding method described in Sec. 5.
7.2 Results
The first coding results on depth maps are summarized in Table 2, where we
show the average gain in PSNR compared to DCT. Instead, in Table 3 we show
the Bjontegaard average gain in PSNR between the proposed method and the
reference method described previously. Moreover, in Fig. 8 we show the rate-distortion curves for the image Dolls. Finally, Fig. 9 shows an example of a
decoded image obtained using the proposed method.
The results show that the proposed technique provides a significant quality gain compared to DCT, displaying a behavior similar to other graph-based
techniques. Moreover, it is important to highlight that the performance of the
proposed method is close to that of the state-of-the-art method [6], even though our method is not optimized for piecewise smooth images but is a more general method that can be applied to a variety of signal classes. In particular, for
the blocks belonging to the second class, in 4 out of 6 images (namely Cones,
Art, Dolls and Moebius) we are able to outperform the reference method, reaching in some cases a quality gain larger than 1 dB (see Table 3). Overall, with
our more generic compression framework, we outperform the reference method
in approximately half of the test images. In general, we observe that the proposed method outperforms the reference one in blocks that have several edges
or edges that are not straight. This is probably due to the fact that, in these
cases, it is more difficult to represent the graph using a lookup table. It is also
worth noting that our method shows better performance at low bitrate, as can be seen in Fig. 8.
8 Conclusion
In this paper, we have introduced a new graph-based framework for signal compression. First, in order to obtain an effective coding method, we have formulated a new graph construction problem targeted for compression. The solution
of the proposed problem is a graph that provides an effective tradeoff between
the energy compaction of the transform and the cost of the graph description.
Then, we have also proposed an innovative method for coding the graph by
treating the edge weights as a new signal that lies on the dual graph. We have
tested our method on natural images and on depth maps. The experimental
results show that the proposed method outperforms the classical DCT and, in
the case of depth map coding, even compares to the state-of-the-art graph-based
coding method.
We believe that the proposed technique contributes to opening a new research direction in graph-based image compression. As future work, it would be interesting to investigate other possible representations for the edge weights
of the graph, such as graph dictionaries or graph wavelets. This may lead to
further improvements in the coding performance of the proposed method.
Acknowledgment
This work was partially supported by Sisvel Technology.
References
[1] D. Shuman, S. Narang, P. Frossard, A. Ortega, and P. Vandergheynst, “The
emerging field of signal processing on graphs: Extending high-dimensional
data analysis to networks and other irregular domains,” IEEE Signal Process. Mag., vol. 30, no. 3, pp. 83–98, 2013.
[2] N. Ahmed, T. Natarajan, and K. Rao, “Discrete cosine transform,” IEEE
Trans. Computers, vol. C-23, no. 1, pp. 90–93, 1974.
[3] D. K. Hammond, P. Vandergheynst, and R. Gribonval, “Wavelets on graphs
via spectral graph theory,” Applied and Computational Harmonic Analysis,
vol. 30, no. 2, pp. 129–150, 2011.
[4] G. Shen, W. S. Kim, S. K. Narang, A. Ortega, J. Lee, and H. Wey, “Edge-adaptive transforms for efficient depth map coding,” in Proc. Picture Coding Symposium (PCS), 2010, pp. 2808–2811.
[5] W. Kim, S. K. Narang, and A. Ortega, “Graph based transforms for
depth video coding,” in Proc. IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2012, pp. 813–816.
[6] W. Hu, G. Cheung, A. Ortega, and O. C. Au, “Multiresolution graph
fourier transform for compression of piecewise smooth images,” IEEE
Trans. on Image Process., vol. 24, no. 1, pp. 419–433, 2015.
[7] G. Fracastoro, D. Thanou, and P. Frossard, “Graph transform learning for
image compression,” in Proc. Picture Coding Symposium (PCS), 2016, pp.
1–5.
[8] K. Sayood, Introduction to data compression.
Newnes, 2012.
[9] V. K. Goyal, J. Zhuang, and M. Vetterli, “Transform coding with backward
adaptive updates,” IEEE Transactions on Information Theory, vol. 46,
no. 4, pp. 1623–1633, 2000.
[10] A. K. Jain, “A sinusoidal family of unitary transforms,” IEEE Transactions
on Pattern Analysis and Machine Intelligence, no. 4, pp. 356–365, 1979.
[11] W.-T. Su, G. Cheung, and C.-W. Lin, “Graph fourier transform with negative edges for depth image coding,” arXiv preprint arXiv:1702.03105, 2017.
[12] W. Hu, G. Cheung, and A. Ortega, “Intra-prediction and generalized graph
fourier transform for image coding,” IEEE Signal Process. Lett., vol. 22,
no. 11, pp. 1913–1917, 2015.
[13] H. E. Egilmez, A. Said, Y.-H. Chao, and A. Ortega, “Graph-based transforms for inter predicted video coding,” in Proc. IEEE International Conference on Image Processing (ICIP), 2015, pp. 3992–3996.
[14] H. E. Egilmez, Y.-H. Chao, A. Ortega, B. Lee, and S. Yea, “Gbst: Separable
transforms based on line graphs for predictive video coding,” in Proc. IEEE
International Conference on Image Processing (ICIP), 2016, pp. 2375–2379.
[15] K.-S. Lu and A. Ortega, “Symmetric line graph transforms for inter predictive video coding,” in Proc. Picture Coding Symposium (PCS), 2016.
[16] G. Fracastoro and E. Magli, “Predictive graph construction for image compression,” in Proc. IEEE International Conference on Image Processing
(ICIP), 2015, pp. 2204–2208.
[17] E. Pavez, H. E. Egilmez, Y. Wang, and A. Ortega, “GTT: Graph template
transforms with applications to image coding,” in Proc. Picture Coding
Symposium (PCS), 2015, pp. 199–203.
[18] I. Rotondo, G. Cheung, A. Ortega, and H. E. Egilmez, “Designing sparse
graphs via structure tensor for block transform coding of images,” in Proc.
Asia-Pacific Signal and Information Processing Association Annual Summit and Conference (APSIPA), 2015, pp. 571–574.
[19] E. Pavez and A. Ortega, “Generalized laplacian precision matrix estimation
for graph signal processing,” in Proc. IEEE International Conference on
Acoustics Speech and Signal Processing (ICASSP), 2016, pp. 6350–6354.
[20] H. E. Egilmez, E. Pavez, and A. Ortega, “Graph learning from data under structural and laplacian constraints,” arXiv preprint arXiv:1611.05181,
2016.
[21] E. Pavez, H. E. Egilmez, and A. Ortega, “Learning graphs with monotone
topology properties and multiple connected components,” arXiv preprint
arXiv:1705.10934, 2017.
[22] X. Dong, D. Thanou, P. Frossard, and P. Vandergheynst, “Learning laplacian matrix in smooth graph signal representations,” IEEE Trans. Signal
Process., vol. 64, no. 23, pp. 6160–6173, 2016.
[23] V. Kalofolias, “How to learn a graph from smooth signals,” in Proc. International Conference on Artificial Intelligence and Statistics (AISTATS),
2016, pp. 920–929.
[24] J. Gallier, “Elementary spectral graph theory applications to graph clustering using normalized cuts: a survey,” arXiv preprint arXiv:1311.2492,
2013.
[25] D. Zhou and B. Schölkopf, “A regularization framework for learning from
graph data,” in Proc. ICML Workshop on Statistical Relational Learning
and its Connections to other Fields, 2004, pp. 132–137.
[26] C. Zhang and D. Florêncio, “Analyzing the optimality of predictive transform coding using graph-based models,” IEEE Signal Process. Lett., vol. 20,
no. 1, pp. 106–109, 2013.
[27] H. Rue and L. Held, Gaussian Markov random fields: theory and applications. CRC Press, 2005.
[28] R. M. Gray and D. L. Neuhoff, “Quantization,” IEEE Trans. Inf. Theory,
vol. 44, no. 6, pp. 2325–2383, 1998.
[29] S. Mallat and F. Falzon, “Analysis of low bit rate image transform coding,”
IEEE Trans. on Signal Process., vol. 46, no. 4, pp. 1027–1042, 1998.
[30] X. Liu, G. Cheung, and X. Wu, “Joint denoising and contrast enhancement
of images using graph laplacian operator,” in Proc. IEEE International
Conference on Acoustics, Speech and Signal Processing (ICASSP), 2015,
pp. 2274–2278.
[31] W. Hu, G. Cheung, and M. Kazui, “Graph-based dequantization of blockcompressed piecewise smooth images,” IEEE Signal Process. Lett., vol. 23,
no. 2, pp. 242–246, 2016.
[32] L. J. Grady and J. Polimeni, Discrete calculus: Applied analysis on graphs
for computational science. Springer Science & Business Media, 2010.
[33] S. Boyd and L. Vandenberghe, Convex optimization. Cambridge university
press, 2004.
[34] T. Wiegand and H. Schwarz, Source coding: Part I of fundamentals of
source and video coding. Now Publishers Inc, 2010.
[35] S. Mallat, A wavelet tour of signal processing: the sparse way. Academic Press, 2008.
[36] I. H. Witten, R. M. Neal, and J. G. Cleary, “Arithmetic coding for data
compression,” Communications of the ACM, vol. 30, no. 6, pp. 520–540,
1987.
[37] U. Köthe, “Edge and junction detection with an improved structure tensor,” in Joint Pattern Recognition Symposium. Springer, 2003, pp. 25–32.
[38] C. Harris and M. Stephens, “A combined corner and edge detector.” in
Alvey vision conference, vol. 15, no. 50. Manchester, UK, 1988, pp. 10–
5244.
[39] C. S. Kenney, M. Zuliani, and B. Manjunath, “An axiomatic approach
to corner detection,” in Computer Vision and Pattern Recognition, 2005.
CVPR 2005. IEEE Computer Society Conference on, vol. 1. IEEE, 2005,
pp. 191–197.
[40] W. Förstner, “A feature based correspondence algorithm for image matching,” International Archives of Photogrammetry and Remote Sensing,
vol. 26, no. 3, pp. 150–166, 1986.
[41] USC-SIPI, “Image database, volume 3: Miscellaneous,” http://sipi.usc.
edu/database/database.php?volume=misc.
[42] G. Bjontegaard, “Calculation of average PSNR differences between RDcurves,” Doc. VCEG-M33 ITU-T Q6/16, Austin, TX, USA, 2-4 April
2001, 2001.
[43] D. Scharstein and R. Szeliski, “High-accuracy stereo depth maps using
structured light,” in Proc. IEEE Conference on Computer Vision and Pattern Recognition, vol. 1, 2003, pp. 195–202.
[44] D. Scharstein and C. Pal, “Learning conditional random fields for stereo,”
in Proc. IEEE Conference on Computer Vision and Pattern Recognition
(CVPR), 2007, pp. 1–8.
[45] D. Zhang and J. Liang, “Graph-based transform for 2d piecewise smooth
signals with random discontinuity locations,” IEEE Trans. on Image Process., vol. 26, no. 4, pp. 1679–1693, 2017.
Figure 7: Top: Example of a piecewise smooth block and the corresponding
learned graph. Bottom: The corresponding GFT coefficients ŵ.
Figure 8: Rate-distortion curves (PSNR in dB vs. bitrate in bpp) for the image Dolls, shown for class 1, class 2 and for the whole image, comparing the DCT, the proposed method and the reference method.
Figure 9: Top: Original image. Bottom: Decoded image using the proposed
method (0.6 bpp).
| 7 |
Numerical modeling of the friction stir welding
process: a literature review
Diogo Mariano Neto • Pedro Neto
D. M. Neto, P. Neto
Department of Mechanical Engineering (CEMUC) - POLO II, University of Coimbra,
3030-788 Coimbra, Portugal
Tel.: +351 239 790 700
Fax: +351 239 790 701
E-mail: [email protected]
Corresponding author: D. M. Neto – [email protected]
Abstract:
This survey presents a literature review on friction stir welding (FSW) modeling with a special focus on the
heat generation due to the contact conditions between the FSW tool and the workpiece. The physical process
is described and the main process parameters that are relevant to its modeling are highlighted. The contact
conditions (sliding/sticking) are presented as well as an analytical model that allows estimating the associated
heat generation. The modeling of the FSW process requires the knowledge of the heat loss mechanisms,
which are discussed mainly considering the more commonly adopted formulations. Different approaches that
have been used to investigate the material flow are presented and their advantages/drawbacks are discussed.
A reliable FSW process modeling depends on the fine tuning of some process and material parameters.
Usually, these parameters are achieved with base on experimental data. The numerical modeling of the FSW
process can help to achieve such parameters with less effort and with economic advantages.
Keywords:
Frictions stir welding, FSW, Modeling, Numerical simulation, Heat generation, Heat
transfer, Metal flow, Review
1. INTRODUCTION
1.1. Friction Stir Welding Process
Friction stir welding (FSW) is a novel solid state joining process patented in 1991 by The Welding
Institute, Cambridge, UK [1]. One of the main advantages of FSW over the conventional fusion
joining techniques is that no melting occurs. Thus, the FSW process is performed at much lower
temperatures than conventional welding. At the same time, FSW makes it possible to avoid many of the environmental and safety issues associated with conventional welding methods [2]. In FSW the
parts to weld are joined by forcing a rotating tool to penetrate into the joint and moving across the
entire joint. In summary, the solid-state joining process is promoted by the movement of a non-consumable tool (FSW tool) through the welding joint.
FSW consists mainly of three phases, each of which can be described as a time period during which the welding tool and the workpiece are moved relative to each other. In the first phase, the
rotating tool is vertically displaced into the joint line (plunge period). This period is followed by
the dwell period in which the tool is held steady relative to the workpiece but still rotating. Owing
to the velocity difference between the rotating tool and the stationary workpiece, the mechanical
interaction produces heat by means of frictional work and material plastic deformation. This heat is
dissipated into the neighboring material, promoting an increase of temperature and consequent
material softening. After these two initial phases the welding operation can be initiated by moving
either the tool or the workpiece relative to each other along the joint line. Fig. 1 illustrates a
schematic representation of the FSW setup [3].
Fig. 1 Friction stir welding setup [3]
The FSW tool consists of a rotating probe (also called pin) connected to a shoulder piece, as
shown in Fig. 2. During the welding operation, the tool is moved along the butting surfaces of the
two rigidly clamped plates (workpiece), which are normally placed on a backing plate. The vertical
displacement of the tool is controlled to guarantee that the shoulder keeps contact with the top
surface of the workpiece. The heat generated by the friction effect and plastic deformation softens
the material being welded. A severe plastic deformation and flow of plasticized metal occurs when
the tool is translated along the welding direction. In this way, the material is transported from the
front of the tool to the trailing edge (where it is forged into a joint) [4].
The half-plate in which the direction of the tool rotation is the same as the welding direction
is called the advancing side, while the other is designated as retreating side. This difference can
lead to asymmetry in heat transfer, material flow and in the mechanical properties of the weld.
Fig. 2 Schematic illustration of the FSW process [4]
1.1.1. Process Parameters
The welding traverse speed ( vtrans ), the tool rotational speed ( ω ), the downward force ( F ), the tilt
angle of the tool and the tool design are the main variables usually used to control the FSW process
[4]. The rotation of the tool results in stirring of material around the tool probe while the translation
of the tool moves the stirred material from the front to the back of the probe. Axial pressure on the
tool also affects the quality of the weld. It means that very high pressures lead to overheating and
thinning of the joint, whereas very low pressures lead to insufficient heating and voids. The tilt
angle of the tool, measured with respect to the workpiece surface, is also an important parameter,
especially to help produce welds with “smooth” tool shoulders [5].
As mentioned before, tool design influences heat generation, plastic flow, the power required
to perform FSW and the uniformity of the welded joint. Generally, two tool surfaces are needed to
perform the heating and joining processes in FSW. The shoulder surface is the area where the
majority of the heat by friction is generated. This is valid for relatively thin plates, otherwise the
probe surface is the area where the majority of the heat is generated. Fig. 3 presents a schematic
example of an FSW tool with conical shoulder and threaded probe. In this case, the conical tool
shoulder helps to establish a pressure under the shoulder, but also operates as an escape volume for
the material displaced by the probe due to the plunge action. As the probe tip must not penetrate the
workpiece or damage the backing plate, in all tool designs the probe height is limited by the
workpiece thickness [3].
Fig. 3 FSW tool with a conical shoulder and threaded probe [3]
1.1.2. Weld Microstructure
FSW involves complex interactions between simultaneous thermomechanical processes. These
interactions affect the heating and cooling rates, plastic deformation and flow, dynamic
recrystallization phenomena and the mechanical integrity of the joint [4]. The thermomechanical
process involved under the tool results in different microstructural regions (see Fig. 4). Some
microstructural regions are common to all forms of welding, while others are exclusive of FSW [5].
The stir zone (also called nugget) is a region of deeply deformed material that
corresponds approximately to the location of the probe during welding. The grains
within the nugget are often an order of magnitude smaller than the grains in the base
material.
The thermomechanically affected zone (TMAZ) occurs on either side of the stir
zone. The strain and temperature levels attained are lower and the effect of welding
on the material microstructure is negligible.
The heat affected zone (HAZ) is common to all welding processes. This region is
subjected to a thermal cycle but it is not deformed during welding.
Fig. 4 Different microstructural regions in a transverse cross section of FSW [5]
1.2. Numerical Modeling
Several aspects of the FSW process are still poorly understood and require further study. Many
experimental investigations have already been conducted to adjust input FSW parameters (tool
speed, feed rate and tool depth), contrary to numerical investigations, which have been scarcely
used for these purposes. Computational tools could be helpful to better understand and visualize the
influence of input parameters on the FSW process. Visualization and analysis of the material flow, temperature field, stresses and strains involved during the FSW process can be obtained more easily from simulation results than from experimental ones. Therefore, in order to attain the best weld
properties, simulations can help to adjust and optimize the process parameters and tool design [5].
One of the main research topics in FSW is the evaluation of the temperature field [6].
Although the temperatures involved in the process are lower than the melting points of the weld
materials, they are high enough to promote phase transformations. Thus, it is very important to
know the time-temperature history of the welds. Usually, FSW temperature is measured using
thermocouples [7-8]. However, the process of measuring temperature variations in the nugget zone
using the technique mentioned above is a very difficult task. Numerical methods can be very
efficient and convenient for this study and in fact, along the last few years, they have been used in
the field of FSW [9]. Riahi and Nazari present numerical results indicating that the highest temperature gradient (for an aluminum alloy) occurs in the region under the shoulder [10].
In the process modeling, it is essential to keep the goals of the model in view and at the same
time it is also important to adopt an appropriate level of complexity. In this sense, both analytical
and numerical methods have a role to play [11]. Usually, two types of process modeling techniques
are adopted: fluid dynamics (simulation of material flow and temperature distribution) and solid
mechanics (simulation of temperature distribution, stress and strain). Both solid and fluid modeling
techniques involve non-linear phenomena belonging to the three classic types: geometric, material
or contact nonlinearity.
The simulation of material flow during FSW has been modeled using computational fluid
dynamics (CFD) formulations. In this scenario, the material is analyzed as a viscous fluid flowing
across an Eulerian mesh and interacting with a rotating tool [12]. Other authors have also used a
CFD approach to develop a global thermal model in which the heat flow model includes
parameters related with the shear material and friction phenomenon [13]. One of the major
disadvantages of CFD models has to do with the definition of the material properties (residual
stresses cannot be predicted) [7].
Solid mechanics models require the use of Lagrangian formulation due to the high
deformation levels. However, the high gradient values of the state variables near to the probe and
the thermomechanical coupling imply a large number of degrees of freedom in FSW modeling,
which is costly in terms of CPU time [14]. Recent research demonstrated that the computational
time can be reduced by recurring to high performance computing (HPC) techniques [15].
Nevertheless, in order to face the long computational times associated to the simulation of the FSW
process, the adaptive arbitrary Lagrangian Eulerian (ALE) formulation has been implemented by
some authors [16-17]. Van der Stelt et al. use an ALE formulation to simulate the material flow
around the pin during FSW process [16]. These models of the process can predict the role played
by the tool plunge depth on the formation of flashes, voids or tunnel defects, and the influence of
threads on the material flow, temperature field and welding forces [14]. Lagrangian, Eulerian and
ALE approaches have been used to numerically simulate the FSW process, using software such as
FORGE3 and THERCAST [18], ABAQUS [10], DiekA [16], WELDSIM [19] and SAMCEF [20].
2. HEAT GENERATION
The heat generated during the welding process is equivalent to the power input introduced into the
weld by the tool minus some losses due to microstructural effects [21]. The peripheral speed of the
shoulder and probe is much higher than the translational speed (the tool rotates at high speeds).
FSW primarily uses viscous dissipation in the workpiece material, driven by high shear stresses at
the tool/workpiece interface. Therefore, the heat generation modeling requires some representation
of the behaviour of the contact interface, together with the viscous dissipation behaviour of the
material. However, the boundary conditions in FSW are complex to define. Material at the
interface may either stick to the tool (it has the same local velocity as the tool) or it may slip (the
velocity may be lower) [11]. An analytical model for heat generation in FSW based on different
assumptions in terms of contact condition between the rotating tool surface and the weld piece was
developed by Schmidt et al. [3]. This model will be discussed in the following sections.
2.1. Contact Condition
When modeling the FSW process, the contact condition is a critical part of the numerical model
[22]. Usually, the Coulomb friction law is applied to describe the shear forces between the tool
surface and the workpiece. In general, the law estimates the contact shear stress as:
$$\tau_{\mathrm{friction}} = \mu p, \qquad (1)$$
where μ is the friction coefficient and p is the contact pressure. Analyzing the contact condition
of two infinitesimal surface segments in contact, Coulomb’s law predicts the mutual motion
between the two segments (whether they stick or slide). The normal interpretation of Coulomb’s
law is based on rigid contact pairs, without taking into account the internal stress. However, this is
not sufficiently representative of the FSW process. Thus, three different contact states are considered at the tool/workpiece interface, and they can be categorized according to the definition
presented by Schmidt et al. [3].
2.1.1. Sliding Condition
If the contact shear stress is smaller than the internal matrix (material to be welded) yield shear
stress, the matrix segment volume shears slightly to a stationary elastic deformation (sliding
condition).
2.1.2. Sticking Condition
When the friction shear stress exceeds the yield shear stress of the underlying matrix, the matrix
surface will stick to the moving tool surface segment. In this case, the matrix segment will
accelerate along the tool surface (receiving the tool velocity), until the equilibrium state is
established between the contact shear stress and the internal matrix shear stress. At this point, the
stationary full sticking condition is fulfilled. In conventional Coulomb’s friction law terms, the
static friction coefficient relates the reactive stresses between the surfaces.
2.1.3. Partial Sliding/ Sticking Condition
The last possible state between the sticking and sliding condition is a mixed state of both. In this
case, the matrix segment accelerates to a velocity less than the tool surface velocity. The
equilibrium is established when the contact shear stress equals the internal yield shear stress due to
a quasi-stationary plastic deformation rate (partial sliding/sticking condition). In summary, the sliding
condition promotes heat generation by means of friction and the sticking condition promotes heat
generation by means of plastic deformation. In practice, we have these two conditions together
(partial sliding/sticking condition).
2.1.4. Contact State Variable
It is convenient to define a contact state variable δ , which relates the velocity of the contact
workpiece surface with the velocity of the tool surface. This parameter is a dimensionless slip rate
defined by Schmidt et al. [3] as:
δ = v_workpiece / v_tool = (v_tool − γ̇) / v_tool = 1 − γ̇ / v_tool ,  with γ̇ = v_tool − v_workpiece    (2)
where v_tool is the velocity of the tool, calculated from

v_tool = ω r    (3)

(ω being the angular velocity and r the radius), v_workpiece is the local velocity of the matrix point at the tool/workpiece contact interface and γ̇ is the slip rate. Furthermore, the assumption that the welding transverse speed does not influence the slip rate and/or the deformation rate implies that all workpiece velocities can be considered tangential to the rotation axis. It is then possible to define δ as:
δ = ω_workpiece / ω_tool    (4)
where ωworkpiece is the angular rotation speed of the contact matrix layer and ωtool is the angular
rotation speed of the tool. Ulysse uses this relationship to prescribe a slip boundary condition in his
CFD models of the material flow in FSW [23]. The relationship between the different contact
conditions is summarized in Table I.
Table I: Definition of contact condition, velocity/shear relationship and state variable (ε̇ = strain rate) [24]

Contact condition  | Matrix velocity          | Tool velocity | Contact shear stress       | State variable
Sticking           | v_matrix = v_tool        | v_tool = ωr   | τ_contact ≥ τ_yield(ε̇)    | δ = 1
Sticking/sliding   | 0 < v_matrix < v_tool    | v_tool = ωr   | τ_contact = τ_yield(ε̇)    | 0 < δ < 1
Sliding            | v_matrix = 0             | v_tool = ωr   | τ_contact < τ_yield(ε̇)    | δ = 0

2.2.
Analytical Estimation of Heat Generation
During the FSW process, heat is generated close to the contact surfaces, which can have complex
geometries depending on the tool geometry. For the analytical model, however, a simplified tool
design is assumed, with a conical or horizontal shoulder surface, a vertical cylindrical probe
surface and a horizontal probe tip surface. The conical shoulder surface is characterized by the
cone angle α, which in the case of a flat shoulder assumes the value zero. The simplified tool
design is presented in Fig. 5, where R_shoulder is the radius of the shoulder, and R_probe and H_probe are the
probe radius and height, respectively. Fig. 5 also represents the heat generated under the tool
shoulder Q1, at the tool probe side Q2, and at the tool probe tip Q3. In this way, the total heat
generation can be calculated as Q_total = Q1 + Q2 + Q3. The heat generated in each contact surface can
then be computed [24]:

dQ = ω dM = ω r dF = ω r τ_contact dA    (5)

where M is the moment, F is the force, A is the contact area and r is the radial (cylindrical) coordinate.
Fig. 5 Heat generation contributions represented in a simplified FSW tool [17]
2.2.1. Heat Generation
The following heat generation derivations are analytical estimations of the heat generated at
the contact interface between a rotating FSW tool and a stationary weld piece matrix. The
mechanical power due to the traverse movement is not considered, as this quantity is negligible
compared to the rotational power. A given surface of the tool in contact with the matrix is
characterized by its position and orientation relative to the rotation axis of the tool, as shown in Fig.
6.
Fig. 6 Surface orientations and infinitesimal segment areas: (a) Horizontal; (b) Vertical; (c)
Conical/tilted [3]
2.2.1.1. Heat Generation from the Shoulder
The shoulder surface of a modern FSW tool is in most cases concave or conically shaped. Previous
analytical expressions for heat generation include a flat circular shoulder, in some cases omitting
the contribution from the probe [25]. Schmidt et al. extend the previous expressions so that the
conical shoulder and cylindrical probe surfaces are included in the analytical expressions [24]. This
analytical model for the heat generation phenomena does not include non-uniform pressure
distribution, strain rate dependent yield shear stresses and the material flow driven by threads or
flutes. Integration over the shoulder area from Rprobe to Rshoulder using equation (5) gives the
shoulder heat generation:
Q1 = ∫₀^{2π} ∫_{R_probe}^{R_shoulder} ω τ_contact r² (1 + tan α) dr dθ = (2/3) π ω τ_contact (R³_shoulder − R³_probe)(1 + tan α)    (6)
2.2.1.2. Heat Generation from the Probe
The heat generated at the probe has two contributions: Q2 from the side surface and Q3 from the
tip surface. Integrating over the probe side area:
Q2 = ∫₀^{2π} ∫₀^{H_probe} ω τ_contact R²_probe dz dθ = 2 π ω τ_contact R²_probe H_probe    (7)
Integrating over the probe tip surface, assuming a flat tip, we have:

Q3 = ∫₀^{2π} ∫₀^{R_probe} ω τ_contact r² dr dθ = (2/3) π ω τ_contact R³_probe    (8)
The three contributions are combined to get the total heat generation estimate:
Q_total = Q1 + Q2 + Q3 = (2/3) π ω τ_contact ((R³_shoulder − R³_probe)(1 + tan α) + R³_probe + 3 R²_probe H_probe)    (9)
In the case of a flat shoulder, the heat generation expression simplifies to:
Q_total = (2/3) π ω τ_contact (R³_shoulder + 3 R²_probe H_probe)    (10)
This last expression correlates with the results obtained by Khandkar et al. [26].
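For readers who want to evaluate equations (6)–(10) numerically, the following short Python sketch (not part of the original study; all numerical inputs are illustrative assumptions) computes the three analytical contributions and the total heat generation for a constant contact shear stress.

import math

def heat_generation(omega, tau_contact, r_shoulder, r_probe, h_probe, alpha):
    """Analytical heat contributions of eqs. (6)-(9); SI units, alpha in radians."""
    q1 = (2.0 / 3.0) * math.pi * omega * tau_contact \
         * (r_shoulder**3 - r_probe**3) * (1.0 + math.tan(alpha))    # shoulder, eq. (6)
    q2 = 2.0 * math.pi * omega * tau_contact * r_probe**2 * h_probe  # probe side, eq. (7)
    q3 = (2.0 / 3.0) * math.pi * omega * tau_contact * r_probe**3    # probe tip, eq. (8)
    return q1, q2, q3, q1 + q2 + q3                                  # total, eq. (9)

# Illustrative values (assumed, not from the source): 400 rpm, tau_contact = 40 MPa,
# R_shoulder = 9 mm, R_probe = 3 mm, H_probe = 4 mm, alpha = 10 degrees.
omega = 400.0 * 2.0 * math.pi / 60.0  # rad/s
q1, q2, q3, q_total = heat_generation(omega, 40e6, 9e-3, 3e-3, 4e-3, math.radians(10.0))
print(f"Q1={q1:.0f} W, Q2={q2:.0f} W, Q3={q3:.0f} W, Qtotal={q_total:.0f} W")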
2.2.2. Influence of contact status: sticking and sliding
Equation (9) is based on the general assumption of a constant contact shear stress, but the
mechanisms behind the contact shear stress vary, depending on whether the material verifies the
sliding or sticking condition. If the sticking interface condition is assumed, the matrix closest to the
tool surface sticks to it. The layer between the stationary material points and the material moving
with the tool has to accommodate the velocity difference by shearing. The contact shear stress is
then:
τ_contact = τ_yield = σ_yield / √3    (11)
This gives a modified expression of (9), assuming the sticking condition:
Q_total,sticking = (2/3) π ω (σ_yield / √3) ((R³_shoulder − R³_probe)(1 + tan α) + R³_probe + 3 R²_probe H_probe)    (12)
Assuming a friction interface condition, where the tool surface and the weld material are
sliding against each other, the choice of Coulomb’s friction law to describe the shear stress
estimates the critical friction stress necessary for a sliding condition:
τ_contact = τ_friction = μ p    (13)
Thus, for the sliding condition, the total heat generation is given by:
Q_total,sliding = (2/3) π ω μ p ((R³_shoulder − R³_probe)(1 + tan α) + R³_probe + 3 R²_probe H_probe)    (14)
The analytical solution for the heat generation under the partial sliding/sticking condition is
simply a weighted combination of the two solutions. From
the partial sliding/sticking condition follows that the slip rate between the surfaces is a fraction of
ωr , lowering the heat generation from sliding friction. This is counterbalanced by the additional
plastic dissipation due to material deformation. This enables a linear combination of the
expressions for sliding and sticking:
Q_total = δ Q_total,sticking + (1 − δ) Q_total,sliding = (2/3) π ω (δ τ_yield + (1 − δ) μ p) ((R³_shoulder − R³_probe)(1 + tan α) + R³_probe + 3 R²_probe H_probe)    (15)
where δ is the contact state variable (dimensionless slip rate), τ yield is the material yield shear
stress at welding temperature, ω is the angular rotation speed and α is the cone angle. This
expression (15) can be used to estimate the heat generation for 0 ≤ δ ≤ 1, corresponding to sliding
when δ = 0, sticking when δ = 1 and partial sliding/sticking when 0 < δ < 1.
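As a small illustration of how equation (15) interpolates between the sticking and sliding estimates, the Python sketch below (ours; the material and process values are assumptions chosen only for demonstration) evaluates Q_total for a given contact state variable δ.

import math

def q_total_mixed(delta, tau_yield, mu, p, omega, r_shoulder, r_probe, h_probe, alpha):
    """Eq. (15): linear combination of the sticking and sliding heat generation estimates."""
    geometry = (r_shoulder**3 - r_probe**3) * (1.0 + math.tan(alpha)) \
               + r_probe**3 + 3.0 * r_probe**2 * h_probe
    effective_shear = delta * tau_yield + (1.0 - delta) * mu * p
    return (2.0 / 3.0) * math.pi * omega * effective_shear * geometry

# Assumed example: delta = 0.3, tau_yield = 40 MPa, mu = 0.4, p = 70 MPa, 400 rpm
omega = 400.0 * 2.0 * math.pi / 60.0  # rad/s
print(f"{q_total_mixed(0.3, 40e6, 0.4, 70e6, omega, 9e-3, 3e-3, 4e-3, math.radians(10.0)):.0f} W")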
2.2.3. Heat Generation Ratios
Based on the geometry of the tool and independently from the contact condition, the ratios of heat
generation are as follows:
f_shoulder = Q1 / Q_total = (R³_shoulder − R³_probe)(1 + tan α) / [(R³_shoulder − R³_probe)(1 + tan α) + R³_probe + 3 R²_probe H_probe] = 0.86    (16)

f_probe,side = Q2 / Q_total = 3 R²_probe H_probe / [(R³_shoulder − R³_probe)(1 + tan α) + R³_probe + 3 R²_probe H_probe] = 0.11    (17)

f_probe,tip = Q3 / Q_total = R³_probe / [(R³_shoulder − R³_probe)(1 + tan α) + R³_probe + 3 R²_probe H_probe] = 0.03    (18)

where the considered tool dimensions are R_shoulder = 9 mm, R_probe = 3 mm, H_probe = 4 mm and
α = 10°. This indicates that the heat generation from the probe is negligible for a thin plate, but it
is typically 10% or more for a thick plate [11]. Fig. 7 presents the evolution of the heat generation
ratios of the shoulder and probe as a function of the probe radius. Also, in Fig. 7 the influence of
the Rshoulder / Rprobe ratio in the heat generation ratio is highlighted.
Fig. 7 Heat fraction generated by the shoulder and probe as a function of the probe radius (R_shoulder = 9 mm, H_probe = 4 mm and α = 10°)
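As a quick numerical check of the ratios in equations (16)–(18), the short Python sketch below (ours, not from the source) reproduces the quoted fractions for the stated tool dimensions; the ratios depend only on the tool geometry, not on ω or on the contact condition.

import math

def heat_fractions(r_shoulder, r_probe, h_probe, alpha):
    """Heat generation ratios of eqs. (16)-(18); geometry only, alpha in radians."""
    shoulder = (r_shoulder**3 - r_probe**3) * (1.0 + math.tan(alpha))
    probe_side = 3.0 * r_probe**2 * h_probe
    probe_tip = r_probe**3
    total = shoulder + probe_side + probe_tip
    return shoulder / total, probe_side / total, probe_tip / total

# Tool dimensions quoted in the text: R_shoulder = 9 mm, R_probe = 3 mm, H_probe = 4 mm, alpha = 10 deg
f_sh, f_side, f_tip = heat_fractions(9.0, 3.0, 4.0, math.radians(10.0))
print(f"f_shoulder={f_sh:.2f}, f_probe_side={f_side:.2f}, f_probe_tip={f_tip:.2f}")
# prints approximately 0.86, 0.11, 0.03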
The analytical heat generation estimate correlates with the experimental heat generation,
assuming either a sliding or a sticking condition. In order to estimate the experimental heat
generation for the sliding condition, a friction coefficient that lies in the reasonable range of known
metal to metal contact values is used. Assuming the sticking condition, a yield shear stress, which
is descriptive of the weld piece material at elevated temperatures, is used to correlate the values
[24].
2.3.
Heat Generation Mechanism
It is important to mention that the nature of the tool interface contact condition is not entirely clear,
particularly for the shoulder interface. Frigaard et al. developed a numerical model for
FSW based on the finite difference method [27]. They assumed that heat is generated at the tool
shoulder due to frictional heating and the friction coefficient is adjusted so that the calculated peak
temperature did not exceed the melting temperature. Zahedul et al. concluded that a purely friction
heating model is probably not adequate (a low value for the friction coefficient was used) [28].
The high temperature values measured by Tang et al. near the pin suggest that heat is generated
mainly through plastic deformation during the FSW process [29]. Colegrove et al. assume that the
material is completely sticking to the tool [30]. The heating volumetric region where plastic
dissipation occurs is the thermomechanically heat affected zone (TMAZ). The corresponding
volume heat sources are equal to:
q_v = β ε̇ᵖ_ij σ_ij ,  i, j = 1, 2, 3    (19)

where ε̇ᵖ_ij and σ_ij are the components of the plastic strain rate tensor and the Cauchy stress tensor,
respectively. Also in (19), β is a parameter, known as the Taylor-Quinney coefficient, ranging
typically between 0.8 and 0.99 [31].
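A minimal sketch (ours, with assumed illustrative tensors) of the volumetric dissipation of equation (19), computed as the double contraction of the plastic strain-rate and stress tensors scaled by the Taylor–Quinney coefficient:

def plastic_dissipation(strain_rate_tensor, stress_tensor, beta=0.9):
    """Eq. (19): q_v = beta * sum_ij eps_dot^p_ij * sigma_ij, tensors given as 3x3 nested lists."""
    return beta * sum(strain_rate_tensor[i][j] * stress_tensor[i][j]
                      for i in range(3) for j in range(3))

# Assumed illustrative tensors (s^-1 and Pa): a pure shear strain rate and the matching shear stress
eps_p = [[0.0, 5.0, 0.0], [5.0, 0.0, 0.0], [0.0, 0.0, 0.0]]
sigma = [[0.0, 4.0e7, 0.0], [4.0e7, 0.0, 0.0], [0.0, 0.0, 0.0]]
print(f"{plastic_dissipation(eps_p, sigma):.2e} W/m^3")  # 3.60e+08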
2.3.1. Surface and Volume Heat Contributions
The heat input can be divided into surface and volume heat contributions due to frictional or
viscous (plastic dissipation) heating, respectively. Simar et al. introduce a parameter ( γ ) that
exposes the relative importance of both contributions [32]:
Q_V = γ Q    (20)

Q_S = (1 − γ) Q    (21)
where QV is the volume heat contribution and QS is the total tool surface heat contribution. For
thermal computational models which take into account the material fluid flow, Simar et al.
concluded that a value of γ = 1 produces the best agreement with experimental thermal data [32].
2.4.
Heat Input Estimation using the Torque
Modern FSW equipment usually outputs the working torque as well as the working angular
velocity. The power spent in the translation movement, which is approximately 1% of the total
value, is typically neglected in the total heat input estimate [11, 30]. Therefore, the power
introduced by the tool (input power P) can be obtained experimentally from the weld moment and
angular rotation speed [21, 32]:

P = M ω + F_trans v_trans ≈ M ω    (22)

(the term F_trans v_trans being negligible), where ω is the tool rotational speed (rad/s), M is the measured torque (N·m), F_trans is the traverse
force (N) and v_trans (m/s) is the traverse velocity. Therefore, the heat input near the interface is
given by:
Q = P η    (23)
where η is the fraction of power generated by the tool that is directly converted into heat in the
weld material. Nandan et al. refer to this as the power efficiency factor [33]. This value is usually
high, between 0.9 and 1.0, and it is calculated based on the heat loss into the tool, as will be shown
in the next section.
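The torque-based estimate of equations (22)–(23) reduces to a one-line computation; the sketch below is a hypothetical illustration (the torque, speed and efficiency values are assumptions, not measurements reported in the source).

import math

def heat_input_from_torque(torque_nm, rpm, eta=0.95):
    """Eqs. (22)-(23): P = M*omega (translational term neglected), Q = eta*P."""
    omega = rpm * 2.0 * math.pi / 60.0  # tool rotational speed in rad/s
    power = torque_nm * omega           # eq. (22)
    return eta * power                  # eq. (23)

# Assumed example: measured torque of 30 N.m at 500 rpm, power efficiency eta = 0.95
print(f"Q = {heat_input_from_torque(30.0, 500.0):.0f} W")  # about 1492 W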
3.
HEAT DISSIPATION
Heat generation and heat dissipation must be adjusted and balanced to obtain an agreement with
experimental temperature values [34]. As mentioned before, the heat in FSW is generated by the
frictional effect and by plastic deformation associated with material stirring. The heat is dissipated
into the workpiece leading to the TMAZ and the HAZ, depending on the thermal conductivity
coefficient of the base material. The heat loss occurs by means of conduction to the tool and the
backing plate, and also by means of convective heat loss to the surrounding atmosphere. The heat
lost through convection and radiation to the surroundings is considered negligible [33].
3.1.
Heat Loss into the Tool
Only a small fraction of the heat is lost into the tool itself. This value may be estimated from a
simple heat flow model for the tool. Measuring the temperature at two locations along the tool axis
allows a simple evaluation of the heat losses into the tool. The value of the heat loss into the tool
has been studied using this approach, leading to similar conclusions. After modeling the
temperature distributions in the tool and comparing it with experimental results, various authors
conclude that the heat loss is about 5% [32, 24].
3.2.
Heat Loss by the Top Surface of the Workpiece
The boundary condition for heat exchange between the top surface of the workpiece and the
surroundings, beyond the shoulder, involves considering both the convective and the radiative heat
transfer, which can be estimated using the following differential equation [33]:
−k (∂T/∂z)|_top = σ ε (T⁴ − T_a⁴) + h (T − T_a)    (24)

where σ is the Stefan–Boltzmann constant, ε is the emissivity, T_a is the ambient temperature and
h is the heat transfer coefficient at the top surface.
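For reference, a minimal sketch (ours) of the combined radiative and convective loss term on the right-hand side of equation (24); the emissivity, heat transfer coefficient and temperatures are assumed illustrative values.

STEFAN_BOLTZMANN = 5.670e-8  # W m^-2 K^-4

def top_surface_heat_flux(t_surface, t_ambient, emissivity=0.3, h_top=15.0):
    """Right-hand side of eq. (24): radiative plus convective losses, in W/m^2."""
    radiative = STEFAN_BOLTZMANN * emissivity * (t_surface**4 - t_ambient**4)
    convective = h_top * (t_surface - t_ambient)
    return radiative + convective

# Example with assumed values: workpiece top surface at 700 K, ambient at 300 K
print(f"{top_surface_heat_flux(700.0, 300.0):.0f} W/m^2")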
3.3.
Heat Loss by the Bottom Surface of the Workpiece
Most of the FSW process heat is dissipated through the backing plate due to the contact with the
clamps. The heat loss through the contact interface between the bottom of the workpiece and the
backing plate has been introduced in numerical models using different approaches [8]. In fact, the
contact conditions between the workpiece and the backing plate must be carefully described at the
moment of the modeling process. Thus various options can be considered:
No backing plate. The lower surface of the workpiece is assumed to be adiabatic;
Perfect contact between workpiece and backing plate;
Perfect contact under the tool region only. This option is suggested by experimental
observations: the high pressures under the tool lead to a visible indentation of the upper
surface of the backing plate along a width approximately equal to the diameter of the tool
shoulder (Fig. 8);
Introduction of a value for the convection coefficient between the workpiece and the
backing plate.
Ulysse did not include the backing plate in the model, using the assumption of simply
adiabatic conditions at the workpiece/backing interface [23]. A reasonable agreement between
predicted and measured temperatures was attained, although measured temperatures tended to be
consistently over-predicted by the model. Other authors consider the presence of a backing plate in
the model and simulate the contact condition between the workpiece and the backing plate.
Colegrove et al. proposed a contact conductance of 1000 W m⁻² K⁻¹ between the workpiece and the
backing plate, except under the tool region where a perfect contact is modeled [35].
The majority of dissipated heat flows from the workpiece to the backing plate at the interface
under the tool. Owing to the applied pressure, the conductance gap in this location is smaller than
the conductance gap to the surrounding areas, thereby locally maximizing the heat flow.
The use of a backing spar, as opposed to a full backing plate, reduces the number of equations
to be solved and shortens the computer processing time, while still capturing the essential nature of
heat flow between the workpiece and backing plate [2] (Fig. 8). The width of the backing spar is
usually equal to the tool diameter, and the height varies within the thicknesses range of the backing
plate. Khandkar et al. use a 12 mm backing plate [26], Hamilton et al. assume 25.4 mm [2], while
Colegrove et al. adopt a 60 mm backing plate [13]. It can be concluded that the larger the thickness
of the backing plate, the greater the heat dissipation.
Zahedul et al. propose a value for the convection coefficient between the workpiece and the
backing plate by comparing the results of their 3D finite element models with the experimental
results [28]. They compare four different bottom convection coefficients and conclude that too high
a value for this coefficient leads to an underestimation of the maximum temperature.
Fig. 8 Employing a backing spar to model the contact condition between workpiece and backing
plate [2]
4.
METAL FLOW
Material flow during FSW is quite complex; it depends on the tool geometry, process parameters
and the material to be welded. It is of practical importance to understand the material flow
characteristics for optimal tool design and to obtain high structural efficiency welds [36]. Modeling
of the metal flow in FSW is a challenging problem, but this is a fundamental task to understanding
and developing the process. Flow models should be able to simultaneously capture the thermal and
mechanical aspects of a given problem in adequate detail to address the following topics:
Flow visualization, including the flow of dissimilar metals;
Evaluation of the heat flow that governs the temperature field;
Tool design to optimize tool profiling for different materials and thicknesses;
Susceptibility to formation of defects.
The material flow around the probe is one of the main factors determining the
success of FSW [36]. Some studies show that the flow occurs predominantly in the plate plane.
Hence, various authors have first analyzed the 2-D flow around the probe at midthickness rather
than the full 3-D flow. This produces significant benefits in computational efficiency [37].
Schneider et al. have based their physical model of the metal flow in the FSW process in
terms of the kinematics describing the metal motion [38]. This approach has been followed by
other authors. Fig. 9 illustrates the decomposition of the FSW process into three incompressible
flow fields, combined to create two distinct currents. In this model, a rigid body rotation field
imposed by the axial rotation of the probe tool is modified by a superimposed ring vortex field
encircling the probe imposed by the pitch of the weld probe threads. These two flow fields, bound
by a shear zone, are uniformly translated down the length of the weld panel [36].
Fig. 9 Schematic representation of the three incompressible flow fields of the friction stir weld: (a)
rotation; (b) translation; (c) ring vortex; (d) summation of three flow fields [39]
A number of approaches have been used to visualize the material flow pattern in FSW, using
marker/tracer techniques or the welding of dissimilar alloys. In addition, some
computational methods including CFD and finite element analysis (FEA) have been also used to
model the material flow [36].
4.1.
Numerical Flow Modeling
Numerical FSW flow modeling can be based on analyses and techniques used for other processes,
such as friction welding, extrusion, machining, forging, rolling and ballistic impact [36]. As for
heat flow analyses, numerical flow models can use either an Eulerian or Lagrangian formulation
for the mesh; another solution is the combination of both (hybrid Lagrangian–Eulerian formulations).
The CFD analysis of FSW ranges from 2-D flow around a cylindrical pin to full 3-D analysis
of flow around a profiled pin [30]. One consequence of using CFD analysis rather than solid
mechanics models is that some mechanical effects, for example the effect of varying the downforce,
are excluded from the scope of the analysis. These models cannot predict absolute
forces because elasticity is neglected. Also, since the deforming material must fill
the available space between the solid boundaries, free surfaces also present difficulties in CFD.
One difficulty in the numerical analysis is the steep gradient in flow velocity near the tool. In order
to solve this problem, most analyses divide the mesh into zones, as shown in Fig. 10. The flow near
the tool is predominantly rotational, thus the mesh in this region rotates with the tool. The rotating
zone is made large enough to contain the entire deformation zone and the mesh size is much finer
in that zone [30].
A 3-D elastic-plastic finite element analysis, using an ALE formulation, provides results
with interesting physical insight. However, such analyses present very long computation times, making them
unlikely to be used routinely as a design tool [36]. Note that 3-D analysis is able to handle some of
the process complexities: a concave shoulder, tool tilt, and threaded pin profiles.
Fig. 10 Mesh definition for computational fluid dynamics analysis of friction stir welding [30]
4.1.1. Material Constitutive Behaviour for Flow Modeling
The most common approach to model steady-state hot flow stress is the Sellars-Tegart law,
combining the dependence on temperature T and strain rate ε̇ via the Zener–Hollomon parameter:

Z = ε̇ exp(Q / (R T)) = A (sinh(α σ))^n    (25)

where Q is an effective activation energy, R is the gas constant and α, A and n are material
parameters. Other authors have used an alternative constitutive response developed by Johnson and
Cook for modeling ballistic impacts [40]:
σ_y = (A + B (ε^pl)^n) (1 + C ln(ε̇^pl / ε̇₀)) (1 − ((T − T_ref) / (T_melt − T_ref))^m)    (26)

where σ_y is the yield stress, ε^pl the effective plastic strain, ε̇^pl the effective plastic strain rate, ε̇₀
the normalizing strain rate, and A, B, C, n, T_melt, T_ref and m are material/test parameters. Mishra and
Ma reported that the general flow pattern predicted is rather insensitive to the constitutive law due
to the inherent kinematic constraint of the process [36]. However, the heat generation, temperature,
and flow stress near the tool and the loading on the tool will depend closely on the material law.
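To illustrate how the two constitutive descriptions of equations (25) and (26) are evaluated in practice, the following Python sketch computes the Zener–Hollomon parameter and inverts the Sellars–Tegart law for the flow stress, and evaluates the Johnson–Cook yield stress; every material constant shown is a placeholder assumption, not a value taken from the source.

import math

R_GAS = 8.314  # universal gas constant, J mol^-1 K^-1

def sellars_tegart_stress(strain_rate, temperature, q_act, a_const, alpha, n_exp):
    """Eq. (25): Z = eps_dot * exp(Q/(R*T)) = A * (sinh(alpha*sigma))^n, solved for sigma."""
    z = strain_rate * math.exp(q_act / (R_GAS * temperature))
    return math.asinh((z / a_const) ** (1.0 / n_exp)) / alpha

def johnson_cook_stress(eps_pl, eps_rate, temperature,
                        a, b, c, n, m, t_ref, t_melt, eps0=1.0):
    """Eq. (26): strain hardening x strain-rate sensitivity x thermal softening."""
    t_star = (temperature - t_ref) / (t_melt - t_ref)
    return (a + b * eps_pl**n) * (1.0 + c * math.log(eps_rate / eps0)) * (1.0 - t_star**m)

# Placeholder constants for illustration only (assumptions):
print(sellars_tegart_stress(10.0, 700.0, q_act=1.5e5, a_const=1.0e10, alpha=5.0e-8, n_exp=5.0))
print(johnson_cook_stress(0.5, 10.0, 700.0, a=3.0e8, b=4.0e8, c=0.01, n=0.3, m=1.0,
                          t_ref=298.0, t_melt=900.0))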
5.
NUMERICAL SIMULATION OF FSW
A correct model of the FSW process should avoid any unnecessary assumptions. A list of
requirements for a FSW analysis code includes the following:
Rotational boundary condition;
Frictional contact algorithms;
Support very high levels of deformation;
Elastic-Plastic or Elastic-Viscoplastic material models;
Support for complex geometry.
These requirements constitute the minimum attributes required for an algorithm to be applied to the
FSW process analysis [41].
A 3-D numerical simulation of FSW, aimed at studying the impact of the tool moving speed on the heat
distribution as well as on the residual stress, is presented by Riahi and Nazari [10]. Another
interesting study presents a 3-D thermomechanical model of FSW based on CFD analysis [14]. The
model describes the material flow around the tool during the welding operation. The base material
for this study was an AA2024 sheet with 3.2 mm of thickness. The maximum and minimum
temperature values in the workpiece (close to the tool shoulder) are shown in Fig. 11, where we can
see that the maximum temperature value decreases when the welding velocity increases. On the
other hand, the maximum temperature value increases when the tool rotational velocity increases.
Fig. 11 Extreme temperatures in the welds; (a) as a function of the welding velocity of the tool for a
tool rotational velocity equal to 400 rpm; (b) as a function of the tool rotational velocity for a
welding velocity equal to 400 mm/min [14]
The model also provides data on the process power dissipation (plastic and surface dissipation
contributions). The plastic power partition is made through the estimation of the sliding ratio in the
contact between the tool and the workpiece. The predicted and measured evolution of the power
consumed in the weld as a function of the welding parameters is presented in Fig. 12. Fig. 12 (a)
shows the repartition of the predicted power dissipation as a function of the welding velocity. It is
possible to see that although the total power generated in the weld increases with the welding
velocity, the maximum temperature value decreases.
Fig. 12 Repartition of the predicted power dissipation in the weld between plastic power and
surface power as a function of: (a) and (b) the welding velocity; (c) the tool rotational velocity [14]
6.
CONCLUSIONS
FSW modeling helps to visualize the fundamental behavior of the welded materials and allows the
influence of different weld parameters (including tool design) and boundary conditions to be analyzed
without performing costly experiments. FSW modeling is a challenging task due to its multiphysics
characteristics. The process combines heat flow, plastic deformation at high temperature, and
microstructure and property evolution. Thus, at present, the numerical simulation of the FSW process
still cannot be used to optimize the process. The increasing knowledge about the process
and the growth of computing resources may lead, perhaps in the near future, to the use of numerical simulation of
FSW to predict a good combination of the process parameters, replacing the experimental trials
currently used. This will help to promote and expand the FSW process to a wider range of
applications and users.
7.
REFERENCES
[1] Thomas WM, Nicholas ED, Needham JC, Murch MG, Templesmith P, Dawes CJ (1991)
Friction stir welding, International Patent Application No. PCT/GB92102203 and Great
Britain Patent Application No. 9125978.8
[2] Hamilton C, Dymek S, Sommers A (2008) A thermal model of friction stir welding in
aluminum alloys. Int J Mach Tools & Manuf 48(10):1120–1130. doi:
10.1016/j.ijmachtools.2008.02.001
[3] Schmidt H, Hattel J, Wert J (2004) An analytical model for the heat generation in friction stir
welding. Model Simul Mater Sci Eng 12(1):143–157. doi: 10.1088/0965-0393/12/1/013
[4] Nandan R, DebRoy T, Bhadeshia HKDH (2008) Recent advances in friction-stir welding –
Process, weldment structure and properties. Prog Mater Sci 53(6):980–1023. doi:
10.1016/j.pmatsci.2008.05.001
[5] Guerdoux S (2007) Numerical Simulation of the Friction Stir Welding Process. Dissertation,
Mines ParisTech
[6] Schmidt H, Hattel J (2008) Thermal modelling of friction stir welding. Scr Mater 58(5):332–
337. doi: 10.1016/j.scriptamat.2007.10.008
[7] Chen CM, Kovacevic R. (2003) Finite element modeling of friction stir welding – thermal
and thermomechanical analysis. Int J Mach Tools & Manuf 43(13):1319–1326. doi:
10.1016/S0890-6955(03)00158-5
[8] Simar A, Lecomte-Beckers J, Pardoen T, Meester B (2006) Effect of boundary conditions
and heat source distribution on temperature distribution in friction stir welding. Sci Tech
Weld Join 11(2):170–177. doi: 10.1179/174329306X84409
[9] Zhang Z (2008) Comparison of two contact models in the simulation of friction stir welding
process. J Mater Sci 43(17):5867–5877. doi: 10.1007/s10853-008-2865-x
[10] Riahi M, Nazari H (2011) Analysis of transient temperature and residual thermal stresses in
friction stir welding of aluminum alloy 6061-T6 via numerical simulation. Int J Adv
Manuf Technol 55:143–152. doi: 10.1007/s00170-010-3038-z
[11] Mishra RS, Mahoney MW (2007) Friction Stir Welding and Processing. Asm International
[12] Guerdoux S, Fourment L (2009) A 3D numerical simulation of different phases of friction
stir welding. Model Simul Mater Sci Eng 17(7):1–32. doi: 10.1088/0965-0393/17/7/075001
[13] Colegrove PA, Shercliff HR (2003) Experimental and numerical analysis of aluminium
alloy 7075-T7351 friction stir welds. Sci Tech Weld Join 8(5):360–368. doi:
10.1179/136217103225005534
[14] Jacquin D, de Meester B, Simar A, Deloison D, Montheillet F, Desrayaud C (2011) A
simple Eulerian thermomechanical modeling of friction stir welding. J Mater Process
Tech 211(1):57–65. doi: 10.1016/j.jmatprotec.2010.08.016
[15] Menezes LF, Neto DM, Oliveira MC, Alves JL (2011) Improving Computational
Performance through HPC Techniques: case study using DD3IMP in-house code. The
14th International ESAFORM Conference on Material Forming, pp 1220–1225, Belfast,
United Kingdom. doi: 10.1063/1.3589683
[16] van der Stelt AA, Bor TC, Geijselaers HJM, Quak W, Akkerman R, Huétink J (2011)
Comparison of ALE finite element method and adaptive smoothed finite element method
for the numerical simulation of friction stir welding. The 14th International ESAFORM
Conference on Material Forming, pp 1290–1295, Belfast, United Kingdom. doi:
10.1063/1.3589694
[17] Assidi M, Fourment L (2009) Accurate 3D friction stir welding simulation tool based on
friction model calibration. Int J Mater Form 2:327–330. doi: 10.1007/s12289-009-0541-6
[18] Guerdoux S, Fourment L, Miles M, Sorensen C (2004) Numerical Simulation of the Friction
Stir Welding Process Using both Lagrangian and Arbitrary Lagrangian Eulerian
Formulations. Proceedings of the 8th International Conference on Numerical Methods in
Industrial Forming Processes, pp 1259–1264, Columbus, USA. doi: 10.1063/1.1766702
[19] Zhu XK, Chao YJ (2004) Numerical simulation of transient temperature and residual
stresses in friction stir welding of 304L stainless steel. J Mater Process Tech 146:263–
272. doi: 10.1016/j.jmatprotec.2003.10.025
[20] Paun F, Azouzi A (2004) Thermomechanical History of a Friction Stir Welded Plate;
Influence of the Mechanical Loading on the Residual Stress Distribution. NUMIFORM
2004, pp 1197–1202. doi: 10.1063/1.1766691
[21] Santiago DH, Lombera G, Santiago U (2004) Numerical modeling of welded joints by the
friction stir welding process. J Mater Res 7(4):569–574. doi: 10.1590/S1516-14392004000400010
[22] Xu S, Deng X, Reynolds AP (2001) Finite element simulation of material flow in friction
stir welding. Sci Tech Weld Join 6(3):191–193. doi: 10.1179/136217101101538640
[23] Ulysse P (2002) Three-dimensional modeling of the friction stir-welding process. Int J
Mach Tools & Manuf 42(14):1549–1557. doi: 10.1016/S0890-6955(02)00114-1
[24] Schmidt H, Hattel J (2005b) Modelling heat flow around tool probe in friction stir welding.
Sci Tech Weld Join 10(2):176–186. doi: 10.1179/174329305X36070
[25] Chao YJ, Qi X (1999) Heat Transfer and Thermo-Mechanical Analysis of friction stir
joining of AA6061-t6 plates. 1st International Symposium on Friction Stir Welding,
California, USA.
[26] Khandkar MZH, Khan JA, Reynolds AP (2003) Prediction of Temperature Distribution and
Thermal History During Friction Stir Welding: Input Torque Based Model. Sci Tech
Weld Join 8(3):165–174. doi: 10.1179/136217103225010943
[27] Frigaard O, Grong O, Midling OT (2001) A process model for friction stir welding of age
hardening aluminum alloys. Metall Mater Trans 32(5):1189–1200. doi: 10.1007/s11661-001-0128-4
[28] Zahedul M, Khandkar H, Khan JA (2001) Thermal modelling of overlap friction stir
welding for Al-alloys. J Mater Process Manuf Sci 10:91–105.
[29] Tang W, Guo X, McClure JC, Murr LE, Nunes A (1998) Heat Input and Temperature
Distribution in Friction Stir Welding. J Mater Process Manuf Sci 7:163–172.
[30] Colegrove PA, Shercliff HR (2005) 3-Dimensional CFD modelling of flow round a
threaded friction stir welding tool profile. J Mater Process Tech 169(2):320–327. doi:
10.1016/j.jmatprotec.2005.03.015
[31] Rosakis P, Rosakis AJ, Ravichandran G, Hodowany J (2000) A Thermodynamic Internal
Variable Model for the Partition of Plastic Work into Heat and Stored Energy in Metals.
Journal of the Mechanics and Physics of Solids 48(3):581–607. doi: 10.1016/S0022-5096(99)00048-4
[32] Simar A, Pardoen T, de Meester B (2007) Effect of rotational material flow on temperature
distribution in friction stir welds. Sci Tech Weld Join 12(4):324–333. doi:
10.1179/174329307X197584
[33] Nandan R, Roy GG, Debroy T (2006) Numerical Simulation of Three-Dimensional Heat
Transfer and Plastic Flow During Friction Stir Welding. Metall Materi Trans
37(4):1247–1259. doi: 10.1007/s11661-006-1076-9
[34] Lammlein DH (2007) Computational Modeling of Friction Stir Welding.
[35] Colegrove PA, Shercliff HR (2004b) Development of Trivex friction stir welding tool. Part
2 – Three-dimensional flow modelling. Sci Tech Weld Join 9(4):352–361. doi:
10.1179/136217104225021661
[36] Mishra RS, Ma ZY (2005) Friction stir welding and processing. Mater Sci Eng R 50:1–78.
doi: 10.1016/j.mser.2005.07.001
[37] Colegrove PA, Shercliff HR (2004a) 2-Dimensional CFD Modeling of Flow Round Profiled
FSW Tooling. Sci Tech Weld Join 9:483–492.
[38] Schnieder JA, Nunes AC (2004) Characterization of plastic flow and resulting microtextures
in a friction stir weld. Metall Mater Trans 35(4):777–783. doi: 10.1007/s11663-0040018-4
[39] Schneider JA, Beshears R, Nunes AC (2006) Interfacial sticking and slipping in the friction
stir welding process. Mater Sci Eng 435:297–304. doi: 10.1016/j.msea.2006.07.082
[40] Schmidt H, Hattel J (2005a) A local model for the thermomechanical conditions in friction
stir welding. Model Simul Mater Sci Eng 13:77–93. doi: 10.1088/0965-0393/13/1/006
[41] Oliphant AH (2004) Numerical modeling of friction stir welding: a comparison of Alegra
and Forge3. MSc thesis, Brigham Young University.
arXiv:1712.06727v2 [] 8 Feb 2018
On parabolic subgroups of Artin–Tits groups
of spherical type
Marı́a Cumplido, Volker Gebhardt, Juan González-Meneses
and Bert Wiest∗
February 8, 2018
Abstract
We show that, in an Artin–Tits group of spherical type, the intersection of two parabolic subgroups is a parabolic subgroup. Moreover, we
show that the set of parabolic subgroups forms a lattice with respect to
inclusion. This extends to all Artin–Tits groups of spherical type a result
that was previously known for braid groups.
To obtain the above results, we show that every element in an Artin–
Tits group of spherical type admits a unique minimal parabolic subgroup
containing it. Also, the subgroup associated to an element coincides with
the subgroup associated to any of its powers or roots. As a consequence,
if an element belongs to a parabolic subgroup, all its roots belong to the
same parabolic subgroup.
We define the simplicial complex of irreducible parabolic subgroups,
and we propose it as the analogue, in Artin–Tits groups of spherical type,
of the celebrated complex of curves which is an important tool in braid
groups, and more generally in mapping class groups. We conjecture that
the complex of irreducible parabolic subgroups is δ-hyperbolic.
1
Introduction
Artin–Tits groups are a natural generalization of braid groups from the algebraic
point of view: In the same way that the braid group can be obtained from
the presentation of the symmetric group with transpositions as generators by
dropping the order relations for the generators, other Coxeter groups give rise
∗ Partially supported by a PhD contract funded by the University of Rennes 1, Spanish
Projects MTM2013-44233-P, MTM2016-76453-C2-1-P, FEDER, the French-Spanish mobility programme “Mérimée 2015”, and Western Sydney University. Part of this work was done
during a visit of the third author to Western Sydney University, and visits of the first, third
and fourth authors to the University of Seville and the University of Rennes I.
to more general Artin–Tits groups. If the underlying Coxeter group is finite,
the resulting Artin–Tits group is said to be of spherical type.
Artin–Tits groups of spherical type share many properties with braid groups.
For instance, they admit a particular algebraic structure, called Garside structure, which allows one to define normal forms, solve the word and conjugacy problems, and prove some other algebraic properties (such as torsion-freeness).
However, some properties of braid groups are proved using topological or geometrical techniques, since a braid group can be seen as the fundamental group
of a configuration space, and also as a mapping class group of a punctured disc.
As one cannot replicate these topological or geometrical techniques in other
Artin–Tits groups, they must be replaced by algebraic arguments, if one tries
to extend properties of braid groups to all Artin–Tits groups of spherical type.
In this paper we will deal with parabolic subgroups of Artin–Tits groups. A
parabolic subgroup is by definition the conjugate of a subgroup generated by a
subset of the standard generators. The irreducible parabolic subgroups, as we
will see, are a natural algebraic analogue of the non-degenerate simple closed
curves in the punctured disc.
The simple closed curves, in turn, are the building blocks that form the well-known complex of curves. The properties of this complex, and the way in
which the braid group acts on it, allow one to use geometric arguments to prove
important results about braid groups (Nielsen–Thurston classification, structure
of centralizers, etc.). Similarly, the set of irreducible parabolic subgroups also
forms a simplicial complex, which we call the complex of parabolic subgroups. We
conjecture that the geometric properties of this complex, and the way in which
an Artin–Tits group acts on it, will allow us to extend many of the mentioned
properties to all Artin–Tits groups of spherical type. The results in the present
paper should help to lay the foundations for this study.
In this paper we will show the following:
Theorem 1.1. (see Proposition 7.2) Let AS be an Artin–Tits group of spherical
type, and let α ∈ AS . There is a unique parabolic subgroup Pα which is minimal
(by inclusion) among all parabolic subgroups containing α.
We will also show that the parabolic subgroup associated to an element coincides
with the parabolic subgroup associated to any of its powers and roots:
Theorem 8.2. Let AS be an Artin–Tits group of spherical type. If α ∈ AS and
m is a nonzero integer, then Pαm = Pα .
The above result will have an interesting consequence:
Corollary 8.3. Let AS be an Artin–Tits group of spherical type. If α belongs
to a parabolic subgroup P , and β ∈ AS is such that β m = α for some nonzero
integer m, then β ∈ P .
Finally, we will use Theorem 1.1 to show the following results, which describe
the structure of the set of parabolic subgroups with respect to the partial order
determined by inclusion.
Theorem 9.5. Let P and Q be two parabolic subgroups of an Artin–Tits group
AS of spherical type. Then P ∩ Q is also a parabolic subgroup.
Theorem 10.3. The set of parabolic subgroups of an Artin–Tits group of spherical type is a lattice with respect to the partial order determined by inclusion.
2
Complex of irreducible parabolic subgroups
An Artin–Tits group AS is a group generated by a finite set S, that admits
a presentation with at most one relation for each pair of generators s, t ∈ S
of the form sts · · · = tst · · · , where the length (which may be even or odd) of
the word on the left hand side is equal to the length of the word on the right
hand side. We will denote this length m(s, t), and we will say, as usual, that
m(s, t) = ∞ if there is no relation involving s and t. We can also assume that
m(s, t) > 1, otherwise we could just remove one generator. The two sides of
the relation sts · · · = tst · · · are expressions for the least common multiple of
the generators s and t with respect to the prefix and suffix partial orders (cf.
Section 3).
The given presentation of AS is usually described by a labeled graph ΓS (called
the Coxeter graph of AS ) whose vertices are the elements of S, in which there
is an edge joining two vertices s and t if and only if m(s, t) > 2. If m(s, t) > 3
we write m(s, t) as a label on the corresponding edge.
Given an Artin–Tits group AS , where S is the standard set of generators and ΓS
is the associated Coxeter graph, we say that AS is irreducible if ΓS is connected.
For a subset X of S, the subgroup AX of AS generated by X is called a standard
parabolic subgroup of AS ; it is isomorphic to the Artin–Tits group associated to
the subgraph ΓX of ΓS spanned by X [3, 10]. A subgroup P of AS is a parabolic
subgroup of AS if it is conjugate to a standard parabolic subgroup of AS .
If one adds the relations s2 = 1 for all s ∈ S to the standard presentation of
an Artin–Tits group AS , one obtains its corresponding Coxeter group WS . If
WS is a finite group, then AS is said to be of spherical type. Artin–Tits groups
of spherical type are completely classified [4], and they are known to admit a
Garside structure, as we will see in Section 3.
The main example of an Artin–Tits group of spherical type is the braid group on
n strands, Bn , which is generated by n − 1 elements, σ1 , . . . , σn−1 , with defining
relations σi σj = σj σi (if |i − j| > 1) and σi σj σi = σj σi σj (if |i − j| = 1). Its
corresponding Coxeter group is the symmetric group of n elements, Σn .
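As a small computational illustration (ours, not part of the paper), the following Python sketch builds the Coxeter graph ΓS for the braid group Bn just described, with m(σi, σj) = 3 when |i − j| = 1 and m(σi, σj) = 2 otherwise, and checks irreducibility, i.e. that ΓS is connected.

def coxeter_graph_braid(n):
    """Coxeter graph of B_n: vertices are the generators sigma_1, ..., sigma_{n-1};
    an edge joins sigma_i and sigma_j iff m(sigma_i, sigma_j) > 2, i.e. iff |i - j| = 1."""
    vertices = list(range(1, n))
    edges = {(i, i + 1) for i in range(1, n - 1)}
    return vertices, edges

def is_irreducible(vertices, edges):
    """An Artin-Tits group is irreducible iff its Coxeter graph is connected."""
    if not vertices:
        return True
    adjacency = {v: set() for v in vertices}
    for a, b in edges:
        adjacency[a].add(b)
        adjacency[b].add(a)
    seen, stack = {vertices[0]}, [vertices[0]]
    while stack:
        for w in adjacency[stack.pop()]:
            if w not in seen:
                seen.add(w)
                stack.append(w)
    return len(seen) == len(vertices)

print(is_irreducible(*coxeter_graph_braid(5)))  # True: B_5 is irreducible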
The braid group Bn can be seen as the group of orientation-preserving automorphisms of the n-times punctured disc Dn , fixing the boundary pointwise,
up to isotopy. That is, Bn is the mapping class group of Dn . As a consequence,
Bn acts naturally on the set of isotopy classes of non-degenerate, simple closed
curves in Dn (here non-degenerate means that the curve encloses more than one
and less than n punctures; simple means that the curve does not cross itself).
The (isotopy classes of) curves form a simplicial complex, called the complex of
curves, as follows: A simplex of dimension d is a set of d + 1 (isotopy classes
of) curves which admit a realisation consisting of d + 1 mutually disjoint curves.
The 1-skeleton of the complex of curves is called the graph of curves, in which
the vertices are the (isotopy classes of) curves, and two vertices are connected
by an edge if and only if the corresponding isotopy classes of curves can be
represented by two disjoint curves.
Our goal is to extend the notion of complex of curves from the braid group Bn to
all Artin–Tits groups of spherical type. Hence, we need to find some algebraic
objects which can be defined for all Artin–Tits groups of spherical type, and
which correspond to isotopy classes of non-degenerate simple closed curves in the
case of Bn . We claim that the irreducible parabolic subgroups are such objects.
We can define a correspondence:
ϕ : { Isotopy classes of non-degenerate simple closed curves in Dn } −→ { Irreducible parabolic subgroups of Bn }
so that, for a curve C, its image ϕ(C) is the set of braids that can be represented
by an automorphism of Dn whose support is enclosed by C.
Let us see that the image of ϕ consists of irreducible parabolic subgroups. Suppose that Dn is represented as a subset of the complex plane, whose boundary
is a circle, and its punctures correspond to the real numbers 1, . . . , n. Let Cm
be a circle enclosing the punctures 1, . . . , m where 1 < m < n. Then ϕ(Cm ) is
a subgroup of Bn which is naturally isomorphic to Bm . Actually, it is the standard parabolic subgroup AXm , where Xm = {σ1 , . . . , σm−1 }. More generally, if
we consider a non-degenerate simple closed curve C in Dn , we can always find
an automorphism α of Dn such that α(C) = Cm for some 1 < m < n. This
automorphism α represents a braid, and it follows by construction that ϕ(C) is
precisely the irreducible parabolic subgroup αAXm α−1 .
It is clear that ϕ is surjective as, given an irreducible parabolic subgroup P =
αAX α−1 , we can take a circle C ′ in Dn enclosing the (consecutive) punctures
which involve the generators in X, and it follows that ϕ(α−1 (C ′ )) = P . The
injectivity of ϕ will be shown later as a consequence of Lemma 2.1.
Therefore, instead of talking about curves, in an Artin–Tits group AS of spherical type we will talk about irreducible parabolic subgroups. The group AS acts
(from the right) on the set of parabolic subgroups by conjugation. This action
corresponds to the action of braids on isotopy classes of non-degenerate simple
closed curves.
Now we need to translate the notion of curves being “disjoint” (the adjacency
condition in the complex of curves) to the algebraic setting. It is worth mentioning that disjoint curves do not necessarily correspond to irreducible parabolic
subgroups with trivial intersection. Indeed, two disjoint nested curves correspond to two subgroups with nontrivial intersection, one containing the other.
Conversely, two irreducible parabolic subgroups with trivial intersection may
correspond to non-disjoint curves. (For instance, the curves corresponding to
the two cyclic subgroups of Bn generated by σ1 and σ2 respectively intersect.)
One simple algebraic translation of the notion of “disjoint curves” is the following: Two distinct irreducible parabolic subgroups P and Q of Bn correspond to
disjoint curves if and only if one of the following conditions is satisfied:
1. P ⊊ Q.
2. Q ⊊ P.
3. P ∩ Q = {1} and pq = qp for every p ∈ P , q ∈ Q.
This can be deduced easily, using geometrical arguments, but it will also follow
from the forthcoming results in this section. Hence, we can say that two irreducible parabolic subgroups P and Q in an Artin–Tits group AS are adjacent
(similarly to what we say in the complex of curves) if P and Q satisfy one of
the three conditions above, that is, if either one is contained in the other, or
they have trivial intersection and commute.
However, this characterization is not completely satisfactory, as it contains three
different cases. Fortunately, one can find a much simpler equivalent characterization by considering a special element for each parabolic subgroup, as we will
now see.
If P is an irreducible parabolic subgroup of an Artin–Tits group of spherical
type, we saw that P is itself an irreducible Artin–Tits group of spherical type.
Hence, the center of P is cyclic, generated by an element zP . To be precise,
there are two possible elements (zP and zP−1 ) which could be taken as generators
of the center of P , but we take the unique one which is conjugate to a positive
element of AS (an element which is a product of standard generators). Hence,
the element zP is determined by P . We shall see in Lemma 2.1 (case x = 1),
that conversely, the element zP determines the subgroup P .
If P is standard, that is, if P = AX for some X ⊆ S, we will write zX = zAX . It
turns out that, using the standard Garside structure of AS , one has zX = (∆X )e ,
where ∆X is the least common multiple of the elements of X and e ∈ {1, 2}.
Namely, e = 1 if ∆X is central in AX , and e = 2 otherwise [3].
If a standard parabolic subgroup AX is not irreducible, that is, if ΓX is not
connected, then X = X1 ⊔ · · · ⊔ Xr , where each ΓXi corresponds to a connected
component of ΓX , and AX = AX1 ×· · ·×AXr . In this case we can also define ∆X
as the least common multiple of the elements of X, and it is clear that (∆X )e
is central in AX for either e = 1 or e = 2. We will define zX = ∆X if ∆X is
5
central in AX , and zX = ∆2X otherwise. Notice that zX is a central element
in AX , but it is not a generator of the center of AX if AX is reducible, as in
this case the center of AX is not cyclic: it is the direct product of the centers
of each component, so it is isomorphic to Zr .
Now suppose that P is a parabolic subgroup, so P = αAX α−1 for some X ⊆ S.
We define zP = αzX α−1 . This element zP is well defined: if P = αAX α−1 =
βAY β −1 , then we have β −1 αAX α−1 β = AY , so β −1 αzX α−1 β = zY hence
αzX α−1 = βzY β −1 . Using results by Godelle [16] one can deduce the following:
Lemma 2.1. [5, Lemma 34] Let P and Q be two parabolic subgroups of AS .
Then, for every x ∈ AS , one has x−1 P x = Q if and only if x−1 zP x = zQ .
It follows from the above lemma that, if we want to study elements which
conjugate a parabolic subgroup P to another parabolic subgroup Q, we can
replace P and Q with the elements zP and zQ . It will be much easier to work
with elements than to work with subgroups. Moreover, in the case in which
P = Q, the above result reads:
N_{AS}(P) = Z_{AS}(zP),
that is, the normalizer of P in AS equals the centralizer of the element zP in AS .
Remark. We can use Lemma 2.1 to prove that the correspondence ϕ is injective.
In the case of braid groups, if C is a non-degenerate simple closed curve, and P =
ϕ(C) is its corresponding irreducible parabolic subgroup, the central element zP
is either a conjugate of a standard generator (if C encloses two punctures) or
the Dehn twist along the curve C (if C encloses more than two punctures).
Hence, if C1 and C2 are such that ϕ(C1 ) = ϕ(C2 ) then either zP or zP2 is the
Dehn twist along C1 and also the Dehn twist along C2 . Two Dehn twists along
non-degenerate curves correspond to the same mapping class if and only if their
corresponding curves are isotopic [13, Fact 3.6], hence C1 and C2 are isotopic,
showing that ϕ is injective.
Lemma 2.1 allows us to simplify the adjacency condition for irreducible parabolic
subgroups, using the special central elements zP :
Theorem 2.2. Let P and Q be two distinct irreducible parabolic subgroups of
an Artin–Tits group AS of spherical type. Then zP zQ = zQ zP holds if and only
if one of the following three conditions is satisfied:
1. P ⊊ Q.
2. Q ⊊ P.
3. P ∩ Q = {1} and xy = yx for every x ∈ P and y ∈ Q.
The proof of this result will be postponed to Section 11, as it uses some ingredients which will be introduced later in the paper.
Remark. Consider two isotopy classes of non-degenerate simple closed curves C1
and C2 in the disc Dn , and their corresponding parabolic subgroups P = ϕ(C1 )
and Q = ϕ(C2 ) of Bn . Then C1 and C2 can be realized to be disjoint if and
only if their corresponding Dehn twists commute [13, Fact 3.9]. It is known
that the centralizer of a generator σi is equal to the centralizer of σi2 . Hence C1
and C2 can be realized to be disjoint if and only if zP and zQ commute, which
is equivalent, by Theorem 2.2, to the three conditions in its statement.
We can finally extend the notion of complex of curves to all Artin–Tits groups
of spherical type, replacing curves with irreducible parabolic subgroups.
Definition 2.3. Let AS be an Artin–Tits group of spherical type. We define
the complex of irreducible parabolic subgroups as a simplicial complex in which a
simplex of dimension d is a set {P0 , . . . , Pd } of irreducible parabolic subgroups
such that zPi commutes with zPj for every 0 ≤ i, j ≤ d.
As it happens with the complex of curves in the punctured disc (or in any other
surface), we can define a distance in the complex of irreducible parabolic subgroups, which is determined by the distance in the 1-skeleton, imposing all edges
to have length 1. Notice that the action of AS on the above complex (by conjugation of the parabolic subgroups) is an action by isometries, as conjugation
preserves commutation of elements.
We believe that this complex can be an important tool to study properties of
Artin–Tits groups of spherical type. One important result, that would allow to
extend many properties of braid groups to Artin–Tits groups of spherical type,
is the following:
Conjecture 2.4. The complex of irreducible parabolic subgroups of an Artin–
Tits group of spherical type is δ–hyperbolic.
3
Results from Garside theory for Artin–Tits
groups of spherical type
In this section, we will recall some results from Garside theory that will be
needed. In order to simplify the exposition, we present the material in the
context of Artin–Tits groups of spherical type that is relevant for the paper,
instead of in full generality. For details, we refer to [6, 7, 9, 12].
Let AS be an Artin–Tits group of spherical type. Its monoid of positive elements (the monoid generated by the elements of S) will be denoted A_S^+. The group AS forms a lattice with respect to the prefix order ≼, where a ≼ b if and only if a⁻¹b ∈ A_S^+. We will denote by ∧ and ∨ the meet and join operations, respectively, in this lattice. The Garside element of AS is ∆S = s1 ∨ · · · ∨ sn, where S = {s1, . . . , sn}. One can similarly define the suffix order ≽, where b ≽ a if and only if ba⁻¹ ∈ A_S^+. In general, a ≼ b is not equivalent to b ≽ a.
If AS is irreducible (that is, if the defining Coxeter graph ΓS is connected),
then either ∆S or ∆S² generates the center of AS [3, 10]. Actually, conjugation by ∆S induces a permutation of S. We will denote τS(x) = ∆S⁻¹ x ∆S for every element x ∈ AS; notice that the automorphism τS has either order 1 or order 2. The triple (AS, A_S^+, ∆S) determines the so-called classical Garside structure of the group AS.
The simple elements are the positive prefixes of ∆S , which coincide with the
positive suffixes of ∆S . In an Artin–Tits group AS of spherical type, a simple
element is a positive element that is square-free, that is, that cannot be written
as a positive word with two consecutive equal letters [3, 10]. For a simple
element s, we define its right complement as ∂S(s) = s⁻¹∆S. Notice that ∂S is a bijection of the set of simple elements. Notice also that ∂S²(s) = τS(s) for
every simple element s.
The left normal form of an element α ∈ AS is the unique decomposition of the form α = ∆S^p α1 · · · αr, where p ∈ Z, r ≥ 0, every αi is a non-trivial simple element different from ∆S, and ∂S(αi) ∧ αi+1 = 1 for i = 1, . . . , r − 1. The numbers p and r are called the infimum and the canonical length of α, denoted infS(α) and ℓS(α), respectively. The supremum of α is supS(α) = infS(α) + ℓS(α). By [11], we know that infS(α) and supS(α) are, respectively, the maximum and minimum integers p and q such that ∆S^p ≼ α ≼ ∆S^q, or equivalently ∆S^q ≽ α ≽ ∆S^p, holds.
There is another normal form which will be important for us. The mixed normal
form, also called the negative-positive normal form, or np-normal form, is the
decomposition of an element α as α = xs⁻¹ · · · x1⁻¹ y1 · · · yt, where x = x1 · · · xs and y = y1 · · · yt are positive elements written in left normal form (here some of the initial factors of either x or y can be equal to ∆), such that x ∧ y = 1 (that is, there is no possible cancellation in x⁻¹y).
This decomposition is unique, and it is closely related to the left normal form of α. Indeed, if x ≠ 1 and y ≠ 1 then infS(x) = infS(y) = 0 holds; otherwise there would be cancellations. In this case, if one writes xi⁻¹ = ∂(xi)∆S⁻¹ for i = 1, . . . , s, and then collects all appearances of ∆S⁻¹ on the left, one gets α = ∆S^{−s} x̃s · · · x̃1 y1 · · · yt, where x̃i = τS^{−i}(∂(xi)). The latter is precisely the left normal form of α. If x is trivial, then α = y1 · · · yt where the first p factors could be equal to ∆S, so the left normal form is α = ∆S^p yp+1 · · · yt. If y is trivial, then α = xs⁻¹ · · · x1⁻¹ where some (say k) of the rightmost factors could be equal to ∆S⁻¹. The left normal form of α in this case would be α = ∆S^{−s} x̃s · · · x̃k+1. Notice that if x ≠ 1 then infS(α) = −s, and if y ≠ 1 then supS(α) = t.
The np-normal form can be computed from any decomposition of α as α = β⁻¹γ, where β and γ are positive elements: one just needs to cancel δ = β ∧ γ in the middle. That is, write β = δx and γ = δy; then α = β⁻¹γ = x⁻¹δ⁻¹δy = x⁻¹y, where no more cancellation is possible. Then compute the left normal forms of x and y, and the np-normal form is obtained.
The mixed normal form is very useful to detect whether an element belongs
to a proper standard parabolic subgroup. If α ∈ AX, with X ⊆ S, and α = xs⁻¹ · · · x1⁻¹ y1 · · · yt is the np-normal form of α in AX, then the simple factors x1, . . . , xs, y1, . . . , yt ∈ AX are also simple elements in AS such that x1 · · · xs and y1 · · · yt are in left normal form in AS. Hence, the above is also the np-normal form of α in AS. Therefore, given α ∈ AS we will have α ∈ AX if and only if all factors in its np-normal form belong to AX. It follows that if x ≠ 1, infS(α) = infX(α) = −s, and if y ≠ 1, supS(α) = supX(α) = t.
We finish this section with an important observation: The Artin–Tits group AS admits other Garside structures (AS, A_S^+, ∆S^N), which are obtained by replacing the Garside element ∆S with some non-trivial positive power ∆S^N. To see this, recall that ∆S^p ≼ α ≼ ∆S^q is equivalent to ∆S^q ≽ α ≽ ∆S^p for any p, q ∈ Z, so the positive prefixes of ∆S^N coincide with the positive suffixes of ∆S^N, and note that one has s ≼ ∆S ≼ ∆S^N for any s ∈ S, so the divisors of ∆S^N generate AS.
The simple elements with respect to this Garside structure are the positive prefixes of ∆S^N (which are no longer square-free, in general, if N > 1). The np-normal form of an element with respect to the Garside structure (AS, A_S^+, ∆S^N) can be obtained from that of the Garside structure (AS, A_S^+, ∆S) by grouping together the simple factors of the positive and the negative parts of the latter in groups of N, "padding" the outermost groups with copies of the identity element as necessary.
Suppose that α = xs⁻¹ · · · x1⁻¹ y1 · · · yt is in np-normal form, for the classical Garside structure of AS. If we take N ≥ max(s, t), it follows that x = x1 · · · xs ≼ ∆S^N and y = y1 · · · yt ≼ ∆S^N. This means that x and y are simple elements with respect to the Garside structure in which ∆S^N is the Garside element. Therefore, for every α ∈ AS, we can consider a Garside structure of AS such that the np-normal form of α is x⁻¹y, with x and y being simple elements.
4
Other results for Artin–Tits groups of spherical type
This section focuses on some further properties of Artin–Tits groups of spherical
type that we will need. The properties listed in this section are specific to Artin–
Tits groups and do not directly extend to Garside groups in general.
Definition 4.1. Let AS be an Artin–Tits group of spherical type. Given X ⊆ S
and t ∈ S, we define the positive element
rX,t = ∆−1X ∆X∪{t} .
If t ∈/ X, this positive element coincides with the elementary ribbon dX,t defined
in [16]. If t ∈ X we just have rX,t = 1 while dX,t = ∆X .
Lemma 4.2. If AS is an Artin–Tits group of spherical type, X ⊆ S and t ∈ S,
then there is a subset Y ⊆ X ∪ {t} such that (rX,t )−1 X rX,t = Y holds.
Proof. This follows from the definition, as the automorphisms τX and τX∪{t}
permute the elements of X and X ∪ {t}, respectively.
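For example, in the braid group on three strands take X = {σ1 } and t = σ2 . Then ∆X = σ1 , ∆X∪{t} = σ1 σ2 σ1 , and rX,t = (σ1 )−1 σ1 σ2 σ1 = σ2 σ1 . Conjugating by this element sends σ1 to (σ2 σ1 )−1 σ1 (σ2 σ1 ) = σ2 , so Y = {σ2 } ⊆ X ∪ {t}, as predicted by Lemma 4.2.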
Lemma 4.3. If AS is an Artin–Tits group of spherical type, X ⊆ S and t ∈ S,
then the element rX,t can be characterized by the following property:
∆X ∨ t = ∆X rX,t
Proof. This follows immediately from the definition of ∆X , which is the least
common multiple of the elements in X, and the definition of ∆X∪{t} = ∆X rX,t ,
which is the least common multiple of the elements in X ∪ {t}.
Lemma 4.4. If AS is an Artin–Tits group of spherical type, X ( S, and
t ∈ S \ X, then the following hold:
(a) t 4 rX,t
(b) If s ∈ S with s 4 rX,t , then s = t.
Proof. We start by proving (b). Recall that the set of prefixes of a Garside
element with the classical Garside structure coincides with its set of suffixes. If
s ∈ S with s 4 rX,t , then one has s 4 rX,t 4 ∆X∪{t} , so s ∈ X ∪ {t}. On
the other hand, ∆X rX,t is a simple element by Lemma 4.3, hence square-free,
and ∆X < u for all u ∈ X, so s ∈
/ X and part (b) is shown. Turning to the
proof of (a), as t ∈
/ X, one has t 64 ∆X and thus rX,t 6= 1 by Lemma 4.3. This
means that rX,t must start with some letter, which by part (b) must necessarily
be t.
We define the support of a positive element α ∈ AS , denoted supp(α), as the
set of generators s ∈ S that appear in any positive word representing α. This
is well defined for two reasons: Firstly, two positive words represent the same
element in AS if and only if one can transform the former into the latter by
repeatedly applying the relations in the presentation of AS , that is, if and only
if the elements are the same in the monoid A+
S defined by the same presentation
as AS [18]. Secondly, due to the form of the relations in the presentation of A+
S,
applying a relation to a word does not modify the set of generators that appear,
so all words representing an element of A+
S involve the same set of generators.
For a not necessarily positive element α ∈ AS , we define its support, using the
np-normal form α = x−1 y, as supp(α) = supp(x) ∪ supp(y).
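For instance, in the braid group on three strands the element α = (σ2 )−1 σ1 σ2 has np-normal form (σ2 )−1 (σ1 σ2 ), so supp(α) = {σ1 , σ2 }; note that α is conjugate to σ1 , whose support is {σ1 }, so the support of an element is not invariant under conjugation.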
Lemma 4.5. Let α be a simple element (with respect to the usual Garside
structure) in an Artin–Tits group AS of spherical type. Let t, s ∈ S. Then:
t 64 α and t 4 αs ⇒ αs = tα.
Proof. As the relations defining A+
S are homogeneous, the word length of α is
well defined. We proceed by induction on the length of α. If α = 1 the result is
trivially true, so suppose α 6= 1 and that the result is true for shorter elements.
Let a ∈ S such that a 4 α. There is a relation in AS of the form atat · · · =
tata · · · , where the words on each side have length m = m(a, t). Let us denote
by ρi the i-th letter of the word atat · · · , for i = 1, . . . , m. That is, ρi = a if i
is odd, and ρi = t if i is even. Recall that ρ1 · · · ρm = a ∨ t. Also, ρ1 · · · ρm =
tρ1 · · · ρm−1 .
We have t 4 αs and a 4 α 4 αs, so a ∨ t = ρ1 · · · ρm 4 αs. Notice that
ρ1 · · · ρm = a ∨ t 64 α (as t 64 α), but ρ1 = a 4 α. Hence, there is some k, where
0 < k < m, such that ρ1 · · · ρk 4 α and ρ1 · · · ρk+1 64 α.
Write α = ρ1 · · · ρk α0 . Then ρk+1 64 α0 , but ρk+1 4 α0 s. By induction hypothesis, α0 s = ρk+1 α0 . Hence αs = ρ1 · · · ρk+1 α0 , where k + 1 ≤ m.
We claim that k + 1 = m. Otherwise, as ρ1 · · · ρm 4 αs = ρ1 · · · ρk+1 α0 , we
would have ρk+2 4 α0 , and then ρ1 · · · ρk ρk+2 4 α, which is not possible as
ρk = ρk+2 and α is simple (thus square-free).
Hence k + 1 = m and αs = ρ1 · · · ρm α0 = tρ1 · · · ρm−1 α0 = tα, as we wanted to
show.
Remark. The above result is not true if α is not simple (with the usual Garside
structure). As an example, consider the Artin–Tits group ha, b | abab = babai,
and the elements α = aaba and t = s = b. We have b 64 α, but
αb = aabab = ababa = babaa,
so b 4 αb, but αb 6= bα as α = aaba 6= abaa.
We end this section with an important property concerning the central elements zP .
Lemma 4.6. Let P and Q be parabolic subgroups of an Artin–Tits group AS
of spherical type. Then the following are equivalent:
1. zP zQ = zQ zP
2. (zP )m (zQ )n = (zQ )n (zP )m for some n, m 6= 0
3. (zP )m (zQ )n = (zQ )n (zP )m for all n, m 6= 0
Proof. If zP and zQ commute, it is clear that (zP )m and (zQ )n commute for
every n, m 6= 0. Conversely, suppose that (zP )m and (zQ )n commute for some
n, m 6= 0.
By Godelle [16, Proposition 2.1], if X, Y ⊆ S and m 6= 0, then u−1 (zX )m u ∈ AY
holds if and only if u = vy with y ∈ AY and v −1 Xv = Y .
Hence, if u−1 (zX )m u = (zX )m we can take Y = X, so y commutes with zX
and v induces a permutation of X, which implies that v −1 AX v = AX and then
v −1 zX v = zX . Therefore u−1 zX u = y −1 v −1 (zX )vy = y −1 zX y = zX .
Now recall that (zP )m (zQ )n = (zQ )n (zP )m . Since α−1 P α = AX for some
α ∈ AS and some X ⊆ S, we can conjugate the above equality by α to obtain
(zX )m (α−1 zQ α)n = (α−1 zQ α)n (zX )m , which by the argument in the previous
paragraph implies that zX (α−1 zQ α)n = (α−1 zQ α)n zX . Conjugating back, we
get zP (zQ )n = (zQ )n zP .
Now take β ∈ AS such that β −1 Qβ = AY for some Y ⊆ S. Conjugating
the equality from the previous paragraph by β, we obtain (β −1 zP β)(zY )n =
(zY )n (β −1 zP β), which implies (β −1 zP β)zY = zY (β −1 zP β). Conjugating back,
we finally obtain zP zQ = zQ zP .
5
Cyclings, twisted cyclings and summit sets
Let α ∈ AS be an element whose left normal form is α = ∆pS α1 · · · αr with r > 0.
The initial factor of α is the simple element ιS (α) = τS−p (α1 ). Thus, α =
ιS (α)∆pS α2 · · · αr . The cycling of α is the conjugate of α by its initial factor,
that is,
cS (α) = ∆pS α2 · · · αr ιS (α).
This expression is not necessarily in left normal form, so in order to apply a
new cycling one must first compute the left normal form of cS (α), to know the
new conjugating element ιS (cS (α)). If r = 0, that is, if α = ∆pS , we just define
cS (∆pS ) = ∆pS .
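To illustrate, let α = (σ1 σ2 )(σ2 ), written in left normal form, in the braid group on three strands, so p = 0 and r = 2. Then ιS (α) = σ1 σ2 and cS (α) = σ2 · σ1 σ2 = σ2 σ1 σ2 = ∆S .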
The twisted cycling of α is defined as c̃S (α) = τS−1 (cS (α)). It is the conjugate of
α by ιS (α)∆−1S , which is the inverse of a simple element. (One can also think
of c̃S as a left-conjugation by a simple element.) Notice that the conjugating
element is
ιS (α)∆−1S = ∆pS α1 ∆−(p+1)S .
The following lemma tells us that twisted cycling is actually more natural than
cycling from the point of view of the mixed normal form. Notice that its
proof also works when using the alternative Garside structure with Garside
element ∆n , for some n > 1.
Lemma 5.1. If α = x−1 y = x−1s · · · x−11 y1 · · · yt is the np-normal form of an
element in AS , and x 6= 1, then the conjugating element for twisted cycling is
precisely x−1s . Hence,
c̃S (α) = (xs−1 )−1 · · · (x1 )−1 y1 · · · yt (xs )−1 .
Proof. We have seen that the conjugating element for the twisted cycling of α
is ∆−sS α1 ∆s−1S , where α1 is the first non-∆S factor in its left normal form, that
is, α1 = x̃s = τS−s (∂S (xs )) = ∆sS ∂S (xs )∆−sS .
Thus, the conjugating element is ∆−sS ∆sS ∂S (xs )∆−sS ∆s−1S = ∂S (xs )∆−1S = x−1s .
If α = ∆pS α1 · · · αr is in left normal form and r > 0, the decycling of α is the
conjugate of α by α−1
r , that is,
dS (α) = αr ∆p α1 · · · αr−1 .
If r = 0, that is, if α = ∆pS , we define dS (∆pS ) = ∆pS . Cyclings (or twisted
cyclings) and decyclings are used to compute some important finite subsets of
the conjugacy class of an element, which we will refer to with the common name
of summit sets.
Definition 5.2. Given α ∈ AS , we denote by C + (α) the set of positive conjugates of α. Notice that this set is always finite, and it could be empty.
Definition 5.3. [11] Given α ∈ AS , the Super Summit Set of α, denoted
SSS(α), is the set of all conjugates of α with maximal infimum and minimal
supremum. Equivalently, SSS(α) is the set of conjugates of α with minimal
canonical length.
Definition 5.4. [15] Given α ∈ AS , the Ultra Summit Set of α, denoted
U SS(α), is the set of all elements β ∈ SSS(α) such that ckS (β) = β for some
k > 0.
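Continuing the small example above, a single cycling of σ1 σ2 σ2 in the braid group on three strands already produces ∆S , and in fact SSS(σ1 σ2 σ2 ) = U SS(σ1 σ2 σ2 ) = {∆S }: any conjugate of canonical length 0 is a power of ∆S , and the only power of ∆S with the same letter length as σ1 σ2 σ2 is ∆S itself.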
It is easy to deduce from the definitions that cS and τS commute; indeed, for
any β ∈ AS , the conjugating elements for the conjugations β 7→ cS (τS (β)) and
β 7→ τS (cS (β)) are identical. Therefore, ckS (β) = β for some k > 0 holds if and
only if e
ctS (β) = β for some t > 0. This means that twisted cycling can be used
to define the Ultra Summit Set.
We will also need two other types of summit sets:
Definition 5.5. [17] Let α be an element in a Garside group G. The Reduced
Super Summit Set of α is
RSSS(α) = {x ∈ αG : ckS (x) = x = dtS (x) for some k, t > 0}.
As above, one can use twisted cycling instead of cycling to define RSSS(α).
Definition 5.6. [1] Let α be an element in a Garside group G. The stable ultra
summit set of α is SU (α) = {x ∈ αG : xm ∈ U SS(xm ) ∀m ∈ Z}.
It is well known [11, 15] that when applying iterated cycling, starting with
α ∈ AS , one obtains an element α′ whose infimum is maximal in its conjugacy
class. Then, by applying iterated decycling to α′ one obtains an element α′′
whose supremum is minimal in its conjugacy class, so α′′ ∈ SSS(α). Finally,
when applying iterated cycling to α′′ until the first repeated element α′′′ is
obtained, one has α′′′ ∈ U SS(α). If one then applies iterated decycling to α′′′
until the first repeated element, one obtains α̂ ∈ RSSS(α).
In order to conjugate α̂ to SU (α), as explained in [1], one just needs to apply the
conjugating elements for iterated cycling or decycling of suitable powers of α̂. It
follows that all the above summit sets are nonempty (and finite), and moreover
we have the following:
Lemma 5.7. Let α be an element in a Garside group G with Garside element ∆,
and let I be either SSS(α), or U SS(α), or RSSS(α), or SU (α). One can
conjugate α to β ∈ I by a sequence of conjugations:
α = α0 → α1 → · · · → αm = β
where, for i = 0, . . . , m − 1, the conjugating element from αi to αi+1 has the
form αpi ∆q ∧ ∆r for some integers p, q and r.
Proof. We just need to show that the conjugating elements, for either cycling
or decycling, of a power xk of an element x has the form xp ∆q ∧ ∆r for some
integers p, q and r.
If xk is a power of ∆, the conjugating element is trivial, so the result holds.
Otherwise, suppose that ∆m x1 · · · xn is the left normal form of xk . Then
xk ∆−m = τ −m (x1 ) · · · τ −m (xn ), where the latter decomposition is in left normal form. Hence, xk ∆−m ∧ ∆ = τ −m (x1 ) = ι(xk ). So the conjugating element
for cycling has the desired form.
On the other hand, ∆m+n−1 ∧ xk = ∆m x1 · · · xn−1 . So x−1n = x−k (∆m+n−1 ∧ xk ) = x−k ∆m+n−1 ∧ 1. Since x−1n is the conjugating element for decycling of xk ,
the result follows.
∧1. Since xn is the conjugating element for decycling of xk ,
the result follows.
The sets C + (α), SSS(α), U SS(α), RSSS(α) and SU (α) share a common
important property, which is usually called convexity. Recall that β x means
x−1 βx and that ∧ denotes the meet operation in the lattice associated to the
prefix partial order 4.
Lemma 5.8. [14, Propositions 4.8 and 4.12], [15, Theorem 1.18] Let α be an
element in a Garside group G and let I be either C + (α), or SSS(α), or U SS(α),
or RSSS(α), or SU (α). If α, αx , αy ∈ I then αx∧y ∈ I.
Remark. Convexity of SU (α) is not shown in [1], but it follows immediately
from convexity of U SS(α).
Remark. Convexity in RSSS(α) also follows from convexity in U SS(α), after
the following observation: An element x belongs to a closed orbit under decycling if and only if x−1 belongs to a closed orbit under twisted cycling, hence to a
closed orbit under cycling. Actually, the conjugating element for twisted cycling
of x−1 equals the conjugating element for decycling of x. Hence the elements
in RSSS(α) are those elements x, conjugate to α, such that both x and x−1
belong to closed orbits under cycling. This implies the convexity in RSSS(α).
Note that it also implies that SU (α) ⊆ RSSS(α).
The set I is usually obtained by computing the directed graph GI , whose vertices are the elements of I, and whose arrows correspond to minimal positive
conjugators. That is, there is an arrow labelled x starting from a vertex u and
finishing at a vertex v if and only if:
1. x ∈ A+S ;
2. ux = v; and
3. uy ∈
/ I for every non-trivial proper prefix y of x.
Thanks to the convexity property, one can see that the graph GI is finite and
connected, and that the label of each arrow is a simple element. This is why that
graph can be computed starting with a single vertex, iteratively conjugating the
known vertices by simple elements, until no new elements of I are obtained.
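The orbit computation just described can be sketched programmatically as follows. This is an illustrative sketch only: the helper functions minimal_conjugators(u), returning the labels of the arrows starting at u, and conjugate(u, x), returning the conjugate of u by x, are hypothetical names standing in for whatever Garside-theoretic library is used.

    from collections import deque

    def summit_graph(v0, minimal_conjugators, conjugate):
        # Breadth-first computation of the graph G_I from one known vertex v0.
        vertices = {v0}            # group elements are assumed to be hashable
        arrows = []                # triples (u, x, v): u conjugated by x equals v
        queue = deque([v0])
        while queue:
            u = queue.popleft()
            for x in minimal_conjugators(u):   # labels are simple elements
                v = conjugate(u, x)
                arrows.append((u, x, v))
                if v not in vertices:          # new element of I found
                    vertices.add(v)
                    queue.append(v)
        return vertices, arrows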
We will need to use different Garside structures (AS , A+S , ∆nS ) for distinct positive values of n. To distinguish the precise Garside structure we are using, we
will write In instead of I for a given summit set. For instance, given n ≥ 1 and
an element α ∈ AS , we will write SSSn (α), U SSn (α), RSSSn (α) and SUn (α)
to refer to the super summit set, the ultra summit set, the reduced super summit set and the stable ultra summit set, respectively, of α with respect to the
structure (AS , A+S , ∆nS ). Notice that all those sets are finite sets consisting of
conjugates of α. Also, the set C + (α) is independent of the Garside structure
under consideration, so Cn+ (α) = C + (α) for every n > 0.
We will now see that in a summit set I there is always some element which
belongs to In for every n > 1.
Definition 5.9. Let α ∈ AS and let I be either SSS, or U SS, or RSSS, or SU .
Then we define I∞ (α) = ∩n≥1 In (α).
Proposition 5.10. For every α ∈ AS , the set I∞ (α) is nonempty.
Proof. We write In for In (α) for n ∈ N ∪ {∞}. For every N ≥ 1, let I≤N =
I1 ∩ I2 ∩ · · · ∩ IN . We will show that I≤N 6= ∅ by induction on N .
If N = 1 then I≤N = I1 , which is known to be nonempty. We can then assume
that N > 1 and that there is an element β ∈ I≤(N −1) . Using the Garside
structure (AS , A+S , ∆NS ), we can conjugate β to an element γ ∈ IN by applying
some suitable conjugations
β = β0 → β1 → · · · → βm = γ.
By Lemma 5.7, for every i = 0, . . . , m − 1 the conjugating element from βi to
βi+1 has the form βip ∆NqS ∧ ∆NrS for some integers p, q and r. Recall that
β ∈ I≤(N −1) . We claim that, if we apply such a conjugating element to β, the
resulting element β1 will still belong to I≤(N −1) . Repeating the argument m
times, it will follow that γ ∈ I≤(N −1) ∩ IN = I≤N , so I≤N 6= ∅ as we wanted to
show.
But notice that, if β ∈ In for some n, then conjugating β by β p ∆NqS gives the
same result as conjugating β by ∆NqS , namely τSNq (β) ∈ In , and conjugating β
by ∆NrS gives τSNr (β) ∈ In . By Lemma 5.8, it follows that β1 ∈ In . Hence, if
β ∈ I≤(N −1) then β1 ∈ I≤(N −1) , which shows the claim.
Therefore, we have shown that I≤N 6= ∅ for every N ≥ 1. As the set I1 is finite,
and we have a monotonic chain
I1 = I≤1 ⊇ I≤2 ⊇ I≤3 ⊇ · · · ,
this chain must stabilize for some N ≥ 1, so I∞ = I≤N 6= ∅.
We finish this section by recalling an important tool for studying properties of
the Ultra Summit Set: The transport map.
Definition 5.11. [15] Let α be an element of a Garside group G and v, w ∈
U SS(α). Let x ∈ AS be an element conjugating v to w, that is, x−1 vx = w.
For every i ≥ 0, denote v (i) = ciS (v) and w(i) = ciS (w). The transport of x at v
is the element x(1) = ιS (v)−1 x ιS (w), which conjugates v (1) to w(1) .
We can define iteratively x(i) as the transport of x(i−1) at v (i−1) . That is,
the i-th transport of x at v, denoted x(i) , is the following conjugating element
from v (i) to w(i) :
x(i) = (ι(v)ι(v (1) ) · · · ι(v (i−1) ))−1 x ι(w)ι(w(1) ) · · · ι(w(i−1) ).
Lemma 5.12. [15, Lemma 2.6] Let α be an element of a Garside group G and
v, w ∈ U SS(α). For every conjugating element x such that x−1 vx = w there
exists some integer N > 0 such that v (N ) = v, w(N ) = w and x(N ) = x.
We remark that in [15] it is assumed that x is a positive element, but this is not a
constraint: Multiplying x by a suitable central power of the Garside element ∆,
we can always assume that x is positive.
Lemma 5.12 can be rewritten in the following way:
Lemma 5.13. Let α be an element in a Garside group G, and let v, w ∈
U SS(α). Let m and n be the lengths of the orbits under cycling of v and w,
respectively. Denote Ct (v) (resp. Ct (w)) the product of t consecutive conjugating
elements for cycling, starting with v (resp. starting with w). Then, for every x
such that x−1 vx = w, there is a positive common multiple N of m and n such
that x−1 CkN (v) x = CkN (w) for every k > 0.
Proof. By Lemma 5.12, there is some N > 0 such that v (N ) = v, w(N ) = w and
x(N ) = x. The first property implies that N is a multiple of m. The second one,
that N is a multiple of n. Finally, by definition of transport, the third property
just means x−1 CN (v)x = CN (w).
Now notice that, as N is a multiple of the length m of the orbit of v under
cycling, one has CkN (v) = CN (v)k for every k > 0. In the same way, CkN (w) =
CN (w)k for every k > 0. Therefore x−1 CkN (v)x = x−1 CN (v)k x = CN (w)k =
CkN (w).
Corollary 5.14. Let α be an element in a Garside group G, and let v, w ∈
U SS(α). For every t > 0, denote C̃t (v) (resp. C̃t (w)) the product of t consecutive conjugating elements for twisted cycling, starting with v (resp. starting
with w). Then, for every x such that x−1 vx = w, there is a positive integer M
such that x−1 C̃M (v) x = C̃M (w), where C̃M (v) commutes with v and C̃M (w)
commutes with w.
Proof. By definition of cycling and twisted cycling, we have C̃t (v) = Ct (v)∆−t
and C̃t (w) = Ct (w)∆−t for every t > 0.
We know from Lemma 5.13 that there is some positive integer N such that
x−1 CkN (v) x = CkN (w) for every k > 0. If we take k big enough so that ∆kN
is central, and we denote M = kN , we have:
x−1 C̃M (v) x = x−1 CM (v)∆−M x = x−1 CM (v) x ∆−M = CM (w)∆−M = C̃M (w).
Moreover, from Lemma 5.13 we know that CM (v) commutes with v, hence
C̃M (v) = CM (v)∆−M also commutes with v. In the same way, C̃M (w) commutes with w.
6
Positive conjugates of elements in a parabolic
subgroup
Suppose that an element α belongs to a proper parabolic subgroup P ( AS
and has a positive conjugate. We will show in this section that then all positive
conjugates of α belong to a proper standard parabolic subgroup, determined
by their corresponding supports.
Moreover, the support of a positive conjugate of α will be shown to be preserved
by conjugation, in the sense that if α′ and α′′ are two positive conjugates of α,
whose respective supports are X and Y , then every element conjugating α′ to
α′′ will also conjugate AX to AY .
This will allow us to define a special parabolic subgroup associated to α, which
will be the smallest parabolic subgroup (by inclusion) containing α.
Lemma 6.1. If α ∈ AX ( AS where X ( S, then cS (α) ∈ AY , where either
Y = X or Y = τS (X).
Proof. Let x−1 y = x−1s · · · x−11 y1 · · · yt be the np-normal form of α in AS . As
this is precisely the np-normal form of α in AX , one has that supp(x) ⊆ X and
supp(y) ⊆ X.
Suppose that x = 1, so y = y1 · · · yt . It is clear that y1 6= ∆S , as X ( S. Then
cS (α) = αy1 = y2 · · · yt y1 , hence cS (α) ∈ AX . Notice that if y1 6= ∆X or if α
is a power of ∆X , then one has cS (α) = cX (α), but otherwise we may have
cS (α) 6= cX (α).
Now suppose that x 6= 1. We saw in Lemma 5.1 that
c̃S (α) = (xs−1 )−1 · · · (x1 )−1 y1 · · · yt (xs )−1 ,
hence c̃S (α) = τS−1 (cS (α)) ∈ AX , which implies that cS (α) ∈ AτS (X) . Notice
that in this case cS (α) is not necessarily equal to cX (α), but c̃S (α) = c̃X (α).
Lemma 6.2. If α ∈ AX ( AS where X ( S, then dS (α) ∈ AY , where either
Y = X or Y = τS (X).
Proof. It is obvious from the conjugating elements for decycling and twisted
cycling that for every element α, one has dS (α) = (c̃S (α−1 ))−1 . Now α ∈ AX ,
so α−1 ∈ AX and then c̃S (α−1 ) ∈ AY , where either Y = X or Y = τS (X).
Therefore, dS (α) = (c̃S (α−1 ))−1 ∈ AY .
Thanks to Lemma 6.1 and Lemma 6.2, we see that if α ∈ AX , then we obtain
α′′ ∈ AX ∩ SSS(α), where α′′ is obtained from α by iterated cycling and decycling (and possibly conjugating by ∆S at the end). Notice that if α is conjugate
to a positive element, then α′′ will be positive (as the infimum of α′′ is maximal
in its conjugacy class).
Let us suppose that α′′ ∈ AX is positive. We want to study the graph GC + (α) .
We already know that some vertex α′′ ∈ C + (α) belongs to a proper standard
parabolic subgroup. Let us see that this is the case for all elements in C + (α).
Proposition 6.3. Let v ∈ C + (α) with supp(v) = X ( S. Then the label of
every arrow in GC + (α) starting at v either belongs to AX , or is a letter t ∈ S
that commutes with every letter in X, or is equal to rX,t for some letter t ∈ S
adjacent to X in ΓS .
Proof. Let x be the label of an arrow in GC + (α) starting at v. Let t ∈ S be such
that t 4 x. We distinguish three cases.
Case 1: Suppose that t ∈ X. As we know that ∆X conjugates v to a positive
element τX (v), the convexity property implies that ∆X ∧ x also conjugates v
to a positive element. As x is minimal with this property, ∆X ∧ x must be
either trivial or equal to x. However, ∆X ∧ x cannot be trivial as t 4 ∆X ∧ x.
Therefore, ∆X ∧ x = x, which is equivalent to x 4 ∆X . Hence x ∈ AX .
Case 2: Suppose that t ∈
/ X and t is not adjacent to X. Then t commutes
with all letters of X, which means that v t = v is a positive element. Hence, by
minimality of x, we have x = t.
Case 3: Suppose that t ∈
/ X and t is adjacent to X. We must show x = rX,t .
We will determine x algorithmically, starting with t 4 x, and iteratively adding
letters until we obtain the whole conjugating element x.
Write v = v1 · · · vr , where vi ∈ X for i = 1, . . . , r, and let c0 be such that
t 4 c0 4 x and c0 4 rX,t . Consider the following diagram, in which every two
paths with the same initial and final vertices represent the same element:
[diagram omitted: a row of r squares with top edges v1 , v2 , . . . , vr , vertical edges c0 , c1 , c2 , . . . , cr , and bottom edges u1 , u2 , . . . , ur ]
Starting with c0 and v1 , the elements c1 and u1 are defined by the condition
c0 ∨ v1 = c0 u1 = v1 c1 , corresponding to the first square. Then c2 and u2 are
determined by the condition c1 ∨ v2 = c1 u2 = v2 c2 corresponding to the second
square, and so on.
Consider the following claims that will be proven later:
Claim 1: c0 ∨ (v1 · · · vi ) = (v1 · · · vi )ci for every 1 ≤ i ≤ r.
Claim 2: t 4 c0 4 c1 4 c2 4 · · · 4 cr 4 x and cr 4 rX,t .
These claims give us a procedure to compute x: We start with c0 = t = x0 , and
compute c1 , c2 , . . . , cr =: x1 . If x1 is longer than x0 , we start the process again,
this time taking c0 = x1 , and compute c1 , . . . , cr =: x2 . We keep going while xi
is longer than xi−1 . As all obtained elements are prefixes of x, it follows that
the process will stop and we have xk−1 = xk for some k. On the other hand,
Claim 1 implies that xk 4 vxk , so v xk is a positive element. Now notice that
xk 4 x. Hence, by minimality of x, it follows that xk = x.
At this point of the iteration, something interesting will happen. We can substitute c0 = x in the above diagram and obtain:
[diagram omitted: the same row of squares, now with every vertical edge equal to x and bottom edges u1 , u2 , . . . , ur ]
This happens because each vertical arrow is a prefix of the following one, and the
first and last arrows coincide, hence all vertical arrows must be the same. Then
vi x = xui for every i = 1, . . . , r. As the relations defining AS are homogeneous,
it follows that each ui is a single letter, and thus x conjugates the whole set X to
a set Y ( S (since supp(v) = X). But then x−1 ∆X x is a positive element, that
is, x 4 ∆X x. Hence t 4 x 4 ∆X x. By Lemma 4.3, ∆X rX,t = ∆X ∨ t 4 ∆X x,
so we have that rX,t 4 x. By minimality of x, we thus have x = rX,t , as we
wanted to show.
The proofs of Claim 1 and Claim 2 remain to be done:
Proof of Claim 1: We will show by induction on i that c0 ∨ (v1 · · · vi ) =
(v1 · · · vi )ci . This is true for i = 1 by definition of c1 , so assume that i > 1
and that the claim is true for i − 1. Let then c0 ∨ (v1 · · · vi ) = v1 · · · vi d. We
have
(v1 · · · vi−1 )ci−1 = c0 ∨ (v1 · · · vi−1 ) 4 c0 ∨ (v1 · · · vi ) = (v1 · · · vi )d,
and thus
(v1 · · · vi−1 )vi ci = (v1 · · · vi−1 )vi ∨ (v1 · · · vi−1 )ci−1 4 (v1 · · · vi−1 )vi d.
This implies that ci 4 d. But (v1 · · · vi )ci is a common multiple of c0 and v1 · · · vi
by construction, so d = ci and the claim is shown.
Proof of Claim 2: By hypothesis, we have that c0 4 rX,t . We will first show by
induction on i that ci 4 rX,t for every i ≥ 0. Suppose that ci−1 4 rX,t for some i.
Since conjugation by rX,t maps X to some Y ( S by Lemma 4.2, it follows that the conjugate of vi by rX,t is positive,
that is, rX,t 4 vi rX,t . Hence ci−1 4 vi rX,t and finally vi ci = ci−1 ∨ vi 4 vi rX,t ,
so ci 4 rX,t .
Secondly, we show that ci−1 4 ci for i ≥ 1: Since vi is a single letter belonging to X, and t is the only possible initial letter of rX,t by Lemma 4.4, the
assumption t ∈
/ X implies that vi 64 ci−1 , hence ui 6= 1. Write the word ci−1 ui
as ci−1 u′i s, where s is a single letter. Since ci−1 ui = ci−1 ∨ vi , it follows that
vi 64 ci−1 u′i and vi 4 ci−1 u′i s. By Lemma 4.5 (notice that ci−1 is a simple
element with respect to the usual Garside structure, as it is a prefix of rX,t ,
hence ci−1 u′i is simple as it is a prefix of ci−1 ∨ vi , which is also simple), we
obtain that ci−1 u′i s = vi ci−1 u′i . That is, vi ci = ci−1 ui = ci−1 u′i s = vi ci−1 u′i ,
which implies that ci−1 4 ci .
Finally, we prove that cr 4 x: As x−1 vx is positive, we have that c0 4 x 4 vx,
and also vc0 4 vx. Hence, c0 ∨vc0 4 vx. But notice that by Claim 1, c0 ∨v = vcr ,
so vcr = c0 ∨ v 4 c0 ∨ vc0 4 vx. Hence cr 4 x.
Example 6.4. In Figure 1 we can see the graph GC + (α) for α = σ1 σ2 in the
braid group on 5 strands (the Artin–Tits group of type A4 ). We see the 6
vertices corresponding to the positive conjugates of σ1 σ2 , and the three kinds
of arrows explained in Proposition 6.3. For instance, the arrows starting from
σ1 σ2 are labeled σ1 (type 1), σ4 (type 2) and σ3 σ2 σ1 (type 3).
Figure 1: The graph GC + (α) for α = σ1 σ2 in the braid group on 5 strands.
Corollary 6.5. Let α ∈ AS be a non-trivial element that belongs to a proper
parabolic subgroup, and is conjugate to a positive element. Then all positive
conjugates of α belong to a proper standard parabolic subgroup. Moreover, if v
and w are positive conjugates of α with supp(v) = X and supp(w) = Y , then
for every x ∈ AS such that x−1 vx = w, one has x−1 zX x = zY (and hence
x−1 AX x = AY ).
Proof. With the given hypothesis, we know that we can conjugate α to a positive
element v ∈ AX , where X ( S: First compute a conjugate of α in a proper
standard parabolic subgroup, then apply iterated twisted cycling and decycling
until a super summit conjugate is obtained; the latter will be positive and, by
Lemma 6.1 and Lemma 6.2, contained in a proper standard parabolic subgroup.
By Proposition 6.3, there are three types of arrows in GC + (α) starting at v. In
each of these three cases, consider the target vertex v x of an arrow with label x
starting at v:
1. x ∈ AX . In this case v x ∈ AX .
2. x = t ∈
/ X where t is not adjacent to X. In this case v x = v ∈ AX .
3. x = rX,t , where t ∈
/ X is adjacent to X. Then x−1 Xx = Z for some
proper subset Z ( S. Hence v x ∈ AZ .
Therefore, in every case, v x belongs to a proper standard parabolic subgroup.
We can apply the same argument to every vertex of GC + (α) and, since GC + (α)
is connected, it follows that all vertices in GC + (α) belong to a proper standard
parabolic subgroup.
Now denote X = supp(v), let x be the label of an arrow in GC + (α) starting at v,
and let Y = supp(v x ).
If x ∈ AX , we have Y = supp(v x ) ⊆ X. If we had Y ( X, then v x would be
a positive element belonging to a proper standard parabolic subgroup of AX .
Hence, taking AX as the global Artin–Tits group, all positive conjugates of v x
in AX would belong to a proper standard parabolic subgroup of AX , which is
not the case, as v itself does not satisfy that property. Hence Y = X, so
supp(v x ) = X. Moreover, as x ∈ AX , we have x−1 zX x = zX .
If x = t ∈
/ X where t is not adjacent to X, then x commutes with all letters
of X, so we have v x = v, whence Y = X, and also x−1 zX x = zX .
Finally, if x = rX,t where t ∈
/ X is adjacent to X, then x−1 Xx = Z for some
Z ( S. As v contains all letters of X, it follows that v x contains all letters of Z,
that is, Z = Y . Therefore x−1 Xx = Y , which implies that x−1 AX x = AY and
hence x−1 zX x = zY .
Applying this argument to all arrows in GC + (α) , it follows that the label of any
arrow starting at a vertex u0 and ending at a vertex u1 conjugates zsupp(u0 )
to zsupp(u1 ) . This can be extended to paths in GC + (α) : If a path goes from u0
to uk , the element associated to the path conjugates zsupp(u0 ) to zsupp(uk ) .
Now suppose that v and w are two positive conjugates of α, where supp(v) = X
and supp(w) = Y , and suppose that x ∈ AS is such that x−1 vx = w. Then v
and w are vertices of GC + (α) . Up to multiplying x by a central power of ∆S ,
we can assume that x is positive. Decomposing x as a product of minimal
conjugators, it follows that x is the element associated to a path in GC + (α)
starting at v and finishing at w. Therefore, x−1 zX x = zY , as we wanted to
show.
Example 6.6. Consider again the situation from Example 6.4 and Figure 1.
We see that all positive conjugates of σ1 σ2 belong to a proper standard parabolic
subgroup, namely either to hσ1 , σ2 i, or to hσ2 , σ3 i, or to hσ3 , σ4 i. We can also
check that the labels of the arrows in the graph GC + (α) conjugate the central elements of the (minimal) standard parabolic subgroups containing the respective
conjugates as expected; cf. Figure 2.
Figure 2: The action of the conjugating elements from Figure 1 on the central elements z. of the (minimal) standard parabolic subgroups containing the
positive conjugates of σ1 σ2 .
The result from Corollary 6.5 allows us to introduce an important concept:
Definition 6.7. Let α ∈ AS be conjugate to a positive element α′ = β −1 αβ ∈ A+S .
Let X = supp(α′ ). We define the parabolic subgroup associated to α as the
subgroup Pα = βAX β −1 .
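For instance, for α = (σ2 )−1 σ1 σ2 in the braid group on three strands one can take β = (σ2 )−1 and α′ = σ1 , so X = {σ1 } and Pα = (σ2 )−1 A{σ1 } σ2 , the cyclic subgroup generated by α itself.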
Proposition 6.8. Under the above assumptions, the parabolic subgroup Pα is
well defined, and it is the smallest parabolic subgroup (by inclusion) containing α.
Proof. Suppose that α′′ = γ −1 αγ is another positive conjugate of α, and denote
Y = supp(α′′ ). We must show that βAX β −1 = γAY γ −1 . We have that β −1 γ
conjugates α′ to α′′ . If α belongs to a proper parabolic subgroup, we can apply Corollary 6.5,
so both AX and AY are proper standard parabolic subgroups, the conjugating
element β −1 γ maps zX to zY , and then it maps AX to AY by Lemma 2.1.
Therefore βAX β −1 = γAY γ −1 , that is, Pα is well defined. If α does not belong
to a proper parabolic subgroup, none of its conjugates does, hence AX = AY =
AS . We then have βAX β −1 = βAS β −1 = AS = γAS γ −1 = γAY γ −1 , so Pα is
also well defined (and equal to AS ) in this case.
Now let P be a parabolic subgroup containing α. Let x ∈ AS be such that
x−1 P x is standard, that is, x−1 P x = AZ for some Z ⊆ S. Then x−1 αx ∈ AZ
and we can obtain another conjugate α′ = y −1 x−1 αxy ∈ AZ ∩ SSS(α), where
y ∈ AZ , by iterated twisted cycling and iterated decycling. Since α is conjugate
to a positive element, all elements in SSS(α) are positive, so α′ is positive.
Hence, if we denote X = supp(α′ ), we have AX ⊆ AZ . Conjugating back,
we have Pα = xyAX y −1 x−1 ⊆ xyAZ y −1 x−1 = xAZ x−1 = P . Hence Pα is
contained in any parabolic subgroup containing α, as we wanted to show.
7
Parabolic subgroup associated to an element
In the previous section we defined a parabolic subgroup associated to a given
element α ∈ AS , provided that α is conjugate to a positive element. In this
section we will see how to extend this definition to every element α ∈ AS . That
is, we will see that for every element α ∈ AS there is a parabolic subgroup Pα
which is the smallest parabolic subgroup (by inclusion) containing α.
Instead of using the positive conjugates of α (which may not exist), we will define
Pα by using one of the summit sets we defined earlier: RSSS∞ (α). We will see
that the support of the elements in RSSS∞ (α) is also preserved by conjugation,
so it can be used to define the smallest parabolic subgroup containing α.
Definition 7.1. Let α ∈ AS . Let α′ ∈ RSSS∞ (α), where α′ = β −1 αβ. If
we denote Z = supp(α′ ), we define the parabolic subgroup associated to α as
Pα = βAZ β −1 .
The next result shows Theorem 1.1.
Proposition 7.2. Under the above assumptions, the parabolic subgroup Pα is
well defined, and it is the smallest parabolic subgroup (by inclusion) containing α.
Proof. Write α′ = x−1 y in np-normal form, and recall that Z = supp(α′ ) =
supp(x) ∪ supp(y). If α does not belong to a proper parabolic subgroup, none of
its conjugates does, hence supp(v) = S for every v ∈ RSSS∞ (α), so Pα = AS
is well defined, and it is indeed the smallest parabolic subgroup containing α.
Hence we can assume that α belongs to some proper parabolic subgroup.
If α is conjugate to a positive element, then all elements in SSS(α) will be
positive. As RSSS∞ (α) ⊆ RSSS(α) ⊆ SSS(α), it follows that α′ is positive
and Z = supp(α′ ). Therefore the above definition of Pα coincides with the
definition we gave in the previous section. Hence, by Proposition 6.8, Pα is well
defined and it is the smallest parabolic subgroup containing α.
Suppose that α−1 is conjugate to a positive element. As one has β ∈ SSS(α)
if and only if β −1 ∈ SSS(α−1 ), the inverse of every element of RSSS(α) ⊆
SSS(α) is positive. In this case Z = supp((α′ )−1 ), and Pα coincides with the
definition of Pα−1 in the previous section. Hence Pα is well defined, and it is the
smallest parabolic subgroup containing α−1 , thus the smallest one containing α.
We can then assume that α belongs to some proper parabolic subgroup, and
that the np-normal form of α′ has the form α′ = x−1 y = x−1s · · · x−11 y1 · · · yt ,
with s, t > 0 (that is, x, y 6= 1).
Let N = max(s, t), and let us use the Garside structure (AS , A+S , ∆NS ). With
respect to this structure, x and y are simple elements and, as α′ ∈ RSSS∞ (α),
both α′ and (α′ )−1 belong to their respective ultra summit sets.
In order to show that Pα is well defined, let α′′ = γ −1 αγ ∈ RSSS∞ (α). We
have to show that every element g conjugating α′ to α′′ , conjugates AZ to AU ,
where Z = supp(α′ ) and U = supp(α′′ ). We will show this by constructing
positive elements with supports Z and U , respectively, which are also conjugate
by g; then the claim follows by Corollary 6.5.
As we are using the Garside structure (AS , A+S , ∆NS ), we have that α′ = x−1 y
in np-normal form, where x and y are simple elements, and also α′′ = u−1 v
in np-normal form, where u and v are simple elements, since both α′ and α′′
belong to SSSN (α), so they have the same infimum and supremum.
Let then g be an element such that g −1 α′ g = α′′ , and recall that α′ , α′′ ∈
RSSS∞ (α) ⊆ U SS∞ (α) ⊆ U SSN (α). By Corollary 5.14, there is a positive
integer M such that g −1 C̃M (α′ )g = C̃M (α′′ ), where C̃M (α′ ) (resp. C̃M (α′′ )) is
the product of the conjugating elements for M consecutive twisted cyclings of
α′ (resp. α′′ ) with respect to the Garside structure (AS , A+S , ∆NS ).
Hence, by Lemma 5.1, C̃M (α′ ) is the inverse of a positive element, say w1 (α′ ) :=
C̃M (α′ )−1 . Recall that α′ ∈ AZ , hence the factors composing C̃M (α′ ) belong to
AZ , and then w1 (α′ ) ∈ A+Z . In the same way, C̃M (α′′ ) is the inverse of a positive
element, say w1 (α′′ ) := C̃M (α′′ )−1 ∈ A+U . We then have g −1 w1 (α′ )g = w1 (α′′ ),
where w1 (α′ ) ∈ A+Z and w1 (α′′ ) ∈ A+U .
Now, from g −1 α′ g = α′′ we obtain g −1 (α′ )−1 g = (α′′ )−1 . Since α′ , α′′ ∈
RSSS∞ (α), it follows that (α′ )−1 , (α′′ )−1 ∈ U SS∞ (α−1 ) ⊆ U SSN (α−1 ). Thus,
we can apply Corollary 5.14 and obtain that there is a positive integer T such
that g −1 C̃T ((α′ )−1 )g = C̃T ((α′′ )−1 ) in the same way as above. If we denote
w2 (α′ ) = C̃T ((α′ )−1 )−1 and w2 (α′′ ) = C̃T ((α′′ )−1 )−1 , we have g −1 w2 (α′ )g =
w2 (α′′ ), where w2 (α′ ) ∈ A+Z and w2 (α′′ ) ∈ A+U .
Let us denote w(α′ ) = w1 (α′ )w2 (α′ ) ∈ A+Z and w(α′′ ) = w1 (α′′ )w2 (α′′ ) ∈ A+U .
By construction, g −1 w(α′ )g = w(α′′ ). We will now show that supp(w(α′ )) = Z
and supp(w(α′′ )) = U .
Notice that the conjugating element for twisted cycling of α′ = x−1 y using the
Garside structure (AS , A+S , ∆NS ) is x−1 . By Lemma 5.1, x is a suffix of w1 (α′ ).
On the other hand, the conjugating element for twisted cycling of (α′ )−1 = y −1 x
is y −1 . Hence, y is a suffix of w2 (α′ ). This implies that Z = supp(α′ ) =
supp(x)∪supp(y) ⊆ supp(w1 (α′ ))∪supp(w2 (α′ )) = supp(w(α′ )) ⊆ Z. Therefore
supp(w(α′ )) = Z. In the same way it follows that supp(w(α′′ )) = U .
We can then apply Corollary 6.5, since g −1 w(α′ )g = w(α′′ ), to conclude that
g −1 AZ g = AU , as we wanted to show. This means that Pα is well defined, as
taking g = β −1 γ (a conjugating element from α′ to α′′ ) one has:
βAZ β −1 = βgAU g −1 β −1 = γAU γ −1 ,
hence one can equally use either α′ or α′′ to define Pα .
Now let us prove that Pα is the smallest parabolic subgroup (for inclusion)
containing α. Suppose that P is a parabolic subgroup containing α, and let a ∈
AS be an element such that a−1 P a = AX is standard. Then a−1 αa ∈ AX . We
can now apply iterated twisted cyclings and iterated decyclings to this element
(in all needed Garside structures), so that the resulting element, say α̂, belongs
to RSSS∞ (α). The product of all conjugating elements, call it b, will belong to
AX , so we will have α̂ = b−1 a−1 αab ∈ AX ∩ RSSS∞ (α).
Let Y = supp(α̂). By definition, we have Pα = abAY b−1 a−1 . But on the other
hand, as α̂ ∈ AX , all letters in the np-normal form of α̂ belong to AX . Hence
AY ⊆ AX , and we finally have:
Pα = abAY b−1 a−1 ⊆ abAX b−1 a−1 = aAX a−1 = P.
Therefore, Pα is contained in every parabolic subgroup containing α, as we
wanted to show.
8
Parabolic subgroups, powers and roots
In this section we will see that the parabolic subgroup Pα associated to an element α ∈ AS behaves as expected under conjugation, taking powers and taking
roots. The behavior under conjugation follows directly from the definition:
Lemma 8.1. For every α, x ∈ AS , one has Px−1 αx = x−1 Pα x.
Proof. Let α′ = β −1 αβ ∈ RSSS∞ (α). Then α′ = β −1 x(x−1 αx)x−1 β.
If X = supp(α′ ), by definition we have Pα = βAX β −1 , and also
Px−1 αx = x−1 βAX β −1 x = x−1 Pα x.
The behavior of Pα when taking powers or roots is not so easy, but it is also as
expected:
Theorem 8.2. Let AS be an Artin–Tits group of spherical type. If α ∈ AS and
m is a nonzero integer, then Pαm = Pα .
Proof. By Lemma 8.1, we can conjugate α to assume that α ∈ RSSS∞ (α). We
can further conjugate α by the conjugating elements for iterated twisted cycling
and iterated decycling of its m-th power (for all needed Garside structures), in
order to take αm to RSSS∞ (αm ). By Lemma 5.7, these conjugating elements
are the meet of two elements which conjugate α to other elements in RSSS∞ (α).
Hence, by the convexity of RSSS∞ (α), they maintain α in RSSS∞ (α). In
summary, up to conjugacy we can assume that α ∈ RSSS∞ (α) and αm ∈
RSSS∞ (αm ). We can also assume that m is positive, as Pα−m = Pαm by
definition.
Under these assumptions, the associated parabolic subgroups of α and αm will
be determined by their corresponding supports. Hence, we must show that
supp(α) = supp(αm ).
If either α or α−1 is positive, the result is clear. We can then assume that this
is not the case, that is, we have α = x−1
1 y1 in np-normal form with x1 , y1 6= 1.
Moreover, using a suitable Garside structure, we can assume that x1 and y1 are
simple elements.
We will now use the following result [1, Theorem 2.9]: If α ∈ U SS(α), inf(α) =
p, ℓ(α) > 1 and m ≥ 1, one has
αm ∆−mp ∧ ∆m = Cm (α).
(1)
In other words, if we consider the positive element αm ∆−mp , and compute the
first m factors in its left normal form (including any ∆ factors), the product
of these m factors equals the product of the first m conjugating elements for
iterated cyclings of α.
We point out that we are using a Garside structure (AS , A+S , ∆NS ) such that
α = x−11 y1 , where x1 and y1 are nontrivial simple elements. So the ∆ in (1)
means ∆NS in our case. Also, we remark that α satisfies the hypotheses for (1),
as α ∈ RSSS∞ (α) ⊆ U SS∞ (α) ⊆ U SSN (α), and the left normal form of α is
∆−1 x̃1 y1 , so inf(α) = −1 and ℓ(α) = 2.
We will restate the above result from [1] in terms of np-normal forms. It turns
out that the statement will become much nicer.
Write αm = x−11 y1 x−11 y1 · · · x−11 y1 (see Figure 3). Notice that applying a twisted
cycling to α means to conjugate it by x−11 , and one obtains α2 = y1 x−11 , whose
np-normal form will be of the form α2 = x−12 y2 . We then have:
αm = x−11 (x−12 y2 x−12 y2 · · · x−12 y2 )y1 .
Now we can apply twisted cycling to α2 (conjugating it by x−12 ), and we obtain
α3 = x−13 y3 . Then we see that:
αm = x−11 x−12 (x−13 y3 · · · x−13 y3 )y2 y1 .
Repeating the process m times, we finally obtain:
αm = x−11 x−12 · · · x−1m ym · · · y2 y1 .
Figure 3: How to transform α4 = (x−11 y1 )4 into x−11 x−12 x−13 x−14 y4 y3 y2 y1 . Around each square, the product x−1i yi is the np-normal form of yi−1 (xi−1 )−1 . [diagram omitted]
That is, we have written αm as a product of a negative times a positive element,
where the negative one is the product of the first m conjugating elements for
iterated twisted cycling of α. We will see that [1, Theorem 2.9] is equivalent to
the fact that no cancellation occurs between x−11 x−12 · · · x−1m and ym · · · y2 y1 .
Indeed, on the one hand, as p = −1, we have the element αm ∆−pm = αm ∆m ,
which is:
αm ∆m = x−11 · · · x−1m ym · · · y1 ∆m = (x−11 ∆)(∆−1 x−12 ∆2 ) · · · (∆−(m−1) x−1m ∆m )τ m (ym · · · y1 ).
On the other hand, for i = 1, . . . , m, we have αi = c̃ i−1 (α) = τ −i+1 ◦ ci−1 (α), so
ci−1 (α) = τ i−1 (αi ) = τ i−1 (xi )−1 τ i−1 (yi ). This means that the i-th conjugating
element for iterated cycling of α is τ i−1 (xi )−1 ∆ = ∆−(i−1) x−1i ∆i .
Hence, by [1, Theorem 2.9] the product of the first m factors of the left normal
form of αm ∆m is equal to (x−11 ∆)(∆−1 x−12 ∆2 ) · · · (∆−(m−1) x−1m ∆m ), that is, to
x−11 x−12 · · · x−1m ∆m . In other words:
αm ∆m ∧ ∆m = x−11 x−12 · · · x−1m ∆m .
As the greatest common prefix is preserved by multiplication on the right by
any power of ∆, we obtain:
αm ∧ 1 = x−11 x−12 · · · x−1m .
But the biggest common prefix of an element and 1 is precisely the negative
part of its np-normal form, so this shows that there is no cancellation between
x−11 x−12 · · · x−1m and ym · · · y2 y1 , as we claimed.
But then:
supp(αm ) = supp(xm · · · x1 ) ∪ supp(ym · · · y1 )
⊇ supp(x1 ) ∪ supp(y1 ) = supp(α).
Since it is clear that supp(αm ) ⊆ supp(α) (as no new letters can appear when
computing the np-normal form of αm starting with m copies of the np-normal
form of α), we finally have supp(αm ) = supp(α), as we wanted to show.
This behavior of Pα with respect to taking powers or roots allows us to show an
interesting consequence: All roots of an element in a parabolic subgroup belong
to the same parabolic subgroup.
Corollary 8.3. Let AS be an Artin–Tits group of spherical type. If α belongs
to a parabolic subgroup P , and β ∈ AS is such that β m = α for some nonzero
integer m, then β ∈ P .
Proof. Since α is a power of β, Theorem 8.2 tells us that Pα = Pβ . Now α ∈ P
implies Pα ⊆ P , as Pα is the minimal parabolic subgroup containing α. Hence
β ∈ Pβ = Pα ⊆ P .
9
Intersection of parabolic subgroups
In this section we will show one of the main results of this paper: The intersection of two parabolic subgroups in an Artin–Tits group of spherical type AS is
also a parabolic subgroup.
We will use the parabolic subgroup Pα associated to an element α ∈ AS , but
we will also need some technical results explaining how the left normal form of
a positive element, in which some factors equal ∆X for some X ⊆ S, behaves
when it is multiplied by another element:
Lemma 9.1. Let X ⊆ S be nonempty, and let α ∈ A+
S be a simple element such
that ∆X α is simple. Then there is Y ⊆ S and a decomposition α = ρβ in A+
S,
such that Xρ = ρY and the left normal form of ∆X (∆X α) is (ρ∆Y )(∆Y β).
Proof. We proceed by induction on the length |α| of α as a word over S. If
|α| = 0 then ∆X (∆X α) = ∆X ∆X , which is already in left normal form, and
the result holds taking Y = X and ρ = β = 1. Suppose then that |α| > 0 and
the result is true when α is shorter.
Let Z ⊆ S be the set of initial letters of ∆X α. That is, Z = {σi ∈ S : σi 4
∆X α}. It is clear that X ⊆ Z, as every letter of X is a prefix of ∆X .
Notice that the set of final letters of ∆X is precisely X. Hence, if Z = X
the decomposition ∆X (∆X α) is already in left normal form, so the result holds
taking X = Y , ρ = 1 and β = α.
Suppose on the contrary that there exists some t ∈ Z which is not in X. Then
t 4 ∆X α so t ∨ ∆X = ∆X rX,t 4 ∆X α. Cancelling ∆X from the left we have
rX,t 4 α, so α = rX,t α1 .
We know that XrX,t = rX,t T for some T ( S. Hence:
∆X (∆X α) = ∆X (∆X rX,t α1 ) = (rX,t ∆T )(∆T α1 ).
Now consider the element ∆T (∆T α1 ). As α1 is shorter than α, we can apply
the induction hypothesis to obtain that α1 = ρ1 β, where T ρ1 = ρ1 Y for some
Y ( S, and the left normal form of ∆T (∆T α1 ) is (ρ1 ∆Y )(∆Y β).
We claim that the left normal form of ∆X (∆X α) is (ρ∆Y )(∆Y β), where ρ =
rX,t ρ1 . First, we have:
∆X (∆X α) = ∆X (∆X rX,t ρ1 β) = (rX,t ∆T )(∆T ρ1 β)
= (rX,t ρ1 ∆Y )(∆Y β) = (ρ∆Y )(∆Y β).
Next, we see that ρ∆Y = ∆X ρ is simple, as it is a prefix of ∆X α which is
simple. Finally, the set of final letters of ρ∆Y contains the set of final letters of
its suffix ρ1 ∆Y , which in turn contains the set of initial letters of ∆Y β (as the
product (ρ1 ∆Y )(∆Y β) is in left normal form). So the claim holds.
Now it just remains to notice that Xρ = XrX,t ρ1 = rX,t T ρ1 = rX,t ρ1 Y = ρY ,
to finish the proof.
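To illustrate the lemma, take X = {σ1 } and α = σ2 σ1 in the braid group on three strands, so that ∆X α = ∆S is simple. Here t = σ2 , rX,t = σ2 σ1 4 α, and one obtains ρ = σ2 σ1 , β = 1 and Y = {σ2 }: indeed Xρ = ρY (this is the braid relation σ1 σ2 σ1 = σ2 σ1 σ2 ), and ∆X (∆X α) = σ1 ∆S = ∆S σ2 = (ρ∆Y )(∆Y β), which is a left normal form.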
Lemma 9.2. Let X ⊆ S be nonempty, and let m > r > 0. Let α ∈ A+
S be
such that sup(α) = r, and let x1 x2 · · · xm+r be the left normal form of (∆X )m α
(where some of the first factors can be equal to ∆S and some of the last
factors can be trivial). Then there is Y ⊆ S and a decomposition α = ρβ in
A+
S , such that Xρ = ρY and:
1. x1 · · · xr = (∆X )r ρ = ρ (∆Y )r .
2. xi = ∆Y for i = r + 1, . . . , m − 1.
3. xm · · · xm+r = ∆Y β.
Proof. Suppose first that α is a simple element, so r = 1. By the domino rule
[7, Definition III 1.57, Proposition V 1.52] (see also [8, Lemma 1.32]), the left
normal form of (∆X )m α is computed as follows:
[diagram omitted: a row of m squares with top edges ∆X , . . . , ∆X , α, bottom edges x1 , x2 , . . . , xm , xm+1 , and vertical edges given by the elements yi defined below]
where the xi and the yi are defined from right to left, in the following way: First,
the left normal form of ∆X α is ym xm+1 (here xm+1 could be trivial). Then the
left normal form of ∆X ym is ym−1 xm , and so on. Around each square, the
down-right path represents the left normal form of the right-down path.
By construction, ∆X 4 ym . Hence, by the previous lemma, the left normal
form of ∆X ym is (ρ∆Y )(∆Y β1 ), for some Y ⊆ S and some ρ which conjugates
X to Y . Hence ym−1 = ρ∆Y = ∆X ρ, and xm = ∆Y β1 .
Now, if some yk = ∆X ρ, as it is clear that the left normal form of ∆X (∆X ρ) =
(ρ∆Y )(∆Y ), it follows that yk−1 = ρ∆Y = ∆X ρ and xk = ∆Y . Therefore, the
above diagram is actually as follows:
[diagram omitted: the same row of squares, now with every vertical edge equal to ρ∆Y = ∆X ρ and bottom edges ∆Y , . . . , ∆Y , ∆Y β1 , xm+1 ]
By construction, α = ρβ1 xm+1 . This shows the result for r = 1.
The case r > 1 follows from the above one and the domino rule. If α = α1 · · · αr
in left normal form, the left normal form of (∆X )m α is computed by completing
the squares from the diagram in Figure 4 (row by row, from right to left), where
the down-right path is the left normal form of the right-down path, around each
square.
By construction, the subsets X = Y0 , Y1 , · · · , Yr = Y of S and the elements
ρ1 , . . . , ρr satisfy Yi−1 ρi = ρi Yi . This implies that the first r factors in the
normal form of (∆X )m α are:
(ρ1 ∆Y1 )(ρ2 ∆Y2 ) · · · (ρr ∆Yr ) = (∆X )r ρ = ρ(∆Y )r ,
where ρ = ρ1 · · · ρr . Moreover, xi = ∆Y for i = r + 1, . . . , m − 1, as we can see
in the diagram. And finally we see that xm = ∆Y βr and
(∆X )m α = ρ(∆Y )m−1 xm · · · xm+r = (∆X )m−1 ρxm · · · xm+r .
Therefore ρxm · · · xm+r = ∆X α = ∆X ρβ = ρ∆Y β. Hence xm · · · xm+r = ∆Y β,
as we wanted to show.
Therefore, if we multiply a big power (∆X )m by some element α which is a
product of r simple factors, the normal form of the result still has m − r − 1
factors of the form ∆Y for some Y .
In the forthcoming result, we will need a special procedure to compare elements
in AS . For that purpose, we introduce the following:
Definition 9.3. For every element γ ∈ AS we will define an integer ϕ(γ) as
follows: Conjugate γ to γ ′ ∈ RSSS∞ (γ). Let U = supp(γ ′ ). Then ϕ(γ) = |∆U |,
the length of the element ∆U as a word in the standard generators.
Figure 4: Computing the left normal form of (∆X )m α in the proof of Lemma 9.2. [diagram omitted]
Proposition 9.4. The integer ϕ(γ) is well defined. Moreover, if γ is conjugate
to a positive element, then ϕ(γ) = |∆X |, where X = supp(β) for any positive
element β conjugate to γ.
Proof. Suppose that γ ′ , γ ′′ ∈ RSSS∞ (γ), and let U = supp(γ ′ ) and V =
supp(γ ′′ ). Then Pγ ′ = AU and Pγ ′′ = AV , and every element x conjugating γ ′
to γ ′′ must also conjugate AU to AV . Hence x−1 zU x = zV , by Lemma 2.1.
This implies that |zU | = |zV |. Moreover, as AU is conjugate to AV , we have
zU = ∆eU and zV = ∆eV for the same e ∈ {1, 2}. Therefore |∆U | = |∆V |, which
proves that ϕ(γ) is well defined.
On the other hand, if γ is conjugate to a positive element β, and X = supp(β),
then Pβ = AX . We can then apply the same argument as above to γ ′ and β, to
obtain that ϕ(γ) = |∆X |.
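For instance, for α = σ1 σ2 in the braid group on five strands of Example 6.4 one has ϕ(α) = 3: the positive conjugates of σ1 σ2 have supports {σ1 , σ2 }, {σ2 , σ3 } and {σ3 , σ4 }, and in each case the corresponding ∆X has length 3.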
We can finally show one of the main results in this paper:
Theorem 9.5. Let P and Q be two parabolic subgroups of an Artin–Tits group
AS of spherical type. Then P ∩ Q is also a parabolic subgroup.
Proof. If either P or Q is equal to AS or to {1}, the result is trivially true.
Hence we can assume that both subgroups are proper parabolic subgroups.
If P ∩ Q = {1} the result holds. Hence we will assume that there exists some
nontrivial element α ∈ P ∩ Q. We take α such that ϕ(α) is maximal (notice
that ϕ(α) is bounded above by |∆S |).
Let Pα be the parabolic subgroup associated to α. By Theorem 1.1, we know
that Pα ⊆ P , and also Pα ⊆ Q, so Pα ⊆ P ∩ Q. Moreover, up to conjugating
Pα , P and Q by the same suitable element, we can assume that Pα is standard,
so Pα = AZ ⊆ P ∩ Q for some Z ( S. Notice that ∆Z ∈ Pα ⊆ P ∩ Q.
We will show that P ∩ Q = Pα , that is, P ∩ Q = AZ .
Take any element w ∈ P ∩ Q. In order to show that w ∈ AZ , we will consider
its associated parabolic subgroup Pw , which we will denote by T . By the above
arguments, we have T ⊆ P ∩ Q and, in particular, zT ∈ P ∩ Q. Notice that
T is conjugate to AX for some X ( S, hence zT is conjugate (by the same
conjugating element) to the positive element zX . Since the support of zX is X,
it follows that PzX = AX , and conjugating back we have PzT = T . Therefore,
if we show that zT ∈ AZ , this will imply that T ⊆ AZ and then w ∈ AZ , as
desired.
We then need to show that zT ∈ AZ . Let a−1 b be the np-normal form of zT .
We will now construct an infinite family of elements in P ∩ Q, using zT and
∆Z . For every m > 0, consider βm = zT (∆Z )m = a−1 b(∆Z )m . By construction
βm ∈ P ∩ Q for every m > 0.
Suppose that a = a1 · · · ar and b = b1 · · · bs are the left normal forms of a and
b, respectively. Then
βm = a−1r · · · a−11 b1 · · · bs (∆Z )m .
The np-normal form of βm is computed by making all possible cancellations
in the middle of the above expression. As all the involved factors are simple
elements, it follows that inf(βm ) ≥ −r and sup(βm ) ≤ s + m.
Recall that ϕ(α) = |∆Z | is maximal among the elements in P ∩Q. Let us denote
n = ϕ(α) = |∆Z |. Now, for every m > 0, choose some βem ∈ RSSS∞ (βm ).
Claim There is M > 0 such that βem is positive for all m > M .
Let Um = supp(βem ). We know by maximality of ϕ(α) that |∆Um | ≤ n. So the
length of each simple element in the normal form of βem must be at most n.
Let x−1m ym be the np-normal form of βem . As βem ∈ RSSS∞ (βm ) ⊆
SSS(βm ), it follows that xm is a positive element formed by at most r simple elements, and ym is a positive element formed by at most s + m simple
elements.
Given some m > 0, suppose that none of the factors in the left normal form of ym
is equal to ∆Um . This means that the length of each factor of ym is at most n−1,
so |ym | ≤ (n − 1)(s + m), that is, |ym | ≤ (n − 1)m + k where k is a constant
independent of m. Hence the exponent sum of βem as a product of standard
generators and their inverses is s(βem ) = |ym | − |xm | ≤ |ym | ≤ (n − 1)m + k.
But this exponent sum is invariant under conjugation, hence s(βem ) = s(βm ) =
|b| − |a| + nm. That is, s(βem ) = nm + K for some constant K independent of
m. We then have nm + K ≤ (n − 1)m + k, that is, m ≤ k − K.
If we denote M = k − K, it follows that for every m > M , the left normal form
of ym has some factor equal to ∆Um . Recall that Um = supp(βem ), so βem ∈ AUm .
This means that the left normal form of ym starts with ∆Um . But there cannot
be cancellations between x−1
m and ym , so this implies that xm = 1. Therefore,
βem is positive for all m > M . This proves the claim.
We then know that, if m > M , the element βm is conjugate to a positive element.
The good news is that one can conjugate βm to a positive element βbm , using
a conjugating element cm whose length is bounded by a constant independent
of m. Indeed, one just needs to apply iterated cycling to βm until its infimum
becomes non-negative. Since inf(βm ) ≥ −r, one just needs to increase the
infimum at most r times, and we know from [2] that this can be done with
at most r|∆S | − r cyclings. Hence, we can take sup(cm ) ≤ r|∆S | − r, say
sup(cm ) ≤ N , a number which is independent of m.
We then have βbm = c−1m βm cm = c−1m a−1 b(∆Z )m cm ∈ A+S . We will now try to
describe the support of the positive element βbm .
Consider the element (∆Z )m cm . By Lemma 9.2, if m is big enough we can
decompose cm = ρm dm so that ∆Z ρm = ρm ∆Ym (actually Zρm = ρm Ym ) for
some Ym ( S, and the left normal form of (∆Z )m cm finishes with m − N − 1
copies of ∆Ym followed by some factors whose product equals ∆Ym dm . Notice
that making m big enough, we can have as many copies of ∆Ym as desired.
The negative part of βbm as it is written above is cm−1 a−1 , which is the product of at most N + r inverses of simple factors. Since βbm is positive, this negative part must cancel completely with the positive part. Namely, it cancels with the first N + r simple factors in the left normal form of b(∆Z )m cm .
But the first N +r simple factors of b(∆Z )m cm are a prefix of b multiplied by the
first N + r factors of (∆Z )m cm . Recall that we can take m big enough so that
(∆Z )m cm has as many copies of ∆Ym as desired. Hence, for m big enough we
can decompose b(∆Z )m cm = A∆Ym dm , where A is a positive element containing enough simple factors to absorb cm−1 a−1 completely. That is, A = a cm B for some positive element B. It follows that βbm = B∆Ym dm .
Now recall that ρm−1 ∆Z ρm = ∆Ym , hence |∆Ym | = n. On the other hand, since βbm is positive, its support determines ϕ(βm ). Hence, if U = supp(βbm ), we have |∆U | ≤ n. As Ym ⊆ U , it follows that n = |∆Ym | ≤ |∆U | ≤ n, so |∆Ym | = |∆U | = n and then Ym = U = supp(βbm ). This implies in particular that dm ∈ AYm .
But now recall that βbm = cm−1 βm cm , where βbm is positive. Hence the minimal parabolic subgroup containing βm is cm AYm cm−1 = ρm dm AYm dm−1 ρm−1 = ρm AYm ρm−1 = AZ .
Therefore, βm ∈ AZ for some m big enough. Since βm = zT (∆Z )m , it follows
that zT ∈ AZ , as we wanted to show.
10 The lattice of parabolic subgroups
We finish this paper with an interesting simple consequence of the main result in
the previous section: The set of parabolic subgroups forms a lattice for inclusion.
Proposition 10.1. Let AS be an Artin–Tits group of spherical type and let P
be the set of parabolic subgroups of AS . If π is a predicate on P such that the
conjunction of π(P ) and π(Q) implies π(P ∩ Q) for any P, Q ∈ P, then the
set Pπ = {P ∈ P : π(P )} contains a unique minimal element with respect to
inclusion, namely
∩_{P ∈ Pπ} P.
Proof. First note that for P, Q ∈ P, we have P ∩ Q ∈ P by Theorem 9.5, so
π(P ∩ Q) is defined.
The set Pπ is partially ordered by inclusion. We will show that
R = ∩_{P ∈ Pπ} P
is the unique minimal element in Pπ . It is clear by definition that R is contained
in every element of Pπ , hence it just remains to show that R is an element of Pπ .
Notice that the set P of parabolic subgroups in AS is a countable set, as every
element P ∈ P can be determined by a subset X ⊆ S and an element α ∈ AS
such that α−1 P α = AX . As there are a finite number of subsets of S and a
countable number of elements in AS , it follows that P is countable.
Therefore, the set Pπ is also countable, and we can enumerate its elements:
Pπ = {Pi : i ∈ N}. Now let
Tn = ∩_{i=0}^{n} Pi .
By Theorem 9.5 and the assumption on π, the intersection of any finite number
of elements of Pπ is contained in Pπ , so we have Tn ∈ Pπ for all n ≥ 0.
We then have the following descending chain of elements of Pπ
T0 ⊇ T1 ⊇ T2 ⊇ · · ·
where the intersection of all the parabolic subgroups in this chain equals R.
We finish the proof by noticing that in AS there cannot be an infinite chain of
distinct nested parabolic subgroups, as if α−1 AX α ⊊ β −1 AY β then |X| < |Y |.
Hence, there can be at most |S| + 1 distinct nested parabolic subgroups in any
chain. Therefore, there exists N ≥ 0 such that TN = TN +k for every k > 0, and
then
R = ∩_{i=0}^{∞} Ti = TN = ∩_{i=0}^{N} Pi ,
so R is an element of Pπ .
Example 10.2. Let AS be an Artin–Tits group of spherical type and α ∈ AS .
Applying Proposition 10.1 with the predicate π(P ) = (α ∈ P ), we see that the
minimal parabolic subgroup Pα containing α that was defined in Proposition 7.2
is the intersection of all parabolic subgroups containing α.
Theorem 10.3. The set of parabolic subgroups of an Artin–Tits group of spherical type is a lattice with respect to the partial order determined by inclusion.
Proof. Let AS be an Artin–Tits group of spherical type, and let P be the set
of parabolic subgroups of AS . This set is partially ordered by inclusion. Now
assume that P, Q ∈ P are given.
By Theorem 9.5, P ∩ Q is the unique maximal parabolic subgroup among those
parabolic subgroups contained in both P and Q.
Applying Proposition 10.1 with the predicate π(T ) = (P ∪ Q ⊆ T ) shows that
there is a unique minimal parabolic subgroup among those parabolic subgroups
containing both P and Q.
11 Adjacency in the complex of irreducible parabolic subgroups
We postponed to this section the proof of the following result, which characterizes the pairs of adjacent subgroups in the complex of irreducible parabolic
subgroups.
Theorem 2.2. Let P and Q be two distinct irreducible parabolic subgroups of
an Artin–Tits group AS of spherical type. Then zP zQ = zQ zP holds if and only
if one of the following three conditions is satisfied:
1. P ⊊ Q.
2. Q ⊊ P .
3. P ∩ Q = {1} and xy = yx for every x ∈ P and y ∈ Q.
Proof. If P ( Q then zP ∈ Q and zQ is central in Q, so both elements commute.
Similarly, if Q ( P then zP and zQ commute. Also, if the third condition is
satisfied every element of P commutes with every element of Q, so zP and zQ
commute.
Conversely, assume that zP and zQ commute. We can assume {1} ≠ P ⊊ AS and {1} ≠ Q ⊊ AS , as otherwise either Condition 1 or Condition 2 holds. We are
going to prove a result which is slightly stronger than what is required: We shall
show that P and Q can be simultaneously conjugated to standard irreducible
parabolic subgroups AX and AY (for some subsets X, Y ⊆ S); moreover, one of
the following holds:
1. X ⊊ Y .
2. Y ⊊ X.
3. X ∩ Y = ∅, and all elements of X commute with all elements of Y .
Notice that the four properties listed in the statement of Theorem 2.2 are preserved by conjugation. Hence, up to conjugation, we can assume that P is
standard.
We decompose zQ in pn-normal form. Namely, zQ = ab−1 where a and b are
positive elements such that a ∧ b = 1 (where ∧ means the greatest common
suffix in AS ). The suffix order is preserved by right multiplication, so we can
right-multiply the above equation by b−1 , and we have (ab−1 ) ∧ 1 = b−1 , that
is, zQ ∧ 1 = b−1 .
Now zQ commutes with zP , and 1 also commutes with zP . Hence zQ zP zQ−1 and 1 zP (1)−1 are positive elements (we are assuming that P is standard, so zP is
positive). It follows by convexity (Lemma 5.8 applied to the suffix order) that
(zQ ∧ 1)zP (zQ ∧ 1)−1 is positive, that is, b−1 zP b is positive.
But we know from Corollary 6.5 (see also [5, Corollary 40]) that each positive
conjugate of zP is the generator of the center of a standard irreducible parabolic
subgroup. That is, b−1 zP b = zX for some X ⊊ S.
On the other hand, it is shown in [5, Theorem 4] that if zQ = ab−1 is in
pn-normal form, then b is the minimal standardizer of Q, that is, b is the smallest positive element which conjugates Q to a standard parabolic subgroup, so
b−1 zQ b = zY for some Y ⊊ S.
Therefore, when conjugating both zP and zQ by b, we obtain elements zX
and zY , generators of the centers of standard parabolic subgroups. We can
then assume, up to conjugacy, that P and Q are both standard irreducible
parabolic subgroups.
Now we will need the following:
Claim: Let AS be an arbitrary Artin–Tits group. Let s0 , . . . , sk ∈ S be standard generators such that si does not commute with si+1 , and si ≠ si+2 for every i. If an element α ∈ AS+ is represented by a positive word w which contains the subsequence s0 s1 · · · sk , then all positive words representing α contain the same subsequence.
Proof of the claim: It is known [18] that Artin monoids inject in their groups.
This implies that every positive word representing α is obtained from w after
a finite sequence of transformations, each one replacing a subword sts · · · (having m(s, t) letters) with tst · · · (also having m(s, t) letters). It suffices to show
that the word obtained from w after a single transformation contains the subsequence s0 · · · sk . If m(s, t) = 2, the transformation replaces st with ts. But
the subword st can intersect the subsequence s0 · · · sk in at most one letter
(as si and si+1 do not commute for every i), hence the subsequence survives
after the transformation. If m(s, t) ≥ 3 then the subword sts · · · intersects the
subsequence s0 · · · sk of w in at most two consecutive letters (as si ≠ si+2 for
every i). This intersection is either (si , si+1 ) = (s, t) or (si , si+1 ) = (t, s). In either case, the subsequence survives after the transformation, as tst · · · contains
both possible subsequences. This shows the claim.
Recall that we are assuming that P and Q are distinct nonempty proper standard parabolic subgroups of AS , so P = AX and Q = AY , where X, Y ⊊ S
and ΓX and ΓY are connected graphs. We are also assuming that zP and zQ
(that is, zX and zY ) commute. We will further assume that none of the three
conditions in the statement holds, and we will arrive at a contradiction.
If X ∩ Y = ∅, Condition 3 not being satisfied implies the existence of a ∈ X and
b ∈ Y that are adjacent in ΓS .
Otherwise, as Condition 1 is not satisfied and ΓX is connected, there exist
a ∈ X \ Y and s1 ∈ X ∩ Y that are adjacent in ΓS . Moreover, as Condition 2
is not satisfied and ΓY is connected, there are b ∈ Y \ X and a simple path
s1 , s2 , . . . , sk = b in ΓY .
In either case, we have a path a = s0 , s1 , . . . , sk = b satisfying the hypothesis of
the above claim, where a ∈ X \ Y , b ∈ Y \ X, and s1 , . . . , sk ∈ Y .
Now consider the element zP zQ , which is equal to zX zY . It is a positive element,
and any representative of zX involves the letter a. On the other hand, let us
denote Ai = {s1 , . . . , si } for i = 1, . . . , k. Then ∆Ak ≼ ∆Y ≼ zY , and we have
a decomposition
∆Ak = ∆A1 rA1 ,s2 rA2 ,s3 · · · rAk−1 ,sk
where the product of the i leftmost factors is precisely ∆Ai , for i = 1, . . . , k.
Now notice that s1 = ∆A1 and recall that si is the first letter of rAi−1 ,si (by
Lemma 4.4) for i = 2, . . . , k. Therefore, the sequence s0 , . . . , sk is a subsequence
of zX zY .
From the above claim, it follows that every positive word representing zX zY
must contain s0 , . . . , sk as a subsequence. Now choose a representative of zY zX
which is the concatenation of a word representing zY and a word representing zX . In such a representative, each instance of the letter b appears to the left
of each instance of the letter a. Therefore, this word does not contain s0 , . . . , sk
as a subsequence. Hence zX zY ≠ zY zX , that is, zP zQ ≠ zQ zP . The latter is a
contradiction which finishes the proof.
References
[1] Joan S. Birman, Volker Gebhardt, and Juan González-Meneses. Conjugacy
in Garside groups. I. Cyclings, powers and rigidity. Groups Geom. Dyn.,
1(3):221–279, 2007.
[2] Joan S. Birman, Ki Hyoung Ko, and Sang Jin Lee. The infimum, supremum, and geodesic length of a braid conjugacy class. Adv. Math., 164:41–
56, 2001.
[3] Egbert Brieskorn and Kyoji Saito. Artin-Gruppen und Coxeter-Gruppen.
Invent. Math., 17:245–271, 1972.
[4] Harold S. M. Coxeter. The complete enumeration of finite groups of the
form ri2 = (ri rj )kij = 1. J. Lond. Math. Soc., 1-10(1):21–25, 1935.
[5] Marı́a Cumplido. On the minimal positive standardizer of a parabolic
subgroup of an Artin-Tits group. arXiv:1708.09310.
[6] Patrick Dehornoy. Groupes de Garside. Ann. Sci. École Norm. Sup. (4),
35(2):267–306, 2002.
[7] Patrick Dehornoy. Foundations of Garside theory, volume 22 of EMS Tracts
in Mathematics. European Mathematical Society (EMS), Zürich, 2015.
With François Digne, Eddy Godelle, Daan Krammer and Jean Michel.
[8] Patrick Dehornoy and Volker Gebhardt. Algorithms for Garside calculus.
J. Symb. Comp., 63:68–116, 2014.
[9] Patrick Dehornoy and Luis Paris. Gaussian groups and Garside groups, two
generalisations of Artin groups. Proc. London Math. Soc. (3), 79(3):569–
604, 1999.
[10] Pierre Deligne. Les immeubles des groupes de tresses généralisés. Invent.
Math., 17:273–302, 1972.
[11] Elsayed A. El-Rifai and H. R. Morton. Algorithms for positive braids.
Quart. J. Math. Oxford Ser. (2), 45(180):479–497, 1994.
[12] David B. A. Epstein, James W. Cannon, Derek F. Holt, Silvio V. F. Levy,
Michael S. Paterson, and William P. Thurston. Word processing in groups.
Jones and Bartlett Publishers, Boston, MA, 1992.
[13] Benson Farb and Dan Margalit. A primer on mapping class groups. Princeton University Press, Princeton, NJ, 2012.
[14] Nuno Franco and Juan González-Meneses. Conjugacy problem for braid
groups and Garside groups. J. Algebra, 266(1):112–132, 2003.
[15] Volker Gebhardt. A new approach to the conjugacy problem in Garside
groups. J. Algebra, 292(1):282–302, 2005.
[16] Eddy Godelle. Normalisateur et groupe d’Artin de type sphérique. J.
Algebra, 269(1):263–274, 2003.
[17] Sang-Jin Lee. Algorithmic solutions to decision problems in the braid
groups. PhD thesis, Advanced Institute of Science and Technology, Korea, 2000.
[18] Luis Paris. Artin monoids inject in their groups. Comment. Math. Helv.,
77(3):609–637, 2002.
Marı́a Cumplido.
[email protected]
Univ Rennes, CNRS, IRMAR - UMR 6625, F-35000 Rennes (France).
& Depto. de Álgebra. Instituto de Matemáticas (IMUS).
Universidad de Sevilla. Av. Reina Mercedes s/n, 41012 Sevilla (Spain).
Volker Gebhardt.
[email protected]
Western Sydney University
Centre for Research in Mathematics
Locked Bag 1797, Penrith NSW 2751, Australia
Juan González-Meneses.
[email protected]
Depto. de Álgebra. Instituto de Matemáticas (IMUS).
Universidad de Sevilla. Av. Reina Mercedes s/n, 41012 Sevilla (Spain).
Bert Wiest.
[email protected]
Univ Rennes, CNRS, IRMAR - UMR 6625, F-35000 Rennes (France).
| 4 |
Sparse Coding by Spiking Neural Networks:
Convergence Theory and Computational Results
arXiv:1705.05475v1 [cs.LG] 15 May 2017
Ping Tak Peter Tang, Tsung-Han Lin, and Mike Davies
Intel Corporation
{peter.tang, tsung-han.lin, mike.davies}@intel.com
Abstract
In a spiking neural network (SNN), individual neurons operate autonomously and only communicate
with other neurons sparingly and asynchronously via spike signals. These characteristics render a massively
parallel hardware implementation of SNN a potentially powerful computer, albeit a non von Neumann one.
But can one guarantee that a SNN computer solves some important problems reliably? In this paper, we
formulate a mathematical model of one SNN that can be configured for a sparse coding problem for feature
extraction. With a moderate but well-defined assumption, we prove that the SNN indeed solves sparse
coding. To the best of our knowledge, this is the first rigorous result of this kind.
1 Introduction
A central question in computational neuroscience is to understand how complex computations emerge from
networks of neurons. For neuroscientists, a key pursuit is to formulate neural network models that resemble the
researchers’ understanding of physical neural activities and functionalities. Precise mathematical definitions
or analysis of such models is less important in comparison. For computer scientists, on the other hand, a key
pursuit is often to devise new solvers for specific computational problems. Understanding of neural activities
serves mainly as an inspiration for formulating neural network models; the actual model adopted need not
so much faithfully reflect actual neural activities as be mathematically well defined and possess
provable properties such as stability or convergence to the solution of the computational problem at hand.
This paper’s goal is that of a computer scientist. We formulate here two neural network models that can
provably solve a mixed ℓ2 –ℓ1 optimization problem (often called a LASSO problem). LASSO is a workhorse
for sparse coding, a method applicable across machine learning, signal processing, and statistics. In this work,
we provide a framework to rigorously establish the convergence of firing rates in a spiking neural network to
solutions corresponding to a LASSO problem. This network model, namely the Spiking LCA, was first proposed
in [16] to implement the LCA model [15] using analog integrate-and-fire neuron circuits. We will call the LCA
model in [15] the Analog LCA (A-LCA) for clarity. In the next section, we introduce the A-LCA model and
its configurations for LASSO and its constrained variant CLASSO. A-LCA is a form of Hopfield network,
but the specific (C)LASSO configurations render convergence difficult to establish. We will outline our recent
results that use a suitable generalization of the LaSalle principle to show that A-LCA converges to (C)LASSO
solutions.
In A-LCA, neurons communicate among themselves with real numbers (analog values) during certain time
intervals. In Spiking LCA (S-LCA), neurons communicate among themselves via “spike” (digital) signals
that can be encoded with a single bit. Moreover, communication occurs only at specific time instances.
Consequently, S-LCA is much more communication efficient. Section 3 formulates S-LCA and other auxiliary
variables such as average soma currents and instantaneous spike rates. The section subsequently provides a
proof that the instantaneous rates converge to CLASSO solutions. This proof is built upon the results we
obtained for A-LCA and an assumption that a neuron’s inter-spike duration cannot be arbitrarily long unless
it stops spiking altogether after a finite time.
Finally, we devise a numerical implementation of S-LCA and empirically demonstrate its convergence to
CLASSO solutions. Our implementation also showcases the potential power of problem solving with spiking
neurons in practice: when an approximate implementation of S-LCA is run on a conventional CPU, it is able
to converge to a solution with modest accuracy in a short amount of time. The convergence is even faster than
FISTA [4], one of the fastest LASSO solvers. This result suggests that a specialized spiking neuron hardware
is promising, as parallelism and sparse communications between neurons can be fully leveraged in such an
architecture.
2 Sparse Coding by Analog LCA Neural Network
We formulate the sparse coding problem as follows. Given N vectors in RM , Φ = [φ1 , φ2 , . . . , φN ], N > M , (Φ
is usually called a redundant—due to N > M —dictionary) and a vector s ∈ RM (consider s an input signal),
try to code (approximate well) s as Φa where a ∈ RN contains as many zero entries as possible. Solving a
sparse coding problem has attracted a tremendous amount of research effort [9]. One effective way is to arrive
at a through solving the LASSO problem [19] where one minimizes the ℓ2 distance between s and Φa with an ℓ1 regularization on the a parameters. For reasons to be clear later on, we will consider this problem with the additional requirement that a be non-negative: a ≥ 0. We call this the CLASSO (C for constrained) problem:
argmin_{a≥0} (1/2)‖s − Φa‖_2^2 + λ‖a‖_1   (1)
Rozell, et al., presented in [15] the first neural network model that aims at solving LASSO. N neurons are used
to represent each of the N dictionary atoms φi . Each neuron receives an input signal bi that serves to increase
a “potential” value ui (t) that a neuron keeps over time. When this potential is above a certain threshold,
neuron-i will send inhibitory signals that aim to reduce the potential values of the list of receiving neurons
with which neuron-i “competes.” The authors called this kind of algorithm, expressed in this neural network mechanism, Locally Competitive Algorithms (LCAs). In this paper, we call this the analog LCA (A-LCA).
Mathematically, an A-LCA can be described as a set of ordinary differential equations (a dynamical system) of the form
u̇i (t) = bi − ui (t) − Σ_{j≠i} wij T (uj (t)),   i = 1, 2, . . . , N.
The function T is a thresholding (also known as an activation) function that decides when and how an inhibition
signal is sent. The coefficients wij further weigh the severity of each inhibition signal. In this general form,
A-LCA is an instantiation of the Hopfield network proposed in [10, 11].
Given a LASSO or CLASSO problem, A-LCA is configured by bi = φiᵀ s, wij = φiᵀ φj . For LASSO, the thresholding function T is set to T = T±λ , and for CLASSO it is set to T = Tλ : Tλ (x) is defined as 0 when x ≤ λ and x − λ when x > λ; and T±λ (x) := Tλ (x) + Tλ (−x). Note that if all the φi 's are normalized to φiᵀ φi = 1, then the dynamical system in vector notation is
u̇ = b − u − (Φᵀ Φ − I)a,   a = T(u).   (2)
The vector function T : RN → RN simply applies the same scalar function T to each of the input vector’s
component. We say A-LCA solves (C)LASSO if a particular solution of the dynamical system converges to
a vector u∗ and that a∗ = T(u∗ ) is the optimal solution for (C)LASSO. This convergence phenomenon was
demonstrated in [15].
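To make the configuration above concrete, the following sketch (ours, not from the original work) integrates Equation 2 with a plain forward-Euler step; the function and parameter names are illustrative, and the standard soft-thresholding operator is used for T.

import numpy as np

def T_lasso(u, lam):
    # Standard soft threshold for LASSO; for CLASSO use np.maximum(u - lam, 0.0) instead.
    return np.sign(u) * np.maximum(np.abs(u) - lam, 0.0)

def alca(Phi, s, lam, dt=0.01, steps=20000):
    # Forward-Euler integration of u' = b - u - (Phi^T Phi - I) T(u)  (Equation 2).
    b = Phi.T @ s                              # b_i = phi_i^T s
    G = Phi.T @ Phi - np.eye(Phi.shape[1])     # lateral weights w_ij (zero diagonal)
    u = np.zeros(Phi.shape[1])
    for _ in range(steps):
        u += dt * (b - u - G @ T_lasso(u, lam))
    return T_lasso(u, lam)                     # a = T(u) approximates the (C)LASSO solution

The step size and iteration count here are arbitrary illustrative choices; any stable integrator for Equation 2 would serve the same purpose.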
LCA need not be realized on a traditional computer via some classical numerical differential equation
solver; one can realize it using, for example, an analog circuit which may in fact be able to solve (C)LASSO
faster or with less energy. From the point of view of establishing A-LCA as a robust way to solve (C)LASSO,
rigorous mathematical results on A-LCA's convergence are invaluable. Furthermore, any convergence theory
here is bound to have bearings on other neural network architectures, as we will see in Section 3. Had the
thresholding function T in A-LCA been strictly increasing and unbounded above and below, standard Lyapunov
theory could be applied to establish convergence of the dynamical system. This is already pointed out in
Hopfield’s early work for both graded neuron model [11] and spiking neuron model [12]. Nevertheless, such
an A-LCA does not correspond to (C)LASSO where the thresholding functions are not strictly increasing.
Furthermore, the CLASSO thresholding function is bounded below as well. While Rozell, et al., demonstrated
some convergence phenomenon [15], it is in two later works [1, 2] that Rozell and other colleagues attempted
to complement the original work with convergence analysis and proofs. Among other results, these works
stated that for any particular A-LCA solution u(t), T(u(t)) with T = T±λ converges to a LASSO optimal
solution. Unfortunately, as detailed in [18], there are major gaps in the related proofs and thus the convergence
claims are in doubt. Moreover, the case of T = Tλ for the CLASSO problem was not addressed. In [18], one
of our present authors established several convergence results which we now summarize so as to support the
development of Section 3. The interested reader can refer to [18] for complete details.
A-LCA is a dynamical system of the form u̇ = F(u), F : RN → RN . In this case, the function F is
defined as F(x) = b − x − (ΦT Φ − I)T(x). Given any “starting point” u(0) ∈ RN , standard theory of ordinary
differential equations shows that there is a unique solution u(t) such that u(0) = u(0) and u̇(t) = F(u(t)) for
all t ≥ 0. Solutions are also commonly called flows. The two key questions are (1) given some (or any) starting
point u(0) , whether and in what sense the flow u(t) converges, and (2) if so, what relationships exist between
the limiting process and the (C)LASSO solutions.
The LaSalle invariance principle [14] is a powerful tool to help answer the first question. The gist of the
principle is that if one can construct a function V : RN → R such that it is non-increasing along any flow, then
one can conclude that all flows must converge to a special set1 M which is the largest positive invariant set2
inside the set of points at which the Lie derivative of V is zero. The crucial technical requirements on V are that
V possesses continuous partial derivatives and be radially unbounded3 . Unfortunately, the natural choice of
V for A-LCA does not have continuous first partial derivatives everywhere, and is not radially unbounded in the
case of CLASSO. Both failures are due to the special form of T with V (u) = (1/2)‖s − ΦT(u)‖_2^2 + λ‖T(u)‖_1 .
Based on a generalized version of LaSalle’s principle proved in [18], we establish that any A-LCA flow u(t)
(LASSO or CLASSO) converges to M, the largest positive invariant set inside the “stationary” set S = { u |
(∂V /∂un )Fn (u) = 0 whenever |T (un )| > 0. }.
Having established u(t) → M, we further prove in [18] that M is in fact the inverse image under T of the
set C of optimal (C)LASSO solutions. The proof is based on the KKT [6] condition that characterizes C and
properties particular to A-LCA.
Theorem 1. (A-LCA convergence results from [18]) Given the A-LCA
u̇ = F(u),
F(u) = b − u − (ΦT Φ − I)T(u).
T is based on T±λ if one wants to solve LASSO and Tλ , CLASSO. Let u(0) be an arbitrary starting point and
u(t) be the corresponding flow. The following hold:
1. Let C be the set of (C)LASSO optimal solutions and F̂ = T−1 (C) be C’s inverse image under the
corresponding thresholding function T. Then any arbitrary flow u(t) always converges to the set F̂.
2. Moreover, lim_{t→∞} E(a(t)) = E ∗ where E ∗ is the optimal objective function value of (C)LASSO, E(a) = (1/2)‖s − Φa‖_2^2 + λ‖a‖_1 and a(t) = T(u(t)).
3. Finally, when the (C)LASSO optimal solution a∗ is unique, then there is a unique u∗ such that F(u∗ ) = 0.
Furthermore u(t) → u∗ and T(u(t)) → T(u∗ ) = a∗ as t → ∞.
3 Sparse Coding by Spiking LCA Neural Network
A-LCA is inherently communication efficient: Neuron-i needs to communicate to others only when its internal
state ui (t) exceeds a threshold, namely |T (ui (t))| > 0. In a sparse coding problem, it is expected that the
internal state will eventually stay perpetually below the threshold for many neurons. Nevertheless, for the
entire duration during which a neuron’s internal state is above threshold, constant communication is required.
Furthermore, the values to be sent to other neurons are real valued (analog) in nature. In this perspective, a
spiking neural network (SNN) model holds the promise of even greater communication efficiency. In a typical
SNN, various internal states of a neuron are also continually evolving. In contrast, however, communication in
the form of a spike—that is one bit—is sent to other neurons only when a certain internal state reaches a level
(a firing threshold). This internal state is reset right after the spiking event, thus cutting off communication
1 u(t) → M if dist(u(t), M) → 0, where dist(x, M) = inf_{y∈M} ‖x − y‖_2 .
2 A set is positive invariant if any flow originated from the set stays in that set forever.
3 The function V is radially unbounded if |V (u)| → ∞ whenever ‖u‖_2 → ∞.
immediately until the time when the internal state is “charged up” enough. Thus communication is necessary
only once in a certain time span and then a single bit of information carrier suffices.
While such a SNN admits mathematical descriptions [16, 3], there are hitherto no rigorous results on the
network’s convergence behavior. In particular, it is unclear how a SNN can be configured to solve specific
problems with some guarantees. We present now a mathematical formulation of a SNN and a natural definition
of instantaneous spiking rate. Our main result is that under a moderate assumption, the spiking rate converges
to the CLASSO solution when the SNN is suitably configured. To the best of our knowledge, this is the first
time a rigorous result of this kind is established.
In a SNN each of the N neurons maintains, over time t, an internal soma current µi (t) configured to receive a constant input bi and an internal potential vi (t). The potential is “charged” up according to vi (t) = ∫_0^t (µi (s) − λ) ds, where λ ≥ 0 is a configured bias current. When vi (t) reaches a firing threshold νf at a time ti,k , neuron-i resets its potential to νr but simultaneously fires an inhibitory signal to a preconfigured set of receptive neurons, neuron-js, whose soma current will be diminished according to a weighted exponential decay function: µj (t) ← µj (t) − wji α(t − ti,k ), where α(t) = e^{−t} for t ≥ 0 and zero otherwise. Let {ti,k } be the ordered time sequence of when neuron-i spikes and define σi (t) = Σ_k δ(t − ti,k ); then the soma current satisfies both the algebraic and differential equations below (the operator ∗ denotes convolution):
µi (t) = bi − Σ_{j≠i} wij (α ∗ σj )(t),   µ̇i (t) = bi − µi (t) − Σ_{j≠i} wij σj (t).   (3)
Equation 3 together with the definition of the spike trains σi (t) describes our spiking LCA (S-LCA).
An intuitive definition of spike rate of a neuron is clearly the number of spikes per unit time. Hence we
define the instantaneous spiking rate ai (t) and average soma current ui (t) for neuron-i as:
ai (t) := (1/(t − t0 )) ∫_{t0}^{t} σi (s) ds   and   ui (t) := (1/(t − t0 )) ∫_{t0}^{t} µi (s) ds,   where t0 ≥ 0 is a parameter.   (4)
Applying the operator (t − t0 )^{−1} ∫_{t0}^{t} · ds to the differential equation portion in (3), and using also the relationship u̇i (t) = (µi (t) − ui (t))/(t − t0 ), we obtain
u̇i (t) = bi − ui (t) − Σ_{j≠i} wij aj (t) − (ui (t) − ui (t0 ))/(t − t0 ).   (5)
Consider now a CLASSO problem where the dictionary atoms are non-negative and normalized to unit
Euclidean norm. Configure S-LCA with λ and wij = φTi φj from Equation 1, and set νf ← 1, νr ← 0. So
configured, it can be shown that the soma currents’ magnitudes (and thus that of the average currents as well)
are bounded: there is a B such that |µi (t)|, |ui (t)| ≤ B for all i = 1, 2, . . . , N and all t > t0 . Consequently,
lim_{t→∞} u̇i (t) = lim_{t→∞} (µi (t) − ui (t))/(t − t0 ) = 0,   for i = 1, 2, . . . , N.   (6)
The following relationship between ui (t) and ai (t) is crucial:
ui (t) − λ = (1/(t − t0 )) ∫_{t0}^{ti,k} (µi − λ) + (1/(t − t0 )) ∫_{ti,k}^{t} (µi − λ) = ai (t) + vi (t)/(t − t0 ).   (7)
From this equation and a moderate assumption that inter-spike duration ti,k+1 − ti,k cannot be arbitrarily
long unless neuron-i stops spiking altogether, one can prove that
Tλ (ui (t)) − ai (t) → 0   as t → ∞.   (8)
The complete proof for this result is left in the Appendix.
We can derive convergence of S-LCA as follows. Since the average soma currents are bounded, the Bolzano–Weierstrass theorem shows that u(t) := [u1 (t), u2 (t), . . . , uN (t)]ᵀ has at least one limit point, that is, there is a point u∗ ∈ R^N and a time sequence t1 < t2 < · · · , tk → ∞, such that u(tk ) → u∗ as k → ∞. By Equation 8, T(u(tk )) → T(u∗ ) := a∗ . By Equations 5 and 6, we must therefore have
0 = b − u∗ − (Φᵀ Φ − I)a∗ .   (9)
Since S-LCA is configured for a CLASSO problem, the limit u∗ is in fact a fixed point of A-LCA, which is
unique whenever CLASSO’s solution is. In this case, the limit point of the average currents {u(t) | t ≥ 0} is
unique and thus indeed we must have u(t) → u∗ and a(t) → a∗ , the CLASSO solution.
[Figure 1: four panels over SNN time, (a) membrane potentials, (b) soma currents, (c) spike raster, (d) cumulative spike counts; see caption below.]
Figure 1: Detailed dynamics of a simple 3-neuron spiking network: In the beginning, before any neuron fires,
the membrane potentials (see 1-a) of the neurons grow linearly with a rate determined by the initial soma
currents (see 1-b). This continues until Neuron 3 becomes the first to reach the firing threshold; an inhibitory
spike is sent to Neurons 1 and 2, causing immediate drops in their soma currents. Consequently, the growths
of Neurons 1 and 2’s membrane potentials slow down, and the neurons’ instantaneous spike rates decrease.
The pattern of membrane integration, spike, and mutual inhibition repeats; the network rapidly moves into a
steady state where stable firing rates can be observed. The convergent firing rates yield the CLASSO optimal
solution of [0.684, 0, 1.217] (this solution is also verified by running the LARS algorithm [8]). The four subfigures: (a) Evolution of membrane potential. (b) Evolution of soma current. (c) Spike raster plot. (d) Solid
lines are the cumulative spike count of each neuron, and the dashed line depicts the value of ∫_0^t Tλ (ui (s)) ds in the
corresponding A-LCA. The close approximation indicates a strong tie between the two formulations.
4 Numerical Simulations
To simulate the dynamics of S-LCA on a conventional CPU, one can precisely solve the continuous-time spiking
network formulation by tracking the order of firing neurons. In between consecutive spiking events, the internal
variables, vi (t) and µi (t), of each neuron follow simple differential equations and permit closed-form solutions.
This method, however, is likely to be slow, as it requires a global coordinator that looks ahead into the future
to determine the next firing neuron. For efficiency, we instead take an approximate approach that evolves the
network state in constant-sized discrete time steps. At every step, the internal variables of each neuron are
updated and a firing event is triggered if the potential exceeds the firing threshold. The simplicity of this
approach admits parallel implementations and is suitable for specialized hardware designs. Nevertheless, this
constant-sized-time-step approach introduces errors in spike timings: the time that a neuron sends out a spike
may be delayed by up to a time step. As we will see in this section, the timing error is the major factor that
limits the accuracy of the solutions from spiking networks. However, such an efficiency-accuracy trade-off may
in fact be desirable for certain applications such as those in machine learning.
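One possible constant-step realization of Equation 3 is sketched below. It is a simplified reconstruction for illustration only; the variable names, the unit decay constant and the way the step size is handled are our own choices, not the benchmarked implementation.

import numpy as np

def slca(Phi, s, lam, dt=0.01, steps=100000, v_fire=1.0, v_reset=0.0, tau=1.0):
    # Constant-step simulation of S-LCA; rates are read out as spike count / elapsed time.
    N = Phi.shape[1]
    b = Phi.T @ s
    W = Phi.T @ Phi - np.eye(N)      # inhibition weights w_ij (zero diagonal: no self-inhibition)
    mu_syn = np.zeros(N)             # inhibitory part of the soma current (alpha-kernel state)
    v = np.zeros(N)                  # membrane potentials
    spikes = np.zeros(N)
    for _ in range(steps):
        mu = b + mu_syn              # soma current mu_i(t)
        v += dt * (mu - lam)         # charge the potential against the bias current lambda
        fired = v >= v_fire
        if fired.any():
            v[fired] = v_reset
            spikes[fired] += 1
            mu_syn -= W[:, fired].sum(axis=1)   # a spike of neuron j subtracts w_ij from neuron i
        mu_syn *= np.exp(-dt / tau)  # exponential decay of past inhibition
    return spikes / (steps * dt)     # estimate of the instantaneous spike rate a_i

Because spikes are only checked once per step, a firing event can be delayed by up to dt, which is exactly the timing error discussed above.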
4.1 Illustration of SNN dynamics
We solve a simple CLASSO problem: min_a (1/2)‖s − Φa‖_2^2 + λ‖a‖_1 subject to a ≥ 0, where
s = [0.5, 1, 1.5]ᵀ ,   Φ = [φ1 φ2 φ3 ] with rows (0.3313, 0.8148, 0.4364), (0.8835, 0.3621, 0.2182), (0.3313, 0.4527, 0.8729),   λ = 0.1.
We use a 3-neuron network configured with bi = φTi s, wij = φTi φj , the bias current as λ = 0.1 and firing
threshold set to 1. Figure 1 details the dynamics of this simple 3-neuron spiking network. It can be seen from
this simple example that the network only needs very few spike exchanges for it to converge. In particular, a
weak neuron, such as Neuron 2, is quickly rendered inactive by inhibitory spike signals from competing neurons.
This raises an important question: how many spikes are in the network? We do not find this question easy to
answer theoretically. However, empirically we see the number of spikes in σi (t) in S-LCA can be approximated
[Figure 2: (a) relative error (E(t) − E ∗ )/E ∗ versus SNN time for step sizes 0.01, 0.001 and 0.0001; (b) the same error for different read-out formulations (average from t = 0, average from t = 0.5, exponential kernel, thresholded average current); see caption below.]
Figure 2: (a) The convergence of a 400-neuron spiking network to a CLASSO solution. (b) Comparing the
convergence of different formulations to read out solutions from a spiking neural network. Using a positive t0
for Equation 4 gives the fastest initial convergence, while using the thresholded average current reaches the
highest accuracy the quickest. Despite a lack of theoretical guarantee, the exponential kernel method yields an
acceptable, though less accurate, solution. This kernel is easy to implement in hardware and thus attractive
when a SNN “computer” is to be built.
from the state variable ui in A-LCA, that is the ui (s) in Equation 10 below are solutions to Equation 2:
∫_0^t σi (s) ds ≈ ∫_0^t Tλ (ui (s)) ds,   i = 1, 2, . . . , N.   (10)
Figure 1(d) shows the close approximation of spike counts using (10) in the example. We observe that such
approximation consistently holds in large-scale problems, suggesting a strong tie between S-LCA and A-LCA.
Since in an A-LCA configured for a sparse coding problem, we expect Tλ (ui (t)) for most i’s to converge to
zero, (10) suggests that the total spike count in S-LCA is small.
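The reported rates can also be checked directly against the CLASSO optimality conditions. The fragment below (ours, for illustration) does this for the 3-neuron example using the rounded values quoted above, so the check only holds approximately.

import numpy as np

Phi = np.array([[0.3313, 0.8148, 0.4364],
                [0.8835, 0.3621, 0.2182],
                [0.3313, 0.4527, 0.8729]])
s = np.array([0.5, 1.0, 1.5])
lam = 0.1
a = np.array([0.684, 0.0, 1.217])   # converged firing rates quoted in the text

# KKT conditions of the CLASSO: phi_i^T (s - Phi a) = lam where a_i > 0,
#                               phi_i^T (s - Phi a) <= lam where a_i = 0.
g = Phi.T @ (s - Phi @ a)
print(g)  # first and third entries should be close to lam; the second should stay below lam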
4.2 Convergence of spiking neural networks
We use a larger 400-neuron spiking network to empirically examine the convergence of spike rates to the CLASSO
solution. The neural network is configured to perform feature extraction from an 8×8 image patch, using a 400-atom dictionary learned from other image datasets.4 With the chosen λ, the optimal solution has 8 non-zero
entries. Figure 2(a) shows the convergence of the objective function value in the spiking network solution,
comparing to the true optimal objective value obtained from a conventional CLASSO solver. Indeed, with a
small step size, the spiking network converges to a solution very close to the true optimum.
The relationships among step size, solution accuracy and total computation cost are noteworthy. Figure 2(a)
shows that increasing the step size from 10−3 to 10−2 sacrifices two digits of accuracy in the computed E ∗ .
The total computation cost is reduced by a factor of 103 : It takes 102 times fewer time units to converge, and
each time unit requires 10 times fewer iterations. This multiplication effect on cost savings is highly desirable
in applications such as machine learning where accuracy is not paramount. We note that a large-step-size
configuration is also suitable for problems whose solutions are sparse: The total number of spikes is smaller
and thus total timing errors are correspondingly fewer.
There are several ways to “read out” a SNN solution. Most rigidly, we can adhere to ai (t) in Equation 4
with t0 = 0. In practice, picking some t0 > 0 is better when we expect a sparse solution: The resulting
ai (t) will be identically zero for those neurons that only spike before time t0 . Because Tλ (ui (t)) − ai (t) → 0
(Equation 8), another alternative is to use Tλ (ui (t)) as the solution, which is more likely to deliver a truly
sparse solution. Finally, one can change ai (t)'s definition to τ^{−1} ∫_0^t e^{−(t−s)/τ} σi (s) ds, so that the impact of the
spikes in the past decays quickly. Figure 2(b) illustrates these different “read out” methods and shows that
the exponential kernel is as effective empirically, although we must point out that the previous mathematical
convergence analysis is no longer applicable in this case.
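The three read-out rules are easy to state in code; the sketch below is only meant to make the definitions concrete (the spike times and the average current u_avg would come from a simulation, and the names are ours).

import numpy as np

def rate_window(spike_times, t, t0=0.0):
    # Average rate over (t0, t], as in Equation 4; spikes at or before t0 are ignored.
    st = np.asarray(spike_times)
    return np.count_nonzero((st > t0) & (st <= t)) / (t - t0)

def rate_exponential(spike_times, t, tau=1.0):
    # Exponentially weighted read-out: (1/tau) * sum_k exp(-(t - t_k)/tau) over past spikes.
    st = np.asarray(spike_times)
    st = st[st <= t]
    return np.exp(-(t - st) / tau).sum() / tau

def rate_thresholded_current(u_avg, lam):
    # Read-out T_lambda(u_i(t)) computed from the average soma current.
    return np.maximum(u_avg - lam, 0.0)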
6
1e-2
1e-4
0.4
4
2
1672 spikes
(avg 0.052 spike/neuron)
1e-3
0
0.05
0.1
0.15
0.2
0.25
Execution time (second)
(a) 52×52 image
0.3
0.1
0
0
0.05
0.1
0.15
0.2
1e1
0.25
0.3
S-LCA
FISTA
5th iteration
1e0
6
Sparsity (%)
1e-1
L2 error
(E(t)-E*) / E*
1e0
8
S-LCA (l2 error)
FISTA (l2 error)
S-LCA (sparsity)
FISTA (sparsity)
S-LCA
FISTA
5th iteration
(E(t)-E*) / E*
0.7
1e1
1e-1
1e-2
10922 spikes
(avg 0.019 spike/neuron)
1e-3
1e-4
0
0.5
(b) Breakdown of (a)
1
1.5
2
2.5
3
Execution time (second)
Execution time (second)
(c) 208×208 image
Figure 3: CPU execution time for spiking neural networks, with a step size of 0.01. There are 32,256 unknowns
in the 52×52 image case shown in (a), and 582,624 unknowns in the 208×208 image case shown in (c). (b)
shows the breakdown of the objective function in the 52×52 image experiment. The ℓ2 error is defined as ‖s − Φa‖_2 /‖s‖_2 , and the sparsity is the percentage of entries with values greater than 0.01. Note that the spiking
network finds the optimal solution by gradually increasing the sparsity, rather than decreasing as in FISTA.
This results in the sparse spiking activities of the neurons.
4.3 CPU benchmark of a spiking network implementation
Our earlier discussions suggest that the spiking network can solve CLASSO using very few spikes. This
property has important implications for a SNN's computational efficiency. The computation cost of an N -neuron spiking network has two components: neuron states update and spiking events update. Neuron states
update includes updating the internal potential and current values of every neuron, and thus incurs an O(N )
cost at every time step. The cost of spiking events update is proportional to N times the average number
of inter-neuron connections because a spiking neuron updates the soma currents of those neurons to which
it connects. Thus this cost can be as high as O(N 2 ) (for networks with all-to-all connectivity, such as in
the two previous examples) or as low as O(N ) (for networks with only local connectivity, such as in the
example below). Nevertheless, spiking-event cost is incurred only when there is a spike, which may happen
far fewer than once per time step. In practice we observe that computation time is usually dominated by
neuron-states update, corroborating the general belief that spiking events are relatively rare, making spiking
networks communication efficient.
We report the execution time of simulating the spiking neural network on a conventional CPU, and compare
the convergence time with FISTA [4], one of the fastest LASSO solvers. We solve a convolutional sparse coding
problem [20] on a 52x52 image and a 208x208 image.5 The experiments are run on a 2.3GHz Intel® Xeon® CPU
E5-2699 using a single core. SIMD is enabled to exploit the intrinsic parallelism of neural network and matrix
operations. As shown in Figure 3, the spiking network delivers much faster early convergence than FISTA,
despite its solution accuracy plateauing due to spike timing errors. The convergence trends in both figures
are similar, demonstrating that spiking networks can solve problems of various sizes. The fast convergence of
spiking networks can be attributed to their ability to fully exploit the sparsity in solutions to reduce the spike
counts. The fine-grain asynchronous communication can quickly suppress most neurons from firing. In FISTA
or in any other conventional solver, communication between variables is similarly needed, but is realized
through matrix-vector multiplications performed on an iteration-to-iteration basis. The only way to exploit
sparsity is to avoid computations involving variables that have gone to zero during one iteration. A comparison
of how the sparsity in solutions evolves in S-LCA and FISTA can be found in Figure 3(b).
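For completeness, a minimal FISTA baseline for the non-negative LASSO is sketched below. It follows the standard scheme of [4] with the non-negativity constraint folded into the proximal step; it is a reference sketch, not the implementation used in the benchmark.

import numpy as np

def fista_nonneg(Phi, s, lam, iters=500):
    # FISTA for min_{a >= 0} 0.5*||s - Phi a||_2^2 + lam*||a||_1.
    L = np.linalg.norm(Phi, 2) ** 2            # Lipschitz constant of the smooth part
    a = np.zeros(Phi.shape[1])
    y, t = a.copy(), 1.0
    for _ in range(iters):
        grad = Phi.T @ (Phi @ y - s)
        a_next = np.maximum(y - (grad + lam) / L, 0.0)   # prox of lam*||.||_1 plus a >= 0
        t_next = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
        y = a_next + ((t - 1.0) / t_next) * (a_next - a)
        a, t = a_next, t_next
    return a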
5 Discussion
Our work is closely related to the recent progress on optimality-driven balanced network [7, 3, 5]. The SNN
model in [3, 5] differs slightly from ours in that only one internal state is used in the former. Using our
4 The input has 128 dimensions by splitting the image into positive and negative channels.
5 We use 8×8 patches, a stride of 4, and a 128×224 dictionary.
language here, neuron-i’s spike is generated by µi (t) reaching a threshold and not by vi (t), whose role is
eliminated altogether. Despite the differences in the details of neuron models, spikes in both networks occur
from a competitive process between neurons, and serve to minimize a network-level energy function. This
work furthers the understanding of the convergence property in such spiking networks. Additionally, it is
argued that in a tightly balanced excitatory/inhibitory network, spike codes are highly efficient in that each
spike is precisely timed to keep the network in optimality. This work provides evidence of the high coding
efficiency even before the network settles into steady-state. By utilizing well-timed spikes, the neurons are able
to collectively solve optimization problems with minimum communications. We demonstrate that this insight
can be translated into practical value through an approximate implementation on conventional CPU.
We observe that mathematical rigor was not a focus in [3]: The statement that in a tightly balanced
network the potential converges to zero is problematic when taken literally as all spiking events will eventually
cease in that case. The stationary points of the loss function (Equation 6 in [3]) are no longer necessarily the
stationary points when the firing rates are constrained to be non-negative. The general KKT condition has
to be used in this situation. The condition E(no spike) > E(spike) does not affect the behavior of the loss
function in between spikes. In essence, there is no guarantee that the trajectory of the r(t) variable generated
by the SNN is descending the loss function, that is, (d/dt) E(r(t)) ≤ 0.
Our SNN formulation and the established convergence properties can be easily extended to incorporate an
additional ℓ2 -penalty term, the so-called elastic-net problem [21]
argmin_{a≥0} (1/2)‖s − Φa‖_2^2 + λ1 ‖a‖_1 + λ2 ‖a‖_2^2   (11)
The elastic-net formulation can be handled by modifying the slope of the activation function T in A-LCA as
follows
Tλ (x) := 0 if x ≤ λ1 , and Tλ (x) := (x − λ1 )/(2λ2 + 1) if x > λ1 ;   T±λ (x) := Tλ (x) + Tλ (−x).
In S-LCA, this corresponds to setting the bias current to λ1 and modifying the firing thresholds νf of the
neurons to 2λ2 + 1.
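In code, the elastic-net activation only changes the slope above the threshold. A small sketch of ours, for illustration:

import numpy as np

def T_elastic(x, lam1, lam2):
    # One-sided (CLASSO-style) elastic-net activation: zero up to lam1, slope 1/(2*lam2 + 1) beyond it.
    return np.maximum(x - lam1, 0.0) / (2.0 * lam2 + 1.0)

def T_elastic_twosided(x, lam1, lam2):
    # Odd extension for the unconstrained case (the usual soft threshold with a modified slope).
    return T_elastic(x, lam1, lam2) - T_elastic(-x, lam1, lam2)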
There are several other works studying the computation of sparse representations using spiking neurons.
Zylberberg et al. [22] show the emergence of sparse representations through local rules, but do not provide a
network-level energy function. Hu et al. [13] derive a spiking network formulation that minimizes a modified
time-varying LASSO objective. Shapero et al. [16, 17] are the first to propose the S-LCA formulation, but
yet to provide an in-depth analysis. We believe the S-LCA formulation can be a powerful primitive in future
spiking network research.
The computational power of spikes enables new opportunities in future computer architecture designs. The
spike-driven computational paradigm motivates an architecture composed of massively parallel computation
units. Unlike the von Neumann architecture, the infrequent but dispersed communication pattern between the
units suggests a decentralized design where memory should be placed close to the compute, and communication
can be realized through dedicated routing fabrics. Such designs have the potential to accelerate computations
without breaking the energy-density limit.
Appendices
A Governing Algebraic and Differential Equations
Consider a neural network consisting of N neurons. The only independent variables are the N soma
currents µi (t) for i = 1, 2, . . . , N . There are another N variables, the potentials vi (t), which are dependent on the
currents in a manner to be described momentarily. Consider the following configurations. Each neuron receives a positive
constant input current bi . A nonnegative current bias λ and a positive potential threshold ν are set a priori.
At any given time t0 such that vi (t0 ) < ν, the potential evolves according to
vi (t) = ∫_{t0}^{t} (µi (s) − λ) ds
until the time ti,k > t0 when vi (t) = ν. At this time, a spike signal is sent from neuron-i to all the neurons
that are connected to it, weighted by a set of pre-configured weights wj,i . The potential vi (t) is reset to zero
immediately afterwards. That is, for t > ti,k but before the next spike is generated,
Z t
vi (t) =
(µi (s) − λ) ds.
ti,k
Moreover, for any consecutive spike times ti,k and ti,k+1 ,
Z ti,k+1
(µi (s) − λ) ds = ν.
ti,k
Finally, when neuron-i receives a spike from neuron-j at time tj,k with a weight wi,j , the soma current µi (t)
is changed by the additive signal −wi,j α(t − tj,k ) where
α(t) = H(t)e−t/τ ,
H(t) being the Heaviside function that is 1 for t ≥ 0 and 0 otherwise. The sign convention used here means
that a positive wi,j means that a spike from neuron-j always tries to inhibit neuron-i.
Suppose the initial potentials vi (0) are all set to be below the spiking threshold ν; then the dynamics of
the system can be succinctly described by the set of algebraic equations
µi (t) = bi − Σ_{j≠i} wi,j (α ∗ σj )(t),   i = 1, 2, . . . , N   (AE)
where ∗ is the convolution operator and σj (t) is the sequence of spikes
σj (t) = Σ_k δ(t − tj,k ),
δ(t) being the Dirac delta function. The spike times are determined in turn by the evolution of the soma
currents that govern the evolutions of the potentials.
One can also express the algebraic equations AE as a set of differential equations. Note that the Heaviside function can be expressed as H(t) = ∫_{−∞}^{t} δ(s) ds. Hence
(d/dt) α(t) = (d/dt) [ e^{−t/τ} ∫_{−∞}^{t} δ(s) ds ] = −(1/τ) α(t) + δ(t).
Thus, differentiating Equation AE yields
µ̇i (t) = (1/τ)(bi − µi (t)) − Σ_{j≠i} wi,j σj (t).   (DE)
Note that Equations AE and DE are given in terms of the spike trains σj (t) that are governed in turn by
the soma currents themselves as well as the configurations of initial potentials, the spiking threshold ν and
bias current λ.
B Defining Spike Rates and Average Currents
Suppose the system of spiking neurons are initialized with sub-threshold potentials, that is, vi (0) < ν for
all i = 1, 2, . . . , N . Thus at least for finite time after 0, all soma currents remain constant at bi and that
no neurons will generate any spikes. Furthermore, consider for now that wi,j ≥ 0 for all i, j. That is, only
inhibitory signals are present. Let the spike times for each neuron i be 0 < ti,1 < ti,2 < · · · . This sequence
could be empty, finite, or infinite. It is empty if the potential vi (t) never reaches the threshold. It is finite if
the neuron stops spiking from a certain time onwards. We will define the spike rate, ai (t), and average current,
ui (t), for each neuron as follows.
ai (t) := (1/t) ∫_0^t σi (s) ds for t > 0, and ai (0) := 0;
and
ui (t) := (1/t) ∫_0^t µi (s) ds for t > 0, and ui (0) := bi .
With these definitions, the section presents the following results.
• The inhibition assumption leads to the fact that all the soma currents are bounded above. This in turn
shows that none of the neurons can spike arbitrarily rapidly.
• The fact that neurons cannot spike arbitrarily rapidly implies the soma currents are bounded from below
as well.
• The main assumption needed (that is, something that cannot be proved at this point) is that if a neuron spikes
infinitely often, then the duration between consecutive spikes cannot be arbitrarily long.
• Using this assumption and the previously established properties, one can prove an important relationship
between the spike rate and average current in terms of the familiar thresholding function T .
Proposition 1. There exist bounds B− and B+ such that µi (t) ∈ [B− , B+ ] for all i and t ≥ 0. With the convention that ti,0 := 0, there is a positive value R > 0 such that ti,k+1 − ti,k ≥ 1/R for all i = 1, 2, . . . , N and k ≥ 0, whenever these values exist.
Proof. Because all spike signals are inhibitory, clearly from Equation AE, we have µi (t) ≤ bi for all t ≥ 0.
Thus, defining B+ := maxi bi leads to µi (t) ≤ B+ for all i and t ≥ 0.
Given any two consecutive ti,k and ti,k+1 that exist,
ν = vi (t+i,k ) + ∫_{ti,k}^{ti,k+1} (µi (s) − λ) ds ≤ vi (t+i,k ) + (ti,k+1 − ti,k )(B+ − λ).
Note that vi (t+i,k ) = 0 if k ≥ 1. For the special case when k = 0, this value is vi (0) < ν. Hence
ti,k+1 − ti,k ≥ min{ min_i {ν − vi (0)}, ν } (B+ − λ)^{−1} .
Thus there is an R > 0 so that ti,k+1 − ti,k ≥ 1/R whenever these two spike times exist.
Finally, because the duration between spikes cannot be arbitrarily small, it is easy to see that
γ := Σ_{ℓ=0}^{∞} e^{−ℓ/(Rτ)} ≥ (α ∗ σ)(t).
Therefore,
B− := min_i { −γ Σ_{j≠i} wi,j } ≤ µi (t)
for all i = 1, 2, . . . , N and t ≥ 0. So indeed, there are B− and B+ such that µi (t) ∈ [B− , B+ ] for all i and
t ≥ 0.
Proposition 1 shows, among other things, that there is a lower bound on the duration between consecutive spikes.
The following is an assumption.
Assumption 1. Assume that there is a positive number r > 0 such that whenever the numbers ti,k and ti,k+1
exist, ti,k+1 − ti,k ≤ 1/r.
In simple words, this assumption says that unless a neuron stops spiking altogether after a certain time,
the duration between consecutive spikes cannot become arbitrarily long. With this assumption and the results
in Proposition 1, the following important relationship between u(t) and a(t) can be established.
Theorem 2. Let T (x) be the thresholding function where T (x) = 0 for x ≤ λ, and T (x) = x − λ for x > λ.
For each neuron i, there is a function ∆i (t) such that
T (ui (t)) = ai (t) ν + ∆i (t)
and that ∆i (t) → 0 as t → ∞.
Proof. Let
A = { i | neuron-i spikes infinitely often }
(A stands for “active”), and
I = { i | neuron-i stop spiking after a finite time }
(I stands for “inactive”). First consider i ∈ I. Let ti,k be the time of the final spike. For any t > ti,k ,
ui (t) − λ = (1/t) ∫_0^{ti,k} (µi (s) − λ) ds + (1/t) ∫_{ti,k}^{t} (µi (s) − λ) ds
          = (1/t) ∫_0^{ti,k} (µi (s) − λ) ds + (1/t) vi (t)
          = ai (t) ν + (1/t) vi (t),
ui (t) = ai (t) ν + λ + (1/t) vi (t).
Note that vi (t) ≤ ν always. If vi (t) ≥ 0, then
0 ≤ T (ui (t)) − ai (t) ν ≤ ν/t.
If vi (t) < 0,
−ai (t) ν ≤ T (ui (t)) − ai (t) ν ≤ 0.
Since i ∈ I, ai (t) → 0 obviously. Thus
T (ui (t)) − ai (t) ν → 0.
Consider the case of i ∈ A. For any t > 0, let ti,k be the largest spike time that is no bigger than t. Because
i ∈ A, ti,k → ∞ as t → ∞.
ui (t) − λ = (1/t) ∫_0^{ti,k} (µi (s) − λ) ds + (1/t) ∫_{ti,k}^{t} (µi (s) − λ) ds
          = ai (t) ν + (1/t) ∫_{ti,k}^{t} (µi (s) − λ) ds.
Furthermore, note that because of the assumption ti,k+1 − ti,k ≤ 1/r always, where r > 0, lim inf ai (t) ≥ r.
In other words, there is a time T large enough such that ai (t) ≥ r/2 for all i ∈ A and t ≥ T . Moreover,
0 ≤ t − ti,k ≤ ti,k+1 − ti,k ≤ 1/r and µi (t) − λ ∈ [B− − λ, B+ − λ]. Thus
(1/t) ∫_{ti,k}^{t} (µi (s) − λ) ds ∈ (1/t) [B− − λ, B+ − λ]/r → 0.
When this term is eventually smaller in magnitude than ai (t) ν,
T (ui (t)) = ai (t) ν + (1/t) ∫_{ti,k}^{t} (µi (s) − λ) ds
and we have
T (ui (t)) − ai (t) ν → 0.
C Spiking Neural Nets and LCA
This section shows that for a spiking neural net (SNN) that corresponds to a LCA, the limit points of the
SNN necessarily are the fixed points of the LCA. In particular, when the LCA corresponds to a constrained
LASSO, that is, a LASSO where the parameters are constrained to be nonnegative, whose solution is unique,
then the SNN necessarily converges to this solution. The proof of all this is surprisingly straightforward.
The following differential equation connecting u̇i (t) to ui (t) and all other spiking rates aj (t) is crucial.
u̇i (t) = (1/τ)(bi − ui (t)) − Σ_{j≠i} wi,j aj (t) − (1/t)(ui (t) − bi ).   (rates-DE)
Derivation of this relationship is straightforward. First, apply the operation (1/t) ∫_0^t · ds to Equation DE:
(1/t) ∫_0^t µ̇i (s) ds = (1/τ)(bi − ui (t)) − Σ_{j≠i} wi,j aj (t).
To find an expression for the left hand side above, note that
(d/dt) ui (t) = (d/dt) [ (1/t) ∫_0^t µi (s) ds ] = (1/t) µi (t) − (1/t^2) ∫_0^t µi (s) ds = (1/t) (µi (t) − ui (t)) .
Therefore
(1/t) ∫_0^t µ̇i (s) ds = (1/t)(µi (t) − bi ) = (1/t)(µi (t) − ui (t)) + (1/t)(ui (t) − bi ) = (d/dt) ui (t) + (1/t)(ui (t) − bi ) .
Consequently, Equation rates-DE is established.
Observe that because µi (t) is bounded (Proposition 1), so is the average current ui (t). This means that
u̇i (t) → 0 as t → ∞ because it was shown just previously that u̇i (t) = (µi (t) − ui (t))/t.
Since µi (t) and ai (t) are all bounded, the vectors u(t) must have a limit point (Bolzano-Weierstrass) u∗ .
By Theorem 2, there is a corresponding a∗ such that T(u∗ ) = a∗ ν. Moreover, we must have
0 = (1/τ)(b − u∗ ) − W a∗
where the matrix W has entries wi,j and wi,i = 0. Hence
0 = (1/τ)(b − u∗ ) − (1/ν) W T (u∗ ).
Indeed, u∗ and a∗ = T (u∗ ) correspond to a fixed point of LCA. In the case when this LCA corresponds to a LASSO with a unique solution, there is only one fixed point, which implies that there is also only one possible limit point of the SNN; that is, the SNN must converge, and to the LASSO solution.
References
[1] A. Balavoine, J. Romberg, and C. J. Rozell. Convergence and rate analysis of neural networks for sparse approximation. IEEE Trans. Neural Netw., 23(9):1377–1389, September 2012.
[2] A. Balavoine, C. J. Rozell, and J. Romberg. Convergence of a neural network for sparse approximation using
nonsmooth Lojasiewicz inequality. In Proceedings of the International Joint Conference on Neural Networks,
Dallas, TX, August 2013.
[3] D. G. T. Barrett, S. Denève, and C. K. Machens. Firing rate predictions in optimal balanced networks. In NIPS,
2013.
[4] A. Beck and M. Teboulle. A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM
Journal on Imaging Sciences, 2(1):183–202, 2009.
[5] M. Boerlin, C. Machens, and S. Deneve. Predictive coding of dynamical variables in balanced spiking networks.
PLoS Comput Biol, 9(11), 2013.
[6] S. Boyd and L. Vandenberghe. Convex optimization. Cambridge University Press, Cambridge, 2004.
[7] S. Denève and C. K. Machens. Efficient codes and balanced networks. Nature neuroscience, 19(3):375–382, 2016.
[8] B. Efron, T. Hastie, I. Johnstone, and R. Tibshirani. Least angle regression. The Annals of Statistics, 32(2):407–
499, 2004.
[9] M. Elad. Sparse and Redundant Representations: From Theory to Applications in Signal and Image Processing.
Springer, 2010.
[10] J. J. Hopfield. Neural networks and physical systems with emergent collective computational abilities. Proc. Natl.
Acad. Sci., 79(8):2554–2558, 1982.
[11] J. J. Hopfield. Neurons with graded response have collective computational properties like those of two-state
neurons. Proc. Natl. Acad. Sci., 1:3088–3092, 1984.
[12] J. J. Hopfield and A. V. Herz. Rapid local synchronization of action potentials: Toward computation with coupled
integrate-and-fire neurons. Proc. Natl. Acad. Sci., 92(15):6655–6662, 1995.
[13] T. Hu, A. Genkin, and D. B. Chklovskii. A network of spiking neurons for computing sparse representations in an
energy-efficient way. Neural Comput., 24(11):2852–2872, 2012.
[14] J. P. LaSalle. Some extensions of Liapunov’s second method. IRE Trans. Circuit Theory, 7(4):520–527, December
1960.
[15] C. J. Rozell, D. H. Johnson, R. G. Baraniuk, and B. A. Olshausen. Sparse coding via thresholding and local
competition in neural circuits. Neural Comput., 20(10):2526–2563, 2008.
[16] S. Shapero, C. Rozell, and P. Hasler. Configurable hardware integrate and fire neurons for sparse approximation.
Neural Netw., 45:134–143, 2013.
[17] S. Shapero, M. Zhu, J. Hasler, and C. Rozell. Optimal sparse approximation with integrate and fire neurons.
International journal of neural systems, 24(05):1440001, 2014.
[18] P. T. P. Tang. Convergence of LCA Flows to (C)LASSO Solutions. ArXiv e-prints, Mar. 2016, 1603.01644.
[19] R. Tibshirani. Regression shrinkage and selection via the Lasso. J. Royal Statist. Soc B., 58(1):267–288, 1996.
[20] M. D. Zeiler, D. Krishnan, G. W. Taylor, and R. Fergus. Deconvolutional networks. In IEEE CVPR, 2010.
[21] H. Zou and T. Hastie. Regularization and variable selection via the elastic net. J. Royal Statist. Soc B., 67:301–320,
2005.
[22] J. Zylberberg, J. T. Murphy, and M. R. DeWeese. A sparse coding model with synaptically local plasticity
and spiking neurons can account for the diverse shapes of v1 simple cell receptive fields. PLoS Comput Biol,
7(10):e1002250, 2011.
13
| 9 |
Fractional Order Load-Frequency Control of Interconnected
Power Systems Using Chaotic Multi-objective Optimization
Indranil Pana,b and Saptarshi Dasb,c,*
a) Centre for Energy Studies, Indian Institute of Technology Delhi, Hauz Khas, New Delhi 110016,
India.
b) Department of Power Engineering, Jadavpur University, Salt Lake Campus, LB-8, Sector 3,
Kolkata-700098, India.
c) School of Electronics and Computer Science, University of Southampton, Southampton SO17 1BJ,
United Kingdom.
Authors’ Emails:
[email protected], [email protected] (I. Pan)
[email protected], [email protected] (S. Das*)
Phone: +44-7448572598
Abstract:
Fractional order proportional-integral-derivative (FOPID) controllers are designed for load
frequency control (LFC) of two interconnected power systems. Conflicting time domain
design objectives are considered in a multi objective optimization (MOO) based design
framework to design the gains and the fractional differ-integral orders of the FOPID
controllers in the two areas. Here, we explore the effect of augmenting two different chaotic
maps along with the uniform random number generator (RNG) in the popular MOO
algorithm – the Non-dominated Sorting Genetic Algorithm-II (NSGA-II). Different measures
of quality for MOO e.g. hypervolume indicator, moment of inertia based diversity metric,
total Pareto spread, spacing metric are adopted to select the best set of controller parameters
from multiple runs of all the NSGA-II variants (i.e. nominal and chaotic versions). The
chaotic versions of the NSGA-II algorithm are compared with the standard NSGA-II in terms
of solution quality and computational time. In addition, the Pareto optimal fronts showing the
trade-off between the two conflicting time domain design objectives are compared to show
the advantage of using the FOPID controller over that with simple PID controller. The nature
of fast/slow and high/low noise amplification effects of the FOPID structure or the four
quadrant operation in the two inter-connected areas of the power system is also explored. A
fuzzy logic based method has been adopted next to select the best compromise solution from
the best Pareto fronts corresponding to each MOO comparison criteria. The time domain
system responses are shown for the fuzzy best compromise solutions under nominal operating conditions. A comparative analysis of the merits and de-merits of each controller structure is then reported. A robustness analysis is also done for the PID and the FOPID controllers.
Keywords: Two area Load frequency control (LFC); power system control; fractional order
PID controller; control trade-off design; chaotic NSGA-II
1. Introduction
Large scale power system networks comprise of several interconnected subsystems
representing particular geographical areas. Each of these subsystems has their own generation
capability and has variable load demand. These sub-systems are connected by tie lines which
control the flow of power between the different areas [1]. A sudden load demand in a certain
area results in a drop in system frequency which is detrimental for connected electrical loads.
To ensure proper power quality, the load frequency controllers in the interconnected power
system regulate the flow of power among the different areas through the tie lines and balance
the load and the drop in frequency. The LFCs balance the mismatch between the frequencies
of the interconnected areas and schedule the flow of power through the tie lines, helping the
interconnected power system to overcome the aberrations introduced due to varying load
demand, generation outage etc. Recently the LFCs are gaining more importance due to the
integration of renewables in the grid which have an inherent stochastic characteristic due to
the vagaries of nature unlike those of the base load thermal power plants [2], [3]. Thus proper
design and operation of the LFCs are very important for the stable and reliable operation of
large scale power systems. Control of interconnected power system considering various
aspects has been a topic of intense research in the recent past. Different types of generating units and their effects have been studied, e.g. thermal with reheat [4], generation rate constraint (GRC) [5], reheat and battery energy storage in both the areas [6], hydro turbine and
hydro-governor in both the areas [7], thermal with reheat turbine along with hydro and gas
turbine plants in both the areas [8], etc.
Traditionally a proportional-integral (PI) or a proportional-integral-derivative (PID)
controller is used for the LFCs [9] and a variety of different methods exist for proper tuning
of the controller parameters. Many robust control design techniques have been applied to the
LFC problem so that the designed controller is able to handle uncertainties of the system.
Some variants of robust designs include an adaptive output feedback based robust control
[10], adaptive robust control [11], decentralized robust control using iterative linear matrix
inequalities (LMIs) [12], decentralized control [13] etc. Optimal control designs using Linear
Quadratic Regulator (LQR) technique has been reported in [14]. Several other popular
control philosophies like the model predictive control (MPC) [15], sliding mode control [16],
and singular value decomposition (SVD) [17], etc. have also been applied in decentralized
LFC of multi-area power systems.
Many computational intelligence based techniques have been employed in the design
of LFCs as well. Global optimization techniques using evolutionary and swarm intelligence
has been used to tune the PID controller parameters in various literatures. Genetic algorithm
(GA) has been used in the design of LFCs in [7]. A variable structure controller has been
designed for LFCs using GA in [18]. Fuzzy logic based gain scheduling has been done in [19] to obtain an improved control strategy for LFC. The application of neural networks in the
problem of load frequency control has been investigated in [20], [21]. There are other
literatures which employ robust control techniques like H ∞ loop shaping [22], µ-synthesis
[23] and LMI approaches [12], [22] with intelligent genetic algorithms to obtain robust
controllers. A detailed review of the existing design methodologies in LFC has been
documented in [24]-[26]. Other intelligent algorithms like Bacterial foraging algorithm [27],
fuzzy logic [28], recurrent fuzzy neural network [29] have also been applied in LFC of multi-area inter-connected power systems.
Fractional order controllers have been gaining popularity in recent years due to added
capability to handle control design specifications [30], [31]. Fractional order PID controllers
have been applied in a wide variety of control systems and have generally proven to be better
than their integer order (IO) counterparts [32]. Recently fractional order controllers have been
applied to power systems and favourable results have been obtained. In [33] a fractional order
controller has been designed for an automatic voltage regulator (AVR) with particle swarm
optimization (PSO) algorithm to show that the FO controllers have more robustness to tackle
uncertainties than the conventional IO-PID controller. Alomoush [34] has applied fractional
order controllers for LFC of a two area interconnected power system and an automatic
generation control for an isolated single area power system. It has been shown in [34] that the
fractional order PID controller has more flexibility in design and can adjust the system
dynamics better than the IO-PID controller. FOPID controllers are also shown to be robust
and competitive to IO-PID controllers [35–38]. In these previous literatures, only a single
objective intelligent optimization has been employed to design the control system. However
it is well known that there exists multiple trade-offs among different design specifications in
control [39] and similar system design [40]. It is not possible to simultaneously minimize all
design objectives using a particular control structure and different controller structure may
yield different trade-offs depending on the choice of the conflicting control objectives [41],
[42]. Thus there is a requirement of multi-objective approach for addressing different
conflicting objectives in the control system design [42]. In [42–44], time and frequency
domain multi-objective formulation have been used to study the design trade-offs among
various conflicting FOPID design objectives in an automatic voltage regulator (AVR) system.
It has been shown in [42–44] that sometimes the FOPID and at other times the PID controller
performs better depending on the choice of the conflicting objective functions. This concept
of MOO for FO controllers has been extended in this paper for two area LFC problem. In this
paper, FOPID controllers in the two area LFC are designed using chaotic multi-objective
NSGA-II algorithm. Set point tracking and low control signal are chosen as the two
conflicting objectives for the MOO based tuning of FOPID/PID controllers and the
performance improvement due to the FOPID compared to the PID is illustrated by numerical
simulations.
The main highlights of the paper include:

• Augmenting the NSGA-II algorithm with different chaotic maps (like Logistic and Henon maps) to obtain better Pareto optimal solutions.
• Using Pareto metrics like hyper-volume indicator, spacing metric, Pareto spread and diversity metric [45]-[47], to assess the performance of the chaotic MOO algorithms.
• Use of the FOPID controller to obtain better system performance for the two area LFC.
• Demonstration of conflicting time domain trade-offs in the controller performances for the FOPID and the PID controller, and use of a fuzzy based mechanism for selecting the best compromise solution.
• Robustness study of the FOPID as LFC over that with the PID, under system parametric uncertainty and random change in load patterns.
The rest of the paper is organised as follows. Section 2 gives a brief description of the
two area LFC problem in the interconnected power system. Section 3 introduces the basics of
fractional calculus, FOPID controller and its flexibility over PID. Section 4 outlines the
conflicting time domain criteria based MOO for the design of the LFC system. Section 5
introduces the chaotic versions of the multi-objective NSGA-II algorithm. Different MOO
measure based selection of the best optimizer and the controller are enunciated in Section 6,
along with the respective time domain responses. In section 7, the effect of uncertainty in the
synchronizing coefficient and the effect of randomly changing load patterns in both areas are
explored. The paper ends with the conclusions in Section 8 followed by the references.
2. Load frequency control of interconnected two area power system
The main functionalities of the LFC are
a) To keep the operating power system frequency within specified tolerance limits to
ensure power quality
b) To ensure proper load sharing between the generators of the interconnected system
c) To honour the pre-specified load exchange constraints by controlling the power flow
between the interconnected areas.
Figure 1: Block diagram representation of a two area AGC system with secondary LFC loop
Table 1: Data for the two area system considered in the present simulation

Parameter        Area 1    Area 2
KPS (Hz/pu)      120       120
TPS (s)          20        20
R (Hz/pu MW)     2.4       2.4
B (pu MW/Hz)     0.425     0.425
TG (s)           0.1       0.1
TT (s)           0.3       0.3
TR (s)           10        10
K1               0.5       0.5
K2               0.5       0.5
T12              0.0707 (common to both areas)
A schematic diagram for the two area LFC is depicted in Figure 1. The parameters of
the various units of the system are shown in Table 1 [23]. The area control error (ACE) in
each area is a function of the frequency deviation ( ∆f ) and the inter area tie-line power flow
( ∆Ptie ). The ACE is fed as an input to the FOPID controller where it calculates the
appropriate control signal to be applied. This control signal is fed into the amplifier and then
the actuator to produce an appropriate change in the mechanical torque of the turbine (prime
mover). This produces a change in the active power output of the generator to compensate the
power flow in the system and thus ∆f and ∆Ptie are kept within desired limits. The tasks of
the PID/FOPID controller in each area are to ensure faster damping of individual ACEs and
also damping the inter-area power oscillation ∆Ptie .
Each area includes steam turbine, governor, reheater stages along with GRC
nonlinearity in the turbine and dead-band in the governor. The power systems (PS) are
represented by first order transfer functions. The tie-line power is also affected by the choice
of synchronizing coefficient (T12). There are different configurations like primary and
secondary LFC as described in [34]. In the primary LFC control, load change in one area is
not corrected by a controller in that area or in other areas. For the secondary LFC loop, the
tie-line is represented by the accelerating power coefficient ( Ps = 2π T12 ). Any change in the
demand-load ( ∆PL ) will result in deviation of frequency in both the areas and the tie-line
power flow. The ACE for the i th area can be expressed as (1).
$ACE_i = \sum_{j=1}^{M} \Delta P_{ij} + B_i \Delta f_i$    (1)
where ∆Pij is the deviation in tie-line power flow from its scheduled values between the
i th area and the j th area, ∆fi is the frequency aberration in the i th area and M is the number of
areas connected to area i. The frequency bias factor ( Bi ) can be expressed as a combination
of the speed regulation ( Ri ) and the damping coefficient ( Di ) and is given by (2).
$B_i = (1/R_i) + D_i$    (2)
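As a small illustration of how (1)-(2) combine, the sketch below computes an area control error in Python; the numerical values are purely hypothetical and are not taken from the simulation study.

# Illustrative evaluation of Eqs. (1)-(2); all numerical values are hypothetical.
def frequency_bias(R_i, D_i):
    # B_i = 1/R_i + D_i  (Eq. 2)
    return 1.0 / R_i + D_i

def area_control_error(dP_tie, B_i, df_i):
    # ACE_i = sum_j dP_ij + B_i * df_i  (Eq. 1)
    return sum(dP_tie) + B_i * df_i

B1 = frequency_bias(R_i=2.4, D_i=0.0083)                       # example numbers only
ace1 = area_control_error(dP_tie=[0.004], B_i=B1, df_i=-0.01)
print(B1, ace1)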
In the present model, the significant effect of nonlinearity is studied in the form of dead-zone in the governors and GRC in the turbines. The GRC keeps the rate of change of power within a specified limit of δ = ±0.005, which is implemented by replacing the linear model of the turbine ($\Delta P_R / \Delta X_G$) by the nonlinear one as shown in Figure 1. The governor dead-band affects the speed control under disturbances and has been chosen as 0.06% in each area. For the present simulation study, the two areas are subjected to step-load disturbances of $P_{L1} = 0.02$ pu and $P_{L2} = 0.008$ pu respectively. In the contemporary literature there have been several studies on the dynamics of nonlinearities like GRC [5], [29], [37], [36] and dead-zone [15] along with reheat turbine [48], but their MOO based FO control design has not been investigated yet, which is the main motivation of the present paper.
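The GRC and governor dead-band described above can be mimicked in a discrete-time simulation roughly as follows; this is only a schematic sketch in which the sample time, the interpretation of the ±0.005 limit as a per-second rate, and the dead-band implementation are our assumptions.

# Schematic discrete-time GRC (rate limiter) and governor dead-band; Ts is assumed.
Ts = 0.01           # s, assumed integration step
GRC = 0.005         # rate limit from the text (assumed to be per second)
DEADBAND = 0.0006   # 0.06% dead-band expressed as a fraction (assumption)

def rate_limit(x_new, x_prev, ts=Ts, limit=GRC):
    # Clamp the change between consecutive samples to +/- limit * ts
    max_step = limit * ts
    return x_prev + max(-max_step, min(max_step, x_new - x_prev))

def dead_band(x, width=DEADBAND):
    # Ignore governor inputs smaller than the band, shift larger ones
    if abs(x) <= width:
        return 0.0
    return x - width if x > 0 else x + width

u_prev = 0.0
u_now = rate_limit(dead_band(0.002), u_prev)   # one simulation step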
3. Fractional calculus and Fractional order PIλDµ (FOPID) controller
The generalized fractional differentiation and integration has mainly three definitions,
the Grunwald-Letnikov definition, Riemann-Liouville definition and Caputo definition. The
Grunwald-Letnikov formula is basically an extension of the backward finite difference
formula for successive differentiation. This formula is widely used for the numerical solution
of fractional differentiation or integration of a function. The Riemann-Liouville definition is
an extension of n-fold successive integration and is widely used for analytically finding
fractional differ-integrals. In the FO systems and control related literatures, mostly the
Caputo’s definition of fractional differ-integration is referred. This typical definition of
fractional derivative is generally used to derive fractional order transfer function models from
fractional order ordinary differential equations with zero initial conditions. According to
Caputo’s definition, the αth order derivative of a function f(t) with respect to time is given by (3), which is used in the present paper for realizing the fractional integro-differential operators of the FOPID controllers.

${}_0^C D_t^{\alpha} f(t) = \frac{1}{\Gamma(m-\alpha)} \int_0^t \frac{D^m f(\tau)}{(t-\tau)^{\alpha+1-m}}\, d\tau, \quad \alpha \in \mathbb{R}^+, \; m \in \mathbb{Z}^+, \; m-1 < \alpha < m.$    (3)
Figure 2: Four quadrant operation of FOPID with different choice of speed level and noise amplification, compared to
the conventional integer order PID controller
The FOPID controller is an extension of the IO-PID controller with non-integer choice of
integro-differential orders along with the conventional PID controller gains. The transfer
function representation of a FOPID controller is given in (4).
$C(s) = K_p + \frac{K_i}{s^{\lambda}} + K_d s^{\mu}$    (4)
This typical controller structure has five independent tuning knobs i.e. the three
controller gains { K p , K i , K d } and two fractional order integro-differential operators {λ , µ} .
For λ = 1 and µ = 1 , the controller structure (4) reduces to the classical PID controller in
parallel structure. Figure 2 shows the schematic representation of the FOPID controller in the
integro-differential (λ-µ) plane and its relation with the conventional integer order P, PI, PD
and PID controllers. Several other special cases of the FO controller structure can also be
defined in the λ-µ plane like PIλ, PDµ, PIλD, PIDµ etc. It has been shown in [31] that
increasing the integral order above λ > 1 amplifies the low frequency components thus
making the closed-loop system oscillatory. Similarly, an increase in the derivative
order µ > 1 increases the high frequency gain, thus amplifying the measurement noise and
high frequency random components. The extension of ‘points to plane’ has led to the concept
of FOPID or PIλDµ controllers as shown in Figure 2 which can be further refined to define
regions of low/high noise amplification and slow/fast time response depending on the range
of values for µ and λ, below/above one. Figure 2 shows how different trade-offs could be
achieved by selecting the range of operation in the four quadrants of the λ-µ plane (instead of
only one quadrant with {λ , µ} < 1 ), for different application of FOPID controller. For the
present power system control application, the controller selection problem is divided in two
parts – fast FOPID (λ>1 or 1st/2nd quadrant operation) and slow FOPID (λ<1 or 3rd/4th
quadrant operation). Additionally, for choice of speed (λ), four other combinations of the
controllers in the two areas are explored, depending on the derivative order µ>1 or µ<1.
Within the present MOO framework, considering similar or different controller structure in
both the areas, the best non-dominated solution generated by the four controller combinations
is selected. The choice of the controller within the MOO framework has been shown in (5).
Select Slow FOPID (λ1, λ2 < 1) in both the areas:
    choice 1: µ1 < 1 in area 1 and µ2 < 1 in area 2
    choice 2: µ1 < 1 in area 1 and µ2 > 1 in area 2        (selection by MOO)
    choice 3: µ1 > 1 in area 1 and µ2 < 1 in area 2
    choice 4: µ1 > 1 in area 1 and µ2 > 1 in area 2

Select Fast FOPID (λ1, λ2 > 1) in both the areas:
    choice 1: µ1 < 1 in area 1 and µ2 < 1 in area 2
    choice 2: µ1 < 1 in area 1 and µ2 > 1 in area 2        (selection by MOO)
    choice 3: µ1 > 1 in area 1 and µ2 < 1 in area 2
    choice 4: µ1 > 1 in area 1 and µ2 > 1 in area 2
                                                            (5)
Therefore, within the slow and fast family of FOPID controllers, different levels of
noise amplification in the respective areas are selected automatically by the MOO which
results in non-dominated Pareto front.
Few recent research results show that band-limited implementation of FOPID controllers using higher order rational transfer function approximation of the integro-differential operators gives satisfactory performance in industrial automation. Here, Oustaloup’s recursive approximation, given by (6), has been used to implement the integro-differential operators in the frequency domain; it represents a higher order analog filter.
$s^{\alpha} \simeq K \prod_{k=-N}^{N} \frac{s + \omega_k'}{s + \omega_k}$    (6)
where, the poles, zeros, and gain of the filter can be recursively evaluated as (7).
$\omega_k = \omega_b \left(\omega_h / \omega_b\right)^{\frac{k+N+(1+\alpha)/2}{2N+1}}, \quad \omega_k' = \omega_b \left(\omega_h / \omega_b\right)^{\frac{k+N+(1-\alpha)/2}{2N+1}}, \quad K = \omega_h^{\alpha}$    (7)
Thus, the area control errors (ACEs) can be passed through the filter (6) and the output of the
filter can be regarded as an approximation to the fractionally differentiated or integrated
signal Dα f ( t ) . These FO differentiated or integrated signals are weighted by the respective
gains to form the final control signal which goes to the governor in Figure 1. In (6)-(7), α is
the order of the differ-integration, ( 2 N + 1) is the order of the filter and (ωb , ωh ) is the
expected frequency fitting range. In the present study, 5th order Oustaloup’s recursive
approximation is implemented to approximate the integro-differential operators within the
frequency band of $\omega \in [10^{-2}, 10^{2}]$ rad/sec for the constant phase elements (CPEs) of the
FOPID controller.
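A minimal Python sketch of the Oustaloup construction in (6)-(7) is given below, assuming SciPy is available; the 5th order filter (N = 2) and the band [10^-2, 10^2] rad/s follow the text, while the helper name and interface are our own.

import numpy as np
from scipy import signal

def oustaloup(alpha, wb=1e-2, wh=1e2, N=2):
    # Band-limited rational approximation of s**alpha, Eqs. (6)-(7); 2N+1 pole/zero pairs.
    k = np.arange(-N, N + 1)
    wk = wb * (wh / wb) ** ((k + N + 0.5 * (1 + alpha)) / (2 * N + 1))    # poles
    wkp = wb * (wh / wb) ** ((k + N + 0.5 * (1 - alpha)) / (2 * N + 1))   # zeros
    K = wh ** alpha                                                       # gain
    return signal.TransferFunction(K * np.poly(-wkp), np.poly(-wk))

# Example: approximate half-order integrator s**(-0.5); in-band phase should sit near -45 deg.
sys_half_int = oustaloup(-0.5)
w, mag, phase = signal.bode(sys_half_int, w=np.logspace(-3, 3, 200))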
4. Need of multi-objective optimization and conflicting time domain control
objectives
It is well known that a single controller structure cannot give good results for all
design specifications. For example, a fuzzy logic controller is good at coping with uncertainty
in the loop, whereas a model predictive controller is good for tackling large time delays in
process control. For specific applications, different controller structures would give a trade-off solution among conflicting design specifications. Hence for effective comparison of
different controller structures it is essential to know the limits of performance of each of the
individual controllers for conflicting design specifications. In [42]-[44], a similar approach
has been taken to compare the efficacy of the FOPID controller vis-à-vis the IOPID one for
several conflicting time and frequency domain objectives respectively. In the present case,
the disturbance rejection and controller effort are considered as the conflicting objectives and
expressed in (8)-(9).
$J_1 = \sum_{i=1}^{M} ITSE_i = \sum_{i=1}^{M} \int_0^{\infty} t \cdot e_i^2(t)\, dt$    (8)

$J_2 = \sum_{i=1}^{M} ISDCO_i = \sum_{i=1}^{M} \int_0^{\infty} \left(\Delta u_i(t)\right)^2 dt$    (9)
where, ITSE represents the Integral of the Time multiplied Squared Error, ISDCO represents
Integral of the Squared Deviation in Controller Output, ei ( t ) represents the error signal
(ACE) in area i , ui ( t ) represents the control signal and M represents the total number of
areas.
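Evaluating the two objectives (8)-(9) from sampled simulation traces can be done along the following lines (a sketch; the trapezoidal quadrature and the argument names are our assumptions).

import numpy as np

def _trapz(y, t):
    # Simple trapezoidal quadrature over the simulation horizon
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(t)))

def itse(t, e):
    # Integral of Time multiplied Squared Error for one area, Eq. (8)
    return _trapz(t * np.asarray(e) ** 2, t)

def isdco(t, du):
    # Integral of Squared Deviation in Controller Output for one area, Eq. (9)
    return _trapz(np.asarray(du) ** 2, t)

def objectives(t, ace_per_area, du_per_area):
    # J1 and J2 summed over the M areas, Eqs. (8)-(9)
    return (sum(itse(t, e) for e in ace_per_area),
            sum(isdco(t, du) for du in du_per_area))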
The reason for considering these two as conflicting objective functions can be briefly
explained as follows. To achieve a faster damping of ACEs and the grid frequency
oscillation, it is essential that the controller gains should be higher. In other words, the
controller must be able to exert much more control action on the power generating system so
that the frequency oscillation settles within a short amount of time. However, the control
signal should ideally be smaller to prevent actuator saturation and minimize the cost
associated with sizing of a larger actuator. It can be inferred that both these objectives of
small control signal, as well as faster damping of load-disturbances cannot be ideally
obtained by a fixed set of parameters of the PID/FOPID controller. Thus there would exist a
range of values for the tuning parameters of the controller { K p , K i , K d , λ , µ} , where the
controller would show good load disturbance rejection at the cost of higher control signal and
vice-versa. After a large number of trade-off solutions between the two chosen objectives are
obtained, a compromise solution could be selected next for deciding the most optimal
controller setting [42], [49].
5. Multi-objective controller design using chaotic maps
5.1. Chaotic multi-objective optimization
A generalized multi-objective optimization framework can be defined as follows:

Minimize $F(x) = \left(f_1(x), f_2(x), \ldots, f_m(x)\right)$    (10)

such that $x \in \Omega$; where $\Omega$ is the decision space, $\mathbb{R}^m$ is the objective space, and $F : \Omega \to \mathbb{R}^m$ consists of m real valued objective functions.
Let $u = \{u_1, \ldots, u_m\}$, $v = \{v_1, \ldots, v_m\} \in \mathbb{R}^m$ be two vectors. $u$ is said to dominate $v$ if $u_i \leq v_i$ for all $i \in \{1, 2, \ldots, m\}$ and $u \neq v$. A point $x^* \in \Omega$ is called Pareto optimal if there is no $x \in \Omega$ such that $F(x)$ dominates $F(x^*)$. The set of all Pareto optimal points, denoted by $PS_{MOO}$, is called the Pareto set. The set of all Pareto objective vectors, $PF = \{F(x) \in \mathbb{R}^m \mid x \in PS_{MOO}\}$, is called the Pareto Front or the set of non-dominated solutions. This implies that no other
feasible objective vector exists which can improve one objective function without
simultaneous worsening of some other objective function.
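In code, the dominance relation underlying (10) reduces to element-wise comparisons; a minimal sketch (minimization assumed, as in the text):

import numpy as np

def dominates(u, v):
    # u dominates v if u is no worse in every objective and strictly better in at least one
    u, v = np.asarray(u), np.asarray(v)
    return bool(np.all(u <= v) and np.any(u < v))

def non_dominated(points):
    # Indices of the Pareto-optimal (non-dominated) objective vectors in a set
    pts = np.asarray(points)
    return [i for i, p in enumerate(pts)
            if not any(dominates(q, p) for j, q in enumerate(pts) if j != i)]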
Earlier multi-objective evolutionary algorithms (MOEAs) which use non-dominated sorting and fitness sharing have a higher computational complexity, adopt a non-elitist approach and require the specification of a sharing parameter. The non-dominated sorting genetic algorithm (NSGA-II) removes these problems and is able to find a better spread of solutions and better convergence near the actual Pareto optimal front [50], [51].
The NSGA-II algorithm converts different objectives into one fitness measure by
composing distinct fronts which are sorted based on the principle of non-domination. In the
process of fitness assignment, the solution set not dominated by any other solutions in the
population is designated as the first front F1 and the solutions are given the highest fitness
value. These solutions are then excluded and the second non-dominated front from the
remaining population F2 is created and ascribed the second highest fitness. This method is
iterated until all the solutions are assigned a fitness value. Crowding distances are the
normalized distances between a solution vector and its closest neighbouring solution vectors
in each of the fronts. All the constituent elements of the front are assigned crowding distances
to be later used for niching. The selection is achieved in tournaments of size 2 according to
the following logic.
a) If the solution vector lies on a lower front than its opponent, then it is selected.
b) If both the solution vectors are on the same front, then the solution with the highest
crowding distance wins. This is done to retain the solution vectors in those regions of
the front which are scarcely populated.
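The binary tournament described in (a)-(b) can be sketched as follows; the front ranks and crowding distances are assumed to have been computed beforehand.

import random

def crowded_tournament(pop, rank, crowd):
    # NSGA-II parent selection: lower front rank wins, ties broken by larger crowding distance
    i, j = random.sample(range(len(pop)), 2)
    if rank[i] != rank[j]:
        return pop[i] if rank[i] < rank[j] else pop[j]
    return pop[i] if crowd[i] > crowd[j] else pop[j]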
The optimization variables for the fractional order PID controller are the proportional-integral-derivative gains and the differ-integral orders, i.e. $\{K_p, K_i, K_d, \lambda, \mu\}$ for both the areas. For the IO-PID controller the optimization variables are the gains only, i.e. $\{K_p, K_i, K_d\}$ for both the areas. In other words, the dimension of the decision space $\Omega$ is five for the FOPID controller and three for the PID controller. The population size is taken as $15 \times n_{var}$ and the algorithm is run until the cumulative change in fitness function value is less than the function tolerance of $10^{-6}$ or the maximum number of generations ($200 \times n_{var}$) is exceeded. Here, $n_{var}$ is the number of parameters to be tuned by the MOO algorithm in both the areas, given by $n_{var}^{PID} = 6$ and $n_{var}^{FOPID} = 10$ for the two controller structures. The crossover fraction is taken as 0.8 and an intermediate crossover scheme is adopted. The mutation fraction is chosen as 0.2 and the Gaussian scheme is adopted. For choosing the parent vectors based on their scaled fitness values, the algorithm uses a tournament selection method with a tournament size of 2. The Pareto front population fraction is taken as 0.7. This parameter indicates the fraction of the population that the solver tries to limit on the Pareto front [51]. For the MOO problem, the limits of $\{K_p, K_i, K_d\}$ are chosen as $[0, 10]$ and the bounds of the differ-integral orders $\{\lambda, \mu\}$ are chosen within the range $[0, 2]$, as described in (5).
The uniformly distributed RNG is normally used for the crossover and mutation
operations in the standard version of the NSGA-II algorithm [50][51]. However since the
strength of evolutionary algorithms lies in the randomness of the crossover and mutation
operators, many contemporary researchers have focussed on increasing the efficiency of these
algorithms by incorporating different random behaviours through various techniques like
stochastic resonance and noise [52], chaotic maps [53] etc. These different strategies can be
classified as special cases of a broader principle of diversification which essentially entails a trade-off
between exploration and exploitation [54]. In [55] it has been shown that the performance of
single objective evolutionary algorithms increase if different types of chaotic maps are
introduced instead of the uniform RNG for the crossover and mutation operations. It has also
been demonstrated in [55] that, in general, using chaotic systems for the RNG in the
crossover and mutation operations may yield better result than using RNG from a noisy
sequence in terms of convergence and effectiveness of the algorithms in finding global
minima. In [56]-[58] it has been shown that the multi-objective NSGA-II algorithm can be
improved by using chaotic maps and gives better result than the original NSGA-II algorithm
in terms of convergence and efficiency. This is due to the fact that the chaotic process
introduces diversity in the solutions. In this paper, we adopt similar policy and use chaotic
logistic map and chaotic Henon map to obtain comparable solutions and convergence with
respect to the standard NSGA-II algorithm. The logistic map is one of the simplest discrete
time dynamical systems exhibiting chaos. The equation for the logistic map is given in (11).
$x_{k+1} = a x_k (1 - x_k)$    (11)
The Henon map is a discrete time dynamical system that exhibits chaotic behaviour. Given a
point with co-ordinates { xn , yn } , the Henon map transforms it to a new point { xn +1 , yn +1} using
the set of equations in (12).
$x_{n+1} = y_n + 1 - a x_n^2, \quad y_{n+1} = b x_n$    (12)
The map is chaotic for the parameters a = 1.4 and b = 0.3. It is actually a simplified model of the Poincaré section of the Lorenz system. The output $y_{n+1}$ varies in different ranges depending on the initial seed $\{x_0, y_0\}$. Since the Henon map is used here as a random number generator, it must produce a random number in the range [0, 1]. Hence the output of the map for different initial conditions has been scaled to the range [0, 1], as also done in [59].
Figure 3: Random number generation by multiplying chaotic Henon map and Logistic map, normalized in the range [0,1]
with uniform RNG. Top: first 500 samples of the sequences. Bottom: histogram of 10000 samples.
The initial seed of the Logistic map ( x0 ) in (11) has also been chosen randomly for
the fixed parameter a = 4 exhibiting chaos, similar to that studied in [60]. For the
implementation of the chaotic versions of the MOO algorithms, the outputs of the chaotic
maps are scaled and multiplied with a uniform RNG which generates numbers in the
range [ 0,1] . The first 500 samples of uniform RNG, the scaled output of the Henon map and
the Logistic map and the outputs obtained by multiplying them with the uniform RNG are
shown in Figure 3. It also shows the histogram of the respective cases with 10000 samples. It
can be seen that even though the Logistic map and the Henon map have different
distributions, the corresponding distributions obtained by multiplying them with a uniform
RNG is highly skewed with lower values occurring more often than higher values. However,
it is to be noted that even though both the histograms show similar skewed characteristics, it
does not capture the time domain evolution like randomness and autocorrelation of the
numbers. Therefore this generation mechanism with chaotic maps is different than simply
drawing a random number from such skewed probability distributions each time.
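A sketch of the chaotic number streams used in place of the plain uniform RNG, following (11)-(12); the min-max rescaling of the Henon output and the multiplication with a uniform draw mirror the description above, while the seeding details are our assumptions.

import numpy as np

rng = np.random.default_rng()

def logistic_stream(n, a=4.0):
    # Logistic map x_{k+1} = a x_k (1 - x_k), Eq. (11); stays in [0, 1] for a = 4
    x, out = rng.random(), np.empty(n)
    for k in range(n):
        x = a * x * (1.0 - x)
        out[k] = x
    return out

def henon_stream(n, a=1.4, b=0.3):
    # Henon map, Eq. (12); the y-sequence is rescaled to [0, 1]
    x, y = 0.1 * rng.random(), 0.1 * rng.random()
    ys = np.empty(n)
    for k in range(n):
        x, y = y + 1.0 - a * x * x, b * x
        ys[k] = y
    return (ys - ys.min()) / (ys.max() - ys.min())

# Chaotic substitute for the uniform draws used inside crossover and mutation
chaotic_uniform = logistic_stream(1000) * rng.random(1000)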
5.2. Quantification of Pareto fronts and choosing the best compromise controller parameters
Due to the stochastic nature of the NSGA-II algorithm and also its two chaotic
versions, the final Pareto fronts obtained at the end of each independent run would be slightly
different with respect to one another. Therefore multiple runs are conducted to assess the
convergence characteristics of the algorithms. There are several measures to compare
multiple Pareto fronts apart from the non-domination criteria. In most realistic problems, two
Pareto fronts under comparison could intersect with each other which indicates that in one
region one set of solutions are more non-dominated in terms of one objective whereas for the
other conflicting objective the other Pareto front would be more non-dominated. Especially in
such cases of weak dominance, it could be difficult to judge the quality of the MOO solutions
from a global perspective. In the case of strong Pareto dominance, where one front dominates the other in all objectives, this problem is avoided because there is no intersection between the multiple Pareto fronts. In order to avoid such case dependent strategy formulation, several generic metrics which indicate the quality of MOO solutions have been proposed in [45]. In this paper, four different Pareto measures, viz. minimum hypervolume indicator, maximum diversity metric, maximum Pareto spread and minimum spacing metric, are explored. Since none of the measures can capture all necessary properties of the best possible Pareto front for the controller parameters [39], [61], a composite criterion is used to choose the best Pareto front.
The hypervolume indicator is given by the total area/volume/hypervolume under the Pareto front with respect to the reference point as the origin. Therefore, a Pareto front closest to the origin will have the minimum hypervolume indicator, showing a strong non-dominance over other sets of solutions [62]. The hypervolume indicator for two dimensions with reference to the origin is given by

$HI = \sum_{i=1}^{N} \left(x_i - x_{i-1}\right)\left(y_i - y_{i-1}\right)$    (13)

where N is the number of points on the Pareto front, x, y are the two dimensions, and $x_0, y_0$ correspond to the projections of the end points of the Pareto front on the two axes respectively. This has to be minimized to obtain a better non-dominated Pareto front.
The spacing metric measures the variance of the distances between neighbouring data-points on the Pareto set [63], [64], and is given by (14).
$SP = \frac{1}{S-1} \sum_{i=1}^{S} \left(\bar{d} - d_i\right)^2$    (14)

where the distance $d_i = \min_{j} \sum_{k=1}^{m} \left| x_k^i - x_k^j \right|$, $\{i, j\} = 1, 2, \ldots, S$, and $\bar{d}$ is the mean value of $d_i$.
Here, S is the number of non-dominated solutions and m is the number of objectives.
The total Pareto spread is given by the Euclidean distance between the extreme points
of the Pareto front [65]. The total Pareto spread helps in understanding the span of the
solutions along different conflicting objectives. For two dimensions, if ( x1 , y1 ) and ( x2 , y2 )
are the coordinates of the end points of the Pareto front, then the Pareto spread is given by
$P_{spread} = \sqrt{\left(x_1 - x_2\right)^2 + \left(y_1 - y_2\right)^2}$    (15)
The moment of inertia based diversity metric is another popular measure to judge the
quality of the Pareto fronts [66]. Let there be Spt number of points in the m-dimensional
objective function space. Then the centroid for ith dimension is given by (16).
$C_i = \frac{1}{S_{pt}} \sum_{j=1}^{S_{pt}} x_{ij}, \quad \text{for } i = 1, 2, \ldots, m$    (16)

where $x_{ij}$ denotes the ith dimension of the jth point. Then, the diversity metric can be calculated as (17).

$I = \sum_{i=1}^{m} \sum_{j=1}^{S_{pt}} \left(x_{ij} - C_i\right)^2$    (17)
Generally, the best, worst, mean and standard deviation of the diversity metric and the other measures are compared for different MOO algorithms [46], [47]. In the present paper, we also
report the box-plots of each of the four measures to ascertain the best Pareto front amongst 30
independent runs of each variant of the NSGA-II – nominal and chaotic. Additionally we
report the comparison of the convergence times for different controller settings and
optimizers.
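The four Pareto measures can be computed from a two-objective front roughly as follows. This is a sketch: the spacing, spread and diversity functions follow (14)-(17) as reconstructed above, while the area-based indicator uses the common rectangle (staircase) decomposition with respect to a reference point rather than reproducing (13) verbatim.

import numpy as np

def spacing_metric(front):
    # Eq. (14): spread of nearest-neighbour Manhattan distances on the front
    S = len(front)
    d = np.array([min(np.sum(np.abs(front[i] - front[j])) for j in range(S) if j != i)
                  for i in range(S)])
    return float(np.sum((d.mean() - d) ** 2) / (S - 1))

def pareto_spread(front):
    # Eq. (15): Euclidean distance between the extreme points of the front
    f = front[np.argsort(front[:, 0])]
    return float(np.linalg.norm(f[0] - f[-1]))

def diversity_metric(front):
    # Eqs. (16)-(17): moment of inertia about the centroid of the front
    return float(np.sum((front - front.mean(axis=0)) ** 2))

def staircase_area(front, ref=(0.0, 0.0)):
    # Rectangle decomposition of the area between a 2-D minimization front and ref
    f = front[np.argsort(front[:, 0])]
    area, prev_x = 0.0, ref[0]
    for x, y in f:
        area += (x - prev_x) * (y - ref[1])
        prev_x = x
    return area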
After selecting the best Pareto front out of 30 independent runs according to each of the four criteria, i.e. minimum hypervolume indicator, maximum diversity metric, maximum Pareto spread and minimum spacing metric, a best compromise solution on the Pareto front has
been obtained using a fuzzy based mechanism. In [43], [44] the median solution has been
reported amongst all solutions on the Pareto front which has been improved here with a fuzzy
based systematic choice of the best compromise solution [46], [47]. The designer may have
imprecise goals for each objective function which can be encoded in the form of a fuzzy
membership function µ F . Here, µ Fi for each objective function i is taken to be a strictly
monotonic and decreasing continuous function expressed as (18).
$\mu_{F_i} = \begin{cases} 1, & F_i \leq F_i^{min} \\ \left(F_i^{max} - F_i\right) / \left(F_i^{max} - F_i^{min}\right), & F_i^{min} \leq F_i \leq F_i^{max} \\ 0, & F_i \geq F_i^{max} \end{cases}$    (18)
The value of µ Fi represents the degree to which a particular solution has satisfied the
objective Fi . The membership function lies between zero and one implying worst and best
satisfaction of the objective respectively. The degree of satisfaction of the objectives by each solution can be represented as in (19).
$\mu^k = \sum_{i=1}^{m} \mu_i^k \Big/ \sum_{j=1}^{S} \sum_{i=1}^{m} \mu_i^j$    (19)
where, m is the number of objectives and S is the number of solutions on the Pareto front.
The best compromise solution on the Pareto front has been chosen in such a way for which
(19) reaches its maximum.
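A minimal sketch of the fuzzy selection in (18)-(19): every objective value is mapped through the decreasing membership function and the solution with the largest normalised aggregate membership is returned (degenerate objectives with equal minimum and maximum are not handled here).

import numpy as np

def best_compromise(front):
    # front: rows = non-dominated solutions, columns = objective values
    f_min, f_max = front.min(axis=0), front.max(axis=0)
    mu = np.clip((f_max - front) / (f_max - f_min), 0.0, 1.0)   # Eq. (18)
    mu_k = mu.sum(axis=1) / mu.sum()                            # Eq. (19)
    return int(np.argmax(mu_k))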
6. Simulation and Results
6.1. Multi-objective criteria for the best controller selection
Similar to [23], the system in Figure 1 is simulated with a step input ∆PL1 of 0.02 pu
in the first area and the ∆PL 2 of 0.008 pu in the second area. The PID/FOPID controllers have
been tuned using time domain performance indices in (8)-(9) under a MOO framework to
handle these load inputs. Three different controller structures are considered – traditional PID
controller, slow FOPID controller (0 < λ < 1) and the fast FOPID controller (1 < λ < 2). Each of these cases is run with three different optimization algorithms – the standard NSGA-II,
the Logistic map adapted NSGA-II and the Henon map adapted NSGA-II. These 9 cases (3
controllers × 3 MOOs) are run for 30 times each and the statistics of the simulation time and
Pareto metrics (like hypervolume indicator, diversity metric, Pareto spread and spacing
metric) are calculated. The corresponding statistics are shown in the box-plots in Figure 4 to
Figure 8. From Figure 4 it can be observed that the median time taken by the chaotic versions
of the NSGA-II is higher than that of the normal NSGA-II for all the controllers. This is due
to the additional computational time taken by the chaotic maps in the various crossover and
mutation operators which employ RNG.
Figure 4: Box-plot of the simulation times with different MOO algorithms and controller structures.
Figure 5: Box-plot of the hypervolume indicator with different MOO algorithms and controller structures.
Figure 5 shows the box plots of the hypervolume indicator for the three different
algorithms and controller structures. From the minimum and median values, it is evident that
the performance of the FOPID controller is better than the PID controller. The hypervolume
indicator shows that the median values obtained using the traditional NSGA-II algorithm is
better than the chaotic versions for the FOPID controllers but is worse for the PID structure.
Figure 6: Box-plot of the moment of inertia based diversity metric with different MOO algorithms and controller
structures.
Figure 6 shows the box plot of the moment of inertia based diversity metric for 30
runs of all the cases. A higher value of diversity metric indicates a better Pareto front. It can
be observed that the Pareto fronts obtained for the FOPID controllers using the chaotic
NSGA-II algorithms have a higher value of the diversity metric. But for PID controllers, the
NSGA-II gives better results. It can also be seen that the slow FOPID controller has a higher
range of the diversity metric than the PID controller. This is possibly due to the extra degrees
of freedom of the FO integro-differential orders of the FOPID controller which has a larger
search space and thus allows more diverse solutions. The fast FOPID has a very small value
of diversity metric, indicating that most of the stable solutions belong to a small region of the
search space. For all the other cases, the solutions are either unstable or dominated.
Figure 7: Box-plot of the total Pareto spread with different MOO algorithms and controller structures.
Figure 8: Box-plot of the spacing metric with different MOO algorithms and controller structures.
Figure 7 shows the box plots of the total Pareto spread for 30 runs of all the cases. It is
observed that the chaotic versions of the NSGA-II are able to obtain a wider Pareto spread for
the FOPID controllers, indicating a more diverse set of solutions. However, for the PID
structure, the traditional NSGA-II gives a better Pareto spread. Figure 8 shows the box plots
of the spacing metric for 30 runs of all the cases. The original NSGA-II is found to give a
more uniform distribution of different solutions on the Pareto front.
Figure 9: Comparison of the Pareto fronts for slow FOPID controller using different MOO algorithms and selection
criteria.
Figure 10: Comparison of the Pareto fronts for fast FOPID controller using different MOO algorithms and selection
criteria.
It is therefore clear that none of the algorithms are better than their counterparts in all
the metrics individually. Also each metric represents different characteristics of the Pareto
front and finding the best Pareto front must leverage on some of these criteria taken together.
The diversity metric, Pareto spread and spacing metric reflect the distribution of the solutions
on the Pareto front. The hypervolume criterion which indicates the non-domination of the
different fronts, directly affects the quality of the obtained solutions and a better nondomination implies a better control system performance. Hence non-domination is one of the
most important metric among these. Therefore for comparison, the best Pareto front which is
obtained by each of the different Pareto metrics is found out of each of the 30 runs.
Figure 11: Comparison of the Pareto fronts for PID controller using different MOO algorithms and selection criteria.
It can be observed that for all the different Pareto metrics (hypervolume indicator,
diversity metric, Pareto spread and spacing metric) the chaotic version of the NSGA-II (either
logistic map assisted or Henon map assisted) gives the best non-dominated Pareto front. This
can be verified from the superimposed Pareto fronts in Figure 9-Figure 11 for the three
controller structures respectively. In most of the cases the logistic map assisted NSGA-II
gives better performance than the others. In Figure 9-Figure 11, the best Pareto front (out of
30 runs) according to each of the four Pareto measures is shown. According to each criterion, i.e. the minimum hypervolume indicator, maximum diversity metric, maximum Pareto spread and minimum spacing metric, the chaotic NSGA-II versions give wider and non-dominated Pareto spreads compared to the standard NSGA-II.
Next, for the sake of comparison amongst the best controller structures, the non-dominated Pareto fronts according to each of the Pareto metrics are identified from Figure 9–Figure 11 and are superimposed in Figure 12. It can be observed that under the nominal operating condition of the power system, the slow FOPID controller structure is more non-dominated and is therefore capable of resulting in better control performance. Although in all four cases in Figure 12 the slow FOPID gives the non-dominated Pareto front, depending on the criterion the best compromise solution may perform better or worse.
Therefore, we report here all the best compromise solutions of the three controller structures for each of the Pareto fronts in Figure 12 (4 metrics × 3 controllers = 12 solutions in total). The corresponding algorithm which gives the best non-dominated Pareto front, along with the optimum controller parameters and objective function values, is reported in Table 2.
Table 2: Best compromise solutions of the FOPID and PID controller based on different Pareto metrics

Case  Controller   Criterion                       Best non-dominated algorithm   ITSE      ISDCO     Kp1    Ki1    Kd1    λ1     µ1     Kp2    Ki2    Kd2    λ2     µ2
1     Slow FOPID   Minimum hypervolume indicator   Henon NSGA-II                  1.01380   1.00040   0.090  0.297  0.036  0.869  0.208  0.036  0.237  0.036  0.533  0.585
2     Slow FOPID   Maximum diversity metric        Logistic NSGA-II               1.01645   1.00053   0.215  0.230  0.026  0.573  0.724  0.160  0.121  0.068  0.354  0.612
3     Slow FOPID   Maximum Pareto spread           Logistic NSGA-II               1.01145   1.00053   0.036  0.388  0.048  0.928  0.398  0.000  0.237  0.135  0.823  0.660
4     Slow FOPID   Minimum spacing metric          Henon NSGA-II                  1.01126   1.00068   0.010  0.484  0.083  0.570  0.544  0.090  0.165  0.140  0.523  0.715
5     Fast FOPID   Minimum hypervolume indicator   Logistic NSGA-II               1.01236   1.00082   0.129  0.397  0.107  1.014  0.553  0.244  0.173  0.139  1.155  0.745
6     Fast FOPID   Maximum diversity metric        Logistic NSGA-II               1.04872   1.00007   0.008  0.100  0.037  1.055  0.203  0.034  0.036  0.014  1.222  0.699
7     Fast FOPID   Maximum Pareto spread           Logistic NSGA-II               1.04872   1.00007   0.008  0.100  0.037  1.055  0.203  0.034  0.036  0.014  1.222  0.699
8     Fast FOPID   Minimum spacing metric          Logistic NSGA-II               1.01392   1.00055   0.062  0.329  0.099  1.048  0.366  0.078  0.174  0.216  1.053  0.502
9     PID          Minimum hypervolume indicator   Logistic NSGA-II               1.15637   1.00726   0.991  0.570  0.376  -      -      0.485  0.269  0.800  -      -
10    PID          Maximum diversity metric        Logistic NSGA-II               1.01164   1.00137   0.411  0.375  0.005  -      -      0.316  0.149  0.042  -      -
11    PID          Maximum Pareto spread           Logistic NSGA-II               1.01164   1.00137   0.411  0.375  0.005  -      -      0.316  0.149  0.042  -      -
12    PID          Minimum spacing metric          Logistic NSGA-II               1.01378   1.00095   0.322  0.250  0.006  -      -      0.300  0.178  0.052  -      -
Figure 12: Comparison of non-dominance amongst the controllers using best MOO algorithms for each criterion.
6.2. Time domain performance of the LFC system
The time domain performance of the best compromise controller parameters reported in Table 2 is now compared for the four Pareto metrics (hypervolume indicator, diversity metric, Pareto spread and spacing metric), since depending on the spread of the Pareto front in Figure 12 the best compromise solution may have different time domain characteristics. The time domain responses of the grid frequency oscillation in both the areas, the tie-line power flow and the control signals are reported for each Pareto metric in Figure 13-Figure 16.
Figure 13: Performance comparison of the best compromise solution of the three type of nondominated controllers
obtained using the minimum hypervolume indicator criterion.
Figure 14: Performance comparison of the best compromise solution of the three type of nondominated controllers
obtained using the maximum diversity metric criterion.
The time responses of the power system with the fuzzy based best compromise
solution have been shown in Figure 13 according to the solution obtained with hypervolume
indicator criterion. In other words, these solutions for the three different controller structures
are obtained by calculating the best compromise solution with the fuzzy based mechanism for
each of the Pareto fronts in Figure 12(a). It is observed that the time domain performance of
the PID (for suppressing the oscillations in ∆f1, ∆f2 and ∆Ptie) lies between those of the slow and the fast FOPID, but the control signal required by the PID controller is much higher. The slow FOPID results in fewer oscillations and a smaller overshoot and also has a smaller control signal.
Hence it outperforms the other two controller structures. The next exploration tries to
understand whether a similar response is observed if the controllers are selected based on a
different Pareto metric.
Figure 15: Performance comparison of the best compromise solution of the three type of nondominated controllers
obtained using the maximum Pareto spread criterion.
Figure 14 points towards a similar conclusion; it shows the time domain performance of the fuzzy based best compromise solution obtained from the Pareto fronts of Figure 12(b), which are based on the maximum diversity metric criterion. The PID controller is found to have larger oscillations and overshoot in the time domain performance of ∆f1, ∆f2 and ∆Ptie, along with a large value of control signal. The slow FOPID controller is found to outperform the other two controller structures.
Figure 15 shows the time domain performance of the fuzzy best compromise solution
obtained from the Pareto fronts of Figure 12(c), which is based on the maximum Pareto
spread criterion. The fast FOPID controller has a smaller control signal but has a very
sluggish time response. The slow FOPID controller has a faster time response than the fast
FOPID but this comes at the cost of a higher control signal. However the slow FOPID is
better than the PID in terms of both the peak overshoot and the control signal.
Figure 16: Performance comparison of the best compromise solution of the three type of nondominated controllers
obtained using the minimum spacing metric criterion.
Figure 16 shows the time response of the fuzzy best compromise solution obtained
from the Pareto fronts of Figure 12(d), which is based on the minimum spacing metric
criterion. The slow FOPID controller is found to outperform the other two structures in terms
of both time domain performance and low value of control signal.
7. Robustness analysis of the designed solutions
It is desirable that the designed controllers should work in a wide range of operating
conditions without significant deterioration in performance. In other words, the controllers
should be robust with respect to changes in system parameters. To illustrate this, the fuzzy
best compromise solutions for the slow FOPID, fast FOPID and the PID controllers for two
different Pareto metrics (minimum hypervolume indicator and maximum diversity metric),
are simulated by varying the synchronisation coefficient ( T12 ). The corresponding time
response curves for ∆f1 , ∆f 2 , ∆u1 , ∆u2 and ∆Ptie are plotted in Figure 17 and Figure 18
respectively. A similar study for the single objective FOPID controller has also been done in
[34].
From the time response characteristics of ∆f1 , ∆f 2 and ∆Ptie in Figure 17 with increase
in T12 by a factor of two, it might appear that the PID controller performs better than the
FOPID versions, as the latter introduce small oscillations and do not settle to a steady state
value quickly. However the control signals are drastically higher for the PID controller and
have sharp jumps which might be detrimental for the governor.
Figure 18 shows the robustness of the fuzzy best compromise solutions for the case of
the maximum diversity metric criterion with increase in T12 by a factor of two and three. It
can also be observed that the fast FOPID controller is better than the slow FOPID and the
PID controller when T12 is increased gradually. For the latter two controllers the system
becomes unstable, while the fast FOPID controller is still able to maintain a time response
almost similar to the nominal case which proves the superiority of the FOPID in LFC over
the PID.
Figure 17: Effect of two times increase in T12 with the best compromise solution of the minimum hypervolume indicator
criterion
Figure 18: Effect of gradual increase in T12 with the best compromise solution of the maximum diversity metric criterion.
Simulation reported so far has been done with the nominal system parameters (in
section 6) and under uncertain parameters of the power system (section 7). The capability of
the three controller structures to damp grid frequency oscillations even in the presence of a
random change in load pattern in both the areas [29] is explored next. Figure 19 shows the
time response of the system obtained by different controllers where the load patterns are
randomly changing within PL1 = 0.02 pu and PL 2 = 0.008 pu [37]. In terms of fast settling time
and low controller effort, the slow FOPID controller is found to be better than the other two
controller structures.
Figure 19: Effect of random load change in both the areas with the best compromise solution of the maximum diversity
metric criterion.
Overall, both the designed solutions using the PID and the FOPID controllers show
sufficient robustness to system parameter variations and random load change with FOPID
variants outperforming the PID. Therefore the FOPID as load frequency controller could be
applied in a practical setting where significant uncertainty exists with respect to system
parameters as well as the change in load pattern.
8. Discussions and Conclusions
The following points summarise the main findings of the reported simulations in the paper.
• Irrespective of the chosen MOO metric (like hypervolume indicator, total Pareto spread etc.), the Pareto front obtained by the chaotic NSGA-II algorithm always results in a better set of solutions (in terms of non-domination). In other words, even though all the algorithms produce a set of non-dominated solutions as the output, those that are obtained from the chaotic NSGA-II are more non-dominated vis-à-vis those obtained from their non-chaotic counterparts. For the present LFC problem, in most cases the logistic map assisted NSGA-II works better than the corresponding Henon map assisted version, unlike [43].
• Under nominal conditions and random load change, the slow FOPID performs better in terms of control system performance, as indicated by the fast settling time and keeping the maximum values of ∆f1, ∆f2, ∆u1, ∆u2 and ∆Ptie low.
• When the system parameters are perturbed (e.g. the synchronizing coefficient is changed), either the fast or the slow FOPID controller is better depending on the chosen Pareto metric, but it is always better than the PID controller.
In this paper, multi-objective design of an FOPID controller is done for LFC of a two
area power system with GRC in turbine, reheater stages and dead-band in the governor. The
NSGA-II algorithm and its chaotic versions are employed for the MOO task. Different Pareto
metrics are calculated and the optimization algorithms are evaluated based on multiple Pareto
metrics taken together. Numerical simulations show that the chaotic versions of the NSGA-II
algorithm give better Pareto solutions than the ordinary NSGA-II algorithm. In general, the chaotic logistic map assisted NSGA-II performs better than the chaotic Henon map assisted
NSGA-II. It is also shown that the FOPID controller outperforms the PID controller for
multi-objective designs of the two area LFC problem. Thus the FOPID controllers are a
viable alternative to the conventional IO-PID controllers in load frequency control of interconnected power systems.
References
[1] P. Kundur, N. J. Balu, and M. G. Lauby, Power system stability and control. McGraw-Hill, New York, 1994.
[2]
H. Bevrani and T. Hiyama, Intelligent Automatic Generation Control. CRC Press, 2011.
[3]
H. Bevrani, Robust power system frequency control. Springer Verlag, 2009.
[4]
I. Chidambaram and S. Velusami, “Design of decentralized biased controllers for load-frequency control of interconnected power systems,” Electric power components and
systems, vol. 33, no. 12, pp. 1313–1331, 2005.
[5]
K. Sudha and R. Vijaya Santhi, “Robust decentralized load frequency control of
interconnected power system with generation rate constraint using type-2 fuzzy
approach,” International Journal of Electrical Power & Energy Systems, vol. 33, no. 3,
pp. 699–707, 2011.
[6]
S. Aditya and D. Das, “Battery energy storage for load frequency control of an
interconnected power system,” Electric Power Systems Research, vol. 58, no. 3, pp.
179–185, 2001.
[7]
S. K. Aditya and D. Das, “Design of load frequency controllers using genetic algorithm
for two area interconnected hydro power system,” Electric Power Components and
Systems, vol. 31, no. 1, pp. 81–94, 2003.
[8]
R. K. Sahu, S. Panda, and N. K. Yegireddy, “A novel hybrid DEPS optimized fuzzy
PI/PID controller for load frequency control of multi-area interconnected power
systems,” Journal of Process Control, 2014.
[9]
D. Kothari and I. Nagrath, Power system engineering. Tata McGraw-Hill, 2008.
[10] M. H. Kazemi, M. Karrari, and M. B. Menhaj, “Decentralized robust adaptive-output
feedback controller for power system load frequency control,” Electrical Engineering
(Archiv fur Elektrotechnik), vol. 84, no. 2, pp. 75–83, 2002.
[11] Y. Wang, R. Zhou, and C. Wen, “New robust adaptive load-frequency control with
system parametric uncertainties,” in Generation, Transmission and Distribution, IEE
Proceedings-, vol. 141, no. 3, 1994, pp. 184–190.
[12] H. Bevrani, Y. Mitani, and K. Tsuji, “Robust decentralised load-frequency control using
an iterative linear matrix inequalities algorithm,” in Generation, Transmission and
Distribution, IEE Proceedings-, vol. 151, no. 3, 2004, pp. 347–354.
[13] K. Lim, Y. Wang, and R. Zhou, “Robust decentralised load-frequency control of multiarea power systems,” in Generation, Transmission and Distribution, IEE Proceedings-,
vol. 143, no. 5, 1996, pp. 377–386.
[14] E. Rakhshani, “Intelligent linear-quadratic optimal output feedback regulator for a
deregulated automatic generation control system,” Electric Power Components and
Systems, vol. 40, no. 5, pp. 513–533, 2012.
[15] T. Mohamed, H. Bevrani, A. Hassan, and T. Hiyama, “Decentralized model predictive
based load frequency control in an interconnected power system,” Energy Conversion
and Management, vol. 52, no. 2, pp. 1208–1214, 2011.
[16] Z. Al-Hamouz, H. Al-Duwaish, and N. Al-Musabi, “Optimal design of a sliding mode
AGC controller: Application to a nonlinear interconnected model,” Electric Power
Systems Research, vol. 81, no. 7, pp. 1403–1409, 2011.
[17] G. Ray, A. Prasad, and T. Bhattacharyya, “Design of decentralized robust loadfrequency controller based on SVD method,” Computers & Electrical Engineering, vol.
25, no. 6, pp. 477–492, 1999.
[18] Z. Al-Hamouz and H. Al-Duwaish, “A new load frequency variable structure controller
using genetic algorithms,” Electric Power Systems Research, vol. 55, no. 1, pp. 1–6,
2000.
[19] E. Çam and I. Kocaarslan, “A fuzzy gain scheduling PI controller application for an
interconnected electrical power system,” Electric Power Systems Research, vol. 73, no.
3, pp. 267–274, 2005.
[20] D. Chaturvedi, P. Satsangi, and P. Kalra, “Load frequency control: a generalised neural
network approach,” International Journal of Electrical Power & Energy Systems, vol.
21, no. 6, pp. 405–415, 1999.
[21] F. Beaufays, Y. Abdel-Magid, and B. Widrow, “Application of neural networks to loadfrequency control in power systems,” Neural Networks, vol. 7, no. 1, pp. 183–194,
1994.
[22] D. Rerkpreedapong, A. Hasanovic, and A. Feliachi, “Robust load frequency control
using genetic algorithms and linear matrix inequalities,” Power Systems, IEEE
Transactions on, vol. 18, no. 2, pp. 855–861, 2003.
[23] H. Shayeghi and H. Shayanfar, “Application of ANN technique based on µ-synthesis to
load frequency control of interconnected power system,” International Journal of
Electrical Power & Energy Systems, vol. 28, no. 7, pp. 503–511, 2006.
[24] H. Shayeghi, H. Shayanfar, and A. Jalili, “Load frequency control strategies: A state-ofthe-art survey for the researcher,” Energy Conversion and Management, vol. 50, no. 2,
pp. 344–353, 2009.
[25] I. Ibraheem, P. Kumar, and D. Kothari, “Recent philosophies of automatic generation
control strategies in power systems,” Power Systems, IEEE Transactions on, vol. 20, no.
1, pp. 346–357, 2005.
[26] S. K. Pandey, S. R. Mohanty, and N. Kishor, “A literature survey on load-frequency
control for conventional and distribution generation power systems,” Renewable and
Sustainable Energy Reviews, vol. 25, pp. 318–334, 2013.
[27] E. Ali and S. Abd-Elazim, “Bacteria foraging optimization algorithm based load
frequency controller for interconnected power system,” International Journal of
Electrical Power & Energy Systems, vol. 33, no. 3, pp. 633–638, 2011.
[28] I. Kocaarslan and E. Çam, “Fuzzy logic controller in interconnected electrical power
systems for load-frequency control,” International Journal of Electrical Power &
Energy Systems, vol. 27, no. 8, pp. 542–549, 2005.
[29] K. Sabahi, M. Teshnehlab, and others, “Recurrent fuzzy neural network by using
feedback error learning approaches for LFC in interconnected power system,” Energy
Conversion and Management, vol. 50, no. 4, pp. 938–946, 2009.
[30] S. Das, Functional fractional calculus. Springer Verlag, 2011.
[31] I. Pan and S. Das, Intelligent fractional order systems and control. Springer, 2013.
[32] C. A. Monje, Y. Q. Chen, B. M. Vinagre, D. Xue, and V. Feliu, Fractional-order
systems and controls: fundamentals and applications. Springer, 2010.
[33] M. Zamani, M. Karimi-Ghartemani, N. Sadati, and M. Parniani, “Design of a fractional
order PID controller for an AVR using particle swarm optimization,” Control
Engineering Practice, vol. 17, no. 12, pp. 1380–1387, 2009.
[34] M. I. Alomoush, “Load frequency control and automatic generation control using
fractional-order controllers,” Electrical Engineering (Archiv fur Elektrotechnik), vol.
91, no. 7, pp. 357–368, 2010.
[35] S. Debbarma, L. C. Saikia, and N. Sinha, “AGC of a multi-area thermal system under
deregulated environment using a non-integer controller,” Electric Power Systems
Research, vol. 95, pp. 175–183, 2013.
[36] S. Debbarma, L. Chandra Saikia, and N. Sinha, “Solution to automatic generation
control problem using firefly algorithm optimized IλDµ controller,” ISA Transactions,
vol. 53, no. 2, pp. 358–366, 2014.
[37] S. Debbarma, L. C. Saikia, and N. Sinha, “Automatic generation control using two
degree of freedom fractional order PID controller,” International Journal of Electrical
Power & Energy Systems, vol. 58, pp. 120–129, 2014.
[38] S. Sondhi and Y. V. Hote, “Fractional order PID controller for load frequency control,”
Energy Conversion and Management, vol. 85, pp. 343–353, 2014.
[39] G. Reynoso-Meza, X. Blasco, J. Sanchis, and M. Martinez, “Controller tuning using
evolutionary multi-objective optimisation: Current trends and applications,” Control
Engineering Practice, vol. 28, pp. 58–73, 2014.
[40] P. Ahmadi, M. A. Rosen, and I. Dincer, “Multi-objective exergy-based optimization of
a polygeneration energy system using an evolutionary algorithm,” Energy, vol. 46, no.
1, pp. 21–31, 2012.
[41] S. Das, I. Pan, and S. Das, “Performance comparison of optimal fractional order hybrid
fuzzy PID controllers for handling oscillatory fractional order processes with dead
time,” ISA Transactions, vol. 52, no. 4, pp. 550–566, 2013.
[42] S. Das and I. Pan, “On the mixed H2/H∞ loop shaping trade-offs in fractional order
control of the AVR system,” Industrial Informatics, IEEE Transactions on, vol. 10, no.
4, pp. 1982-1991, Nov. 2014.
[43] I. Pan and S. Das, “Frequency domain design of fractional order PID controller for
AVR system using chaotic multi-objective optimization,” International Journal of
Electrical Power & Energy Systems, vol. 51, pp. 106–118, 2013.
[44] I. Pan and S. Das, “Chaotic multi-objective optimization based design of fractional
order PIλDµ controller in AVR system,” International Journal of Electrical Power &
Energy Systems, vol. 43, no. 1, pp. 393–407, 2012.
[45] E. Zitzler, J. Knowles, and L. Thiele, “Quality assessment of pareto set
approximations,” in Multiobjective Optimization, Springer, 2008, pp. 373–404.
[46] B. Panigrahi, V. R. Pandi, R. Sharma, S. Das, and S. Das, “Multiobjective bacteria
foraging algorithm for electrical load dispatch problem,” Energy Conversion and
Management, vol. 52, no. 2, pp. 1334–1342, 2011.
[47] B. Panigrahi, V. Ravikumar Pandi, S. Das, and S. Das, “Multiobjective fuzzy
dominance based bacterial foraging algorithm to solve economic emission dispatch
problem,” Energy, vol. 35, no. 12, pp. 4761–4770, 2010.
[48] S. A. Taher, M. Hajiakbari Fini, and S. Falahati Aliabadi, “Fractional order PID
controller design for LFC in electric power systems using imperialist competitive
algorithm,” Ain Shams Engineering Journal, vol. 5, no. 1, pp. 121–135, 2014.
[49] S. Das, S. Das, and I. Pan, “Multi-objective optimization framework for networked
predictive controller design,” ISA Transactions, vol. 52, no. 1, pp. 56–77, 2013.
[50] K. Deb, A. Pratap, S. Agarwal, and T. Meyarivan, “A fast and elitist multiobjective
genetic algorithm: NSGA-II,” Evolutionary Computation, IEEE Transactions on, vol. 6,
no. 2, pp. 182–197, 2002.
[51] K. Deb, Multi-objective optimization using evolutionary algorithms, vol. 16. John Wiley
& Sons, 2001.
[52] S. Graziani, Stochastic resonance: theory and applications, vol. 1. Springer, 2000.
[53] X. Yuan, Y. Yuan, and Y. Zhang, “A hybrid chaotic genetic algorithm for short-term
hydro system scheduling,” Mathematics and Computers in Simulation, vol. 59, no. 4,
pp. 319–327, 2002.
[54] M. Crepinsek, S.-H. Liu, and M. Mernik, “Exploration and exploitation in evolutionary
algorithms: a survey,” ACM Computing Surveys (CSUR), vol. 45, no. 3, p. 35, 2013.
[55] M. Bucolo, R. Caponetto, L. Fortuna, M. Frasca, and A. Rizzo, “Does chaos work better
than noise?,” Circuits and Systems Magazine, IEEE, vol. 2, no. 3, pp. 4–19, 2002.
[56] D. Guo, J. Wang, J. Huang, R. Han, and M. Song, “Chaotic-NSGA-II: an effective
algorithm to solve multi-objective optimization problems,” in Intelligent Computing and
Integrated Systems (ICISS), 2010 International Conference on, 2010, pp. 20–23.
[57] H. Lu, R. Niu, J. Liu, and Z. Zhu, “A chaotic non-dominated sorting genetic algorithm
for the multi-objective automatic test task scheduling problem,” Applied Soft
Computing, vol. 13, no. 5, pp. 2790–2802, 2013.
[58] Z. Chen, X. Yuan, B. Ji, P. Wang, and H. Tian, “Design of a fractional order PID
controller for hydraulic turbine regulating system using chaotic non-dominated sorting
genetic algorithm II,” Energy Conversion and Management, vol. 84, pp. 390–404, 2014.
[59] L. dos Santos Coelho and V. C. Mariani, “A novel chaotic particle swarm optimization
approach using Henon map and implicit filtering local search for economic load
dispatch,” Chaos, Solitons and Fractals. v39, pp. 510–518, 2007.
[60] R. Caponetto, L. Fortuna, S. Fazzino, and M. G. Xibilia, “Chaotic sequences to improve
the performance of evolutionary algorithms,” Evolutionary Computation, IEEE
Transactions on, vol. 7, no. 3, pp. 289–304, 2003.
[61] P. J. Fleming and R. C. Purshouse, “Evolutionary algorithms in control systems
engineering: a survey,” Control Engineering Practice, vol. 10, no. 11, pp. 1223–1241,
2002.
[62] E. Zitzler, D. Brockhoff, and L. Thiele, “The hypervolume indicator revisited: On the
design of Pareto-compliant indicators via weighted integration,” in Evolutionary multicriterion optimization, 2007, pp. 862–876.
[63] S. Bandyopadhyay, S. K. Pal, and B. Aruna, “Multiobjective GAs, quantitative indices,
and pattern classification,” Systems, Man, and Cybernetics, Part B: Cybernetics, IEEE
Transactions on, vol. 34, no. 5, pp. 2088–2099, 2004.
[64] C. A. C. Coello, G. T. Pulido, and M. S. Lechuga, “Handling multiple objectives with
particle swarm optimization,” Evolutionary Computation, IEEE Transactions on, vol. 8,
no. 3, pp. 256–279, 2004.
[65] E. Zitzler, K. Deb, and L. Thiele, “Comparison of multiobjective evolutionary
algorithms: Empirical results,” Evolutionary Computation, vol. 8, no. 2, pp. 173–195,
2000.
[66] P. Koduru, Z. Dong, S. Das, S. M. Welch, J. L. Roe, and E. Charbit, “A multiobjective
evolutionary-simplex hybrid approach for the optimization of differential equation
models of gene networks,” Evolutionary Computation, IEEE Transactions on, vol. 12,
no. 5, pp. 572–590, 2008.
| 9 |
arXiv:1312.4149v1 [] 15 Dec 2013
Autonomous Quantum Perceptron Neural Network
Alaa Sagheer 1 and Mohammed Zidan
Department of Mathematics
Center for Artificial Intelligence and RObotics (CAIRO)
Faculty of Science, Aswan University, Aswan, Egypt
Email: [email protected]
Abstract: Recently, with the rapid development of technology, many applications require
low-cost learning. Despite the computational power of classical artificial neural networks,
they are not capable of providing low-cost learning. In contrast, quantum neural networks may represent a
good computational alternative to classical neural network approaches, based on
the computational power of the quantum bit (qubit) over the classical bit. In this
paper we present a new computational approach to the quantum perceptron
neural network that can achieve learning at low computational cost. The proposed
approach has only one neuron and can construct self-adaptive activation operators
capable of accomplishing the learning process in a limited number of iterations
and, thereby, reducing the overall computational cost. The proposed approach
is capable of constructing its own set of activation operators, to be applied widely
in both quantum and classical applications, and of overcoming the linearity limitation
of the classical perceptron. The computational power of the proposed approach
is illustrated by solving a variety of problems, where promising and comparable
results are given.
1 Introduction
Classical Artificial Neural Networks (CANN) derive their computing power through
their massively parallel-distributed structure and their ability to learn and, therefore, generalize. However, CANN may face many difficulties, such as the absence
of concrete algorithms and rules for specifying optimal design architectures,
limited memory capacity, time-consuming training, etc. [1]. One of the known
classical approaches is the classical Perceptron Neural Network (CPNN), which
is applicable only to linearly separable learning problems [2]. In other words,
the CPNN cannot be applied to problems which have inseparable classes, such as
the XOR problem [1]. These limitations, and others, have motivated many researchers to investigate new trends in the neural computation domain [3, 4]. One of
the novel trends in this domain is to bring the properties and techniques of quantum
computing into classical neural computation approaches.
Several researchers expect that quantum computing is capable of enhancing
the performance of classical neural computation and overcoming the above limitations [5, 6, 7, 8]. The beginning was in 1995 with Kak [9], who was the
first researcher to introduce the concept of quantum neural computation. Then,
Menneer [10] defined a class of quantum neural network (QNN) as a superposition of single-component networks, where each is trained using only one pattern.
Ventura et al. in [6] introduced a new associative memory technique, based
on Grover’s quantum search algorithm, that can solve the completion problem. The
technique restores the full pattern when only a part of it is initially presented.
Also, an exponential increase in the capacity of the memory is achieved when
compared with the CANN capacity.
1 Corresponding author
As far as the classical perceptron is concerned, some researchers have tried
to increase the efficiency of the perceptron using the power of quantum computation.
In 2001, Altaisky [11] developed a simple quantum perceptron that depends on
selecting the activation operator. Despite its simplicity, the Altaisky approach
consumes much time in order to select an activation operator, especially when
the size of the training data is large. Next, Fei et al. [12] introduced a new model of
a quantum neuron and its learning algorithm based on the Altaisky perceptron. The Fei
model used the delta rule as the learning rule, which yields considerable results
such as computing the XOR function using only one neuron and a nonlinear mapping
property. Unfortunately, the Fei model did not provide a new way for deriving the
activation operator. Moreover, the Fei model is sensitive to the selection of the
appropriate activation operator, which was the problem of the Altaisky perceptron.
Recently, Zhou et al. [13] developed a quantum perceptron approach based
on the quantum phase and could compute the XOR function
using only one neuron. The drawback of the Zhou perceptron is that it requires many
computation iterations to give a response. Finally, Siomau [14] introduced an
autonomous quantum perceptron based on calculating a set of positive operator-valued
measurements (POVM). However, the Siomau perceptron
cannot be applied to problems such as the quantum Not-gate and the Hadamard gate.
In this paper, we propose a novel autonomous quantum perceptron neural
network (AQPNN) approach that can be used to solve both classical applications
and quantum applications. The proposed AQPNN improves on the computational
cost of the Altaisky quantum perceptron as well as on the computational cost of the Zhou
quantum perceptron, and it is able to learn the problems that the Siomau quantum
perceptron cannot learn. The proposed perceptron is capable of adapting its activation operators very quickly, which reduces the overall learning time. In addition,
it is capable of overcoming the linearity restriction of the classical perceptron, so that the
AQPNN can be viewed as a non-linear perceptron. To evaluate the AQPNN, we
solve various problems and the results compare favorably with Zhou [13].
The paper is organized as follows: Section 2 describes the operation of
the AQPNN and its learning algorithm. Section 3 shows the computational power
of the AQPNN by solving various problems. Section 4 discusses the performance of
the AQPNN. Section 5 presents the conclusion and our future work.
2 Autonomous Quantum Perceptron Neural Network (AQPNN)
2.1 Description of the AQPNN
The proposed AQPNN approach is a quantum neural network approach that includes only one neuron with n qubit inputs x1, x2, ..., xn (for the qubit definition, see Appendix A). A set of weight operators w1, w2, ..., wn is assumed,
such that one weight operator is associated with each input, and y_net is the
final network response; see Figure 1. The operators Fj refer to a set of unique
activation operators of the proposed perceptron.
Figure 1: The proposed AQPNN model
The proposed AQPNN approach is based on a supervised learning procedure,
that is, it is provided with a set of learning patterns (inputs/targets) in qubit
form.
For each input pattern presented to the network, the weighted sum qubit
yj is calculated using the form:
yj = Σ_{i=1}^{n} wi xi = [αj, βj]^T .   (1)
where αj and βj are the probability amplitudes of the weighted sum qubit for
j-th pattern in the training set. The weight operators are updated at time t using
the following rule:
wi(t + 1) = wi(t) + γ e xi .   (2)
where γ is the learning rate, e = ( d − y ) is the perceptron error and e xi
denotes the outer product of vectors e and xi . Once the weighted sum is
calculated for all the available patterns, then the set of activation operators can
be calculated using the form:
Fj = [[cos θj, −sin θj], [sin φj, cos φj]] .   (3)
where j = 1, 2, 3, ..., m, and m is the number of unique activation operators (repeated
activation operators are discarded), with m ≤ N, where N is the size of the training data
set. The parameters θj and φj are two real-valued angles calculated using the
form:
[[cos θj, −sin θj], [sin φj, cos φj]] [αj, βj]^T = [αdj, βdj]^T .   (4)
where [αj, βj]^T is the weighted sum qubit and [αdj, βdj]^T is the target qubit.
The aim of each activation operator is to map the weighted sum qubit onto the
given target and make it a normalized qubit satisfying Eq.(A2). After
calculating the set of all activation operators, the output of the autonomous
quantum perceptron is given using the superposition of all activation operators
in the following form:
y_output = Σ_{j=1}^{m} Fj Σ_{i=1}^{n} wi xi .   (5)
where y_output is the network output as a superposition of the set of output
qubits. This output represents the effect of the activation operators (interference) on the weighted sum qubit that results when any pattern is presented to the
network. Only one of these qubits will be the response of the network,
and it can be specified by the following form:
y_net = L( ( y_output ◦ y_output − C ) ◦ D ) .   (6)
where C = [1, 1, ..., 1]^T is a vector of ones and D is the vector of the target qubits.
The operation ◦ denotes the Hadamard product [15] (for more details,
see Appendix A), and the function L retains the entry with the smallest absolute value,
sets it equal to one and sets the remaining values to zero. Hence, the result
of Eq.(6) is a single qubit that represents the net response of the AQPNN for the
current input.
2.2 The learning algorithm of AQPNN
According to the description given above, the AQPNN learning algorithm is
divided into two main stages. The first stage is embedded in both Eq.(4) and
Eq.(5), where the AQPNN algorithm collects information about the problem
in hand by constructing a set of activation operators. In the second stage, the
AQPNN takes the decision about the network’s response according to Eq.(6),
based on the gathered information. The AQPNN learning algorithm can be
summarized in the following steps:
Step 1: Set all Fi = I (identity matrix). Then choose the initial weight operators wi randomly, set the learning rate 0 < γ < 1 and set iteration number k = 1,
Step 2: Calculate the weighted sum qubit for each given pattern using Eq.(1),
Step 3: Compare each weighted sum qubit for the patterns of each class with
all other weighted sum qubits of the other classes. We have two cases here:
1. If no weighted sum qubit of one class equals a weighted sum qubit of any
other class, then go to step 5; else, go to step 4.
2. If the value of any weighted sum qubit is zero, then go to step 4; else, go
to step 5.
Step 4: Update the weight operators using Eq.(2), set k = k + 1, go to step 2.
Step 5: Calculate the activation operator for each weighted sum qubit using
Eq.(4).
The superposition of output qubits of the network is given by Eq.(5) whereas
the net response of the AQPNN is given by Eq.(6).
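The following is a minimal NumPy sketch of Steps 1–5 above (not from the paper). The helper names, the particular angle choice when solving Eq.(4), and the simplified reading of the Step 3 test are our own assumptions:

```python
import numpy as np

def activation_operator(y, d):
    """Solve Eq.(4): find F = [[cos t, -sin t], [sin p, cos p]] with F @ y = d.
    The angles below are one valid solution; Eq.(4) generally admits several."""
    r, alpha = np.hypot(y[0], y[1]), np.arctan2(y[1], y[0])
    t = -np.arccos(np.clip(d[0] / r, -1.0, 1.0)) - alpha   # row 1: r*cos(t+alpha) = d0
    p = np.arcsin(np.clip(d[1] / r, -1.0, 1.0)) - alpha    # row 2: r*sin(p+alpha) = d1
    return np.array([[np.cos(t), -np.sin(t)],
                     [np.sin(p),  np.cos(p)]])

def train_aqpnn(patterns, gamma=0.1, max_iter=100):
    """Steps 1-5 of the AQPNN learning algorithm.
    `patterns` is a list of (list_of_input_qubits, target_qubit)."""
    n = len(patterns[0][0])
    w = [np.eye(2) for _ in range(n)]                      # Step 1: initial weight operators
    for _ in range(max_iter):
        ys = [sum(w[i] @ x[i] for i in range(n)) for x, _ in patterns]   # Step 2, Eq.(1)
        # Step 3 (simplified): weighted sums of different classes must differ and be non-zero
        separated = all(not np.allclose(ya, yb)
                        for (_, da), ya in zip(patterns, ys)
                        for (_, db), yb in zip(patterns, ys)
                        if not np.allclose(da, db))
        if separated and all(np.linalg.norm(y) > 1e-9 for y in ys):
            break
        for (x, d), y in zip(patterns, ys):                # Step 4: delta-rule update, Eq.(2)
            e = d - y
            for i in range(n):
                w[i] = w[i] + gamma * np.outer(e, x[i])
    ys = [sum(w[i] @ x[i] for i in range(n)) for x, _ in patterns]
    Fs = [activation_operator(y, d) for y, (_, d) in zip(ys, patterns)]  # Step 5, Eq.(4)
    return w, Fs

# usage: the quantum Not-gate patterns of Section 3.1
ket0, ket1 = np.array([1.0, 0.0]), np.array([0.0, 1.0])
patterns = [([ket1], ket0), ([ket0], ket1)]
w, Fs = train_aqpnn(patterns)
for (x, d), F in zip(patterns, Fs):
    y = sum(wi @ xi for wi, xi in zip(w, x))
    print(F @ y, "target:", d)   # each mapped output should match its target
```

Applied to the Not-gate patterns, the printed outputs match their targets after a single pass, in line with the behaviour reported in Section 3.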
3 The Computational Power of AQPNN
We proceed now to evaluate practically the computational power of the proposed
AQPNN algorithm. In this section, we show the results of using AQPNN in
solving four different problems. In the first two problems, we solve the problems
of quantum Not-gate and the Hadamard-gate. In the third problem we compute
the XOR-function, whereas in the fourth problem we achieve a classification task
application.
3.1 The Quantum Not-gate
Class A: The first pattern is P1 = { x1 = [0, 1]^T , d1 = [1, 0]^T }.
Class B: The second pattern is P2 = { x2 = [1, 0]^T , d2 = [0, 1]^T }.
The initial weight operator is chosen arbitrarily as w = [[1, 0], [0, 1]]. Once we introduce the two patterns to the AQPNN network we obtain the weighted sums, according to Eq.(1), as follows:
y1 = [0, 1]^T , y2 = [1, 0]^T .
Since y1 has a different value than y2, we can calculate the set of activation operators. Solving
[[cos θ1, −sin θ1], [sin φ1, cos φ1]] [0, 1]^T = [1, 0]^T
gives θ1 = −90, φ1 = 90 and, thereby, F1 = [[0, 1], [1, 0]].
Similarly, solving
[[cos θ2, −sin θ2], [sin φ2, cos φ2]] [1, 0]^T = [0, 1]^T
gives θ2 = −90, φ2 = 90 and, thereby, F2 = F1 = [[0, 1], [1, 0]] = F.
Therefore, the superposition output is y_output = F Σ_{i=1}^{n} wi xi. This means that the quantum Not-gate is trained after only one iteration.
3.2 The Hadamard-gate
Class A: The first pattern is P1 = { x1 = [1, 0]^T , d1 = (1/√2)[1, 1]^T }.
Class B: The second pattern is P2 = { x2 = [0, 1]^T , d2 = (1/√2)[1, −1]^T }.
The initial weight operator is chosen arbitrarily as w = [[1, 0], [0, 1]]. Once we introduce the two patterns to the AQPNN network we obtain the weighted sums, according to Eq.(1), as follows:
y1 = [1, 0]^T , y2 = [0, 1]^T .
In the same way, as y1 has a different value than y2, we can calculate the set of activation operators. For F1, solving
[[cos θ1, −sin θ1], [sin φ1, cos φ1]] [1, 0]^T = (1/√2)[1, 1]^T
gives θ1 = −45, φ1 = 135 and, thereby, F1 = (1/√2)[[1, 1], [1, −1]].
Similarly, for F2, solving
[[cos θ2, −sin θ2], [sin φ2, cos φ2]] [0, 1]^T = (1/√2)[1, −1]^T
gives θ2 = −45, φ2 = 135 and, thereby, F2 = F1 = (1/√2)[[1, 1], [1, −1]] = F.
Therefore, the superposition output is y_output = F Σ_{i=1}^{n} wi xi. This means that the Hadamard-gate is trained after only one iteration.
3.3 The XOR-Function
It is known that there are two classes in the XOR-function:
Class A: The first pattern is P1 = { x1 = [1, 0]^T , x2 = [1, 0]^T , d1 = [1, 0]^T }.
The second pattern is P2 = { x1 = [0, 1]^T , x2 = [0, 1]^T , d2 = [1, 0]^T }.
Class B: The third pattern is P3 = { x1 = [1, 0]^T , x2 = [0, 1]^T , d3 = [0, 1]^T }.
The fourth pattern is P4 = { x1 = [0, 1]^T , x2 = [1, 0]^T , d4 = [0, 1]^T }.
Assume a random initial weight operator takes the value w1 = w2 = [[1.1, 1.2], [0, 0]]. If we introduce the four patterns into the AQPNN network we obtain, after only one iteration, the weighted sum of each pattern as follows:
y1 = [2.2, 0]^T , y2 = [2.4, 0]^T , y3 = [2.3, 0]^T , y4 = [2.3, 0]^T .
It is easy to observe that the first couple of weighted sum qubits has different values than the other couple (i.e., either of y1 or y2 has a different value than y3 and y4). Also, we may observe that y3 = y4. As these two patterns are in the same class B, this implies that only three activation operators are needed, which take the following forms:
F1 = [[0.4545, −0.8907], [0, 1]] , F2 = [[0.4167, −0.9091], [0, 1]] , F3 = F4 = [[0, −1], [0.4348, 0.9005]].
Then, the superposition output can be calculated as:
y_output = Σ_{j=1}^{m=3} Fj Σ_{i=1}^{n=2} wi xi .
Table 1 shows a comparison between the proposed AQPNN perceptron, the Zhou
perceptron [13] and the classical perceptron; the classical perceptron is not
applicable in the case of using one neuron. The proposed perceptron gives the final
output after only one iteration, whereas the Zhou perceptron gives the final result
after 16 iterations [13]. It is therefore clear that the AQPNN reduces the number of
computation steps needed to obtain the final results.
Table 1: A comparison between the proposed AQPNN perceptron, the Zhou perceptron and the classical perceptron in solving the XOR-function
Algorithm name: AQPNN | Zhou Perceptron | Classical Perceptron (one neuron)
No. of iterations: 1 | 16 | Not applicable with one neuron
3.4 Two-overlapped classification problem
We proceed now to use the proposed AQPNN approach in a classification application. The application we use here is a typical two-overlapped-classes classification problem, which can be regarded as a complex generalization of the XOR problem [16]; see Figure 2. It has two classes: the first is an oval-shape class, which has the target |0⟩ = [1, 0]^T, with arbitrary input patterns given in Table 2. The second class is a square-shape class, which has the target |1⟩ = [0, 1]^T, with arbitrary input patterns given in Table 3.
Figure 2: Two overlapped classes classification problem
Table 2: Training input patterns of the oval-shape class
P1 (0.1,0), P2 (0.1,0.2), P3 (0,0.1), P4 (-0.1,0.2), P5 (-0.1,0), P6 (0,-0.1), P7 (0.1,-0.2), P8 (-0.1,-0.2)
Table 3: Training input patterns of the square-shape class
P9 (0.1,0.1), P10 (0,0), P11 (0,0.2), P12 (-0.1,0.1), P13 (0.1,-0.1), P14 (-0.1,-0.1), P15 (0,-0.2)
It is clear that the values of the input patterns are classical data, i.e., real values, so they must be transformed into qubits using the qubit normalization equation Eq.(A2). For example, for the pattern P1 = (0.1, 0), where a = 0.1 and b = √(1 − (0.1)²), we may have P1 = { x1 = [0.1, 0.9950]^T , d1 = [1, 0]^T }.
In this experiment, we chose only 15 patterns as training data, whereas testing is
carried out over 176 patterns. If the learning rate is set to 0.1, the classification
rate approaches 97.73% after only one iteration of the learning process.
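As a side note, the classical-to-qubit encoding used above can be sketched as follows (a hypothetical helper, not from the paper), mapping a real value a with |a| ≤ 1 to the qubit [a, √(1 − a²)]^T so that Eq.(A2) holds:

```python
import math

def to_qubit(a):
    """Map a classical value a (|a| <= 1) to a normalized qubit [a, b]^T
    with |a|^2 + |b|^2 = 1, as in Eq.(A2)."""
    return (a, math.sqrt(1.0 - a * a))

print(to_qubit(0.1))   # (0.1, 0.9950...), matching the encoding of pattern P1
```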
4 Discussion
It is now worth discussing the performance of the proposed algorithm. It is clear
from the above examples that the computational power of the proposed AQPNN
is high; however, several observations are worth recording. The first observation
is that, under equal weight operators, the AQPNN model, in some applications,
does not utilize all the training data like other perceptron algorithms [2, 13, 14].
For example, in the first two situations, i.e., the Not-gate and the Hadamard-gate,
we need only one training input (because we use only the unique activation
operator), whereas in the XOR-function situation, three training inputs are
required in order to accomplish the learning process. In the three situations, the
AQPNN is capable of reducing both the computation time and the number of
activation operators.
The second observation concerns the relation between the initial weight
operators and the activation operators. In the situations of the quantum Not-gate and the quantum Hadamard-gate, the initial weight operator was a unitary operator, whereas in the case of the XOR-function it was not. The reason for this is the nature of a unitary operator U,
for which U U† = I. Then, using Eq.(3), which gives the form of the activation
operators, we have:
Fj Fj† = [[cos θj, −sin θj], [sin φj, cos φj]] [[cos θj, sin φj], [−sin θj, cos φj]] = [[1, sin(φj − θj)], [sin(φj − θj), 1]]
Obviously, the value of the activation operator depends on the values of θj and φj,
which in turn depend on the initial weight operators and the training data.
5 Conclusion and Future Work
This paper presented a novel autonomous quantum perceptron neural network (AQPNN)
algorithm that enables real-time computations. The developed
algorithm represents a good computational alternative to the classical perceptron
neural network approach. The AQPNN constructs self-adaptive activation operators
capable of accomplishing the learning process in a limited number of iterations,
which reduces the overall computational cost. These activation operators can be applied in both quantum and classical applications. The efficiency of the proposed
algorithm is evaluated by solving four different problems, where promising and
comparable results are obtained. In addition, the AQPNN algorithm uses a limited
number of training data samples for training and generalizes well in testing.
Using the proposed perceptron in real-world applications is one of our future aims.
References
[1] M. Hagan, H. Demuth and M. Beale (1996), Neural Network Design, PWS
publishing Company (USA) .
[2] F. Rosenblatt (1957), The Perceptron - a perceiving and recognizing automaton, Tech. report 85-460-1, Aeronautical Lab., Cornell Univ..
[3] F. Shafee (2007), Neural networks with quantum gated nodes, Engineering
Applications of Artificial Intelligence , 20,4, pp. 429-437.
[4] R.Zhou (2010), Quantum Competitive Neural Network,International Journal
of Theoretical Physics, 49, pp. 110-119.
[5] A. Sagheer and N. Metwally (2010), Communication via Quantum Neural
Networks,the 2nd world congress on nature and biologically inspired computing, NaBIC, IEEE, pp. 418-422.
[6] D. Ventura and T. Martinez (2000), Quantum Associative Memory, Information Sciences, 124, pp. 273-296.
[7] C. Li and S. Li (2008), Learning algorithm and application of quantum BP
neural networks based on universal quantum gates,19,1, pp. 167-174.
[8] M. Nielsen and I. Chuang (2000.), Quantum Computation and Quantum
Information,Cambridge University Press (Cambridge).
[9] S. C. Kak (1995), Quantum Neural Computing, Advances in Imaging and
Electron Physics, 94, pp. 259-314.
[10] T. Menneer (1998), Quantum Artificial Neural Networks, Ph. D. thesis of
The Univ. of Exeter, UK,.
[11] M. V. Altaisky (2001), Quantum neural network, quant-ph/0107012.
[12] L. Fei and Z. Baoyu (2008), A study of quantum neural networks, Neural
Networks and Signal Processing, IEEE, 1, pp.539-542.
[13] R. Zhou, L .Qin and N. Jiang (2006), Quantum Perceptron Network,
The 16th International Conference on Artificial Neural Networks, ICANN
, LNCS,4131, pp. 651-657.
[14] M. Siomau (2013), A Quantum Model for Autonomous Learning Automata,
quant-ph/1210.6626v3.
[15] R. Horn and C. Johnson (1999), Topics in matrix analysis, Cambridge
University Press (Cambridge).
[16] H. Xiao and M. Cao (2009), Hybrid Quantum Neural Networks Model Algorithm and Simulation, The 5th International Conference on Natural Computation, 1, pp. 164-168.
Appendix A
Qubit: The smallest element that stores information in a quantum computer is called
the quantum bit (qubit). The qubit takes either the value |0⟩ or |1⟩, or a superposition
of these states in the form:
|ψ⟩ = a|0⟩ + b|1⟩ .   (A1)
where a, b are complex numbers called the probability amplitudes. The qubit
state |ψ⟩ collapses into either basis state |0⟩ or |1⟩ with probability |a|² or |b|²,
respectively, where
|a|² + |b|² = 1.   (A2)
Hadamard product (matrices) In mathematics, the Hadamard product is a
binary operation that takes two matrices of the same dimensions, and produces
another matrix where each element i,j is the product of elements i,j of the
original two matrices. For two matrices, A, B of the same dimension, m x n the
Hadamard product, A ◦ B, is a matrix, of the same dimension as the operands,
with elements given by
(A ◦ B)i,j = (A)i,j (B)i,j .   (A3)
Example: Suppose two matrices
A = [[1, 2], [3, 4]] , and B = [[5, 6], [7, 8]] ,
then the Hadamard product is
A ◦ B = [[1(5), 2(6)], [3(7), 4(8)]] = [[5, 12], [21, 32]] .
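For readers who want to experiment, the Hadamard product is simply element-wise multiplication in array libraries; a minimal NumPy illustration (not part of the original text):

```python
import numpy as np

A = np.array([[1, 2], [3, 4]])
B = np.array([[5, 6], [7, 8]])
print(A * B)   # element-wise (Hadamard) product: [[ 5 12] [21 32]]
```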
| 9 |
Energy-Efficient Cooperative Cognitive Relaying
Schemes for Cognitive Radio Networks
arXiv:1406.2255v3 [cs.NI] 26 Oct 2017
Ahmed El Shafie, Student Member, IEEE, Tamer Khattab, Member, IEEE, Amr El-Keyi, Member, IEEE
Abstract—We investigate a cognitive radio network in which
a primary user (PU) may cooperate with a cognitive radio
user (i.e., a secondary user (SU)) for transmissions of its data
packets. The PU is assumed to be a buffered node operating in
a time-slotted fashion where the time is partitioned into equal-length slots. We develop two schemes which involve cooperation
between primary and secondary users. To satisfy certain quality
of service (QoS) requirements, users share time slot duration and
channel frequency bandwidth. Moreover, the SU may leverage the
primary feedback message to further increase both its data rate
and satisfy the PU QoS requirements. The proposed cooperative
schemes are designed such that the SU data rate is maximized
under the constraint that the PU average queueing delay is
maintained less than the average queueing delay in case of non-cooperative PU. In addition, the proposed schemes guarantee
the stability of the PU queue and maintain the average energy
emitted by the SU below a certain value. The proposed schemes
also provide more robust and potentially continuous service for
SUs compared to the conventional practice in cognitive networks
where SUs transmit in the spectrum holes and silence sessions
of the PUs. We include primary source burstiness, sensing
errors, and feedback decoding errors to the analysis of our
proposed cooperative schemes. The optimization problems are
solved offline and require a simple 2-dimensional grid-based
search over the optimization variables. Numerical results show
the beneficial gains of the cooperative schemes in terms of SU
data rate and PU throughput, average PU queueing delay, and
average PU energy savings.
Index Terms—Cognitive radio, rate, queue stability, optimization problems.
I. INTRODUCTION
Secondary utilization of the licensed frequency bands can
efficiently improve the spectral density of the under-utilized
licensed spectrum. Cognitive radio (secondary) users are intelligent devices that use cognitive technologies to adapt
with variations, and exploit methodologies of learning and
reasoning to dynamically reconfigure their communication
parameters [2]–[4]. This allows the secondary users (SUs) to
utilize the spectrum whenever it is free to use and with the
maximum possible data rates.
Cooperative diversity is a recently emerging technique for
wireless communications that has gained wide interest [5]–[8]
Part of this paper was published in the IEEE International Conference on
Computing, Networking and Communications (ICNC), 2015 [1].
A. El Shafie is with the University of Texas at Dallas, USA (e-mail:
[email protected]).
T. Khattab is with Electrical Engineering, Qatar University, Doha, Qatar
(email: [email protected]).
A. El-Keyi is with Wireless Intelligent Networks Center (WINC), Nile
University, Giza, Egypt ([email protected]).
The work of T. Khattab is supported by Qatar National Research Fund
(QNRF) under grant number NPRP 7-923-2-344. The statements made herein
are solely the responsibility of the authors.
where multiple channels are used to communicate the same
information symbol. Recently, cooperation in cognitive radio
networks, referred to as the cooperative cognitive relaying,
where the SU helps in relaying some of the undelivered
primary user (PU) packets, has got extensive attention [9]–
[16]. In particular, the SU functions as a relay node for the PU
whenever the PU packet cannot be decoded at its destination.
The authors of [9] showed that the maximum achievable rate
can be achieved by simultaneous transmissions of PU and
SU data signals over the same frequency band. The SU data
signals are jointly encoded with PU data signals via dirty-paper
coding techniques. Hence, the SUs know perfectly the PU’s
data. In [10], the authors assumed that the SU decodes-and-forwards the undelivered PU packets during the idle periods
of the PU. The SU maximizes its throughput by adjusting its
transmit power level.
A. Related Work
In [12], the authors investigated the scenario of deploying
a dumb relay node in cognitive radio networks to increase
network spectrum efficiency. The relay node aids both the
PU and the SU. The proposed scheme is investigated for a
network consisting of a pair of PUs and a pair of SUs. In
[13], the authors considered a network with one buffered PU
and one buffered SU where the SU is allowed to access the
channel when the PU’s queue is empty. The SU has a relaying
queue to store a fraction of the undelivered PU packets
controlled through an adjustable admittance factor. A priority
of transmission is given to the relayed PU packets over the SU
own packets. The SU aims at minimizing its average queueing
delay subject to a power budget for the relayed primary
packets. In [15], the authors characterized some fundamental
issues for a wireless shared channel composed of one PU
and one SU. The authors considered a general multi-packet
reception model, where concurrent packet transmission could
be correctly decoded at receivers with a certain probability
that is characterized by the system’s parameters (e.g., packet
length, data rates, time slot duration, bandwidth, etc.). The PU
has unconditional channel access, whereas the SU accesses the
channel based on the activity state of the PU, i.e., active or
inactive, during a time slot. The spectrum sensing process is
impractically assumed to be perfect. The SU is assumed to
be capable of relaying the undelivered PU packets as in [13].
If the PU is sensed to be inactive during a time slot, the SU
accesses the channel with probability one, and if the PU is
active, the SU randomly accesses the channel simultaneously
with the PU or attempts to decode the primary packet with
the complement probability. The maximum stable throughput
region of the network is obtained via optimizing over the
access probability assigned by the SU during the active periods
of the PU.
Releasing portions of primary systems time slot duration
and bandwidth for the SUs has been considered in several
works, e.g., [11], [14], [17]. In [11], the authors proposed a
spectrum leasing scheme in which PUs may lease their owned
bandwidth for a fraction of time to SUs based on decode-andforward (DF) relaying scheme and distributed space-time coding. In [14], the authors proposed a new cooperative cognitive
scheme, where the PU releases portion of its bandwidth to the
SU. The SU utilizes an amplify-and-forward relaying scheme.
It receives the primary data during the first half of the time slot,
then forwards the amplified data during the second half of the
time slot. In [17], the authors considered an SU equipped with
multiple antennas sharing the spectrum with a single-antenna
energy-aware PU, where the PU aims at maximizing its mean
transmitted packets per joule. The users (SU and PU) split the
time slot duration and the total bandwidth to satisfy certain
quality of service (QoS) for the PU that cannot be attained
without cooperation. Both users maintain data buffers and are
assumed to send one data packet per time slot.
B. Contributions
Given the need for shorter transmission times and low latency communications [18]–[20], we develop two cooperative
cognitive schemes which allow the SU to transmit its data
bits simultaneously with the PU under the constraint of short
communication times and the presence of practical sensing and
feedback cost considerations. Under our proposed schemes,
the PU may cooperate with the SU to enhance its QoS,
i.e., to enhance its average queueing delay and maintain its
queue stability. Hence, cooperation is optional for the PUs.
If cooperation is beneficial for the PU, it releases portion of
its bandwidth and time slot duration for the SU. In turn, the
SU incurs portion of its transmit energy to relay the primary
packets. The SU employs a DF relaying scheme. The time
slot is divided into several intervals (or time phases) that
change according to the adopted cooperative scheme, as will
be explained later. In our first proposed cooperative scheme,
the SU blindly forwards what it receives from the PU even if
the primary destination can decode the data packet correctly
at the end of the PU transmission phase. On the other hand,
in our second proposed scheme, the SU forwards what it
receives from the PU if and only if the primary destination
could not decode the PU transmission of the primary packet;
or if the SU considers the feedback message as a negative-acknowledgement from the primary destination.1 However,
as will be explained later, there is a cost for using the
second cooperative scheme which is a reduction in the time
available for transmission data of users due to the presence
of an additional feedback duration. These practical issues are
quantified analytically in this work.
1 In this paper, the primary feedback channel is assumed to be modeled as
an erasure channel model and can be undecodable at the secondary terminal.
This will be justified in Section VI.
The contributions of this paper are summarized as follows
• We design two cooperative cognitive schemes which
involve cooperation between the PUs and the SUs. The
two schemes differ in terms of time slot structure and
primary feedback mechanism. Both schemes achieve a
significant PU energy savings.
• We consider practical assumptions for the cognitive radio
network. Precisely, unlike most existing literature, we
consider spectrum sensing errors and primary feedback
reception errors at the SU. Moreover, we consider the
impact of the time durations spent on spectrum sensing
and feedback message transmission on the achievable
data rates. In addition, the PU data burstiness is taken
into consideration.
• We propose two QoS measures for the PU and include
them in the proposed optimization problems as constraints. Specifically, we assume a constraint on the PU
average queueing delay and a constraint on the stability
of the PU queue. Moreover, we consider a practical
energy constraint on the SU average transmit energy. The
optimization problems are stated under such constraints.
This paper is organized as follows. In the next section, we
introduce the system model adopted in this paper. In Section
III, we analyze the PU queue and derive the PU average
queueing delay and PU queue stability condition. Our first
proposed cooperative scheme is explained in Section V. In
Section VI, we describe our second proposed cooperative
scheme. The numerical results are shown in Section VIII. We
finally conclude the paper in Section IX.
II. SYSTEM MODEL
We consider a wireless network composed of orthogonal
primary channels, where each channel is used by one PU. Each
primary transmitter-receiver pair coexists with one secondary
transmitter-receiver pair. For simplicity, we focus on one
of those orthogonal channels.2 Each orthogonal channel is
composed of one secondary transmitter ‘s’, one primary transmitter ‘p’, one secondary destination ‘sd’ and one primary
destination ‘pd’. The SU is equipped with two antennas: one
antenna for data transmission and the other for data reception
and spectrum sensing. The PU is equipped with a single
antenna. Moreover, the PU has an infinite-length buffer for
storing fixed-length packets. The arrivals at the PU queue
are independent and identically distributed (i.i.d.) Bernoulli
random variables from one time slot to another with mean
λp ∈ [0, 1] packets per time slot. Thus, the probability of a
data packet arrival at the PU queue in an arbitrary time slot
is λp . A list of the key variables is given in Table I.
A. Channel Model
We assume an interference wireless channel model, where concurrent transmissions are assumed to be lost if the received signal-to-noise-plus-interference ratio (SINR) is less than a predefined threshold, or equivalently, if the instantaneous channel gain is lower than a predefined value.3
2 As argued in the cognitive radio literature, e.g., [9]–[17] and the references therein, the proposed cooperative cognitive scheme and theoretical development presented in this paper can be generalized to cognitive radio networks with more PUs and more SUs.
TABLE I: List of Key Variables.
τs: Spectrum sensing time duration
T and W: Time slot (coherence time) duration and channel total bandwidth, respectively
R̃: Average SU data rate
P◦: Average transmit information power
Qp: Queue at the PU
τf: Feedback message duration
Re^(ℓ) and Rb^(ℓ): SU transmission data rate under scheme Pℓ when the PU queue is empty and nonempty, respectively
µp,c^(ℓ): Average service rate of the PU queue under scheme Pℓ
αj,k: Channel gain of the j − k link with mean σ²j,k
PFA: False alarm probability at the SU
PMD: Misdetection probability at the SU
λp: Average arrival rate at the PU’s queue
Dp,c^(ℓ): Average queueing delay at the PU queue under scheme Pℓ
f: Probability that SU decodes the PU’s feedback message
Dp,nc: Average queueing delay at the PU queue with no cooperation
µp,nc: Average service rate of the PU queue with no cooperation
Eℓ: Secondary mean transmit energy under scheme Pℓ
E: Maximum transmit energy by the SU
b: PU packet size in bits
Ti and Wi: Time and bandwidth assigned to user i ∈ {p, s} under cooperation
We
propose a DF relaying technique, where the SU decodes and
then forwards the PU packet. The SU is assumed to be a full-duplex terminal, which means that it can receive and transmit
at the same time. To avoid the loopback self-interference
impairments which can significantly reduce the achievable
rates, we assume that the SU cannot transmit and receive over
the same frequency band. However, the SU can transmit data
over a frequency band and receive over the other.
Both SU and PU transmit with a fixed power spectral
density of P◦ Watts/Hz. The total transmit power changes
based on the used bandwidth per transmission. When a node
transmit over a bandwidth of Wj Hz, the average transmit
power is P◦ Wj Watts. Time is slotted and a slot has a
duration of T seconds. Channel coefficient between node
j and node k, denoted by ζj,k , is distributed according to
a circularly symmetric Gaussian random variable, which is
constant over one slot, but changes independently from one
time slot to another. The expected value of the channel gain
αj,k = |ζj,k|² is σ²j,k, where | · | denotes the magnitude of
a complex argument. Each receiving signal is perturbed by a
zero-mean additive white Gaussian noise (AWGN) with power
spectral density N◦ Watts/Hz. The outage of a channel (link)
occurs when the transmission rate exceeds the channel rate.
The outage probabilities between two nodes j and k without and
with the presence of interference from other nodes are denoted
by Pj,k and P^(I)j,k, respectively. These outage probabilities are
functions of the number of bits in a data packet, the slot
duration, the transmission bandwidth, the transmit powers, and
the average channel gains, as detailed in Appendices A and B.
3 This will be discussed later in Appendix B.
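Although Appendices A and B are not reproduced here, the no-interference outage probability implied by this Rayleigh-fading model (exponentially distributed channel gain with mean σ²j,k) takes the familiar form 1 − exp(−N◦(2^(b/(W Ttx)) − 1)/(P◦ σ²j,k)), consistent with the expression for µp,nc used later in the paper. A small illustrative sketch, with made-up parameter values and hypothetical helper names:

```python
import math

def outage_prob(b, W, T_tx, P0, N0, sigma2):
    """Outage probability of a j->k link without interference, assuming
    Rayleigh fading (exponential channel gain with mean sigma2): the link
    is in outage when W*T_tx*log2(1 + P0*alpha/N0) < b.
    P0 and N0 are power spectral densities (Watts/Hz), so the receive SNR
    P0*alpha*W / (N0*W) does not depend on the bandwidth itself."""
    gain_threshold = (2 ** (b / (W * T_tx)) - 1) * N0 / P0
    return 1.0 - math.exp(-gain_threshold / sigma2)

# illustrative numbers only (not taken from the paper)
print(outage_prob(b=1000, W=1e5, T_tx=0.01, P0=1e-4, N0=1e-9, sigma2=1.0))
```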
B. Primary Access and Secondary Access Permission
The PU transmits its data whenever it has a packet to send.
That is, it does not have any restrictions on using the spectrum.
Without cooperation, the PU uses the entire time slot duration
and total bandwidth for its own data signal transmissions,
while the SU does not gain any spectrum/channel access even
if the PU’s queue is empty. This is because, in practice,
the SU may erroneously misdetect the primary activity and
hence it may cause harmful interruption on the primary system
operation, e.g., collisions and packet loss, that can cause
severe packet losses and data delays. In case of cooperation,
and based on the proposed cooperative cognitive schemes that
will be explained shortly, the PU will release a portion of its
time slot duration and total bandwidth to the SU. The SU
will then be allowed to use the spectrum. In practice, the SU
may get permission to access the spectrum if it either provides
economic incentives for the PU or performance enhancement
incentives. Similar to [5], [14], [17] and the references therein,
we consider performance enhancement incentives.
III. QUEUE STABILITY, PU QUEUE MODEL, AND PU QUEUEING DELAY
A. Stability
A queueing system is said to be stable if its size is bounded
all the time. More specifically, let Q^T denote the length of
queue Q at the beginning of time slot T ∈ {1, 2, 3, . . .}.
Queue Q is said to be stable if
lim_{x→∞} lim_{T→∞} Pr{Q^T < x} = 1.   (1)
For the PU queue, we adopt a late-arrival model where a newly
arrived packet to the queue is not served in the arriving time
slot even if the queue is empty.4 Let ATp denote the number of
arrivals to queue Qp in time slot T, and HpT denote the number
of departures from queue Qp in time slot T. The queue length
evolves according to the following form:
Qp^{T+1} = (Qp^T − Hp^T)^+ + Ap^T   (2)
where (z)^+ denotes max(z, 0). We assume that departures
occur before arrivals, and the queue size is measured at the
early beginning of the time slot [21].
B. PU Queueing Delay
Let µp = H̃p, where Ṽ denotes the expected value of V,
be a general notation for the mean service rate of the PU
queue. Solving the state balance equations of the Markov chain
modeling the PU queue (Fig. 1), it is straightforward to show
that the probability that the PU queue has m ≥ 1 packets,
denoted by 0 ≤ νm ≤ 1, is given by
νm = (ν0 / (1 − µp)) ( λp (1 − µp) / ((1 − λp) µp) )^m = (ν0 / (1 − µp)) η^m ,  m = 1, 2, . . . , ∞   (3)
where η = λp (1 − µp) / ((1 − λp) µp). Since the sum over all states’ probabilities
is equal to one, i.e., Σ_{m=0}^{∞} νm = 1, the probability of the
PU queue being empty is obtained by solving the following
equation
ν0 + Σ_{m=1}^{∞} νm = ν0 + ν0 Σ_{m=1}^{∞} (1/(1 − µp)) η^m = 1   (4)
After some mathematical manipulations and simplifications,
ν0 is given by
ν0 = 1 − λp/µp   (5)
The PU queue is stable if µp > λp. Applying Little’s law, the
PU average queueing delay, denoted by Dp, is then given by
Dp = (1/λp) Σ_{m=0}^{∞} m νm   (6)
Using (3), Dp is rewritten as
Dp = (ν0 / (λp (1 − µp))) Σ_{m=1}^{∞} m η^m   (7)
Substituting with ν0 into Dp, the PU average queueing delay
is then given by
Dp = (1 − λp) / (µp − λp)   (8)
Following are some important remarks. Firstly, the PU average
queueing delay cannot be less than one time slot, which is
attained when the denominator of (8) equals to the numerator.
This condition implies that µp = 1 packets/time slot, i.e., the
minimum of Dp is attained if the service rate of the PU queue
is equal to unity.
4 This queueing model is considered in many papers, see for example, [10],
[15], [21] and the references therein.
Fig. 1: Markov chain of the PU’s queue. State self-transitions
are omitted for visual clarity.
Fig. 2: PU average queueing delay versus µp for different
values of λp (analytical and simulated curves for λp = 0.1, 0.2 and 0.3 packets/slot).
To verify the average queueing delay expression and show
the impact of both λp and µp , we plotted the curves in Fig. 2.
As shown in the figure, increasing µp decreases the average
queueing delays. Moreover, the average queueing delay is
increasing with the increase of the data arrival rate λp . As
shown analytically, the minimum average queueing delay is 1
time slot when µp = 1 packets/slot.
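For readers who wish to reproduce such curves, the following is a minimal Python sketch (ours, not from the paper) that simulates the late-arrival queue of Eq. (2) with Bernoulli arrivals of mean λp and per-slot service success probability µp, and compares the empirical average delay with Eq. (8):

```python
import random

def simulate_delay(lam, mu, n_slots=200_000, seed=0):
    """Simulate the late-arrival PU queue of Eq. (2) and return the average
    queueing delay in slots (departures happen before arrivals in each slot)."""
    rng = random.Random(seed)
    queue = []          # arrival slot of each buffered packet
    delays = []
    for t in range(n_slots):
        # service: the head-of-line packet departs with probability mu
        if queue and rng.random() < mu:
            delays.append(t - queue.pop(0))
        # late arrival: a new packet joins with probability lam
        if rng.random() < lam:
            queue.append(t)
    return sum(delays) / len(delays)

lam, mu = 0.2, 0.5
analytical = (1 - lam) / (mu - lam)         # Eq. (8)
print(analytical, simulate_delay(lam, mu))  # the two values should be close
```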
Secondly, the primary packets’ average queueing delay,
Dp, decreases as the mean service rate of
the PU queue, µp, increases. On the other hand, µp depends on the
channels outage probabilities which, in turn, are functions of
the links’ parameters, packet size, transmission time durations,
occupied bandwidth, and many other parameters as shown in
Appendices A and B.
Hereinafter, when necessary, we append a second subscript
to the used notations to distinguish between the cases of
cooperation (‘c’) and no cooperation (‘nc’). We also append a
new superscript to distinguish between the proposed schemes.
IV. NON-COOPERATIVE AND COOPERATIVE USERS
A. Non-Cooperative Users
Let T denote the time slot duration that a PU is allowed
to transmit data over a total bandwidth of W Hz. Without
cooperation, the time slot is divided into two non-overlapped
phases: a transmission data phase, which takes place over
the time interval [0, T − τf ]; and a feedback phase whose
length is τf seconds, which takes place over the time interval
[T − τf , T ]. The feedback phase is used by the primary destination to notify the primary transmitter about the decodability
status of its packet. If the PU queue is nonempty, the PU
transmits exactly one packet of size b bits to its respective
5
Without cooperation, a data packet at the head of the PU
queue is served if the p → pd link is not in outage. Using
the derived results in Appendix A for the channel outage
probability, the mean service rate of the PU queue, denoted
by µp,nc , is given by
b
2 W (T −τf ) − 1
(9)
µp,nc = exp − N◦
2
P◦ σp,pd
It is noteworthy from (9) that increasing the feedback duration,
τf , decreases the service rate of the PU queue. This is
because the time available for transmission data decreases with
increasing τf ; hence, the outage probability increases which
reduces the service rate. Since the PU transmits with a fixed
rate of Rp = W (Tb−τf ) bits per channel use, increasing W
or T decreases the channel outage probability as seen in (9).
However, increasing Rp decreases the throughput since the
number of decoded bits per seconds is decreased. Hence, one
should compute the number of decoded bits per second per
Hz which is given by
b
τf
2 W T (1− T ) − 1
b
µp,nc = exp − N◦
(10)
2
P◦ σp,pd
WT
Letting Rp =
b
WT
, we have
µp,nc = exp
− N◦
2
Rp
τf
)
(1−
T
−1
Rp
(11)
2
P◦ σp,pd
Using the first derivative of µp,nc in (10) with respect to b,
the optimal packet size is
2
P◦ σp,pd
W
N◦
τf
b⋆ = W T (1 − )
(12)
T
ln(2)
where W(·) is Lambert-W (omega) function. From this interesting result, increasing the feedback duration τf will decrease
the packet size. This is expected since the allowed time to
send a data packet will decrease. On the other hand, we can
see that increasing the time slot duration T or the average
P◦ σ2
at the primary destination increases the
receive SNR Np,pd
◦
optimal packet size. This implies that more packet size can be
supported by the communication system. However, increasing
T and W linearly increase the optimal packet size. When T
2.5
PU throughput [bits/sec/Hz]
destination. The PU and primary destination implement an
Automatic Repeat-reQuest (ARQ) error control scheme. The
primary destination uses the cyclic redundancy code (CRC)
bits attached to each packet to ascertain the decodability status
of the received packet. The retransmission process is based on
an acknowledgment/negative-acknowledgement (ACK/NACK)
mechanism, in which short-length packets are broadcasted by
the primary destination to inform the primary transmitter about
its packet reception status. If the PU receives an ACK over the
time interval [T − τf , T ], it removes the data packet stored at
the head of its queue; otherwise, a retransmission of the packet
is generated at the following time slot(s). The ARQ scheme
is untruncated which means that there is no maximum on the
number of retransmissions and an erroneously received packet
is retransmitted until it is decoded correctly at the primary
destination [10], [13], [15], [22].
2
1.5
1
0.5
0
0
1
2
3
4
5
6
7
Fig. 3: PU throughput [bits/sec/Hz] versus Rp [bits/sec/Hz].
is sufficiently longer than τf , this leads to
2
P◦ σp,pd
W
N
◦
b⋆ = W T
(13)
ln(2)
Thus, the number of bits per channel use Rp that maximizes
the throughput (in bits/sec/Hz) is
2
P◦ σp,pd
⋆
W
N
b
◦
R⋆p =
=
(14)
WT
ln(2)
To verify our analytical finding and show the impact of Rp
on the PU throughput [bits/sec/Hz], we plot Fig. 3. As can be
seen from Fig. 3, the PU throughput increases with Rp until a
peak is reached, then the throughput decreases until it reaches
zero. Hence, there is an optimal value for the packet size (b⋆ or
R⋆p for a given T W ) that maximizes the PU throughput. This
value is given by (14). Increasing the average receive SNR
2
P◦ σp,pd
N◦ increases the PU throughput and also increases
the optimal R⋆p . This matches our discussion below (12).
According to (8), and using (9), the PU average queueing delay in the case of a non-cooperative PU is given by
D_{p,nc} = \frac{1-\lambda_p}{\mu_{p,nc}-\lambda_p} = \frac{1-\lambda_p}{\exp\left(-\frac{N_\circ\left(2^{\frac{b}{W(T-\tau_f)}}-1\right)}{P_\circ\,\sigma_{p,pd}^2}\right)-\lambda_p} \qquad (15)
with \lambda_p < \mu_{p,nc}, which represents the stability condition of the PU queue when there is no cooperation.
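To make these closed-form expressions concrete, the short Python sketch below evaluates the non-cooperative service rate (9), the Lambert-W optimal packet size (12), the corresponding rate (14) and the queueing delay (15). The parameter values are illustrative assumptions for this sketch only, not the values used in the paper's numerical section.

```python
import numpy as np
from scipy.special import lambertw

# Illustrative parameter values (assumptions for this sketch): bandwidth W [Hz],
# slot T [s], feedback tau_f [s], packet size b [bits], transmit PSD P0 [W/Hz],
# noise PSD N0 [W/Hz], and mean p->pd channel gain sigma2_ppd.
W, T, tau_f = 10e6, 5e-3, 0.25e-3
b = 5000.0
P0, N0, sigma2_ppd = 1e-10, 1e-11, 0.05

snr = P0 * sigma2_ppd / N0            # average receive SNR at the primary destination

def mu_p_nc(b):
    """Mean service rate of the PU queue without cooperation, Eq. (9)."""
    return np.exp(-(2.0 ** (b / (W * (T - tau_f))) - 1.0) / snr)

def throughput(b):
    """Decoded bits per second per Hz, Eq. (10): (b / (W T)) * mu_p_nc(b)."""
    return (b / (W * T)) * mu_p_nc(b)

# Optimal packet size, Eq. (12), and the corresponding rate, Eq. (14).
b_star = W * T * (1.0 - tau_f / T) * np.real(lambertw(snr)) / np.log(2.0)
R_p_star = b_star / (W * T)

# PU average queueing delay without cooperation, Eq. (15), for a given arrival rate.
lambda_p = 0.2                         # packets/slot (assumed)
mu = mu_p_nc(b)
D_p_nc = (1.0 - lambda_p) / (mu - lambda_p) if lambda_p < mu else np.inf

print(f"b* = {b_star:.1f} bits, R_p* = {R_p_star:.3f} bits/sec/Hz, "
      f"throughput(b*) = {throughput(b_star):.3f}, D_p_nc = {D_p_nc:.2f} slots")
```

Sweeping b in throughput(b) reproduces the single-peak behaviour shown in Fig. 3, with the maximizer close to b^\star.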
B. Cooperative Users
When the SU is able to assist the PU with relaying a portion
of the primary packets, the PU, in return, may release a portion
of its spectrum to the SU for its own data transmission if
cooperation is beneficial for the PU. In addition to releasing
some bandwidth for the SU, the PU releases a portion of
its time slot duration to the SU to retransmit the primary
packet. If the cooperation is beneficial for the PU, it cooperates
with the SU. If the PU queue is nonempty, the PU releases
Ws ≤ W Hz to the SU for its own data transmission, and
releases Ts seconds of the time slot to the SU for relaying the
primary packets. The used bandwidth for both transmission
and retransmission of the primary packet is Wp = W−Ws Hz
with transmission times Tp and Ts , respectively. Throughout
the paper, we use the analogy of subbands to distinguish
between the primary operational frequency subband, Wp , and
the secondary operational frequency subband, Ws .
1) Spectrum Sensing: The SU senses the primary subband,
Wp , for τs seconds from the beginning of the time slot to detect
the possible activities of the PU. If this subband is sensed to be
idle (unutilized by the PU), the SU exploits its availability by
sending some of its data bits. We assume that the SU employs
an energy-detection spectrum-sensing algorithm. Specifically,
the SU collects a number of samples over a time duration τs ≪
T , measures their energy, and then compares the measured
energy to a predefined threshold to make a decision on the
PU activity [23]. Detection reliability and quality depend on
the sensing duration, τs , and can be enhanced by increasing
τs . Specifically, as τs increases, the primary detection becomes
more reliable at the expense of reducing the time available for
secondary transmission over the primary subband if the PU is
actually inactive. This is the essence of the sensing-throughput
tradeoff in cognitive radio systems [23].
Since the sensing outcome is imperfect and subject to
errors due to AWGN, the SU may interfere with the PU
and cause some packet loss and collisions. To capture the
impact of sensing errors, we define PMD as the probability of
misdetecting the primary activity by the secondary terminal,
which represents the probability of considering the PU inactive
while it is actually active; and PFA as the probability that the
sensor of the secondary terminal generates a false alarm, which
represents the probability of considering the PU active while it
is actually inactive. The values of the sensing error probabilities
are derived in Appendix C.
2) Important Notes and Remarks: In the following, we
state some important notes regarding our proposed cooperative
schemes.
• A communication link is assumed to be ‘ON’ in a given
time slot if it is not in outage. In particular, a link is ON
if the instantaneous data rate of that link is higher than
the used transmission data rate at the transmitter. In this
case, the bit-error probability is very low and can
be neglected. Otherwise, the communication link is said
to be ‘OFF’ (i.e., unable to support the transmission rate).
In other words, the bit-error rate is unbounded (average
symbol error rate is almost 1) and data retransmission
should take place in the following transmission times.
• The CSI of the s → pd, p → s and s → sd links is assumed to be known accurately at the SU (a similar assumption of knowing the CSI at the transmitters is found in many papers, for example, [14] and the references therein).5 This allows the SU to better utilize the spectrum and to help the PU whenever necessary and possible.
5 Note that the channel coefficient between the SU and the primary destination can be estimated by the primary destination and fed back to the SU. The primary destination only needs to send the state of the channel, i.e., ON or OFF, which can be realized through a one-bit binary feedback pilot signal.
• We assume that the SU always has data bits to transmit and that it transmits its data at the instantaneous channel rate of its link, i.e., the s → sd link. This is realized through the implementation of adaptive modulation schemes, which is one of the main advantages of cognitive radio devices [14].
• Since the SU has the CSI of all the communication links, as explained in the previous bullet, in each time slot the SU ascertains the state of the s → pd link, i.e., ON or OFF, by comparing \alpha_{s,pd} to the decoding threshold \alpha_{th,s,pd}. Further details on link states are provided in Appendix A. After that, the SU can take decisions based on the other links to better help the PU.
• Since the SU operation is based on the spectrum sensing
outcomes, the time assigned to channel sensing, denoted
by τs , should be less than the PU transmission time Tp
(i.e., τs < Tp ). In particular, the SU cannot set τs to be
longer than the time assigned to PU transmission.
• If the p → s link is in outage (i.e., OFF), this means
that the SU will not be able to decode the PU packet
since the noise signal dominates the data signal and the
transmission data rate is higher than the channel rate.
• Each PU packet comes with a CRC so that receivers
(primary destination and SU) check the checksum to
indicate the status of the decoded packet. Hence, if the SU
cannot decode the primary packet in a time slot, i.e., the
p → s link is in outage, or if the PU’s queue is empty, the
SU will not waste energy in forwarding what it receives
from the wireless channel because it knows with certainty
that the received packet is a noisy packet (i.e., has no
data when the PU queue is empty). Consequently, the SU
saves its energy from being wasted in a useless primary
data retransmission, and it instead exploits that amount
of energy for the transmission of its own data. This is
critical since the SU energy is constrained and needs to
be optimized.
• The data signals transmitted over subband Ws are independent of the data signals transmitted over subband Wp.
Hence, when there is an interference over subband Wp
due to simultaneous transmissions from the SU and the
PU, the data signals over subband Ws do not get affected.
• If the PU is active in a given time slot and the SU
misdetects its activity, a concurrent transmission takes
place over the primary subband, Wp . Hence, the SU data
bits transmitted over Wp are lost since the transmission
data rate is higher than the link rate, and the primary
packet could survive if the received SINR is higher than
the decoding threshold. This event occurs with probability \overline{P}_{p,pd}^{(I)}.6 See Appendix B for further details.
• We assume that the primary ARQ feedback is unencrypted and is available to the SU. A similar assumption is found in many references, e.g., [10] and the
references therein.
• If the SU transmits concurrently with the primary destination during the feedback phase, the feedback message
(packet) may be undecodable at the PU. For this reason,
the SU remains silent/idle during the primary feedback
duration to avoid disturbing the primary system operation.
6 Throughout this paper, \overline{X} = 1-X.
Fig. 4: Time slot structure under proposed scheme P1 . In the
figure, τs is the spectrum sensing time duration, Tp is the
PU transmission time of the primary data packet, Ts is the
time duration assigned to the secondary transmission of the
primary packet, and τf is the feedback duration. Note that
Tp + Ts + τf = T .
V. FIRST PROPOSED SCHEME
In this section, we explain our first proposed cooperative
scheme, denoted by P1 , and derive the achievable data rates
and the energy emitted by the SU. The time slot structure
under P1 is shown in Fig. 4. In our first proposed cooperative
scheme, the operation of the SU during any arbitrary time slot
changes over four phases: [0, τs ], [τs , Tp ], [Tp , Tp + Ts ], and
[Tp + Ts , Tp + Ts + τf ] (or simply [T − τf , T ]).
A. Scheme Description
Before proceeding to the scheme description, we note that
if the PU is active during a time slot, its transmission takes
place over [0, Tp ], whereas the secondary retransmission of the
primary packet takes place over [Tp , Tp + Ts ]. The operation
of the SU during each phase is described as follows.
1) Time interval [0, τs ]: The SU simultaneously senses the
primary subband, Wp , and transmits its own data over Ws .
The sensing outcome is then used for the secondary operation
over [τs , Tp ].
2) Time interval [τs , Tp ]: If the SU detects the PU to be
active, it simultaneously transmits its own data over Ws , and
attempts to decode the PU transmission over Wp . If the SU
detects the PU to be inactive, it transmits its own data over
both subbands, Wp and Ws . If the PU is active and the SU
finds the primary subband to be free of the PU transmission,
there will be interference between the PU and the SU over
Wp .
3) Time interval [Tp , Tp + Ts ]: If the PU’s queue is empty,
the SU transmits its own data over both subbands. If the links
p → s and s → pd are simultaneously ON and the PU queue is
nonempty, the SU simultaneously transmits its own data over
Ws and retransmits the primary packet over Wp . If either the
p → s link or the s → pd link is OFF, the SU transmits its
own data over both subbands.
4) Time interval [T − τf , T ]: If the PU was active during
[0, Tp ], then its respective receiver broadcasts a feedback
message to indicate the status of the packet decodability.
Hence, the SU transmits its own data over Ws and remains
silent over Wp to avoid causing any interference or disturbance
for the feedback message transmission. If the PU was inactive
during [0, Tp ], there is no feedback message in the current time
slot. However, since the SU does not know the exact state of
the PU during a time slot, it remains idle.
To summarize, the SU does not access the spectrum allocated to the PU, Wp , during the feedback duration to avoid
disturbing the feedback message transmission.
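To summarize the per-phase rules of scheme P1 in one place, the following Python sketch encodes the SU decision logic described above. The phase names, flags and the returned dictionary are our own illustrative conventions, not notation from the paper, and the corner case of an idle PU during the feedback phase is only noted in a comment.

```python
# Schematic of the SU behaviour in each phase of scheme P1 (Section V-A).
# This is an illustrative sketch, not the authors' implementation.

def su_action_P1(phase, sensed_busy, pu_queue_nonempty,
                 p_s_on, s_pd_on, decoded_primary_packet):
    """Return the subbands used for the SU's OWN data, and whether it relays."""
    if phase == "sensing":                    # [0, tau_s]: sense Wp, transmit over Ws
        return {"own": {"Ws"}, "relay": False}
    if phase == "pu_transmission":            # [tau_s, Tp]
        if sensed_busy:                       # decode the PU packet over Wp, own data over Ws
            return {"own": {"Ws"}, "relay": False}
        return {"own": {"Ws", "Wp"}, "relay": False}   # collides with the PU on misdetection
    if phase == "relaying":                   # [Tp, Tp + Ts]
        if (pu_queue_nonempty and sensed_busy and decoded_primary_packet
                and p_s_on and s_pd_on):
            return {"own": {"Ws"}, "relay": True}      # relay over Wp, own data over Ws
        return {"own": {"Ws", "Wp"}, "relay": False}
    if phase == "feedback":                   # [T - tau_f, T]: always stay silent over Wp
        # (shown for the PU-active case; when the PU was idle the SU also keeps off Wp)
        return {"own": {"Ws"}, "relay": False}
    raise ValueError("unknown phase")
```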
B. PU and SU Data Rates and SU Emitted Energy
A packet at the head of the PU queue Q_{p,c}^{(1)} is served if the SU detects the primary activity correctly and either the direct path or the relaying path7 is not in outage; or if the SU misdetects the primary activity and the p → pd link is not in outage. Let \mu_{p,c}^{(\ell)} denote the mean service rate of the PU under scheme P_\ell, \ell \in \{1, 2\}. The mean service rate of the PU queue under scheme P1 is then given by
\mu_{p,c}^{(1)} = \overline{P}_{MD}\left(1 - P_{p,pd}\left(1 - \overline{P}_{p,s}\,\overline{P}_{s,pd}\right)\right) + P_{MD}\left(1 - P_{p,pd}^{(I)}\right) \qquad (16)
where P_{MD}\left(1 - P_{p,pd}^{(I)}\right) denotes the probability of correct primary packet decoding at the primary destination when the SU misdetects the primary activity over W_p.
Let R_e^{(\ell)} and R_b^{(\ell)} denote the SU transmission data rate under scheme P_\ell when the PU queue is empty and nonempty, respectively, and let R = \log_2\left(1+\alpha_{s,sd}\frac{P_\circ}{N_\circ}\right) denote the instantaneous data rate of the s → sd link in bits/sec/Hz. Based on the description of scheme P1, the SU transmission data rate when the PU queue is empty is given by
R_e^{(1)} = \left[\tau_s\,\delta_s + (T_p-\tau_s)\left(P_{FA}\,\delta_s + \overline{P}_{FA}\right) + T_s\right] W R \qquad (17)
where \delta_s = W_s/W. When the PU queue is nonempty, the SU transmission data rate is given by
R_b^{(1)} = \left[T_p\,\delta_s + \left(P_{MD} + \overline{P}_{MD}\,P_{p,s}\right)T_s + \overline{P}_{MD}\,\overline{P}_{p,s}\,T_s\left(\overline{P}_{s,pd}\,\delta_s + P_{s,pd}\right)\right] W R \qquad (18)
The term P_{p,s} appears in R_b^{(1)} because the SU, when the p → s link is in outage, uses the entire bandwidth for its own data transmission. Furthermore, the term P_{s,pd} appears in the expression of R_b^{(1)} because the SU, in each time slot, knows the channel state between itself and the primary destination and uses the bandwidth allocated to the PU for its own data transmission when that channel is in outage.
Let \mathcal{I}[L] denote the indicator function, where \mathcal{I}[L] = 1 if the argument is true. The SU transmission data rate when it operates under scheme P_\ell is then given by
R_s^{(\ell)} = \mathcal{I}\left[Q_{p,c}^{(\ell)} = 0\right]R_e^{(\ell)} + \mathcal{I}\left[Q_{p,c}^{(\ell)} \neq 0\right]R_b^{(\ell)} \qquad (19)
7 The relaying path is defined as the path connecting the PU to the primary destination through the SU; namely, links p → s and s → pd. Since the channels are independent, the probability of the relaying path being not in outage is \overline{P}_{p,s}\,\overline{P}_{s,pd}.
The expected value of \mathcal{I}[L] is equal to the probability of the argument event. That is,
\tilde{\mathcal{I}}[L] = \Pr\{L\} \qquad (20)
The mean SU transmission data rate is then given by
\tilde{R}_s^{(\ell)} = \Pr\left\{Q_{p,c}^{(\ell)} = 0\right\}\tilde{R}_e^{(\ell)} + \Pr\left\{Q_{p,c}^{(\ell)} \neq 0\right\}\tilde{R}_b^{(\ell)} \qquad (21)
Recalling that \Pr\{Q_{p,c}^{(\ell)} = 0\} = \nu_{0,c}^{(\ell)} and \Pr\{Q_{p,c}^{(\ell)} \neq 0\} = 1-\nu_{0,c}^{(\ell)}, the mean SU transmission data rate under scheme P1 is then given by
\tilde{R}_s^{(1)} = \nu_{0,c}^{(1)}\left[\tau_s\,\delta_s + (T_p-\tau_s)\left(P_{FA}\,\delta_s + \overline{P}_{FA}\right) + T_s\right] W G_s + \overline{\nu}_{0,c}^{(1)}\left[T_p\,\delta_s + \left(P_{MD} + \overline{P}_{MD}\,P_{p,s}\right)T_s + \overline{P}_{MD}\,\overline{P}_{p,s}\,T_s\left(\overline{P}_{s,pd}\,\delta_s + P_{s,pd}\right)\right] W G_s \qquad (22)
where G_s is the expected value of \log_2\!\left(1+\alpha_{s,sd}\frac{P_\circ}{N_\circ}\right), which is given by (see Appendix D for details)
G_s = \frac{1}{\ln(2)}\exp\!\left(\frac{N_\circ}{P_\circ\,\sigma_{s,sd}^2}\right)\Gamma\!\left(0,\frac{N_\circ}{P_\circ\,\sigma_{s,sd}^2}\right) \qquad (23)
where \Gamma(m,s) = \int_s^\infty \exp(-z)\,z^{m-1}\,dz is the upper incomplete Gamma function.
According to the described scheme, the mean SU transmit energy, denoted by E_1, is given by
E_1 = \left(\nu_{0,c}^{(1)}\left[\tau_s\,\delta_s + (T_p-\tau_s)\left(P_{FA}\,\delta_s + \overline{P}_{FA}\right) + T_s\right] + \overline{\nu}_{0,c}^{(1)}\left[\tau_s\,\delta_s + (T_p-\tau_s)\left(\overline{P}_{MD}\,\delta_s + P_{MD}\right) + T_s\right]\right) W P_\circ \qquad (24)
Note that we assume that the maximum average emitted secondary energy is E; hence, E_1 must be at most E.
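The scheme-P1 expressions (16)-(18), (22) and (24) can be collected into a single routine, as in the Python sketch below. All inputs (the "not in outage" probabilities pbar_*, the sensing-error probabilities, the idle-queue probability nu0, the expected log-rate Gs and the interference-limited decoding probability pbar_ppd_I) are assumed to be computed elsewhere, e.g., from Appendices A-D; the variable names are ours, not the paper's.

```python
import numpy as np

def p1_metrics(Tp, Ts, Ws, W, tau_s, P_MD, P_FA, pbar_ppd, pbar_ps, pbar_spd,
               nu0, Gs, P0, pbar_ppd_I):
    """Numerical sketch of Eqs. (16), (17), (18), (22) and (24) for scheme P1."""
    d = Ws / W                                  # delta_s
    pbar_MD, pbar_FA = 1 - P_MD, 1 - P_FA
    # Eq. (16): mean PU service rate under P1.
    mu_p1 = pbar_MD * (1 - (1 - pbar_ppd) * (1 - pbar_ps * pbar_spd)) \
            + P_MD * pbar_ppd_I
    # Eq. (17): SU rate factor when the PU queue is empty (multiplies W*R or W*Gs).
    empty = tau_s * d + (Tp - tau_s) * (P_FA * d + pbar_FA) + Ts
    # Eq. (18): SU rate factor when the PU queue is nonempty.
    busy = Tp * d + (P_MD + pbar_MD * (1 - pbar_ps)) * Ts \
           + pbar_MD * pbar_ps * Ts * (pbar_spd * d + (1 - pbar_spd))
    # Eq. (22): mean SU rate, and Eq. (24): mean SU transmit energy.
    R_s = (nu0 * empty + (1 - nu0) * busy) * W * Gs
    E1 = (nu0 * empty
          + (1 - nu0) * (tau_s * d + (Tp - tau_s) * (pbar_MD * d + P_MD) + Ts)) * W * P0
    return mu_p1, R_s, E1
```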
VI. SECOND PROPOSED SCHEME
In our second scheme, denoted by P2 , we assume a variation
in the primary feedback mechanism to further improve the
achievable performance for both PU and SU. More specifically, we assume the existence of two primary feedback phases
within each time slot. Each transmission of the primary packet
by either the PU or the SU is followed by a feedback phase
to inform the transmitter (PU or SU) about the decodability of
the transmitted packet. In other words, a feedback message is
sent by the primary destination when it receives a copy of the
expected primary packet.8 The first feedback phase is preceded
by the PU transmission of the primary packet, whereas the
second feedback phase is preceded by the SU transmission
of the primary packet. The PU queue drops the packet if it
receives at least one ACK in any time slot. Otherwise, the
packet will be retransmitted by the PU in the following time
slots until its correct decoding at the primary destination.
On the one hand, the gain of this cooperative scheme
over the first proposed scheme lies in its ability to prevent
unnecessary retransmissions of a successfully decoded primary
8 Each packet comes with an identifier (ID) and a certain labeled number
that is generated by the transmitter. In addition, the destination sends the
expected number of the next packet as part of the feedback message.
packet at the primary destination. More specifically, if the
primary destination can decode the PU transmission correctly,
then the SU does not need to retransmit the same primary
packet over the primary subband and over the time assigned
for relaying; hence, the SU can instead use the time assigned
for relaying and the primary subband to transmit its own data
bits to its destination.9 Consequently, using scheme P2 enables
the SU to increase its average transmission rate via using the
allocated bandwidth and time duration for PU transmissions
and its transmit energy to send its own data. On the other
hand, there is a considerable cost due to appending an extra
feedback duration to the time slot. This cost is converted to
an increase in the outage probabilities of the links and the
reduction in the users’ rates. This is because the total time
allocated for data bits and packets transmissions is reduced
by τf seconds relative to the total transmission time in case
of scheme P1 .
Under cooperative scheme P2 , the secondary operation in
any arbitrary time slot changes over five phases as shown in
Fig. 5: [0, τs ], [τs , Tp ], [Tp , Tp + τf ], [Tp +τf , Tp +τf +Ts ] and
[T −τf , T ].
A. Decoding of Primary Feedback Message at the SU
The correctness of the feedback message decoding at the SU
is ascertained using the checksum appended to the feedback
message packet. The decoding of a primary feedback message
at the SU can be modeled as an erasure channel model.
In particular, the primary feedback message is assumed to
be decoded correctly at the SU with probability f . If the
SU cannot decode the primary feedback message in a given
time slot,10 it considers this feedback message as a NACK
feedback message. Another possibility is to assume that the SU considers the "nothing" event as a NACK message with probability ω and considers it as an ACK message with probability \overline{\omega} = 1-\omega. Using such a parameter allows the SU to use a fraction of the
“nothing” events that would be an ACK, which means that the
SU does not need to retransmit the primary packet, for its own
data bits transmission. The SU can optimize over ω to alleviate
wasting the channel resources without further contribution to
the primary service rate when the primary packet is already
decoded successfully at the primary destination. The primary
mean service rate in this case is given by
\mu_{p,c}^{(2)} = \overline{P}_{MD}\left(1 - P_{p,pd}\left(1 - \beta\,\overline{P}_{p,s}\,\overline{P}_{s,pd}\right)\right) + P_{MD}\left(1 - P_{p,pd}^{(I)}\right) \qquad (25)
where \beta = f + \overline{f}\,\omega is the probability of considering the overheard feedback message as a NACK when the primary destination sends a NACK feedback (which occurs if the p → pd link is in outage). From (25), the primary mean service rate is parameterized by ω. The maximum primary service rate is attained when ω = 1 since the SU will relay more PU packets. For simplicity, we consider the case of ω = 1, which guarantees the highest QoS for the PU.
9 This is because the retransmission of the primary packet by the secondary transmitter does not provide further contribution to the primary throughput. In addition, the retransmission of the primary packet causes both energy and bandwidth losses that can be used otherwise for the SU data transmission.
10 This event is referred to as the "nothing" event. The "nothing" event is considered when the SU fails in decoding the feedback message, or when the PU is idle at this time slot, i.e., Qp = 0.
B. Scheme Description
The PU transmission occurs over [0, Tp] and the secondary
retransmission of a primary packet occurs over [Tp +τf , Tp +
τf +Ts ]. Note that the feedback message is considered by the
SU as a NACK feedback message 1) if the p → pd link is in
outage and the feedback message is decoded correctly at the
SU terminal; or 2) if the feedback message is undecodable at
the SU. The probability that the SU considers the overheard
primary feedback message as a NACK is then given by
\Gamma_f = P_{p,pd}\,f + \overline{f} \qquad (26)
Fig. 5: Time slot structure under proposed scheme P2 . In this
scheme, there are two feedback message durations. Hence,
Tp + Ts + 2τf = T .
In the sequel of this subsection, we describe the behavior of the SU during each phase.
1) Time interval [0, \tau_s] and [\tau_s, T_p]: The operation of the system over the time intervals [0, \tau_s] and [\tau_s, T_p] is similar to the first cooperative scheme during the same time intervals.
2) Time interval [T_p, T_p+\tau_f]: If the PU queue is nonempty during the ongoing time slot, at the end of the PU dedicated transmission time, the SU transmits its own data over W_s, and remains silent over W_p to avoid causing a concurrent transmission with the feedback message transmitted from the primary destination to the PU. If the PU queue is empty during the ongoing time slot, the SU transmits its own data over both subbands.
3) Time interval [T_p + \tau_f, T - \tau_f]: Upon decoding the entire primary packet, the SU discerns the actual (true) state of the PU, i.e., active or inactive. The SU transmits its own data over both subbands 1) if the PU was active during the time interval [0, T_p], the primary destination correctly decoded the PU packet, and the SU successfully decoded the primary feedback message, i.e., considered it as an ACK feedback; or 2) if the s → pd link is in outage; or 3) if the PU was inactive during the time interval [0, T_p]. If the PU was active during the time interval [0, T_p], the secondary terminal considered the feedback message sent over [T_p, T_p+\tau_f] as a NACK feedback, and the s → pd link is not in outage; the SU simultaneously transmits its own data over W_s and retransmits the primary packet over W_p.
4) Time interval [T - \tau_f, T]: If the SU retransmitted the packet over [T_p + \tau_f, T - \tau_f], another feedback message will be sent over this phase by the primary destination. Hence, the SU simultaneously transmits its own data over W_s and remains silent over W_p. If the SU decides not to retransmit the primary packet, there will be no primary feedback message. Therefore, the SU transmits its own data over both subbands. If the PU queue is empty during the ongoing time slot, the SU transmits its own data over both subbands over this feedback duration (i.e., [T - \tau_f, T]).

C. PU and SU Data Rates and the SU Emitted Energy
A data packet stored at the head of the PU queue Q_{p,c}^{(2)} is served in a given time slot 1) if the SU detects the primary activity correctly, and the direct link is not in outage; or 2) if the SU detects the primary activity correctly, the direct link is in outage, the SU considers the primary feedback message as a NACK signal, and the relaying link is not in outage; or 3) if the SU misdetects the primary activity, and the direct link is not in outage. The mean service rate of the PU queue is similar to the first scheme and is given by
\mu_{p}^{(2)} = \overline{P}_{MD}\left(1 - P_{p,pd}\left(1 - \overline{P}_{p,s}\,\overline{P}_{s,pd}\right)\right) + P_{MD}\left(1 - P_{p,pd}^{(I)}\right) \qquad (27)
We note that the expression (27) is similar to (16). However, the maximum assigned transmission data times for users under P2 are lower than under P1, as P2 has two feedback durations.
When the PU is inactive, the SU instantaneous transmission rate is given by
R_e^{(2)} = \left[\tau_s\,\delta_s + (T_p-\tau_s)\left(\overline{P}_{FA} + P_{FA}\,\delta_s\right) + T_s\right] W R \qquad (28)
When the PU is active, the SU instantaneous transmission rate is given by
R_b^{(2)} = \left[T_p\,\delta_s + P_{MD}\,T_s + \overline{P}_{MD}\,T_s\left(\overline{P}_{p,s}\left(\Gamma_f\left(\overline{P}_{s,pd}\,\delta_s + P_{s,pd}\right) + \overline{\Gamma}_f\right) + P_{p,s}\right)\right] W R \qquad (29)
The mean SU transmission data rate is then given by
\tilde{R}_s^{(2)} = \nu_{0,c}^{(2)}\left[\tau_s\,\delta_s + (T_p-\tau_s)\left(\overline{P}_{FA} + P_{FA}\,\delta_s\right) + T_s\right] W G_s + \overline{\nu}_{0,c}^{(2)}\left[T_p\,\delta_s + P_{MD}\,T_s + \overline{P}_{MD}\,T_s\left(\overline{P}_{p,s}\left(\Gamma_f\left(\overline{P}_{s,pd}\,\delta_s + P_{s,pd}\right) + \overline{\Gamma}_f\right) + P_{p,s}\right)\right] W G_s \qquad (30)
According to the description of scheme P2, the mean SU transmit energy is given by
E_2 = \left(\nu_{0,c}^{(2)}\left[\tau_s\,\delta_s + (T_p-\tau_s)\left(\overline{P}_{FA} + P_{FA}\,\delta_s\right) + T_s\right] + \overline{\nu}_{0,c}^{(2)}\left[\tau_s\,\delta_s + (T_p-\tau_s)\left(\overline{P}_{MD}\,\delta_s + P_{MD}\right) + T_s\right]\right) W P_\circ \qquad (31)
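A corresponding sketch for scheme P2 follows, implementing (26)-(31) under the ω = 1 assumption adopted above; as before, the link and sensing probabilities are assumed to be supplied by the expressions in the appendices, and the variable names are ours.

```python
import numpy as np

def p2_metrics(Tp, Ts, Ws, W, tau_s, P_MD, P_FA, pbar_ppd, pbar_ps, pbar_spd,
               nu0, Gs, P0, pbar_ppd_I, f):
    """Numerical sketch of Eqs. (26)-(31) for scheme P2 (with omega = 1)."""
    d = Ws / W
    pbar_MD, pbar_FA = 1 - P_MD, 1 - P_FA
    Gamma_f = (1 - pbar_ppd) * f + (1 - f)      # Eq. (26): P_{p,pd} f + (1 - f)
    # Eq. (27): mean PU service rate (same form as (16), but with shorter Tp, Ts).
    mu_p2 = pbar_MD * (1 - (1 - pbar_ppd) * (1 - pbar_ps * pbar_spd)) \
            + P_MD * pbar_ppd_I
    # Eq. (28): PU idle, and Eq. (29): PU active (factors multiplying W*R or W*Gs).
    empty = tau_s * d + (Tp - tau_s) * (pbar_FA + P_FA * d) + Ts
    busy = Tp * d + P_MD * Ts + pbar_MD * Ts * (
        pbar_ps * (Gamma_f * (pbar_spd * d + (1 - pbar_spd)) + (1 - Gamma_f))
        + (1 - pbar_ps))
    R_s = (nu0 * empty + (1 - nu0) * busy) * W * Gs                           # Eq. (30)
    E2 = (nu0 * empty
          + (1 - nu0) * (tau_s * d + (Tp - tau_s) * (pbar_MD * d + P_MD) + Ts)) * W * P0  # Eq. (31)
    return mu_p2, R_s, E2
```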
VII. PROBLEM FORMULATION AND PRIMARY MEAN ENERGY SAVINGS
A. Problem Formulation
We assume that users optimize over T_p = T - \tau_f - T_s and W_p = W - W_s. It is noteworthy that there is a possibility to optimize over the spectrum sensing time \tau_s; however, for simplicity, we assume that the spectrum sensing time is fixed and predetermined. Sensing-time optimization is out of the scope of this paper. The optimization problem is formulated such that the secondary average data rate is maximized under a certain PU average queueing delay, the PU queue stability condition, and an energy constraint on the secondary average transmit energy, given by E_\ell \le E (where E denotes the maximum average SU transmit energy). The optimization problem under proposed scheme P_\ell \in \{P1, P2\} is stated as follows:
\max_{T_p, W_p} \;\; \tilde{R}_s^{(\ell)}
\text{s.t.} \;\; D_{p,c}^{(\ell)} < D_{p,nc}, \;\; \mu_{p,c}^{(\ell)} > \lambda_p, \;\; 0 \le E_\ell \le E, \;\; \tau_s \le T_p \le T^{(\ell)}, \;\; 0 \le W_p \le W, \;\; T_p + T_s = T^{(\ell)} \qquad (32)

Algorithm 1 Optimization Procedure
1: Select a large number K
2: Set i = 1
3: loop1:
4: Generate 0 \le \delta_p \le 1, where \delta_p = W_p/W
5: Set j = 1
6: loop2:
7: Generate \tau_s/T \le \Delta_p \le T^{(\ell)}/T, where \Delta_p = T_p/T
8: Compute W_s = W - W_p and T_s = T^{(\ell)} - T_p - \tau_s
9: Compute Z(i, j) = \tilde{R}_s^{(\ell)} in (32)
10: Set j = j + 1
11: If j \neq K, goto loop2
12: Set i = i + 1
13: If i \neq K, goto loop1
14: Select W_p = \delta_p W and T_p = \Delta_p T that maximize \tilde{R}_s^{(\ell)} (i and j corresponding to the highest Z(i, j)) and satisfy the constraints in (32)
where T^{(\ell)} is the operational constraint on T_p+T_s when users operate under scheme P_\ell, and D_{p,c}^{(\ell)} = (1-\lambda_p)/(\mu_{p,c}^{(\ell)}-\lambda_p) is
the average queueing delay of the PU queue under cooperation.
Under our first cooperative scheme, the maximum allowable
transmission time is T−τf ; hence, T (1) = T−τf . On the other
hand, under our second cooperative scheme, the maximum
allowable transmission time is T−2τf ; hence, T (2) = T− 2τf .
It should be pointed out here that if the primary feedback message is always undecodable at the SU, i.e., \overline{f} = 1, or if the p → pd link is always in outage, scheme P1 always outperforms scheme P2. This is reasonable since the SU will always
retransmit the primary packet with a lower transmission time
for each user due to the existence of two feedback durations
in P2 . In addition, when τf increases, P1 may outperform P2
for some system parameters because it may be the case that
the reduction in the maximum allowable transmission time due
to the presence of an additional feedback duration is higher
than the gain of knowing the status of the primary packet
decodability at the SU before the secondary retransmission of
the primary packet.
The optimization problem (32) is solved numerically using
a two-dimensional grid-based search over Tp and Wp . The
optimal parameters obtained via solving the optimization
problem (32) are announced to both users so that Wp and
Tp are known at the PU and the SU before actual operation
of the communications system. If the optimization problem
is infeasible because one or more of the constraints in (32) cannot be satisfied, the SU will not be allowed to use the spectrum and its achievable rate is zero. A simple method to solve the optimization problem (32) is to divide the domains of Tp and Wp into K points each. Then, solve the optimization problem (32) K² times and select the solution that satisfies
the constraints and has the highest objective function. Our
proposed solution to the optimization problem in (32) is stated
in Algorithm 1.
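A compact Python version of the grid search in Algorithm 1 is sketched below. The callables rate, delay, service_rate and energy stand for R̃s^(ℓ), Dp,c^(ℓ), µp,c^(ℓ) and Eℓ (for instance, the p1_metrics/p2_metrics sketches above wrapped for fixed common parameters); their existence and signatures are assumptions of this sketch.

```python
import numpy as np

def solve_32(K, T, tau_s, T_ell, W, lambda_p, D_p_nc, E_max,
             rate, delay, service_rate, energy):
    """Grid search over (Tp, Wp) following Algorithm 1; returns the best feasible point."""
    best, best_val = None, -np.inf
    for delta_p in np.linspace(0.0, 1.0, K):                  # Wp / W
        for Delta_p in np.linspace(tau_s / T, T_ell / T, K):  # Tp / T
            Wp, Tp = delta_p * W, Delta_p * T
            Ws = W - Wp
            Ts = T_ell - Tp - tau_s                           # per step 8 of Algorithm 1
            if Ts < 0:
                continue
            feasible = (delay(Tp, Ts, Wp, Ws) < D_p_nc and
                        service_rate(Tp, Ts, Wp, Ws) > lambda_p and
                        0.0 <= energy(Tp, Ts, Wp, Ws) <= E_max)
            val = rate(Tp, Ts, Wp, Ws)
            if feasible and val > best_val:
                best, best_val = (Tp, Wp), val
    return best, best_val      # best is None when (32) is infeasible (SU gets no access)
```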
It is worth noting that the PU average queueing delay constraint can be replaced by a constraint on the mean service rate of the PU queue. Since the delay constraint is given by D_{p,c}^{(\ell)} = (1-\lambda_p)/(\mu_{p,c}^{(\ell)}-\lambda_p) < D_{p,nc} = (1-\lambda_p)/(\mu_{p,nc}-\lambda_p), the mean service rate of the PU queue under cooperation must be greater than the mean service rate of the PU queue without cooperation. In particular,
\mu_{p,c}^{(\ell)} > \mu_{p,nc} \qquad (33)
Combining the delay constraint with the stability constraint, the PU queue mean service rate should be at least
\mu_{p,c}^{(\ell)} > \max\left\{\mu_{p,nc}, \lambda_p\right\} \qquad (34)
B. Mean Primary Energy Savings
In the absence of cooperation, the PU transmission takes place over T - \tau_f seconds and occupies W Hz. Hence, the PU energy consumption per time slot is P_\circ W(T - \tau_f) joules/slot. However, when the SU helps the PU in relaying its packets, the PU transmits only in a fraction T_p/T of the time slot with transmission bandwidth W_p Hz. Hence, its energy consumption per time slot is only P_\circ W_p T_p \le P_\circ W(T-\tau_f) joules/slot. In this case, the average rate of the PU energy savings, defined as the ratio of the energy savings over the original energy consumption, is given by
\phi = \frac{P_\circ W(T-\tau_f)\Pr\{Q_{p,nc}\neq 0\} - P_\circ W_p T_p \Pr\{Q_{p,c}^{(\ell)}\neq 0\}}{P_\circ W(T-\tau_f)\Pr\{Q_{p,nc}\neq 0\}} \qquad (35)
Using the fact that \Pr\{Q_{p,nc}\neq 0\} = \lambda_p/\mu_{p,nc} if \lambda_p < \mu_{p,nc}, and 1 otherwise, \Pr\{Q_{p,c}^{(\ell)}\neq 0\} = \lambda_p/\mu_{p,c}^{(\ell)}, and noting that there is no cooperation if the PU queue is unstable, we get
\phi = 1 - \frac{W_p T_p}{W(T-\tau_f)}\cdot\frac{\max\{\mu_{p,nc},\lambda_p\}}{\mu_{p,c}^{(\ell)}} \qquad (36)
From the above ratio, we can see that the less bandwidth and transmission time the PU occupies, the more energy savings for the PU. We note that the PU queue under cooperation should be stable; otherwise, the optimization problem is infeasible and there will be no cooperation. We also note that using less bandwidth and shorter transmission time improves the low probability of intercept/low probability of detection (LPD/LPI) characteristics of the communication link, which appears to be especially critical in military applications. Hence, it is always useful to use shorter transmission times and lower bandwidth.
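Expression (36) translates directly into a one-line computation; the Python sketch below also returns zero savings when cooperation is infeasible, matching the remark above.

```python
def energy_savings(Wp, Tp, W, T, tau_f, mu_p_nc, mu_p_coop, lambda_p):
    """PU energy-savings ratio phi of Eq. (36); zero when cooperation is infeasible."""
    if mu_p_coop <= lambda_p:        # PU queue unstable under cooperation: no cooperation
        return 0.0
    return 1.0 - (Wp * Tp) / (W * (T - tau_f)) * max(mu_p_nc, lambda_p) / mu_p_coop
```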
VIII. NUMERICAL RESULTS
In this section, we present some simulations of the proposed cooperative schemes. We define a set of common parameters: the targeted false alarm probability is P_{FA} = 0.1, W = 10 MHz, T = 5 msec, b = 5000 bits, E = 5 × 10^{-6} joule, \tau_s = 0.05T, \sigma_{s,pd}^2 = \sigma_{s,sd}^2 = 0.1, \sigma_{p,s}^2 = 1, P_\circ = 10^{-10} Watts/Hz, and N_\circ = 10^{-11} Watts/Hz. Fig. 6 shows the maximum average SU data rate of our proposed cooperative schemes.
Watts/Hz. Fig. 6 shows the maximum
average SU data rate of our proposed cooperative schemes.
The second proposed scheme is plotted with three different
values of f . The figure reveals the advantage of our second
proposed scheme over our first proposed scheme for f = 0.5
and f = 1. However, for f = 0, the first proposed scheme
outperforms the second one. This is reasonable since when
f = 0 there is no gain from having a feedback message after
the PU transmission; hence, using the second proposed scheme
wastes τf seconds of the time slot that can be used otherwise
in increasing users’ data rates. The figure also demonstrates
the impact of parameter f on the performance of the second
proposed scheme, i.e., scheme P2 . As shown in the figure,
increasing f enhances the performance of scheme P2. In addition to the common parameters, the figure is generated using \sigma_{p,pd}^2 = 0.05, \tau_f = 0.05T, and the values of f in the figure's legend.
Fig. 7 shows the impact of the feedback message duration,
τf , on the performance of our proposed cooperative schemes.
The mean SU transmission data rate and the feasible range of the PU data arrival rate decrease with increasing \tau_f. When the
value of τf is considerable, i.e., τf = 0.2T , the first scheme
outperforms the second scheme. This is because the maximum
allowable transmission data time of nodes under scheme P2 in
this case is T −2τf = 0.6T , whereas the maximum allowable
transmission time under scheme P1 is T−τf = 0.8T . For small
values of τf , the second proposed scheme outperforms the
first scheme since the SU can use the time duration assigned
for relaying and the primary subband to transmit its data in
case of correct packet decoding after the PU transmission.
The parameters used to generate the figure are the common parameters, \sigma_{p,pd}^2 = 0.05, f = 1, and the values of \tau_f in the plot.
Figs. 8, 9 and 10 present the primary mean service rate,
the PU average queueing delay, and the average PU power
savings, respectively, under our proposed cooperative schemes.
The case of non-cooperative users is also plotted in Figs. 8 and
9 for comparison purposes. The figures demonstrate the gains
of the proposed schemes for the PU over the non-cooperation
case. Note that without cooperation between the two users,
the PU queue is unstable when λp > 0.2 packets/slot and,
hence, the queueing delay is unbounded. On the other hand,
with cooperation, the PU queue remains stable over the range
from λp = 0 to λp = 0.95 packets/slot. The second scheme
achieves better performance than the first scheme in terms of
primary QoS. Fig. 10 reveals that more than 95% of the average primary energy is saved for \lambda_p = 0.2 packets/slot. When \lambda_p = 0.8 packets/slot, the primary energy savings is almost 78%. For \lambda_p \ge 0.95, the PU queue becomes unstable even with cooperation; hence, the cooperation becomes non-beneficial for the PU and the PU ceases cooperation with the SU. Hence, the SU does not gain any access to the spectrum, and the primary energy savings becomes zero since the PU will send its data over the entire time slot duration and channel bandwidth. The parameters used to generate the figures are the common parameters, \sigma_{p,pd}^2 = 0.005, \sigma_{s,pd}^2 = 1, \tau_f = 0.05T, and f = 1. Note that the performance of our two proposed schemes is close to each other because the outage probability of the primary channel is high and the direct link (i.e., the p → pd link) is in outage most of the time. Hence, under scheme P2, the SU retransmits the primary packets almost every time slot instead of transmitting its own data signals. Accordingly, both proposed schemes achieve almost the same performance.
Fig. 6: The maximum SU data rate in bits per slot for the proposed schemes. Scheme P2 is plotted with three different values of the primary feedback correct decoding probability, f.
Fig. 7: The maximum mean SU data rate in bits per time slot. The schemes are plotted for two values of the feedback duration \tau_f.
Fig. 8: The maximum mean primary stable throughput for the proposed schemes. The case of non-cooperative users is also plotted for comparison.

Fig. 9: The PU average queueing delay for the proposed schemes. The case of a non-cooperative SU is also plotted for comparison purposes.

Fig. 10: PU power savings.

IX. CONCLUSIONS
In this paper, we developed two cooperative cognitive schemes which allow the SU to access the primary spectrum simultaneously with the PU. We showed the gains of our proposed cooperative schemes for the SUs and PUs. We also addressed the impact of the feedback process on users' data rates. Each of our proposed schemes can outperform the other for certain system parameters, and they differ in terms of time slot structure. We showed that as the probability of feedback message decoding decreases, the second proposed scheme loses its advantage over the first proposed scheme. The PU energy savings under cooperation is more than 60% for most of the PU packet arrival rates. Moreover, at a low mean arrival rate at the PU data queue, the PU energy savings can be more than 95%. We also showed a significant reduction in the average queueing delay of the PU queue under cooperation relative to the no-cooperation case. As future work, we can investigate the battery-based system where the communication nodes are equipped with rechargeable batteries with certain energy arrival rates.

APPENDIX A
In this Appendix, we present the outage probability expression of a link when the transmitter communicates with its respective receiver alone, i.e., without interference. Let r_{j,k} be the transmission rate of node j while communicating with node k, \gamma_{j,k} be the received SINR at node k when node j communicates with node k, and \alpha_{j,k} be the associated channel gain with mean \sigma_{j,k}^2, which is exponentially distributed in the case of Rayleigh fading. The probability of channel outage between node j and node k is given by [24]
P_{j,k} = \Pr\left\{r_{j,k} > \log_2\left(1+\gamma_{j,k}\right)\right\} \qquad (37)
where \Pr\{\cdot\} denotes the probability of the event in the argument, and \gamma_{j,k} = \frac{P_\circ\,\alpha_{j,k}}{N_\circ}. The formula (37) can be rewritten as
P_{j,k} = \Pr\left\{\alpha_{j,k} < \frac{N_\circ\left(2^{r_{j,k}}-1\right)}{P_\circ}\right\} \qquad (38)
Let \alpha_{th,j,k} = \frac{N_\circ}{P_\circ}\left(2^{r_{j,k}}-1\right). We note that if \alpha_{j,k} < \alpha_{th,j,k},
the channel is in outage (OFF), whereas if αj,k ≥ αth,j,k , the
channel is not in outage (ON). It is worth pointing out here
that increasing the transmission time and the bandwidth assigned to any of the terminals decreases the outage probability, or equivalently increases the rate, of the link between
that terminal and its respective receiver. That is, the outage
probability of any of the links decreases exponentially with the
increase of the transmission time and the bandwidth assigned
to the transmitting node.
If the SU is available to assist, when the PU's queue is nonempty, the PU sends a packet of size b bits over T_p seconds and frequency bandwidth W_p. Hence, the PU transmission rate is given by
r_{p,pd} = \frac{b}{W_p T_p} \qquad (39)
When the PU communicates with its destination alone, i.e., without interference from the SU, the link between the PU and the primary destination (i.e., the p → pd link) is not in outage with probability
\overline{P}_{p,pd} = \exp\left(-\frac{N_\circ\left(2^{\frac{b}{W_p T_p}}-1\right)}{P_\circ\,\sigma_{p,pd}^2}\right) \qquad (40)
The probability of correct primary packet decoding at the SU is equal to the probability of the p → s link being not in outage. This is given by a formula similar to the one in (40) with the relevant parameters of the p → s link. That is,
\overline{P}_{p,s} = \exp\left(-\frac{N_\circ\left(2^{\frac{b}{W_p T_p}}-1\right)}{P_\circ\,\sigma_{p,s}^2}\right) \qquad (41)
The SU relays (retransmits) the primary packet over T_s seconds and frequency bandwidth W_p. Hence, the transmission rate of the relayed primary packet is given by
r_{s,pd} = \frac{b}{W_p T_s} \qquad (42)
The relayed primary packet transmitted by the SU is correctly decoded at the primary destination with probability
\overline{P}_{s,pd} = \exp\left(-\frac{N_\circ\left(2^{\frac{b}{W_p T_s}}-1\right)}{P_\circ\,\sigma_{s,pd}^2}\right) \qquad (43)
where (43) is the probability that the s → pd link is not in outage.
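The Rayleigh-fading outage expressions (38)-(43) reduce to a single exponential, as the short Python sketch below illustrates; the example parameter values are assumptions for illustration only.

```python
import numpy as np

def p_on(rate, sigma2, P0, N0):
    """Probability that a Rayleigh-faded link is NOT in outage, e.g. Eqs. (40), (41), (43)."""
    alpha_th = (N0 / P0) * (2.0 ** rate - 1.0)   # decoding threshold of Eq. (38)
    return np.exp(-alpha_th / sigma2)            # exponential channel gain with mean sigma2

# Example: primary direct link with packet size b over bandwidth Wp and time Tp, Eq. (40).
b, Wp, Tp = 5000.0, 8e6, 3e-3                    # illustrative values (assumptions)
P0, N0, sigma2_ppd = 1e-10, 1e-11, 0.05
print(p_on(b / (Wp * Tp), sigma2_ppd, P0, N0))
```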
APPENDIX B
When the SU and the PU transmit at the same time over the primary subband, the outage event of the p → pd link is given by
P_{p,pd}^{(I)} = \Pr\left\{\frac{b}{T_p W_p} > \log_2\left(1 + \frac{\alpha_{p,pd}\,P_\circ}{N_\circ + \alpha_{s,pd}\,P_\circ}\right)\right\} \qquad (44)
This can be written as
P_{p,pd}^{(I)} = \Pr\left\{2^{\frac{b}{T_p W_p}} - 1 > \frac{\alpha_{p,pd}\,P_\circ}{N_\circ + \alpha_{s,pd}\,P_\circ}\right\} \qquad (45)
Since the channels are independent, the region where the inequality 2^{\frac{b}{T_p W_p}} - 1 > \frac{\alpha_{p,pd}P_\circ}{N_\circ + \alpha_{s,pd}P_\circ} is satisfied can be easily obtained. After some algebra, the probability of primary packet correct decoding when the SU interrupts the PU transmission over W_p is given by
\overline{P}_{p,pd}^{(I)} = \frac{\overline{P}_{p,pd}}{1 + \frac{\sigma_{s,pd}^2}{\sigma_{p,pd}^2}\left(2^{\frac{b}{W_p T_p}} - 1\right)} \le \overline{P}_{p,pd} \qquad (46)
From expression (46), the successful transmission probability in the case of interference is upper bounded by \overline{P}_{p,pd}. This quantifies the reduction of primary throughput due to concurrent transmission, which may occur due to sensing errors. As the message rate \frac{b}{W_p T_p} increases, the outage probability P_{p,pd} increases. Under interference, the correct decoding probability decreases by the same amount, in addition to a reduction factor of \left(2^{\frac{b}{W_p T_p}}-1\right). Moreover, as the cross-channel average gain, given by \sigma_{s,pd}^2, decreases, the correct decoding probability increases since the interference is weak. Actually, what really matters is the ratio of the average gains of the direct (main) channel and the interference channel, which is given by \frac{\sigma_{s,pd}^2}{\sigma_{p,pd}^2}. As \frac{\sigma_{s,pd}^2}{\sigma_{p,pd}^2} decreases, the interference has little impact on correct packet decoding. When \frac{\sigma_{s,pd}^2}{\sigma_{p,pd}^2} or the transmission rate \frac{b}{W_p T_p} is very small, we have \overline{P}_{p,pd}^{(I)} \approx \overline{P}_{p,pd}.
APPENDIX C
In this appendix, we derive the sensing error probabilities at the SU. The detection problem at time slot T (assuming that \tau_s F_s is an integer, where F_s = W_p is the sampling frequency of spectrum sensing [23] and W_p is the primary bandwidth in case of cooperation) is described as follows:
H_1 : s(\hat{k}) = \zeta_{p,s}\,x(\hat{k}) + \varepsilon(\hat{k}), \qquad H_0 : s(\hat{k}) = \varepsilon(\hat{k}) \qquad (47)
where |\zeta_{p,s}|^2 = \alpha_{p,s} is the channel gain of the p → s link, hypotheses H_1 and H_0 denote the cases where the PT is active and inactive, respectively, \tau_s F_s is the total number of used samples for primary activity detection, \varepsilon is the instantaneous noise value at time slot T with variance N_p = N_\circ W_p, x is the PU transmitted signal at slot T with variance P_p = P_\circ W_p, x(\hat{k}) is the \hat{k}-th sample of the PU transmit signal, s(\hat{k}) is the \hat{k}-th received sample of the primary signal at the SU, and H(\cdot) is the test statistic of the energy detector, given by
H(s) = \frac{1}{F_s\tau_s}\sum_{\hat{k}=1}^{F_s\tau_s}\left|s(\hat{k})\right|^2 \qquad (48)
The quality of the sensing process outcome is determined by the probability of detection, P_D = 1 - P_{MD}, and the probability of false alarm, P_{FA}, which are defined as the probabilities that the spectrum sensing scheme detects a PU under hypotheses H_1 and H_0, respectively. Using the central limit theorem (CLT), the test statistic H for hypothesis H_\theta, \theta \in \{0,1\}, can be approximated by a Gaussian distribution [23] with parameters
\Lambda_\theta = \theta\,\alpha_{p,s}P_p + N_p, \qquad \sigma_\theta^2 = \frac{\left(\theta\,\alpha_{p,s}P_p + N_p\right)^2}{F_s\tau_s} \qquad (49)
where \Lambda_\theta and \sigma_\theta^2 denote the mean and the variance of the Gaussian distribution for hypothesis H_\theta, \theta \in \{0,1\}. Since \alpha_{p,s} is an exponentially distributed random variable with parameter 1/\sigma_{p,s}^2, the probabilities P_{FA} and P_D can be written as
P_D = \Pr\{H(s) > \epsilon \mid H_1\} = \frac{\exp\!\left(\frac{N_p}{\sigma_{p,s}^2 P_p}\right)}{\sigma_{p,s}^2 P_p}\int_{N_p}^{\infty} Q\!\left(\sqrt{F_s\tau_s}\left[\frac{\epsilon}{z}-1\right]\right)\exp\!\left(-\frac{z}{\sigma_{p,s}^2 P_p}\right)dz \qquad (50)
P_{FA} = \Pr\{H(s) > \epsilon \mid H_0\} = Q\!\left(\sqrt{F_s\tau_s}\left[\frac{\epsilon}{N_p}-1\right]\right) \qquad (51)
where \exp(\cdot) denotes the exponential function, \epsilon is the energy threshold, and Q(Y) = \frac{1}{\sqrt{2\pi}}\int_Y^\infty \exp(-z^2/2)\,dz is the Q-function.
For a targeted false alarm probability, P_{FA}, the value of the threshold \epsilon is given by
\epsilon = N_p\left(\frac{Q^{-1}(P_{FA})}{\sqrt{F_s\tau_s}} + 1\right) \qquad (52)
Thus, for a targeted false alarm probability, P_{FA}, the probability of misdetection is given by substituting Eqn. (52) into Eqn. (50). That is,
P_{MD} = 1 - \frac{\exp\!\left(\frac{N_p}{\sigma_{p,s}^2 P_p}\right)}{\sigma_{p,s}^2 P_p}\int_{N_p}^{\infty} Q\!\left(\sqrt{F_s\tau_s}\left[\frac{N_p\left(\frac{Q^{-1}(P_{FA})}{\sqrt{F_s\tau_s}}+1\right)}{z}-1\right]\right)\exp\!\left(-\frac{z}{\sigma_{p,s}^2 P_p}\right)dz \qquad (53)
where Q^{-1}(\cdot) is the inverse of the Q-function.
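The threshold (52) and the misdetection probability (53) can be evaluated numerically, as sketched below in Python with scipy; the parameter values are illustrative assumptions, and the integral in (50) is computed by direct quadrature.

```python
import numpy as np
from scipy.integrate import quad
from scipy.stats import norm

def Q(x):
    """Gaussian Q-function."""
    return norm.sf(x)

def sensing_errors(P_FA_target, tau_s, Fs, N0, P0, Wp, sigma2_ps):
    """Energy-detection threshold (52) and misdetection probability (53)."""
    Np, Pp = N0 * Wp, P0 * Wp                 # noise and signal power over Wp
    M = Fs * tau_s                            # number of sensing samples
    eps = Np * (norm.isf(P_FA_target) / np.sqrt(M) + 1.0)       # Eq. (52)
    # Eq. (50) with eps substituted: P_D averaged over z = alpha*Pp + Np, z >= Np.
    scale = sigma2_ps * Pp
    integrand = lambda z: Q(np.sqrt(M) * (eps / z - 1.0)) * np.exp(-z / scale)
    P_D = np.exp(Np / scale) / scale * quad(integrand, Np, np.inf)[0]
    return eps, 1.0 - P_D                     # threshold and P_MD, Eq. (53)

# Illustrative call with assumed values.
print(sensing_errors(0.1, 0.25e-3, 8e6, 1e-11, 1e-10, 8e6, 1.0))
```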
APPENDIX D
In this Appendix, we derive the average value of the SU instantaneous data rate R = \log_2\left(1 + \alpha_{s,sd}\,\gamma_{s,sd}\right). It can be shown that
G_s = \frac{1}{\sigma_{s,sd}^2}\int_0^\infty \log_2\!\left(1+\alpha_{s,sd}\,\frac{P_\circ}{N_\circ}\right)\exp\!\left(-\frac{\alpha_{s,sd}}{\sigma_{s,sd}^2}\right)d\alpha_{s,sd} = \frac{1}{\ln(2)}\exp\!\left(\frac{1}{\gamma_{s,sd}\,\sigma_{s,sd}^2}\right)\Gamma\!\left(0,\frac{1}{\gamma_{s,sd}\,\sigma_{s,sd}^2}\right) \qquad (54)
where \Gamma(\cdot,\cdot) is the upper incomplete Gamma function.
Proof. Let \gamma_{s,sd} = \frac{P_\circ}{N_\circ}. Integrating by parts and rearranging the result, the expression is given by
G_s = -\int_0^\infty \log_2\left(1+\alpha_{s,sd}\,\gamma_{s,sd}\right)\,d\!\left[\exp\!\left(-\frac{\alpha_{s,sd}}{\sigma_{s,sd}^2}\right)\right] = -\underbrace{\left[\log_2\left(1+\alpha_{s,sd}\,\gamma_{s,sd}\right)\exp\!\left(-\frac{\alpha_{s,sd}}{\sigma_{s,sd}^2}\right)\right]_0^\infty}_{\text{zero}} + \frac{1}{\ln(2)}\int_0^\infty \frac{\gamma_{s,sd}}{1+\alpha_{s,sd}\,\gamma_{s,sd}}\exp\!\left(-\frac{\alpha_{s,sd}}{\sigma_{s,sd}^2}\right)d\alpha_{s,sd} \qquad (55)
After eliminating the zero term, G_s becomes
G_s = \frac{1}{\ln(2)}\int_0^\infty \frac{\gamma_{s,sd}}{1+\alpha_{s,sd}\,\gamma_{s,sd}}\exp\!\left(-\frac{\alpha_{s,sd}}{\sigma_{s,sd}^2}\right)d\alpha_{s,sd} \qquad (56)
Letting y = \alpha_{s,sd}\,\gamma_{s,sd} and then z = 1+y, G_s becomes
G_s = \frac{1}{\ln(2)}\int_0^\infty \exp\!\left(-\frac{y}{\gamma_{s,sd}\,\sigma_{s,sd}^2}\right)\frac{1}{1+y}\,dy = \frac{1}{\ln(2)}\int_1^\infty \exp\!\left(-\frac{z-1}{\gamma_{s,sd}\,\sigma_{s,sd}^2}\right)\frac{1}{z}\,dz = \frac{1}{\ln(2)}\exp\!\left(\frac{1}{\gamma_{s,sd}\,\sigma_{s,sd}^2}\right)\int_1^\infty \exp\!\left(-\frac{z}{\gamma_{s,sd}\,\sigma_{s,sd}^2}\right)\frac{1}{z}\,dz \qquad (57)
Letting \frac{z}{\gamma_{s,sd}\,\sigma_{s,sd}^2} = q, we get
G_s = \frac{1}{\ln(2)}\exp\!\left(\frac{1}{\gamma_{s,sd}\,\sigma_{s,sd}^2}\right)\int_{\frac{1}{\gamma_{s,sd}\sigma_{s,sd}^2}}^{\infty}\frac{1}{q}\exp(-q)\,dq = \frac{1}{\ln(2)}\exp\!\left(\frac{1}{\gamma_{s,sd}\,\sigma_{s,sd}^2}\right)\Gamma\!\left(0,\frac{1}{\gamma_{s,sd}\,\sigma_{s,sd}^2}\right) \qquad (58)
where \Gamma(\cdot,\cdot) is the upper incomplete Gamma function.
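Since Γ(0, x) equals the exponential integral E1(x), the closed form (54) is easy to cross-check numerically, as in the Python sketch below with assumed illustrative parameters.

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import exp1                    # exp1(x) = Gamma(0, x)

# Check of Eq. (54): closed form for Gs versus direct numerical integration.
P0, N0, sigma2 = 1e-10, 1e-11, 0.1                # illustrative values (assumptions)
gamma_sd = P0 / N0
x = 1.0 / (gamma_sd * sigma2)

closed_form = np.exp(x) * exp1(x) / np.log(2.0)   # (1/ln 2) e^x Gamma(0, x)
numeric = quad(lambda a: np.log2(1 + a * gamma_sd) * np.exp(-a / sigma2) / sigma2,
               0.0, np.inf)[0]
print(closed_form, numeric)                       # the two values should agree
```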
REFERENCES
[1] A. El Shafie and T. Khattab, “Energy-efficient cooperative relaying
protocol for full-duplex cognitive radio users and delay-aware primary
users,” in Proc. ICNC, Feb 2015, pp. 207–213.
[2] J. Van Hecke, P. Del Fiorentino, V. Lottici, F. Giannetti, L. Vandendorpe,
and M. Moeneclaey, “Distributed dynamic resource allocation for cooperative cognitive radio networks with multi-antenna relay selection,”
IEEE Trans. Wireless Commun., vol. 16, no. 2, pp. 1236–1249, 2017.
[3] W. Liang, S. X. Ng, and L. Hanzo, “Cooperative overlay spectrum access
in cognitive radio networks,” IEEE Commun. Surveys Tutorials, vol. 19,
no. 3, pp. 1924–1944, thirdquarter 2017.
[4] S. Haykin and P. Setoodeh, “Cognitive radio networks: The spectrum
supply chain paradigm,” IEEE Trans. Cognitive Commun. and Network.,
vol. 1, no. 1, pp. 3–28, March 2015.
[5] A. Naeem, M. H. Rehmani, Y. Saleem, I. Rashid, and N. Crespi,
“Network coding in cognitive radio networks: A comprehensive survey,”
IEEE Commun. Surveys & Tutorials, 2017.
[6] S. Narayanan, M. Di Renzo, F. Graziosi, and H. Haas, “Distributed
spatial modulation: A cooperative diversity protocol for half-duplex
relay-aided wireless networks,” IEEE Trans. Veh. Techn., vol. 65, no. 5,
pp. 2947–2964, 2016.
[7] A. E. Shafie, M. G. Khafagy, and A. Sultan, “Optimization of a relayassisted link with buffer state information at the source,” IEEE Commun.
Lett., vol. 18, no. 12, pp. 2149–2152, Dec 2014.
[8] A. E. Shafie, “Space-time coding for an energy harvesting cooperative
secondary terminal,” IEEE Commun. Lett., vol. 18, no. 9, pp. 1571–1574,
Sept 2014.
[9] N. Devroye, P. Mitran, and V. Tarokh, “Achievable rates in cognitive
radio channels,” IEEE Trans. Inf. Theory, vol. 52, no. 5, pp. 1813–1827,
2006.
[10] O. Simeone, Y. Bar-Ness, and U. Spagnolini, “Stable throughput of
cognitive radios with and without relaying capability,” IEEE Trans.
Commun., vol. 55, no. 12, pp. 2351–2360, Dec. 2007.
[11] O. Simeone, I. Stanojev, S. Savazzi, Y. Bar-Ness, U. Spagnolini,
and R. Pickholtz, “Spectrum leasing to cooperating secondary ad hoc
networks,” IEEE J. Sel. Areas Commun., vol. 26, no. 1, pp. 203–213,
2008.
[12] I. Krikidis, Z. Sun, J. N. Laneman, and J. Thompson, “Cognitive legacy
networks via cooperative diversity,” IEEE Commun. Lett., vol. 13, no. 2,
pp. 106–108, 2009.
[13] M. Elsaadany, M. Abdallah, T. Khattab, M. Khairy, and M. Hasna,
“Cognitive relaying in wireless sensor networks: Performance analysis
and optimization,” in Proc. IEEE GLOBECOM, Dec. 2010, pp. 1–6.
[14] W. Su, J. D. Matyjas, and S. Batalama, “Active cooperation between
primary users and cognitive radio users in heterogeneous ad-hoc networks,” IEEE Trans. Signal Process., vol. 60, no. 4, pp. 1796–1805,
2012.
[15] S. Kompella, G. Nguyen, J. Wieselthier, and A. Ephremides, “Stable
throughput tradeoffs in cognitive shared channels with cooperative
relaying,” in Proc. IEEE INFOCOM, Apr. 2011, pp. 1961–1969.
[16] I. Krikidis, T. Charalambous, and J. Thompson, “Stability analysis and
power optimization for energy harvesting cooperative networks,” IEEE
Signal Process. Lett., vol. 19, no. 1, pp. 20–23, January 2012.
[17] A. El Shafie, A. Sultan, and T. Khattab, “Maximum throughput of a
secondary user cooperating with an Energy-Aware primary user,” in
Proc. IEEE WiOpt, May 2014, pp. 287–294.
[18] G. Durisi, T. Koch, and P. Popovski, “Towards massive, ultra-reliable,
and low-latency wireless communication with short packets,” 2015.
[Online]. Available: https://arxiv.org/pdf/1504.06526.pdf
[19] B. Lee, S. Park, D. J. Love, H. Ji, and B. Shim, “Packet structure
and receiver design for low latency wireless communications with
ultra-short packets,” IEEE Trans. Commun., 2017. [Online]. Available:
http://ieeexplore.ieee.org/abstract/document/8047997/
[20] J. Oestman, G. Durisi, E. G. Stroem, J. Li, H. Sahlin, and G. Liva, “Lowlatency ultra-reliable 5G communications: Finite-blocklength bounds
and coding schemes,” in IEEE International ITG Conference on Systems,
Communications and Coding, Feb 2017, pp. 1–6.
[21] A. Sadek, K. Liu, and A. Ephremides, “Cognitive multiple access via
cooperation: protocol design and performance analysis,” IEEE Trans.
Inf. Theory, vol. 53, no. 10, pp. 3677–3696, Oct. 2007.
[22] A. E. Shafie, T. Khattab, A. El-Keyi, and M. Nafie, “On the coexistence of a primary user with an energy harvesting secondary user: A
case of cognitive cooperation,” Wireless Communications and Mobile
Computing, vol. 16, no. 2, pp. 166–176, 2016.
[23] Y. Liang, Y. Zeng, E. Peh, and A. Hoang, “Sensing-throughput tradeoff
for cognitive radio networks,” IEEE Trans. Wireless Commun., vol. 7,
no. 4, pp. 1326–1337, April 2008.
[24] I. Krikidis, N. Devroye, and J. Thompson, “Stability analysis for
cognitive radio with multi-access primary transmission,” IEEE Trans.
Wireless Commun., vol. 9, no. 1, pp. 72–77, 2010.
arXiv:1711.11383v1 [stat.ML] 30 Nov 2017
Learning to Learn from Weak Supervision
by Full Supervision
Mostafa Dehghani
University of Amsterdam
[email protected]
Aliaksei Severyn, Sascha Rothe
Google Research
{severyn,rothe}@google.com
Jaap Kamps
University of Amsterdam
[email protected]
Abstract
In this paper, we propose a method for training neural networks when we have a
large set of data with weak labels and a small amount of data with true labels. In
our proposed model, we train two neural networks: a target network, the learner and
a confidence network, the meta-learner. The target network is optimized to perform a
given task and is trained using a large set of unlabeled data that are weakly annotated.
We propose to control the magnitude of the gradient updates to the target network
using the scores provided by the second confidence network, which is trained on
a small amount of supervised data. Thus we avoid that the weight updates computed
from noisy labels harm the quality of the target network model.
1 Introduction
Using weak or noisy supervision is a straightforward approach to increase the size of the training
data [Dehghani et al., 2017b, Patrini et al., 2016, Beigman and Klebanov, 2009, Zeng et al., 2015,
Bunescu and Mooney, 2007]. The output of heuristic methods can be used as weak or noisy signals
along with a small amount of labeled data to train neural networks. This is usually done by pre-training
the network on weak data and fine tuning it with true labels [Dehghani et al., 2017b, Severyn and
Moschitti, 2015a]. However, these two independent stages do not leverage the full capacity of
information from true labels and using noisy labels of lower quality often brings little to no improvement.
This issue is tackled by noise-aware models where denoising the weak signal is part of the learning
process [Patrini et al., 2016, Sukhbaatar et al., 2014, Dehghani et al., 2017a].
In this paper, we propose a method that leverages a small amount of data with true labels along with
a large amount of data with weak labels. In our proposed method, we train two networks in a multi-task
fashion: a target network which uses a large set of weakly annotated instances to learn the main task
while a confidence network is trained on a small human-labeled set to estimate confidence scores. These
scores define the magnitude of the weight updates to the target network during the back-propagation
phase. From a meta-learning perspective [Andrychowicz et al., 2016, Finn et al., 2017, Ravi and
Larochelle, 2016], the goal of the confidence network, as the meta-learner, trained jointly with the
target network, as the learner, is to calibrate the learning rate of the target network for each instance
in the batch. I.e., the weights w of the target network fw at step t +1 are updated as follows:
w_{t+1} = w_t - \frac{\eta_t}{b}\sum_{i=1}^{b} c_\theta(x_i, \tilde{y}_i)\,\nabla\mathcal{L}\!\left(f_{w_t}(x_i), \tilde{y}_i\right) \qquad (1)
where ηt is the global learning rate, L(·) is the loss of predicting ŷ = fw (xi ) for an input xi when the
label is ỹ; cθ (·) is a scoring function learned by the confidence network taking input instance xi and
its noisy label ỹi . Thus, we can effectively control the contribution to the parameter updates for the
target network from weakly labeled instances based on how reliable their labels are according to the
confidence network (learned on a small set of supervised data).
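A minimal PyTorch-style sketch of the confidence-weighted update in Eq. (1) is given below; target_net, confidence_net and task_loss are assumed to exist and are not part of the paper's released code.

```python
import torch

def weak_supervision_step(target_net, confidence_net, optimizer, task_loss, x, y_weak):
    """One update of the target network on a weakly labeled batch, following Eq. (1)."""
    optimizer.zero_grad()
    y_hat = target_net(x)                                    # predictions f_w(x)
    with torch.no_grad():                                    # no gradient to the meta-learner here
        c = confidence_net(x, y_weak).squeeze(-1)            # per-instance confidence scores
    per_example = task_loss(y_hat, y_weak, reduction="none") # L(f_w(x_i), y~_i), assumed callable
    loss = (c * per_example).mean()                          # confidence-weighted mean
    loss.backward()                                          # yields the scaled gradients of Eq. (1)
    optimizer.step()
    return loss.item()
```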
NIPS Workshop on Meta-Learning (MetaLearn 2017), Long Beach, CA, USA.
(a) Full Supervision Mode: Training on batches of data with true labels. (b) Weak Supervision Mode: Training on batches of data with weak labels.
Figure 1: Our proposed multi-task network for learning a target task using a large amount of weakly labeled data and a
small amount of data with true labels. Faded parts of the network are disabled during the training in the corresponding mode.
Red-dotted arrows show gradient propagation. Parameters of the parts of the network in red frames get updated in the backward
pass, while parameters of the network in blue frames are fixed during the training.
Our approach is similar to [Andrychowicz et al., 2016], where a separate recurrent neural network called
optimizer learns to predict an optimal update rule for updating parameters of the target network. The optimizer receives a gradient from the target network and outputs the adjusted gradient matrix. As the number
of parameters in modern neural networks is typically on the order of millions, the gradient matrix becomes
too large to feed into the optimizer, so the approach presented in [Andrychowicz et al., 2016] is applied to
very small models. In contrast, our approach leverages additional weakly labeled data where we use the
confidence network to predict per-instance scores that calibrate gradient updates for the target network.
Our setup requires running a weak annotator to label a large amount of unlabeled data, which is done
at pre-processing time. For many tasks, it is possible to use a simple heuristic to generate weak labels.
This set is then used to train the target network. In contrast, a small human-labeled set is used to train
the confidence network, which estimates how good the weak annotations are, i.e. controls the effect
of weak labels on updating the parameters of the target network. This helps to alleviate updates from
instances with unreliable labels that may corrupt the target network.
In this paper, we study our approach on the sentiment classification task. Our experimental results suggest
that the proposed method is more effective in leveraging large amounts of weakly labeled data compared
to traditional fine-tuning. We also show that explicitly controlling the target network weight updates
with the confidence network leads to faster convergence.
2 The Proposed Method
In the following, we describe our recipe for training neural networks, in a scenario where along with a
small human-labeled training set a large set of weakly labeled instances is leveraged. Formally, given a set
of unlabeled training instances, we run a weak annotator to generate noisy labels. This gives us the training
set U. It consists of tuples of training instances x_i and their weak labels ỹ_i, i.e. U = {(x_i, ỹ_i), ...}. For a small
set of training instances with true labels, we also apply the weak annotator to generate weak labels. This
creates the training set V, consisting of triplets of training instances x j , their weak labels ỹ j , and their true
labels y_j, i.e. V = {(x_j, ỹ_j, y_j), ...}. We can generate a large amount of training data U at almost no cost using
the weak annotator. In contrast, we have only a limited amount of data with true labels, i.e. |V | << |U|.
In our proposed framework we train a multi-task neural network that jointly learns the confidence
score of weak training instances and the main task using controlled supervised signals. The high-level
representation of the model is shown in Figure 1: it comprises two neural networks, namely the
confidence network and the target network. The goal of the confidence network is to estimate the
confidence score c̃ j of training instances. It is learned on triplets from training set V: input x j , its
weak label ỹ j , and its true label y j . The score c̃ j is then used to control the effect of weakly annotated
training instances on updating the parameters of the target network.
The target network is in charge of handling the main task we want to learn. Given the data instance,
xi and its weak label ỹi from the training set U, the target network aims to predict the label ŷi . The
target network parameter updates are based on noisy labels assigned by the weak annotator, but the
magnitude of the gradient update is based on the output of the confidence network.
Both networks are trained in a multi-task fashion alternating between the full supervision and the weak
supervision mode. In the full supervision mode, the parameters of the confidence network get updated
using batches of instances from training set V. As depicted in Figure 1b, each training instance is
passed through the representation layer mapping inputs to vectors. These vectors are concatenated with
their corresponding weak labels ỹ j . The confidence network then estimates c̃ j , which is the probability
of taking data instance j into account for training the target network.
In the weak supervision mode the parameters of the target network are updated using training set U. As
shown in Figure 1a, each training instance is passed through the same representation learning layer and
is then processed by the supervision layer which is a part of the target network predicting the label for the
main task. We also pass the learned representation of each training instance along with its corresponding
label generated by the weak annotator to the confidence network to estimate the confidence score of
the training instance, i.e. c̃i . The confidence score is computed for each instance from set U. These
confidence scores are used to weight the gradient updating the target network parameters during back-propagation. It is noteworthy that the representation layer is shared between both networks, so the confidence network can benefit from the largeness of set U and the target network can utilize the quality of set V.
2.1 Model Training
Our optimization objective is composed of two terms: (1) the confidence network loss L c , which
captures the quality of the output from the confidence network and (2) the target network loss Lt , which
expresses the quality for the main task.
Both networks are trained by alternating between the weak supervision and the full supervision mode.
In the full supervision mode, the parameters of the confidence network are updated using training
instances drawn from training set V. We use a cross-entropy loss function for the confidence network to
capture the difference between the predicted confidence score of instance j, i.e. c̃_j, and the target score c_j:
    L_c = Σ_{j∈V} −c_j log(c̃_j) − (1 − c_j) log(1 − c̃_j).
The target score c_j is calculated based on the difference
of the true and weak labels with respect to the main task. In the weak supervision mode, the parameters
of the target network are updated using training instances from U. We use a weighted loss function,
L_t, to capture the difference between the predicted label ŷ_i by the target network and the target label ỹ_i:
    L_t = Σ_{i∈U} c̃_i L_i,
where L_i is the task-specific loss on training instance i and c̃_i is the confidence score
of the weakly annotated instance i, estimated by the confidence network. Note that c̃i is treated as
a constant during the weak supervision mode and there is no gradient propagation to the confidence
network in the backward pass (as depicted in Figure 1a).
We minimize two loss functions jointly by randomly alternating between full and weak supervision
modes (for example, using a 1:10 ratio). During training and based on the chosen supervision mode,
we sample a batch of training instances from V with replacement or from U without replacement (since
we can generate as much training data as needed for set U).
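To make the two objectives concrete, here is a minimal NumPy sketch of the loss terms and of the alternation between the two modes. It is only an illustration under assumed toy shapes (4 instances, 3 classes) and invented helper names, not the authors' implementation, and it omits the actual networks and gradient steps:

import numpy as np

rng = np.random.default_rng(0)

def confidence_loss(c_target, c_pred, eps=1e-8):
    # L_c: cross-entropy between target scores c_j and predicted scores c~_j (full-supervision mode)
    c_pred = np.clip(c_pred, eps, 1 - eps)
    return np.sum(-c_target * np.log(c_pred) - (1 - c_target) * np.log(1 - c_pred))

def weighted_task_loss(c_scores, per_instance_losses):
    # L_t: per-instance task losses weighted by the confidence scores, which are
    # treated as constants (no gradient flows back into the confidence network)
    return np.sum(c_scores * per_instance_losses)

# Toy batch from V: one-hot true labels and soft weak labels over 3 classes
y_true = np.eye(3)[rng.integers(0, 3, size=4)]
y_weak = rng.dirichlet(np.ones(3), size=4)
c_target = 1.0 - np.mean(np.abs(y_true - y_weak), axis=1)   # target score c_j
c_pred = rng.uniform(0.1, 0.9, size=4)                      # output of the confidence network
print("L_c =", confidence_loss(c_target, c_pred))

# Toy batch from U: task losses of the target network on weakly labelled instances
task_losses = rng.uniform(0.5, 2.0, size=4)
print("L_t =", weighted_task_loss(c_pred, task_losses))

# Alternation: roughly one full-supervision batch for every ten weak-supervision batches
for step in range(50):
    mode = "full" if rng.random() < 1 / 11 else "weak"
    # sample from V with replacement (full) or from U without replacement (weak),
    # then update the confidence network or the target network accordingly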
3 Experiments
In this section, we apply our method to the sentiment classification task. This task aims to identify the
sentiment (e.g., positive, negative, or neutral) underlying an individual sentence. Our target network
is a convolutional model similar to [Deriu et al., 2017, Severyn and Moschitti, 2015a,b, Deriu et al.,
2016]. In this model, the Representation Learning Layer learns to map the input sentence s to a
dense vector as its representation. The inputs are first passed through an embedding layer mapping
the sentence to a matrix S ∈ Rm× |s | , followed by a series of 1d convolutional layers with max-pooling.
The Supervision Layer is a feed-forward neural network with a softmax output layer which
returns the probability distribution over all three classes. The Weak Annotator for the sentiment
classification task is a simple unsupervised lexicon-based method [Hamdan et al., 2013, Kiritchenko
et al., 2014], which averages the predefined sentiment scores of the words [Baccianella et al., 2010] in
the sentence. More details about the sentiment classification model and the experimental setups are
provided in Appendix A and Appendix B, respectively. In the following, we briefly introduce our
baselines and the datasets we have used, and present the results of the experiments.
Baselines. We evaluate the performance of our method compared to the following baselines: (WA)
Weak Annotator, i.e. the unsupervised method that we used for annotating the unlabeled data. (WSO)
Weak Supervision Only, i.e. the target network trained only on weakly labeled data. (FSO) Full
Supervision Only, i.e. the target network trained only on true labeled data. (WS+FT) Weak Supervision
+ Fine Tuning, i.e. the target network trained on the weakly labeled data and fine tuned on true labeled
data. (NLI) New Label Inference [Veit et al., 2017] is similar to our proposed neural architecture
inspired by the teacher-student paradigm [Hinton et al., 2015], but instead of having the confidence
Table 1: Performance of the baseline models as well as our proposed method on different datasets in terms of Macro-F1. ▲ or ▼ indicates that the improvements or degradations with respect to weak supervision only (WSO) are statistically significant at the 0.05 level using the paired two-tailed t-test.

Method        SemEval-14    SemEval-15
WA (Lexicon)  0.5141        0.4471
WSO           0.6719        0.5606
FSO           0.6307        0.5811
WS+FT         0.7080 ▲      0.6441 ▲
NLI           0.7113 ▲      0.6433 ▲
L2LWSST       0.7183 ▲      0.6501 ▲
L2LWS         0.7362 ▲      0.6626 ▲
SemEval 1st   0.7162 ▲      0.6618 ▲
Figure 2: Loss of the target network (L t ) and the confidence network
(L c ) compared to the loss of WSO (LWSO ) on training/validation set
and performance of L2LWS, WSO, and WA on test sets with respect
to different amount of training data on sentiment classification.
network to predict the “confidence score” of the training instance, there is a label generator network
which is trained on set V to map the weak labels of the instances in U to the new labels. The new
labels are then used as the target for training the target network. (L2LWSST ) Our model with different
training setup: Separate Training, i.e. we consider the confidence network as a separate network,
without sharing the representation learning layer, and train it on set V. We then train the target network
on the controlled weak supervision signals. (L2LWS) Learning to Learn from Weak Supervision
with Joint Training is our proposed neural architecture in which we jointly train the target network and
the confidence network by alternating batches drawn from sets V and U (as explained in Section 2.1).
Data. For training and testing our model, we use the SemEval-13, SemEval-14, and SemEval-15 Twitter sentiment classification datasets. We use a large corpus containing 50M tweets collected during two months as the unlabeled set.
Results and Discussion. We report the official SemEval metric, Macro-F1, in Table 1. Based on
the results, L2LWS provides a significant boost on the performance over all datasets. Typical fine
tuning, i.e. WS+FT, leads to improvement over weak supervision only. The performance of NLI is
worse than L2LWS as learning a mapping from imperfect labels to accurate labels and training the
target network on new labels is essentially harder than learning to filter out the noisy labels, hence
needs a lot of supervised data. L2LWSST performs worse than L2LWS since the training data V is not
enough to train a high-quality confidence network without taking advantage of the shared representation
that can be learned from the vast amount of weakly annotated data in U. We also noticed that this
strategy leads to a slow convergence compared to WSO. Besides the general baselines, we also report
the best performing systems, which are also convolution-based models ([Rouvier and Favre, 2016] on
SemEval-14; [Deriu et al., 2016] on SemEval-15). Our proposed model outperforms the best systems.
Controlling the effect of supervision to train neural networks not only improves the performance, but also
provides the network with more solid signals which speeds up the training process. Figure 2 illustrates the
training/validation loss for both networks, compared to the loss of training the target network with weak
supervision, along with their performance on test sets, with respect to different amounts of training data for
the sentiment classification task. As shown, during training L_t is higher than L_WSO; however, the target labels with respect to which the loss is calculated are weak, so, setting aside overfitting and lack of generalization, a very low training loss simply means fitting the imperfections of the weak data. However, L_t on the validation set decreases
faster than LWSO and compared to WSO, the performance of L2LWS on both test sets increases quickly
and L2LWS passes the performance of the weak annotator by seeing fewer instances annotated by WA.
4 Conclusion
In this paper, we propose a neural network architecture that unifies learning to estimate the confidence
score of weak annotations and training neural networks with controlled weak supervision. We apply
the model to the sentiment classification task, and empirically verify that the proposed model speeds
up the training process and obtains more accurate results.
References
Martín Abadi et al. TensorFlow: Large-scale machine learning on heterogeneous systems, 2015. URL
http://tensorflow.org/. Software available from tensorflow.org.
Marcin Andrychowicz, Misha Denil, Sergio Gomez, Matthew W Hoffman, David Pfau, Tom Schaul,
and Nando de Freitas. Learning to learn by gradient descent by gradient descent. In Advances in
Neural Information Processing Systems, pages 3981–3989, 2016.
Stefano Baccianella, Andrea Esuli, and Fabrizio Sebastiani. Sentiwordnet 3.0: An enhanced lexical
resource for sentiment analysis and opinion mining. In LREC, volume 10, pages 2200–2204, 2010.
Eyal Beigman and Beata Beigman Klebanov. Learning with annotation noise. In Proceedings of the
Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference
on Natural Language Processing of the AFNLP: Volume 1-Volume 1, pages 280–287. Association
for Computational Linguistics, 2009.
Razvan Bunescu and Raymond Mooney. Learning to extract relations from the web using minimal
supervision. In ACL, 2007.
Mostafa Dehghani, Aliaksei Severyn, Sascha Rothe, and Jaap Kamps. Avoiding your teacher’s mistakes:
Training neural networks with controlled weak supervision. arXiv preprint arXiv:1711.00313, 2017a.
Mostafa Dehghani, Hamed Zamani, Aliaksei Severyn, Jaap Kamps, and W. Bruce Croft. Neural
ranking models with weak supervision. In SIGIR’17, 2017b.
Jan Deriu, Maurice Gonzenbach, Fatih Uzdilli, Aurelien Lucchi, Valeria De Luca, and Martin Jaggi.
Swisscheese at semeval-2016 task 4: Sentiment classification using an ensemble of convolutional
neural networks with distant supervision. Proceedings of SemEval, pages 1124–1128, 2016.
Jan Deriu, Aurelien Lucchi, Valeria De Luca, Aliaksei Severyn, Simon Müller, Mark Cieliebak,
Thomas Hofmann, and Martin Jaggi. Leveraging large amounts of weakly supervised data for
multi-language sentiment classification. In Proceedings of the 26th international International
World Wide Web Conference (WWW’17), pages 1045–1052, 2017.
Thomas Desautels, Andreas Krause, and Joel W Burdick. Parallelizing exploration-exploitation
tradeoffs in gaussian process bandit optimization. Journal of Machine Learning Research, 15(1):
3873–3923, 2014.
Chelsea Finn, Pieter Abbeel, and Sergey Levine. Model-agnostic meta-learning for fast adaptation
of deep networks. In ICML, 2017.
Hussam Hamdan, Frederic Béchet, and Patrice Bellot. Experiments with dbpedia, wordnet and
sentiwordnet as resources for sentiment analysis in micro-blogging. In Second Joint Conference
on Lexical and Computational Semantics (* SEM), volume 2, pages 455–459, 2013.
Geoffrey Hinton, Oriol Vinyals, and Jeff Dean. Distilling the knowledge in a neural network. arXiv
preprint arXiv:1503.02531, 2015.
Diederik Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint
arXiv:1412.6980, 2014.
Svetlana Kiritchenko, Xiaodan Zhu, and Saif M Mohammad. Sentiment analysis of short informal
texts. Journal of Artificial Intelligence Research, 50:723–762, 2014.
Preslav Nakov, Alan Ritter, Sara Rosenthal, Fabrizio Sebastiani, and Veselin Stoyanov. Semeval-2016
task 4: Sentiment analysis in twitter. Proceedings of SemEval, pages 1–18, 2016.
Giorgio Patrini, Alessandro Rozza, Aditya Menon, Richard Nock, and Lizhen Qu. Making neural
networks robust to label noise: a loss correction approach. arXiv preprint arXiv:1609.03683, 2016.
Sachin Ravi and Hugo Larochelle. Optimization as a model for few-shot learning. In ICLR, 2016.
Sara Rosenthal, Preslav Nakov, Svetlana Kiritchenko, Saif M Mohammad, Alan Ritter, and Veselin
Stoyanov. Semeval-2015 task 10: Sentiment analysis in twitter. In Proceedings of the 9th
international workshop on semantic evaluation (SemEval 2015), pages 451–463, 2015.
Mickael Rouvier and Benoit Favre. Sensei-lif at semeval-2016 task 4: Polarity embedding fusion
for robust sentiment analysis. Proceedings of SemEval, pages 202–208, 2016.
Aliaksei Severyn and Alessandro Moschitti. Twitter sentiment analysis with deep convolutional
neural networks. In Proceedings of the 38th International ACM SIGIR Conference on Research
and Development in Information Retrieval, pages 959–962. ACM, 2015a.
Aliaksei Severyn and Alessandro Moschitti. Unitn: Training deep convolutional neural network for
twitter sentiment classification. In Proceedings of the 9th International Workshop on Semantic
Evaluation (SemEval 2015), Association for Computational Linguistics, Denver, Colorado, pages
464–469, 2015b.
Nitish Srivastava, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov.
Dropout: A simple way to prevent neural networks from overfitting. J. Mach. Learn. Res., 15(1):
1929–1958, 2014.
Sainbayar Sukhbaatar, Joan Bruna, Manohar Paluri, Lubomir Bourdev, and Rob Fergus. Training
convolutional networks with noisy labels. arXiv preprint arXiv:1406.2080, 2014.
Yuan Tang. Tf.learn: Tensorflow’s high-level module for distributed machine learning. arXiv preprint
arXiv:1612.04251, 2016.
Andreas Veit, Neil Alldrin, Gal Chechik, Ivan Krasin, Abhinav Gupta, and Serge Belongie. Learning
from noisy large-scale datasets with minimal supervision. In The Conference on Computer Vision
and Pattern Recognition, 2017.
Daojian Zeng, Kang Liu, Yubo Chen, and Jun Zhao. Distant supervision for relation extraction via
piecewise convolutional neural networks. In EMNLP, pages 1753–1762, 2015.
Appendices

A Sentiment Classification Model
In the sentiment classification task, we aim to identify the sentiment (e.g., positive, negative, or neutral)
underlying an individual sentence. The model we used as the target network is a convolutional model
similar to [Deriu et al., 2017, Severyn and Moschitti, 2015a,b, Deriu et al., 2016].
Each training instance x consists of a sentence s and its sentiment label ỹ. The architecture of the target
network is illustrated in Figure 3. Here we describe the setup of the target network, i.e. description
of the representation learning layer and the supervision layer.
The Representation Learning Layer learns a representation for the input sentence s and is shared
between the target network and confidence network. It consists of an embedding function ε : V → Rm ,
where V denotes the vocabulary set and m is the number of embedding dimensions.
This function maps the sentence to a matrix S ∈ Rm×|s | , where each
column represents the embedding of a word at the corresponding
position in the sentence. Matrix S is passed through a convolution
layer. In this layer, a set of f filters is applied to a sliding window
of length h over S to generate a feature map matrix O. Each feature map o_i for a given filter F is generated by
    o_i = Σ_{k,j} S[i : i + h]_{k,j} F_{k,j},
where S[i : i + h] denotes the concatenation of word vectors from
position i to i + h. The concatenation of all oi produces a feature
vector o ∈ R |s |−h+1 . The vectors o are then aggregated over all f
filters into a feature map matrix O ∈ R f ×(|s |−h+1) .
We also add a bias vector b ∈ R^f to the result of a convolution. Each convolutional layer is followed by a non-linear activation function (we use ReLU) which is applied element-wise. Afterward, the output is passed to the max pooling layer, which operates on columns of the feature map matrix O and returns the largest value: pool(o_i) : R^{1×(|s|−h+1)} → R (see Figure 3). This architecture is similar to the state-of-the-art model for Twitter sentiment classification from SemEval 2015 and 2016 [Severyn and Moschitti, 2015b, Deriu et al., 2016].
Figure 3: The target network for the sentiment classification task.
We initialize the embedding matrix with word2vec embeddings pretrained on a collection of 50M
tweets.
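The convolution and max-pooling operations described above can be written out directly; the following NumPy sketch (with made-up toy dimensions rather than the paper's actual hyper-parameters) computes one feature map per filter and pools it to a single value:

import numpy as np

def conv_feature_map(S, filters, h):
    # S: (m, L) sentence matrix, one embedding column per word.
    # filters: (f, m, h) filter bank.
    # Returns O of shape (f, L - h + 1) with O[F, i] = sum_{k,j} S[:, i:i+h][k, j] * filters[F][k, j].
    m, L = S.shape
    f = filters.shape[0]
    O = np.zeros((f, L - h + 1))
    for i in range(L - h + 1):
        window = S[:, i:i + h]                          # (m, h) slice of word vectors
        O[:, i] = np.tensordot(filters, window, axes=([1, 2], [0, 1]))
    return O

rng = np.random.default_rng(0)
m, L, f, h = 8, 12, 4, 3                                 # toy sizes: emb. dim, sentence length, filters, width
S = rng.normal(size=(m, L))
filters = rng.normal(size=(f, m, h))
O = np.maximum(conv_feature_map(S, filters, h) + rng.normal(size=(f, 1)), 0.0)   # bias + ReLU
pooled = O.max(axis=1)                                   # max pooling over positions, one value per filter
print(pooled.shape)                                      # (4,)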
The Supervision Layer receives the vector representation of the inputs processed by the representation
learning layer and outputs a prediction ỹ. We opt for a simple fully connected feed-forward network with l
hidden layers followed by a softmax. Each hidden layer zk in this network computes zk = α(Wk zk−1 +bk ),
where Wk and bk denote the weight matrix and the bias term corresponding to the k th hidden layer
and α(·) is the non-linearity. These layers are followed by a softmax layer which returns ŷ_i, the probability
distribution over all three classes. We employ the weighted cross-entropy loss:
    L_t = Σ_{i∈B_U} c̃_i Σ_{k∈K} −ỹ_i^k log(ŷ_i^k),    (2)
where BU is a batch of instances from U, and c̃i is the confidence score of the weakly annotated instance
i, and K is a set of classes.
The Weak Annotator for the sentiment classification task is a simple unsupervised lexicon-based
method [Hamdan et al., 2013, Kiritchenko et al., 2014]. We use SentiWordNet03 [Baccianella
et al., 2010] to assign probabilities (positive, negative and neutral) for each token in set U. Then a
sentence-level distribution is derived by simply averaging the distributions of the terms, yielding a noisy
label ỹi ∈ R |K | , where |K | is the number of classes, i.e. |K | = 3. We empirically found that using soft
labels from the weak annotator works better than assigning a single hard label. The target label c j for the
confidence network is calculated using the mean absolute difference of the true label and the weak label:
    c_j = 1 − (1/|K|) Σ_{k∈K} |y_j^k − ỹ_j^k|,
where y_j is the one-hot encoding of the sentence label over all classes.
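A toy sketch of this lexicon-based weak annotator and of the target score c_j is given below; the token distributions are invented placeholders rather than real SentiWordNet entries:

import numpy as np

# Hypothetical per-token sentiment distributions (positive, negative, neutral);
# the words and numbers are made up for illustration only.
LEXICON = {
    "good":  np.array([0.7, 0.1, 0.2]),
    "not":   np.array([0.2, 0.5, 0.3]),
    "bad":   np.array([0.1, 0.8, 0.1]),
    "movie": np.array([0.2, 0.2, 0.6]),
}
UNKNOWN = np.ones(3) / 3.0                      # back-off for out-of-vocabulary tokens

def weak_label(sentence):
    # Soft weak label: average of the token-level sentiment distributions.
    dists = [LEXICON.get(tok, UNKNOWN) for tok in sentence.lower().split()]
    return np.mean(dists, axis=0)

def target_confidence(y_true_onehot, y_weak):
    # c_j = 1 - (1/|K|) * sum_k |y_j^k - y~_j^k|
    return 1.0 - np.mean(np.abs(y_true_onehot - y_weak))

y_weak = weak_label("not a bad movie")
y_true = np.array([1.0, 0.0, 0.0])              # annotated as positive
print(y_weak, target_confidence(y_true, y_weak))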
B Experimental Setups
The proposed architectures are implemented in TensorFlow [Tang, 2016, Abadi et al., 2015]. We use
the Adam optimizer [Kingma and Ba, 2014] and the back-propagation algorithm. Furthermore, to
prevent feature co-adaptation, we use dropout [Srivastava et al., 2014] as a regularization technique in
all models.
In our setup, the confidence network that predicts c̃_j is a fully connected feed-forward network. Since the confidence network is learned from only a small set of true labels, and to speed up training, we initialize the representation learning layer with pre-trained parameters, i.e., pre-trained word embeddings. We
use ReLU as a non-linear activation function α in both target network and confidence network.
Collections. We test our model on the twitter message-level sentiment classification of SemEval-15
Task 10B [Rosenthal et al., 2015]. Datasets of SemEval-15 subsume the test sets from previous editions
of SemEval, i.e. SemEval-13 and SemEval-14. Each tweet was preprocessed so that URLs and
usernames are masked.
Data with true labels. We use train (9,728 tweets) and development (1,654 tweets) data from
SemEval-13 for training and SemEval-13-test (3,813 tweets) for validation. To make our results
comparable to the official runs on SemEval we use SemEval-14 (1,853 tweets) and SemEval-15 (2,390
tweets) as test sets [Rosenthal et al., 2015, Nakov et al., 2016].
Data with weak labels. We use a large corpus containing 50M tweets collected during two months for
both, training the word embeddings and creating the weakly annotated set U using the lexicon based
method explained in Section A.
Parameters and Settings. We tuned hyper-parameters for each model, including baselines, separately
with respect to the true labels of the validation set using batched GP bandits with an expected improvement
acquisition function [Desautels et al., 2014]. The size and number of hidden layers for the classifier and
the confidence network were separately selected from {32,64,128} and {1,2,3}, respectively. We tested
the model with both, 1 and 2 convolutional layers. The number of convolutional feature maps and
the filter width is selected from {200,300} and {3,4,5}, respectively. The initial learning rate and the
dropout parameter were selected from {1E −3,1E −5} and {0.0,0.2,0.5}, respectively. We considered
embedding sizes of {100,200} and the batch size in these experiments was set to 64.
Fine-grained acceleration control for autonomous
intersection management using deep reinforcement
learning
arXiv:1705.10432v1 [] 30 May 2017
Hamid Mirzaei
Dept. of Computer Science
University of California, Irvine
[email protected]

Tony Givargis
Dept. of Computer Science
University of California, Irvine
[email protected]
Abstract—Recent advances in combining deep learning and
Reinforcement Learning have shown a promising path for
designing new control agents that can learn optimal policies for
challenging control tasks. These new methods address the main
limitations of conventional Reinforcement Learning methods such
as customized feature engineering and small action/state space
dimension requirements. In this paper, we leverage one of the
state-of-the-art Reinforcement Learning methods, known as Trust
Region Policy Optimization, to tackle intersection management
for autonomous vehicles. We show that using this method, we can
perform fine-grained acceleration control of autonomous vehicles
in a grid street plan to achieve a global design objective.
I. INTRODUCTION
Previous works on autonomous intersection management
(AIM) in urban areas have mostly focused on intersection
arbitration as a shared resource among a large number
of autonomous vehicles. In these works [1][2], high-level
control of the vehicles is implemented such that the vehicles
are self-contained agents that only communicate with the
intersection management agent to reserve space-time slots in
the intersection. This means that low-level vehicle navigation
which involves acceleration and speed control is performed
by each individual vehicle independent of other vehicles and
intersection agents. This approach is appropriate for minor
arterial roads where a large number of vehicles utilize the
main roads at similar speeds while the adjacent intersections
are far away.
In scenarios involving local roads, where the majority of the
intersections are managed by stop signs, the flow of traffic is
more efficiently managed using a fine-grained vehicle control
methodology. For example, when two vehicles are crossing
the intersection of two different roads at the same time, one
vehicle can decelerate slightly to avoid collision with the other
one or it can take another path to avoid confronting the other
vehicle completely. Therefore, the nature of the AIM problem
is a combination of route planning and real-time acceleration
control of the vehicles. In this paper, we propose a novel AIM
formulation which is the combination of route planning and
fine-grained acceleration control. The main objective of the
control task is to minimize travel time of the vehicles while
avoiding collisions between them and other obstacles. In this
context, since the movement of a vehicle is dependent on the
other vehicles in the same vicinity, the motion data of all
vehicles is needed in order to solve the AIM problem.
To explain the proposed AIM scheme, let us define a “zone”
as a rectangular area consisting of a number of intersections
and segments of local roads. An agent for each zone collects
the motion data and generates the acceleration commands
for all autonomous vehicles within the zone’s boundary. All
the data collection and control command generation should
be done in real-time. This centralized approach cannot be
scaled to a whole city, regardless of the algorithm used, due
to the large number of vehicles moving in a city which
requires enormous computational load and leads to other
infeasible requirements such as low-latency communication
infrastructure. Fortunately, the spatial independence (i.e., the
fact that navigation of the vehicles in one zone is independent
of the vehicles in another zone that is far enough away) makes
AIM an inherently local problem. Therefore, we can assign an
agent for each local zone in a cellular scheme.
The cellular solution nevertheless leads to other difficulties
that should be considered for a successful design of the AIM
system. One issue is the dynamic nature of the transportation
problem. Vehicles can enter or leave a zone controlled by
an agent or they might change their planned destinations
from time to time. To cope with these issues, the receding
horizon control method can be employed where the agent
repeatedly recalculates the acceleration command over a
moving time horizon to take into account the mentioned
changes. Additionally, two vehicles that are moving toward
the same point on the boundary of two adjacent zones
simultaneously might collide because the presence of each
vehicle is not considered by the agent of the adjacent
zone. This problem can be solved by adequate overlap
between adjacent zones. Furthermore, any planned trip for a
vehicle typically crosses multiple zones. Hence, a higher level
planning problem should be solved first that determines the
entry and exit locations of a vehicle in a zone.
In this paper we focus on the subproblem of acceleration
control of the vehicles moving in a zone to minimize the
total travel time. We use a deep reinforcement learning
(RL) approach to tackle the fine-grained acceleration control
problem since conventional control methods are not applicable
because of the non-convex collision avoidance constraints
[3]. Furthermore, if we want to incorporate more elements
into the problem, such as obstacles or reward/penalty terms
for gas usage, passenger comfort, etc., the explicit modeling
becomes intractable and an optimal control law derivation will
be computationally unattainable.
RL methods can address the above mentioned limitations
caused by the explicit modeling requirement and conventional
control method limitations. The main advantage of RL is that
most of the RL algorithms are “model-free” or at most need
a simulation model of the physical system which is easier
to develop than an explicit model. Moreover, the agent can
learn optimal policies just by interacting with the environment
or executing the simulation model. However, conventional
RL techniques are only applicable in small-scale problem
settings and require careful design of approximation functions.
Emerging Deep RL methods [4] that leverage the deep neural
networks to automatically extract features seem like promising
solutions to shortcomings of the classical RL methods.
The main contributions of this paper are: (1) Definition
and formulation of the AIM problem for local road settings
where vehicles are coordinated by fine-grained acceleration
commands. (2) Employing TRPO proposed in [5] to solve
the formulated AIM problem. (3) Incorporating collision
avoidance constraint in the definition of RL environment as
a safety mechanism.
II. RELATED WORK
Advances in autonomous vehicles in recent years have
revealed a portrait of a near future in which all vehicles
will be driven by artificially intelligent agents. This emerging
technology calls for an intelligent transportation system
by redesigning the current transportation system which is
intended to be used by human drivers. One of the interesting
topics that arises in intelligent transportation systems is AIM.
Dresner et al. have proposed a multi-agent AIM system in
which vehicles communicate with intersection management
agents to reserve a dedicated spatio-temporal trajectory at the
intersection [2].
In [6], authors have proposed a self-organizing control
framework in which a cooperative multi-agent control scheme
is employed in addition to each vehicle’s autonomy. The
authors have proposed a priority-level system to determine
the right-of-way through intersections based on vehicles’
characteristics or intersection constraints.
Zohdy et al. presented an approach in which the Cooperative
Adaptive Cruise Control (CACC) systems are leveraged to
minimize delays and prevent clashes [7]. In this approach,
the intersection controller communicates with the vehicles
to recommend the optimal speed profile based on the
vehicle’s characteristics, motion data, weather conditions and
intersection properties. Additionally, an optimization problem
is solved to minimize the total difference of actual arrival times
at the intersection and the optimum times, subject to conflict-free temporal constraints.
Fig. 1: Agent-Environment interaction model in RL.
A decentralized optimal control formulation is proposed in
[8] in which the acceleration/deceleration of the vehicles are
minimized subject to collision avoidance constraints.
Makarem et al. introduced the notion of fluent coordination
where smoother trajectories of the vehicles are achieved
through a navigation function to coordinate the autonomous
vehicles along predefined paths with expected arrival time at
intersections to avoid collisions.
In all the aforementioned works, the AIM problem is
formulated for only one intersection and no global minimum
travel time objective is considered directly. Hausknecht
et al. extended the approach proposed in [2] to multi-intersection settings via dynamic traffic assignment and
dynamic lane reversal [1]. Their problem formulation is based
on intersection arbitration which is well suited to main roads
with a heavy load of traffic.
In this paper, for the first time, we introduce fine-grained
acceleration control for AIM. In contrast to previous works,
Our proposed AIM scheme is applicable to local road
intersections. We also propose an RL-based solution using
Trust Region Policy Optimization to tackle the defined AIM
problem.
III. REINFORCEMENT LEARNING
In this section, we briefly review RL and introduce the
notations used in the rest of the paper. In Fig. 1, the agent-environment model of RL is shown. The “agent” interacts with
the “environment” by applying “actions” that influence the
environment state at the future time steps and observes the
state and “reward” in the next time step resulting from the
action taken. The “return” is defined as the sum of all the
rewards from the current step to the end of current “episode”:
Gt =
T
X
ri
(1)
i=t
where ri are future rewards and T is the total number of
steps in the episode. An “episode” is defined as a sequence of
agent-environment interactions. In the last step of an episode
the control task is “finished.” Episode termination is defined
specifically for the control task of the application.
For example, in the cart-pole balancing task, the agent is the
controller, the environment is the cart-pole physical system,
the action is the force command applied to the cart, and the
reward can be defined as r = 1 as long as the pole is nearly
in an upright position and a large negative number when the
pole falls. The system states are cart position, cart speed, pole
angle and pole angular speed. The agent task is to maximize
the return Gt , which is equivalent to prevent pole from falling
for the longest possible time duration.
In RL, a control policy is defined as a mapping of the system
state space to the actions:
a = π(s)
(2)
where a is the action, s is the state and π is the policy. An
optimal policy is one that maximizes the return for all the
states, i.e.:
vπ∗ (s) ≥ vπ (s),
for all s, π
(3)
where v is the return function defined as the return achievable
from state s by following policy π. Equation (3) means that
the expected return under optimal policy π ∗ is equal to or
greater than any other policy for all the system states.
The concepts mentioned above to introduce RL are all
applicable to deterministic cases, but generally we should be
able to deal with inherent system uncertainty, measurement
noise, or both. Therefore, we model the system as a Markov
Decision Process (MDP) assuming that the environment has
the Markov property [9]. However, contrary to most of the
control design methods, many RL algorithms do not require
the models to be known beforehand. The elimination of the
requirement to model the system under control is a major
strength of RL.
A system has the Markov property if at a certain time
instant, t, the system history can be captured in a set of
state variables. Therefore, the next state of the system has
a distribution which is only conditioned on the current state
and the taken action at the current time, i.e.:
st+1 ∼ P (st+1 |st , at )
(4)
The Markov property holds for many cyber-physical system
application domains and therefore MDP and RL can be applied
as the control algorithm. We can also define the stochastic
policy which is a generalized version of (2) as a probability
distribution of actions conditioned on the current state, i.e.:
at ∼ π(at |st )
(5)
The expected return function which is the expected value of
‘return’ defined in (1) can be written as:
"∞
#
X
i
vπ (st ) =
E
γ rt+i
(6)
aτ ∼π,τ ≥t
i=0
This equation is defined for infinite episodes and the constant
0 < γ < 1 is introduced to ensure that the defined expected
return is always a finite value, assuming the returns are
bounded.
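For a single sampled trajectory this quantity is just a discounted sum; an illustrative Python snippet (with an arbitrary reward sequence) is:

def discounted_return(rewards, gamma=0.99):
    # Estimate of the return for one trajectory: sum_i gamma^i * r_{t+i}
    g, discount = 0.0, 1.0
    for r in rewards:
        g += discount * r
        discount *= gamma
    return g

print(discounted_return([1.0, 1.0, 1.0, -10.0], gamma=0.9))   # 1 + 0.9 + 0.81 - 7.29 = -4.58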
Another important concept in RL is the action-value
function, Qπ (s, a) defined as the expected return (value) if
action at is taken at time t under policy π:
"∞
#
X
i
Qπ (st , at ) =
E
γ rt+i
aτ ∼π,τ >t
(7)
i=0
There are two main categories of methods to find the
optimal policy. In the first category, Qπ (s, a) is parameterized
as Qθπ (s, a) and the optimal action-value parameter vector θ
is estimated in an iterative process. The optimal policy can
be defined implicitly from Qπ (s, a). For example, the greedy
policy is the one that maximizes Qπ (s, a) in each step:
    a_t = arg max_a {Q_π(s, a)}    (8)
In the second category, which is called policy optimization and
has been successfully applied to large-scale and continuous
control systems [4], the policy is parameterized directly as
π θ (at |st ) and the parameter vector of the optimal policy θ is
estimated. The Trust Region Policy Method (TRPO) [5] is an
example of the second category of methods that guarantees
monotonic policy improvement and is designed to be scalable
to large-scale settings. In each iteration of TRPO, a number
of MDP trajectories are simulated (or actually experienced by
the agent) and θ is updated to improve the policy. A high level
description of TRPO is shown in algorithm 1.
IV. PROBLEM STATEMENT
There is a set of vehicles in a grid street plan area consisting
of a certain number of intersections. For simplicity, we assume
that all the initial vehicle positions and the desired destinations
are located at the intersections. There is a control agent for
the entire area. The agent’s task is to calculate the acceleration
command for the vehicles in real-time (see Fig. 2). We assume
that there are no still or moving obstacles other than vehicles’
or street boundaries.
The input to the agent is the real-time state of the vehicles
which consists of their positions and speeds. We are assuming
that vehicles are point masses and their angular dynamics
are ignored. However, to take the collision avoidance in the
problem formulation, we define a safe radius for each vehicle
and no objects (vehicles or street boundaries) should be closer
than the safe radius to the vehicle.
Fig. 2: Intersection Management Problem. The goal of the problem is to navigate the vehicles from the sources to destinations in minimum time with no collisions.

The objective is to drive all the vehicles to their respective destinations in a way that the total travel time is minimized. Furthermore, no collision should occur between any two vehicles or between a vehicle and the street boundaries.
To minimize the total travel time, a positive reward is assigned to the terminal state in which all the vehicles approximately reach the destinations within some tolerance. A discount factor γ strictly less than one is used. Therefore, the agent should try to reach the terminal state as fast as possible to maximize the discounted return. However, by using only this reward, too many random walk trajectories are needed to discover the terminal state. Therefore, a negative reward is defined for each state, proportional to the total distance of the vehicles to their destinations, as a hint of how far the terminal state is. This negative reward is not in contradiction with the main goal, which is to minimize total travel time.
To avoid collisions, two different approaches can be considered: we can add large negative rewards for the collision states, or we can incorporate a collision avoidance mechanism into the environment model. Our experiments show that the first approach makes the agent too conservative about moving the vehicles in order to minimize the probability of collisions. This might lead to extremely slow learning, which makes it infeasible. Furthermore, collisions are inevitable even with large negative rewards, which limits the effectiveness of learned policies in practice.
For the above mentioned reasons, the second approach is employed, i.e. the safety mechanism that is used in practice is included in the environment definition. The safety mechanism is activated whenever two vehicles are too close to each other or a vehicle is too close to the street boundary. In these cases, the vehicle's built-in collision avoidance system will control the vehicle's acceleration and the acceleration commands from the RL agent are ignored as long as the distance is near the allowed safe radius of the vehicle. In the agent learning process these cases are simulated in a way that the vehicles come to a full stop when they are closer than the safe radius to another vehicle or boundary. By applying this heuristic in the simulation model, the agent should avoid any “near collision” situations explained above, because the deceleration and acceleration cycles take a lot of time and will decrease the expected return.

Algorithm 1: High-Level description of Trust Region Optimization
  Data: S (the actual system or a simulation model), π_θ (parameterized policy)
  Result: θ* (optimal parameters)
  repeat
    use S to generate trajectories of the system using the current π_θ;
    perform one iteration of policy optimization using the Monte Carlo method to get θ_new;
    θ ← θ_new;
  until no more improvements;
  return θ

Based on the problem statement explained above, we can describe the RL formulation in the rest of this subsection. The state is defined as the following vector:
    s_t = [x_t^1, y_t^1, v_{x_t}^1, v_{y_t}^1, . . . , x_t^n, y_t^n, v_{x_t}^n, v_{y_t}^n]^T    (9)
where (x_t^i, y_t^i) and (v_{x_t}^i, v_{y_t}^i) are the position and speed of vehicle i at time t. The action vector is defined as:
    a_t = [a_{x_t}^1, a_{y_t}^1, . . . , a_{x_t}^n, a_{y_t}^n]^T    (10)
where (aixt , aiy t ) is the acceleration command of vehicle i at
time t. The reward function is defined as:
    r(s) = 1,  if ||(x^i − d_x^i, y^i − d_y^i)^T|| < η for all 1 ≤ i ≤ n;
    r(s) = −α Σ_{i=1}^{n} ||(x^i − d_x^i, y^i − d_y^i)^T||,  otherwise.    (11)
where (dix , diy ) is the destination coordinates of vehicle i, η is
the distance tolerance and α is a positive constant.
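A direct NumPy transcription of (11) is shown below; the η and α defaults are taken from Table I, and the positions are arbitrary example values:

import numpy as np

def reward(positions, destinations, eta=0.05, alpha=0.1):
    # Eq. (11): +1 once every vehicle is within eta of its destination,
    # otherwise a penalty proportional to the total remaining distance.
    dists = np.linalg.norm(positions - destinations, axis=1)   # one distance per vehicle
    if np.all(dists < eta):
        return 1.0
    return -alpha * dists.sum()

pos  = np.array([[0.10, 0.20], [0.50, 0.50]])
dest = np.array([[0.10, 0.21], [0.90, 0.50]])
print(reward(pos, dest))    # still far from the terminal state -> negative reward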
Assuming no collision occurs, the state transition equations
for the environment are defined as follows:
    x^i_{t+1} = sat_{\underline{x},\overline{x}}(x^i_t + h v^i_{x_t})
    y^i_{t+1} = sat_{\underline{y},\overline{y}}(y^i_t + h v^i_{y_t})
    v^i_{x_{t+1}} = sat_{\underline{v}_m,\overline{v}_m}(v^i_{x_t} + h a^i_{x_t})
    v^i_{y_{t+1}} = sat_{\underline{v}_m,\overline{v}_m}(v^i_{y_t} + h a^i_{y_t})    (12)
where h is the sampling time, (\underline{x}, \overline{x}, \underline{y}, \overline{y}) defines the area limits, v_m is the maximum speed and sat_{\underline{w},\overline{w}}(·) is the saturation function defined as:
    sat_{\underline{w},\overline{w}}(x) = \underline{w} if x ≤ \underline{w};  \overline{w} if x ≥ \overline{w};  x otherwise.    (13)
To model the collisions, we should check certain conditions
and set the speed to zero. A more detailed description of
collision modeling is presented in Algorithm 2.
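The following NumPy sketch combines the saturated update of (12)–(13) with the stop-on-near-collision heuristic of Algorithm 2. It is a simplified illustration (vehicle-to-boundary handling and the push-back of locations are omitted), not the authors' simulator:

import numpy as np

def sat(x, lo, hi):
    # Saturation function of eq. (13)
    return np.clip(x, lo, hi)

def step(pos, vel, acc, h=0.01, area=(0.0, 1.0), v_max=0.8, a_max=30.0, safe_r=0.02):
    # One application of eq. (12); vehicles that get closer than the safe radius are stopped.
    acc = sat(acc, -a_max, a_max)
    pos_new = sat(pos + h * vel, area[0], area[1])
    vel_new = sat(vel + h * acc, -v_max, v_max)
    n = pos.shape[0]
    for i in range(n):
        for j in range(i + 1, n):
            if np.linalg.norm(pos_new[i] - pos_new[j]) < 2 * safe_r:
                vel_new[i] = 0.0
                vel_new[j] = 0.0
    return pos_new, vel_new

pos = np.array([[0.10, 0.10], [0.13, 0.10]])
vel = np.array([[0.20, 0.00], [-0.20, 0.00]])
acc = np.zeros_like(vel)
print(step(pos, vel, acc))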
A. Solving the AIM problem using TRPO
The simulation model can be implemented based on the RL
formulation described in Section IV. To use TRPO, we need
a parameterized stochastic policy, π θ (at |st ), in addition to the
simulation model. The policy should specify the probability
distribution for each element of the action vector defined in
(10) as a function of the current state st .
We have used the sequential deep neural network (DNN)
policy representation as described in [5]. The input layer
receives the state containing the position and speed of the
vehicles (defined in (9)). There are a number of hidden layers,
each followed by tanh activation functions [10]. Finally, the
output layer generates the mean of a gaussian distribution for
each element of the action vector.
To execute the optimal policy learned by TRPO in each
sampling time, the agent calculates the forward-pass of DNN
using the current state. Next, assuming that all the action
elements have the same variance, the agent samples from the
action gaussian distributions and applies the sampled actions
to the environment as the vehicle acceleration commands.
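A bare-bones version of this Gaussian MLP policy is sketched below with the layer sizes of Fig. 6 for n = 7 vehicles; the weights are random placeholders rather than trained TRPO parameters:

import numpy as np

rng = np.random.default_rng(0)

def mlp_gaussian_policy(state, weights, biases, log_std):
    # Forward pass of a tanh MLP that outputs the mean acceleration command
    # for every vehicle; actions are sampled from N(mean, exp(log_std)^2).
    x = state
    for W, b in zip(weights[:-1], biases[:-1]):
        x = np.tanh(W @ x + b)
    mean = weights[-1] @ x + biases[-1]
    return mean + np.exp(log_std) * rng.normal(size=mean.shape)

n = 7                                   # vehicles -> state has 4n entries, action 2n
sizes = [4 * n, 100, 100, 100, 2 * n]   # layer widths from Fig. 6
weights = [rng.normal(scale=0.1, size=(o, i)) for i, o in zip(sizes[:-1], sizes[1:])]
biases  = [np.zeros(o) for o in sizes[1:]]
state = rng.normal(size=4 * n)
action = mlp_gaussian_policy(state, weights, biases, log_std=-1.0)
print(action.shape)                     # (14,) acceleration commands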
V. EVALUATION
A. Baseline Method
To the best of our knowledge there is no other solution
proposed for the fine-grained acceleration AIM problem
introduced in this paper. Therefore, we use conventional
optimization methods to study how close the proposed solution
is to the optimal solution. Furthermore, we will see that the
conventional optimization is able to solve the AIM problem
only for very small-sized problems. This confirms that the
proposed RL-based solution is a promising alternative to the
conventional methods.
Theoretically, the best solution to the problem defined
in section IV can be obtained if we reformulate it as a
conventional optimization problem. The following equations
and inequalities describe the AIM optimization problem:
Fig. 3: Initial setup of each episode. Small circles are the sources and big circles are the destinations. (a) small example, (b) large example.
    a_t^* = arg min_{a_t} Σ_{t=0}^{T−1} Σ_{i=1}^{n} ||(x_t^i − d_x^i, y_t^i − d_y^i)^T||    (14)
s. t.
    \underline{x} ≤ x_t^i ≤ \overline{x}    (1 ≤ i ≤ n)    (15)
    \underline{y} ≤ y_t^i ≤ \overline{y}    (1 ≤ i ≤ n)    (16)
    \underline{v}_m ≤ v_{x_t}^i ≤ \overline{v}_m    (1 ≤ i ≤ n)    (17)
    \underline{v}_m ≤ v_{y_t}^i ≤ \overline{v}_m    (1 ≤ i ≤ n)    (18)
Algorithm 2: State Transition Function
  Data: s_t (state at time t), a_t (action at time t)
  Result: s_{t+1} (state at time t + 1)
  a_{x_t}^i ← sat_{\underline{a}_m,\overline{a}_m}(a_{x_t}^i);
  a_{y_t}^i ← sat_{\underline{a}_m,\overline{a}_m}(a_{y_t}^i);
  s_{t+1} ← updated state using (12);
  vc1 ← all vehicles colliding with street boundaries;
  speed elements of vc1 in s_{t+1} ← 0;
  location elements of vc1 in s_{t+1} ← closest point on the street boundary, with a small margin;
  vc2 ← all vehicles colliding with some other vehicle;
  speed elements of vc2 in s_{t+1} ← 0;
  location elements of vc2 in s_{t+1} ← location pushed back to a distance of 2× the safe radius from the collided vehicle;
  return s_{t+1}
    \underline{a}_m ≤ a_{x_t}^i ≤ \overline{a}_m    (1 ≤ i ≤ n)    (19)
    \underline{a}_m ≤ a_{y_t}^i ≤ \overline{a}_m    (1 ≤ i ≤ n)    (20)
    −⌊N/2⌋ ≤ r_t^i ≤ ⌊N/2⌋    (1 ≤ i ≤ n, r_t^i ∈ Z)    (21)
    −⌊M/2⌋ ≤ c_t^i ≤ ⌊M/2⌋    (1 ≤ i ≤ n, c_t^i ∈ Z)    (22)
    x_0^i = s_x^i,  y_0^i = s_y^i    (1 ≤ i ≤ n)    (23)
    x_{T−1}^i = d_x^i,  y_{T−1}^i = d_y^i    (1 ≤ i ≤ n)    (24)
    v_{x_0}^i = 0,  v_{y_0}^i = 0    (1 ≤ i ≤ n)    (25)
    x_{t+1}^i = x_t^i + v_{x_t}^i · h    (1 ≤ i ≤ n)    (26)
    y_{t+1}^i = y_t^i + v_{y_t}^i · h    (1 ≤ i ≤ n)    (27)
    v_{x_{t+1}}^i = v_{x_t}^i + a_{x_t}^i · h    (1 ≤ i ≤ n)    (28)
    v_{y_{t+1}}^i = v_{y_t}^i + a_{y_t}^i · h    (1 ≤ i ≤ n)    (29)
    (x_t^i − x_t^j)^2 + (y_t^i − y_t^j)^2 ≥ (2R)^2    (1 ≤ i < j ≤ n)    (30)
    |x_t^i − c_t^i · b_w| ≤ (l/2 − R)  or  |y_t^i − r_t^i · b_h| ≤ (l/2 − R)    (1 ≤ i ≤ n)    (31)
where r_t^i and c_t^i are the row number and column number of vehicle i at time t, respectively, assuming the zone is a
perfect rectangular grid; N and M are the number of rows and
columns, respectively; bw and bh are block width and block
height; l is the street width; R is the vehicle clearance radius;
T is number of sampling times; and (six , siy ) is the source
coordinates of vehicle i.
In the above mentioned problem setting, (15) to (20) are
the physical limit constraints. (23) to (25) describe the initial
and final conditions. (26) to (29) are dynamic constraints. (30)
is the vehicle-to-vehicle collision avoidance constraint and
finally (31) is the vehicle-to-boundaries collision avoidance
constraint.
Fig. 4: Learnt policy by (left) the RL agent and (right) the baseline method for the small example.
Fig. 5: Learnt policy by the AIM agent at different iterations of the training. Left: beginning of the training; middle: just after the fast learning phase in Fig. 7; right: end of the training.
The basic problem with the above formulation is that
constraint (30) leads to a non-convex function and convex
optimization algorithms cannot solve this problem. Therefore,
a Mixed-Integer Nonlinear Programming (MINLP) algorithm
should be used to solve this problem. Our experiments
show that even a small-sized problem with two vehicles and
2×2 grid cannot be solved with an MINLP algorithm, i.e.
AOA[11]. To overcome this issue, we should reformulate the
optimization problem using 1-norm and introduce new integer
variables for the distance between vehicles using the ideas
proposed in [12].
To achieve the best convergence and execution time by
using a Mixed-integer Quadratic Programming (MIQP), the
cost function and all constraints should be linear or quadratic.
Furthermore, the “or” logic in (31) should be implemented
using integer variables. The full MIQP problem can be written
as the following equations and inequalities:
    a_t^* = arg min_{a_t} Σ_{t=0}^{T−1} Σ_{i=1}^{n} (x_t^i − d_x^i)^2 + (y_t^i − d_y^i)^2    (32)
s. t.
    (15) to (29)    (33)
    bx_t^i, by_t^i ∈ {0, 1},   bx_t^i + by_t^i ≥ 1    (1 ≤ i ≤ n)    (34)
    cx_t^{i,j}, cy_t^{i,j}, dx_t^{i,j}, dy_t^{i,j} ∈ {0, 1}    (1 ≤ i < j ≤ n)    (35)
    cx_t^{i,j} + cy_t^{i,j} + dx_t^{i,j} + dy_t^{i,j} ≥ 1    (1 ≤ i < j ≤ n)    (36)
    x_t^i − x_t^j ≥ 2R·cx_t^{i,j} − M(1 − cx_t^{i,j})    (1 ≤ i < j ≤ n)    (37)
    y_t^i − y_t^j ≥ 2R·cy_t^{i,j} − M(1 − cy_t^{i,j})    (1 ≤ i < j ≤ n)    (38)
    x_t^i − x_t^j ≤ −2R·dx_t^{i,j} + M(1 − dx_t^{i,j})    (1 ≤ i < j ≤ n)    (39)
    y_t^i − y_t^j ≤ −2R·dy_t^{i,j} + M(1 − dy_t^{i,j})    (1 ≤ i < j ≤ n)    (40)
    x_t^i − c_t^i·b_w ≤ (l/2 − R)·bx_t^i + M(1 − bx_t^i)    (1 ≤ i ≤ n)    (41)
    x_t^i − c_t^i·b_w ≥ −(l/2 − R)·bx_t^i − M(1 − bx_t^i)    (1 ≤ i ≤ n)    (42)
    y_t^i − r_t^i·b_h ≤ (l/2 − R)·by_t^i + M(1 − by_t^i)    (1 ≤ i ≤ n)    (43)
    y_t^i − r_t^i·b_h ≥ −(l/2 − R)·by_t^i − M(1 − by_t^i)    (1 ≤ i ≤ n)    (44)
where M is a large positive number.
(37) to (40) represent the vehicle-to-vehicle collision avoidance constraint using the 1-norm:
    ||(x_t^i, y_t^i)^T − (x_t^j, y_t^j)^T||_1 ≥ 2R    (45)
for any two distinct vehicles i and j. This constraint is
equivalent to the following:
|xit − xjt | ≥ 2R or |yti − ytj | ≥ 2R
∀t, (1 ≤ i < j ≤ n)
(46)
The absolute value function displayed in (46) should be replaced by logical “or” of two linear conditions to avoid nonlinearity. Therefore we have the following four constraints represented by (37) to (40):
    x_t^i − x_t^j ≥ 2R  or  x_t^i − x_t^j ≤ −2R  or  y_t^i − y_t^j ≥ 2R  or  y_t^i − y_t^j ≤ −2R,   ∀t, (1 ≤ i < j ≤ n)    (47)
(35) implements the “or” logic required in (47).
(41) to (44) describe the vehicle-to-boundaries collision
avoidance constraint:
    |x_t^i − c_t^i·b_w| ≤ (l/2 − R)·bx_t^i   or   |y_t^i − r_t^i·b_h| ≤ (l/2 − R)·by_t^i,   ∀t, (1 ≤ i ≤ n)    (48)
which is equivalent to:
    (x_t^i − c_t^i·b_w ≤ (l/2 − R)·bx_t^i  and  x_t^i − c_t^i·b_w ≥ −(l/2 − R)·bx_t^i)   or
    (y_t^i − r_t^i·b_h ≤ (l/2 − R)·by_t^i  and  y_t^i − r_t^i·b_h ≥ −(l/2 − R)·by_t^i),   ∀t, (1 ≤ i ≤ n)    (49)
The “or” logic in this constraint is realized in (34).
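The big-M construction can be sanity-checked numerically. The small sketch below (illustrative values only, not part of the paper's solver setup) evaluates (37)-(40) for one vehicle pair and one choice of the binary variables, showing that setting one of them to 1 activates the corresponding separation condition while the others remain slack:

def big_m_vehicle_constraints(xi, xj, yi, yj, cx, cy, dx, dy, R=0.02, M=1e3):
    # Big-M form of (37)-(40) for one vehicle pair at one time step. With
    # cx + cy + dx + dy >= 1, at least one inequality is active, which enforces
    # |x_i - x_j| >= 2R or |y_i - y_j| >= 2R.
    return [
        xi - xj >= 2 * R * cx - M * (1 - cx),    # active when cx = 1
        yi - yj >= 2 * R * cy - M * (1 - cy),    # active when cy = 1
        xi - xj <= -2 * R * dx + M * (1 - dx),   # active when dx = 1
        yi - yj <= -2 * R * dy + M * (1 - dy),   # active when dy = 1
    ]

# Two vehicles separated only in y: choosing cy = 1 satisfies every constraint.
print(all(big_m_vehicle_constraints(0.5, 0.5, 0.9, 0.5, cx=0, cy=1, dx=0, dy=0)))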
We will show in the next subsection that the explained
conventional optimization formulation is not feasible except
for very small-sized problems. Another limitation that makes
the conventional method impractical is that this formulation
works only for a perfect rectangular grid. However, the
proposed RL method in this paper can be extended to arbitrary
street layouts.
B. Simulation results
The implementation of the TRPO in rllab library [4] is used
to simulate the RL formulation of the AIM problem described
in Section IV. For this purpose, the AIM state transition and
reward calculation are implemented as an OpenAI Gym [13]
environment.
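Such an environment follows the standard (classic) Gym interface. The skeleton below only shows the observation and action spaces implied by (9) and (10), with the dynamics and reward left as placeholders; it is an assumed sketch, not the code used in the paper:

import numpy as np
import gym
from gym import spaces

class IntersectionEnv(gym.Env):
    # Zone environment skeleton: observation is the stacked position/speed vector
    # of eq. (9) and the action is the acceleration vector of eq. (10).

    def __init__(self, n_vehicles=7, a_max=30.0):
        self.n = n_vehicles
        self.observation_space = spaces.Box(low=-np.inf, high=np.inf,
                                            shape=(4 * self.n,), dtype=np.float32)
        self.action_space = spaces.Box(low=-a_max, high=a_max,
                                       shape=(2 * self.n,), dtype=np.float32)
        self.state = None

    def reset(self):
        self.state = np.zeros(4 * self.n, dtype=np.float32)   # place vehicles at their sources here
        return self.state

    def step(self, action):
        # Apply eq. (12) plus the collision heuristic, then compute the reward of eq. (11).
        next_state = self.state                                # placeholder dynamics
        reward, done = 0.0, False
        self.state = next_state
        return next_state, reward, done, {}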
The neural network used to approximate the policy is an
MLP which consists of three hidden layers. Each hidden layer
has 100 nodes (Fig. 6). Table I lists the parameters for the
simulation. To speed up simulation, normalized units are used
for the physical properties of the environment instead of real-world quantities.
Fig. 3 shows the small and large grid plans used for the
simulation. The small and large circles represent the source
and destination locations, respectively. The vehicles are placed
at the intersections randomly at the beginning of each episode.
TABLE I: Parameter value settings for the experiment

Parameter                              Value
discount factor (γ)                    0.999
distance reward penalty factor (α)     0.1
distance tolerance (η)                 0.05
maximum speed (v_m)                    0.8
maximum acceleration (a_m)             30
sampling time (h)                      10 (ms)
maximum episode length                 200
vehicle safe radius                    0.02
Fig. 6: Neural network used in the simulations (input of 4×7 units, three hidden layers of 100 units each, output of 2×7 units).
TABLE II: Total travel time obtained by the baseline method and the proposed method

                  Baseline Method    Proposed Method
Small example     1.79 (s)           2.43 (s)
Large example     no solution        11.2 (s)
The destinations are also chosen randomly. When the simulator
is reset, the same set of source and destination locations are
used.
The small grid can be solved both by the baseline method
and by the proposed RL method. However, the large grid
can only be solved by the RL method because the MIQP
algorithm could not find a feasible solution (which need not even be optimal) and was stopped after around 68 hours
and using 21 GB of system memory. On the other hand, the
RL method can solve the problem using 560 MB of system
memory and 101 MB of GPU memory.
Table II and Fig. 4 show the comparison of proposed RL
and baseline method results. In Table II the total travel time
is provided for both methods and Fig. 4 shows the vehicles’
trajectories by running the navigation policy obtained by both
solutions for the small examples.
The learning curve of the RL agent which is the expected
return vs the training epoch number is shown in Fig. 7 for the
large grid example. This figure shows that the learning rate is
higher at the beginning which corresponds to the stage where
in the agent is learning the very basics of driving and avoiding
collisions, but improving the policy towards the optimal policy
takes considerably more time. The increase in learning occurs
after two epochs when the agent discovers the policy that
successfully drives all the vehicles to the destination and the
positive terminal reward is gained. Moreover, the trajectories
of vehicles are depicted in Fig. 5 at three stages of the learning
process, i.e. at the early stage, at epoch 2 where the learning
curve slope suddenly decreases, and the end of the training.
The total number of “near collision” incidents discussed in
Section IV is shown in Fig. 8. Fig. 9 shows the total travel
time as a function of training iteration.
VI. CONCLUSION
In this paper, we have shown that Deep RL can be a
promising solution for the problem of intelligent intersection
management in local road settings where the number of
vehicles is limited and fine-grained acceleration control and
Fig. 7: Learning curve of the AIM agent for large grid example.
The discounted return is always a negative value because the return
is the accumulated negative distance rewards and there is only one
positive reward in the terminal state in which all the vehicles are at
the destinations.
Fig. 9: Total travel time of all the vehicles vs training iteration
number.
Fig. 8: Number of near collision incidents vs training iteration
number.
motion planning can lead to a more efficient navigation of
the autonomous vehicles. We proposed an RL environment
definition in which collisions are avoided using a safety
mechanism. Using this method instead of large penalties for
collision in the reward function, the agent can learn the optimal
policy faster and the learned policy can be used in practice
where the safety mechanism is actually implemented. The
experiments show that the conventional optimization methods
are not able to solve the problem with the sizes that are
solvable by the proposed method.
Similar to the learning process of human beings, the main
benefit of the RL approach is that an explicit mathematical
modeling of the system is not required and, more importantly,
the challenging task of control design for a complex system is
eliminated. However, since the automotive systems demand
a high safety requirement, training of the RL agent using
a simulation model is inevitable in most cases. However,
developing a simulation model for a system is considerably
simpler task compared to explicit modeling especially for
systems with uncertainty.
While the work at hand is a promising first step towards
using RL in autonomous intersection management, a number
of potential improvements can be mentioned that might be
interesting to address in future work. First, the possibility
of developing pre-trained DNNs similar to the works in
[1] M. Hausknecht, T.-C. Au, and P. Stone, “Autonomous intersection
management: Multi-intersection optimization,” in 2011 IEEE/RSJ
International Conference on Intelligent Robots and Systems. IEEE,
2011, pp. 4581–4586.
[2] K. Dresner and P. Stone, “A multiagent approach to autonomous
intersection management,” Journal of artificial intelligence research,
vol. 31, pp. 591–656, 2008.
[3] C. Frese and J. Beyerer, “A comparison of motion planning algorithms
for cooperative collision avoidance of multiple cognitive automobiles,”
in Intelligent Vehicles Symposium (IV), 2011 IEEE. IEEE, 2011, pp.
1156–1162.
[4] Y. Duan, X. Chen, R. Houthooft, J. Schulman, and P. Abbeel,
“Benchmarking deep reinforcement learning for continuous control,”
arXiv preprint arXiv:1604.06778, 2016.
[5] J. Schulman, S. Levine, P. Moritz, M. I. Jordan, and P. Abbeel, “Trust
region policy optimization,” CoRR, abs/1502.05477, 2015.
[6] M. N. Mladenović and M. M. Abbas, “Self-organizing control
framework for driverless vehicles,” in 16th International IEEE
Conference on Intelligent Transportation Systems (ITSC 2013). IEEE,
2013, pp. 2076–2081.
[7] I. H. Zohdy, R. K. Kamalanathsharma, and H. Rakha, “Intersection
management for autonomous vehicles using icacc,” in 2012 15th
International IEEE Conference on Intelligent Transportation Systems.
IEEE, 2012, pp. 1109–1114.
[8] A. A. Malikopoulos, C. G. Cassandras, and Y. J. Zhang, “A decentralized
optimal control framework for connected and automated vehicles at
urban intersections,” arXiv preprint arXiv:1602.03786, 2016.
[9] R. S. Sutton and A. G. Barto, Reinforcement learning: An introduction.
MIT press Cambridge, 1998, vol. 1, no. 1.
[10] B. Karlik and A. V. Olgac, “Performance analysis of various
activation functions in generalized mlp architectures of neural networks,”
International Journal of Artificial Intelligence and Expert Systems,
vol. 1, no. 4, pp. 111–122, 2011.
[11] M. Hunting, “The aimms outer approximation algorithm for minlp,”
Paragon Decision Technology, Haarlem, 2011.
[12] T. Schouwenaars, B. De Moor, E. Feron, and J. How, “Mixed integer
programming for multi-vehicle path planning,” in Control Conference
(ECC), 2001 European. IEEE, 2001, pp. 2603–2608.
[13] G. Brockman, V. Cheung, L. Pettersson, J. Schneider, J. Schulman,
J. Tang, and W. Zaremba, “Openai gym,” 2016.
Cluster-Seeking James-Stein Estimators
arXiv:1602.00542v4 [] 16 Mar 2018
K. Pavan Srinath and Ramji Venkataramanan
Abstract—This paper considers the problem of estimating a
high-dimensional vector of parameters θ ∈ Rn from a noisy
observation. The noise vector is i.i.d. Gaussian with known
variance. For a squared-error loss function, the James-Stein (JS)
estimator is known to dominate the simple maximum-likelihood
(ML) estimator when the dimension n exceeds two. The JS-estimator shrinks the observed vector towards the origin, and
the risk reduction over the ML-estimator is greatest for θ that lie
close to the origin. JS-estimators can be generalized to shrink the
data towards any target subspace. Such estimators also dominate
the ML-estimator, but the risk reduction is significant only when
θ lies close to the subspace. This leads to the question: in
the absence of prior information about θ, how do we design
estimators that give significant risk reduction over the ML-estimator for a wide range of θ?
In this paper, we propose shrinkage estimators that attempt
to infer the structure of θ from the observed data in order
to construct a good attracting subspace. In particular, the
components of the observed vector are separated into clusters,
and the elements in each cluster shrunk towards a common
attractor. The number of clusters and the attractor for each
cluster are determined from the observed vector. We provide
concentration results for the squared-error loss and convergence
results for the risk of the proposed estimators. The results
show that the estimators give significant risk reduction over the
ML-estimator for a wide range of θ, particularly for large n.
Simulation results are provided to support the theoretical claims.
Index Terms—High-dimensional estimation, Large deviations
bounds, Loss function estimates, Risk estimates, Shrinkage estimators
This work was supported in part by a Marie Curie Career Integration Grant (Grant Agreement No. 631489) and an Early Career Grant from the Isaac Newton Trust. This paper was presented in part at the 2016 IEEE International Symposium on Information Theory. K. P. Srinath and R. Venkataramanan are with the Department of Engineering, University of Cambridge, Cambridge CB2 1PZ, UK (e-mail: {pk423, rv285}@cam.ac.uk).

I. INTRODUCTION

Consider the problem of estimating a vector of parameters θ ∈ R^n from a noisy observation y of the form
    y = θ + w.    (1)
The noise vector w ∈ R^n is distributed as N(0, σ²I), i.e., its components are i.i.d. Gaussian random variables with mean zero and variance σ². We emphasize that θ is deterministic, so the joint probability density function of y = [y_1, . . . , y_n]^T for a given θ is
    p_θ(y) = (2πσ²)^{−n/2} exp(−||y − θ||²/(2σ²)).
The performance of an estimator θ̂ is measured using the squared-error loss function given by
    L(θ, θ̂(y)) := ||θ̂(y) − θ||²,
where ||·|| denotes the Euclidean norm. The risk of the estimator for a given θ is the expected value of the loss function:
    R(θ, θ̂) := E[||θ̂(y) − θ||²],
where the expectation is computed using the density in (1). The normalized risk is R(θ, θ̂)/n.
Applying the maximum-likelihood (ML) criterion to (1) yields the ML-estimator θ̂_ML = y. The ML-estimator is an unbiased estimator, and its risk is R(θ, θ̂_ML) = nσ². The goal of this paper is to design estimators that give significant risk reduction over θ̂_ML for a wide range of θ, without any prior assumptions about its structure.
In 1961 James and Stein published a surprising result [1], proposing an estimator that uniformly achieves lower risk than θ̂_ML for any θ ∈ R^n, for n ≥ 3. Their estimator θ̂_JS is given by
    θ̂_JS = (1 − (n − 2)σ²/||y||²) y,    (2)
and its risk is [2, Chapter 5, Thm. 5.1]
    R(θ, θ̂_JS) = nσ² − (n − 2)²σ⁴ E[1/||y||²].    (3)
Hence for n ≥ 3,
    R(θ, θ̂_JS) < R(θ, θ̂_ML) = nσ², ∀θ ∈ R^n.    (4)
An estimator θ̂_1 is said to dominate another estimator θ̂_2 if
R(θ, θ̂1 ) ≤ R(θ, θ̂2 ), ∀θ ∈ Rn ,
with the inequality being strict for at least one θ. Thus (4) implies that the James-Stein estimator (JS-estimator) dominates
the ML-estimator. Unlike the ML-estimator, the JS-estimator
is non-linear and biased. However, the risk reduction over the
ML-estimator can be significant, making it an attractive option
in many situations — see, for example, [3].
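To make the comparison concrete, here is a minimal numerical sketch (ours, not part of the original paper; the function names and the example θ are illustrative) of the ML-estimator and the JS-estimator of (2), together with their empirical normalized losses on one noisy observation.

```python
import numpy as np

def ml_estimate(y):
    # ML-estimator: simply return the observation.
    return y

def js_estimate(y, sigma2):
    # James-Stein estimator (2): shrink the observation towards the origin.
    n = y.size
    shrinkage = 1.0 - (n - 2) * sigma2 / np.sum(y ** 2)
    return shrinkage * y

rng = np.random.default_rng(0)
n, sigma2 = 100, 1.0
theta = rng.normal(0.0, 0.5, size=n)                      # illustrative parameter vector
y = theta + rng.normal(0.0, np.sqrt(sigma2), size=n)      # observation y = theta + w

print(np.sum((ml_estimate(y) - theta) ** 2) / n)          # normalized loss of the ML-estimator
print(np.sum((js_estimate(y, sigma2) - theta) ** 2) / n)  # typically smaller, as (3)-(4) suggest on average
```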
By evaluating the expression in (3), it can be shown that
the risk of the JS-estimator depends on θ only via kθk [1].
Further, the risk decreases as kθk decreases. (For intuition
about this, note in (3) that for large n, kyk2 ≈ nσ 2 + kθk2 .)
The dependence of the risk on kθk is illustrated in Fig. 1,
where the average loss of the JS-estimator is plotted versus
kθk, for two different choices of θ.
The JS-estimator in (2) shrinks each element of y towards
the origin. Extending this idea, JS-like estimators can be
defined by shrinking y towards any vector, or more generally,
towards a target subspace V ⊂ Rn . Let PV (y) denote the projection of y onto V, so that ky−PV (y)k2 = minv∈V ky−vk2 .
Then the JS-estimator that shrinks y towards the subspace V
(Figure 1: two panels (a) and (b); y-axis R̃(θ, θ̂)/n, x-axis ‖θ‖; curves for the regular JS-estimator, the positive-part JS-estimator, Lindley's estimator, the positive-part Lindley estimator, and the ML-estimator.)
Fig. 1: Comparison of the average normalized loss of the regular JS-estimator, Lindley's estimator, and their positive-part versions for n = 10 as a function of ‖θ‖. The loss of the ML-estimator is σ² = 1. In (a), θi = ‖θ‖/√10 for i = 1, · · · , 5, and θi = −‖θ‖/√10 for i = 6, · · · , 10. In (b), θi = ‖θ‖/√10 for all i.
is

    θ̂ = PV(y) + [ 1 − (n − d − 2)σ²/‖y − PV(y)‖² ] (y − PV(y)),   (5)

where d is the dimension of V.¹ A classic example of such an estimator is Lindley's estimator [4], which shrinks y towards the one-dimensional subspace defined by the all-ones vector 1. It is given by

    θ̂L = ȳ1 + ( 1 − (n − 3)σ²/‖y − ȳ1‖² ) (y − ȳ1),   (6)

where ȳ := (1/n) Σ_{i=1}^n yi is the empirical mean of y.
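The subspace version (5) and Lindley's special case (6) translate directly into code. The sketch below is ours and only illustrates the construction; the basis matrix, the function names, and the example θ are assumptions made for the illustration.

```python
import numpy as np

def subspace_js(y, basis, sigma2):
    """JS-type estimator of (5): shrink y towards its projection onto the
    column space of `basis` (an n x d matrix of full column rank)."""
    n, d = basis.shape
    coeffs = np.linalg.lstsq(basis, y, rcond=None)[0]
    projection = basis @ coeffs                  # P_V(y)
    residual = y - projection
    shrinkage = 1.0 - (n - d - 2) * sigma2 / np.sum(residual ** 2)
    return projection + shrinkage * residual

def lindley(y, sigma2):
    """Lindley's estimator (6): the target subspace is spanned by the
    all-ones vector, so the projection of y is its empirical mean times 1."""
    return subspace_js(y, np.ones((y.size, 1)), sigma2)

rng = np.random.default_rng(1)
theta = np.full(50, 3.0)                         # all components clustered around 3
y = theta + rng.normal(0.0, 1.0, size=50)
print(np.sum((lindley(y, 1.0) - theta) ** 2) / 50)   # well below sigma^2 = 1
```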
It can be shown that the different variants of the JS-estimator
such as (2), (5), (6) all dominate the ML-estimator.² Further,
all JS-estimators share the following key property [6]–[8]:
the smaller the Euclidean distance between θ and the
attracting vector, the smaller the risk.
Throughout this paper, the term “attracting vector” refers
to the vector that y is shrunk towards. For θ̂JS in (2), the
attracting vector is 0, and the risk reduction over θ̂M L is
larger when ‖θ‖ is close to zero. Similarly, if the components of θ are clustered around some value c, a JS-estimator with attracting vector c1 would give significant risk reduction over θ̂ML. One motivation for Lindley's estimator in (6) comes from a guess that the components of θ are close to its empirical mean θ̄ — since we do not know θ̄, we approximate it by ȳ and use the attracting vector ȳ1.

¹ The dimension n has to be greater than d + 2 for the estimator to achieve lower risk than θ̂ML.
² The risks of JS-estimators of the form (5) can usually be computed using Stein's lemma [5], which states that E[Xg(X)] = E[g′(X)], where X is a standard normal random variable, and g a weakly differentiable function.
Fig. 1 shows how the performance of θ̂JS and θ̂L depends
on the structure of θ. In the left panel of the figure, the empirical mean θ̄ is always 0, so the risks of both estimators increase
monotonically with ‖θ‖. In the right panel, all the components of θ are equal to θ̄. In this case, the distance from the attracting vector for θ̂L is ‖θ − ȳ1‖ = √( (Σ_{i=1}^n wi)² / n ), so
the risk does not vary with kθk; in contrast the risk of θ̂JS
increases with kθk as its attracting vector is 0.
The risk reduction obtained by using a JS-like shrinkage estimator over θ̂M L crucially depends on the choice of attracting
vector. To achieve significant risk reduction for a wide range
of θ, in this paper, we infer the structure of θ from the data
y and choose attracting vectors tailored to this structure. The
idea is to partition y into clusters, and shrink the components
in each cluster towards a common element (attractor). Both
the number of clusters and the attractor for each cluster are to
be determined based on the data y.
As a motivating example, consider a θ in which half the components are equal to ‖θ‖/√n and the other half are equal to −‖θ‖/√n. Fig. 1(a) shows that the risk reductions of both θ̂JS and θ̂L diminish as ‖θ‖ gets larger. This is because the empirical mean ȳ is close to zero, hence θ̂JS and θ̂L both shrink y towards 0. An ideal JS-estimator would shrink the yi's corresponding to θi = ‖θ‖/√n towards the attractor ‖θ‖/√n, and the remaining observations towards −‖θ‖/√n. Such an
estimator would give handsome gains over θ̂M L for all θ with
the above structure. On the other hand, if θ is such that all
its components are equal (to θ̄), Lindley’s estimator θ̂L is an
excellent choice, with significantly smaller risk than θ̂M L for
all values of kθk (Fig. 1(b)).
We would like an intelligent estimator that can correctly
distinguish between different θ structures (such as the two
above) and choose an appropriate attracting vector, based
only on y. We propose such estimators in Sections III and
IV. For reasonably large n, these estimators choose a good
attracting subspace tailored to the structure of θ, and use
an approximation of the best attracting vector within the
subspace.
The main contributions of our paper are as follows.
• We construct a two-cluster JS-estimator, and provide concentration results for the squared-error loss, and asymptotic convergence results for its risk. Though this estimator does not dominate the ML-estimator, it is shown to
provide significant risk reduction over Lindley’s estimator
and the regular JS-estimator when the components of θ
can be approximately separated into two clusters.
• We present a hybrid JS-estimator that, for any θ and for
large n, has risk close to the minimum of that of Lindley’s
estimator and the proposed two-cluster JS-estimator. Thus
the hybrid estimator asymptotically dominates both the
ML-estimator and Lindley’s estimator, and gives significant risk reduction over the ML-estimator for a wide
range of θ.
• We generalize the above idea to define general multiple-cluster hybrid JS-estimators, and provide concentration and convergence results for the squared-error loss and risk, respectively.
• We provide simulation results that support the theoretical results on the loss function. The simulations indicate that the hybrid estimator gives significant risk reduction over the ML-estimator for a wide range of θ even for modest values of n, e.g. n = 50. The empirical risk of the hybrid estimator converges rapidly to the theoretical value with growing n.
A. Related work
George [7], [8] proposed a “multiple shrinkage estimator”,
which is a convex combination of multiple subspace-based JS-estimators of the form (5). The coefficients defining the convex
combination give larger weight to the estimators whose target
subspaces are closer to y. Leung and Barron [9], [10] also
studied similar ways of combining estimators and their risk
properties. Our proposed estimators also seek to emulate the
best among a class of subspace-based estimators, but there
are some key differences. In [7], [8], the target subspaces are
fixed a priori, possibly based on prior knowledge about where
θ might lie. In the absence of such prior knowledge, it may not
be possible to choose good target subspaces. This motivates the
estimators proposed in this paper, which use a target subspace
constructed from the data y. The nature of clustering in θ is
inferred from y, and used to define a suitable subspace.
Another difference from earlier work is in how the attracting
vector is determined given a target subspace V. Rather than
choosing the attracting vector as the projection of y onto
V, we use an approximation of the projection of θ onto V.
This approximation is computed from y, and concentration
inequalities are provided to guarantee the goodness of the
approximation.
The risk of a JS-like estimator is typically computed using
Stein’s lemma [5]. However, the data-dependent subspaces
we use result in estimators that are hard to analyze using
this technique. We therefore use concentration inequalities to
bound the loss function of the proposed estimators. Consequently, our theoretical bounds get sharper as the dimension
n increases, but may not be accurate for small n. However,
even for relatively small n, simulations indicate that the risk
reduction over the ML-estimator is significant for a wide range
of θ.
Noting that the shrinkage factor multiplying y in (2) could
be negative, Stein proposed the following positive-part JS-estimator [1]:

    θ̂JS+ = ( 1 − (n − 2)σ²/‖y‖² )+ y,   (7)
where X+ denotes max(0, X). We can similarly define
positive-part versions of JS-like estimators such as (5) and
(6). The positive-part Lindley’s estimator is given by
(n − 3)σ 2
θ̂L+ = ȳ1 + 1 −
(y − ȳ1) .
(8)
ky − ȳ1k2 +
Baranchik [11] proved that θ̂JS+ dominates θ̂JS , and his result
also proves that θ̂L+ dominates θ̂L . Estimators that dominate
θ̂JS+ are discussed in [12], [13]. Fig. 1 shows that the positive-part versions can give noticeably lower loss than the regular
JS and Lindley estimators. However, for large n, the shrinkage
factor is positive with high probability, hence the positive-part
estimator is nearly always identical to the regular JS-estimator.
Indeed, for large n, ‖y‖²/n ≈ ‖θ‖²/n + σ², and the shrinkage factor is

    1 − (n − 2)σ²/‖y‖² ≈ 1 − (n − 2)σ²/(‖θ‖² + nσ²) > 0.
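For completeness, a short sketch (ours, purely illustrative) of the positive-part variants (7) and (8); the only change from the plain versions is that the shrinkage factor is clipped at zero.

```python
import numpy as np

def js_positive_part(y, sigma2):
    # Positive-part JS-estimator (7).
    n = y.size
    factor = max(0.0, 1.0 - (n - 2) * sigma2 / np.sum(y ** 2))
    return factor * y

def lindley_positive_part(y, sigma2):
    # Positive-part Lindley estimator (8).
    n = y.size
    ybar = y.mean()
    residual = y - ybar
    factor = max(0.0, 1.0 - (n - 3) * sigma2 / np.sum(residual ** 2))
    return ybar + factor * residual
```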
We analyze the positive-part version of the proposed hybrid
estimator using concentration inequalities. Though we cannot
guarantee that the hybrid estimator dominates the positive-part JS or Lindley estimators for any finite n, we show that
for large n, the loss of the hybrid estimator is equal to the
minimum of that of the positive-part Lindley’s estimator and
the cluster-based estimator with high probability (Theorems 3
and 4).
The rest of the paper is organized as follows. In Section
II, a two-cluster JS-estimator is proposed and its performance
analyzed. Section III presents a hybrid JS-estimator along
with its performance analysis. General multiple-attractor JS-estimators are discussed in Section IV, and simulation results
to corroborate the theoretical analysis are provided in Section
V. The proofs of the main results are given in Section VI.
Concluding remarks and possible directions for future research
constitute Section VII.
B. Notation
Bold lowercase letters are used to denote vectors, and plain
lowercase letters for their entries. For example, the entries of
y ∈ Rn are yi , i = 1, · · · , n. All vectors have length n and
are column vectors, unless otherwise mentioned. For vectors
y, z ∈ Rn , hy, zi denotes their Euclidean inner product. The
all-zero vector and the all-one vector of length n are denoted
by 0 and 1, respectively. The complement of a set A is
denoted by Ac . For a finite set A with real-valued elements,
min(A) denotes the minimum of the elements in A. We
use 1{E} to denote the indicator function of an event E. A
central chi-squared distributed random variable with n degrees
of freedom
Xn2 . The Q-function is given by
R ∞ is 1denoted by
2
x
Q(x) = x √2π exp(− 2 )dx, and Qc (x) := 1 − Q(x). For a
random variable X, X+ denotes max(0, X). For real-valued
functions f (x) and g(x), the notation f (x) = o(g(x)) means
that limx→0 [f (x)/g(x)] = 0, and f (x) = O(g(x)) means that
limx→∞ [f (x)/g(x)] = c for some positive constant c.
For a sequence of random variables {Xn}_{n=1}^∞, Xn →(P) X, Xn →(a.s.) X, and Xn →(L1) X respectively denote convergence in probability, almost sure convergence, and convergence in L1 norm to the random variable X.

We use the following shorthand for concentration inequalities. Let {Xn(θ), θ ∈ Rⁿ}_{n=1}^∞ be a sequence of random variables. The notation Xn(θ) ≐ X, where X is either a random variable or a constant, means that for any ε > 0,

    P( |Xn(θ) − X| ≥ ε ) ≤ K e^{ −nk min(ε²,1) / max(‖θ‖²/n, 1) },   (9)
where K and k are positive constants that do not depend on
n or θ. The exact values of K and k are not specified.
The shrinkage estimators we propose have the general form

    θ̂ = ν + [ 1 − nσ²/‖y − ν‖² ]+ (y − ν).
For 1 ≤ i ≤ n, the ith component of the attracting vector ν
is the attractor for yi (the point towards which it is shrunk).
II. A TWO - CLUSTER JAMES -S TEIN ESTIMATOR
Recall the example in Section I where θ has half its components equal to ‖θ‖/√n, and the other half equal to −‖θ‖/√n. Ideally, we would like to shrink the yi's corresponding to the first group towards ‖θ‖/√n, and the remaining points towards −‖θ‖/√n. However, without an oracle, we cannot accurately
guess which point each yi should be shrunk towards. We would
like to obtain an estimator that identifies separable clusters in
y, constructs a suitable attractor for each cluster, and shrinks
the yi in each cluster towards its attractor.
We start by dividing the observed data into two clusters
based on a separating point sy, which is obtained from y. A natural choice for sy would be the empirical mean θ̄; since this is unknown, we use sy = ȳ. Define the clusters
C1 := {yi , 1 ≤ i ≤ n | yi > ȳ},
C2 := {yi , 1 ≤ i ≤ n | yi ≤ ȳ}.
The points in C1 and C2 will be shrunk towards attractors
a1 (y) and a2 (y), respectively, where a1 , a2 : Rn → R are
defined in (21) later in this section. For brevity, we henceforth
do not indicate the dependence of the attractors on y. Thus
the attracting vector is

    ν2 := a1 [1{y1>ȳ}, 1{y2>ȳ}, · · · , 1{yn>ȳ}]ᵀ + a2 [1{y1≤ȳ}, 1{y2≤ȳ}, · · · , 1{yn≤ȳ}]ᵀ,   (10)

with a1 and a2 defined in (21). The proposed estimator is

    θ̂JS2 = ν2 + [ 1 − nσ²/‖y − ν2‖² ]+ (y − ν2)
         = ν2 + ( 1 − σ²/g(‖y − ν2‖²/n) ) (y − ν2),   (11)
where the function g is defined as

    g(x) := max(σ², x),  x ∈ R.   (12)

The attracting vector ν2 in (10) lies in a two-dimensional subspace defined by the orthogonal vectors [1{y1>ȳ}, · · · , 1{yn>ȳ}]ᵀ and [1{y1≤ȳ}, · · · , 1{yn≤ȳ}]ᵀ. To derive the values of a1 and a2 in (10), it is useful to compare ν2 to the attracting vector of Lindley's estimator in (6). Recall that Lindley's attracting vector lies in the one-dimensional subspace spanned by 1. The vector lying in this subspace that is closest in Euclidean distance to θ is its projection θ̄1. Since θ̄ is unknown, we use the approximation ȳ to define the attracting vector ȳ1.

Analogously, the vector in the two-dimensional subspace defined by (10) that is closest to θ is the projection of θ onto this subspace. Computing this projection, the desired values for a1, a2 are found to be

    a1^des = Σ_{i=1}^n θi 1{yi>ȳ} / Σ_{i=1}^n 1{yi>ȳ},   a2^des = Σ_{i=1}^n θi 1{yi≤ȳ} / Σ_{i=1}^n 1{yi≤ȳ}.   (13)

As the θi's are not available, we define the attractors a1, a2 as approximations of a1^des, a2^des, obtained using the following concentration results.

Lemma 1. We have

    (1/n) Σ_{i=1}^n yi 1{yi>ȳ} ≐ (1/n) Σ_{i=1}^n θi 1{yi>ȳ} + (σ/(n√(2π))) Σ_{i=1}^n e^{−(θ̄−θi)²/(2σ²)},   (14)

    (1/n) Σ_{i=1}^n yi 1{yi≤ȳ} ≐ (1/n) Σ_{i=1}^n θi 1{yi≤ȳ} − (σ/(n√(2π))) Σ_{i=1}^n e^{−(θ̄−θi)²/(2σ²)},   (15)

    (1/n) Σ_{i=1}^n θi 1{yi>ȳ} ≐ (1/n) Σ_{i=1}^n θi Q((θ̄−θi)/σ),   (16)

    (1/n) Σ_{i=1}^n θi 1{yi≤ȳ} ≐ (1/n) Σ_{i=1}^n θi Qᶜ((θ̄−θi)/σ),   (17)

    P( | (1/n) Σ_{i=1}^n 1{yi>ȳ} − (1/n) Σ_{i=1}^n Q((θ̄−θi)/σ) | ≥ ε ) ≤ K e^{−nkε²},   (18)

    P( | (1/n) Σ_{i=1}^n 1{yi≤ȳ} − (1/n) Σ_{i=1}^n Qᶜ((θ̄−θi)/σ) | ≥ ε ) ≤ K e^{−nkε²},   (19)

where Qᶜ((θ̄−θi)/σ) := 1 − Q((θ̄−θi)/σ). Recall from Section I-B that the symbol ≐ is shorthand for a concentration inequality of the form (9).

The proof is given in Appendix D.

Using Lemma 1, we can obtain estimates for a1^des, a2^des in (13) provided we have an estimate for the term (σ/(n√(2π))) Σ_{i=1}^n e^{−(θ̄−θi)²/(2σ²)}. This is achieved via the following concentration result.

Lemma 2. Fix δ > 0. Then for any ε > 0, we have

    P( | (σ²/(2nδ)) Σ_{i=1}^n 1{|yi−ȳ|≤δ} − (σ/(n√(2π))) Σ_{i=1}^n e^{−(θ̄−θi)²/(2σ²)} + κn δ | ≥ ε ) ≤ 10 e^{−nkε²},   (20)

where k is a positive constant and |κn| ≤ 1/√(2πe).

The proof is given in Appendix E.
Note 1. Henceforth in this paper, κn is used to denote a
generic bounded constant (whose exact value is not needed)
that is a coefficient of δ in expressions of the form f (δ) =
a + κn δ + o(δ) where a is some constant. As an example
to illustrate its usage, let f(δ) = 1/(a + bδ), where a > 0 and |bδ| < a. Then we have f(δ) = 1/(a(1 + bδ/a)) = (1/a)(1 − (b/a)δ + o(δ)) = 1/a + κn δ + o(δ).
Using Lemmas 1 and 2, the two attractors are defined to be
    a1 = [ Σ_{i=1}^n yi 1{yi>ȳ} − (σ²/(2δ)) Σ_{i=1}^n 1{|yi−ȳ|≤δ} ] / Σ_{i=1}^n 1{yi>ȳ},
    a2 = [ Σ_{i=1}^n yi 1{yi≤ȳ} + (σ²/(2δ)) Σ_{i=1}^n 1{|yi−ȳ|≤δ} ] / Σ_{i=1}^n 1{yi≤ȳ}.   (21)
With δ > 0 chosen to be a small positive number, this
completes the specification of the attracting vector in (10),
and hence the two-cluster JS-estimator in (11).
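The recipe of (10), (11), and (21) is short enough to state as code. The following sketch is ours and only illustrates the construction; the guard against empty clusters and the choice δ = 5/√n (taken from Note 2 below) are implementation details, not part of the definitions.

```python
import numpy as np

def two_cluster_js(y, sigma2):
    """Illustrative two-cluster JS-estimator (11) with the attractors of (21)."""
    n = y.size
    delta = 5.0 / np.sqrt(n)                     # see Note 2 for this choice
    ybar = y.mean()
    upper = y > ybar                             # cluster C1
    lower = ~upper                               # cluster C2
    # correction term of (21), approximating the noise contribution to each cluster mean
    correction = sigma2 / (2.0 * delta) * np.sum(np.abs(y - ybar) <= delta)
    a1 = (np.sum(y[upper]) - correction) / max(upper.sum(), 1)
    a2 = (np.sum(y[lower]) + correction) / max(lower.sum(), 1)
    nu2 = np.where(upper, a1, a2)                # attracting vector (10)
    residual = y - nu2
    g = max(sigma2, np.sum(residual ** 2) / n)   # g(x) = max(sigma^2, x) from (12)
    return nu2 + (1.0 - sigma2 / g) * residual
```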
Note that ν2 , defined by (10), (21), is an approximation of the projection of θ onto the two-dimensional subspace V spanned by the vectors [1y1 >ȳ , · · · , 1yn >ȳ ]T and
[1y1 ≤ȳ , · · · , 1yn ≤ȳ ]T . We remark that ν2 , which approximates
the vector in V that is closest to θ, is distinct from the
projection of y onto V. While the analysis is easier (there
would be no terms involving δ) if ν2 were chosen to be
a projection of y (instead of θ) onto V, our numerical
simulations suggest that this choice yields significantly higher
risk. The intuition behind choosing the projection of θ onto V
is that if all the yi in a group are to be attracted to a common
point (without any prior information), a natural choice would
be the mean of the θi within the group, as in (13). This mean is determined by the term E( Σ_{i=1}^n θi 1{yi≥ȳ} ), which is different from E( Σ_{i=1}^n yi 1{yi≥ȳ} ) because

    E( Σ_{i=1}^n (yi − θi) 1{yi≥ȳ} ) = E( Σ_{i=1}^n wi 1{yi≥ȳ} ) ≠ 0.

The term involving δ in (21) approximates E( Σ_{i=1}^n wi 1{yi≥ȳ} ).
Note 2. The attracting vector ν2 is dependent not just on y but
also on δ, through the two attractors a1 and a2 . In Lemma
2, for the deviation probability in (20) to fall exponentially
in n, δ needs to be held constant and independent of n.
From a practical design point of view, what is needed is nδ² ≫ 1. Indeed, for (σ²/(2nδ)) Σ_{i=1}^n 1{|yi−ȳ|≤δ} to be a reliable approximation of the term (σ/(n√(2π))) Σ_{i=1}^n e^{−(θ̄−θi)²/(2σ²)}, it is shown in Appendix E, specifically in (100), that we need nδ² ≫ 1. Numerical experiments suggest that a value of 5/√n for δ is large enough for a good approximation.
We now present the first main result of the paper.
Theorem 1. The loss function of the two-cluster JS-estimator
in (11) satisfies the following:
(1) For any ε > 0, and for any fixed δ > 0 that is independent of n,

    P( | (1/n)‖θ − θ̂JS2‖² − min{ βn, βn σ²/(αn + σ²) } + κn δ + o(δ) | ≥ ε ) ≤ K e^{ −nk min(ε²,1) / max(‖θ‖²/n, 1) },   (22)

where αn, βn are given by (25) and (24) below, and K is a positive constant that is independent of n and δ, while k = Θ(δ²) is another positive constant that is independent of n (for a fixed δ).
(2) For a sequence of θ with increasing dimension n, if lim sup_{n→∞} ‖θ‖²/n < ∞, we have

    lim_{n→∞} [ (1/n) R(θ, θ̂JS2) − min{ βn, βn σ²/(αn + σ²) } + κn δ + o(δ) ] = 0.   (23)

The constants βn, αn are given by

    βn := ‖θ‖²/n − (c1²/n) Σ_{i=1}^n Q((θ̄−θi)/σ) − (c2²/n) Σ_{i=1}^n Qᶜ((θ̄−θi)/σ),   (24)

    αn := βn − (2σ/(n√(2π))) ( Σ_{i=1}^n e^{−(θ̄−θi)²/(2σ²)} ) (c1 − c2),   (25)

where

    c1 := Σ_{i=1}^n θi Q((θ̄−θi)/σ) / Σ_{i=1}^n Q((θ̄−θi)/σ),   c2 := Σ_{i=1}^n θi Qᶜ((θ̄−θi)/σ) / Σ_{i=1}^n Qᶜ((θ̄−θi)/σ).   (26)
The proof of the theorem is given in Section VI-B.
Remark 1. In Theorem 1, βn represents the concentrating
value for the distance between θ and the attracting vector ν2 .
(It is shown in Sec. VI-B that kθ−ν2 k2 /n concentrates around
βn +κn δ.) Therefore, the closer θ is to the attracting subspace,
the lower the normalized asymptotic risk R(θ, θ̂JS2 )/n. The
term αn + σ 2 represents the concentrating value for the
distance between y and ν2 . (It is shown in Sec. VI-B that
ky − ν2 k2 /n concentrates around αn + σ 2 + κn δ.)
Remark 2. Comparing βn in (24) and αn in (25), we note
that βn ≥ αn because
    c1 − c2 = −n Σ_{i=1}^n (θi − θ̄) Q((θi−θ̄)/σ) / [ ( Σ_{i=1}^n Q((θi−θ̄)/σ) ) ( Σ_{i=1}^n Qᶜ((θi−θ̄)/σ) ) ] ≥ 0.   (27)
To see (27), observe that in the sum in the numerator, the Q(·)
function assigns larger weight to the terms with (θi − θ̄) < 0
than to the terms with (θi − θ̄) > 0.
Furthermore, αn ≈ βn for large n if either |θi − θ̄| ≈ 0, ∀i,
or |θi − θ̄| → ∞, ∀i. In the first case, if θi = θ̄, for
i = 1, · · · , n, we get βn = αn = kθ − θ̄1k2 /n = 0.
In the second case, suppose that n1 of the θi values equal
p1 and the remaining (n − n1 ) values equal −p2 for some
p1 , p2 > 0. Then, as p1 , p2 → ∞, it can be verified that
βn → [kθk2 − n1 p21 − (n − n1 )p22 ]/n = 0. Therefore, the
asymptotic normalized risk R(θ, θ̂JS2 )/n converges to 0 in
both cases.
The proof of Theorem 1 further leads to the following
corollaries.
Corollary 1. The loss function of the positive-part JS-estimator in (7) satisfies the following:
(1) For any ε > 0,

    P( | (1/n)‖θ − θ̂JS+‖² − γn σ²/(γn + σ²) | ≥ ε ) ≤ K e^{−nk min(ε²,1)},

where γn := ‖θ‖²/n, and K and k are positive constants.
(2) For a sequence of θ with increasing dimension n, if lim sup_{n→∞} ‖θ‖²/n < ∞, we have

    lim_{n→∞} [ (1/n) R(θ, θ̂JS+) − γn σ²/(γn + σ²) ] = 0.

Note that the positive-part Lindley's estimator in (8) is essentially a single-cluster estimator which shrinks all the points towards ȳ. Henceforth, we denote it by θ̂JS1.

Corollary 2. The loss function of the positive-part Lindley's estimator in (8) satisfies the following:
(1) For any ε > 0,

    P( | (1/n)‖θ − θ̂JS1‖² − ρn σ²/(ρn + σ²) | ≥ ε ) ≤ K e^{−nk min(ε²,1)},

where K and k are positive constants, and

    ρn := ‖θ − θ̄1‖²/n.   (28)

(2) For a sequence of θ with increasing dimension n, if lim sup_{n→∞} ‖θ‖²/n < ∞, we have

    lim_{n→∞} [ (1/n) R(θ, θ̂JS1) − ρn σ²/(ρn + σ²) ] = 0.   (29)

(Figure 2: y-axis min{βn, βn/(αn + σ²)}, x-axis τ; curves for ρ = 0.1, 0.15, 0.25, 0.5, 1.)
Fig. 2: The asymptotic risk term min{βn, βn/(αn + σ²)} for the two-cluster estimator is plotted vs τ for n = 1000, σ = 1, and different values of ρ. Here, the components of θ take only two values, τ and −ρτ. The number of components taking the value τ is ⌊nρ/(1 + ρ)⌋.
Remark 3. Statement (2) of Corollary 1, which is known
in the literature [14], implies that θ̂JS+ is asymptotically
minimax over Euclidean balls. Indeed, if Θn denotes the set of
θ such that γn = kθk2 /n ≤ c2 , then Pinsker’s theorem [15,
Ch. 5] implies that the minimax risk over Θn is asymptotically (as n → ∞) equal to c²σ²/(c² + σ²).
Statement (1) of Corollary 1 and both the statements of
Corollary 2 are new, to the best of our knowledge. Comparing
Corollaries 1 and 2, we observe that ρn ≤ γn since kθ−θ̄1k ≤
‖θ‖ for all θ ∈ Rⁿ with strict inequality whenever θ̄ ≠ 0. Therefore the positive-part Lindley's estimator asymptotically dominates the positive-part JS-estimator.
It is well known that both θ̂JS+ and θ̂JS1 dominate the ML-estimator [11]. From Corollary 1, it is clear that asymptotically,
the normalized risk of θ̂JS+ is small when γn is small, i.e.,
when θ is close to the origin. Similarly, from Corollary 2, the
asymptotic normalized risk of θ̂JS1 is small when ρn is small,
which occurs when the components of θ are all very close to
the mean θ̄. It is then natural to ask if the two-cluster estimator
θ̂JS2 dominates θ̂M L , and when its asymptotic normalized risk
is close to 0. To answer these questions, we use the following
example, shown in Fig. 2. Consider θ whose components take
one of two values, τ or −ρτ , such that θ̄ is as close to zero as
possible. Hence the number of components taking the value
τ is ⌊nρ/(1 + ρ)⌋. Choosing σ = 1, n = 1000, the key
asymptotic risk term min{βn , βn /(αn + σ 2 )} in Theorem 1
is plotted as a function of τ in Fig. 2 for various values of ρ.
Two important observations can be made from the plots.
Firstly, min{βn , βn /(αn + σ 2 )} exceeds σ 2 = 1 for certain
values of ρ and τ . Hence, θ̂JS2 does not dominate θ̂M L .
Secondly, for any ρ, the normalized risk of θ̂JS2 goes to
zero for large enough τ . Note that when τ is large, both
γn = kθk2 /n and ρn = kθ − θ̄1k2 /n are large and hence,
the normalized risks of both θ̂JS+ and θ̂JS1 are close to 1.
So, although θ̂JS2 does not dominate θ̂ML, θ̂JS+, or θ̂JS1,
there is a range of θ for which R(θ, θ̂JS2 ) is much lower than
both R(θ, θ̂JS+ ) and R(θ, θ̂JS1 ). This serves as motivation for
designing a hybrid estimator that attempts to pick the better
of θ̂JS1 and θ̂JS2 for the θ in context. This is described in
the next section.
In the example of Fig. 2, it is worth examining why the
two-cluster estimator performs poorly for a certain range of
τ, while giving significant risk reduction for large enough τ. First consider an ideal case, where it is known which components of θ are equal to τ and which ones are equal
to −ρτ (although the values of ρ, τ may not be known). In
this case, we could use a James-Stein estimator θ̂JSV of the
form (5) with the target subspace V being the two-dimensional
subspace with basis vectors
    u1 := [1{θ1=τ}, 1{θ2=τ}, · · · , 1{θn=τ}]ᵀ,   u2 := [1{θ1=−ρτ}, 1{θ2=−ρτ}, · · · , 1{θn=−ρτ}]ᵀ.
Since V is a fixed subspace that does not depend on the data,
it can be shown that θ̂JSV dominates the ML-estimator [7],
[8]. In the actual problem, we do not have access to the ideal
basis vectors u1 , u2 , so we cannot use θ̂JSV . The two-cluster
estimator θ̂JS2 attempts to approximate θ̂JSV by choosing the
target subspace from the data. As shown in (10), this is done
using the basis vectors:
    û1 := [1{y1≥ȳ}, 1{y2≥ȳ}, · · · , 1{yn≥ȳ}]ᵀ,   û2 := [1{y1<ȳ}, 1{y2<ȳ}, · · · , 1{yn<ȳ}]ᵀ.

Since ȳ is a good approximation for θ̄ = 0, when the separation between θi and θ̄ is large enough, the noise term wi is unlikely to pull yi = θi + wi into the wrong region; hence, the estimated basis vectors û1, û2 will be close to the ideal ones u1, u2. Indeed, Fig. 2 indicates that when the minimum separation between θi and θ̄ (here, equal to ρτ) is at least 4.5σ, then û1, û2 approximate the ideal basis vectors very well, and the normalized risk is close to 0. On the other hand, the approximation to the ideal basis vectors turns out to be poor when the components of θ are neither too close to nor too far from θ̄, as evident from Remark 1.
III. HYBRID JAMES-STEIN ESTIMATOR WITH UP TO TWO CLUSTERS

Depending on the underlying θ, either the positive-part Lindley estimator θ̂JS1 or the two-cluster estimator θ̂JS2 could have a smaller loss (cf. Theorem 1 and Corollary 2). So we would like an estimator that selects the better among θ̂JS1 and θ̂JS2 for the θ in context. To this end, we estimate the loss of θ̂JS1 and θ̂JS2 based on y. Based on these loss estimates, denoted by L̂(θ, θ̂JS1) and L̂(θ, θ̂JS2) respectively, we define a hybrid estimator as

    θ̂JSH = γy θ̂JS1 + (1 − γy) θ̂JS2,   (30)

where θ̂JS1 and θ̂JS2 are respectively given by (8) and (11), and γy is given by

    γy = 1 if (1/n) L̂(θ, θ̂JS1) ≤ (1/n) L̂(θ, θ̂JS2), and γy = 0 otherwise.   (31)

The loss function estimates L̂(θ, θ̂JS1) and L̂(θ, θ̂JS2) are obtained as follows. Based on Corollary 2, the loss function of θ̂JS1 can be estimated via an estimate of ρn σ²/(ρn + σ²), where ρn is given by (28). It is straightforward to check, along the lines of the proof of Theorem 1, that

    g( ‖y − ȳ1‖²/n ) ≐ g( ρn + σ² ) = ρn + σ².   (32)

Therefore, an estimate of the normalized loss L(θ, θ̂JS1)/n is

    (1/n) L̂(θ, θ̂JS1) = σ² ( 1 − σ²/g(‖y − ȳ1‖²/n) ).   (33)

The loss function of the two-cluster estimator θ̂JS2 can be estimated using Theorem 1, by estimating βn and αn defined in (25) and (24), respectively. From Lemma 13 in Section VI-B, we have

    (1/n) ‖y − ν2‖² ≐ αn + σ² + κn δ + o(δ).   (34)

Further, using the concentration inequalities in Lemmas 1 and 2 in Section II, we can deduce that

    (1/n)‖y − ν2‖² − σ² + (σ²/(nδ)) ( Σ_{i=1}^n 1{|yi−ȳ|≤δ} ) (a1 − a2) ≐ βn + κn δ + o(δ),   (35)

where a1, a2 are defined in (21). We now use (34) and (35) to estimate the concentrating value in (22), noting that

    min{ βn, βn σ²/(αn + σ²) } = βn σ²/g(αn + σ²),

where g(x) = max(x, σ²). This yields the following estimate of L(θ, θ̂JS2)/n:

    (1/n) L̂(θ, θ̂JS2) = σ² [ (1/n)‖y − ν2‖² − σ² + (σ²/(nδ)) (a1 − a2) Σ_{i=1}^n 1{|yi−ȳ|≤δ} ] / g(‖y − ν2‖²/n).   (36)
The loss function estimates in (33) and (36) complete the
specification of the hybrid estimator in (30) and (31). The
following theorem characterizes the loss function of the hybrid
estimator, by showing that the loss estimates in (33) and (36)
concentrate around the values specified in Corollary 2 and
Theorem 1, respectively.
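The selection rule (30)–(31) is easy to express once the loss estimates are available. The sketch below is ours: it implements the Lindley candidate with its loss estimate (33) explicitly, and assumes a second candidate implementing (11) and (36) is supplied in the same (estimate, loss estimate) form.

```python
import numpy as np

def lindley_candidate(y, sigma2):
    """Positive-part Lindley estimator (8) together with its loss estimate (33)."""
    n = y.size
    ybar = y.mean()
    residual = y - ybar
    g = max(sigma2, np.sum(residual ** 2) / n)
    estimate = ybar + max(0.0, 1.0 - (n - 3) * sigma2 / np.sum(residual ** 2)) * residual
    loss_estimate = sigma2 * (1.0 - sigma2 / g)
    return estimate, loss_estimate

def hybrid_js(y, sigma2, candidates):
    """Hybrid rule (30)-(31): evaluate each candidate's loss estimate on y and
    keep the estimate whose loss estimate is smallest.  Each element of
    `candidates` is a function returning (estimate, normalized loss estimate),
    e.g. `lindley_candidate` above and a two-cluster candidate built from (11)/(36)."""
    pairs = [candidate(y, sigma2) for candidate in candidates]
    best_estimate, _ = min(pairs, key=lambda pair: pair[1])
    return best_estimate
```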
Theorem 2. The loss function of the hybrid JS-estimator in
(30) satisfies the following:
(1) For any ε > 0,

    P( (1/n)‖θ − θ̂JSH‖² − min{ (1/n)‖θ − θ̂JS1‖², (1/n)‖θ − θ̂JS2‖² } ≥ ε ) ≤ K e^{ −nk min(ε²,1) / max(‖θ‖²/n, 1) },

where K and k are positive constants.
(2) For a sequence of θ with increasing dimension n, if lim sup_{n→∞} ‖θ‖²/n < ∞, we have

    lim sup_{n→∞} [ (1/n) R(θ, θ̂JSH) − (1/n) min{ R(θ, θ̂JS1), R(θ, θ̂JS2) } ] ≤ 0.
The proof of the theorem is given in Section VI-C. The
theorem implies that the hybrid estimator chooses the better of
the θ̂JS1 and θ̂JS2 with high probability, with the probability
of choosing the worse estimator decreasing exponentially in
n. It also implies that asymptotically, θ̂JSH dominates both
θ̂JS1 and θ̂JS2 , and hence, θ̂M L as well.
Remark 4. Instead of picking one among the two (or several)
candidate estimators, one could consider a hybrid estimator
which is a weighted combination of the candidate estimators.
Indeed, George [8] and Leung and Barron [9], [10] have
proposed combining the estimators using exponential mixture
weights based on Stein’s unbiased risk estimates (SURE) [5].
Due to the presence of indicator functions in the definition of
the attracting vector, it is challenging to obtain a SURE for
θ̂JS2 . We therefore use loss estimates to choose the better
estimator. Furthermore, instead of choosing one estimator
based on the loss estimate, if we were to follow the approach
in [9] and employ a combination of the estimators using
exponential mixture weights based on the un-normalized loss
estimates, then the weight assigned to the estimator with the
8
smallest loss estimate is exponentially larger (in n) than the
other. Therefore, when the dimension is high, this is effectively
equivalent to picking the estimator with the smallest loss
estimate.
IV. GENERAL MULTIPLE-CLUSTER JAMES-STEIN ESTIMATOR

In this section, we generalize the two-cluster estimator of Section II to an L-cluster estimator defined by an arbitrary partition of the real line. The partition is defined by L − 1 functions sj : Rⁿ → R, such that

    sj(y) := sj(y1, · · · , yn) ≐ µj,  ∀j = 1, · · · , (L − 1),   (37)

with constants µ1 > µ2 > · · · > µL−1. In words, the partition can be defined via any L − 1 functions of y, each of which concentrates around a deterministic value as n increases. In the two-cluster estimator, we only have one function s1(y) = ȳ, which concentrates around θ̄. The points in (37) partition the real line as

    R = (−∞, sL−1(y)] ∪ (sL−1(y), sL−2(y)] ∪ · · · ∪ (s1(y), ∞).

The clusters are defined as Cj = {yi, 1 ≤ i ≤ n | yi ∈ (sj(y), sj−1(y)]}, for 1 ≤ j ≤ L, with s0(y) = ∞ and sL(y) = −∞. In Section IV-B, we discuss one choice of partitioning points to define the L clusters, but here we first construct and analyse an estimator based on a general partition satisfying (37).

The points in Cj are all shrunk towards the same point aj, defined in (41) later in this section. The attracting vector is

    νL := Σ_{j=1}^L aj [1{y1∈Cj}, · · · , 1{yn∈Cj}]ᵀ,   (38)

and the proposed L-cluster JS-estimator is

    θ̂JSL = νL + ( 1 − σ²/g(‖y − νL‖²/n) ) (y − νL),   (39)

where g(x) = max(σ², x).

The attracting vector νL lies in an L-dimensional subspace defined by the L orthogonal vectors [1{y1∈C1}, · · · , 1{yn∈C1}]ᵀ, . . . , [1{y1∈CL}, · · · , 1{yn∈CL}]ᵀ. The desired values for a1, . . . , aL in (38) are such that the attracting vector νL is the projection of θ onto the L-dimensional subspace. Computing this projection, we find the desired values to be the means of the θi's in each cluster:

    a1^des = Σ_{i=1}^n θi 1{yi∈C1} / Σ_{i=1}^n 1{yi∈C1},  . . . ,  aL^des = Σ_{i=1}^n θi 1{yi∈CL} / Σ_{i=1}^n 1{yi∈CL}.   (40)

As the θi's are unavailable, we set a1, . . . , aL to be approximations of a1^des, . . . , aL^des, obtained using concentration results similar to Lemmas 1 and 2 in Section II. The attractors are given by

    aj = [ Σ_{i=1}^n yi 1{yi∈Cj} − (σ²/(2δ)) Σ_{i=1}^n ( 1{|yi−sj(y)|≤δ} − 1{|yi−sj−1(y)|≤δ} ) ] / Σ_{i=1}^n 1{yi∈Cj},   (41)

for 1 ≤ j ≤ L. With δ > 0 chosen to be a small positive number as before, this completes the specification of the attracting vector in (38), and hence the L-cluster JS-estimator in (39).

Theorem 3. The loss function of the L-cluster JS-estimator in (39) satisfies the following:
(1) For any ε > 0,

    P( | (1/n)‖θ − θ̂JSL‖² − min{ βn,L, βn,L σ²/(αn,L + σ²) } − κn δ + o(δ) | ≥ ε ) ≤ K e^{ −nk min(ε²,1) / max(‖θ‖²/n, 1) },

where K and k are positive constants, and (with µ0 := ∞ and µL := −∞)

    βn,L := ‖θ‖²/n − Σ_{j=1}^L (cj²/n) Σ_{i=1}^n [ Q((µj − θi)/σ) − Q((µj−1 − θi)/σ) ],   (42)

    αn,L := βn,L − (2σ/(n√(2π))) Σ_{j=1}^L cj Σ_{i=1}^n [ e^{−(µj−θi)²/(2σ²)} − e^{−(µj−1−θi)²/(2σ²)} ],   (43)

with

    cj := Σ_{i=1}^n θi [ Q((µj − θi)/σ) − Q((µj−1 − θi)/σ) ] / Σ_{i=1}^n [ Q((µj − θi)/σ) − Q((µj−1 − θi)/σ) ],   (44)

for 1 ≤ j ≤ L.
(2) For a sequence of θ with increasing dimension n, if lim sup_{n→∞} ‖θ‖²/n < ∞, we have

    lim_{n→∞} [ (1/n) R(θ, θ̂JSL) − min{ βn,L, σ² βn,L/(αn,L + σ²) } − κn δ + o(δ) ] = 0.
The proof is similar to that of Theorem 1, and we provide its
sketch in Section VI-D.
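As with the two-cluster case, the L-cluster construction (38)–(41) is mechanical once the partition points are given. The sketch below is ours; it takes the data-dependent partition points as an input and applies (41) and (39) directly.

```python
import numpy as np

def multi_cluster_js(y, partition_points, sigma2, delta):
    """Illustrative L-cluster JS-estimator (39) with the attractors of (41).
    `partition_points` holds the values s_1(y), ..., s_{L-1}(y)."""
    n = y.size
    s = np.concatenate(([np.inf], np.sort(np.asarray(partition_points, dtype=float))[::-1], [-np.inf]))
    nu = np.zeros_like(y)
    for j in range(1, len(s)):                       # cluster C_j = (s_j, s_{j-1}]
        in_cluster = (y > s[j]) & (y <= s[j - 1])
        if not np.any(in_cluster):
            continue
        # delta-correction of (41) at the two boundaries of the cluster
        correction = sigma2 / (2.0 * delta) * (
            np.sum(np.abs(y - s[j]) <= delta) - np.sum(np.abs(y - s[j - 1]) <= delta))
        nu[in_cluster] = (np.sum(y[in_cluster]) - correction) / in_cluster.sum()
    residual = y - nu
    g = max(sigma2, np.sum(residual ** 2) / n)       # g(x) = max(sigma^2, x)
    return nu + (1.0 - sigma2 / g) * residual
```

With partition_points = [y.mean()] this reduces to the two-cluster sketch given earlier.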
To get intuition on how the asymptotic normalized risk
R(θ, θ̂JSL )/n depends on L, consider the four-cluster estimator with L = 4. For the same setup as in Fig. 2, i.e.,
the components of θ take one of two values: τ or −ρτ , Fig.
3a plots the asymptotic risk term min{βn , βn,4 /(αn,4 + 1)}
versus τ for the four-cluster estimator. Comparing Fig. 3a
and Fig. 2, we observe that the four-cluster estimator’s risk
min{βn , βn,4 /(αn,4 + 1)} behaves similarly to the two-cluster
estimator’s risk min{βn , βn,2 /(αn,2 + 1)} with the notable
difference being the magnitude. For ρ = 0.5, 1, the peak value
of βn,4 /(αn,4 + 1) is smaller than that of βn,2 /(αn,2 + 1).
However, for the smaller values of ρ, the reverse is true. This
means that θ̂JS4 can be better than θ̂JS2 , even in certain
scenarios where the θi take only two values. In the two-value example, θ̂JS4 is typically better when two of the four attractors of θ̂JS4 are closer to the θi values, while the two attracting points of θ̂JS2 are closer to θ̄ than the respective θi values.

Next consider an example where the θi take values from {τ, ρτ, −ρτ, −τ} with equal probability. This is the scenario favorable to θ̂JS4. Figure 3b shows the plot of min{βn,4, βn,4/(αn,4 + 1)} as a function of τ for different values of ρ. Once again, it is clear that when the separation between the points is large enough, the asymptotic normalized risk approaches 0.

(Figure 3: two panels (a) and (b); y-axis min{βn,4, βn,4/(αn,4 + 1)}, x-axis τ; curves for ρ = 0.1, 0.15, 0.25, 0.5, 1.)
Fig. 3: The asymptotic risk term min{βn,4, βn,4/(αn,4 + 1)} for the four-cluster estimator is plotted vs τ, for n = 1000, σ = 1, and different values of ρ. In (a), the components of θ take only two values τ and −ρτ, with ⌊1000ρ/(1 + ρ)⌋ components taking the value τ. In (b), they take values from the set {τ, ρτ, −ρτ, −τ} with equal probability.

A. L-hybrid James-Stein estimator

Suppose that we have estimators θ̂JS1, . . . , θ̂JSL, where θ̂JSℓ is an ℓ-cluster JS-estimator constructed as described above, for ℓ = 1, . . . , L. (Recall that ℓ = 1 corresponds to Lindley's positive-part estimator in (8).) Depending on θ, any one of these L estimators could achieve the smallest loss. We would like to design a hybrid estimator that picks the best of these L estimators for the θ in context. As in Section III, we construct loss estimates for each of the L estimators, and define a hybrid estimator as

    θ̂JSH,L = Σ_{ℓ=1}^L γℓ θ̂JSℓ,   (45)

where

    γℓ = 1 if (1/n) L̂(θ, θ̂JSℓ) = min_{1≤k≤L} (1/n) L̂(θ, θ̂JSk), and γℓ = 0 otherwise,

with L̂(θ, θ̂JSℓ) denoting the loss function estimate of θ̂JSℓ.

For ℓ ≥ 2, we estimate the loss of θ̂JSℓ using Theorem 3, by estimating βn,ℓ and αn,ℓ which are defined in (42) and (43), respectively. From (78) in Section VI-D, we obtain

    (1/n) ‖y − νℓ‖² ≐ αn,ℓ + σ² + κn δ + o(δ).   (46)

Using concentration inequalities similar to those in Lemmas 1 and 2 in Section II, we deduce that

    (1/n)‖y − νℓ‖² − σ² + (σ²/(nδ)) Σ_{j=1}^ℓ aj Σ_{i=1}^n ( 1{|yi−sj(y)|≤δ} − 1{|yi−sj−1(y)|≤δ} ) ≐ βn,ℓ + κn δ + o(δ),   (47)

where a1, . . . , aℓ are as defined in (41). We now use (46) and (47) to estimate the concentrating value in Theorem 3, and thus obtain the following estimate of L(θ, θ̂JSℓ)/n:

    (1/n) L̂(θ, θ̂JSℓ) = [ σ²/g(‖y − νℓ‖²/n) ] [ (1/n)‖y − νℓ‖² − σ² + (σ²/(nδ)) Σ_{j=1}^ℓ aj Σ_{i=1}^n ( 1{|yi−sj(y)|≤δ} − 1{|yi−sj−1(y)|≤δ} ) ].   (48)

The loss function estimator in (48) for 2 ≤ ℓ ≤ L, together with the loss function estimator in (33) for ℓ = 1, completes the specification of the L-hybrid estimator in (45). Using steps similar to those in Theorem 2, we can show that

    (1/n) L̂(θ, θ̂JSℓ) ≐ min{ βn,ℓ, σ² βn,ℓ/(αn,ℓ + σ²) } + κn δ + o(δ),   (49)

for 2 ≤ ℓ ≤ L.

Theorem 4. The loss function of the L-hybrid JS-estimator in (45) satisfies the following:
(1) For any ε > 0,

    P( (1/n)‖θ − θ̂JSH,L‖² − min_{1≤ℓ≤L} (1/n)‖θ − θ̂JSℓ‖² ≥ ε ) ≤ K e^{ −nk min(ε²,1) / max(‖θ‖²/n, 1) },

where K and k are positive constants.
(2) For a sequence of θ with increasing dimension n, if lim sup_{n→∞} ‖θ‖²/n < ∞, we have

    lim sup_{n→∞} [ (1/n) R(θ, θ̂JSH,L) − min_{1≤ℓ≤L} (1/n) R(θ, θ̂JSℓ) ] ≤ 0.
The proof of the theorem is omitted as it is along the same
lines as the proof of Theorem 3. Thus with high probability,
the L-hybrid estimator chooses the best of the θ̂JS1 , . . . , θ̂JSL ,
with the probability of choosing a worse estimator decreasing
exponentially in n.
B. Obtaining the clusters

In this subsection, we present a simple method to obtain the (L − 1) partitioning points sj(y), 1 ≤ j ≤ (L − 1), for an L-cluster JS-estimator when L = 2ᵃ for an integer a > 1. We do this recursively, assuming that we already have a 2^{a−1}-cluster estimator with its associated partitioning points s′j(y), j = 1, · · · , 2^{a−1} − 1. This means that for the 2^{a−1}-cluster estimator, the real line is partitioned as

    R = (−∞, s′_{2^{a−1}−1}(y)] ∪ (s′_{2^{a−1}−1}(y), s′_{2^{a−1}−2}(y)] ∪ · · · ∪ (s′1(y), ∞).

Recall that Section II considered the case of a = 1, with the single partitioning point being ȳ.

The new partitioning points sk(y), k = 1, · · · , (2ᵃ − 1), are obtained as follows. For j = 1, · · · , (2^{a−1} − 1), define

    s_{2j}(y) = s′j(y),   s_{2j−1}(y) = Σ_{i=1}^n yi 1{s′j(y) < yi ≤ s′_{j−1}(y)} / Σ_{i=1}^n 1{s′j(y) < yi ≤ s′_{j−1}(y)},

    s_{2ᵃ−1}(y) = Σ_{i=1}^n yi 1{−∞ < yi ≤ s′_{2^{a−1}−1}(y)} / Σ_{i=1}^n 1{−∞ < yi ≤ s′_{2^{a−1}−1}(y)},

where s′0(y) = ∞. Hence, the partition for the L-cluster estimator is

    R = (−∞, s_{2ᵃ−1}(y)] ∪ (s_{2ᵃ−1}(y), s_{2ᵃ−2}(y)] ∪ · · · ∪ (s1(y), ∞).

We use such a partition to construct a 4-cluster estimator for our simulations in the next section.

V. SIMULATION RESULTS

In this section, we present simulation plots that compare the average normalized loss of the proposed estimators with that of the regular JS-estimator and Lindley's estimator, for various choices of θ. In each plot, the normalized loss, labelled R̃(θ, θ̂)/n on the Y-axis, is computed by averaging over 1000 realizations of w. We use w ∼ N(0, I), i.e., the noise variance σ² = 1. Both the regular JS-estimator θ̂JS and Lindley's estimator θ̂JS1 used are the positive-part versions, respectively given by (7) and (8). We choose δ = 5/√n for our proposed estimators.

In Figs. 4–7, we consider three different structures for θ, representing varying degrees of clustering. In the first structure, the components {θi} are arranged in two clusters. In the second structure, the {θi} are uniformly distributed within an interval whose length is varied. In the third structure, the {θi} are arranged in four clusters. In both clustered structures, the locations and the widths of the clusters as well as the number of points within each cluster are varied; the locations of the points within each cluster are chosen uniformly at random. The captions of the figures explain the details of each structure.

(Figure 4: four panels for n = 10, 50, 100, 1000; y-axis R̃(θ, θ̂)/n, x-axis τ; curves for the regular JS-estimator, Lindley's estimator, the two-cluster JS-estimator, the hybrid JS-estimator, and the ML-estimator.)
Fig. 4: Average normalized loss of various estimators for different values of n. The {θi} are placed in two clusters, one centred at τ and another at −τ. Each cluster has width 0.5τ and n/2 points.
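For readers who want to reproduce the flavour of these experiments, the following sketch (ours; it relies on the estimator sketches given earlier, and the two-cluster θ below is an illustrative stand-in for the Fig. 4 arrangement) averages the normalized loss over independent noise realizations.

```python
import numpy as np

def average_loss(estimator, theta, sigma2=1.0, trials=1000, seed=0):
    """Average normalized squared-error loss of `estimator` over independent
    noise realizations, as plotted in this section."""
    rng = np.random.default_rng(seed)
    n = theta.size
    total = 0.0
    for _ in range(trials):
        y = theta + rng.normal(0.0, np.sqrt(sigma2), size=n)
        total += np.sum((estimator(y, sigma2) - theta) ** 2) / n
    return total / trials

def two_cluster_theta(n, tau, rng):
    """A theta of the Fig. 4 type: two clusters of width 0.5*tau centred at +tau and -tau."""
    centres = np.concatenate((np.full(n // 2, tau), np.full(n - n // 2, -tau)))
    return centres + rng.uniform(-0.25 * tau, 0.25 * tau, size=n)
```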
11
1.5
Lindley’s estimator
R̃(θ, θ̂)/n
R̃(θ, θ̂)/n
1
0.8
0.6
Regular JS−estimator
1
ρn σ 2
ρn + σ 2
Two−cluster JS−estimator
Hybrid JS−estimator
0.5
Lindley’s estimator
0.4
βn σ 2
= min
αn + σ 2
Two−cluster JS−estimator
Hybrid JS−estimator
0.2
ML−estimator
0
0
1
2
3
4
5
τ
6
7
0 1
10
8
9
2
!
ρn σ 2
βn σ 2
, βn ,
ρn + σ 2
αn + σ 2
"
3
10
n
10
4
10
10
(a)
(a)
1.5
1.4
Lindley’s JS−estimator
Two−cluster JS−estimator
1.2
Hybrid JS−estimator
R̃(θ, θ̂)/n
! "
R̃ θ, θ̂ /n
1
0.8
0.6
Regular JS−estimator
1
ρn σ 2
ρn + σ 2
0.5
Lindley’s estimator
0.4
Two−cluster JS−estimator
!
ρn σ 2
βn σ 2
, βn ,
2
ρn + σ
αn + σ 2
"
Hybrid JS−estimator
0.2
0
0
βn σ 2
= min
αn + σ 2
ML−estimator
1
2
3
4
5
τ
6
0 1
10
7
8
9
2
3
10
n
10
4
10
10
(b)
(b)
0.5
R̃(θ, θ̂)/n
R̃(θ, θ̂)/n
0.8
Regular JS−estimator
Lindley’s estimator
0.6
Two−cluster JS−estimator
Hybrid JS−estimator
0.4
ML−estimator
1
2
3
4
5
τ
6
7
8
Hybrid JS−estimator
0.2
0.1
9
ρn σ 2
= min
ρn + σ 2
0 1
10
10
Two−cluster JS−estimator
0.3
0.2
0
0
Lindley’s estimator
βn σ 2
αn + σ 2
0.4
1
!
ρn σ 2
βn σ 2
, βn ,
2
ρn + σ
αn + σ 2
2
"
3
10
n
4
10
10
(c)
(c)
1.5
Lindley’s estimator
R̃(θ, θ̂)/n
R̃(θ, θ̂)/n
1
0.8
Regular JS−estimator
0.6
Lindley’s estimator
0.4
1
Two−cluster JS−estimator
Hybrid JS−estimator
0.5
ρn σ 2
= min
ρn + σ 2
Two−cluster JS−estimator
Hybrid JS−estimator
0.2
βn σ 2
αn + σ 2
!
ρn σ 2
βn σ 2
, βn ,
ρn + σ 2
αn + σ 2
"
ML−estimator
0
0
1
2
3
4
5
τ
6
7
8
9
10
0 1
10
arrangements of the samples {θi }n
i=1 , with n = 1000. In (a), there
are 2 clusters each of width 0.5τ , one around 0.25τ containing 300
points, and the other around −τ and containing 700 points. In (b),
θ consists of 200 components taking the value τ and the remaining
800 taking the value −0.25τ . In (c), there are two clusters of width
0.125τ , one around τ containing 300 points and another around −τ
containing 700 points. In (d), {θi }n
i=1 are arranged uniformly from
−τ to τ .
3
n
10
4
10
(d)
(d)
Fig. 5: Average normalized loss of various estimators for different
2
10
Fig. 6: Average normalized loss of various estimators versus
βn
,
αn +σ 2
βn
n
min( ρnρ+σ
2 , βn , α +σ 2 )
n
n
arrangements of {θi }i=1 . In (a),
ρn
,
ρn +σ 2
as a function of n for different
the {θi }n
i=1 are placed in two
clusters of width 1, one around 2 and the other around −2, each
containing an equal number of points. In (b), {θi }n
i=1 are placed in
two clusters of width 1.25, one around 5 and the other around −5,
each containing an equal number of points. In (c), {θi }n
i=1 are placed
in two clusters of width 0.25, one around 0.5 and the other around
−0.5, each containing an equal number of points. In (d), {θi }n
i=1 are
placed uniformly between −2 and 2.
12
VI. P ROOFS
1
R̃(θ, θ̂)/n
0.8
A. Mathematical preliminaries
0.6
Lindley’s estimator
Two−cluster JS−estimator
Four−cluster JS−estimator
Four−hybrid JS−estimator
ML−estimator
0.4
0.2
0
0
1
2
3
4
5
τ
Here we list some concentration results that are used in the
proofs of the theorems.
6
7
8
9
10
(a)
Lemma 3. Let {Xn (θ), θ ∈ Rn }∞
n=1 be a sequence of random
.
variables such that Xn (θ) = 0, i.e., for any > 0,
nk min(2 ,1)
P(|Xn (θ)| ≥ ) ≤ Ke
1
R̃(θ, θ̂)/n
,
:=
where K and k are positive constants. If C
a.s.
lim supn→∞ kθk2 /n < ∞, then Xn −→ 0.
0.8
0.6
Proof. For any τ > 0, there exists a positive integer M such
that ∀n ≥ M , kθk2 /n < C + τ . Hence, we have, for any
> 0, and for some τ > 1,
Lindley’s estimator
0.4
Two−cluster JS−estimator
Four−cluster JS−estimator
0.2
∞
X
Four−hybrid JS−estimator
ML−estimator
0
0
− max(kθk2 /n,1)
1
2
3
4
5
τ
6
7
8
9
10
P(|Xn (θ)| ≥ ) ≤
∞
X
nk min(2 ,1)
n=1
n=1
(b)
≤ C0 +
Fig. 7: Average normalized loss of various estimators for n = 1000
{θi }n
i=1 .
{θi }ni=1
In Fig. 4,
are arranged in two clusters, one centred
at −τ and the other at τ . The plots show the average
normalized loss as a function of τ for different values of n,
for four estimators: θ̂JS , θ̂JS1 , the two-attractor JS-estimator
θ̂JS2 given by (11), and the hybrid JS-estimator θ̂JSH given
by (30). We observe that as n increases, the average loss of
θ̂JSH gets closer to the minimum of that of θ̂JS1 and θ̂JS2 ;
Fig. 5 shows the the average normalized loss for different
arrangements of {θi }, with n fixed at 1000. The plots illustrate
a few cases where θ̂JS2 has significantly lower risk than θ̂JS1 ,
and also the strength of θ̂JSH when n is large.
Fig. 6 compares the average normalized losses of θ̂JS1 ,
θ̂JS2 , and θ̂JSH with their asymptotic risk values, obtained
in Corollary 2, Theorem 1, and Theorem 2, respectively. Each
subfigure considers a different arrangement of {θi }ni=1 , and
shows how the average losses converge to their respective
theoretical values with growing n.
Fig. 7 demonstrates the effect of choosing four attractors
when {θi }ni=1 form four clusters. The four-hybrid estimator
θ̂JSH,4 attempts to choose the best among θ̂JS1 , θ̂JS2 and
θ̂JS4 based on the data y. It is clear that depending on the
values of {θi }, θ̂JSH,4 reliably tracks the best of these. and
can have significantly lower loss than both θ̂JS1 and θ̂JS2 ,
especially for large values of n.
∞
X
Ke−
nk min(2 ,1)
C+τ
< ∞.
n=M
{θi }n
i=1
In (a),
and for different arrangements of the samples
are placed in four equal-sized clusters of width 0.5τ and in (b), the
clusters are of width 0.25τ . In both cases, the clusters are centred at
1.5τ , 0.9τ , −0.5τ and −1.25τ .
− max(kθk2 /n,1)
Ke
Therefore, we can use the Borel-Cantelli lemma to conclude
a.s.
that Xn −→ 0.
Lemma 4. For sequences of random variables {Xn }∞
n=1 ,
.
.
{Yn }∞
n=1 such that Xn = 0, Yn = 0, it follows that
.
Xn + Yn = 0.
nk
Proof. For > 0, if P(|Xn | ≥ ) ≤ K1 e
nk
min(2 ,1)
1
− max(kθk
2 /n,1)
and
min(2 ,1)
2
− max(kθk
2 /n,1)
P(|Yn | ≥ ) ≤ K2 e
for positive constants K1 ,
K2 , k1 and k2 , then by the triangle inequality,
P(|Xn + Yn | ≥ ) ≤ P |Xn | ≥
+ P |Yn | ≥
2
2
nk min(2 ,1)
− max(kθk2 /n,1)
≤ Ke
where K = K1 + K2 and k = min
k1 k2
4 . 4
.
Lemma 5. Let X and Y be random variables such that for
any > 0,
2
P (|X − aX | ≥ ) ≤ K1 e−nk1 min( ,1) ,
2
P (|Y − a | ≥ ) ≤ K e−nk2 min( ,1)
Y
2
where k1 , k2 are positive constants, and K1 , K2 are positive
integer constants. Then,
P (|XY − aX aY | ≥ ) ≤ Ke−nk min(
2
,1)
where K = 2(K1 + K2 ), and k is a positive constant
depending on k1 and k2 .
13
Proof. We have
P (|XY − aX aY | ≥ )
= P (|(X − aX )(Y − aY ) + XaY + Y aX − 2aX aY | ≥ )
≤ P |(X − aX )(Y − aY )| ≥
2
+ P |XaY + Y aX − 2aX aY | ≥
2
≤ P |(X − aX )(Y − aY )| ≥
2
+ P |(X − aX )aY | ≥
+ P |(Y − aY )aX | ≥
r 4
r 4
≤ P |(X − aX )| ≥
+ P |(Y − aY )| ≥
2
2
+ P |(Y − aY )aX | ≥
+ P |(X − aX )aY | ≥
4
4
−nk100 min(2 ,1)
−nk10 min(,1)
−nk20 min(,1)
≤ K1 e
+ K2 e
+ K1 e
−nk200 min(2 ,1)
−nk min(2 ,1)
+K e
≤ Ke
2
k1
2 ,
min(k1 ,k2 )
16(aY )2 .
where k10 =
k=
k20 =
k2
2 ,
k100 =
k1
16(aY )2 ,
k2
16(aY )2 ,
k200 =
and
Y > 0. Therefore,
!
1
1
1
−
≤ − = P Y ≥ 1
P
Y
aY
a −
!Y
1
= P Y − aY ≥ 1
− aY
aY −
aY
= P Y − aY ≥ aY
1 − aY
≤ K2 e
2
(aY )2
−nk2 min
,1
1−a
Y
0
2
≤ K2 e−nk2 min(
≤ K2 e−nk2 min(
(51)
where k20 = k2 min (aY )4 , 1 . Using (50) and (51), we
obtain, for any > 0,
2
1
1
≥ ≤ Ke−nk min( ,1)
P
−
Y
aY
where k = min(k10 , k20 ).
Lemma 7. Let {Xn }∞
n=1 be a sequence of random variables
and X be another random variable (or a constant) such that
2
for any > 0, P (|Xn − X| ≥ ) ≤ Ke−nk min( ,1) for
positive constants K and k. Then, for the function g(x) :=
max(σ 2 , x), we have
Proof. Let An := {ω
2
P (Y − aY ≤ −) ≤ K1 e−nk1 min( ,1) ,
2
P (Y − a ≥ ) ≤ K e−nk2 min( ,1)
Y
where K = K1 + K2 , and k is a positive constant.
Proof. We have
1
1
1
1
P
−
≥ =P
≥
+
Y
aY
Y
a
Y
1
1
=P Y ≤
= P Y − aY ≤
− aY
+ 1/aY
+ 1/aY
aY
= P Y − aY ≤ −aY
1 + aY
−nk1 min
(aY )2
1+aY
2
,1
0
2
≤ K1 e−nk1 min(
2
(aY )2
where k10 = k1 min
, 1 . Similarly
1+aY
≤ K1 e
P
Note that when >
1
aY
(50)
1
1
=P
≤
− .
Y
aY
, P Y1 − a1Y ≤ − = 0 because
1
1
−
≤ −
Y
aY
,1)
,1)
.
P (|g(Xn ) − g(X)| ≥ )
= P (An ) P |g(Xn ) − g(X)| ≥
n
where k1 , k2 are positive constants, and K1 , K2 are positive
integer constants. Then, for any > 0,
2
1
1
P
−
≥ ≤ Ke−nk min( ,1)
Y
aY
2
|Xn (ω) − X(ω)| ≥ }. We have
An
+ P (Acn ) P |g(Xn ) − g(X)| ≥ Acn
2
≤ Ke−nk min( ,1) + P |g(X ) − g(X)| ≥
2
(aY )4 ,1)
,1)
P (|g(Xn ) − g(X)| ≥ ) ≤ Ke−nk min(
Lemma 6. Let Y be a non-negative random variable such
that there exists aY > 0 such that for any > 0,
2
Acn . (52)
Now, when Xn ≥ σ 2 and X ≥ σ 2 , it follows that
g(Xn ) − g(X) = Xn − X, and the second term of the
RHS of (52) equals 0, as it also does when Xn < σ 2 and
X < σ 2 . Let us consider the case where Xn ≥ σ 2 and
X < σ 2 . Then, g(Xn ) − g(X) = Xn − σ 2 < Xn − X < ,
as we condition on the fact that |Xn − X| < ; hence in
this case P |g(Xn ) − g(X)| ≥ Acn = 0. Finally, when
Xn < σ 2 and X ≥ σ 2 , we have g(X) − g(Xn ) =
X − σ 2 < X − Xn < ; hence
in this case also we
have P |g(Xn ) − g(X)| ≥ Acn = 0. This proves the
lemma.
Lemma 8. (Hoeffding’s Inequality [16, Thm. 2.8]). Let
X1 , · · · , Xn be independent random variables such
Pnthat Xi ∈
[ai , bi ] almost surely, for all i ≤ n. Let Sn =
i −
i=12(X
2
− Pn 2n
2
|Sn |
E[Xi ]). Then for any > 0, P n ≥ ≤ 2e i=1 (bi −ai ) .
Lemma 9. (Chi-squared concentration [16]). For i.i.d. Gaussian random variables w1 , . . . , wn ∼ N (0, σ 2 ), we have for
any > 0,
!
n
2
1X 2
2
P
wi − σ ≥ ≤ 2e−nk min(, ) ,
n i=1
where k = min 4σ1 4 , 2σ1 2 .
14
Lemma 10. For i = 1, · · · , n, let wi ∼ (0, σ 2 ) be independent, and ai be real-valued and finite constants. We have for
any > 0,
!
n
n
σ X − a2i2
1 X
P
wi 1{wi >ai } − √
e 2σ ≥
n i=1
2π i=1
2
≤ 2e−nk1 min(, ) ,
!
n
n
1 X
σ X − a2i2
P
wi 1{wi ≤ai } + √
e 2σ ≥
n i=1
2π i=1
≤ 2e−nk2 min(,
2
)
(53)
(54)
where k1 and k2 are positive constants.
The proof is given in Appendix A.
−nk2
P (|f (y) − a| ≥ ) ≤ 2e
for some constants a, k, such that k > 0. Then for any > 0,
we have
!
n
n
X
2
1 X
P
1{yi >a} ≥ ≤ 4e−nk1 ,
1{yi >f (y)} −
n i=1
i=1
(55)
!
n
n
nk1 2
X
1 X
−
P
θi 1{yi >f (y)} −
θi 1{yi >a} ≥ ≤ 4e kθk2 /n ,
n i=1
i=1
(56)
!
n
n
X
X
1
wi 1{yi >a} ≥
wi 1{yi >f (y)} −
P
n i=1
i=1
2
,)
(57)
where k1 is a positive constant.
The proof is given in Appendix B.
Lemma 12. With the assumptions of Lemma 11, let h : Rn →
R be a function such that b > a and P (|h(y) − b| ≥ ) ≤
2
2e−nl for some l > 0. Then for any > 0, we have
!
n
n
X
1 X
P
1{h(y)≥yi >f (y)} −
1{b≥yi >a} ≥
n i=1
i=1
≤ 8e
−nk2
We have,
1
kθ − θ̂JS2 k2
n
2
σ2
1
θ − ν2 − 1 −
(y
−
ν
)
=
2
n
g (ky − ν2 k2 /n)
1
σ2
=
y − ν2 − 1 −
(y − ν2 ) − w
n
g (ky − ν2 k2 /n)
2
1
σ2
=
(y
−
ν
)
−
w
2
n
g (ky − ν2 k2 /n)
=
Lemma 11. Let y ∼ N θ, σ 2 I , and let f : Rn → R be a
function such that for any > 0,
≤ 4e−nk1 min(
B. Proof of Theorem 1
.
Proof. The result follows from Lemma 11 by noting that
1{h(y)≥yi >f (y)} = 1{yi >f (y)} − 1{yi >h(y)} , and 1{b≥yi >a} =
1{yi >a} − 1{yi >b} .
2
2
σ 4 ky − ν2 k2 /n
2
+
(g (ky − ν2 k2 /n))
2
σ2
−
n g (ky − ν2 k2 /n)
kwk
n
hy − ν2 , wi .
(58)
We also have
1
1
2
2
kθ − ν2 k = ky − ν2 − wk
n
n
1
1
2
2
2
= ky − ν2 k + kwk − hy − ν2 , wi
n
n
n
and so,
2
1
1
1
2
2
2
− hy − ν2 , wi = kθ − ν2 k − ky − ν2 k − kwk .
n
n
n
n
(59)
Using (59) in (58), we obtain
kθ − θ̂JS2 k2
n
2
kwk
σ2
σ 4 ky − ν2 k2 /n
+
+
=
2
n
g (ky − ν2 k2 /n)
(g (ky − ν2 k2 /n))
1
1
1
2
2
2
×
kθ − ν2 k − ky − ν2 k − kwk .
(60)
n
n
n
We now use the following results whose proofs are given in
Appendix F and Appendix G.
Lemma 13.
1
2 .
ky − ν2 k = αn + σ 2 + κn δ + o(δ).
n
where αn is given by (25).
(61)
Lemma 14.
1
2 .
kθ − ν2 k = βn + κn δ + o(δ)
n
where βn is given by (24).
Using Lemma 7 together with (61), we have
!
2
ky − ν2 k
.
g
= g(αn + σ 2 ) + κn δ + o(δ).
n
(62)
(63)
Using (61), (62) and (63) together with Lemmas 5, 6, and 9,
15
we obtain
4
2
2
2
kθ − θ̂JS2 k . σ αn + σ
σ
2
=
2 +σ +
n
g (αn + σ 2 )
(g (αn + σ 2 ))
× β 2 − αn + σ 2 − σ 2 + κn δ + o(δ) (64)
(
βn + κn δ + o(δ),
if αn < 0,
=
βn σ 2
αn +σ 2 + κn δ + o(δ) otherwise.
Therefore, for any > 0,
kθ − θ̂JS2 k2
βn σ 2
− min βn ,
n
αn + σ 2
nk min(2 ,1)
− kθk2 /n
+ κn δ + o(δ) ≥ ≤ Ke
.
P
This proves (22) and hence, the first part of the theorem.
To prove the second part of the theorem, we use the
following definition and result.
Definition VI.1. (Uniform Integrability [17, p. 81]) A sequence {Xn }∞
n=1 is said to be uniformly integrable (UI) if
lim lim sup E |Xn |1{|Xn |≥K} = 0.
(65)
K→∞
Now, using (60) and (64), we can write
kθ − θ̂JS2 k2
βn σ 2
+ κn δ + o(δ)
− min βn ,
n
αn + σ 2
!
2
kwk
2
= Sn + Tn − Un − Vn +
−σ .
n
Note from Jensen’s inequality that |E[Xn ] − EX| ≤ E(|Xn −
X|). We therefore have
1
βn σ 2
+
κ
δ
+
o(δ)
R(θ, θ̂JS2 ) − min βn ,
n
n
αn + σ 2
kθ − θ̂JS2 k2
βn σ 2
≤E
+ κn δ + o(δ)
− min βn ,
n
αn + σ 2
= E Sn + Tn − Un − Vn +
≤ E |Sn | + E |Tn | − E |Un | − E |Vn | + E
L1
kwk2
n
−→ σ 2 , i.e.,
kwk2
2
lim E
−σ
= 0.
n→∞
n
We first show that
1
Fact 1. [18, Sec. 13.7] Let {Xn }∞
n=1 be a sequence in L ,
L1
equivalently E|Xn | < ∞, ∀n. Also, let X ∈ L . Then Xn −→
X, i.e., E(|Xn − X|) → 0, if and only if the following two
conditions are satisfied:
kwk2
− σ2 .
n
(68)
n→∞
1
kwk2
− σ2
n
(69)
Now, consider the individual terms of the RHS of (60).
Using Lemmas 5, 6 and 7, we obtain
σ 4 ky − ν2 k2 /n . σ 4 αn + σ 2
=
2
2 + κn δ + o(δ),
(g (ky − ν2 k2 /n))
(g (αn + σ 2 ))
This holds because
Z ∞
kwk2
kwk2
2
2
E
−σ
=
− σ > x dx
P
n
n
0
Z
Z
1
∞
(i)
2
≤
2e−nkx dx +
2e−nkx dx
0
1
Z ∞
Z ∞
2
≤
2e−nkx dx +
2e−nkx dx
0
0
Z ∞
Z ∞
2
2
n→∞
−t2
e dt +
e−t dt −→ 0,
=√
nk 0
nk 0
and so, from Lemma 3,
where inequality (i) is due to Lemma 9.
P
1) Xn −→ X,
2) The sequence {Xn }∞
n=1 is UI.
σ 4 ky − ν2 k2 /n
Sn :=
2
(g (ky − ν2 k2 /n))
#
"
σ 4 αn + σ 2
a.s.
−
2 + κn δ + o(δ) −→ 0.
(g (αn + σ 2 ))
(66)
Similarly, we obtain
2
σ 2 kθ − ν2 k /n
g (ky − ν2 k2 /n)
βn σ 2
a.s.
−
+ κn δ + o(δ) −→ 0,
g(αn + σ 2 )
Tn :=
and since the sum of the terms in (66) that involve δ have
bounded absolute value for a chosen and fixed δ (see Note 1),
there exists M > 0 such that ∀n, |Sn | ≤ 2σ 2 + M . Hence,
from Definition VI.1, {Sn }∞
n=1 is UI. By a similar argument,
so is {Un }∞
.
Next,
considering
Vn , we have
n=1
2
σ 2 ky − ν2 k /n
g (ky − ν2 k2 /n)
2
σ (αn + σ 2 )
a.s.
−
+ κn δ + o(δ) −→ 0,
g(αn + σ 2 )
Un :=
2
σ 2 kwk /n
g (ky − ν2 k2 /n)
σ4
a.s.
−
+ κn δ + o(δ) −→ 0.
g(αn + σ 2 )
Vn :=
Thus, from (68), to prove (23), it is sufficient to show that
E |Sn |, E |Tn |, E |Un | and E |Vn | all converge to 0 as n →
∞. From Fact 1 and (66), (67), this implies that we need
∞
∞
∞
to show that {Sn }∞
n=1 , {Tn }n=1 , {Un }n=1 , {Vn }n=1 are UI.
Considering Sn , we have
σ 4 αn + σ 2
σ 4 ky − ν2 k2 /n
2
2
∀n,
2 ≤σ ,
2 ≤σ ,
(g (ky − ν2 k2 /n))
(g (αn + σ 2 ))
(67)
2
2
σ 2 kwk /n
kwk
≤
,
g (ky − ν2 k2 /n)
n
2
σ4
≤ σ2 ,
g(αn + σ 2 )
∀n,
+ σ 2 + M , ∀n. Note from (69) and
and hence, |Vn | ≤ kwk
n
Fact 1 that {kwk2 /n}∞
n=1 is UI. To complete the proof, we
use the following result whose proof is provided in Appendix
C.
16
Lemma 15. Let {Yn }∞
n=1 be a UI sequence of positive-valued
random variables, and let {Xn }∞
n=1 be a sequence of random
variables such that |Xn | ≤ cYn + a, ∀n, where c and a are
positive constants. Then, {Xn }∞
n=1 is also UI.
Hence, {Vn }∞
n=1 is UI. Finally, considering Tn in (67), we see
that
2
2
2
2
2k
2k
2 ky−ν2 −wk
2σ 2 ky−ν
+ kwk
σ 2 kθ−ν
σ
n
n
n
n
=
≤
ky−ν2 k2
ky−ν2 k2
ky−ν2 k2
g
g
g
n
n
n
2
kwk
2
≤2 σ +
,
n
βn σ 2
≤ βn < ∞.
g(αn + σ 2 )
Note that the last inequality is due to the assumption that
2
lim supn→∞ kθk2 /n < ∞. Therefore, |Tn | ≤ 2kwk /n +
2
2σ + M , ∀n, where M is some finite constant. Thus, by
Lemma 15, Tn is UI. Therefore, each of the terms of the RHS
of (68) goes to 0 as n → ∞, and this completes the proof of
the theorem.
for some positive constants k and K. Let P (γy = 0, ∆n > )
denote the probability that γy = 0 and ∆n > for a chosen
> 0. Therefore,
P (γy = 0, ∆n > )
1
1
=P
L̂(θ, θ̂JS1 ) > L̂(θ, θ̂JS2 ) , ∆n >
n
n
1
1
≤P
L̂(θ, θ̂JS1 ) + ∆n > L̂(θ, θ̂JS2 ) +
n
n
nk min(2 ,1)
≤ Ke
− max(kθk2 /n,1)
,
(74)
where the last inequality is obtained from (73). So for any
> 0, we have
!
kθ − θ̂JSH k2
kθ − θ̂JS1 k2
P
−
≥ y ∈ En
n
n
nk min(2 ,1)
− max(kθk2 /n,1)
≤ P (γy = 0, ∆n > ) ≤ Ke
.
In a similar manner, we obtain for any > 0,
kθ − θ̂JS2 k2
kθ − θ̂JSH k2
−
≥
n
n
P
C. Proof of Theorem 2
!
y ∈ Enc
nk min(2 ,1)
− max(kθk2 /n,1)
≤ P (γy = 1, −∆n > ) ≤ Ke
Let
n
o
En := y ∈ Rn : kθ − θ̂JS1 k2 < kθ − θ̂JS2 k2 ,
.
Therefore, we arrive at
kθ − θ̂JS2 k2
kθ − θ̂JS1 k2
−
.
∆n :=
n
n
Without loss of generality, for a given > 0, we can assume
that |∆n | > because if not, it is clear that
!
kθ − θ̂JSH k2
kθ − θ̂JS1 k2 kθ − θ̂JS2 k2
− min
,
≤ .
n
n
n
P
kθ − θ̂JSH k2
− min
i=1,2
n
kθ − θ̂JSi k2
n
!
!
≥
nk min(2 ,1)
− max(kθk2 /n,1)
≤ Ke
.
This proves the first part of the theorem.
From (32) and Lemma 6, we obtain the following concentration inequality for the loss estimate in (33):
1
. ρn σ 2
L̂(θ, θ̂JS1 ) =
.
n
ρn + σ 2
Using this together with Corollary 2, we obtain
1
. 1
L̂(θ, θ̂JS1 ) = kθ − θ̂JS1 k2 .
(70)
n
n
Following steps similar to those in the proof of Lemma 13,
we obtain the following for the loss estimate in (36):
1
βn σ 2
.
L̂(θ, θ̂JS2 ) =
+ κn δ + o(δ).
n
g(αn + σ 2 )
(71)
Combining this with Theorem 1, we have
1
. 1
L̂(θ, θ̂JS2 ) = kθ − θ̂JS2 k2 .
n
n
(72)
Then, from (70), (72), and Lemma 4, we have n1 L̂(θ, θ̂JS1 ) −
.
1
n L̂(θ, θ̂JS2 ) = −∆n . We therefore have, for any > 0,
1
1
P
L̂(θ, θ̂JS1 ) − L̂(θ, θ̂JS2 ) − (−∆n ) ≥
n
n
nk min(2 ,1)
− max(kθk2 /n,1)
≤ Ke
(73)
For the second part, fix > 0. First suppose that θ̂JS1 has
lower risk. For a given θ, let
AJS1 (θ) := {y ∈ Rn : γy = 1},
AJS2a (θ) := {y ∈ Rn : γy = 0, and ∆n ≤ } ,
AJS2b (θ) := Rn \(AJS1 (θ) ∪ AJS2a (θ))
= {y ∈ Rn : γy = 0, and ∆n > } .
17
2
by φ(y; θ), we have
exp − ky−θk
2σ 2
Z
R(θ, θ̂JSH ) =
φ(y; θ)kθ̂JS1 − θk2 dy
AJS1 (θ)
Z
+
φ(y; θ)kθ̂JS2 − θk2 dy
√ 1
2πσ 2
Denoting
D. Proof of Theorem 3
The proof is similar to that of Theorem 1, so we only
provide a sketch. Note that for ai , bi , real-valued and finite,
i = 1, · · · , n, with ai < bi ,
n
n
bi
1X
ai
1X
−Q
E 1{ai <wi ≤bi } =
Q
,
n i=1
n i=1
σ
σ
n
n
b2
i
1X
σ X − a2i2
E wi 1{ai <wi ≤bi } = √
e 2σ − e− 2σ2 .
n i=1
n 2π i=1
AJS2a (θ) ∪ AJS2b (θ)
(a)
Z
φ(y; θ)kθ̂JS1 − θk2 dy
≤
AJS1 (θ)
Z
φ(y; θ) (kθ̂JS1 − θk2 + n) dy
+
AJS2a (θ)
Z
Since 1{ai <wi ≤bi } ∈ [0, 1], it follows that wi 1{ai <wi ≤bi } ∈
[mi , ni ] where mi = min(0, ai ), ni = max(0, bi ). So, from
Hoeffding’s inequality, we obtain
!
n
n
X
1 X
bi
ai
P
1{ai <wi ≤bi } −
−Q
≥
Q
n i=1
σ
σ
i=1
2
φ(y; θ)kθ̂JS2 − θk dy
+
AJS2b (θ)
(b)
1/2
≤ R(θ, θ̂JS1 ) + n + (P (γy = 0, ∆n > ))
!1/2
Z
4
×
φ(y; θ)kθ̂JS2 − θk dy
2
AJS2b (θ)
(c)
≤ R(θ, θ̂JS1 ) + n
nk min(2 ,1)
− max(kθk2 /n,1)
+ Ke
Ekθ̂JS2 − θk4
1/2
(75)
where step (a) uses the definition of AJS2a , in step (b) the last
term is obtained using the
p Cauchy-Schwarz
p inequality on the
product of the functions φ(y; θ), and φ(y; θ)kθ̂JS2 −θk2 .
Step (c) is from (74).
Similarly, when θ̂JS2 has lower risk, we get
R(θ, θ̂JSH ) ≤ R(θ, θ̂JS2 ) + n
nk min(2 ,1)
− max(kθk2 /n,1)
+ Ke
Ekθ̂JS1 − θk4
1/2
. (76)
Subsequently, the steps of Lemma 13 are used to obtain
1
ky − νyL k2
n
L
2 n
µj − θi
µj−1 − θi
. kθk2 X cj X
Q
−
−Q
=
n
n i=1
σ
σ
j=0
X
L
n
X
−θi )2
(µ
(µ −θ )2
2
σ
− j2σ2i
− j−1
2
2σ
√
cj
e
−
−e
n
2π j=1 i=1
+ κn δ + o(δ).
Hence, from (75)-(76), we obtain
1
1
R(θ, θ̂JSH ) ≤
min R(θ, θ̂JSi ) + n
n
n i=1,2
1/2
nk min(2 ,1)
− max(kθk2 /n,1)
4
max
Ekθ̂JSi − θk
+ Ke
.
(78)
Finally, employing the steps of Lemma 14, we get
i=1,2
Now,
noting
that
by
assumption,
1/2
/n is finite, we get
lim supn→∞ Ekθ̂JSi − θk4
1
R(θ, θ̂JSH ) − min R(θ, θ̂JSi ) − ≤ 0.
lim sup
i=1,2
n→∞ n
Since this is true for every > 0, we therefore have
1
lim sup
R(θ, θ̂JSH ) − min R(θ, θ̂JSi ) ≤ 0.
i=1,2
n→∞ n
≤ 2e−2n ,
n
n
b2
i
1 X
σ X − a2i2
−
2
P
e 2σ − e 2σ
wi 1{ai <wi ≤bi } − √
n i=1
2π i=1
2
− Pn 2n
2
≥ ≤ 2e i=1 (ni −mi ) .
1
2
kθ − νyL k
n
L
2 n
µj − θi
µj−1 − θi
. kθk2 X cj X
=
−
Q
−Q
n
n i=1
σ
σ
j=0
+ κn δ + o(δ).
The subsequent steps of the proof are along the lines of that
of Theorem 1.
VII. C ONCLUDING REMARKS
(77)
This completes the proof of the theorem.
2
Note3. Note that in the best case
scenario, kθ − θ̂JSH k =
min kθ − θ̂JS1 k2 , kθ − θ̂JS2 k2 , which occurs when for
each realization of y, the hybrid estimator picks the better
of the two rival estimators θ̂JS1 and θ̂JS2 . In this case, the
inequality in (77) is strict, provided that there are realizations
of y with non-zero probability measure for which one estimator is strictly better than the other.
In this paper, we presented a class of shrinkage estimators
that take advantage of the large dimensionality to infer the
clustering structure of the parameter values from the data.
This structure is then used to construct an attracting vector for
the shrinkage estimator. A good cluster-based attracting vector
enables significant risk reduction over the ML-estimator even
when θ is composed of several inhomogeneous quantities.
We obtained concentration bounds for the squared-error loss
of the constructed estimators and convergence results for the
risk. The estimators have significantly smaller risks than the
regular JS-estimator for a wide range of θ ∈ Rn , even though
18
they do not dominate the regular (positive-part) JS-estimator
for finite n.
An important next step is to test the performance of the
proposed estimators on real data sets. It would be interesting
to adapt these estimators and analyze their risks when the
sample values are bounded by a known value, i.e., when |θi | ≤
τ , ∀i = 1, · · · , n, with τ known. Another open question is
how one should decide the maximum number of clusters to
be considered for the hybrid estimator.
An interesting direction for future research is to study
confidence sets centered on the estimators in this paper, and
compare them to confidence sets centered on the positive-part
JS-estimator, which were studied in [19], [20].
The James-Stein estimator for colored Gaussian noise, i.e.,
for w ∼ N (0, Σ) with Σ known, has been studied in [21],
and variants have been proposed in [22], [23]. It would be
interesting to extend the ideas in this paper to the case of
colored Gaussian noise, and to noise that has a general subGaussian distribution. Yet another research direction is to
construct multi-dimensional target subspaces from the data that
are more general than the cluster-based subspaces proposed
here. The goal is to obtain greater risk savings for a wider
range of θ ∈ Rn , at the cost of having a more complex
attractor.
b2
Clearly, f (−∞; b) = e 2 , and since b > 0, we have for
x ≤ 0,
i
x2 h
b2
b2
− √b2π e− 2
e 2 Q (x − b) + e 2 (1 − Q (x))
f (x; b) < e
x2
b2
− √b2π e− 2
e 2 [Q (x − b) + 1 − Q (x)]
= e
Z x
x2
b2
1 − u2
− √b2π e− 2
2
2
√ e
= e
e
1+
du
2π
x−b
b2
x2
c2
b
(i)
− √b e− 2
1 + √ e− 2
= e 2π
e2
2π
2
2
(j)
2
−x
−c
(k) b2
b
b
b
2
√
− √2π e 2
e
e 2π
≤ e
e2
< e2
where (i) is from the first mean value theorem for integrals
for some c ∈ (x − b, x), (j) is because ex ≥ 1 + x for x ≥ 0,
2
2
and (k) is because for x ≤ 0, e−x > e−(x−b) for b > 0.
Therefore,
b2
sup f (x; b) = e 2 .
(80)
x∈(−∞,0]
Now, for x ≥ 0, consider
h(x) :=
f (−x; b) − f (x; b)
e
=e
A PPENDIX
A. Proof of Lemma 10
Note that E wi 1{wi >ai } =
a2
i
√σ e− 2σ2
2π
. So, with
n
σ X − a2i2
e 2σ ,
X :=
wi 1{wi >ai } − √
2π i=1
i=1
n
X
a2
i
we have EX = 0. Let mi := √σ2π e− 2σ2 , and consider the
moment generating function (MGF) of X. We have
Z ∞
n
2
wi
λX Y
e−λmi
λwi 1{w >a }
i
i
√
E e
=
e
e− 2σ2 dwi
2πσ 2 −∞
i=1
Z
Z ai
n
∞
Y
w2
w2
e−λmi
− 2σi2
λwi − 2σi2
√
=
e e
dwi +
e
dwi
2πσ 2 ai
−∞
i=1
Z ∞
n
a
Y
w2
1
i
−λmi
λwi − 2σi2
√
=
e
e e
dwi + 1 − Q
2
σ
2πσ
a
i
i=1
n
h λ2 σ 2 a
a i
Y
i
i
=
e−λmi e 2 Q
− λσ + 1 − Q
.
σ
σ
i=1
(79)
Now, for any positive real number b, consider the function
i
x2 h
b2
− √b e− 2
f (x; b) = e 2π
e 2 Q (x − b) + 1 − Q (x) .
Qn
Note that the RHS of (79) can be written as i=1 f ( aσi ; λσ).
We will bound the MGF in (79) by bounding f (x; b).
b2
2
− √b2π e−
x2
2
[Q (−x − b) − Q (x − b)] + Q (x) − Q (−x) .
We have h(0) = 0 and
√ dh(x)
(x+b)2
(x−b)2
b2
x2
2π
= e 2 e− 2 + e− 2
− 2e− 2
dx
−x2
x2
= e 2 e−bx + ebx − 2e− 2
−x2
= e 2 e−bx + ebx − 2
= 2e
−x2
2
[cosh(bx) − 1] ≥ 0
because cosh(bx) ≥ 1. This establishes that h(x) is monotone
non-decreasing in [0, ∞) with h(0) = 0, and hence, for x ∈
[0, ∞),
h(x) ≥ 0 ⇒ f (−x; b) ≥ f (x; b).
(81)
Finally, from (80) and (81), it follows that
sup
b2
f (x; b) = e 2 .
(82)
x∈(−∞,∞)
nλ2 σ 2
Using (82) in (79), we obtain E eλX ≤ e 2 . Hence,
applying the Chernoff trick, we have for λ > 0:
2 2
E eλX
− λ− nλ2 σ
λX
λ
P (X ≥ ) = P e
≥e
≤
≤
e
.
eλ
Choosing λ =
nσ 2
which minimizes e
2 2
− λ− nλ2 σ
, we get
19
2
P (X ≥ ) ≤ e− 2nσ2 and so,
X
P
≥
n
!
!
n
n
1 X
σ X − a2i2
=P
≥
wi 1{wi >ai } − √
e 2σ
n i=1
2π i=1
n2
≤ e− 2σ2 .
(83)
To obtain the lower tail inequality, we use the following result:
Fact 2. [24, Thm. 3.7]. For independent random variables
Xi satisfying Xi ≥ −M , for 1 ≤ i ≤ n, we have for any
> 0,
!
2
n
n
− Pn
X
X
2(
E[X 2 ]+ M )
i=1
i
3
P
X −
E[X ] ≤ − ≤ e
.
i
i
i=1
i=1
So, for Xi = w
i 1{w
i >ai } , we have Xi ≥ min{0, ai , i =
1, · · · , n}, and E Xi2 ≤ σ 2 , ∀i = 1, · · · , n. Clearly, we can
take M = − min{0, ai , i = 1, · · · , n} < ∞. Therefore, for
any > 0,
!
2
n
n
− Pn
X
X
M
2
P
X −
E[X ] ≤ − ≤ e 2( i=1 E[Xi ]+ 3 )
i
i
i=1
i=1
≤e
2
−
2 nσ 2 + M
3
(
Then, for any t > 0, we have
P(E)
≤ P(E, {a < f (y) ≤ a + t}) + P(E, {a − t ≤ f (y) ≤ a})
+ P(|f (y) − a| > t)
!
n
1 X
=P
θi 1{a<yi ≤f (y)} ≥ , {a < f (y) ≤ a + t}
n i=1
+P
!
n
1 X
θi 1{f (y)<yi ≤a} ≥ , {a − t < f (y) ≤ a}
n i=1
+ P(|f (y) − a| > t)
!
X
n
1
≤P
|θi |1{a<yi ≤a+t} ≥
n i=1
!
X
n
2
1
|θi |1{a−t<yi ≤a} ≥ + 2e−nkt .
+P
n i=1
Now,
Z
P(1{a<yi ≤a+t} = 1) =
)
(85)
a
a+t
√
1
2πσ 2
e−
(yi −θi )2
2σ 2
dyi ≤ √
and hence,
P
n
X
n
σ X − a2i2
wi 1{wi >ai } − √
e 2σ
2π
i=1
i=1
1
n
−
≤e
!
≤ −
n
2 σ2 + M
3
).
≤ 2e
n2
(
2 σ2 + M
3
1X
|θi | P(1{a<yi ≤a+t} = 1)
0 ≤ EY =
n i=1
(84)
Using the upper and lower tail inequalities obtained in (83)
and (84), respectively, we get
!
n
n
σ X − a2i2
1 X
e 2σ ≥
wi 1{wi >ai } − √
P
n i=1
2π i=1
−
2πσ 2
(86)
(yi −θi )2
Pn
where we have used e− 2σ2
≤ 1. Let Y := n1 i=1 Yi
where Yi := |θi |1{a<yi ≤a+t} . Then, from (86), we have
!
n2
(
t
) ≤ 2e−nk min(,2 )
where k is a positive constant (this is due to M being finite).
This proves (53). The concentration inequality in (54) can be
similarly proven, and will not be detailed here.
n
X
t
|θi |.
≤ √
n 2πσ 2 i=1
Since Yi ∈ [0, |θi |], from Hoeffding’s inequality, for any
2n21
1 > 0, we have P (Y − EY ≥ 1 ) ≤ exp{− kθk2 /n
}, which
implies
2n2
1
tkθk1
−
≤ e kθk2 /n ,
P Y ≥ 1 + √
2
n 2πσ
Pn
where kθk1 := i=1 |θi |. Now, set 1 = /2 and
p
πσ 2 /2
t=
(87)
kθk1 /n
to obtain
2
B. Proof of Lemma 11
n
We first prove (56). Then (55) immediately follows by
setting θi = 1, ∀i.
Let us denote the event whose probability we want to bound
by E. In our case,
)
(
n
n
X
1 X
θi 1{yi >f (y)} −
θi 1{yi >a} ≥ .
E=
n i=1
i=1
⇒P
P (Y ≥ ) ≤ e
!
1X
|θi |1{a<yi ≤a+t} ≥
n i=1
≤e
n
− 2kθk
2 /n
2
n
− 2kθk
2 /n
.
(88)
A similar analysis yields
n
P
1X
|θi |1{a−t<yi ≤a} ≥
n i=1
!
2
≤e
n
− 2kθk
2 /n
.
(89)
Using (88) and (89) in (85) and recalling that t is given by
20
(87), we obtain
we obtain,
n
X
n
!
n
X
1
P
θi 1{yi >f (y)} −
θi 1{yi >a} ≥
n i=1
i=1
n2 kπσ2
2
n2
−
− nk
− 2kθk
2
2 /n
2kθk2
/n
1
+e
≤2 e
≤ 4e kθk2 /n
where k is a positive constant. The last inequality holds
because kθk21 /n2 < kθk2 /n (by the Cauchy-Schwarz inequality), and lim supn→∞ kθk2 /n < ∞ (by assumption). This
proves (56).
P
1X
|wi |1{a<yi ≤a+t} ≥
n i=1
!
≤ 2e−nk min(
2
,)
(92)
where k is a positive constant. Using similar steps, it can be
shown that the third term on the RHS of (90) can also be
bounded as
!
n
2
1X
|wi |1{a−t<yi ≤a} ≥ ≤ 2e−nk min( ,) . (93)
P
n i=1
This completes the proof of (57).
Next, we prove (57). Using steps very similar to (85), we
have, for t > 0, > 0,
!
n
n
X
1 X
wi 1{yi >f (y)} −
P
wi 1{yi >a} ≥
n i=1
i=1
!
n
1X
−nkt2
≤ 2e
+P
|wi |1{a<yi ≤a+t} ≥
n i=1
!
n
1X
|wi |1{a−t<yi ≤a} ≥ .
(90)
+P
n i=1
Pn
Now, let Y := n1 i=1 Yi where
Yi := |wi |1{a<yi ≤a+t} = |wi |1{a−θi <wi ≤a−θi +t} .
Noting that |wi | ≤ t + |a − θi | when wi ∈ [a − θi , a − θi + t],
we have
Z a−θi +t
|w| −w2 /2σ2
√
E[Yi ] =
e
dw
2
2πσ
a−θi
(j)
2
2
t
|c|
(i)
e−c /2σ
≤ √
=t √
.
2
2πe
2πσ
Note that (i) is from the mean value theorem for integrals √
with
2
c ∈ (a − θi , a − θi + t), and (j) is because xe−x ≤ 1/ 2e
for x ≥ 0. Hence
n
t
1X
E[Yi ] ≤ √
.
0 ≤ E[Y ] =
n i=1
2πe
So,
lim sup E |Xn |1{|Xn |≥K}
K→∞
n→∞
i
h
≤ c lim lim sup E Yn 1{Yn ≥ K−a }
c
K→∞
n→∞
K −a
+ a lim lim sup P Yn ≥
= 0.
K→∞
c
n→∞
lim
D. Proof of Lemma 1
We first prove (16) and (17). Then, (18) and (19) immediately follow by setting θi = 1, for 1 ≤ i ≤ n.
From Lemma 11, for any > 0,
!
n
n
2
X
1 X
− nk
P
θi 1{yi >ȳ} −
θi 1{yi >θ̄} ≤ 4e kθk2 /n . (94)
n
i=1
As each Yi takes values in an interval of length at most t, by
Hoeffding’s inequality we have for any 1 > 0
2
C. Proof of Lemma 15
Since {Yn }∞
VI.1, we have
n=1 is UI, from Definition
limK→∞ lim supn→∞ E Yn 1{Yn ≥K} = 0. Therefore,
E |Xn |1{|Xn |≥K}
≤ E c|Yn |1{|Xn |≥K} + E a1{|Xn |≥K}
≤ cE Yn 1{cYn +a≥K} + aE 1{cYn +a≥K}
i
h
i
h
= cE Yn 1{Yn ≥ K−a } + aE 1{Yn ≥ K−a }
c
c
h
i
K −a
= cE Yn 1{Yn ≥ K−a } + aP Yn ≥
.
c
c
2
P(Y ≥ 1 + E[Y ]) ≤ 2e−2n1 /t
2
2
t
⇒ P Y ≥ 1 + √
≤ 2e−2n1 /t .
(91)
2πe
√
t
Now, set √2πe
= 1 . Using this value of t in the RHS of
(91), we obtain
!
n
√
1X
P
|wi |1{a<yi ≤a+t} ≥ 1 + 1 ≤ 2e−nk1 1
n i=1
√
√
where
k1 = 1/(πe). Setting 1 + 1 = , we get 1 =
√
4+1−1
. Using the following inequality for x > 0:
2
2
√
2
x /32, 0 ≤ x ≤ 3
1+x−1 ≥
3x/4,
x > 3,
i=1
Since θi 1{yi >θ̄} ∈ {0, θi } are independent for 1 ≤ i ≤ n,
from Hoeffding’s inequality, we have, for any > 0,
!
n
n
h
i
1X
1X
P
θi 1{yi >θ̄} −
θi E 1{yi >θ̄} >
n i=1
n i=1
2
≤ 2e
2n
− kθk
2 /n
.
(95)
Also for each i,
h
i
E 1{yi >θ̄} = P yi > θ̄ = P wi > θ̄ − θi
θ̄ − θi
=Q
.
σ
Therefore, from (94) and (95), we obtain
n
n
1X
θ̄ − θi
. 1X
θi 1{yi >ȳ} =
θi Q
.
n i=1
n i=1
σ
(96)
21
The concentration result in (17) immediately follows by writing 1{yi ≤ȳ} = 1 − 1{yi >ȳ} .
To prove (14), we write
n
n
n
1X
1X
1X
yi 1{yi >ȳ} =
θi 1{yi >ȳ} +
wi 1{yi >ȳ} .
n i=1
n i=1
n i=1
Hence, we have to show that
n
n
σ X − (θ̄−θ2i )2
1X
.
wi 1{yi >ȳ} = √
e 2σ .
n i=1
n 2π i=1
(97)
From the first mean value theorem for integrals, ∃εi ∈ (−δ, δ)
such that
Z θ̄+δ
(y −θ )2
(θ̄+εi −θi )2
1
1
− i2σ2i
−
2
2σ
√
√
e
dyi = 2δ
e
2πσ 2
2πσ 2
θ̄−δ
and so the RHS of (102) can be written as
n Z
(yi −θi )2
σ 2 X θ̄+δ
1
√
e− 2σ2 dyi
2nδ i=0 θ̄−δ
2πσ 2
n
2
i)
σ X − (θ̄+εi −θ
2σ 2
= √
.
e
n 2π i=0
From Lemma 11, for any > 0, we have
n
n
X
1 X
wi 1{yi >ȳ} −
wi 1{yi >θ̄}
n i=1
i=1
P
!
Now, let xi := θ̄ − θi . Then, since |εi | ≤ δ, we have
2
≤ 4e
nk
− kθk
2 /n
.
(98)
Now,
h
i Z ∞
w2
1
− 2σi2
e
dwi
E wi 1{yi >θ̄} =
wi 1{yi >θ̄} √
2πσ 2
Z−∞
∞
2
wi
(θ̄−θi )2
w
σ
√ i e− 2σ2 dwi = √ e− 2σ2 .
=
2π
2πσ 2
θ̄−θi
1
n
X
n 2πσ 2
i=0
√
d
≤ δ max
dx
P
2
(99)
We obtain (97) by combining (98) and (99).
Similarly, (15) can be shown using Lemma 11 and Lemma
10 to establish that
n
n
σ X − (θ̄−θ2i )2
1X
.
e 2σ .
wi 1{wi ≤ȳ} = − √
n i=1
n 2π i=1
√
1
2σ 2
2
2πσ 2
e
x
− 2σ
2
=
σ2
δ
√
.
2πe
n Z
(yi −θi )2
σ 2 X θ̄+δ
1
√
e− 2σ2 dyi
2nδ i=0 θ̄−δ
2πσ 2
n
σ X − (θ̄−θ2i )2
= √
e 2σ
+ κn δ
n 2π i=0
!
≤ 2e−nk min(, ) .
(xi +εi )2
Therefore,
Using Lemma 10, we get, for any > 0,
n
n
1 X
σ X − (θ̄−θ2i )2
e 2σ
≥
wi 1{wi >θ̄} − √
n i=1
2π i=1
x2
i
e− 2σ2 − e−
(103)
1
where |κn | ≤ √2πe
. Using (103) in (102), and then the
obtained result in (101) and (100), the proof of the lemma
is complete.
F. Proof of Lemma 13
We have
E. Proof of Lemma 2
n
1
1X
2
2
ky − ν2 k =
(yi − a1 ) 1{yi >ȳ}
n
n i=1
From Lemma 12, we have, for any > 0,
P
n
n
X
1 σ2 X
1{|yi −ȳ|≤δ} −
1{|yi −θ̄|≤δ} ≥
n 2δ i=0
i=0
≤ 8e−
nk2 δ 2
σ4
.
(100)
Further, from Hoeffding’s inequality,
P
!
n
n
i
1 σ2 X
σ2 X h
1
−
E 1{|yi −θ̄|≤δ} ≥
n 2δ i=0 {|yi −θ̄|≤δ} 2δ i=0
≤ 2e−
8nδ 2 2
σ4
.
n
!
(101)
Also,
n
n
i σ2 X
σ2 X h
E 1{|yi −θ̄|≤δ} =
P yi − θ̄ ≤ δ
2δ i=0
2δ i=0
n Z
(yi −θi )2
σ 2 X θ̄+δ
1
√
=
e− 2σ2 dyi .
(102)
2δ i=0 θ̄−δ
2πσ 2
+
1X
2
(yi − a2 ) 1{yi ≤ȳ} .
n i=1
(104)
Now,
n
1X
2
(yi − a1 ) 1{yi >ȳ}
n i=1
" n
#
n
n
X
X
1 X 2
2
y 1{yi >ȳ} +
a1 1{yi >ȳ} − 2
a1 yi 1{yi >ȳ}
=
n i=1 i
i=1
i=1
" n
n
X
1 X 2
=
wi 1{wi >ȳ−θi } +
θi2 1{yi >ȳ}
n i=1
i=1
+2
n
X
θi wi 1{wi >ȳ−θi } +
i=1
−2
n
X
i=1
n
X
i=1
a1 yi 1{yi >ȳ}
a21 1{yi >ȳ}
22
and similarly,
1
n
n
X
2
(yi − a2 ) 1{yi ≤ȳ} =
i=1
+
+
n
X
i=1
n
X
θi2 1{yi ≤ȳ} + 2
a22 1{yi ≤ȳ} − 2
i=1
n
X
i=1
n
X
1
n
X
n
Employing the same steps as above, we get
!
n
n
1X
. c2 X c θ̄ − θi
2
a2
1{yi ≤ȳ} = 2
Q
+ κn δ + o(δ),
n i=1
n i=1
σ
(110)
!
n
n
2X
. 2c2 X c θ̄ − θi
yi 1{yi ≤ȳ} = 2
Q
a2
n i=1
n i=1
σ
wi2 1{wi ≤ȳ−θi }
i=1
θi wi 1{wi ≤ȳ−θi }
a2 yi 1{yi ≤ȳ} .
n
2c2 σ X − (θ̄−θ2i )2
√
−
+ κn δ + o(δ).
e 2σ
n 2π i=1
i=1
Therefore, from (104)
n
Therefore, using (106)-(111) in (105), we finally obtain
n
1 X 2 kθk2
2X
1
2
ky − ν2 k =
wi +
+
θi wi
n
n i=1
n
n i=1
+
+
1
n
n
X
n
X
i=1
a22 1{yi ≤ȳ}
i=1
Since
1
n
Pn
a21 1{yi >ȳ} − 2
n
X
a1 yi 1{yi >ȳ}
i=1
−2
n
X
a2 yi 1{yi ≤ȳ} .
(105)
i=1
2
∼ N 0, kθk
,
n2
!
n
2
1X
− n
θi wi ≥ ≤ e 2kθk2 /n .
n i=1
which completes the proof of the lemma.
(106)
(107)
where c1 , c2 are defined in (26). The concentration in (107)
follows from Lemmas 1 and 2, together with the results on
concentration of products and reciprocals in Lemmas 5 and
6, respectively. Further, using (107) and Lemma 5 again, we
.
obtain a21 = c21 + κn δ + o(δ) and
!
n
n
θ̄ − θi
1X
. c2 X
2
1{yi >ȳ} = 1
Q
+ κn δ + o(δ).
a1
n i=1
n i=1
σ
(108)
Similarly,
!
n
2X
a1
yi 1{yi >ȳ}
n i=1
!
n
n
θ̄ − θi
σ X − (θ̄−θ2i )2
. 2c1 X
=
θi Q
+√
e 2σ
n
σ
2π i=1
i=1
+ κn δ + o(δ)
!
n
n
X
θ̄ − θi
σ X − (θ̄−θ2i )2
. 2c1
=
c1
Q
+√
e 2σ
n
σ
2π i=1
i=1
+ κn δ + o(δ)
n
n
θ̄ − θi
2c1 σ X − (θ̄−θ2i )2
. 2c2 X
= 1
Q
+ √
e 2σ
n i=1
σ
n 2π i=1
+ κn δ + o(δ).
G. Proof of Lemma 14
The proof is along the same lines as that of Lemma 13. We
have
1
kθ − ν2 k2
n
" n
#
n
X
1 X
2
2
=
(θi − a1 ) 1{yi >ȳ} +
(θi − a2 ) 1{yi ≤ȳ}
n i=1
i=1
From Lemma 9, we have, for any > 0,
!
n
2
1X 2
2
P
wi − σ ≥ ≤ 2e−nk min(, )
n i=1
where k is a positive constant. Next, we claim that
.
.
a1 = c1 + κn δ + o(δ), a2 = c2 + κn δ + o(δ),
1
2
ky − ν2 k
n
n
n
c2 X
θ̄ − θi
c2 X c θ̄ − θi
. kθk2
=
+ σ2 − 1
Q
− 2
Q
n
n i=1
σ
n i=1
σ
!
X
n
(θ̄−θi )2
σ
2
√
e− 2σ2
(c1 − c2 )
−
n
2π
i=1
+ κn δ + o(δ),
i=1 θi wi
P
(111)
=
1
kθk2
+
n
n
+
n
X
n
X
a21 1{yi >ȳ} − 2
i=1
a1 θi 1{yi >ȳ}
i=1
a22 1{yi ≤ȳ} − 2
i=1
2
n
X
n
X
!
a2 θi 1{yi ≤ȳ}
i=1
n
n
c21 X
θ̄ − θi
2c1 X
θ̄ − θi
. kθk
=
+
Q
−
θi Q
n
n i=1
σ
n i=1
σ
!
n
n
X
X
1
θ̄ − θi
θ̄ − θi
2
c
c
+
c2
Q
− 2c2
θi Q
n
σ
σ
i=1
i=1
+ κn δ + o(δ)
n
n
c21 X
θ̄ − θi
c22 X c θ̄ − θi
. kθk2
−
Q
−
Q
=
n
n i=1
σ
n i=1
σ
+ κn δ + o(δ).
Acknowledgement
The authors thank R. Samworth for useful discussions on
James-Stein estimators, and A. Barron and an anonymous
referee for their comments which led to a much improved
manuscript.
R EFERENCES
(109)
[1] W. James and C. M. Stein, “Estimation with Quadratic Loss,” in Proc.
Fourth Berkeley Symp. Math. Stat. Probab., pp. 361–380, 1961.
23
[2] E. L. Lehmann and G. Casella, Theory of Point Estimation. Springer,
New York, NY, 1998.
[3] B. Efron and C. Morris, “Data Analysis Using Stein’s Estimator and Its
Generalizations,” J. Amer. Statist. Assoc., vol. 70, pp. 311–319, 1975.
[4] D. V. Lindley, “Discussion on Professor Stein’s Paper,” J. R. Stat. Soc.,
vol. 24, pp. 285–287, 1962.
[5] C. Stein, “Estimation of the mean of a multivariate normal distribution,”
Ann. Stat., vol. 9, pp. 1135–1151, 1981.
[6] B. Efron and C. Morris, “Stein’s estimation rule and its competitors—an
empirical Bayes approach,” J. Amer. Statist. Assoc., vol. 68, pp. 117–
130, 1973.
[7] E. George, “Minimax Multiple Shrinkage Estimation,” Ann. Stat.,
vol. 14, pp. 188–205, 1986.
[8] E. George, “Combining Minimax Shrinkage Estimators,” J. Amer. Statist.
Assoc., vol. 81, pp. 437–445, 1986.
[9] G. Leung and A. R. Barron, “Information theory and mixing leastsquares regressions,” IEEE Trans. Inf. Theory, vol. 52, no. 8, pp. 3396–
3410, 2006.
[10] G. Leung, Improving Regression through Model Mixing. PhD thesis,
Yale University, 2004.
[11] A. J. Baranchik, “Multiple Regression and Estimation of the Mean of a
Multivariate Normal Distribution,” Tech. Report, 51, Stanford University,
1964.
[12] P. Shao and W. E. Strawderman, “Improving on the James-Stein Positive
Part Estimator,” Ann. Stat., vol. 22, no. 3, pp. 1517–1538, 1994.
[13] Y. Maruyama and W. E. Strawderman, “Necessary conditions for
dominating the James-Stein estimator,” Ann. Inst. Stat. Math., vol. 57,
pp. 157–165, 2005.
[14] R. Beran, The unbearable transparency of Stein estimation. Nonparametrics and Robustness in Modern Statistical Inference and Time Series
Analysis: A Festschrift in honor of Professor Jana Jurečková, pp. 25–34.
Institute of Mathematical Statistics, 2010.
[15] I. M. Johnstone, Gaussian estimation: Sequence and wavelet models.
[Online]: http://statweb.stanford.edu/∼imj/GE09-08-15.pdf, 2015.
[16] S. Boucheron, G. Lugosi, and P. Massart, Concentration Inequalities: A
Nonasymptotic Theory of Independence. Oxford University Press, 2013.
[17] L. Wasserman, All of Statistics: A Concise Course in Statistical Inference. Springer, New York, NY, 2nd ed., 2005.
[18] D. Williams, Probability with Martingales. Cambridge University Press,
1991.
[19] J. T. Hwang and G. Casella, “Minimax confidence sets for the mean
of a multivariate normal distribution,” The Annals of Statistics, vol. 10,
no. 3, pp. 868–881, 1982.
[20] R. Samworth, “Small confidence sets for the mean of a spherically
symmetric distribution,” Journal of the Royal Statistical Society: Series
B (Statistical Methodology), vol. 67, no. 3, pp. 343–361, 2005.
[21] M. E. Bock, “Minimax estimators of the mean of a multivariate normal
distribution,” Ann. Stat., vol. 3, no. 1, pp. 209–218, 1975.
[22] J. H. Manton, V. Krishnamurthy, and H. V. Poor, “James-Stein State
Filtering Algorithms,” IEEE Trans. Sig. Process., vol. 46, pp. 2431–
2447, Sep. 1998.
[23] Z. Ben-Haim and Y. C. Eldar, “Blind Minimax Estimation,” IEEE Trans.
Inf. Theory, vol. 53, pp. 3145–3157, Sep. 2007.
[24] F. Chung and L. Lu, “Concentration inequalities and martingale inequalities: a survey,” Internet Mathematics, vol. 3, no. 1, pp. 79–127, 2006.
| 7 |
On PROGRESS Operation
How to Make Object-Oriented Programming System More Object-Oriented (DRAFT)
Evgeniy Grigoriev
www.RxO project.com
[email protected]
Keywords:
Object-oriented paradigm, Von Neumann architecture, linear addressable memory, persistent object,
modifiable object, relational data model, RxO, static typing, inheritance, progress operation, role, F1.1,
D3.3, H2.3, H2.4
Abstract:
A system, which implements persistent objects, has to provide different opportunities to change the objects
in arbitrary ways during their existence. A traditional realization of OO paradigm in modern programming
systems has fundamental drawbacks which complicate an implementation of persistent modifiable objects
considerably. There is alternative realization that does not have these drawbacks. In the article the
PROGRESS operation is offered, which modify existing object within an existing inheritance hierarchy.
1
INTRODUCTION
OO programming paradigm (Booch,1991) claims to
be the best and most natural way to model the real
world in information systems. Existing OO
programming tools are the result of years of von
Neumann
machines
programming
systems
evolution. Organization of target ALM-machines
had and has a significant impact on abilities and
features of existing OO languages. An important
feature of the machines is the using of addressable
linear memory (further, ALM). As shown in
(Grigoriev, 2012), core features of the addressable
linear memory make an implementation of persistent
modifiable objects a very complex task.
An obvious drawback is that the AML itself is
not persistent usually. But it's not the only negative
feature. Let us consider other ones. First feature is
that both links between objects and internal
structures of the objects are mapped into a single
address space. Second one is that unidirectional
address pointers are the only possible ways to link
memory areas. These features together are critical
when modifiable structures are tried to be
implemented. A memory area allocated to an object
data is limited by the areas allocated to other objects.
An attempt to modify an object structure, which
increases the corresponding memory area, requires
reallocation of the area in the memory. All address
pointers referencing to the reallocated memory area
have to be changed to keep links existing in the
system. But there is no a way to track existing
address pointers because of their unidirectionality. If
a memory area is reallocated, the links will be lost.
Of course, all these difficulties can be got round
by sophisticated programming. A huge amount of
very different technologies, approaches, patterns,
environments etc. exist, which try to implement
persistent modifiable objects. A result of such
programming generally looks like an attempt to
avoid or to hide some negative features of used
ALM-machines.
Alternative way to avoid totally these features is
to avoid ALM-machines themselves.
2
OTHER MACHINES
ALM-machines are not the only target machines for
OO programming systems. In (Grigoriev, 2012) an
object-oriented translator is described which uses a
relational database as a target machine (R-machine).
This possibility is based on the fact, that today
relational DBMS fully correspond to the concept of
the target machine (Pratt and Zelkowitz, 2001). They
are programmable data systems which can create,
save, and execute a command sequences on
relational variables and on their values. Of course,
they are virtual machines, but from formal
standpoint this fact has no matter. In comparison
with the ALM-machines, the R-machines have the
following features:
associative
principles
of
memory
organization;
persistent memory;
ability to manipulate with groups of values by
means of set operations;
formal foundation (relational data model).
The proposed approach "OO translator for Rmachine" was used to create "RxO system"
prototype (Grigoriev, 2013-1), which fully combines
the core properties of object-oriented languages and
relational DBMS. It is based on a formal possibility
to convert the description of complex objectoriented data structures and operations on these data,
into a description of relational structures and
operations on last ones (Grigoriev, 2013-2),.
Data in this system are described as a set of
persistent objects and represented as an orthogonal
set of relations. The system is managed by
commands of non-procedural language. The
commands are used to create and to change classes,
to create persistent objects and change their state, to
get data about the objects state (inc. by means of adhoc queries), to execute object methods, to
manipulate with groups of the objects.
The above features of the target R-machines
affect properties of the source OO language
appreciably, giving the possibilities that are not
present in the traditional object-oriented languages.
Associative memory of R-target machine has no
principal
drawbacks
that
prevent
easy
implementation of persistent modifiable objects.
Partially, this ability is demonstrated in the "RxO
system" prototype; new classes can be added on-therun into a set of existing ones in working models
(inc. by mean of multiply inheritance),
implementations of attributes and methods can be
changed in existing objects
Fundamentally, object interfaces can be changed
too in different ways. This feature can be realized in
two different operations, which is going to be
implemented into next version of RxO system.
The first operation is used to change a class
specification class, for example, to add new attribute
or method. This operation affects all objects of the
class by changing their interface. Its meaning is
clear, so we will not dwell on it.
The second operation leaves all existing
specifications unchanged but puts existing object
into other group of objects defined by a class or by a
role. It means that interface and implementation of
the objects is changed as it is defined for group
objects, to which the object will belong after
operation.
Of course, the ability to put any existing object in
any class has a little sense. In addition, such ability
will evidently contradict to a static typing that is
implemented in RxO system. However, limited
versions of such operation exist, which do not
contradict to the static typing. Interestingly, it is
precisely the limitation, which makes this operation
meaningful as a way to create more perfect
information model of object domains.
3
OBJECTS IN PROGRESS
Consider the information system, which is an
information model of some firm. The firm staff
consists of employees and some of the employees
are managers. Data in the system are describes as a
set of persistent objects.
3.1 Progress in Inheritance Hierarchy
Let's take the classic example used often to
demonstrate the inheritance principle, where
Manager class is a subclass of Employee class.
CREATE CLASS Employee
{
…
}
CREATE CLASS Manager
EXTENDS Employee
{
…
}
Accordingly, objects of Employee class exist in
the system among others objects, and some of them
belong to Manager class. All these objects can be
referenced from other objects of the system, and the
inherited class existence has no meaning in some
cases of the referencing; for example, Library
object referencing all the Employee objects equally
to track issued books, even if some of the Employee
objects are Manager objects.
The world is inconstant and a time has come
when some employee is promoted to a manager
position. It's important that the employee has not
become a different person. For example, its
relationship with the firm library has not changed in
this moment, but the promotion is very important for
HR department. Remaining the same, the employee
has acquired a new quality, entered into a new
group. It has progressed.
The simplest way to reflect this situation in the
information system is an operation that shifts an
object of Employee class up within the inheritance
hierarchy, which has already been defined. Let us
call this operation as PROGRESS. This operation
keeps both OID of applied object and its interface,
described in the parent class, unchanged. Therefore
all existing references to the objects stay valid. At
the same time, the object gets new interface
elements as they are defined in the child class. So,
after the object was progressed, it can be used by
other objects of the system as an object of the
derived class. In RxO system, which implements a
principle "class is a stored set of objects", the object
also will be available as an element of the child
class.
The PROGRESS variant of putting of object in
other class doesn't reduce the system reliability,
which is achieved by static typing, because the old
object type remains unchanged by this operation.
The implementation of the interface, defined for
objects of the parent class, can be redefined in the
child class. The PROGRESS operation can be used
with a special method (re-constructor) to transform
the object to make it corresponding to the new
implementation. This re-constructor can take
parameters to add new data into the object during the
transformation. So, the operation syntax can look
like
PROGRESS someEmployee
TO Manager(parameter, ….)
A combination of the PROGRESS operation and
an ability to inherit on-the-run classes of an
information models provides a simple and logical
way to develop the model, in order to reflect
changes of an object domain. If it's necessary, a class
can be inherited by a new-created class, which has
new specification elements and/or changes
implementations of existing elements. Then, objects
of the class can be progressed to the new child class.
3.2 Progress in Roles
Other interpretation of "Employee to Manager"
example can be offered, where "a manager" is
considered just as one of roles, which are possible
for an employee.
A role is applied to a class. Like a class, a role
can have both attributes and methods which have to
be implemented before role can be used. Also it can
have special method (role constructor) to build a role
data from the applied object or/and from taken
parameters. In RxO system the role defines a set of
objects which the role has been applied to. Speaking
generally, a role definition is very similar to
inherited class definition. The only difference is that
a role cannot re-implement an applied class.
Roles in RxO system look similar to class
interfaces available in some traditional OO
languages like Java and C#. As opposed to the
interfaces, roles can have attributes
An advantage of the roles is clear when an object
can have different independent roles. For example
some employee can be a manager for other
employees or/and a mentor for new-coming
employee. These two roles are independent.
CREATE CLASS Employee
{
…
}
CREATE ROLE Manager FOR Employee
{
…//the same as in inherited class
}
CREATE ROLE Mentor FOR Employee
{
…
}
After the two roles was created and
implemented, some Employee object can be
progressed to both these roles.
PROGRESS someEmployee
TO Manager(parameter, ….)
PROGRESS someEmployee
TO Mentor(parameter, ….)
Now the object can be used in the both new roles
by other objects of the system. In RxO system,
which implements a principle "class is a stored set of
objects", the object also will be available as an
element of both object groups defined by the roles.
This situation can be hardly described by usual
class inheritance because two different child classes
would define two non-overlapping subsets of
objects. Subsets, defined by independent roles, can
overlap.
A role can be applied to other role. For example
Top role can be defined for Manager one.
CREATE ROLE Top FOR Manager
{
…
}
Now the Employee object having Manager role
can get new Top role.
PROGRESS someManager
TO Top(parameter, ….)
At that all other independent roles will stay
unchanged.
A combination of the PROGRESS operation and
an ability to create on-the-run new roles provides
other way to develop the model. If it's necessary, a
new role can be created for existing class or role.
Then, existing objects can be progressed to the new
role.
4
CONCLUSIONS
Persistent objects are meaningful only if they are
truly modifiable, because of modeled object domain
inconsistence. Here the word "modifiable" has the
widest sense which includes at least the following
items:
an ability to change the state of the object;
an ability to change on-the-run the class
implementation, that changes behavior of all
its objects;
an ability to change on-the-run the class
specification, that changes interface of all its
objects;
an ability to put an existing object in child
class or in applied role, that can change the
state and the behavior of the object and add
new elements to its interface.
In systems which claims to be the best way to
model the real world, all these abilities have to be
equally and easily accessible. But most of traditional
OO languages have only the first ability
implemented as a basic one. Other three abilities
require a non-trivial programming or are unavailable
at all.
As an example, obvious PROGRESS operation
can be hardly implemented in existing OO
programming systems for ALM-machine. Perhaps,
this is why such operations are practically unknown
(similar operations exist in very rare classless
languages, for example, NewtonScript). This
situation clearly demonstrates an impact of
architecture of habitual target ALM-machines on
OO languages and also on understanding of OO
paradigm itself.
A reason of the described drawback is not the
OO paradigm itself but only its usual
implementation. Other implementations of the
paradigm can exists which do not have such
drawbacks. An example of such implementation is
the "RxO system" prototype.
REFERENCES
Booch , G ., Object-oriented Analysis and Design with
Applications. 1991 . Benjamin-Cummings Publishing.
Grigoriev, E. Impedance mismatch is not an "Objects vs.
Relations" problem, 2012, ODBMS.org
http://odbms.org/download/EvgeniyGrigoriev.pdf
Grigoriev, E. RxO DBMS prototype, 2013(1)
Youtube.com, http://youtu.be/K9opP7-vh18
Grigoriev, E. Object-Oriented Translation for
Programmable Relational System, 2013(2) Arxiv.org,
http://arxiv.org/abs/1304.2184
Pratt,T.W. and Zelkovitz,M.V., Programming Languages:
Design and Implementation. 2001, London: Prentice
Hall, Inc.
NewtonScript 2013 Wikipedia
http://en.wikipedia.org/wiki/NewtonScript
| 6 |
SUBMITTED TO IEEE-SPM, APRIL 2017
1
Generative Adversarial Networks: An Overview
arXiv:1710.07035v1 [] 19 Oct 2017
Antonia Creswell§ , Tom White¶ ,
Vincent Dumoulin‡ , Kai Arulkumaran§ , Biswa Sengupta†§ and Anil A Bharath§ , Member IEEE
§ BICV Group, Dept. of Bioengineering, Imperial College London
¶ School of Design, Victoria University of Wellington, New Zealand
‡ MILA, University of Montreal, Montreal H3T 1N8
† Cortexica Vision Systems Ltd., London, United Kingdom
Abstract—Generative adversarial networks (GANs) provide a way to learn deep representations without extensively
annotated training data. They achieve this through deriving
backpropagation signals through a competitive process involving a pair of networks. The representations that can be
learned by GANs may be used in a variety of applications,
including image synthesis, semantic image editing, style
transfer, image super-resolution and classification. The aim
of this review paper is to provide an overview of GANs
for the signal processing community, drawing on familiar
analogies and concepts where possible. In addition to
identifying different methods for training and constructing
GANs, we also point to remaining challenges in their theory
and application.
Index Terms—neural networks, unsupervised learning,
semi-supervised learning.
I. I NTRODUCTION
ENERATIVE adversarial networks (GANs) are an
emerging technique for both semi-supervised and
unsupervised learning. They achieve this through implicitly
modelling high-dimensional distributions of data. Proposed
in 2014 [1], they can be characterized by training a pair
of networks in competition with each other. A common
analogy, apt for visual data, is to think of one network
as an art forger, and the other as an art expert. The
forger, known in the GAN literature as the generator, G ,
creates forgeries, with the aim of making realistic images.
The expert, known as the discriminator, D , receives both
forgeries and real (authentic) images, and aims to tell them
apart (see Fig. 1). Both are trained simultaneously, and in
competition with each other.
Crucially, the generator has no direct access to real
images - the only way it learns is through its interaction
with the discriminator. The discriminator has access to
both the synthetic samples and samples drawn from the
stack of real images. The error signal to the discriminator
is provided through the simple ground truth of knowing
whether the image came from the real stack or from the
generator. The same error signal, via the discriminator, can
G
be used to train the generator, leading it towards being
able to produce forgeries of better quality.
The networks that represent the generator and discriminator are typically implemented by multi-layer networks
consisting of convolutional and/or fully-connected layers.
The generator and discriminator networks must be differentiable, though it is not necessary for them to be
directly invertible. If one considers the generator network
as mapping from some representation space, called a
latent space, to the space of the data (we shall focus
on images), then we may express this more formally as
G : G(z) → R|x| , where z ∈ R|z| is a sample from the
latent space, x ∈ R|x| is an image and | · | denotes the
number of dimensions.
In a basic GAN, the discriminator network, D , may
be similarly characterized as a function that maps from
image data to a probability that the image is from the
real data distribution, rather than the generator distribution:
D : D(x) → (0, 1). For a fixed generator, G , the
discriminator, D , may be trained to classify images as
either being from the training data (real, close to 1) or from
a fixed generator (fake, close to 0). When the discriminator
is optimal, it may be frozen, and the generator, G , may
continue to be trained so as to lower the accuracy of the
discriminator. If the generator distribution is able to match
the real data distribution perfectly then the discriminator
will be maximally confused, predicting 0.5 for all inputs. In
practice, the discriminator might not be trained until it is
optimal; we explore the training process in more depth in
Section IV.
On top of the interesting academic problems related to
training and constructing GANs, the motivations behind
training GANs may not necessarily be the generator or
the discriminator per se: the representations embodied by
either of the pair of networks can be used in a variety of
subsequent tasks. We explore the applications of these
representations in Section VI.
SUBMITTED TO IEEE-SPM, APRIL 2017
2
Fig. 1. In this figure, the two models which are learned during the training process for a GAN are the discriminator (D ) and the generator (G ).
These are typically implemented with neural networks, but they could be implemented by any form of differentiable system that maps data from
one space to another; see text for details.
II. P RELIMINARIES
A. Terminology
Generative models learn to capture the statistical distribution of training data, allowing us to synthesize samples
from the learned distribution. On top of synthesizing novel
data samples, which may be used for downstream tasks
such as semantic image editing [2], data augmentation [3]
and style transfer [4], we are also interested in using the
representations that such models learn for tasks such as
classification [5] and image retrieval [6].
We occasionally refer to fully connected and convolutional layers of deep networks; these are generalizations of
perceptrons or of spatial filter banks with non-linear postprocessing. In all cases, the network weights are learned
through backpropagation [7].
As with all deep learning systems, training requires
that we have some clear objective function. Following the
usual notation, we use JG (ΘG ; ΘD ) and JD (ΘD ; ΘG )
to refer to the objective functions of the generator and
discriminator, respectively. The choice of notation reminds
us that the two objective functions are in a sense codependent on the evolving parameter sets ΘG and ΘD
of the networks as they are iteratively updated. We shall
explore this further in Section IV. Finally, note that multidimensional gradients are used in the updates; we use
∇ΘG to denote the gradient operator with respect to the
weights of the generator parameters, and ∇ΘD to denote
the gradient operator with respect to the weights of the
discriminator. The expected gradients are indicated by the
notation E∇• .
C. Capturing Data Distributions
B. Notation
The GAN literature generally deals with multidimensional vectors, and often represents vectors in a
probability space by italics (e.g. latent space is z ). In
the field of signal processing, it is common to represent
vectors by bold lowercase symbols, and we adopt this
convention in order to emphasize the multi-dimensional
nature of variables. Accordingly, we will commonly refer to
pdata (x) as representing the probability density function
over a random vector x which lies in R|x| . We will use
pg (x) to denote the distribution of the vectors produced by
the generator network of the GAN. We use the calligraphic
symbols G and D to denote the generator and discriminator networks, respectively. Both networks have sets
of parameters (weights), ΘD and ΘG , that are learned
through optimization, during training.
A central problem of signal processing and statistics
is that of density estimation: obtaining a representation –
implicit or explicit, parametric or non-parametric – of data
in the real world. This is the key motivation behind GANs.
In the GAN literature, the term data generating distribution
is often used to refer to the underlying probability density
or probability mass function of observation data. GANs
learn through implicitly computing some sort of similarity
between the distribution of a candidate model and the
distribution corresponding to real data.
Why bother with density estimation at all? The answer
lies at the heart of – arguably – many problems of visual
inference, including image categorization, visual object
detection and recognition, object tracking and object registration. In principle, through Bayes’ Theorem, all inference
problems of computer vision can be addressed through
SUBMITTED TO IEEE-SPM, APRIL 2017
estimating conditional density functions, possibly indirectly
in the form of a model which learns the joint distribution of
variables of interest and the observed data. The difficulty
we face is that likelihood functions for high-dimensional,
real-world image data are difficult to construct. Whilst
GANs don’t explicitly provide a way of evaluating density
functions, for a generator-discriminator pair of suitable
capacity, the generator implicitly captures the distribution
of the data.
D. Related Work
One may view the principles of generative models by
making comparisons with standard techniques in signal
processing and data analysis. For example, signal processing makes wide use of the idea of representing a
signal as the weighted combination of basis functions.
Fixed basis functions underlie standard techniques such
as Fourier-based and wavelet representations. Data-driven
approaches to constructing basis functions can be traced
back to the Hotelling [8] transform, rooted in Pearson’s
observation that principal components minimize a reconstruction error according to a minimum squared error criterion. Despite its wide use, standard Principal Components
Analysis (PCA) does not have an overt statistical model
for the observed data, though it has been shown that the
bases of PCA may be derived as a maximum likelihood
parameter estimation problem.
Despite wide adoption, PCA itself is limited – the basis
functions emerge as the eigenvectors of the covariance
matrix over observations of the input data, and the mapping from the representation space back to signal or image
space is linear. So, we have both a shallow and a linear
mapping, limiting the complexity of the model, and hence
of the data, that can be represented.
Independent Components Analysis (ICA) provides another level up in sophistication, in which the signal components no longer need to be orthogonal; the mixing
coefficients used to blend components together to construct examples of data are merely considered to be
statistically independent. ICA has various formulations that
differ in their objective functions used during estimating signal components, or in the generative model that
expresses how signals or images are generated from
those components. A recent innovation explored through
ICA is noise contrastive estimation (NCE); this may be
seen as approaching the spirit of GANs [9]: the objective
function for learning independent components compares a
statistic applied to noise with that produced by a candidate
generative model [10]. The original NCE approach did not
include updates to the generator.
What other comparisons can be made between GANs
and the standard tools of signal processing? For PCA,
3
ICA, Fourier and wavelet representations, the latent space
of GANs is, by analogy, the coefficient space of what we
commonly refer to as transform space. What sets GANs
apart from these standard tools of signal processing is
the level of complexity of the models that map vectors
from latent space to image space. Because the generator
networks contain non-linearities, and can be of almost
arbitrary depth, this mapping – as with many other deep
learning approaches – can be extraordinarily complex.
With regard to deep image-based models, modern
approaches to generative image modelling can be grouped
into explicit density models and implicit density models.
Explicit density models are either tractable (change of
variables models, autoregressive models) or intractable
(directed models trained with variational inference, undirected models trained using Markov chains). Implicit density models capture the statistical distribution of the data
through a generative process which makes use of either
ancestral sampling [11] or Markov chain-based sampling.
GANs fall into the directed implicit model category. A more
detailed overview and relevant papers can be found in Ian
Goodfellow’s NIPS 2016 tutorial [12].
III. GAN A RCHITECTURES
A. Fully Connected GANs
The first GAN architectures used fully connected neural
networks for both the generator and discriminator [1]. This
type of architecture was applied to relatively simple image
datasets, namely MNIST (hand written digits), CIFAR-10
(natural images) and the Toronto Face Dataset (TFD).
B. Convolutional GANs
Going from fully-connected to convolutional neural networks is a natural extension, given that CNNs are extremely well suited to image data. Early experiments conducted on CIFAR-10 suggested that it was more difficult
to train generator and discriminator networks using CNNs
with the same level of capacity and representational power
as the ones used for supervised learning.
The Laplacian pyramid of adversarial networks (LAPGAN) [13] offered one solution to this problem, by decomposing the generation process using multiple scales:
a ground truth image is itself decomposed into a Laplacian
pyramid, and a conditional, convolutional GAN is trained
to produce each layer given the one above.
Additionally, Radford et al. [5] proposed a family of network architectures called DCGAN (for “deep convolutional
GAN”) which allows training a pair of deep convolutional
generator and discriminator networks. DCGANs make use
of strided and fractionally-strided convolutions which allow
the spatial down-sampling and up-sampling operators to
SUBMITTED TO IEEE-SPM, APRIL 2017
4
Fig. 2. During GAN training, the generator is encouraged to produce a distribution of samples, pg (x) to match that of real data, pdata (x). For
an appropriately parametrized and trained GAN, these distributions will be nearly identical. The representations embodied by GANs are captured
in the learned parameters (weights) of the generator and discriminator networks.
be learned during training. These operators handle the
change in sampling rates and locations, a key requirement in mapping from image space to possibly lowerdimensional latent space, and from image space to a
discriminator. Further details of the DCGAN architecture
and training are presented in Section IV-B.
As an extension to synthesizing images in 2D, Wu et
al. [14] presented GANs that were able to synthesize 3D
data samples using volumetric convolutions. Wu et al. [14]
synthesized novel objects including chairs, table and cars;
in addition, they also presented a method to map from 2D
image images to 3D versions of objects portrayed in those
images.
C. Conditional GANs
Mirza et al. [15] extended the (2D) GAN framework to
the conditional setting by making both the generator and
the discriminator networks class-conditional (Fig. 3). Conditional GANs have the advantage of being able to provide
better representations for multi-modal data generation. A
parallel can be drawn between conditional GANs and
InfoGAN [16], which decomposes the noise source into
an incompressible source and a “latent code”, attempting
to discover latent factors of variation by maximizing the
mutual information between the latent code and the generator’s output. This latent code can be used to discover
object classes in a purely unsupervised fashion, although
it is not strictly necessary that the latent code be categorical. The representations learned by InfoGAN appear
to be semantically meaningful, dealing with complex intertangled factors in image appearance, including variations
in pose, lighting and emotional content of facial images
[16].
D. GANs with Inference Models
In their original formulation, GANs lacked a way to map
a given observation, x, to a vector in latent space – in the
GAN literature, this is often referred to as an inference
mechanism. Several techniques have been proposed to
invert the generator of pre-trained GANs [17], [18]. The
independently proposed Adversarially Learned Inference
(ALI) [19] and Bidirectional GANs [20] provide simple but
effective extensions, introducing an inference network in
which the discriminators examine joint (data, latent) pairs.
In this formulation, the generator consists of two networks: the “encoder” (inference network) and the “decoder”. They are jointly trained to fool the discriminator.
The discriminator itself receives pairs of (x, z) vectors
(see Fig. 4), and has to determine which pair constitutes
a genuine tuple consisting of real image sample and its
encoding, or a fake image sample and the corresponding
latent-space input to the generator.
Ideally, in an encoding-decoding model the output,
referred to as a reconstruction, should be similar to the
input. Typically, the fidelity of reconstructed data samples
synthesised using an ALI/BiGAN are poor. The fidelity of
samples may be improved with an additional adversarial
cost on the distribution of data samples and their reconstructions [21].
E. Adversarial Autoencoders (AAE)
Autoencoders are networks, composed of an “encoder”
and “decoder”, that learn to map data to an internal
latent representation and out again. That is, they learn a
deterministic mapping (via the encoder) from a data space
– e.g., images – into a latent or representation space, and
a mapping (via the decoder) from the latent space back
to data space. The composition of these two mappings
results in a “reconstruction”, and the two mappings are
SUBMITTED TO IEEE-SPM, APRIL 2017
5
Fig. 3. Left, the Conditional GAN, proposed by Mirza et al. [15] performs class-conditional image synthesis; the discriminator performs classconditional discrimination of real from fake images. The InfoGAN (right) [16], on the other hand, has a discriminator network that also estimates
the class label.
Fig. 4. The ALI/BiGAN structure [20], [19] consists of three networks. One of these serves as a discriminator, another maps the noise vectors
from latent space to image space (decoder, depicted as a generator G in the figure), with the final network (encoder, depicted as E ) mapping
from image space to latent space.
trained such that a reconstructed image is as close as
possible to the original.
Autoencoders are reminiscent of the perfectreconstruction filter banks that are widely used in
image and signal processing. However, autoencoders
generally learn non-linear mappings in both directions.
Further, when implemented with deep networks, the
possible architectures that can be used to implement
autoencoders are remarkably flexible. Training can
be unsupervised, with backpropagation being applied
between the reconstructed image and the original in
order to learn the parameters of both the encoder and
the decoder.
As suggested earlier, one often wants the latent space
to have a useful organization. Additionally, one may want to
perform feedforward, ancestral sampling [11] from an autoencoder. Adversarial training provides a route to achieve
these two goals. Specifically, adversarial training may be
applied between the latent space and a desired prior
distribution on the latent space (latent-space GAN). This
results in a combined loss function [22] that reflects both
the reconstruction error and a measure of how different
the distribution of the prior is from that produced by a
candidate encoding network. This approach is akin to a
variational autoencoder (VAE) [23] for which the latent-space GAN plays the role of the KL-divergence term of
the loss function.
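The following sketch illustrates this combined objective: a reconstruction term plus an adversarial term that pushes the encoder's output distribution towards a chosen prior. The network sizes, the MSE reconstruction loss and the 0.1 weighting are assumptions for illustration, not the choices made in [22].

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

DATA_DIM, LATENT_DIM = 784, 16                       # illustrative sizes
bce = nn.BCEWithLogitsLoss()

encoder = nn.Sequential(nn.Linear(DATA_DIM, 256), nn.ReLU(),
                        nn.Linear(256, LATENT_DIM))
decoder = nn.Sequential(nn.Linear(LATENT_DIM, 256), nn.ReLU(),
                        nn.Linear(256, DATA_DIM))
latent_disc = nn.Sequential(nn.Linear(LATENT_DIM, 64), nn.ReLU(),
                            nn.Linear(64, 1))

x = torch.rand(32, DATA_DIM)                         # stand-in data batch
z_fake = encoder(x)                                  # encoded ("posterior") samples
z_real = torch.randn(32, LATENT_DIM)                 # samples from the desired prior

# Latent discriminator: tell prior samples ("real") from encodings ("fake").
d_loss = bce(latent_disc(z_real), torch.ones(32, 1)) + \
         bce(latent_disc(z_fake.detach()), torch.zeros(32, 1))

# Autoencoder objective: reconstruction plus fooling the latent discriminator,
# which plays the role of the KL term in a VAE; the 0.1 weight is arbitrary.
recon_loss = F.mse_loss(decoder(z_fake), x)
adv_loss = bce(latent_disc(z_fake), torch.ones(32, 1))
ae_loss = recon_loss + 0.1 * adv_loss
```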
Mescheder et al. [24] unified variational autoencoders
with adversarial training in the form of the Adversarial
Variational Bayes (AVB) framework. Similar ideas were
presented in Ian Goodfellow’s NIPS 2016 tutorial [12]. AVB
tries to optimise the same criterion as that of variational
autoencoders, but uses an adversarial training objective
rather than the Kullback-Leibler divergence.
IV. TRAINING GANS
A. Introduction
Training of GANs involves both finding the parameters
of a discriminator that maximize its classification accuracy,
and finding the parameters of a generator which maximally
confuse the discriminator. This training process is summarized in Fig. 5.
The cost of training is evaluated using a value function,
V (G, D) that depends on both the generator and the
discriminator. The training involves solving:
max_D min_G V(G, D)

where

V(G, D) = E_{p_data(x)}[log D(x)] + E_{p_g(x)}[log(1 − D(x))]
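For illustration, this value function can be estimated on mini-batches and optimized by alternating gradient steps, as in the rough sketch below. It uses the binary cross-entropy form of both terms and the saturating (minimax) generator objective discussed later in this section; the MLP networks, optimizer settings and the random "real" batch are stand-ins.

```python
import torch
import torch.nn as nn

# Minimal sketch of one alternating update; all networks and data are stand-ins.
LATENT_DIM, DATA_DIM, BATCH = 64, 784, 32
G = nn.Sequential(nn.Linear(LATENT_DIM, 256), nn.ReLU(), nn.Linear(256, DATA_DIM))
D = nn.Sequential(nn.Linear(DATA_DIM, 256), nn.LeakyReLU(0.2), nn.Linear(256, 1))
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()   # -[y log s + (1 - y) log(1 - s)], s = sigmoid(logit)

x_real = torch.rand(BATCH, DATA_DIM)       # stand-in for a batch from p_data
z = torch.randn(BATCH, LATENT_DIM)

# Discriminator step: ascend V(G, D), i.e. minimise the BCE of both terms.
d_loss = bce(D(x_real), torch.ones(BATCH, 1)) + \
         bce(D(G(z).detach()), torch.zeros(BATCH, 1))
opt_d.zero_grad(); d_loss.backward(); opt_d.step()

# Generator step (saturating, minimax form): descend E_{p_g}[log(1 - D(G(z)))],
# which equals minus the BCE against the "fake" label.
g_loss = -bce(D(G(z)), torch.zeros(BATCH, 1))
opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```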
During training, the parameters of one model are updated, while the parameters of the other are fixed. Goodfellow et al. [1] show that for a fixed generator there is a unique optimal discriminator, D*(x) = pdata(x) / (pdata(x) + pg(x)). They also show that the generator, G, is optimal when pg(x) = pdata(x), which is equivalent to the optimal discriminator predicting 0.5 for all samples drawn from x. In other words, the generator is optimal when the discriminator, D, is maximally confused and cannot distinguish real samples from fake ones.

Ideally, the discriminator is trained until optimal with respect to the current generator; then, the generator is again updated. However, in practice the discriminator might not be trained until optimal, but rather may only be trained for a small number of iterations, and the generator is updated simultaneously with the discriminator. Further, an alternative, non-saturating training criterion is typically used for the generator, using maxG log D(G(z)) rather than minG log(1 − D(G(z))).

Despite the theoretical existence of unique solutions, GAN training is challenging and often unstable for several reasons [5], [25], [26]. One approach to improving GAN training is to assess the empirical “symptoms” that might be experienced during training. These symptoms include:
• Difficulties in getting the pair of models to converge [5];
• The generative model “collapsing”, to generate very similar samples for different inputs [25];
• The discriminator loss converging quickly to zero [26], providing no reliable path for gradient updates to the generator.

Several authors suggested heuristic approaches to address these issues [1], [25]; these are discussed in Section IV-B.

Early attempts to explain why GAN training is unstable were proposed by Goodfellow and Salimans et al. [1], [25], who observed that the gradient descent methods typically used for updating both the parameters of the generator and discriminator are inappropriate when the solution to the optimization problem posed by GAN training actually constitutes a saddle point. Salimans et al. provided a simple example which shows this [25]. However, stochastic gradient descent is often used to update neural networks, and there are well developed machine learning programming environments that make it easy to construct and update networks using stochastic gradient descent.

Although an early theoretical treatment [1] showed that the generator is optimal when pg(x) = pdata(x) – a very neat result with a strong underlying intuition – the real data samples reside on a manifold which sits in a high-dimensional space of possible representations. For instance, if colour image samples are of size N × N × 3 with pixel values in [0, R+]^3, the space that may be represented – which we can call X – is of dimensionality 3N^2, with each dimension taking values between 0 and the maximum measurable pixel intensity. The data samples in the support of pdata, however, constitute the manifold of the real data associated with some particular problem, typically occupying a very small part of the total space, X. Similarly, the samples produced by the generator should also occupy only a small portion of X.

Arjovsky et al. [26] showed that the supports of pg(x) and pdata(x) lie in a lower dimensional space than that corresponding to X. The consequence of this is that pg(x) and pdata(x) may have no overlap, and so there exists a nearly trivial discriminator that is capable of distinguishing real samples, x ∼ pdata(x), from fake samples, x ∼ pg(x), with 100% accuracy. In this case, the discriminator error quickly converges to zero. Parameters of the generator may only be updated via the discriminator, so when this happens, the gradients used for updating parameters of the generator also converge to zero and so may no longer be useful for updates to the generator. Arjovsky et al.’s [26] explanations account for several of the symptoms related to GAN training.

Goodfellow et al. [1] also showed that when D is optimal, training G is equivalent to minimizing the Jensen-Shannon divergence between pg(x) and pdata(x). If D is not optimal, the update may be less meaningful, or inaccurate. This theoretical insight has motivated research into cost functions based on alternative distances. Several of these are explored in Section IV-C.
Fig. 5. The main loop of GAN training. Novel data samples, x0 , may be drawn by passing random samples, z through the generator network.
The gradient of the discriminator may be updated k times before updating the generator.
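Before turning to specific training tricks, the following sketch contrasts the saturating and non-saturating generator objectives described above, written as losses on the discriminator's logits for generated samples. It is an illustrative fragment, not code from [1].

```python
import torch
import torch.nn.functional as F

def generator_loss(fake_logits, non_saturating=True):
    """Two generator objectives, expressed as losses on D(G(z)) logits.

    Saturating (minimax) form: minimise E[log(1 - D(G(z)))]; its gradient
    vanishes when the discriminator confidently rejects the fakes.
    Non-saturating form: minimise -E[log D(G(z))], which keeps the gradient
    large for exactly those rejected samples.
    """
    if non_saturating:
        return F.binary_cross_entropy_with_logits(
            fake_logits, torch.ones_like(fake_logits))   # -E[log D(G(z))]
    return -F.binary_cross_entropy_with_logits(
        fake_logits, torch.zeros_like(fake_logits))      #  E[log(1 - D(G(z)))]

# A confident discriminator assigns large negative logits to fake samples:
logits = torch.full((8, 1), -6.0, requires_grad=True)
generator_loss(logits, non_saturating=True).backward()
print(logits.grad.abs().mean())   # much larger than under the saturating form
```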
B. Training Tricks
One of the first major improvements in the training of
GANs for generating images was the DCGAN architecture proposed by Radford et al. [5]. This work was the
result of an extensive exploration of CNN architectures
previously used in computer vision, and resulted in a set of
guidelines for constructing and training both the generator
and discriminator. In Section III-B, we alluded to the importance of strided and fractionally-strided convolutions [27],
which are key components of the architectural design. This
allows both the generator and the discriminator to learn
good up-sampling and down-sampling operations, which
may contribute to improvements in the quality of image
synthesis. More specifically to training, batch normalization
[28] was recommended for use in both networks in order
to stabilize training in deeper models. Another suggestion
was to minimize the number of fully connected layers
used to increase the feasibility of training deeper models.
Finally, Radford et al. [5] showed that using leaky ReLU
activation functions between the intermediate layers of the
discriminator gave superior performance over using regular
ReLUs.
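A sketch in the spirit of these guidelines is shown below: fractionally-strided (transposed) convolutions in the generator, strided convolutions and leaky ReLUs in the discriminator, batch normalization in both, and no fully connected stack. The channel counts and the 32 × 32 image size are illustrative assumptions, not the published DCGAN configuration.

```python
import torch.nn as nn

# Sketch in the spirit of the DCGAN guidelines [5]; channel counts and the
# 32x32 output resolution are illustrative, not the published architecture.
# The generator expects noise shaped (batch, 100, 1, 1).
generator = nn.Sequential(                                  # fractionally-strided convs upsample
    nn.ConvTranspose2d(100, 128, 4, 1, 0), nn.BatchNorm2d(128), nn.ReLU(),  # 1x1 -> 4x4
    nn.ConvTranspose2d(128, 64, 4, 2, 1), nn.BatchNorm2d(64), nn.ReLU(),    # 4x4 -> 8x8
    nn.ConvTranspose2d(64, 32, 4, 2, 1), nn.BatchNorm2d(32), nn.ReLU(),     # 8x8 -> 16x16
    nn.ConvTranspose2d(32, 3, 4, 2, 1), nn.Tanh())                          # 16x16 -> 32x32

discriminator = nn.Sequential(                              # strided convs downsample, no FC stack
    nn.Conv2d(3, 32, 4, 2, 1), nn.LeakyReLU(0.2),                           # 32x32 -> 16x16
    nn.Conv2d(32, 64, 4, 2, 1), nn.BatchNorm2d(64), nn.LeakyReLU(0.2),      # 16x16 -> 8x8
    nn.Conv2d(64, 128, 4, 2, 1), nn.BatchNorm2d(128), nn.LeakyReLU(0.2),    # 8x8 -> 4x4
    nn.Conv2d(128, 1, 4, 1, 0))                                             # 4x4 -> 1x1 logit
```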
Later, Salimans et al. [25] proposed further heuristic
approaches for stabilizing the training of GANs. The first,
feature matching, changes the objective of the generator
slightly in order to increase the amount of information
available. Specifically, the discriminator is still trained to
distinguish between real and fake samples, but the generator is now trained to match the discriminator’s expected
intermediate activations (features) of its fake samples with
the expected intermediate activations of the real samples.
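A rough sketch of the feature-matching objective follows; which discriminator layer supplies the "features", and the squared-error form of the loss, are assumptions made for illustration.

```python
import torch
import torch.nn as nn

# Feature-matching sketch: the generator is trained so that the expected
# intermediate discriminator activations of fake batches match those of real
# batches. The trunk below stands in for part of a discriminator; the usual
# real/fake logit head would sit on top of it.
features = nn.Sequential(nn.Linear(784, 256), nn.LeakyReLU(0.2))

def feature_matching_loss(x_real, x_fake):
    f_real = features(x_real).mean(dim=0)     # E[features] over the real batch
    f_fake = features(x_fake).mean(dim=0)     # E[features] over the fake batch
    return ((f_real.detach() - f_fake) ** 2).sum()

x_real = torch.rand(32, 784)                    # stand-in data batch
x_fake = torch.rand(32, 784)                    # would be G(z) in practice
g_loss = feature_matching_loss(x_real, x_fake)  # used in place of the usual G loss
```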
The second, mini-batch discrimination, adds an extra input
to the discriminator, which is a feature that encodes the
distance between a given sample in a mini-batch and the
other samples. This is intended to prevent mode collapse,
as the discriminator can easily tell if the generator is
producing the same outputs.
A third heuristic trick, heuristic averaging, penalizes
the network parameters if they deviate from a running
average of previous values, which can help convergence
to an equilibrium. The fourth, virtual batch normalization,
reduces the dependency of one sample on the other
samples in the mini-batch by calculating the batch statistics
for normalization with the sample placed within a reference
mini-batch that is fixed at the beginning of training.
Finally, one-sided label smoothing makes the target
for the discriminator 0.9 instead of 1, smoothing the
discriminator’s classification boundary, hence preventing
an overly confident discriminator that would provide weak
gradients for the generator. Sønderby et al. [29] advanced
the idea of challenging the discriminator by adding noise
to the samples before feeding them into the discriminator.
Sønderby et al. [29] argued that one-sided label smoothing
biases the optimal discriminator, whilst their technique,
instance noise, moves the manifolds of the real and fake
samples closer together, at the same time preventing
the discriminator easily finding a discrimination boundary
that completely separates the real and fake samples. In
practice, this can be implemented by adding Gaussian
noise to both the synthesized and real images, annealing
the standard deviation over time. The process of adding
noise to data samples to stabilize training was, later,
formally justified by Arjovsky et al. [26].
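The sketch below combines one-sided label smoothing with instance noise in a single discriminator loss; the 0.9 real target follows [25], while the initial noise scale and the linear annealing schedule are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def discriminator_loss(d, x_real, x_fake, step, total_steps):
    """One-sided label smoothing plus instance noise; the 0.1 initial noise
    scale and the linear annealing schedule are illustrative assumptions."""
    # Instance noise: perturb real and synthesised samples alike, with a
    # standard deviation annealed towards zero over training.
    sigma = 0.1 * max(0.0, 1.0 - step / total_steps)
    x_real = x_real + sigma * torch.randn_like(x_real)
    x_fake = x_fake + sigma * torch.randn_like(x_fake)
    # One-sided label smoothing: real targets become 0.9, fake targets stay 0.
    real_targets = torch.full((x_real.size(0), 1), 0.9)
    fake_targets = torch.zeros(x_fake.size(0), 1)
    return (F.binary_cross_entropy_with_logits(d(x_real), real_targets) +
            F.binary_cross_entropy_with_logits(d(x_fake), fake_targets))

d = nn.Linear(784, 1)                                   # stand-in discriminator
loss = discriminator_loss(d, torch.rand(8, 784), torch.rand(8, 784),
                          step=100, total_steps=10_000)
```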
C. Alternative formulations
The first part of this section considers other information-theoretic interpretations and generalizations of GANs. The
second part looks at alternative cost functions which aim
to directly address the problem of vanishing gradients.
1) Generalisations of the GAN cost function: Nowozin
et al. [30] showed that GAN training may be generalized
to minimize not only the Jensen-Shannon divergence,
but an estimate of f -divergences; these are referred
to as f -GANs. The f -divergences include well-known
divergence measures such as the Kullback-Leibler divergence. Nowozin et al. showed that f -divergence may
be approximated by applying the Fenchel conjugates of
the desired f -divergence to samples drawn from the
distribution of generated samples, after passing those
samples through a discriminator [30]. They provide a list
of Fenchel conjugates for commonly used f -divergences,
as well as activation functions that may be used in the
final layer of the generator network, depending on the
choice of f -divergence. Having derived the generalized
cost functions for training the generator and discriminator
of an f -GAN, Nowozin et al. [30] observe that, in its
raw form, maximizing the generator objective is likely to
lead to weak gradients, especially at the start of training,
and proposed an alternative cost function for updating the
generator which is less likely to saturate at the beginning of
training. This means that when the discriminator is trained,
the derivative of the f -divergence on the ratio of the real
and fake data distributions is estimated, while when the
generator is trained only an estimate of the f -divergence
is minimized. Uehara et al. [31] extend the f -GAN further,
where in the discriminator step the ratio of the distributions
of real and fake data are predicted, and in the generator
step the f -divergence is directly minimized. Alternatives
to the JS-divergence are also covered by Goodfellow [12].
2) Alternative Cost functions to prevent vanishing gradients: Arjovsky et al. [32] proposed the WGAN, a GAN
with an alternative cost function which is derived from an
approximation of the Wasserstein distance. Unlike the original GAN cost function, the WGAN is more likely to provide
gradients that are useful for updating the generator. The
cost function derived for the WGAN relies on the discriminator, which they refer to as the “critic”, being a k -Lipschitz
continuous function; practically, this may be implemented
by simply clipping the parameters of the discriminator.
However, more recent research [33] suggested that weight
clipping adversely reduces the capacity of the discriminator
model, forcing it to learn simpler functions. Gulrajani et
al. [33] proposed an improved method for training the
discriminator for a WGAN, by penalizing the norm of
discriminator gradients with respect to data samples during
training, rather than performing parameter clipping.
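A sketch of a gradient-penalty term in the spirit of [33] is given below: the critic's gradient norm is pushed towards 1 at points interpolated between real and generated samples. The stand-in critic, the λ = 10 weight and the flat MLP data representation are assumptions for illustration.

```python
import torch
import torch.nn as nn

def gradient_penalty(critic, x_real, x_fake, lam=10.0):
    """Push the critic's gradient norm towards 1 at points interpolated
    between real and generated samples, instead of clipping weights."""
    eps = torch.rand(x_real.size(0), 1)                    # per-sample mixing weight
    x_hat = (eps * x_real + (1 - eps) * x_fake).requires_grad_(True)
    grads, = torch.autograd.grad(critic(x_hat).sum(), x_hat, create_graph=True)
    return lam * ((grads.norm(2, dim=1) - 1) ** 2).mean()

critic = nn.Sequential(nn.Linear(784, 256), nn.LeakyReLU(0.2), nn.Linear(256, 1))
x_real, x_fake = torch.rand(16, 784), torch.rand(16, 784)  # stand-in batches
# Critic objective (written as a loss to minimise): widen the score gap
# between real and fake batches, regularised by the gradient penalty.
critic_loss = critic(x_fake).mean() - critic(x_real).mean() + \
              gradient_penalty(critic, x_real, x_fake)
```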
D. A Brief Comparison of GAN Variants
GANs allow us to synthesize novel data samples from
random noise, but they are considered difficult to train
due partially to vanishing gradients. All GAN models that
we have discussed in this paper require careful hyperparameter tuning and model selection for training. However,
perhaps the easier models to train are the AAE and the
WGAN. The AAE is relatively easy to train because the
adversarial loss is applied to a fairly simple distribution
in lower dimensions (than the image data). The WGAN
[33], is designed to be easier to train, using a different
formulation of the training objective which does not suffer
from the vanishing gradient problem. The WGAN may also
be trained successfully even without batch normalisation;
it is also less sensitive to the choice of non-linearities used
between convolutional layers.
Samples synthesised using a GAN or WGAN may belong to any class present in the training data. Conditional
GANs provide an approach to synthesising samples with
user specified content.
It is evident from various visualisation techniques
(Fig. 6) that the organisation of the latent space harbours
some meaning, but vanilla GANs do not provide an
inference model to allow data samples to be mapped to
latent representations. Both BiGANs and ALI provide a
mechanism to map image data to a latent space (inference), however, reconstruction quality suggests that they
do not necessarily faithfully encode and decode samples.
A very recent development shows that ALI may recover
encoded data samples faithfully [21]. However, this model
shares a lot in common with the AVB and AAE. These are
autoencoders, similar to variational autoencoders (VAEs),
where the latent space is regularised using adversarial
training rather than a KL-divergence between encoded
samples and a prior.
V. THE STRUCTURE OF LATENT SPACE
GANs build their own representations of the data they
are trained on, and in doing so produce structured geometric vector spaces for different domains. This is a quality
shared with other neural network models, including VAEs
[23], as well as linguistic models such as word2vec
[34]. In general, the domain of the data to be modelled
is mapped to a vector space which has fewer dimensions
than the data space, forcing the model to discover interesting structure in the data and represent it efficiently. This
latent space is at the “originating” end of the generator
network, and the data at this level of representation (the
latent space) can be highly structured, and may support
high level semantic operations [5]. Examples include rotation of faces from trajectories through latent space, as
well as image analogies which have the effect of adding
visual attributes such as eyeglasses on to a “bare” face.
All (vanilla) GAN models have a generator which maps
data from the latent space into the space to be modelled, but many GAN models have an “encoder” which
additionally supports the inverse mapping [19], [20]. This
becomes a powerful method for exploring and using the
structured latent space of the GAN network. With an encoder, collections of labelled images can be mapped into
latent spaces and analysed to discover “concept vectors”
that represent high level attributes such as “smiling” or
“wearing a hat”. These vectors can be applied at scaled
offsets in latent space to influence the behaviour of the
generator (Fig. 6). Similar to using an encoding process
to model the distribution of latent samples, Gurumurthy et
al. [35] propose modelling the latent space as a mixture
of Gaussians and learning the mixture components that
maximize the likelihood of generated data samples under
the data generating distribution.
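The following sketch illustrates this kind of latent-space arithmetic: a "smile vector" is estimated from encoded, attribute-labelled examples and applied at scaled offsets before decoding. The encoder, generator and data here are stand-ins for trained models, not the networks used in [19].

```python
import torch
import torch.nn as nn

# Latent "concept vector" arithmetic; the encoder/generator are stand-ins for
# trained networks and the attribute-labelled batches are assumed available.
LATENT_DIM = 64
encoder = nn.Linear(784, LATENT_DIM)        # stand-in for a trained encoder E
generator = nn.Linear(LATENT_DIM, 784)      # stand-in for a trained generator G

x_smiling, x_neutral = torch.rand(100, 784), torch.rand(100, 784)
smile_vec = encoder(x_smiling).mean(0) - encoder(x_neutral).mean(0)

z = encoder(torch.rand(1, 784))             # latent code of some query image
edits = [generator(z + alpha * smile_vec)   # scaled offsets along the vector
         for alpha in (0.0, 0.5, 1.0, 1.5)]
```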
VI. APPLICATIONS OF GANS
Discovering new applications for adversarial training
of deep networks is an active area of research. We
examine a few computer vision applications that have
appeared in the literature and have been subsequently
refined. These applications were chosen to highlight some
different approaches to using GAN-based representations
for image-manipulation, analysis or characterization, and
do not fully reflect the potential breadth of application of
GANs.
Using GANs for image classification places them within
the broader context of machine learning and provides a
useful quantitative assessment of the features extracted
in unsupervised learning. Image synthesis remains a
core GAN capability, and is especially useful when the
generated image can be subject to pre-existing constraints.
Super-resolution [36], [37], [38] offers an example of
how an existing approach can be supplemented with
an adversarial loss component to achieve higher quality
results. Finally, image-to-image translation demonstrates
how GANs offer a general purpose solution to a family
of tasks which require automatically converting an input
image into an output image.
A. Classification and Regression
After GAN training is complete, the neural network
can be reused for other downstream tasks. For example,
outputs of the convolutional layers of the discriminator
can be used as a feature extractor, with simple linear
models fitted on top of these features using a modest
quantity of (image, label) pairs [5], [25]. The quality of
the unsupervised representations within a DCGAN network has been assessed by applying a regularized L2-SVM classifier to a feature vector extracted from the
(trained) discriminator [5]. Good classification scores were
achieved using this approach on both supervised and
semi-supervised datasets, even those that were disjoint
from the original training data.
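A rough sketch of this feature-extraction protocol is shown below; the convolutional trunk stands in for a trained discriminator, and scikit-learn's LinearSVC plays the role of the regularized L2-SVM. The data, labels and layer choice are illustrative assumptions.

```python
import torch
import torch.nn as nn
from sklearn.svm import LinearSVC

# The conv trunk stands in for (part of) a trained discriminator; its
# activations are used as fixed features for a linear classifier.
trunk = nn.Sequential(nn.Conv2d(3, 32, 4, 2, 1), nn.LeakyReLU(0.2),
                      nn.Conv2d(32, 64, 4, 2, 1), nn.LeakyReLU(0.2))

images = torch.rand(200, 3, 32, 32)          # stand-in labelled images
labels = torch.randint(0, 10, (200,))
with torch.no_grad():
    feats = trunk(images).flatten(start_dim=1)

clf = LinearSVC(C=1.0).fit(feats.numpy(), labels.numpy())
train_accuracy = clf.score(feats.numpy(), labels.numpy())
```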
The quality of the data representation may be improved when adversarial training includes jointly learning
an inference mechanism such as with an ALI [19]. A representation vector built from the last three hidden layers of the ALI encoder, fed to a similar L2-SVM classifier, achieved a misclassification rate significantly lower than that of the DCGAN [19]. Additionally, ALI has achieved state-of-the-art classification results when label information is
incorporated into the training routine.
When labelled training data is in limited supply, adversarial training may also be used to synthesize more
training samples. Shrivastava et al. [39] use GANs to
refine synthetic images, while maintaining their annotation information. By training models only on GAN-refined
synthetic images (i.e. no real training data) Shrivastava
et al. [39] achieved state-of-the-art performance on pose
and gaze estimation tasks. Similarly, good results were
obtained for gaze estimation and prediction using a spatiotemporal GAN architecture [40]. In some cases, models
trained on synthetic data do not generalize well when
applied to real data [3]. Bousmalis et al. [3] propose
to address this problem by adapting synthetic samples
from a source domain to match a target domain using
adversarial training. Additionally, Liu et al. [41] propose
using multiple GANs – one per domain – with tied weights
to synthesize pairs of corresponding images samples from
different domains.
Because the quality of generated samples is hard to
quantitatively judge across models, classification tasks
are likely to remain an important quantitative tool for
performance assessment of GANs, even as new and
diverse applications in computer vision are explored.
B. Image Synthesis
Much of the recent GAN research focuses on improving
the quality and utility of the image generation capabilities.
The LAPGAN model introduced a cascade of convolutional
networks within a Laplacian pyramid framework to generate images in a coarse-to-fine fashion [13]. A similar
approach is used by Huang et al. [42] with GANs operating on intermediate representations rather than lower
resolution images.
LAPGAN also extended the conditional version of the
GAN model where both G and D networks receive additional label information as input; this technique has proved
Fig. 6. Example of applying a “smile vector” with an ALI model [19]. On the left hand side is an example of a woman without a smile and on
the right a woman with a smile. A z value for the image of the woman on the left is inferred, z1 and for the right, z2 . Interpolating along a
vector that connects z1 and z2 , gives z values that may be passed through a generator to synthesize novel samples. Note the implication: a
displacement vector in latent space traverses smile “intensity” in image space.
useful and is now a common practice to improve image
quality. This idea of GAN conditioning was later extended
to incorporate natural language. For example, Reed et al.
[43] used a GAN architecture to synthesize images from
text descriptions, which one might describe as reverse
captioning. For example, given a text caption of a bird
such as “white with some black on its head and wings
and a long orange beak”, the trained GAN can generate
several plausible images that match the description.
In addition to conditioning on text descriptions, the
Generative Adversarial What-Where Network (GAWWN)
conditions on image location [44]. The GAWWN system
supported an interactive interface in which large images
could be built up incrementally with textual descriptions of
parts and user-supplied bounding boxes (Fig. 7).
Conditional GANs not only allow us to synthesize novel
samples with specific attributes, they also allow us to
develop tools for intuitively editing images – for example
editing the hair style of a person in an image, making them
wear glasses or making them look younger [35]. Additional
applications of GANs to image editing include work by Zhu
and Brock et al. [2], [45].
C. Image-to-image translation
Conditional adversarial networks are well suited for
translating an input image into an output image, which is
a recurring theme in computer graphics, image processing, and computer vision. The pix2pix model offers a
general purpose solution to this family of problems [46].
In addition to learning the mapping from input image
to output image, the pix2pix model also constructs
a loss function to train this mapping. This model has
demonstrated effective results for different problems of
computer vision which had previously required separate
machinery, including semantic segmentation, generating
maps from aerial photos, and colorization of black and
white images. Wang et al. present a similar idea, using
GANs to first synthesize surface-normal maps (similar
to depth maps) and then map these images to natural
scenes.
CycleGAN [4] extends this work by introducing a cycle
consistency loss that attempts to preserve the original
image after a cycle of translation and reverse translation.
In this formulation, matching pairs of images are no
longer needed for training. This makes data preparation
much simpler, and opens the technique to a larger family
of applications. For example, artistic style transfer [47]
renders natural images in the style of artists, such as
Picasso or Monet, by simply being trained on an unpaired
collection of paintings and natural images (Fig. 8).
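The cycle-consistency idea can be sketched as a loss term like the one below; the L1 penalty and the λ = 10 weighting are common practical choices but are assumptions here rather than details taken from [4].

```python
import torch
import torch.nn as nn

# G_AB and G_BA stand in for the two translators; the L1 norm and the
# lambda weighting are assumptions, not details taken from [4].
G_AB = nn.Linear(784, 784)                   # domain A -> domain B
G_BA = nn.Linear(784, 784)                   # domain B -> domain A

def cycle_loss(x_a, x_b, lam=10.0):
    forward_cycle = (G_BA(G_AB(x_a)) - x_a).abs().mean()    # A -> B -> A
    backward_cycle = (G_AB(G_BA(x_b)) - x_b).abs().mean()   # B -> A -> B
    return lam * (forward_cycle + backward_cycle)

loss = cycle_loss(torch.rand(8, 784), torch.rand(8, 784))
# Added to the two adversarial losses, this removes the need for paired images.
```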
D. Super-resolution
Super-resolution allows a high-resolution image to be
generated from a lower resolution image, with the trained
model inferring photo-realistic details while up-sampling.
The SRGAN model [36] extends earlier efforts by adding
an adversarial loss component which constrains images
to reside on the manifold of natural images.
The SRGAN generator is conditioned on a low resolution image, and infers photo-realistic natural images with
4x up-scaling factors. Unlike most GAN applications, the
adversarial loss is one component of a larger loss function,
which also includes perceptual loss from a pretrained
classifier, and a regularization loss that encourages spatially coherent images. In this context, the adversarial loss
constrains the overall solution to the manifold of natural
images, producing perceptually more convincing solutions.
Customizing deep learning applications can often be
hampered by the availability of relevant curated training
datasets. However, SRGAN is straightforward to customize
to specific domains, as new training image pairs can
easily be constructed by down-sampling a corpus of high-resolution images. This is an important consideration in
practice, since the inferred photo-realistic details that the
GAN generates will vary depending on the domain of
images used in the training set.
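A minimal sketch of this pair-construction step is given below, using bicubic down-sampling as an illustrative choice of degradation.

```python
import torch
import torch.nn.functional as F

def make_sr_pairs(hr_images, factor=4):
    """Build (low-res, high-res) training pairs by down-sampling a corpus of
    high-resolution images; bicubic interpolation is an illustrative choice."""
    lr_images = F.interpolate(hr_images, scale_factor=1.0 / factor,
                              mode='bicubic', align_corners=False)
    return lr_images, hr_images

hr = torch.rand(16, 3, 128, 128)     # stand-in for domain-specific images
lr, hr = make_sr_pairs(hr)           # lr is 16 x 3 x 32 x 32
```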
VII. DISCUSSION
A. Open Questions
GANs have attracted considerable attention due to their
ability to leverage vast amounts of unlabelled data. While
Fig. 7. Examples of Image Synthesis using the the Generative Adversarial What-Where Network (GAWWN). In GAWWN, images are conditioned
on both text descriptions and image location specified as either by keypoint or bounding box. Figure reproduced from [44] with authors’ permission.
Fig. 8. CycleGAN model learns image to image translations between two unordered image collections. Shown here are the examples of bidirectional image mappings: Monet paintings to landscape photos, zebras to horses, and summer to winter photos in Yosemite park. Figure
reproduced from [4].
much progress has been made to alleviate some of the
challenges related to training and evaluating GANs, there
still remain several open challenges.
1) Mode Collapse: As articulated in Section IV, a common problem of GANs involves the generator collapsing to
produce a small family of similar samples (partial collapse),
and in the worst case producing simply a single sample
(complete collapse) [26], [48].
Diversity in the generator can be increased by practical
hacks to balance the distribution of samples produced
by the discriminator for real and fake batches, or by
employing multiple GANs to cover the different modes
of the probability distribution [49]. Yet another solution to
alleviate mode collapse is to alter the distance measure
used to compare statistical distributions. Arjovsky [32]
proposed to compare distributions based on a Wasserstein
distance rather than a KL-based divergence (DCGAN [5])
or a total-variation distance (energy-based GAN [50]).
Metz et al. [51] proposed unrolling the discriminator for
several steps, i.e., letting it calculate its updates on the
current generator for several steps, and then using the
“unrolled” discriminators to update the generator using the
normal minimax objective. As normal, the discriminator
only trains on its update from one step, but the generator
now has access to how the discriminator would update
itself. With the usual one step generator objective, the
discriminator will simply assign a low probability to the
generator’s previous outputs, forcing the generator to
move, resulting either in convergence, or an endless cycle
of mode hopping. However, with the unrolled objective,
the generator can prevent the discriminator from focusing
on the previous update, and update its own generations
with the foresight of how the discriminator would have
responded.
2) Training instability – saddle points: In a GAN, the
Hessian of the loss function becomes indefinite. The
optimal solution, therefore, lies in finding a saddle point
rather than a local minimum. In deep learning, a large
number of optimizers depend only on the first derivative
of the loss function; converging to a saddle point for
GANs requires good initialization. By invoking the stable
manifold theorem from non-linear systems theory, Lee et
al. [52] showed that, were we to select the initial points
of an optimizer at random, gradient descent would not
converge to a saddle with probability one (also see [53],
[25]). Additionally, Mescheder et al. [54] have argued that
convergence of a GAN’s objective function suffers from
the presence of a zero real part of the Jacobian matrix
as well as eigenvalues with large imaginary parts. This is
disheartening for GAN training; yet, due to the existence of
second-order optimizers, not all hope is lost. Unfortunately,
Newton-type methods have compute-time complexity that
scales cubically or quadratically with the dimension of the
parameters. Therefore, another line of questions lies in applying and scaling second-order optimizers for adversarial
training.
A more fundamental problem is the existence of an
equilibrium for a GAN. Using results from Bayesian nonparametrics, Arora et al. [48] connects the existence of
the equilibrium to a finite mixture of neural networks – this
means that below a certain capacity, no equilibrium might
exist. On a closely related note, it has also been argued
that whilst GAN training can appear to have converged,
the trained distribution could still be far away from the
target distribution. To alleviate this issue, Arora et al. [48]
propose a new measure called the ‘neural net distance’.
3) Evaluating Generative Models: How can one gauge
the fidelity of samples synthesized by a generative model? Should we use likelihood estimation? Can a GAN
trained using one methodology be compared to another
(model comparison)? These are open-ended questions
that are not only relevant for GANs, but also for probabilistic models, in general. Theis [55] argued that evaluating GANs using different measures can lead to conflicting
conclusions about the quality of synthesised samples; the
decision to select one measure over another depends on
the application.
B. Conclusions
The explosion of interest in GANs is driven not only by
their potential to learn deep, highly non-linear mappings
from a latent space into a data space and back, but also
by their potential to make use of the vast quantities of
unlabelled image data that remain closed to deep representation learning. Within the subtleties of GAN training,
there are many opportunities for developments in theory
and algorithms, and with the power of deep networks,
there are vast opportunities for new applications.
ACKNOWLEDGMENT
The authors would like to thank David Warde-Farley for
his valuable feedback on previous revisions of the paper.
Antonia Creswell acknowledges the support of the EPSRC
through a Doctoral training scholarship.
REFERENCES
[1] I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley,
S. Ozair, A. Courville, and Y. Bengio, “Generative adversarial nets,”
in Advances in Neural Information Processing Systems, 2014, pp.
2672–2680.
[2] J.-Y. Zhu, P. Krähenbühl, E. Shechtman, and A. A. Efros, “Generative visual manipulation on the natural image manifold,” in
European Conference on Computer Vision. Springer, 2016, pp.
597–613.
[3] K. Bousmalis, N. Silberman, D. Dohan, D. Erhan, and D. Krishnan, “Unsupervised pixel-level domain adaptation with generative
adversarial networks,” in IEEE Conference on Computer Vision
and Pattern Recognition, 2016.
[4] J.-Y. Zhu, T. Park, P. Isola, and A. A. Efros, “Unpaired image-toimage translation using cycle-consistent adversarial networks,” in
Proceedings of the International Conference on Computer Vision,
2017. [Online]. Available: https://arxiv.org/abs/1703.10593
[5] A. Radford, L. Metz, and S. Chintala, “Unsupervised representation learning with deep convolutional generative adversarial
networks,” in Proceedings of the 5th International Conference on
Learning Representations (ICLR) - workshop track, 2016.
[6] A. Creswell and A. A. Bharath, “Adversarial training for sketch retrieval,” in Computer Vision – ECCV 2016 Workshops: Amsterdam,
The Netherlands, October 8-10 and 15-16, 2016, Proceedings,
Part I. Springer International Publishing, 2016.
[7] Y. LeCun, Y. Bengio, and G. Hinton, “Deep learning,” Nature, vol.
521, no. 7553, pp. 436–444, 2015.
[8] H. Hotelling, “Analysis of a complex of statistical variables into
principal components.” Journal of educational psychology, vol. 24,
no. 6, p. 417, 1933.
[9] I. J. Goodfellow, “On distinguishability criteria for estimating generative models,” International Conference on Learning Representations - workshop track, 2015.
[10] M. Gutmann and A. Hyvärinen, “Noise-contrastive estimation: A
new estimation principle for unnormalized statistical models.” in
AISTATS, vol. 1, no. 2, 2010, p. 6.
[11] Y. Bengio, L. Yao, G. Alain, and P. Vincent, “Generalized denoising
auto-encoders as generative models,” in Advances in Neural
Information Processing Systems, 2013, pp. 899–907.
[12] I. Goodfellow, “Nips 2016 tutorial: Generative adversarial
networks,” 2016, presented at the Neural Information Processing
Systems Conference. [Online]. Available: https://arxiv.org/abs/
1701.00160
[13] E. L. Denton, S. Chintala, R. Fergus et al., “Deep generative
image models using a laplacian pyramid of adversarial networks,”
in Advances in Neural Information Processing Systems, 2015, pp.
1486–1494.
[14] J. Wu, C. Zhang, T. Xue, B. Freeman, and J. Tenenbaum,
“Learning a probabilistic latent space of object shapes via 3d
generative-adversarial modeling,” in Advances in Neural Information Processing Systems, 2016, pp. 82–90.
[15] M. Mirza and S. Osindero, “Conditional generative adversarial
nets,” arXiv preprint arXiv:1411.1784, 2014.
[16] X. Chen, Y. Duan, R. Houthooft, J. Schulman, I. Sutskever,
and P. Abbeel, “Infogan: Interpretable representation learning by
information maximizing generative adversarial nets,” in Advances
in Neural Information Processing Systems, 2016.
[17] A. Creswell and A. A. Bharath, “Inverting the generator of a
generative adversarial network,” in NIPS Workshop on Adversarial
Training, 2016.
[18] Z. C. Lipton and S. Tripathi, “Precise recovery of latent vectors
from generative adversarial networks,” in ICLR (workshop track),
2017.
[19] V. Dumoulin, I. Belghazi, B. Poole, O. Mastropietro, A. Lamb,
M. Arjovsky, and A. Courville, “Adversarially learned inference,” in
(accepted, to appear) Proceedings of the International Conference
on Learning Representations, 2017.
[20] J. Donahue, P. Krähenbühl, and T. Darrell, “Adversarial feature
learning,” in (accepted, to appear) Proceedings of the International
Conference on Learning Representations, 2017.
[21] C. Li, H. Liu, C. Chen, Y. Pu, L. Chen, R. Henao, and L. Carin,
“Towards understanding adversarial learning for joint distribution
matching,” in Advances in Neural Information Processing Systems,
2017.
[22] A. Makhzani, J. Shlens, N. Jaitly, and I. Goodfellow,
“Adversarial autoencoders,” in International Conference on
Learning Representations (to appear), 2016. [Online]. Available:
http://arxiv.org/abs/1511.05644
[23] D. P. Kingma and M. Welling, “Auto-encoding variational bayes,”
in Proceedings of the 2nd International Conference on Learning
Representations (ICLR), 2014.
[24] L. M. Mescheder, S. Nowozin, and A. Geiger, “Adversarial
variational bayes: Unifying variational autoencoders and
generative adversarial networks,” 2017. [Online]. Available:
http://arxiv.org/abs/1701.04722
[25] T. Salimans, I. Goodfellow, W. Zaremba, V. Cheung, A. Radford,
and X. Chen, “Improved techniques for training gans,” in Advances
in Neural Information Processing Systems, 2016, pp. 2226–2234.
[26] M. Arjovsky and L. Bottou, “Towards principled methods for
training generative adversarial networks,” NIPS 2016 Workshop
on Adversarial Training, 2016.
[27] E. Shelhamer, J. Long, and T. Darrell, “Fully convolutional networks for semantic segmentation,” IEEE transactions on pattern
analysis and machine intelligence, vol. 39, no. 4, pp. 640–651,
2017.
[28] S. Ioffe and C. Szegedy, “Batch normalization: Accelerating deep
network training by reducing internal covariate shift,” in Proceedings of The 32nd International Conference on Machine Learning,
2015, pp. 448–456.
[29] C. K. Sønderby, J. Caballero, L. Theis, W. Shi, and F. Huszár,
“Amortised map inference for image super-resolution,” in International Conference on Learning Representations, 2017.
[30] S. Nowozin, B. Cseke, and R. Tomioka, “f-gan: Training generative
neural samplers using variational divergence minimization,” in
Advances in Neural Information Processing Systems, 2016, pp.
271–279.
[31] M. Uehara, I. Sato, M. Suzuki, K. Nakayama, and Y. Matsuo,
“Generative adversarial nets from a density ratio estimation perspective,” arXiv preprint arXiv:1610.02920, 2016.
[32] M. Arjovsky, S. Chintala, and L. Bottou, “Wasserstein GAN,” in
Proceedings of The 34nd International Conference on Machine
Learning, 2017.
[33] I. Gulrajani, F. Ahmed, M. Arjovsky, V. Dumoulin, and A. Courville,
“Improved training of wasserstein gans,” in (accepted, to appear)
Advances in Neural Information Processing Systems, 2017.
[34] T. Mikolov, K. Chen, G. Corrado, and J. Dean, “Efficient estimation
of word representations in vector space,” in International Conference on Learning Representations, 2013.
[35] S. Gurumurthy, R. K. Sarvadevabhatla, and V. B. Radhakrishnan,
“Deligan: Generative adversarial networks for diverse and limited
data,” in IEEE Conference On Computer Vision and Pattern
Recognition (CVPR), 2017.
[36] C. Ledig, L. Theis, F. Huszár, J. Caballero, A. Aitken, A. Tejani,
J. Totz, Z. Wang, and W. Shi, “Photo-realistic single image
super-resolution using a generative adversarial network,” in IEEE
Conference on Computer Vision and Pattern Recognition, 2017.
[37] X. Yu and F. Porikli, “Ultra-resolving face images by discriminative generative networks,” in European Conference on Computer
Vision. Springer, 2016, pp. 318–333.
[38] ——, “Hallucinating very low-resolution unaligned and noisy face
images by transformative discriminative autoencoders,” in Proceedings of the IEEE Conference on Computer Vision and Pattern
Recognition, 2017, pp. 3760–3768.
[39] A. Shrivastava, T. Pfister, O. Tuzel, J. Susskind, W. Wang, and R. Webb, “Learning from simulated and unsupervised images through adversarial training,” in IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2016.
[40] M. Zhang, K. T. Ma, J. H. Lim, Q. Zhao, and J. Feng, “Deep future gaze: Gaze anticipation on egocentric videos using adversarial networks,” in IEEE Conference on Computer Vision and Pattern Recognition, 2017, pp. 4372–4381.
[41] M.-Y. Liu and O. Tuzel, “Coupled generative adversarial networks,” in Advances in Neural Information Processing Systems, 2016, pp. 469–477.
[42] X. Huang, Y. Li, O. Poursaeed, J. Hopcroft, and S. Belongie, “Stacked generative adversarial networks,” in IEEE Conference on Computer Vision and Pattern Recognition, 2016.
[43] S. Reed, Z. Akata, X. Yan, L. Logeswaran, B. Schiele, and H. Lee, “Generative adversarial text to image synthesis,” in International Conference on Machine Learning, 2016. [Online]. Available: https://arxiv.org/abs/1605.05396
[44] S. E. Reed, Z. Akata, S. Mohan, S. Tenka, B. Schiele, and H. Lee, “Learning what and where to draw,” in Advances in Neural Information Processing Systems, 2016, pp. 217–225.
[45] A. Brock, T. Lim, J. M. Ritchie, and N. Weston, “Neural photo editing with introspective adversarial networks,” in Proceedings of the 6th International Conference on Learning Representations (ICLR), 2017.
[46] P. Isola, J.-Y. Zhu, T. Zhou, and A. A. Efros, “Image-to-image translation with conditional adversarial networks,” in IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2016.
[47] C. Li and M. Wand, “Precomputed real-time texture synthesis with Markovian generative adversarial networks,” in European Conference on Computer Vision. Springer, 2016, pp. 702–716.
[48] S. Arora, R. Ge, Y. Liang, T. Ma, and Y. Zhang, “Generalization and equilibrium in generative adversarial nets (GANs),” in Proceedings of The 34th International Conference on Machine Learning, 2017.
[49] I. Tolstikhin, S. Gelly, O. Bousquet, C.-J. Simon-Gabriel, and B. Schölkopf, “AdaGAN: Boosting generative models,” Tech. Rep., 2017.
[50] J. Zhao, M. Mathieu, and Y. LeCun, “Energy-based generative adversarial network,” in International Conference on Learning Representations, 2017. [Online]. Available: https://arxiv.org/abs/1609.03126
[51] L. Metz, B. Poole, D. Pfau, and J. Sohl-Dickstein, “Unrolled generative adversarial networks,” in Proceedings of the International Conference on Learning Representations, 2017. [Online]. Available: https://arxiv.org/abs/1611.02163
[52] J. D. Lee, M. Simchowitz, M. I. Jordan, and B. Recht, “Gradient descent only converges to minimizers,” in Conference on Learning Theory, 2016, pp. 1246–1257.
[53] R. Pemantle, “Nonconvergence to unstable points in urn models and stochastic approximations,” Ann. Probab., vol. 18, no. 2, pp. 698–712, 1990.
[54] L. M. Mescheder, S. Nowozin, and A. Geiger, “The numerics of GANs,” in Advances in Neural Information Processing Systems, 2017. [Online]. Available: http://arxiv.org/abs/1705.10461
[55] L. Theis, A. van den Oord, and M. Bethge, “A note on the evaluation of generative models,” in Proceedings of the International Conference on Learning Representations, 2016.
Antonia Creswell ([email protected]) holds a first-class degree from
Imperial College in Biomedical Engineering (2011), and is currently
a PhD student in the Biologically Inspired Computer Vision (BICV)
Group at Imperial College London (2015). The focus of her PhD is on
improving the training of generative adversarial networks and applying
them to visual search and to learning representations in unlabelled
sources of image data.
Tom White received his BS in Mathematics from the University of Georgia, USA, and an MS from the Massachusetts Institute
of Technology in Media Arts and Sciences. He is currently a senior
lecturer in the School of Design at Victoria University of Wellington,
New Zealand. His current research focuses on exploring the growing
use of constructive machine learning in computational design and
the creative potential of human designers working collaboratively with
artificial neural networks during the exploration of design ideas and
prototyping.
Vincent Dumoulin holds a BSc in Physics and Computer Science from
the University of Montréal. He is a doctoral candidate at the Montréal
Institute for Learning Algorithms under the co-supervision of Yoshua
Bengio and Aaron Courville, working on deep learning approaches to
generative modelling.
Kai Arulkumaran ([email protected]) is a Ph.D. candidate in the Department of Bioengineering at Imperial College London. He received
a B.A. in Computer Science at the University of Cambridge in 2012,
and an M.Sc. in Biomedical Engineering at Imperial College London in
2014. He was a Research Intern in Twitter Magic Pony and Microsoft
Research in 2017. His research focus is deep reinforcement learning
and computer vision for visuomotor control.
Biswa Sengupta received his B.Eng. (Hons.) and M.Sc. degrees in
electrical and computer engineering (2004) and theoretical computer
science (2005) respectively from the University of York. He then read
for a second M.Sc. degree in neural and behavioural sciences (2007) at
the Max Planck Institute for Biological Cybernetics, obtaining his PhD in
theoretical neuroscience (2011) from the University of Cambridge. He
received further training in Bayesian statistics and differential geometry
at the University College London and University of Cambridge before
leading Cortexica Vision Systems as its Chief Scientist. Currently, he
is a visiting scientist at Imperial College London along with leading
machine learning research at Noah’s Ark Lab of Huawei Technologies
UK.
Anil Anthony Bharath ([email protected]) is
a Reader in the Department of Bioengineering at Imperial College
London, an Academic Fellow of Imperial’s Data Science Institute and a
Fellow of the Institution of Engineering and Technology. He received a
B.Eng. in Electronic and Electrical Engineering from University College
London in 1988, and a Ph.D. in Signal Processing from Imperial
College London in 1993. He was an academic visitor in the Signal
Processing Group at the University of Cambridge in 2006. He is a
co-founder of Cortexica Vision Systems. His research interests are in
deep architectures for visual inference.
arXiv:1707.06203v2 [cs.LG] 14 Feb 2018
Imagination-Augmented Agents
for Deep Reinforcement Learning
Théophane Weber∗ Sébastien Racanière∗ David P. Reichert∗ Lars Buesing
Arthur Guez Danilo Rezende Adria Puigdomènech Badia Oriol Vinyals
Nicolas Heess Yujia Li Razvan Pascanu
Peter Battaglia
Demis Hassabis David Silver Daan Wierstra
DeepMind
Abstract
We introduce Imagination-Augmented Agents (I2As), a novel architecture for deep
reinforcement learning combining model-free and model-based aspects. In contrast to most existing model-based reinforcement learning and planning methods,
which prescribe how a model should be used to arrive at a policy, I2As learn to
interpret predictions from a learned environment model to construct implicit plans
in arbitrary ways, by using the predictions as additional context in deep policy
networks. I2As show improved data efficiency, performance, and robustness to
model misspecification compared to several baselines.
1 Introduction
A hallmark of an intelligent agent is its ability to rapidly adapt to new circumstances and "achieve
goals in a wide range of environments" [1]. Progress has been made in developing capable agents for
numerous domains using deep neural networks in conjunction with model-free reinforcement learning
(RL) [2–4], where raw observations directly map to values or actions. However, this approach usually
requires large amounts of training data and the resulting policies do not readily generalize to novel
tasks in the same environment, as it lacks the behavioral flexibility constitutive of general intelligence.
Model-based RL aims to address these shortcomings by endowing agents with a model of the
world, synthesized from past experience. By using an internal model to reason about the future,
here also referred to as imagining, the agent can seek positive outcomes while avoiding the adverse
consequences of trial-and-error in the real environment – including making irreversible, poor decisions.
Even if the model needs to be learned first, it can enable better generalization across states, remain
valid across tasks in the same environment, and exploit additional unsupervised learning signals, thus
ultimately leading to greater data efficiency. Another appeal of model-based methods is their ability
to scale performance with more computation by increasing the amount of internal simulation.
The neural basis for imagination, model-based reasoning and decision making has generated a
lot of interest in neuroscience [5–7]; at the cognitive level, model learning and mental simulation
have been hypothesized and demonstrated in animal and human learning [8–11]. Its successful
deployment in artificial model-based agents however has hitherto been limited to settings where an
exact transition model is available [12] or in domains where models are easy to learn – e.g. symbolic
environments or low-dimensional systems [13–16]. In complex domains for which a simulator is
not available to the agent, recent successes are dominated by model-free methods [2, 17]. In such
domains, the performance of model-based agents employing standard planning methods usually
suffers from model errors resulting from function approximation [18, 19]. These errors compound
during planning, causing over-optimism and poor agent performance. There are currently no planning or model-based methods that are robust against model imperfections which are inevitable in complex domains, thereby preventing them from matching the success of their model-free counterparts.

∗ Equal contribution, corresponding authors: {theophane, sracaniere, reichert}@google.com.

31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.
We seek to address this shortcoming by proposing Imagination-Augmented Agents, which use
approximate environment models by "learning to interpret" their imperfect predictions. Our algorithm
can be trained directly on low-level observations with little domain knowledge, similarly to recent
model-free successes. Without making any assumptions about the structure of the environment
model and its possible imperfections, our approach learns in an end-to-end way to extract useful
knowledge gathered from model simulations – in particular not relying exclusively on simulated
returns. This allows the agent to benefit from model-based imagination without the pitfalls of
conventional model-based planning. We demonstrate that our approach performs better than model-free baselines in various domains including Sokoban. It achieves better performance with less data,
even with imperfect models, a significant step towards delivering the promises of model-based RL.
2 The I2A architecture
Figure 1: I2A architecture. ˆ· notation indicates imagined quantities. a): the imagination core (IC)
predicts the next time step conditioned on an action sampled from the rollout policy π̂. b): the IC
imagines trajectories of features fˆ = (ô, r̂), encoded by the rollout encoder. c): in the full I2A,
aggregated rollout encodings and input from a model-free path determine the output policy π.
In order to augment model-free agents with imagination, we rely on environment models – models
that, given information from the present, can be queried to make predictions about the future. We
use these environment models to simulate imagined trajectories, which are interpreted by a neural
network and provided as additional context to a policy network.
In general, an environment model is any recurrent architecture which can be trained in an unsupervised
fashion from agent trajectories: given a past state and current action, the environment model predicts
the next state and any number of signals from the environment. In this work, we will consider
in particular environment models that build on recent successes of action-conditional next-step
predictors [20–22], which receive as input the current observation (or history of observations) and
current action, and predict the next observation, and potentially the next reward. We roll out the
environment model over multiple time steps into the future, by initializing the imagined trajectory
with the present time real observation, and subsequently feeding simulated observations into the
model.
The actions chosen in each rollout result from a rollout policy π̂ (explained in Section 3.1). The
environment model together with π̂ constitute the imagination core module, which predicts next time
steps (Fig 1a). The imagination core is used to produce n trajectories Tˆ1 , . . . , Tˆn . Each imagined
trajectory T̂ is a sequence of features (fˆt+1 , . . . , fˆt+τ ), where t is the current time, τ the length
of the rollout, and fˆt+i the output of the environment model (i.e. the predicted observation and/or
reward).
Figure 2: Environment model. The input action is broadcast and concatenated to the observation. A convolutional network transforms this into a pixel-wise probability distribution for the output image, and a distribution for the reward.

Despite recent progress in training better environment models, a key issue addressed by I2As is that a learned model cannot be assumed to be perfect; it might sometimes make erroneous or nonsensical predictions. We therefore do not want to rely solely on predicted rewards (or values predicted from predicted states), as is often done in classical planning. Additionally, trajectories may contain
information beyond the reward sequence (a trajectory could contain an informative subsequence – for
instance solving a subproblem – which did not result in higher reward). For these reasons, we use
a rollout encoder E that processes the imagined rollout as a whole and learns to interpret it, i.e. by
extracting any information useful for the agent’s decision, or even ignoring it when necessary (Fig 1b).
Each trajectory is encoded separately as a rollout embedding ei = E(Tˆi ). Finally, an aggregator A
converts the different rollout embeddings into a single imagination code cia = A(e1 , . . . , en ).
The final component of the I2A is the policy module, which is a network that takes the information
cia from model-based predictions, as well as the output cmf of a model-free path (a network which
only takes the real observation as input; see Fig 1c, right), and outputs the imagination-augmented
policy vector π and estimated value V . The I2A therefore learns to combine information from its
model-free and imagination-augmented paths; note that without the model-based path, I2As reduce to
a standard model-free network [3]. I2As can thus be thought of as augmenting model-free agents by
providing additional information from model-based planning, and as having strictly more expressive
power than the underlying model-free agent.
3 Architectural choices and experimental setup
3.1 Rollout strategy
For our experiments, we perform one rollout for each possible action in the environment. The first
action in the ith rollout is the ith action of the action set A, and subsequent actions for all rollouts are
produced by a shared rollout policy π̂. We investigated several types of rollout policies (random, pretrained) and found that a particularly efficient strategy was to distill the imagination-augmented policy
into a model-free policy. This distillation strategy consists in creating a small model-free network
π̂(ot ), and adding to the total loss a cross entropy auxiliary loss between the imagination-augmented
policy π(ot ) as computed on the current observation, and the policy π̂(ot ) as computed on the same
observation. By imitating the imagination-augmented policy, the internal rollouts will be similar to
the trajectories of the agent in the real environment; this also ensures that the rollout corresponds
to trajectories with high reward. At the same time, the imperfect approximation results in a rollout
policy with higher entropy, potentially striking a balance between exploration and exploitation.
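A rough sketch of this distillation term is given below: a cross-entropy between the imagination-augmented policy π (treated as a fixed target) and the rollout policy π̂ on the same observations. Detaching the target and the 0.01 weight are illustrative assumptions, not prescriptions of the method.

```python
import torch
import torch.nn.functional as F

def distillation_loss(i2a_logits, rollout_logits, weight=0.01):
    """Cross entropy between the imagination-augmented policy pi (target) and
    the small rollout policy pi_hat on the same observations."""
    target = F.softmax(i2a_logits, dim=-1).detach()   # treat pi as a fixed target
    log_rollout = F.log_softmax(rollout_logits, dim=-1)
    return -weight * (target * log_rollout).sum(dim=-1).mean()

# Example with 5 discrete actions and a batch of 32 observations; added to the
# total A3C loss so that internal rollouts imitate the agent's real behaviour.
aux = distillation_loss(torch.randn(32, 5), torch.randn(32, 5))
```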
3.2 I2A components and environment models
In our experiments, the encoder is an LSTM with convolutional encoder which sequentially processes
a trajectory T . The features fˆt are fed to the LSTM in reverse order, from fˆt+τ to fˆt+1 , to mimic
Bellman type backup operations.2 The aggregator simply concatenates the summaries. For the
model-free path of the I2A, we chose a standard network of convolutional layers plus one fully
connected one [e.g. 3]. We also use this architecture on its own as a baseline agent.
Our environment model (Fig. 2) defines a distribution which is optimized by using a negative log-likelihood loss lmodel. We can either pretrain the environment model before embedding it (with frozen
weights) within the I2A architecture, or jointly train it with the agent by adding lmodel to the total
loss as an auxiliary loss. In practice we found that pre-training the environment model led to faster
runtime of the I2A architecture, so we adopted this strategy.
2
The choice of forward, backward or bi-directional processing seems to have relatively little impact on the
performance of the I2A, however, and should not preclude investigating different strategies.
For all environments, training data for our environment model was generated from trajectories of
a partially trained standard model-free agent (defined below). We use partially pre-trained agents
because random agents see few rewards in some of our domains. However, this means we have to
account for the budget (in terms of real environment steps) required to pretrain the data-generating
agent, as well as to then generate the data. In the experiments, we address this concern in two
ways: by explicitly accounting for the number of steps used in pretraining (for Sokoban), or by
demonstrating how the same pretrained model can be reused for many tasks (for MiniPacman).
3.3 Agent training and baseline agents
Using a fixed pretrained environment model, we trained the remaining I2A parameters with asynchronous advantage actor-critic (A3C) [3]. We added an entropy regularizer on the policy π to
encourage exploration and the auxiliary loss to distill π into the rollout policy π̂ as explained above.
We distributed asynchronous training over 32 to 64 workers; we used the RMSprop optimizer [23]. We
report results after an initial round of hyperparameter exploration (details in Appendix A). Learning
curves are averaged over the top three agents unless noted otherwise.
A separate hyperparameter search was carried out for each agent architecture in order to ensure
optimal performance. In addition to the I2A, we ran the following baseline agents (see Appendix B
for architecture details for all agents).
Standard model-free agent. For our main baseline agent, we chose a model-free standard architecture similar to [3], consisting of convolutional layers (2 for MiniPacman, and 3 for Sokoban) followed
by a fully connected layer. The final layer, again fully connected, outputs the policy logits and the
value function. For Sokoban, we also tested a ‘large’ standard architecture, where we double the
number of all feature maps (for convolutional layers) and hidden units (for fully connected layers).
The resulting architecture has a slightly larger number of parameters than I2A.
Copy-model agent. Aside from having an internal environment model, the I2A architecture is very different from that of the standard agent. To verify that the information contained in the
environment model rollouts contributed to an increase in performance, we implemented a baseline
where we replaced the environment model in the I2A with a ‘copy’ model that simply returns the input
observation. Lacking a model, this agent does not use imagination, but uses the same architecture,
has the same number of learnable parameters (the environment model is kept constant in the I2A),
and benefits from the same amount of computation (which in both cases increases linearly with the
length of the rollouts). This model effectively corresponds to an architecture where policy logits and
value are the final output of an LSTM network with skip connections.
4 Sokoban experiments
We now demonstrate the performance of I2A over baselines in a puzzle environment, Sokoban. We
address the issue of dealing with imperfect models, highlighting the strengths of our approach over
planning baselines. We also analyze the importance of the various components of the I2A.
Sokoban is a classic planning problem, where the agent has to push a number of boxes onto given target
locations. Because boxes can only be pushed (as opposed to pulled), many moves are irreversible, and
mistakes can render the puzzle unsolvable. A human player is thus forced to plan moves ahead of time.
We expect that artificial agents will similarly benefit from internal simulation. Our implementation
of Sokoban procedurally generates a new level each episode (see Appendix D.4 for details, Fig. 3
for examples). This means an agent cannot memorize specific puzzles.3 Together with the planning
aspect, this makes for a very challenging environment for our model-free baseline agents, which
solve less than 60% of the levels after a billion steps of training (details below). We provide videos of
agents playing our version of Sokoban online [24].
While the underlying game logic operates in a 10 × 10 grid world, our agents were trained directly
on RGB sprite graphics as shown in Fig. 4 (image size 80 × 80 pixels). There are no aspects of I2As
that make them specific to grid world games.
3 Out of 40 million levels generated, less than 0.7% were repeated. Training an agent on 1 billion frames requires less than 20 million episodes.
Figure 3: Random examples of procedurally generated Sokoban levels. The player (green sprite)
needs to push all 4 boxes onto the red target squares to solve a level, while avoiding irreversible
mistakes. Our agents receive sprite graphics (shown above) as observations.
4.1 I2A performance vs. baselines on Sokoban
Figure 4 (left) shows the learning curves of the I2A architecture and various baselines explained
throughout this section. First, we compare I2A (with rollouts of length 5) against the standard
model-free agent. I2A clearly outperforms the latter, reaching a performance of 85% of levels solved
vs. a maximum of under 60% for the baseline. The baseline with increased capacity reaches 70%, still significantly below I2A. Similarly, for Sokoban, I2A far outperforms the copy-model.
[Figure 4 plots: left panel 'Sokoban performance' (fraction of levels solved vs. environment steps, up to 1e9) with curves for I2A, standard(large), standard, no reward I2A, and copy-model I2A; right panel 'Unroll depth analysis' (fraction of levels solved vs. environment steps, up to 1e9) with curves for unroll depths 15, 5, 3 and 1.]
Figure 4: Sokoban learning curves. Left: training curves of I2A and baselines. Note that I2As use
additional environment observations to pretrain the environment model, see main text for discussion.
Right: I2A training curves for various values of imagination depth.
Since using imagined rollouts is helpful for this task, we investigate how the length of individual
rollouts affects performance. The latter was one of the hyperparameters we searched over. A
breakdown by number of unrolling/imagination steps in Fig. 4 (right) shows that using longer rollouts,
while not increasing the number of parameters, increases performance: 3 unrolling steps improves
speed of learning and top performance significantly over 1 unrolling step, 5 outperforms 3, and as a
test for significantly longer rollouts, 15 outperforms 5, reaching above 90% of levels solved. However,
in general we found diminishing returns with using I2A with longer rollouts. It is noteworthy that
5 steps is relatively small compared to the number of steps taken to solve a level, for which our
best agents need about 50 steps on average. This implies that even such short rollouts can be highly
informative. For example, they allow the agent to learn about moves it cannot recover from (such
as pushing boxes against walls, in certain contexts). Because I2As with rollouts of length 15 are significantly slower, in the rest of this section we choose rollouts of length 5 as our canonical I2A architecture.
In terms of data efficiency, it should be noted that the environment model in the I2A was pretrained
(see Section 3.2). We conservatively measured the total number of frames needed for pretraining to
be lower than 1e8. Thus, even taking pretraining into account, I2A outperforms the baselines after
seeing about 3e8 frames in total (compare again Fig. 4 (left)). Of course, data efficiency is even better
if the environment model can be reused to solve multiple tasks in the same environment (Section 5).
4.2 Learning with imperfect models
One of the key strengths of I2As is being able to handle learned and thus potentially imperfect
environment models. However, for the Sokoban task, our learned environment models actually
perform quite well when rolling out imagined trajectories. To demonstrate that I2As can deal with
less reliable predictions, we ran another experiment where the I2A used an environment model that
had shown much worse performance (due to a smaller number of parameters), with strong artifacts
accumulating over iterated rollout predictions (Fig. 5, left). As Fig. 5 (right) shows, even with such a
clearly flawed environment model, I2A performs similarly well. This implies that I2As can learn to
ignore the latter parts of the rollout as errors accumulate, but still use initial predictions when errors
are less severe. Finally, note that in our experiments, surprisingly, the I2A agent with the poor model ended up outperforming the I2A agent with the good model. We posit this was due to random initialization, though we cannot exclude the noisy model providing some form of regularization; more work will be required to investigate this effect.
[Figure 5 right panel: 'Sokoban good vs. bad models' (fraction of levels solved vs. environment steps, up to 1e9) with curves for I2A: good model, I2A: poor model, MC: good model, and MC: poor model.]
Figure 5: Experiments with a noisy environment model. Left: each row shows an example 5-step
rollout after conditioning on an environment observation. Errors accumulate and lead to various
artefacts, including missing or duplicate sprites. Right: comparison of Monte-Carlo (MC) search and
I2A when using either the accurate or the noisy model for rollouts.
Learning a rollout encoder is what enables I2As to deal with imperfect model predictions. We can
further demonstrate this point by comparing them to a setup without a rollout encoder: as in the
classic Monte-Carlo search algorithm of Tesauro and Galperin [25], we now explicitly estimate the
value of each action from rollouts, rather than learning an arbitrary encoding of the rollouts, as in
I2A. We then select actions according to those values. Specifically, we learn a value function V from
states, and, using a rollout policy π̂, sample a trajectory
rollout for each initial action, and compute
P
the corresponding estimated Monte Carlo return t≤T γ t rta + V (xaT ) where ((xat , rta ))t=0..T comes
from a P
trajectory initialized with action a. Action a is chosen with probability proportional to
exp(−( t=0..T γ t rta + V (xaT ))/δ), where δ is a learned temperature. This can be thought of as a
form of I2A with a fixed summarizer (which computes returns), no model-free path, and very simple
policy head. In this architecture, only V, π̂ and δ are learned.4
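A minimal sketch of this action-selection rule, in plain NumPy and following the expression above (including its sign convention), is:

import numpy as np

def mc_action_probs(rewards_per_action, bootstrap_values, gamma, delta):
    # rewards_per_action[a] lists the rewards r^a_t along the single rollout for initial action a;
    # bootstrap_values[a] is V(x^a_T) for the final state of that rollout
    returns = np.array([
        sum(gamma ** t * r for t, r in enumerate(rs)) + v
        for rs, v in zip(rewards_per_action, bootstrap_values)
    ])
    logits = -returns / delta                      # delta is the (learned) temperature
    probs = np.exp(logits - logits.max())          # numerically stable softmax
    return probs / probs.sum()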
We ran this rollout encoder-free agent on Sokoban with both the accurate and the noisy environment
model. We chose the length of the rollout to be optimal for each environment model (from the same
range as for I2A, i.e. from 1 to 5). As can be seen in Fig. 5 (right),5 when using the high accuracy
environment model, the performance of the encoder-free agent is similar to that of the baseline
standard agent. However, unlike I2A, its performance degrades catastrophically when using the poor
model, showcasing the susceptibility to model misspecification.
4.3 Further insights into the workings of the I2A architecture
So far, we have studied the role of the rollout encoder. To show the importance of various other
components of the I2A, we performed additional control experiments. Results are plotted in Fig. 4
(left) for comparison. First, I2A with the copy model (Section 3.3) performs far worse, demonstrating
that the environment model is indeed crucial. Second, we trained an I2A where the environment
model was predicting no rewards, only observations. This also performed worse. However, after
much longer training (3e9 steps), these agents did recover performance close to that of the original
I2A (see Appendix D.2), which was never the case for the baseline agent even with that many
steps. Hence, reward prediction is helpful but not absolutely necessary in this task, and imagined
observations alone are informative enough to obtain high performance on Sokoban. Note this is in
contrast to many classical planning and model-based reinforcement learning methods, which often
rely on reward prediction.
4 The rollout policy is still learned by distillation from the output policy.
5 Note: the MC curves in Fig. 5 only used a single agent rather than averages.
4.4 Imagination efficiency and comparison with perfect-model planning methods
Method (@ levels solved)    Model simulation steps per level
I2A @ 87%                   ∼ 1400
I2A MC search @ 95%         ∼ 4000
MCTS @ 87%                  ∼ 25000
MCTS @ 95%                  ∼ 100000
Random search               ∼ millions

Table 1: Imagination efficiency of various architectures.

Boxes          1      2     3     4     5     6     7
I2A (%)        99.5   97    92    87    77    66    53
Standard (%)   97     87    72    60    47    32    23

Table 2: Generalization of I2A to environments with different number of boxes.
In previous sections, we illustrated that I2As can be used to efficiently solve planning problems and
can be robust in the face of model misspecification. Here, we ask a different question – if we do
assume a nearly perfect model, how does I2A compare to competitive planning methods? Beyond
raw performance we focus particularly on the efficiency of planning, i.e. the number of imagination
steps required to solve a fixed ratio of levels. We compare our regular I2A agent to a variant of
Monte Carlo Tree Search (MCTS), which is a modern guided tree search algorithm [12, 26]. For
our MCTS implementation, we aimed to have a strong baseline by using recent ideas: we include
transposition tables [27], and evaluate the returns of leaf nodes by using a value network (in this case,
a deep residual value network trained with the same total amount of data as I2A; see appendix D.3
for further details).
Running MCTS on Sokoban, we find that it can achieve high performance, but at a cost of a much
higher number of necessary environment model simulation steps: MCTS reaches the I2A performance
of 87% of levels solved when using 25k model simulation steps on average to solve a level, compared
to 1.4k environment model calls for I2A. Using even more simulation steps, MCTS performance
increases further, e.g. reaching 95% with 100k steps.
If we assume access to a high-accuracy environment model (including the reward prediction), we
can also push I2A performance further, by performing basic Monte-Carlo search with a trained I2A
for the rollout policy: we let the agent play whole episodes in simulation (where I2A itself uses the
environment model for short-term rollouts, hence corresponding to using a model-within-a-model),
and execute a successful action sequence if found, up to a maximum number of retries; this is
reminiscent of nested rollouts [28]. With a fixed maximum of 10 retries, we obtain a score of 95%
(up from 87% for the I2A itself). The total average number of model simulation steps needed to
solve a level, including running the model in the outer loop, is now 4k, again much lower than the
corresponding MCTS run with 100k steps. Note again, this approach requires a nearly perfect model;
we don’t expect I2A with MC search to perform well with approximate models. See Table 1 for a
summary of the imagination efficiency for the different methods.
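The following sketch makes this procedure explicit; all interfaces (i2a_agent.act, env_model.step and its return values) are hypothetical, and the step limit simply mirrors the 120-step episode cap used in our Sokoban environment (Appendix D.1).

def solve_with_mental_retries(i2a_agent, env_model, initial_obs, max_retries=10, max_steps=120):
    for _ in range(max_retries):
        obs, actions = initial_obs, []
        for _ in range(max_steps):
            a = i2a_agent.act(obs)                   # I2A itself uses short internal rollouts
            obs, reward, solved = env_model.step(obs, a)
            actions.append(a)
            if solved:
                return actions                       # execute this sequence in the real environment
    return None                                      # no imagined solution found within the retry budget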
4.5 Generalization experiments
Lastly, we probe the generalization capabilities of I2As, beyond handling random level layouts in
Sokoban. Our agents were trained on levels with 4 boxes. Table 2 shows the performance of I2A
when such an agent was tested on levels with different numbers of boxes, and that of the standard
model-free agent for comparison. We found that I2As generalize well; at 7 boxes, the I2A agent is
still able to solve more than half of the levels, nearly as many as the standard agent on 4 boxes.
5 Learning one model for many tasks in MiniPacman
In our final set of experiments, we demonstrate how a single model, which provides the I2A with a
general understanding of the dynamics governing an environment, can be used to solve a collection
of different tasks. We designed a simple, light-weight domain called MiniPacman, which allows us to
easily define multiple tasks in an environment with shared state transitions and which enables us to
do rapid experimentation.
In MiniPacman (Fig. 6, left), the player explores a maze that contains food while being chased by
ghosts. The maze also contains power pills; when eaten, for a fixed number of steps, the player moves
faster, and the ghosts run away and can be eaten. These dynamics are common to all tasks. Each task
is defined by a vector wrew ∈ R5 , associating a reward to each of the following five events: moving,
eating food, eating a power pill, eating a ghost, and being eaten by a ghost. We consider five different
reward vectors inducing five different tasks. Empirically we found that the reward schemes were
sufficiently different to lead to very different high-performing policies6 (for more details on the game and tasks, see Appendix C).
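Concretely, the per-step task reward can be obtained as a dot product between wrew and an indicator vector of the events that occurred at that step; the minimal sketch below uses the 'Regular' task values listed in Appendix C.

import numpy as np

# event order: [moving, eating food, eating a power pill, eating a ghost, being eaten by a ghost]
w_rew_regular = np.array([0.0, 1.0, 2.0, 5.0, 0.0])

events = np.array([1, 1, 0, 0, 0])             # e.g. the player moved and ate food this step
reward = float(np.dot(w_rew_regular, events))  # task-specific reward for this transition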
To illustrate the benefits of model-based methods in this multi-task setting, we train a single environment model to predict both observations (frames) and events (as defined above, e.g. "eating a ghost").
Note that the environment model is effectively shared across all tasks, so that the marginal cost of
learning the model is nil. During training and testing, the I2As have access to the frame and reward
predictions generated by the model; the latter was computed from model event predictions and the
task reward vector wrew . As such, the reward vector wrew can be interpreted as an ‘instruction’ about
which task to solve in the same environment [cf. the Frostbite challenge of 11]. For a fair comparison,
we also provide all baseline agents with the event variable as input.7
We trained baseline agents and I2As separately on each task. Results in Fig. 6 (right) indicate the
benefit of the I2A architecture, outperforming the standard agent in all tasks, and the copy-model
baseline in all but one task. Moreover, we found that the performance gap between I2As and baselines
is particularly high for tasks 4 & 5, where rewards are particularly sparse, and where the anticipation
of ghost dynamics is especially important. We posit that the I2A agent can leverage its environment
and reward model to explore the environment much more effectively.
Task name             Regular   Avoid   Hunt   Ambush   Rush
Standard model-free   192       -16     -35    -40      1.3
Copy-model            919       3       33     -30      178
I2A                   859       23      334    294      214
Figure 6: Minipacman environment. Left: Two frames from a minipacman game. Frames are 15 × 19
RGB images. The player is green, dangerous ghosts red, food dark blue, empty corridors black,
power pills in cyan. After eating a power pill (right frame), the player can eat the 4 weak ghosts
(yellow). Right: Performance after 300 million environment steps for different agents and all tasks.
Note I2A clearly outperforms the other two agents on all tasks with sparse rewards.
6 Related work
Some recent work has focused on applying deep learning to model-based RL. A common approach is
to learn a neural model of the environment, including from raw observations, and use it in classical
planning algorithms such as trajectory optimization [29–31]. These studies however do not address a
possible mismatch between the learned model and the true environment.
Model imperfection has attracted particular attention in robotics, when transferring policies from
simulation to real environments [32–34]. There, the environment model is given, not learned, and
used for pretraining, not planning at test time. Liu et al. [35] also learn to extract information from
trajectories, but in the context of imitation learning. Bansal et al. [36] take a Bayesian approach to
model imperfection, by selecting environment models on the basis of their actual control performance.
The problem of making use of imperfect models was also approached in simplified environments by Talvitie [18, 19], using techniques similar to scheduled sampling [37]; however these techniques
break down in stochastic environments; they mostly address the compounding error issue but do not
address fundamental model imperfections.
A principled way to deal with imperfect models is to capture model uncertainty, e.g. by using Gaussian
Process models of the environment, see Deisenroth and Rasmussen [15]. The disadvantage of this
method is its high computational cost; it also assumes that the model uncertainty is well calibrated
and lacks a mechanism that can learn to compensate for possible miscalibration of uncertainty. Cutler
et al. [38] consider RL with a hierarchy of models of increasing (known) fidelity. A recent multi-task
6 For example, in the ‘avoid’ game, any event is negatively rewarded, and the optimal strategy is for the agent to clear a small space from food and use it to continuously escape the ghosts.
7 It is not necessary to provide the reward vector wrew to the baseline agents, as it is equivalent to a constant bias.
GP extension of this study can further help to mitigate the impact of model misspecification, but
again suffers from high computational burden in large domains, see Marco et al. [39].
A number of approaches use models to create additional synthetic training data, starting from Dyna
[40], to more recent work e.g. Gu et al. [41] and Venkatraman et al. [42]; these models increase data
efficiency, but are not used by the agent at test time.
Tamar et al. [43], Silver et al. [44], and Oh et al. [45] all present neural networks whose architectures
mimic classical iterative planning algorithms, and which are trained by reinforcement learning or
to predict user-defined, high-level features; in these, there is no explicit environment model. In our
case, we use explicit environment models that are trained to predict low-level observations, which
allows us to exploit additional unsupervised learning signals for training. This procedure is expected
to be beneficial in environments with sparse rewards, where unsupervised modelling losses can
complement return maximization as learning target as recently explored in Jaderberg et al. [46] and
Mirowski et al. [47].
Internal models can also be used to improve credit assignment in reinforcement learning:
Henaff et al. [48] learn models of discrete actions environments, and exploit the effective differentiability of the model with respect to the actions by applying continuous control planning algorithms to
derive a plan; Schmidhuber [49] uses an environment model to turn environment cost minimization
into a network activity minimization.
Kansky et al. [50] learn symbolic network models of the environment and use them for planning,
but are given the relevant abstractions from a hand-crafted vision system.
Close to our work is a study by Hamrick et al. [51]: they present a neural architecture that queries
learned expert models, but focus on meta-control for continuous contextual bandit problems. Pascanu
et al. [52] extend this work by focusing on explicit planning in sequential environments, and learn
how to construct a plan iteratively.
The general idea of learning to leverage an internal model in arbitrary ways was also discussed by
Schmidhuber [53].
7 Discussion
We presented I2A, an approach combining model-free and model-based ideas to implement
imagination-augmented RL: learning to interpret environment models to augment model-free decisions. I2A outperforms model-free baselines on MiniPacman and on the challenging, combinatorial
domain of Sokoban. We demonstrated that, unlike classical model-based RL and planning methods,
I2A is able to successfully use imperfect models (including models without reward predictions),
hence significantly broadening the applicability of model-based RL concepts and ideas.
Like all model-based RL methods, I2As trade off environment interactions for computation by pondering before acting. This is essential in irreversible domains, where actions can have catastrophic
outcomes, such as in Sokoban. In our experiments, the I2A was always less than an order of magnitude slower per interaction than the model-free baselines. The amount of computation can be varied
(it grows linearly with the number and depth of rollouts); we therefore expect I2As to greatly benefit
from advances on dynamic compute resource allocation (e.g. Graves [54]). Another avenue for
future research is on abstract environment models: learning predictive models at the "right" level of
complexity and that can be evaluated efficiently at test time will help to scale I2As to richer domains.
Remarkably, on Sokoban I2As compare favourably to a strong planning baseline (MCTS) with a
perfect environment model: at comparable performance, I2As require far fewer function calls to the
model than MCTS, because their model rollouts are guided towards relevant parts of the state space
by a learned rollout policy. This points to further potential improvement by training rollout policies
that "learn to query" imperfect models in a task-relevant way.
Acknowledgements
We thank Victor Valdes for designing and implementing the Sokoban environment, Joseph Modayil
for reviewing an early version of this paper, and Ali Eslami, Hado Van Hasselt, Neil Rabinowitz,
Tom Schaul, Yori Zwols for various help and feedback.
References
[1] Shane Legg and Marcus Hutter. Universal intelligence: A definition of machine intelligence. Minds and
Machines, 17(4):391–444, 2007.
[2] Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Alex Graves, Ioannis Antonoglou, Daan Wierstra,
and Martin Riedmiller. Playing atari with deep reinforcement learning. arXiv preprint arXiv:1312.5602,
2013.
[3] Volodymyr Mnih, Adria Puigdomenech Badia, Mehdi Mirza, Alex Graves, Timothy Lillicrap, Tim Harley,
David Silver, and Koray Kavukcuoglu. Asynchronous methods for deep reinforcement learning. In
International Conference on Machine Learning, pages 1928–1937, 2016.
[4] John Schulman, Sergey Levine, Pieter Abbeel, Michael Jordan, and Philipp Moritz. Trust region policy
optimization. In Proceedings of the 32nd International Conference on Machine Learning (ICML-15),
pages 1889–1897, 2015.
[5] Demis Hassabis, Dharshan Kumaran, and Eleanor A Maguire. Using imagination to understand the neural
basis of episodic memory. Journal of Neuroscience, 27(52):14365–14374, 2007.
[6] Daniel L Schacter, Donna Rose Addis, Demis Hassabis, Victoria C Martin, R Nathan Spreng, and Karl K
Szpunar. The future of memory: remembering, imagining, and the brain. Neuron, 76(4):677–694, 2012.
[7] Demis Hassabis, Dharshan Kumaran, Seralynne D Vann, and Eleanor A Maguire. Patients with hippocampal amnesia cannot imagine new experiences. Proceedings of the National Academy of Sciences, 104(5):
1726–1731, 2007.
[8] Edward C Tolman. Cognitive maps in rats and men. Psychological Review, 55(4):189, 1948.
[9] Anthony Dickinson and Bernard Balleine. The Role of Learning in the Operation of Motivational Systems.
John Wiley & Sons, Inc., 2002.
[10] Brad E Pfeiffer and David J Foster. Hippocampal place-cell sequences depict future paths to remembered
goals. Nature, 497(7447):74–79, 2013.
[11] Brenden M Lake, Tomer D Ullman, Joshua B Tenenbaum, and Samuel J Gershman. Building machines
that learn and think like people. arXiv preprint arXiv:1604.00289, 2016.
[12] David Silver, Aja Huang, Chris J Maddison, Arthur Guez, Laurent Sifre, George Van Den Driessche, Julian
Schrittwieser, Ioannis Antonoglou, Veda Panneershelvam, Marc Lanctot, et al. Mastering the game of go
with deep neural networks and tree search. Nature, 529(7587):484–489, 2016.
[13] Jing Peng and Ronald J Williams. Efficient learning and planning within the dyna framework. Adaptive
Behavior, 1(4):437–454, 1993.
[14] Pieter Abbeel and Andrew Y Ng. Exploration and apprenticeship learning in reinforcement learning. In
Proceedings of the 22nd international conference on Machine learning, pages 1–8. ACM, 2005.
[15] Marc Deisenroth and Carl E Rasmussen. Pilco: A model-based and data-efficient approach to policy search.
In Proceedings of the 28th International Conference on machine learning (ICML-11), pages 465–472,
2011.
[16] Sergey Levine and Pieter Abbeel. Learning neural network policies with guided policy search under
unknown dynamics. In Advances in Neural Information Processing Systems, pages 1071–1079, 2014.
[17] Timothy P Lillicrap, Jonathan J Hunt, Alexander Pritzel, Nicolas Heess, Tom Erez, Yuval Tassa, David
Silver, and Daan Wierstra. Continuous control with deep reinforcement learning. ICLR, 2016.
[18] Erik Talvitie. Model regularization for stable sample rollouts. In UAI, pages 780–789, 2014.
[19] Erik Talvitie. Agnostic system identification for monte carlo planning. In AAAI, pages 2986–2992, 2015.
[20] Junhyuk Oh, Xiaoxiao Guo, Honglak Lee, Richard L Lewis, and Satinder Singh. Action-conditional video
prediction using deep networks in atari games. In Advances in Neural Information Processing Systems,
pages 2863–2871, 2015.
[21] Silvia Chiappa, Sébastien Racaniere, Daan Wierstra, and Shakir Mohamed. Recurrent environment
simulators. In 5th International Conference on Learning Representations, 2017.
[22] Felix Leibfried, Nate Kushman, and Katja Hofmann. A deep learning approach for joint video frame and reward prediction in atari games. CoRR, abs/1611.07078, 2016. URL http://arxiv.org/abs/1611.07078.
[23] Tijmen Tieleman and Geoffrey Hinton. Lecture 6.5-RMSprop: Divide the gradient by a running average of
its recent magnitude. COURSERA: Neural networks for machine learning, 4(2), 2012.
[24] https://drive.google.com/open?id=0B4tKsKnCCZtQY2tTOThucHVxUTQ, 2017.
[25] Gerald Tesauro and Gregory R Galperin. On-line policy improvement using monte-carlo search. In NIPS,
volume 96, pages 1068–1074, 1996.
[26] Rémi Coulom. Efficient selectivity and backup operators in monte-carlo tree search. In International
Conference on Computers and Games, pages 72–83. Springer, 2006.
[27] Benjamin E Childs, James H Brodeur, and Levente Kocsis. Transpositions and move groups in monte
carlo tree search. In Computational Intelligence and Games, 2008. CIG’08. IEEE Symposium On, pages
389–395. IEEE, 2008.
[28] Christopher D Rosin. Nested rollout policy adaptation for monte carlo tree search. In Ijcai, pages 649–654,
2011.
[29] Manuel Watter, Jost Springenberg, Joschka Boedecker, and Martin Riedmiller. Embed to control: A locally
linear latent dynamics model for control from raw images. In Advances in Neural Information Processing
Systems, pages 2746–2754, 2015.
[30] Ian Lenz, Ross A Knepper, and Ashutosh Saxena. DeepMPC: Learning deep latent features for model
predictive control. In Robotics: Science and Systems, 2015.
[31] Chelsea Finn and Sergey Levine. Deep visual foresight for planning robot motion. In IEEE International
Conference on Robotics and Automation (ICRA), 2017.
[32] Matthew E Taylor and Peter Stone. Transfer learning for reinforcement learning domains: A survey.
Journal of Machine Learning Research, 10(Jul):1633–1685, 2009.
[33] Eric Tzeng, Coline Devin, Judy Hoffman, Chelsea Finn, Xingchao Peng, Sergey Levine, Kate Saenko, and
Trevor Darrell. Towards adapting deep visuomotor representations from simulated to real environments.
arXiv preprint arXiv:1511.07111, 2015.
[34] Paul Christiano, Zain Shah, Igor Mordatch, Jonas Schneider, Trevor Blackwell, Joshua Tobin, Pieter
Abbeel, and Wojciech Zaremba. Transfer from simulation to real world through learning deep inverse
dynamics model. arXiv preprint arXiv:1610.03518, 2016.
[35] YuXuan Liu, Abhishek Gupta, Pieter Abbeel, and Sergey Levine. Imitation from observation: Learning to
imitate behaviors from raw video via context translation. arXiv preprint arXiv:1707.03374, 2017.
[36] Somil Bansal, Roberto Calandra, Ted Xiao, Sergey Levine, and Claire J Tomlin. Goal-driven dynamics
learning via bayesian optimization. arXiv preprint arXiv:1703.09260, 2017.
[37] Samy Bengio, Oriol Vinyals, Navdeep Jaitly, and Noam Shazeer. Scheduled sampling for sequence
prediction with recurrent neural networks. In Advances in Neural Information Processing Systems, pages
1171–1179, 2015.
[38] Mark Cutler, Thomas J Walsh, and Jonathan P How. Real-world reinforcement learning via multifidelity
simulators. IEEE Transactions on Robotics, 31(3):655–671, 2015.
[39] Alonso Marco, Felix Berkenkamp, Philipp Hennig, Angela P Schoellig, Andreas Krause, Stefan Schaal,
and Sebastian Trimpe. Virtual vs. real: Trading off simulations and physical experiments in reinforcement
learning with bayesian optimization. arXiv preprint arXiv:1703.01250, 2017.
[40] Richard S Sutton. Integrated architectures for learning, planning, and reacting based on approximating
dynamic programming. In Proceedings of the seventh international conference on machine learning, pages
216–224, 1990.
[41] Shixiang Gu, Timothy Lillicrap, Ilya Sutskever, and Sergey Levine. Continuous deep q-learning with
model-based acceleration. In International Conference on Machine Learning, pages 2829–2838, 2016.
[42] Arun Venkatraman, Roberto Capobianco, Lerrel Pinto, Martial Hebert, Daniele Nardi, and J Andrew
Bagnell. Improved learning of dynamics models for control. In International Symposium on Experimental
Robotics, pages 703–713. Springer, 2016.
[43] Aviv Tamar, Yi Wu, Garrett Thomas, Sergey Levine, and Pieter Abbeel. Value iteration networks. In
Advances in Neural Information Processing Systems, pages 2154–2162, 2016.
[44] David Silver, Hado van Hasselt, Matteo Hessel, Tom Schaul, Arthur Guez, Tim Harley, Gabriel Dulac-Arnold, David Reichert, Neil Rabinowitz, Andre Barreto, et al. The predictron: End-to-end learning and planning. arXiv preprint arXiv:1612.08810, 2016.
[45] Junhyuk Oh, Satinder Singh, and Honglak Lee. Value prediction network. arXiv preprint arXiv:1707.03497,
2017.
[46] Max Jaderberg, Volodymyr Mnih, Wojciech Marian Czarnecki, Tom Schaul, Joel Z Leibo, David Silver,
and Koray Kavukcuoglu. Reinforcement learning with unsupervised auxiliary tasks. arXiv preprint
arXiv:1611.05397, 2016.
[47] Piotr Mirowski, Razvan Pascanu, Fabio Viola, Hubert Soyer, Andy Ballard, Andrea Banino, Misha Denil,
Ross Goroshin, Laurent Sifre, Koray Kavukcuoglu, et al. Learning to navigate in complex environments.
arXiv preprint arXiv:1611.03673, 2016.
[48] Mikael Henaff, William F Whitney, and Yann LeCun. Model-based planning in discrete action spaces.
arXiv preprint arXiv:1705.07177, 2017.
[49] Jürgen Schmidhuber. An on-line algorithm for dynamic reinforcement learning and planning in reactive
environments. In Neural Networks, 1990., 1990 IJCNN International Joint Conference on, pages 253–258.
IEEE, 1990.
[50] Ken Kansky, Tom Silver, David A Mély, Mohamed Eldawy, Miguel Lázaro-Gredilla, Xinghua Lou, Nimrod
Dorfman, Szymon Sidor, Scott Phoenix, and Dileep George. Schema networks: Zero-shot transfer with a
generative causal model of intuitive physics. Accepted at International Conference for Machine Learning,
2017, 2017.
[51] Jessica B. Hamrick, Andy J. Ballard, Razvan Pascanu, Oriol Vinyals, Nicolas Heess, and Peter W.
Battaglia. Metacontrol for adaptive imagination-based optimization. In Proceedings of the 5th International
Conference on Learning Representations (ICLR 2017), 2017.
[52] Razvan Pascanu, Yujia Li, Oriol Vinyals, Nicolas Heess, David Reichert, Theophane Weber, Sebastien
Racaniere, Lars Buesing, Daan Wierstra, and Peter Battaglia. Learning model-based planning from scratch.
arXiv preprint, 2017.
[53] Jürgen Schmidhuber. On learning to think: Algorithmic information theory for novel combinations of
reinforcement learning controllers and recurrent neural world models. arXiv preprint arXiv:1511.09249,
2015.
[54] Alex Graves. Adaptive computation time for recurrent neural networks. arXiv preprint arXiv:1603.08983,
2016.
[55] Leemon C Baird III. Advantage updating. Technical report, Wright Lab. Technical Report WL-TR-93-1l46.,
1993.
[56] John Schulman, Nicolas Heess, Theophane Weber, and Pieter Abbeel. Gradient estimation using stochastic
computation graphs. In Advances in Neural Information Processing Systems, pages 3528–3536, 2015.
[57] Levente Kocsis and Csaba Szepesvári. Bandit based monte-carlo planning. In European conference on
machine learning, pages 282–293. Springer, 2006.
[58] Sylvain Gelly and David Silver. Combining online and offline knowledge in uct. In Proceedings of the
24th international conference on Machine learning, pages 273–280. ACM, 2007.
[59] Joshua Taylor and Ian Parberry. Procedural generation of sokoban levels. In Proceedings of the International
North American Conference on Intelligent Games and Simulation, pages 5–12, 2011.
[60] Yoshio Murase, Hitoshi Matsubara, and Yuzuru Hiraga. Automatic making of sokoban problems. PRICAI’96: Topics in Artificial Intelligence, pages 592–600, 1996.
Supplementary material for:
Imagination-Augmented Agents
for Deep Reinforcement Learning
A Training and rollout policy distillation details
Each agent used in the paper defines a stochastic policy, i.e. a categorical distribution π(at |ot ; θ) over
discrete actions a. The logits of π(at |ot ; θ) are computed by a neural network with parameters θ,
taking observation ot at timestep t as input. During training, to increase the probability of rewarding
actions being taken, A3C applies an update ∆θ to the parameters θ using policy gradient g(θ):
g(θ) = ∇θ log π(at | ot; θ) A(ot, at)
where A(ot, at) is an estimate of the advantage function [55]. In practice, we learn a value function V(ot; θv) and use it to compute the advantage as the difference between the bootstrapped k-step return and the current value estimate:

A(ot, at) = Σ_{t ≤ t′ ≤ t+k} γ^(t′−t) r_{t′} + γ^(k+1) V(o_{t+k+1}; θv) − V(ot; θv).
The value function V (ot ; θv ) is also computed as the output of a neural network with parameters θv .
The input to the value function network was chosen to be the second to last layer of the policy network
that computes π. The parameters θv are updated with ∆θv towards the bootstrapped k-step return:

g(θv) = −A(ot, at) ∂_{θv} V(ot; θv)
In our numerical implementation, we express the above updates as gradients of a corresponding
surrogate loss [56]. To this surrogate loss, we add an entropy regularizer of λ_ent Σ_{at} π(at|ot; θ) log π(at|ot; θ) to encourage exploration, with λ_ent = 10^−2 throughout all experiments. Where applicable, we add a loss for policy distillation consisting of the cross-entropy between π and π̂:

l_dist(π, π̂)(ot) = λ_dist Σ_a π̄(a|ot) log π̂(a|ot),

with scaling parameter λ_dist. Here π̄ denotes that we do not backpropagate gradients of l_dist with respect to the parameters of the behavioral policy π; only the rollout policy π̂ is updated by this loss. Finally, even though we pre-trained our environment models, in principle we could also learn them jointly with the I2A agent by adding an appropriate log-likelihood term of observations under the model. We will investigate this in future
research. We optimize hyperparameters (learning rate and momentum of the RMSprop optimizer,
gradient clipping parameter, distillation loss scaling λdist where applicable) separately for each agent
(I2A and baselines).
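For illustration, a minimal per-batch sketch of the resulting surrogate loss (assumed PyTorch; the value-loss coefficient is an arbitrary illustrative choice, and the distillation term above would simply be added on top) is:

import torch.nn.functional as F

def a3c_surrogate_loss(logits, values, actions, returns, lambda_ent=1e-2):
    log_probs = F.log_softmax(logits, dim=-1)
    probs = F.softmax(logits, dim=-1)
    advantages = (returns - values).detach()                     # A(o_t, a_t); no gradient into the baseline
    chosen = log_probs.gather(1, actions.unsqueeze(1)).squeeze(1)
    policy_loss = -(chosen * advantages).mean()                  # policy-gradient term
    value_loss = F.mse_loss(values, returns)                     # regression towards the k-step return
    entropy = -(probs * log_probs).sum(dim=-1).mean()
    return policy_loss + 0.5 * value_loss - lambda_ent * entropy # maximizing entropy encourages exploration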
B Agent and model architecture details
We used rectified linear units (ReLUs) between all hidden layers of all our agents. For the environment
models, we used leaky ReLUs with a slope of 0.01.
B.1 Agents
Standard model-free baseline agent
The standard model-free baseline agent, taken from [3], is a multi-layer convolutional neural network
(CNN), taking the current observation ot as input, followed by a fully connected (FC) hidden layer.
This FC layer feeds into two heads: into a FC layer with one output per action computing the policy
logits log π(at |ot , θ); and into another FC layer with a single output that computes the value function
V (ot ; θv ). The sizes of the layers were chosen as follows:
• for MiniPacman: the CNN has two layers, both with 3x3 kernels, 16 output channels and
strides 1 and 2; the following FC layer has 256 units
• for Sokoban: the CNN has three layers with kernel sizes 8x8, 4x4, 3x3, strides of 4, 2, 1 and numbers of output channels 32, 64, 64; the following FC layer has 512 units (a minimal sketch of this architecture is given below)
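A minimal sketch of the Sokoban variant (assumed PyTorch, with unpadded convolutions since the padding scheme is not specified above) is:

import torch.nn as nn
import torch.nn.functional as F

class StandardAgent(nn.Module):
    def __init__(self, num_actions):
        super().__init__()
        self.conv1 = nn.Conv2d(3, 32, kernel_size=8, stride=4)
        self.conv2 = nn.Conv2d(32, 64, kernel_size=4, stride=2)
        self.conv3 = nn.Conv2d(64, 64, kernel_size=3, stride=1)
        self.fc = nn.Linear(64 * 6 * 6, 512)          # an 80x80 input shrinks to 6x6 with these layers
        self.policy = nn.Linear(512, num_actions)     # policy logits head
        self.value = nn.Linear(512, 1)                # value head

    def forward(self, obs):                           # obs: [batch, 3, 80, 80]
        x = F.relu(self.conv1(obs))
        x = F.relu(self.conv2(x))
        x = F.relu(self.conv3(x))
        x = F.relu(self.fc(x.flatten(start_dim=1)))
        return self.policy(x), self.value(x)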
I2A
The model-free path of the I2A consists of a CNN identical to that of the standard model-free baseline
(without the FC layers). The rollout encoder processes each frame generated by the environment
model with another identically sized CNN. The output of this CNN is then concatenated with the
reward prediction (single scalar broadcast into frame shape). This feature is the input to an LSTM
with 512 (for Sokoban) or 256 (for MiniPacman) units. The same LSTM is used to process all 5
rollouts (one per action); the last outputs of the LSTM for all rollouts are concatenated into a single vector cia of length 2560 for Sokoban, and 1280 for MiniPacman. This vector is concatenated with
the output cmf of the model-free CNN path and is fed into the fully connected layers computing policy
logits and value function as in the baseline agent described above.
Copy-model
The copy-model agent has the exact same architecture as the I2A, with the exception of the environment model being replaced by the identity function (constantly returns the input observation).
B.2 Environment models
For the I2A, we pre-train separate auto-regressive models of order 1 for the raw pixel observations of the MiniPacman and Sokoban environments (see Figures 7 and 8). In both cases, the input to the
model consisted of the last observation ot , and a broadcasted, one-hot representation of the last action
at . Following previous studies, the outputs of the models were trained to predict the next frame ot+1
by stochastic gradient descent on the Bernoulli cross-entropy between network outputs and data ot+1.
The Sokoban model is a simplified case of the MiniPacman model; the Sokoban model is nearly
entirely local (save for the reward model), while the MiniPacman model needs to deal with nonlocal
interaction (movement of ghosts is affected by position of Pacman, which can be arbitrarily far from
the ghosts).
MiniPacman model
The input and output frames were of size 15 x 19 x 3 (width x height x RGB). The model is depicted
in figure 7. It consisted of a size preserving, multi-scale CNN architecture with additional fully
connected layers for reward prediction. In order to capture long-range dependencies across pixels,
we also make use of a layer we call pool-and-inject, which applies global max-pooling over each
feature map and broadcasts the resulting values as feature maps of the same size and concatenates the
result to the input. Pool-and-inject layers are therefore size-preserving layers which communicate the
max-value of each layer globally to the next convolutional layer.
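A minimal sketch of the pooling/broadcast/concatenation core of this layer (assumed PyTorch; the surrounding 1x1 convolutions shown in Fig. 7 are omitted) is:

import torch
import torch.nn as nn

class PoolAndInject(nn.Module):
    def forward(self, x):                                                     # x: [batch, channels, height, width]
        pooled = x.max(dim=3, keepdim=True)[0].max(dim=2, keepdim=True)[0]    # global max per feature map
        injected = pooled.expand_as(x)                                        # broadcast back to the full spatial size
        return torch.cat([x, injected], dim=1)                                # size-preserving in height and width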
Sokoban model
The Sokoban model was chosen to be a residual CNN with an additional CNN / fully-connected MLP
pathway for predicting rewards. The input of size 80x80x3 was first processed with convolutions
with a large 8x8 kernel and stride of 8. This reduced representation was further processed with two
size preserving CNN layers before outputting a predicted frame by a 8x8 convolutional layer.
[Figure 7 diagram: the MiniPacman environment model, built from 'basic bloc' convolutional units (1x1, 3x3, 1x1 convolutions with n1, n2, n3 channels), pool-and-inject layers (max-pool over the full frame, tile to 15x19, concat), a softmax output for the predicted frame, and an fc(5) head for the predicted reward; inputs are the current frame and a one-hot action tiled to 15x19.]
Figure 7: The MiniPacman environment model. The overview is given in the right panel with blow-ups of the basic convolutional building block (middle panel) and the pool-and-inject layer (left panel). The basic building block has three hyperparameters n1, n2, n3 determining the number of channels in the convolutions; their numeric values are given in the right panel.
[Figure 8 diagram: the Sokoban environment model, with an 8x8/stride-8 convolution of the concatenated input frame and tiled (80x80) one-hot action, two size-preserving 3x3 convolutional layers, an 8x8 up-scaling convolution with softmax producing the output image, and a max-pooled fc(5) softmax head producing the output reward.]
Figure 8: The Sokoban environment model.
C MiniPacman additional details
MiniPacman is played in a 15 × 19 grid-world. Characters, the ghosts and Pacman, move through
a maze. Wall positions are fixed. At the start of each level, 2 power pills, a number of ghosts, and Pacman are placed at random in the world. Food is found on every square of the maze. The number of ghosts on level k is 1 + ⌊(k − 1)/2⌋ (i.e. (k − 1)/2 rounded down), where k = 1 on the first level.
Game dynamics
Ghosts always move by one square at each time step. Pacman usually moves by one square, except
when it has eaten a power pill, which makes it move by two squares at a time. When moving by 2
squares, if Pacman's new position ends up inside a wall, then it is moved back by one square to get
back to a corridor.
We say that Pacman and a ghost meet when they either end up at the same location, or when their
path crosses (even if they do not end up at the same location). When Pacman moves to a square with
food or a power pill, it eats it. Eating a power pill gives Pacman super powers, such as moving at
double speed and being able to eat ghosts. The effects of eating a power pill last for 19 time steps.
When Pacman meets a ghost, either Pacman dies eaten by the ghost, or, if Pacman has recently eaten
a power pill, the ghost dies eaten by Pacman.
If Pacman has eaten a power pill, ghosts try to flee from Pacman. They otherwise try to chase Pacman.
A more precise algorithm for the movement of a ghost is given below in pseudo code:
Algorithm 1 move ghost
function MOVEGHOST
    Inputs: Ghost object                          ▷ Contains position and some helper methods
    PossibleDirections ← [DOWN, LEFT, RIGHT, UP]
    CurrentDirection ← Ghost.current_direction
    AllowedDirections ← []
    for dir in PossibleDirections do
        if Ghost.can_move(dir) then
            AllowedDirections += [dir]
    if len(AllowedDirections) == 2 then           ▷ We are in a straight corridor, or at a bend
        if Ghost.current_direction in AllowedDirections then
            return Ghost.current_direction
        if opposite(Ghost.current_direction) == AllowedDirections[0] then
            return AllowedDirections[1]
        return AllowedDirections[0]
    else                                          ▷ We are at an intersection
        if opposite(Ghost.current_direction) in AllowedDirections then
            AllowedDirections.remove(opposite(Ghost.current_direction))      ▷ Ghosts do not turn around
        X ← normalise(Pacman.position − Ghost.position)
        DotProducts ← []
        for dir in AllowedDirections do
            DotProducts += [dot_product(X, dir)]
        if Pacman.ate_super_pill then
            return AllowedDirections[argmin(DotProducts)]                    ▷ Away from Pacman
        else
            return AllowedDirections[argmax(DotProducts)]                    ▷ Towards Pacman
Task collection
We used 5 different tasks available in MiniPacman. They all share the same environment dynamics
(layout of maze, movement of ghosts, . . . ), but vary in their reward structure and level termination.
The rewards associated with various events for each task are given in the table below.
Task      At each step   Eating food   Eating power pill   Eating ghost   Killed by ghost
Regular   0              1             2                   5              0
Avoid     0.1            -0.1          -5                  -10            -20
Hunt      0              0             1                   10             -20
Ambush    0              -0.1          0                   10             -20
Rush      0              -0.1          10                  0              0
When a level is cleared, a new level starts. Tasks also differ in the way a level was cleared.
• Regular: level is cleared when all the food is eaten;
• Avoid: level is cleared after 128 steps;
• Hunt: level is cleared when all ghosts are eaten or after 80 steps.
• Ambush: level is cleared when all ghosts are eaten or after 80 steps.
• Rush: level is cleared when all power pills are eaten.
Figure 9: The pink bar appears when Pacman eats a power pill, and it decreases in size over the
duration of the effect of the pill.
There are no lives, and the episode ends when Pacman is eaten by a ghost.
The time left before the effect of the power pill wears off is shown using a pink shrinking bar at the
bottom of the screen as in Fig. 9.
Training curves
[Figure 10 plots: MiniPacman performance (score vs. environment steps, up to 3e8) on the 'regular', 'avoid', 'rush', 'hunt' and 'ambush' tasks, each comparing the standard, copy model and I2A agents.]
Figure 10: Learning curves for different agents and various tasks
D Sokoban additional details
D.1 Sokoban environment
In the game of Sokoban, random actions solve levels only with vanishing probability, leading to extreme exploration issues when tackling the problem with reinforcement learning. To alleviate this issue, we use a shaping reward scheme for our version of Sokoban:
• Every time step, a penalty of -0.1 is applied to the agent.
• Whenever the agent pushes a box on target, it receives a reward of +1.
• Whenever the agent pushes a box off target, it receives a penalty of -1.
• Finishing the level gives the agent a reward of +10 and the level terminates.
The first reward is to encourage agents to finish levels faster, the second to encourage agents to push boxes onto targets, the third to avoid the artificial reward loops that would be induced by repeatedly pushing a box off and on target, and the fourth to strongly reward solving a level. Levels are interrupted after 120 steps (i.e. the agent may bootstrap from a value estimate of the last frame, but the level resets to
a new one). Identical levels are nearly never encountered during training or testing (out of 40 million
levels generated, less than 0.7% were repeated). Note that with this reward scheme, it is always
optimal to solve the level (thus our shaping scheme is valid). An alternative strategy would have been
to have the agent play through a curriculum of increasingly difficult tasks; we expect both strategies
to work similarly.
D.2 Additional experiments
Our first additional experiment compared I2A with and without reward prediction, trained over a
longer horizon. I2A with reward prediction clearly converged shortly after 1e9 steps and we therefore
interrupted training; however, I2A without reward prediction kept increasing performance, and after
3e9 steps, we recover a performance level of close to 80% of levels solved, see Fig. 11.
[Figure 11 plot: 'Sokoban performance' (fraction of levels solved vs. environment steps, up to 3e9) with curves for I2A and no reward I2A.]
Figure 11: I2A with and without reward prediction, longer training horizon.
Next, we investigated the I2A with Monte-Carlo search (using a near perfect environment model
of Sokoban). We let the agent try to solve the levels up to 16 times within its internal model. The
base I2A architecture was solving around 87% of levels; mental retries boosted its performance to
around 95% of levels solved. Although the agent was allowed up to 16 mental retries, in practice
all the performance increase was obtained within the first 10 mental retries. Exact percentage gain
by each mental retry is shown in Fig. 12. Note in Fig. 12, only 83% of the levels are solved on the
first mental attempt, even though the I2A architecture could solve around 87% of levels. The gap is
explained by the use of an environment model: although it looks nearly perfect to the naked eye, the
model is not actually equivalent to the environment.
Figure 12: Gain in percentage by each additional mental retry using a near perfect environment
model.
D.3 Planning with the perfect model and Monte-Carlo Tree Search in Sokoban
We first trained a value network that estimates the value function of a trained model-free policy; to do
this, we trained a model-free agent for 1e9 environment steps. This agent solved close to 60 % of
episodes. Using this agent, we generated 1e8 (frame, return) pairs, and trained the value network to
predict the value (expected return) from the frame; training and test errors were comparable, and we do not expect that increasing the number of training points would have significantly improved the quality of the value network.
The value network architecture is a residual network which stacks one convolutional layer and 3 convolutional blocks with a final fully-connected layer of 128 hidden units. The first convolution is a 1 × 1 convolution with 128 feature maps. Each of the three residual convolutional blocks is composed of three convolutional layers: the first is a 1 × 1 convolution with 32 feature maps, the second a 3 × 3 convolution with 32 feature maps, and the last a 1 × 1 layer with 128 feature maps. To help the
value networks, we trained them not on the pixel representation, but on a 10 × 10 × 4 symbolic
representation.
The trained value network is then employed during search to evaluate leaf-nodes — similar to [12],
replacing the role of traditional random rollouts in MCTS. The tree policy uses [57, 58] with a
fine-tuned exploration constant of 1. Depth-wise transposition tables for the tree nodes are used to
deal with the symmetries in the Sokoban environment. External actions are selected by taking the
max Q value at the root node. The tree is reused between steps by selecting the appropriate subtree
as the root node for the next step.
Reported results are obtained by averaging the results over 250 episodes.
D.4 Level Generation for Sokoban
We detail here our procedural generation of Sokoban levels; we closely follow the methods described in [59, 60].
The generation of a Sokoban level involves three steps: room topology generation, position configuration and room reverse-playing. Topology generation: Given an initial width*height room entirely
constituted by wall blocks, the topology generation consists in creating the ‘empty’ spaces (i.e.
corridors) where boxes, targets and the player can be placed. For this simple random walk algorithm
with a configurable number of steps is applied: a random initial position and direction are chosen.
Afterwards, for every step, the position is updated and, with a probability p = 0.35, a new random
direction is selected. Every ‘visited’ position is emptied together with a number of surrounding wall
blocks, selected by randomly choosing one of the following patterns indicating the adjacent room
blocks to be removed (the darker square represents the reference position, that is, the position being
visited). Note that the room ‘exterior’ walls are never emptied, so from a width×height room only
a (width-2)×(height-2) space can actually be converted into corridors. The random walk approach
guarantees that all the positions in the room are, in principle, reachable by the player. A relatively
small probability of changing the walk direction favours the generation of longer corridors, while
the application of a random pattern favours slightly more convoluted spaces. Position configuration:
Once a room topology is generated, the target locations for the desired N boxes and the player initial
position are randomly selected. There is the obvious prerequisite of having enough empty spaces in
the room to place the targets and the player but no other constraints are imposed in this step.
Reverse playing: Once the topology and targets/player positions are generated, the room is reverse-played. In this case, on each step, the player has eight possible actions to choose from: simply moving
or moving+pulling from a box in each possible direction (assuming for the latter, that there is a box
adjacent to the player position).
Initially the room is configured with the boxes placed over their corresponding targets. From that
position a depth-first search (with a configurable maximum depth) is carried out over the space of
possible moves, by ‘expanding’ each reached player/boxes position by iteratively applying all the
possible actions (which are randomly permuted on each step). An entire tree is not explored as
there are different combinations of actions leading to repeated boxes/player configurations which are
skipped.
Statistics are collected for each boxes/player configuration, which is, in turn, scored with a simple
heuristic:
RoomScore = BoxSwaps × Σ_i BoxDisplacement_i
where BoxSwaps represents the number of occasions in which the player stopped pulling from a
given box and started pulling from a different one, while BoxDisplacement represents the Manhattan
distance between the initial and final position of a given box. Also, whenever a box or the player is placed on top of one of the targets, the RoomScore value is set to 0. While this scoring heuristic does not guarantee the complexity of the generated rooms, it is aimed at a) favouring room configurations where, overall, the boxes are further away from their original positions, and b) increasing the probability of a room requiring a more convoluted combination of box moves to reach a solution (by aiming for solutions with higher BoxSwaps values). This scoring mechanism has empirically proved to generate
levels with a balanced combination of difficulties.
The reverse playing ends when there are no more available positions to explore or when a predefined
maximum number of possible room configurations is reached. The room with the highest RoomScore
is then returned.
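A minimal sketch of this heuristic (plain Python; box_paths holds the initial and final grid position of each box after reverse play, and occupied_targets flags whether a box or the player ended on a target) is:

def room_score(box_swaps, box_paths, occupied_targets):
    if occupied_targets:                     # a box or the player rests on a target: reject this room
        return 0
    total_displacement = sum(
        abs(x0 - x1) + abs(y0 - y1)          # Manhattan distance moved by each box
        for (x0, y0), (x1, y1) in box_paths
    )
    return box_swaps * total_displacement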
Default parameters:
• A maximum of 10 room topologies are tried, and for each of those, 10 boxes/player positionings are retried in case a given combination does not produce rooms with a score > 0.
• The room configuration tree is by default limited to a maximum depth of 300 applied actions.
• The total number of visited positions is by default limited to 1000000.
• Default random-walk steps: 1.5× (room width + room height).
arXiv:1704.01960v1 [physics.med-ph] 29 Mar 2017
A coupled mitral valve – left ventricle model with
fluid-structure interaction
Hao Gao
School of Mathematics and Statistics, University of Glasgow, UK
Liuyang Feng
School of Mathematics and Statistics, University of Glasgow, UK
Nan Qi
School of Mathematics and Statistics, University of Glasgow, UK
Colin Berry
Institute of Cardiovascular and Medical Science, University of Glasgow, UK
Boyce Griffith
Departments of Mathematics and Biomedical Engineering and McAllister Heart Institute,
University of North Carolina, Chapel Hill, NC, USA
Xiaoyu Luo
School of Mathematics and Statistics, University of Glasgow, UK
Abstract
Understanding the interaction between the valves and walls of the heart is important in assessing and subsequently treating heart dysfunction. With advancements in cardiac imaging, nonlinear mechanics and computational techniques, it is now possible to explore the mechanics of valve-heart interactions
using anatomically and physiologically realistic models. This study presents an
integrated model of the mitral valve (MV) coupled to the left ventricle (LV),
with the geometry derived from in vivo clinical magnetic resonance images.
Numerical simulations using this coupled MV-LV model are developed using an
immersed boundary/finite element method. The model incorporates detailed
valvular features, left ventricular contraction, nonlinear soft tissue mechanics,
and fluid-mediated interactions between the MV and LV wall. We use the model
to simulate the cardiac function from diastole to systole, and investigate how
myocardial active relaxation function affects the LV pump function. The results
of the new model agree with in vivo measurements, and demonstrate that the
diastolic filling pressure increases significantly with impaired myocardial active
relaxation to maintain the normal cardiac output. The coupled model has the
potential to advance fundamental knowledge of mechanisms underlying MV-LV
interaction, and help in risk stratification and optimization of therapies for heart
diseases.
Keywords: mitral valve, left ventricle, fluid structure interaction, immersed
boundary method, finite element method, soft tissue mechanics
1. Introduction
The mitral valve (MV) has a complex structure that includes two distinct
asymmetric leaflets, a mitral annulus, and chordae tendineae that connect the
leaflets to papillary muscles that attach to the wall of the left ventricle (LV).
MV dysfunction remains a major medical problem because of its close link to
cardiac dysfunctions leading to morbidity and premature mortality [1].
Computational modelling for understanding the MV mechanics promises
more effective MV repairs and replacement [2, 3, 4, 5]. Biomechanical MV
models have been developed for several decades, progressing from simplified two-dimensional approximations to three-dimensional models and to multi-physics/multi-scale models [6, 7, 8, 9, 10, 11, 12]. Most previous studies were based on
structural and quasi-static analysis applicable to a closed valve [13]; however,
MV function during the cardiac cycle cannot be fully assessed without modelling
the ventricular dynamics and the fluid-structure interaction (FSI) between the
MV, ventricles, and the blood flow [13, 14].
Because of the complex interactions among the MV, the sub-mitral apparatus, the heart walls, and the associated blood flow, few modelling studies have been carried out that integrate the MV and ventricles in a single
model [15, 16, 17]. Kunzelman, Einstein, and co-workers first simulated normal
and pathological mitral function [18, 19, 20] with FSI using LS-DYNA (Livermore Software Technology Corporation, Livermore, CA, USA) by putting the
MV into a straight tube. Using a similar modelling approach, Lau et al. [21] compared MV dynamics with and without FSI, and they found that valvular closure
configuration is different when using the FSI MV model. Similar findings are
reported by Toma et al [22]. Over the last few years, there have also been a
number of FSI valvular models using the immersed boundary (IB) method to
study the flow across the MV [23, 24, 25]. In a series of studies, Toma [26, 22, 27]
developed an FSI MV model based on an in vitro MV experimental system to study
the function of the chordal structure, and good agreement was found between
the computational model and in vitro experimental measurements. However,
none of the aforementioned MV models accounted for the MV interaction with
the LV dynamics. Indeed, Lau et al. [21] found that even with a fixed U-shaped
ventricle, the flow pattern is substantially different from that estimated using
a tubular geometry. Despite the advancements in computational modelling of
individual MV [13, 12] and LV models [28, 29, 30], it remains challenging to develop an integrated MV-LV model which includes the strong coupling between
the valvular deformation and the blood flow. Reasons for this include limited
data for model construction, difficult choices of boundary conditions, and large
computational resources required by these simulations.
Wenk et al. [15] reported a structure-only MV-LV model using LS-DYNA
that included the LV, MV, and chordae tendineae. This model was later extended to study MV stress distributions using a saddle shaped and asymmetric
mitral annuloplasty ring [16]. A more complete whole-heart model was recently
developed using a human cardiac function simulator in the Dassault Systemes’s
Living Heart project [17], which includes four ventricular chambers, cardiac
valves, electrophysiology, and detailed myofibre and collagen architecture. Using the same simulator, the effects of different mitral annulus rings were studied by
Rausch et al. [31]. However, this simulator does not yet account for detailed
FSI.
The earliest valve-heart coupling model that includes FSI is credited to Peskin and McQueen’s pioneering work in the 1970s [32, 33, 34] using the classical
IB approach [35]. Using this same method, Yin et al. [36] investigated fluid vortices associated with the LV motion as a prescribed moving boundary. Recently,
Chandran and Kim [37] reported a prototype FSI model of MV dynamics in a simplified LV chamber during diastolic filling using an immersed interface-like approach. One of the key limitations of these coupled models is the simplified
representation of the biomechanics of the LV wall. To date, no work has reported a coupled MV-LV model with full FSI based on realistic geometry and experimentally-based models of soft tissue mechanics.
This study reports an integrated MV-LV model with FSI derived from in vivo
images of a healthy volunteer. Although some simplifications are made, this is
the first three-dimensional FSI MV-LV model that includes MV dynamics, LV
contraction, and experimentally constrained descriptions of nonlinear soft tissue
mechanics. This work is built on our previous models of the MV [24, 25] and
LV [38, 29]. The model is implemented using a hybrid immersed boundary
method with finite element elasticity (IB/FE) [39].
2. Methodology
2.1. IB/FE Framework
The coupled MV-LV model employs an Eulerian description for the blood,
which is modelled as a viscous incompressible fluid, along with a Lagrangian
description for the structure immersed in the fluid. The fixed physical coordinates are x = (x1 , x2 , x3 ) ∈ Ω, and the Lagrangian reference coordinate system
is X = (X1 , X2 , X3 ) ∈ U . The exterior unit normal along ∂U is N(X). Let
χ(X, t) denote the physical position of any material point X at time t, so that
χ(U, t) = Ωs (t) is the physical region occupied by the immersed structure. The
IB/FE formulation of the FSI system reads
ρ [ ∂u(x, t)/∂t + u(x, t) · ∇u(x, t) ] = −∇p(x, t) + µ∇²u(x, t) + f^s(x, t),   (1)

∇ · u(x, t) = 0,   (2)

f^s(x, t) = ∫_U ∇ · P^s(X, t) δ(x − χ(X, t)) dX − ∫_{∂U} P^s(X, t) N(X) δ(x − χ(X, t)) dA(X),   (3)

∂χ(X, t)/∂t = ∫_Ω u(x, t) δ(x − χ(X, t)) dx,   (4)
where ρ is the fluid density, µ is the fluid viscosity, u is the Eulerian velocity, p is
the Eulerian pressure, and f s is the Eulerian elastic force density. Different from
the classical IB approach [35], here the elastic force density f s is determined from
the first Piola-Kirchhoff stress tensor of the immersed structure P^s as in Eq. 3.
This allows the solid deformations to be described using nonlinear soft tissue
constitutive laws. Interactions between the Lagrangian and Eulerian fields are
achieved by integral transforms with a Dirac delta function kernel δ(x) [35] in
Eqs. 3 and 4. For more details of the hybrid IB/FE framework, please refer to [39].
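For illustration, a one-dimensional sketch of the two integral transforms of Eqs. 3 and 4, using Peskin's four-point regularized delta function; this is a toy discretisation, not the IBAMR implementation used in this study:

```python
import numpy as np

def peskin_delta(r):
    """Peskin's regularized 4-point delta kernel (argument in grid units)."""
    r = abs(r)
    if r < 1.0:
        return (3.0 - 2.0 * r + np.sqrt(1.0 + 4.0 * r - 4.0 * r * r)) / 8.0
    if r < 2.0:
        return (5.0 - 2.0 * r - np.sqrt(-7.0 + 12.0 * r - 4.0 * r * r)) / 8.0
    return 0.0

def spread_force(X, F, grid_x, h):
    """Spread Lagrangian point forces F at positions X onto the Eulerian grid (cf. Eq. 3)."""
    f = np.zeros_like(grid_x)
    for Xk, Fk in zip(X, F):
        f += Fk * np.array([peskin_delta((x - Xk) / h) for x in grid_x]) / h
    return f

def interp_velocity(X, u, grid_x, h):
    """Interpolate the Eulerian velocity u back to the Lagrangian points (cf. Eq. 4)."""
    return np.array([
        sum(uk * peskin_delta((x - Xk) / h) for x, uk in zip(grid_x, u))
        for Xk in X
    ])
```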
2.2. MV-LV Model Construction
A cardiac magnetic resonance (CMR) study was performed on a healthy
volunteer (male, age 28). The study was approved by the local NHS Research
Ethics Committee, and written informed consent was obtained before the CMR
scan. Twelve imaging planes along the LV outflow tract (LVOT) view were
imaged to cover the whole MV region shown in Fig. 1(a). LV geometry and
function was imaged with conventional short-axis and long-axis cine images.
The parameters for the LVOT MV cine images were: slice thickness: 3 mm with
0 gap, in-plane pixel size: 0.7×0.7 mm2 , field of view: 302 × 400 mm2 , frame
rate: 25 per cardiac cycle. Short-axis cine images covered the LV region from
the basal plane to the apex, with slice thickness: 7 mm with 3 mm gap, in-plane
pixel size: 1.3 × 1.3 mm2 , and frame rate: 25 per cardiac cycle.
The MV geometry was reconstructed from LVOT MV cine images at early-diastole, just after the MV opens. The leaflet boundaries were manually delineated from MR images, as shown in Fig. 1(a), in which the heads of papillary
muscle and the annulus ring were identified as shown in Fig. 1(b). The MV
geometry and its sub-valvular apparatus were reconstructed using SolidWorks
(Dassault Systèmes SolidWorks Corporation, Waltham, MA, USA). Because it
is difficult to see the chordal structure in the CMR images, we modelled the chordae
structure using sixteen evenly distributed chordae tendineae running through
the leaflet free edges to the annulus ring, as shown in Fig. 1(c), following prior
studies [25, 24]. In a similar approach to the MV reconstruction, the LV geometry was reconstructed from the same volunteer at early-diastole by using both
the short-axis and long-axis cine images [40, 29]. Fig. 1(d) shows the inflow and
outflow tracts from one MR image. The LV wall was assembled from the short
and long axis MR images (Fig. 1(e)) to form the three-dimensional reconstruction (Fig. 1(f)). The LV model was divided into four regions: the LV region, the valvular region, and the inflow and outflow tracts, as shown in Fig. 1(g).
The MV model was mounted into the inflow tract of the LV model according
to the relative positions derived from the MR images in Fig. 1(g). The left
atrium was not reconstructed but modelled as a tubular structure, the gap
between the MV annulus ring and the LV model was filled using a housing
disc structure. A three-element Windkessel model was attached to the outflow
tract of the LV model to provide physiological pressure boundary conditions
when the LV is in systolic ejection [40]. The chordae were not directly attached
to the LV wall since the papillary muscles were not modelled, similar to [25].
The myocardium has a highly layered myofibre architecture, which is usually
described using a fibre-sheet-normal (f , s, n) system. A rule-based method was
used to construct the myofibre orientation within the LV wall. The myofibre
angle was assumed to rotate from −60° to 60° from endocardium to epicardium,
represented by the red arrows in Fig. 1(h). In a similar way, the collagen fibres
in the MV leaflets were assumed to be circumferentially distributed, parallel
along the annulus ring, represented by the yellow arrows in Fig. 1(h).
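As a sketch, the rule-based fibre assignment can be implemented by interpolating the helix angle across the wall thickness and rotating the local circumferential direction towards the longitudinal one; the linear interpolation and the sign convention below are assumptions, since the text only specifies the endocardial and epicardial angles:

```python
import numpy as np

def myofibre_angle(transmural_depth, endo_angle=-60.0, epi_angle=60.0):
    """Rule-based myofibre helix angle (degrees), varying linearly from
    the endocardium (depth = 0) to the epicardium (depth = 1)."""
    return endo_angle + (epi_angle - endo_angle) * transmural_depth

def fibre_direction(circ, longit, depth):
    """Rotate the local circumferential direction towards the longitudinal
    direction by the rule-based angle to obtain the fibre direction f0."""
    theta = np.radians(myofibre_angle(depth))
    f0 = np.cos(theta) * np.asarray(circ) + np.sin(theta) * np.asarray(longit)
    return f0 / np.linalg.norm(f0)
```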
2.3. Soft Tissue Mechanics
The total Cauchy stress (σ) in the coupled MV-LV system is
σ(x, t) = σ^f(x, t) + σ^s(x, t) for x ∈ Ω_s(t), and σ(x, t) = σ^f(x, t) otherwise,   (5)
where σ f is the fluid-like stress tensor, defined as
σ^f(x, t) = −p I + µ[∇u + (∇u)^T].   (6)
σ^s is the solid stress tensor obtained from the nonlinear soft tissue constitutive laws. The first Piola-Kirchhoff stress tensor P^s in Eq. 3 is related to σ^s through

P^s = J σ^s F^{−T},   (7)
in which F = ∂χ/∂X is the deformation gradient and J = det(F).
In the MV-LV model, we assume that the structure below the LV base is contractile (Fig. 1(g)), while the regions above the LV basal plane, including the MV and its apparatus, are passive. Namely,

P^s = P^p + P^a below the basal plane, and P^s = P^p above the basal plane,   (8)
where Pa and Pp are the active and passive Piola-Kirchhoff stress tensors, respectively. The MV leaflets are modelled as an incompressible fibre-reinforced
material with the strain energy function
W_MV = C1 (I1 − 3) + (a_v / 2b_v) (exp[b_v (max(I_fc, 1) − 1)²] − 1),   (9)
in which I1 = trace(C) is the first invariant of the right Cauchy-Green deformation tensor C = F^T F, I_fc = f_0^c · (C f_0^c) is the squared stretch along the collagen fibre direction, and f_0^c denotes the collagen fibre orientation in the reference
configuration. The max() function ensures the embedded collagen network only
bears the loads when stretched, but not in compression. C1 , av , and bv are
material parameters adopted from a prior study [25] and listed in Table 1. The
passive stress tensor Pp in the MV leaflets is
P^p = ∂W_MV/∂F − C1 F^{−T} + β_s log(I3) F^{−T},   (10)
where I3 = det(C), and β_s is a bulk modulus enforcing the incompressibility of the immersed solid, so that the pressure-like term C1 F^{−T} ensures the elastic stress
response is zero when F = I.
We model the chordae tendineae as a neo-Hookean material,

W_chordae = C (I1 − 3),   (11)
where C is the shear modulus. We further assume C is much larger in systole
when the MV is closed than in diastole when the valve is opened. The much
larger value of C models the effects of papillary muscle contraction. Values of
C are listed in Table 1. Pp for the chordae tendineae is similarly derived as in
Eq. 10.
The passive response of the LV myocardium is described using the Holzapfel-Ogden model [41],

W_myo = (a / 2b) exp[b (I1 − 3)] + Σ_{i=f,s} (a_i / 2b_i) {exp[b_i (max(I_4i, 1) − 1)²] − 1} + (a_fs / 2b_fs) {exp[b_fs (I_8fs)²] − 1},   (12)
in which a, b, a_f, b_f, a_s, b_s, a_fs, b_fs are the material parameters, and I_4f, I_4s and I_8fs are the strain invariants related to the myofibre orientations. Denoting the myofibre direction in the reference state by f_0 and the sheet direction by s_0, we have

I_4f = f_0 · (C f_0),  I_4s = s_0 · (C s_0),  and  I_8fs = f_0 · (C s_0).   (13)
The myocardial active stress is defined as
P^a = J T F f_0 ⊗ f_0,   (14)
where T is the active tension described by the myofilament model of Niederer et
al. [42], using a set of ordinary differential equations involving the intracellular
calcium transient (Ca2+ ), sarcomere length and the active tension at the resting
sarcomere length (T req ). In our simulations, we use the same parameters as in
ref. [42], except that T_req is adjusted to yield realistic contraction for the imaged volunteer.
All the constitutive parameters in Eqs. 9, 11 and 12 are summarized in Table 1.
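For illustration, a sketch evaluating the strain-energy functions of Eq. 9 and the isotropic term of Eq. 12 for a given deformation gradient, using the anterior-leaflet parameters of Table 1. Treating b_v as dimensionless and the 10% stretch example are assumptions, and the full passive stress of Eq. 10 would additionally require differentiating W with respect to F:

```python
import numpy as np

# Anterior MV leaflet (Table 1): C1 and a_v in Pa; b_v treated as dimensionless here.
C1, a_v, b_v = 17.4e3, 31.3e3, 55.93
# Myocardium isotropic term (Table 1): a in Pa, b dimensionless.
a, b = 0.24e3, 5.08

def mv_leaflet_energy(F, f0c):
    """Strain energy of Eq. (9) for deformation gradient F and collagen direction f0c."""
    C = F.T @ F
    I1 = np.trace(C)
    I_fc = f0c @ (C @ f0c)                 # squared stretch along the collagen fibres
    I_fc_star = max(I_fc, 1.0)             # collagen bears load only when stretched
    return C1 * (I1 - 3.0) + a_v / (2.0 * b_v) * (np.exp(b_v * (I_fc_star - 1.0) ** 2) - 1.0)

def myo_isotropic_energy(F):
    """Isotropic part of the Holzapfel-Ogden energy of Eq. (12)."""
    I1 = np.trace(F.T @ F)
    return a / (2.0 * b) * np.exp(b * (I1 - 3.0))

# Example: 10% isochoric uniaxial stretch along the collagen direction.
F = np.diag([1.10, 1.0 / np.sqrt(1.10), 1.0 / np.sqrt(1.10)])
print(mv_leaflet_energy(F, np.array([1.0, 0.0, 0.0])), myo_isotropic_energy(F))
```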
2.4. Boundary Conditions and Model Implementation
Because only the myocardium below the LV basal plane contracts, we constrain the circumferential and longitudinal displacements of the LV basal plane but allow radial expansion. The myocardium below the LV basal plane is left free
to move. The valvular region is assumed to be much softer than the LV region.
In diastole, a maximum displacement of 6 mm is allowed in the valvular region
using a tethering force. In systole, the valve region is gradually pulled back to
the original position. The inflow and outflow tracts are fixed. Because the MV annulus ring is attached to a housing structure which is fixed, no additional boundary conditions are applied to the MV annulus ring. Fluid boundary conditions are applied to the top planes of the inflow and outflow tracts. The function
of the aortic valve is modelled simply: the aortic valve is either fully opened
or fully closed, determined by the pressure difference between the values inside
the LV chamber and the aorta. After end-diastole, the LV region contracts, triggered by a spatially homogeneous prescribed intracellular
Ca2+ transient [29], as shown in Fig. 3. The flow boundary conditions in a
cardiac cycle are summarized below.
• Diastolic filling: A linearly ramped pressure from 0 to a population-based end-diastolic pressure (EDP=8 mmHg) is applied to the inflow tract
over 0.8 s, which is slightly longer than the actual diastolic duration of
the imaged volunteer (0.6 s). In diastole about 80% of diastolic filling
volume is due to the sucking effect of the left ventricle in early-diastole [43].
This negative pressure field inside the LV cavity is due to the myocardial
relaxation. We model this sucking effect using an additional pressure
loading applied to the endocardial surface, denoted as Pendo , which is
linearly ramped from 0 to 12 mmHg over 0.4 s, and then linearly decreased
to zero at end-diastole. The value of Pendo is chosen by matching the
simulated end-diastolic volume to the measured data from CMR images.
Blood flow is not allowed to move out of the LV cavity through the inflow
tract in diastole. Zero flow boundary conditions are applied to the top
plane of the outflow tract.
• Iso-volumetric contraction: Along the top plane of the inflow tract, the
EDP loading is maintained, but we allow free fluid flow in and out of the
inflow tract. Zero flow boundary conditions are retained for the outflow
tract. The duration of the iso-volumetric contraction is determined by
the myocardial contraction and ends when the aortic valve opens. The
aortic valve opens when the LV pressure is higher than the pressure in the
aorta, which is initially set to be the cuff-measured diastolic pressure in
the brachial artery, 85 mmHg.
• Systolic ejection: When the aortic valve opens, a three-element Windkessel model is coupled to the top plane of the outflow tract to provide afterload (a minimal sketch follows this list). The volumetric flow rate across the top plane of the outflow tract is calculated from the three-dimensional MV-LV model, and fed into
the Windkessel model [44], which returns an updated pressure for the outflow tract in the next time step. The systolic ejection phase ends when
the left ventricle cannot pump any flow through the outflow tract, and the
Windkessel model is detached.
• Iso-volumetric relaxation: Zero flow boundary conditions are applied
to both the top planes of the outflow and inflow tracts until the total cycle
ends at 1.2 seconds.
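As referenced in the systolic-ejection item above, a minimal sketch of a three-element Windkessel update; the parameter values and the explicit time stepping are illustrative assumptions, not those of [44] or of the actual coupling code:

```python
def windkessel_step(P_wk, Q_lv, dt, R_c=0.05, R_p=1.0, C_a=1.5):
    """Advance the stored (Windkessel) pressure P_wk [mmHg] one time step given the
    LV outflow Q_lv [mL/s]; returns the new stored pressure and the aortic pressure
    applied to the outflow tract. R_c, R_p in mmHg*s/mL, C_a in mL/mmHg (illustrative)."""
    dP = (Q_lv - P_wk / R_p) / C_a        # inflow charges the compliance, R_p drains it
    P_wk_new = P_wk + dt * dP
    P_aorta = P_wk_new + R_c * Q_lv       # proximal resistance adds a flow-dependent drop
    return P_wk_new, P_aorta

# During ejection the 3D model supplies Q_lv each time step and receives P_aorta back:
P_wk = 85.0                               # initialised to the cuff-measured diastolic pressure
# P_wk, P_aorta = windkessel_step(P_wk, Q_lv, dt)
```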
The coupled MV-LV model is immersed in a 17 cm × 16 cm × 16 cm fluid box. A basic time step size ∆t0 = 1.22 × 10−4 s is used in the diastolic and relaxation phases, a reduced time step size (0.25 ∆t0) is used in the early systole
with a duration of 0.1 s, and an even smaller time step of 0.125 ∆t0 is used in
the remainder of the systolic phase. Because explicit time stepping is used in
the numerical simulations [39], we need to use a time step size small enough to
avoid numerical instabilities, particularly during the systolic phase to resolve the
highly dynamic LV deformation. The MV-LV model is implemented using the
open-source IBAMR software framework (https://github.com/IBAMR/IBAMR),
which provides an adaptive and distributed-memory parallel implementation of
the IB methods.
3. Results
Fig. 2 shows the computed volumetric flow rates across the MV and the
AV from beginning of diastole to end-systole. In diastole, the volumetric flow
rate across the MV linearly increases with Pendo , with a maximum value of
210 mL/s at 0.4 s. Diastolic filling is maintained by the increased pressure in
the inflow tract, but with decreased flow rates until end of diastole at 0.8 s. The
negative flow rate in Fig. 2 indicates the flow is entering the LV chamber. After
end-diastole, the myocardium starts to contract, and the central LV pressure
increases until it exceeds the aortic pressure (initially set to be 85 mmHg) at
0.857 s. During iso-volumetric contraction, the MV closes with a total closure
regurgitation flow of 7.2 mL, around 10% of the total filling volume, which is
comparable to the value reported by Laniado et al. [45]. There is only minor
regurgitation across the MV during systolic ejection after the iso-volumetric
contraction phase. Blood is then ejected out of the ventricle through the AV,
and the flow rate across the AV during systole reaches a peak value of 468 mL/s
(Fig. 2). The total ejection duration is 243 ms with a stroke volume of 63.2 mL.
The total blood ejected out of the LV chamber, including the regurgitation
across the MV, is 72.1 mL, which corresponds to an ejection fraction of 51%.
Fig. 3 shows the profiles of the normalized intracellular Ca2+ , LV cavity
volume, central LV pressure, and the average myocardial active tension from
diastole to systole. Until mid-diastole (0 s to 0.56 s), the central LV pressure
is negative, and the associated diastolic filling volume is around 65 mL, which
is 90% of the total diastolic filling volume. In late-diastole, the LV pressure
becomes positive. There is a delay between the myocardial active tension and
the intracellular Ca2+ profile, but the central LV pressure follows the active
tension closely throughout the cycle as shown in Fig 3.
Fig. 4 shows the deformed MV leaflets along with the corresponding CMR
cine images at early-diastole (the reference state), end-diastole, and mid-systole.
In general, the in vivo MV and LV dynamics from diastole to systole are qualitatively captured well by the coupled MV-LV model. However, a discrepancy
is observed during the diastolic filling, when the MV orifice in the model is
not opened as widely as in the CMR cine image (Fig. 4(b)). In addition, the
modelled MV leaflets have small gaps near the commissure areas even in the
fully closed state. This is partially caused by the finite size of the regularized
delta function at the interface and uncertainties in MV geometry reconstruction
using CMR images.
Figs. 5(a, b, c, d) show the streamlines at early-diastolic filling, at late-diastolic filling, when the MV is closing (iso-volumetric contraction), and at mid-systolic ejection. During diastolic filling (Fig. 5(a)), the blood flows directly through the MV into the LV chamber towards the LV apex; in late-diastole (Fig. 5(b)), the flow pattern becomes highly complex. When iso-volumetric contraction ends, the MV is pushed back towards the left atrium. In mid-systole, the blood is pumped out of the LV chamber through
the aortic valve into the systemic circulation, forming a strong jet as shown in
Fig. 5 (d).
The LV systolic strain relative to end-diastole is shown in Fig. 6(a); it is negative throughout most of the region except near the basal plane, where the LV motion is artificially constrained in the model. The average myocardial strain along the myofibre direction is -0.162±0.05. Fig. 6(b) shows the fibre strain in the MV leaflets at end-diastole: the leaflets are mostly slightly stretched during the diastolic filling. In systole, because of the much higher pressure in the LV, the leaflets are pushed towards the left atrial side, as shown in Fig. 6(c). Near the leaflet tips and the commissure areas, the leaflets are highly compressed, while in the trigones near the annulus ring, the leaflets are stretched.
From Fig. 3, one can see that the applied endocardial pressure (Pendo ) creates a negative pressure inside the LV chamber, similar to the effects of the
myocardial active relaxation. We further investigate how Pendo affects the MV-LV dynamics by varying its value from 8 mmHg to 16 mmHg, and the effects
without Pendo but with an increased EDP from 8 mmHg to 20 mmHg. We observe that with an increased Pendo , the peak flow rate across the MV during the
filling phase becomes higher with more ejected volume through the aortic valve.
We also have a longer ejection duration, shorter iso-volumetric contraction time,
and higher ejection fraction as a result of increasing Pendo . On the other hand,
if we don’t apply Pendo , a much greater and nonphysiological EDP is needed for
the required ejection fraction. For example, with EDP=8 mmHg, the ejection
fraction is only 29%. Only when EDP=20 mmHg, the pump function is comparable to the case with EDP=8 mmHg and Pendo = 16 mmHg. These results are
summarized in Table 2.
4. Discussion
This study demonstrates the feasibility of integrating a MV model with a
LV model from a healthy volunteer based on in vivo CMR images. This is the
first physiologically based MV-LV model with fluid structure interaction that
includes nonlinear hyperelastic constitutive modelling of the soft tissue. The
coupled MV-LV model is used to simulate MV dynamics, LV wall deformation,
myocardial active contraction, as well as intraventricular flow. The modelling
results are in reasonable quantitative agreement with in vivo measurements
and clinical observations. For example, the peak aortic flow rate is 468 mL/s,
close to the measured peak value (498 mL/s); the ejection duration is 243 ms,
and the measured value is around 300 ms; the peak LV pressure is 162 mmHg,
comparable to the cuff-measured peak blood pressure 150 mmHg; the average
LV systolic strain is around -0.16, which also lies in the normal range of healthy
subjects [46].
Diastolic heart failure is usually associated with impaired myocardial relaxation and increased filling pressure [47, 48]. In this study, we model the effects of
myocardial relaxation by applying an endocardial surface pressure Pendo . Specifically we can enhance or suppress the myocardial relaxation by adjusting Pendo .
Our results (Table 2) show that, with an enhanced myocardial relaxation,
say, when Pendo ≥ 12 mmHg, there is more filling during diastole, compared to
the cases when Pendo < 12 mmHg under the same EDP. This in turn gives rise
to higher ejection fraction and stroke volume. However, if myocardial relaxation is suppressed, diastolic filling is less efficient, with subsequently smaller
ejection fraction and stroke volume. In the extreme case, when the myocardial relaxation is entirely absent, chamber volume increases by only 29.5 mL,
and ejection fraction decreases to 29%. To maintain stroke volume obtained for
Pendo =12 mmHg, EDP needs to be as high as 20 mmHg. Indeed, increased EDP
due to an impaired myocardial relaxation has been reported in a clinical study
by Zile et al. [48]. A higher EDP indicates an elevated filling pressure throughout the filling phase. Increased filling pressure can help to maintain a normal
filling volume and ejection fraction, but runs the risks of ventricular dysfunction
in the longer term, because pump failure will occur if no other compensation
mechanism exists.
During diastole, the MV-LV model seems to yield a smaller orifice compared
to the corresponding CMR images. In our previous study [25], the MV was mounted in a rigid straight tube, the peak diastolic filling pressure was around 10 mmHg, and the peak flow rate across the MV was comparable to the measured value (600 mL/s). In this coupled MV-LV model, however, even with the additional Pendo, the peak flow rate (200 mL/s) is much less than the measured value. One reason is the extra resistance from the LV wall, which is absent in the MV-tube model [25]. The diastolic phase can be divided into
three phases [43]: rapid filling, slow filling, and atrial contraction. During rapid filling, the transvalvular flow results from myocardial relaxation (the
sucking effect), which contributes to 80% of the total transvalvular flow volume.
During slow filling and atrial contraction, the left atrium needs to generate a
higher pressure to provide additional filling. In the coupled MV-LV model,
the ramped pressure in the top plane of the inflow tract during late-diastole is
related to the atrial contraction, and during this time, only 10% of the total
transvalvular flow occurs. However, the peak flow rate in the rapid filling phase is much lower than the measured value, which suggests that the myocardial relaxation should be much stronger.
In a series of studies based on in vitro µCT experiments, Toma [26, 27, 22]
suggested that MV models with simplified chordal structure would not compare
well with experimental data, and that a subject-specific 3D chordal structure
is necessary. This may explain some of the discrepancies we observed here. A
simplified chordal structure is used in this study because we are unable to reconstruct the chordal structure from the CMR data. CT imaging may allow the
chordae reconstruction but it comes with radiation risk. Patient-specific chordal
structure in the coupled MV-LV model would require further improvements of
in vivo imaging techniques.
Several other limitations in the model may also contribute to the discrepancies. These include the uncertainty of patient-specific parameter identification,
uncertainties in MV geometry reconstruction from CMR images, the passive
response assumption around the annulus ring and the valvular region of the LV
model, and the lack of pre-strain effects. Studies addressing these issues are
already under way. We expect that further improvement in personalized modelling and more efficient high performance computing would make the modelling
more physiologically detailed yet fast enough for applications in risk stratification and optimization of therapies in heart diseases.
5. Conclusion
We have developed the first fully coupled MV-LV model that includes fluid-structure interaction as well as experimentally constrained descriptions of the
soft tissue mechanics. The model geometry is derived from in vivo magnetic resonance images of a healthy volunteer. It incorporates three-dimensional finite
element representations of the MV leaflets, sub-valvular apparatus, and the LV
geometry. Fibre-reinforced hyperelastic constitutive laws are used to describe
the passive response of the soft tissues, and the myocardial active contraction is
also modelled. The developed MV-LV model is used to simulate MV dynamics,
LV wall deformation, and ventricular flow throughout the cardiac cycle. Despite
several modelling limitations, most of the results agree with in vivo measurements. We find that with impaired myocardial active relaxation, the diastolic
filling pressure needs to increase significantly in order to maintain a normal cardiac output, consistent with clinical observations. The model thereby represents
a further step towards whole-heart multiphysics modelling aimed at clinical applications.
Acknowledgement
We are grateful for the funding from the UK EPSRC (EP/N014642/1, and
EP/I029990/1) and the British Heart Foundation (PG/14/64/31043), and the
National Natural Science Foundation of China (No. 11471261). In addition,
Feng received the China Scholarship Council Studentship and the Fee Waiver
Programme at the University of Glasgow, Luo is funded by a Leverhulme Trust
Fellowship (RF-2015-510), and Griffith is supported by the National Science
Foundation (NSF award ACI 1450327) and the National Institutes of Health
(NIH award HL117063).
Conflict of interest
The authors have no conflicts of interest.
References
References
[1] A. S. Go, D. Mozaffarian, V. L. Roger, E. J. Benjamin, J. D. Berry, M. J.
Blaha, S. Dai, E. S. Ford, C. S. Fox, S. Franco, et al., Heart disease and
stroke statistics-2014 update, Circulation 129 (3).
[2] M. S. Sacks, W. David Merryman, D. E. Schmidt, On the biomechanics of
heart valve function, Journal of biomechanics 42 (12) (2009) 1804–1824.
[3] E. Votta, T. B. Le, M. Stevanella, L. Fusini, E. G. Caiani, A. Redaelli,
F. Sotiropoulos, Toward patient-specific simulations of cardiac valves:
State-of-the-art and future directions, Journal of biomechanics 46 (2)
(2013) 217–228.
[4] W. Sun, C. Martin, T. Pham, Computational modeling of cardiac valve
function and intervention, Annual review of biomedical engineering 16
(2014) 53–76.
[5] A. Kheradvar, E. M. Groves, L. P. Dasi, S. H. Alavi, R. Tranquillo, K. J.
Grande-Allen, C. A. Simmons, B. Griffith, A. Falahatpisheh, C. J. Goergen,
et al., Emerging trends in heart valve engineering: Part i. solutions for
future, Annals of biomedical engineering 43 (4) (2015) 833–843.
[6] K. S. Kunzelman, R. Cochran, Stress/strain characteristics of porcine mitral valve tissue: parallel versus perpendicular collagen orientation, Journal
of cardiac surgery 7 (1) (1992) 71–78.
[7] S. K. Dahl, J. Vierendeels, J. Degroote, S. Annerel, L. R. Hellevik,
B. Skallerud, Fsi simulation of asymmetric mitral valve dynamics during
diastolic filling, Computer methods in biomechanics and biomedical engineering 15 (2) (2012) 121–130.
[8] E. J. Weinberg, D. Shahmirzadi, M. R. Kaazempur Mofrad, On the multiscale modeling of heart valve biomechanics in health and disease, Biomechanics and modeling in mechanobiology 9 (4) (2010) 373–387.
[9] Q. Wang, W. Sun, Finite element modeling of mitral valve dynamic deformation using patient-specific multi-slices computed tomography scans,
Annals of biomedical engineering 41 (1) (2013) 142–153.
[10] V. Prot, B. Skallerud, Nonlinear solid finite element analysis of mitral valves
with heterogeneous leaflet layers, Computational Mechanics 43 (3) (2009)
353–368.
[11] M. Stevanella, F. Maffessanti, C. A. Conti, E. Votta, A. Arnoldi, M. Lombardi, O. Parodi, E. G. Caiani, A. Redaelli, Mitral valve patient-specific
finite element modeling from cardiac mri: Application to an annuloplasty
procedure, Cardiovascular Engineering and Technology 2 (2) (2011) 66–76.
[12] C.-H. Lee, C. A. Carruthers, S. Ayoub, R. C. Gorman, J. H. Gorman, M. S.
Sacks, Quantification and simulation of layer-specific mitral valve interstitial cells deformation under physiological loading, Journal of theoretical
biology 373 (2015) 26–39.
[13] D. R. Einstein, F. Del Pin, X. Jiao, A. P. Kuprat, J. P. Carson, K. S. Kunzelman, R. P. Cochran, J. M. Guccione, M. B. Ratcliffe, Fluid–structure
interactions of the mitral valve and left heart: comprehensive strategies,
past, present and future, International Journal for Numerical Methods in
Biomedical Engineering 26 (3-4) (2010) 348–380.
[14] H. Gao, N. Qi, L. Feng, X. Ma, M. Danton, C. Berry, X. Luo, Modelling
mitral valvular dynamics–current trend and future directions, International
Journal for Numerical Methods in Biomedical Engineering. doi:10.1002/cnm.2858.
[15] J. F. Wenk, Z. Zhang, G. Cheng, D. Malhotra, G. Acevedo-Bolton,
M. Burger, T. Suzuki, D. A. Saloner, A. W. Wallace, J. M. Guccione,
et al., First finite element model of the left ventricle with mitral valve:
insights into ischemic mitral regurgitation, The Annals of thoracic surgery
89 (5) (2010) 1546–1553.
[16] V. M. Wong, J. F. Wenk, Z. Zhang, G. Cheng, G. Acevedo-Bolton,
M. Burger, D. A. Saloner, A. W. Wallace, J. M. Guccione, M. B. Ratcliffe, et al., The effect of mitral annuloplasty shape in ischemic mitral
regurgitation: a finite element simulation, The Annals of thoracic surgery
93 (3) (2012) 776–782.
[17] B. Baillargeon, I. Costa, J. R. Leach, L. C. Lee, M. Genet, A. Toutain, J. F.
Wenk, M. K. Rausch, N. Rebelo, G. Acevedo-Bolton, et al., Human cardiac function simulator for the optimal design of a novel annuloplasty ring
with a sub-valvular element for correction of ischemic mitral regurgitation,
Cardiovascular engineering and technology 6 (2) (2015) 105–116.
[18] D. R. Einstein, P. Reinhall, M. Nicosia, R. P. Cochran, K. Kunzelman, Dynamic finite element implementation of nonlinear, anisotropic hyperelastic
biological membranes, Computer Methods in Biomechanics and Biomedical
Engineering 6 (1) (2003) 33–44.
[19] D. R. Einstein, K. S. Kunzelman, P. G. Reinhall, M. A. Nicosia, R. P.
Cochran, Non-linear fluid-coupled computational model of the mitral valve,
Journal of Heart Valve Disease 14 (3) (2005) 376–385.
[20] K. Kunzelman, D. R. Einstein, R. Cochran, Fluid–structure interaction
models of the mitral valve: function in normal and pathological states,
Philosophical Transactions of the Royal Society B: Biological Sciences
362 (1484) (2007) 1393–1406.
[21] K. Lau, V. Diaz, P. Scambler, G. Burriesci, Mitral valve dynamics in structural and fluid–structure interaction models, Medical engineering & physics
32 (9) (2010) 1057–1064.
[22] M. Toma, D. R. Einstein, C. H. Bloodworth, R. P. Cochran, A. P. Yoganathan, K. S. Kunzelman, Fluid–structure interaction and structural
analyses using a comprehensive mitral valve model with 3D chordal structure, International Journal for Numerical Methods in Biomedical Engineering. doi:10.1002/cnm.2815.
[23] P. N. Watton, X. Y. Luo, M. Yin, G. M. Bernacca, D. J. Wheatley, Effect
of ventricle motion on the dynamic behaviour of chorded mitral valves,
Journal of Fluids and Structures 24 (1) (2008) 58–74.
[24] X. Ma, H. Gao, B. E. Griffith, C. Berry, X. Luo, Image-based fluid–
structure interaction model of the human mitral valve, Computers & Fluids
71 (2013) 417–425.
[25] H. Gao, N. Ma, X.and Qi, C. Berry, B. E. Griffith, X. Y. Luo, A finite strain
nonlinear human mitral valve model with fluid-structure interaction, International journal for numerical methods in biomedical engineering 30 (12)
(2014) 1597–1613.
[26] M. Toma, M. Ø. Jensen, D. R. Einstein, A. P. Yoganathan, R. P. Cochran,
K. S. Kunzelman, Fluid–structure interaction analysis of papillary muscle
forces using a comprehensive mitral valve model with 3d chordal structure,
Annals of biomedical engineering 44 (4) (2016) 942–953.
[27] M. Toma, C. H. Bloodworth, E. L. Pierce, D. R. Einstein, R. P. Cochran,
A. P. Yoganathan, K. S. Kunzelman, Fluid-structure interaction analysis
of ruptured mitral chordae tendineae, Annals of Biomedical Engineering
(2016) 1–13. doi:10.1007/s10439-016-1727-y.
[28] M. P. Nash, P. J. Hunter, Computational mechanics of the heart, Journal
of elasticity and the physical science of solids 61 (1-3) (2000) 113–141.
[29] W. W. Chen, H. Gao, X. Y. Luo, N. A. Hill, Study of cardiovascular function using a coupled left ventricle and systemic circulation model, Journal
of Biomechanics 49 (12) (2016) 2445–2454. doi:10.1016/j.jbiomech.
2016.03.009.
[30] A. Quarteroni, T. Lassila, S. Rossi, R. Ruiz-Baier, Integrated heart–
coupling multiscale and multiphysics models for the simulation of the cardiac function, Computer Methods in Applied Mechanics and Engineering
314 (2016) 345–407.
[31] M. K. Rausch, A. M. Zöllner, M. Genet, B. Baillargeon, W. Bothe, E. Kuhl,
A virtual sizing tool for mitral valve annuloplasty, International Journal for
Numerical Methods in Biomedical Engineering. doi:10.1002/cnm.2788.
[32] C. S. Peskin, Flow patterns around heart valves: a numerical method,
Journal of computational physics 10 (2) (1972) 252–271.
[33] D. M. McQueen, C. S. Peskin, E. L. Yellin, Fluid dynamics of the mitral
valve: physiological aspects of a mathematical model, American Journal of
Physiology-Heart and Circulatory Physiology 242 (6) (1982) H1095–H1110.
[34] C. S. Peskin, Numerical analysis of blood flow in the heart, Journal of
computational physics 25 (3) (1977) 220–252.
[35] C. S. Peskin, The immersed boundary method, Acta Numerica 11 (2002)
479–517.
[36] M. Yin, X. Y. Luo, T. J. Wang, P. N. Watton, Effects of flow vortex
on a chorded mitral valve in the left ventricle, International Journal for
Numerical Methods in Biomedical Engineering 26 (3-4) (2010) 381–404.
[37] K. B. Chandran, H. Kim, Computational mitral valve evaluation and potential clinical applications, Annals of biomedical engineering 43 (6) (2015)
1348–1362.
[38] H. Gao, H. Wang, C. Berry, X. Y. Luo, B. E. Griffith, Quasi-static imagebased immersed boundary-finite element model of left ventricle under diastolic loading, International journal for numerical methods in biomedical
engineering. doi:10.1002/cnm.2652.
[39] B. E. Griffith, X. Y. Luo, Hybrid finite difference/finite element
version of the immersed boundary method, eprint from arXiv (url:
https://arxiv.org/abs/1612.05916).
[40] H. Gao, C. Berry, X. Y. Luo, Image-derived human left ventricular modelling with fluid-structure interaction, in: Functional Imaging and Modeling
of the Heart, Springer, 2015, pp. 321–329.
[41] G. A. Holzapfel, R. W. Ogden, Constitutive modelling of passive myocardium: a structurally based framework for material characterization,
Philosophical Transactions of the Royal Society of London A: Mathematical, Physical and Engineering Sciences 367 (1902) (2009) 3445–3475.
[42] S. Niederer, P. Hunter, N. Smith, A quantitative analysis of cardiac myocyte relaxation: a simulation study, Biophysical journal 90 (5) (2006)
1697–1722.
[43] R. A. Nishimura, A. J. Tajik, Evaluation of diastolic filling of left ventricle
in health and disease: Doppler echocardiography is the clinician's Rosetta
stone, Journal of the American College of Cardiology 30 (1) (1997) 8–18.
[44] B. E. Griffith, Immersed boundary model of aortic heart valve dynamics
with physiological driving and loading conditions, International Journal for
Numerical Methods in Biomedical Engineering 28 (3) (2012) 317–345.
[45] S. Laniado, E. Yellin, M. Kotler, L. Levy, J. Stadler, R. Terdiman, A study
of the dynamic relations between the mitral valve echogram and phasic
mitral flow, Circulation 51 (1) (1975) 104–113.
[46] K. Mangion, H. Gao, C. McComb, D. Carrick, G. Clerfond, X. Zhong,
X. Luo, C. E. Haig, C. Berry, A novel method for estimating myocardial
strain: Assessment of deformation tracking against reference magnetic resonance methods in healthy volunteers, Scientific Reports 6 (2016) 38774.
doi:10.1038/srep38774.
[47] I. Hay, J. Rich, P. Ferber, D. Burkhoff, M. S. Maurer, Role of impaired
myocardial relaxation in the production of elevated left ventricular filling
pressure, American Journal of Physiology-Heart and Circulatory Physiology 288 (3) (2005) H1203–H1208.
[48] M. R. Zile, C. F. Baicu, W. H. Gaasch, Diastolic heart failure: abnormalities
in active relaxation and passive stiffness of the left ventricle, New England
Journal of Medicine 350 (19) (2004) 1953–1959.
MV leaflets        C1 (kPa)   av (kPa)   bv (kPa)
  Anterior         17.4       31.3       55.93
  Posterior        10.2       50.0       63.48

Chordae            C (kPa)
  systole          9000
  diastole         540

Myocardium
  passive          a (kPa)   b      af (kPa)   bf     as (kPa)   bs    afs (kPa)   bfs
                   0.24      5.08   1.46       4.15   0.87       1.6   0.3         1.3
  active           T_req = 225 kPa

Table 1: Material parameter values for MV leaflets, chordae and the myocardium.
Cases (mmHg)         t_iso-con (ms)   t_ejection (ms)   V_LV^ejection (mL)   V_MV^filling (mL)   F_MV^peak (mL/s)   LVEF (%)
EDP=8,  Pendo=8      60               227               52                   60.6                412.93             47
EDP=8,  Pendo=10     58               237               57.6                 65.9                442.60             49
EDP=8,  Pendo=12     57               243               63.2                 72.1                468.41             51
EDP=8,  Pendo=14     55               251               67.8                 76.8                486.84             53
EDP=8,  Pendo=16     54               256               72.3                 81.3                503.76             54
EDP=8,  Pendo=0      75               174               20.8                 29.5                209.54             29
EDP=12, Pendo=0      64               213               41.0                 50.9                343.47             42
EDP=14, Pendo=0      61               226               50.6                 59.8                406.81             47
EDP=16, Pendo=0      58               243               61.8                 71.9                459.64             51
EDP=18, Pendo=0      56               251               68.5                 79.3                486.58             54
EDP=20, Pendo=0      55               262               75.7                 86.2                511.16             55

Table 2: Effects of EDP and the endocardial pressure loading (Pendo) on MV and LV dynamics.
Figure 1: The CMR-derived MV-LV model. (a) The MV leaflets were segmented from a stack
of MR images of a volunteer at early-diastole, (b) positions of the papillary muscle heads and
the annulus ring, (c) reconstructed MV geometry with chordae, (d) a MR image showing the
LV and location of the outflow tract (AV) and inflow tract (MV), (e) the LV wall delineation
from short and long axis MR images, (f) the reconstructed LV model, in which the LV model is divided into four parts: the LV region below the LV base, the valvular region, and the inflow
and outflow tracts, (g) the rule-based fibre orientations in the LV and the MV, and (h) the
coupled MV-LV model.
Figure 2: Flow rates across the AV and MV from diastole to systole. Diastolic phase: 0 s to
0.8 s; Systolic phase: 0.8 s and onwards. Positive flow rate means the blood flows out of the
LV chamber.
Figure 3: Normalized intracellular Ca2+, LV cavity volume, central LV pressure and average myocardial active tension. All curves are normalized to their own maximum values, which are: 1 µMol for Ca2+, 145 mL for LV cavity volume, 162 mmHg for central LV pressure, 96.3 kPa
for average myocardial active tension.
Figure 4: Comparisons between the MV and LV structures at (a) reference configuration, (b)
end-diastole, and (c) end-systole, and the corresponding CMR cine images (left). Coloured
by the displacement magnitude.
Figure 5: Streamlines in the MV-LV model at early-diastolic filling (a), late-diastolic filling
(b), when iso-volumetric contraction ends (c), and at mid-systole (d). Streamlines are colored by velocity magnitude; the LV wall and MV are colored by the displacement magnitude. Red: high; blue: low.
Figure 6: Distributions of fibre strain in the left ventricle at end-systole (a), in the MV at
end-diastole (b) and end-systole (c).
Published as a conference paper at ICLR 2017
R ECURRENT E NVIRONMENT S IMULATORS
Silvia Chiappa, Sébastien Racaniere, Daan Wierstra & Shakir Mohamed
DeepMind, London, UK
{csilvia, sracaniere, wierstra, shakir}@google.com
arXiv:1704.02254v2 [] 19 Apr 2017
A BSTRACT
Models that can simulate how environments change in response to actions can be
used by agents to plan and act efficiently. We improve on previous environment
simulators from high-dimensional pixel observations by introducing recurrent
neural networks that are able to make temporally and spatially coherent predictions
for hundreds of time-steps into the future. We present an in-depth analysis of the
factors affecting performance, providing the most extensive attempt to advance
the understanding of the properties of these models. We address the issue of
computational inefficiency with a model that does not need to generate a high-dimensional image at each time-step. We show that our approach can be used to
improve exploration and is adaptable to many diverse environments, namely 10
Atari games, a 3D car racing environment, and complex 3D mazes.
1
I NTRODUCTION
In order to plan and act effectively, agent-based systems require an ability to anticipate the consequences of their actions within an environment, often for an extended period into the future. Agents
can be equipped with this ability by having access to models that can simulate how the environments
changes in response to their actions. The need for environment simulation is widespread: in psychology, model-based predictive abilities form sensorimotor contingencies that are seen as essential
for perception (O’Regan & Noë, 2001); in neuroscience, environment simulation forms part of
deliberative planning systems used by the brain (Niv, 2009); and in reinforcement learning, the ability
to imagine the future evolution of an environment is needed to form predictive state representations
(Littman et al., 2002) and for Monte Carlo planning (Sutton & Barto, 1998).
Simulating an environment requires models of temporal sequences that must possess a number
of properties to be useful: the models should make predictions that are accurate, temporally and
spatially coherent over long time periods; and allow for flexibility in the policies and action sequences
that are used. In addition, these models should be general-purpose and scalable, and able to learn
from high-dimensional perceptual inputs and from diverse and realistic environments. A model that
achieves these desiderata can empower agent-based systems with a vast array of abilities, including
counterfactual reasoning (Pearl, 2009), intuitive physical reasoning (McCloskey, 1983), model-based
exploration, episodic control (Lengyel & Dayan, 2008), intrinsic motivation (Oudeyer et al., 2007),
and hierarchical control.
Deep neural networks have recently enabled significant advances in simulating complex environments,
allowing for models that consider high-dimensional visual inputs across a wide variety of domains
(Wahlström et al., 2015; Watter et al., 2015; Sun et al., 2015; Patraucean et al., 2015). The model of
Oh et al. (2015) represents the state-of-the-art in this area, demonstrating high long-term accuracy in
deterministic and discrete-action environments.
Despite these advances, there are still several challenges and open questions. Firstly, the properties
of these simulators in terms of generalisation and sensitivity to the choices of model structure and
training are poorly understood. Secondly, accurate prediction for long time periods into the future
remains difficult to achieve. Finally, these models are computationally inefficient, since they require
the prediction of a high-dimensional image each time an action is executed, which is unnecessary in
situations where the agent is interested only in the final prediction after taking several actions.
In this paper we advance the state-of-the-art in environment modelling. We build on the work of Oh
et al. (2015), and develop alternative architectures and training schemes that significantly improve
performance, and provide in-depth analysis to advance our understanding of the properties of these
Figure 1: Graphical model representing (a) the recurrent structure used in Oh et al. (2015) and (b) our
recurrent structure. Filled and empty nodes indicate observed and hidden variables respectively.
models. We also introduce a simulator that does not need to predict visual inputs after every action,
reducing the computational burden in the use of the model. We test our simulators on three diverse
and challenging families of environments, namely Atari 2600 games, a first-person game where an
agent moves in randomly generated 3D mazes, and a 3D car racing environment; and show that they
can be used for model-based exploration.
2
R ECURRENT E NVIRONMENT S IMULATORS
An environment simulator is a model that, given a sequence of actions a1 , . . . , aτ −1 ≡ a1:τ −1 and
corresponding observations x1:τ of the environment, is able to predict the effect of subsequent
actions aτ :τ +τ 0 −1 , such as forming predictions x̂τ +1:τ +τ 0 or state representations sτ +1:τ +τ 0 of the
environment.
Our starting point is the recurrent simulator of Oh et al. (2015), which is the state-of-the-art in
simulating deterministic environments with visual observations (frames) and discrete actions. This
simulator is a recurrent neural network with the following backbone structure:
st = f (st−1 , C(I(x̂t−1 , xt−1 ))) ,
x̂t = D(st , at−1 ) .
In this equation, st is a hidden state representation of the environment, and f a non-linear deterministic
state transition function. The symbol I indicates the selection of the predicted frame x̂t−1 or real
frame xt−1 , producing two types of state transition called prediction-dependent transition and
observation-dependent transition respectively. C is an encoding function consisting of a series of
convolutions, and D is a decoding function that combines the state st with the action at−1 through
a multiplicative interaction, and then transforms it using a series of full convolutions to form the
predicted frame x̂t .
The model is trained to minimise the mean squared error between the observed time-series xτ +1:τ +τ 0 ,
corresponding to the evolution of the environment, and its prediction. In a probabilistic framework,
this corresponds to maximising the log-likelihood in the graphical model depicted in Fig. 1(a). In
this graph, the link from x̂t to xt represents stochastic dependence, as xt is formed by adding to
x̂t a Gaussian noise term with zero mean and unit variance, whilst all remaining links represent
deterministic dependences. The dashed lines indicate that only one of the two links is active,
depending on whether the state transition is prediction-dependent or observation-dependent.
The model is trained using stochastic gradient descent, in which each mini-batch consists of a set
of segments of length τ + T randomly sub-sampled from x1:τ +τ 0 . For each segment in the minibatch, the model uses the first τ observations to evolve the state and forms predictions of the last T
observations only. Training comprises three phases differing in the use of prediction-dependent or
observation-dependent transitions (after the first τ transitions) and in the value of the prediction
length T . In the first phase, the model uses observation-dependent transitions and predicts for T = 10
time-steps. In the second and third phases, the model uses prediction-dependent transitions and
predicts for T = 3 and T = 5 time-steps respectively. During evaluation or usage, the model can
only use prediction-dependent transitions.
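A minimal sketch (not the authors' code) of the training-time rollout just described, written with hypothetical callables f, C and D for the transition, encoding and decoding functions: the first τ transitions are observation-dependent (warm-up), after which the state is evolved using either the real frame or the model's own prediction.

```python
def rollout(f, C, D, frames, actions, s0, tau, prediction_dependent=True):
    """frames[0:tau] are warm-up observations; predictions are returned for the rest."""
    s, x_prev, predictions = s0, frames[0], []
    for t in range(1, len(frames)):
        s = f(s, C(x_prev))                 # state transition
        x_hat = D(s, actions[t - 1])        # decode the predicted frame, conditioned on the action
        if t > tau:
            predictions.append(x_hat)
        # warm-up always feeds the observation; afterwards the training scheme decides
        feed_prediction = prediction_dependent and t > tau
        x_prev = x_hat if feed_prediction else frames[t]
    return predictions                      # compared to frames[tau + 1:] with a mean squared error
```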
ACTION -D EPENDENT S TATE T RANSITION
A strong feature of the model of Oh et al. (2015) described above is that the actions influence the
state transitions only indirectly through the predictions or the observations. Allowing the actions
to condition the state transitions directly could potentially enable the model to incorporate action
information more effectively. We therefore propose the following backbone structure:
st = f (st−1 , at−1 , C(I(x̂t−1 , xt−1 ))) ,
x̂t = D(st ) .
In the graphical model representation, this corresponds to replacing the link from at−1 to x̂t with a
link from at−1 to st as in Fig. 1(b).
S HORT-T ERM VERSUS L ONG -T ERM ACCURACY
The last two phases in the training scheme of Oh et al. (2015) described above are used to address the
issue of poor accuracy that recurrent neural networks trained using only observation-dependent transitions display when asked to predict several time-steps ahead. However, the paper does not analyse
nor discuss alternative training schemes.
In principle, the highest accuracy should be obtained by training the model as closely as possible
to the way it will be used, and therefore by using a number of prediction-dependent transitions
which is as close as possible to the number of time-steps the model will be asked to predict for.
However, prediction-dependent transitions increase the complexity of the objective function such
that alternative schemes are most often used (Talvitie, 2014; Bengio et al., 2015; Oh et al., 2015).
Current training approaches are guided by the belief that using the observation xt−1 , rather than the
prediction x̂t−1 , to form the state st has the effect of reducing the propagation of the errors made in
the predictions, which are higher at earlier stages of the training, enabling the model to correct itself
from the mistakes made up to time-step t − 1. For example, Bengio et al. (2015) introduce a scheduled
sampling approach where at each time-step the type of state transition is sampled from a Bernoulli
distribution, with parameter annealed from an initial value corresponding to using only observationdependent transitions to a final value corresponding to using only prediction-dependent transitions,
according to a schedule selected by validation.
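For concreteness, a sketch of the scheduled-sampling choice with a linearly annealed Bernoulli parameter; the linear schedule and its bounds are illustrative, since Bengio et al. (2015) select the schedule by validation:

```python
import random

def use_observation(step, total_steps, final_p=0.0):
    """Return True if the next transition should be observation-dependent."""
    p_obs = max(final_p, 1.0 - step / total_steps)   # anneal from 1 towards final_p
    return random.random() < p_obs

# inside the rollout: x_prev = frames[t] if use_observation(step, total_steps) else x_hat
```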
Our analysis of different training schemes on Atari, which considered the interplay among warm-up
length τ , prediction length T , and number of prediction-dependent transitions, suggests that, rather
than as having a corrective effect, observation-dependent transitions should be seen as restricting the
time interval in which the model considers its predictive abilities, and therefore focuses resources.
Indeed we found that the higher the number of consecutive prediction-dependent transitions, the more
the model is encouraged to focus on learning the global dynamics of the environment, which results in
higher long-term accuracy. The highest long-term accuracy is always obtained by a training scheme
that uses only prediction-dependent transitions even at the early stages of the training. Focussing on
learning the global dynamics comes at the price of shifting model resources away from learning the
precise details of the frames, leading to a decrease in short-term accuracy. Therefore, for complex
games for which reasonable long-term accuracy cannot be obtained, training schemes that mix
prediction-dependent and observation-dependent transitions are preferable. It follows from this
analysis that the percentage of consecutive prediction-dependent transitions, rather than just the percentage
of such transitions, should be considered when designing training schemes.
From this viewpoint, the poor results obtained in Bengio et al. (2015) when using only prediction-dependent transitions can be explained by the difference in the type of tasks considered. Indeed,
unlike our case in which the model is tolerant to some degree of error such as blurriness in earlier
predictions, the discrete problems considered in Bengio et al. (2015) are such that one prediction
error at earlier time-steps can severely affect predictions at later time-steps, so that the model needs
to be highly accurate short-term in order to perform reasonably longer-term. Also, Bengio et al.
(2015) treated the prediction used to form st as a fixed quantity, rather than as a function of st−1 , and
therefore did not perform exact maximum likelihood.
Prediction-Independent State Transition
In addition to potentially enabling the model to incorporate action information more effectively,
allowing the actions to directly influence the state dynamics has another crucial advantage: it
allows us to consider the case of a state transition that does not depend on the frame, i.e. of the form
st = f (st−1 , at−1 ), corresponding to removing the dashed links from x̂t−1 and from xt−1 to st in
Fig. 1(b). We shall call such a model a prediction-independent simulator, referring to its ability to
evolve the state without using the prediction during usage. Prediction-independent state transitions
for high-dimensional observation problems have also been considered in Srivastava et al. (2015).
A prediction-independent simulator can dramatically increase computational efficiency in situations
in which the agent is interested in the effect of a sequence of actions rather than of a single action.
Indeed, such a model does not need to project from the lower dimensional state space into the higher
dimensional observation space through the set of convolutions, and vice versa, at each time-step.
3 Prediction-Dependent Simulators
We analyse simulators with state transition of the form st = f (st−1 , at−1 , C(I(x̂t−1 , xt−1 ))) on
three families of environments with different characteristics and challenges, namely Atari 2600 games
from the arcade learning environment (Bellemare et al., 2013), a first-person game where an agent
moves in randomly generated 3D mazes (Beattie et al., 2016), and a 3D car racing environment called
TORCS (Wymann et al., 2013). We use two evaluation protocols. In the first one, the model is asked
to predict for 100 or 200 time-steps into the future using actions from the test data. In the second one,
a human uses the model as an interactive simulator. The first protocol enables us to determine how
the model performs within the action policy of the training data, whilst the second protocol enables
us to explore how the model generalises to other action policies.
As state transition, we used the following action-conditioned long short-term memory (LSTM)
(Hochreiter & Schmidhuber, 1997):
Encoding: zt−1 = C(I(x̂t−1 , xt−1 )) ,    (1)
Action fusion: vt = Wh ht−1 ⊗ Wa at−1 ,    (2)
Gate update: it = σ(Wiv vt + Wiz zt−1 ) , ft = σ(Wfv vt + Wfz zt−1 ) , ot = σ(Wov vt + Woz zt−1 ) ,    (3)
Cell update: ct = ft ⊗ ct−1 + it ⊗ tanh(Wcv vt + Wcz zt−1 ) ,    (4)
State update: ht = ot ⊗ tanh(ct ) ,    (5)
where ⊗ denotes the Hadamard product, σ the logistic sigmoid function, at−1 is a one-hot vector
representation of at−1 , and W are parameter matrices. In Eqs. (2)–(5), ht and ct are the LSTM state
and cell forming the model state st = (ht , ct ); and it , ft , and ot are the input, forget, and output
gates respectively (for simplicity, we omit the biases in their updates). The vectors ht and vt had
dimension 1024 and 2048 respectively. Details about the encoding and decoding functions C and
D for the three families of environments can be found in Appendix B.1, B.2 and B.3. We used a
warm-up phase of length τ = 10 and we did not backpropagate the gradient to this phase.
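As an illustration, a minimal numpy sketch of one such action-conditioned LSTM step is given below. Biases are omitted as in the text; the parameter dictionary W and its keys are our illustrative assumptions, not the actual code.

    import numpy as np

    def lstm_step(h_prev, c_prev, z_prev, a_prev, W):
        """One action-conditioned LSTM transition in the spirit of Eqs. (1)-(5).
        h_prev (1024), c_prev (1024): previous LSTM state and cell;
        z_prev (2816): encoded frame C(I(x̂_{t-1}, x_{t-1}));
        a_prev: one-hot action vector; W: dict of parameter matrices."""
        sigmoid = lambda u: 1.0 / (1.0 + np.exp(-u))
        v = (W['h'] @ h_prev) * (W['a'] @ a_prev)                     # action fusion (Hadamard product)
        i = sigmoid(W['iv'] @ v + W['iz'] @ z_prev)                   # input gate
        f = sigmoid(W['fv'] @ v + W['fz'] @ z_prev)                   # forget gate
        o = sigmoid(W['ov'] @ v + W['oz'] @ z_prev)                   # output gate
        c = f * c_prev + i * np.tanh(W['cv'] @ v + W['cz'] @ z_prev)  # cell update
        h = o * np.tanh(c)                                            # state update
        return h, c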
3.1 Atari
We considered the 10 Atari games Freeway, Ms Pacman, Qbert, Seaquest, Space Invaders, Bowling,
Breakout, Fishing Derby, Pong, and Riverraid. Of these, the first five were analysed in Oh et al.
(2015) and are used for comparison. The remaining five were chosen to better test the ability of
the model in environments with other challenging characteristics, such as scrolling backgrounds
(Riverraid), small/thin objects that are key aspects of the game (lines in Fishing Derby, ball in Pong
and Breakout), and sparse-reward games that require very long-term predictions (Bowling). We used
training and test datasets consisting of five and one million 210×160 RGB images respectively, with
actions chosen from a trained DQN agent (Mnih et al., 2015) according to an ε = 0.2-greedy policy.
Such a large number of training frames ensured that our simulators did not strongly overfit to the
training data (see training and test lines in Figs. 2 and 3, and the discussion in Appendix B.1).
Short-Term versus Long-Term Accuracy
Below we summarise our results on the interplay among warm-up length τ , prediction length T , and
number of prediction-dependent transitions – the full analysis is given in Appendix B.1.1.
The warm-up and prediction lengths τ and T regulate the degree of accuracy in two different ways. 1) The value of τ + T determines how far into the past the model can access information – this is the case irrespective of the type of transition used, although when using prediction-dependent transitions
information about the last T time-steps of the environment would need to be inferred. Accessing
information far back into the past can be necessary even when the model is used to perform one-step
ahead prediction only. 2) The higher the value of T and the number of prediction-dependent transitions, the more the corresponding objective function encourages long-term accuracy. This is achieved
by guiding the one-step ahead prediction error in such a way that further predictions will not be
strongly affected, and by teaching the model to make use of information from the far past. The
more precise the model is in performing one-step ahead prediction, the less noise guidance should be
required. Therefore, models with very accurate convolutional and transition structures should need
less encouragement.
Increasing the percentage of consecutive prediction-dependent transitions increases long-term
accuracy, often at the expense of short-term accuracy. We found that using only observation-dependent transitions leads to poor performance in most games. Increasing the number of consecutive
prediction-dependent transitions produces an increase in long-term accuracy, but also a decrease
in short-term accuracy usually corresponding to reduction in sharpness. For games that are too
complex, although the lowest long-term prediction error is still achieved by using only prediction-dependent transitions, reasonable long-term accuracy cannot be obtained, and training schemes that
mix prediction-dependent and observation-dependent transitions are therefore preferable.
To illustrate these results, we compare the following training schemes for prediction length T = 15:
• 0% PDT: Only observation-dependent transitions.
• 33% PDT: Observation and prediction-dependent transitions for the first 10 and last 5 time-steps
respectively.
• 0%-20%-33% PDT: Only observation-dependent transitions in the first 10,000 parameter updates;
observation-dependent transitions for the first 12 time-steps and prediction-dependent transitions for
the last 3 time-steps for the subsequent 100,000 parameter updates; observation-dependent transitions for the first 10 time-steps and prediction-dependent transitions for the last 5 time-steps for the
remaining parameter updates (adaptation of the training scheme of Oh et al. (2015) to T = 15).
• 46% PDT Alt.: Alternate between observation-dependent and prediction-dependent transitions
from a time-step to the next.
• 46% PDT: Observation and prediction-dependent transitions for the first 8 and last 7 time-steps
respectively.
• 67% PDT: Observation and prediction-dependent transitions for the first 5 and last 10 time-steps
respectively.
• 0%-100% PDT: Only observation-dependent transitions in the first 1000 parameter updates; only
prediction-dependent transitions in the subsequent parameter updates.
• 100% PDT: Only prediction-dependent transitions.
For completeness, we also consider a training scheme as in Oh et al. (2015), which consists of three
phases with T = 10, T = 3, T = 5, and 500,000, 250,000, 750,000 parameter updates respectively.
In the first phase st is formed by using the observed frame xt−1 , whilst in the two subsequent phases
st is formed by using the predicted frame x̂t−1 .
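For concreteness, the static schemes listed above can be summarised by a per-time-step flag indicating whether a transition is prediction-dependent; a possible sketch is shown below (an illustrative helper of ours; the schedule-dependent schemes 0%-20%-33% PDT, 0%-100% PDT and Oh et al., which change with the number of parameter updates, are omitted).

    def transition_flags(scheme, T=15):
        """True = prediction-dependent transition (PDT), False = observation-dependent,
        for the static training schemes described in the text."""
        schemes = {
            "0% PDT":       [False] * T,
            "33% PDT":      [False] * 10 + [True] * 5,      # first 10 observation-, last 5 prediction-dependent
            "46% PDT Alt.": [t % 2 == 1 for t in range(T)],  # alternate between the two types
            "46% PDT":      [False] * 8 + [True] * 7,
            "67% PDT":      [False] * 5 + [True] * 10,
            "100% PDT":     [True] * T,
        }
        return schemes[scheme]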
In Figs. 2 and 3 we show the prediction error averaged over 10,000 sequences¹ for the games of Bowling², Fishing Derby, Pong and Seaquest. More specifically, Fig. 2(a) shows the error for
predicting up to 100 time-steps ahead after the model has seen 200 million frames (corresponding
to half million parameter updates using mini-batches of 16 sequences), using actions and warm-up
frames from the test data, whilst Figs. 2(b)-(c) and 3 show the error at time-steps 5, 10 and 100 versus
number of frames seen by the model.
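For reference, the per-time-step prediction error reported in these figures (see footnote 1) could be computed along the following lines, assuming arrays of ground-truth and predicted frames; this is a sketch of ours, not the evaluation code used in the experiments.

    import numpy as np

    def prediction_error(x_true, x_pred):
        """Prediction error per time-step as in footnote 1.
        x_true, x_pred: arrays of shape (N, T, H, W, C), here N = 10,000 sequences."""
        n = x_true.shape[0]
        sq_err = np.sum((x_true - x_pred) ** 2, axis=(2, 3, 4))   # ||x^n_t - x̂^n_t||^2
        return sq_err.sum(axis=0) / (3.0 * n)                      # average with the 1/(3N) factor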
These figures clearly show that long-term accuracy generally improves with increasing number
of consecutive prediction-dependent transitions. When using alternating (46% PDT Alt.) rather than consecutive (46% PDT) prediction-dependent transitions, long-term accuracy is worse, as we
are effectively asking the model to predict at most two time-steps ahead. We can also see that
¹ We define the prediction error as (1/(3 · 10,000)) Σ_{n=1}^{10,000} ‖x^n_t − x̂^n_t‖².
2
In this game, the player is given two chances to roll a ball down an alley in an attempt to knock down as
many of the ten pins as possible, after which the score is updated and the knocked pins are relocated. Knocking
down every pin on the first shot is a strike, while knocking every pin down in both shots is a spare. The player’s
score is determined by the number of pins knocked down, as well as the number of strikes and spares acquired.
Figure 2: Prediction error averaged over 10,000 sequences on (a)-(b) Bowling and (c) Fishing Derby for different training schemes. The same color and line code is used in all figures. (a): Prediction error vs time-steps after the model has seen 200 million frames. (b)-(c): Prediction error vs number of frames seen by the model at time-steps 10 and 100.
using more prediction-dependent transitions produces lower short-term accuracy and/or slower
short-term convergence. Finally, the figures show that using a training phase with only observation-dependent transitions that is too long, as in Oh et al. (2015), can be detrimental: the model reaches
at best a performance similar to the 46% PDT Alt. training scheme (the sudden drop in prediction
error corresponds to transitioning to the second training phase), but is most often worse.
By looking at the predicted frames, we noticed that, in games containing balls and paddles, using
only observation-dependent transitions gives rise to errors in reproducing the dynamics of these
objects. Such errors decrease with increasing prediction-dependent transitions. In other games, using
only observation-dependent transitions causes the model to fail in representing moving objects, except
for the agent in most cases. Training schemes containing more prediction-dependent transitions
encourage the model to focus more on learning the dynamics of the moving objects and less on
details that would only increase short-term accuracy, giving rise to more globally accurate but less
sharp predictions. Finally, in games that are too complex, the strong emphasis on long-term accuracy
produces predictions that are overall not sufficiently good.
More specifically, from the videos available at PDTvsODT³, we can see that using only observation-dependent transitions has a detrimental effect on long-term accuracy for Fishing Derby, Ms Pacman,
³ Highlighted names like these are direct links to folders containing videos. Each video consists of 5 randomly selected 200 time-steps ahead predictions separated by black frames (the generated frames are shown on the left, whilst the real frames are shown on the right – the same convention will be used throughout the paper). Shown are 15 frames per second. Videos associated with the material discussed in this and following sections can also be found at https://sites.google.com/site/resvideos1729.
Figure 3: Prediction error on (a) Pong and (b) Seaquest for different training schemes.
Qbert, Riverraid, Seaquest and Space Invaders. The most salient features of the videos are: consistent
inaccuracy in predicting the paddle and ball in Breakout; reset to a new life after a few time-steps in
Ms Pacman; prediction of background only after a few time-steps in Qbert; no generation of new
objects or background in Riverraid; quick disappearance of existing fish and no appearance of new
fish from the sides of the frame in Seaquest. For Bowling, Freeway, and Pong, long-term accuracy
is generally good, but the movement of the ball is not always correctly predicted in Bowling and
Pong and the chicken sometimes disappears in Freeway. On the other hand, using only prediction-dependent transitions results in good long-term accuracy for Bowling, Fishing Derby, Freeway, Pong,
Riverraid, and Seaquest: the model accurately represents the paddle and ball dynamics in Bowling
and Pong; the chicken hardly disappears in Freeway, and new objects and background are created and
most often correctly positioned in Riverraid and Seaquest.
The trade-off of long-term for short-term accuracy when using more prediction-dependent transitions is
particularly evident in the videos of Seaquest: the higher the number of such transitions, the better
the model learns the dynamics of the game, with new fish appearing in the right location more often.
However, this comes at the price of reduced sharpness, mostly in representing the fish.
This trade-off causes problems in Breakout, Ms Pacman, Qbert, and Space Invaders, so that schemes
that also use observation-dependent transitions are preferable for these games. For example, in
Breakout, the model fails at representing the ball, making the predictions not sufficiently good.
Notice that the prediction error (see Fig. 15) is misleading in terms of desired performance, as the
100%PDT training scheme performs as well as other mixing schemes for long-term accuracy – this
highlights the difficulties in evaluating the performance of these models.
Figure 4: Prediction error vs number of frames seen by the model (excluding warm-up frames) for (a) Pong and (b) Seaquest, using prediction lengths T = 10, 15, and 20, and training schemes 0%PDT, 67%PDT, and 100%PDT.
Increasing the prediction length T increases long-term accuracy when using prediction-dependent transitions. In Fig. 4, we show the effect of using different prediction lengths T ≤ 20
on the training schemes 0%PDT, 67%PDT, and 100%PDT for Pong and Seaquest. In Pong, with
the 0%PDT training scheme, using higher T improves long-term accuracy: this is a game for which
this scheme gives reasonable accuracy and the model is able to benefit from longer history. This is
however not the case for Seaquest (or other games as shown in Appendix B.1.1). On the other hand,
with the 100%PDT training scheme, using higher T improves long-term accuracy in most games (the
difference is more pronounced between T = 10 and T = 15 than between T = 15 and T = 20),
but decreases short-term accuracy. Similarly to above, reduced short-term accuracy corresponds to
reduced sharpness: from the videos available at T ≤ 20 we can see, for example, that the moving
caught fish in Fishing Derby, the fish in Seaquest, and the ball in Pong are less sharp for higher T .
Truncated backpropagation still enables an increase in long-term accuracy. Due to memory constraints, we could only backpropagate gradients over sequences of length up to 20. To use T > 20,
we split the prediction sequence into subsequences and performed parameter updates separately for
each subsequence. For example, to use T = 30 we split the prediction sequence into two successive
subsequences of length 15, performed parameter updates over the first subsequence, initialised the
state of the second subsequence with the final state from the first subsequence, and then performed
parameter updates over the second subsequence. This approach corresponds to a form of truncated
backpropagation through time (Williams & Zipser, 1995) – the extreme of this strategy (with T equal
to the length of the whole training sequence) was used by Zaremba et al. (2014).
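A minimal sketch of this truncated backpropagation strategy is given below, assuming a PyTorch-style model with illustrative step, loss and initial_state methods (these names are our assumptions, not the paper's interface); the state is carried across subsequences but detached so that gradients do not flow across subsequence boundaries.

    import torch

    def train_bptt(model, optimizer, frames, actions, warmup=10, subseq_len=15, num_subseqs=2):
        """BPTT(subseq_len, num_subseqs): e.g. T = 30 as two subsequences of length 15."""
        state = model.initial_state()
        t = 0
        with torch.no_grad():                          # warm-up phase, no gradient
            for _ in range(warmup):
                state, _ = model.step(state, actions[t], frames[t])
                t += 1
        for _ in range(num_subseqs):
            state = tuple(s.detach() for s in state)   # truncate the gradient here
            loss = 0.0
            for _ in range(subseq_len):
                state, x_hat = model.step(state, actions[t], frames[t])
                loss = loss + model.loss(x_hat, frames[t + 1])
                t += 1
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()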
Figure 5: Prediction error vs number of frames seen by the model (excluding warm-up frames) for (a) Pong and (b) Seaquest, using BPTT(15, 1), BPTT(15, 2), and BPTT(15, 5), and training schemes 0%PDT, 33%PDT, and 100%PDT.
In Fig. 5, we show the effect of using 2 and 5 subsequences of length 15 (indicated by BPTT(15, 2)
and BPTT(15, 5)) on the training schemes 0%PDT, 33%PDT, and 100%PDT for Pong and Seaquest.
We can see that the 0%PDT and 33%PDT training schemes display no difference in accuracy for
different values of T . On the other hand, with the 100%PDT training scheme, using more than one
subsequence improves long-term accuracy (the difference is more pronounced between T = 15 and
T = 30 than between T = 30 and T = 75), but decreases short-term accuracy (the difference is
small at convergence between T = 15 and T = 30, but big between T = 30 and T = 75). The
decrease in accuracy with 5 subsequences is drastic in some games.
For Riverraid, using more than one subsequence with the 33%PDT and 100%PDT training schemes
improves long-term accuracy dramatically, as shown in Fig. 6, as it enables correct prediction after
a jet loss. Interestingly, for the 100%PDT training scheme, using τ = 25 with prediction length
T = 15 (black line) does not give the same amount of gain as when using BPTT(15, 2), even if
history length τ + T is the same. This would seem to suggest that some improvement in BPTT(15, 2)
is due to encouraging longer-term accuracy, indicating that this can be achieved even when not fully
backpropagating the gradient.
From the videos available at T > 20, we can see that with T = 75 the predictions in some of the
Fishing Derby videos are faded, whilst in Pong the model can suddenly switch from one dynamics to
another for the ball and the opponent’s paddle.
In conclusion, using higher T through truncated backpropagation can improve performance. However,
in schemes that use many prediction-dependent transitions, a high value of T can lead to poor
predictions.
Figure 6: Prediction error vs number of frames seen by the model for Riverraid, using BPTT(15, 1), BPTT(15, 2), and BPTT(15, 5), and training schemes 0%PDT, 33%PDT, and 100%PDT. The black line is obtained with the 100%PDT training scheme.
Evaluation through Human Play
Whilst we cannot expect our simulators to generalise to structured sequences of actions never chosen
by the DQN and that are not present in the training data, such as moving the agent up and down the
alley in Bowling, it is reasonable to expect some degree of generalisation in the action-wise simple
environments of Breakout, Freeway and Pong.
We tested these three games by having humans use the models as interactive simulators. We
generally found that models trained using only prediction-dependent transitions were more fragile to
states of the environment not experienced during training, such that the humans were able to play
these games for longer with simulators trained with mixing training schemes. This seems to indicate
that models with higher long-term test accuracy are at higher risk of overfitting to the training policy.
In Fig. 7(a), we show some salient frames from a game of Pong played by a human for 500 time-steps
(the corresponding video is available at Pong-HPlay). The game starts with score (2,0), after which
the opponent scores five times, whilst the human player scores twice. As we can see, the scoring is
updated correctly and the game dynamics is accurate. In Fig. 7(b), we show some salient frames from
a game of Breakout played by a human for 350 time-steps (the corresponding video is available at
Breakout-HPlay). As for Pong, the scoring is updated correctly and the game dynamics is accurate.
These images demonstrate some degree of generalisation of the model to a human style of play.
Evaluation of State Transition Structures
In Appendix B.1.2 and B.1.3 we present an extensive evaluation of different action-dependent state
transitions, including convolutional transformations for the action fusion, and gate and cell updates,
and different ways of incorporating action information. We also present a comparison between
action-dependent and action-independent state transitions.
Figure 7: Salient frames extracted from (a) 500 frames of Pong and (b) 350 frames of Breakout generated using our simulator with actions taken by a human player (larger versions can be found in Figs. 47 and 48).
Some action-dependent state transitions give better performance than the baseline (Eqs. (1)–(5)) in
some games. For example, we found that increasing the state dimension from 1024 to the dimension
of the convolved frame, namely 2816, might be preferable. Interestingly, this is not due to an increase
in the number of parameters, as the same gain is obtained using convolutions for the gate and cell
updates. These results seem to suggest that high-dimensional sparse state transition structures could
be a promising direction for further improvement. Regarding different ways of incorporating action information, we found that local incorporation, such as augmenting the frame with action information, and indirect action influence give worse performance than direct and global action influence, but that there are several ways of incorporating action information directly and globally
that give similar performance.
3.2 3D Environments
Both TORCS and the 3D maze environments highlight the need to learn dynamics that are temporally
and spatially coherent: TORCS exposes the need to learn fast moving dynamics and consistency
under motion, whilst 3D mazes are partially-observed and therefore require the simulator to build an
internal representation of its surroundings using memory, as well as learn basic physics, such as rotation,
momentum, and the solid properties of walls.
TORCS. The data was generated using an artificial agent controlling a fast car without opponents
(more details are given in Appendix B.2).
When using actions from the test set (see Fig. 49 and the corresponding video at TORCS), the
simulator was able to produce accurate predictions for up to several hundred time-steps. As the car
moved around the racing track, the simulator was able to predict the appearance of new features in
the background (towers, sitting areas, lamp posts, etc.), as well as model the jerky motion of the
car caused by our choices of random actions. Finally, the instruments (speedometer and rpm) were
correctly displayed.
The simulator was good enough to be used interactively for several hundred frames, using actions
provided by a human. This showed that the model had learnt well how to deal with the car hitting
the wall on the right side of the track. Some salient frames from the game are shown in Fig. 8 (the
corresponding video can be seen at TORCS-HPlay).
3D Mazes. We used an environment that consists of randomly generated 3D mazes, containing
textured surfaces with occasional paintings on the walls: the mazes were all of the same size, but
Figure 8: Salient frames highlighting coherence extracted from 700 frames of TORCS generated
using our simulator with actions taken by a human player.
Figure 9: Predicted (left) and real (right) frames at time-steps 1, 25, 66, 158 and 200 using actions
from the test data.
differed in the layout of rooms and corridors, and in the locations of paintings (see Fig. 11(b) for an
example of layout). More details are given in Appendix B.3.
When using actions from the test set, the simulator was able to very reasonably predict frames even
after 200 steps. In Fig. 9 we compare predicted frames to the real frames at several time-steps (the
corresponding video can be seen at 3DMazes). We can see that the wall layout is better predicted
when walls are closer to the agent, and that corridors and far-away walls are not as long as they
should be. The lighting on the ceiling is correct on all the frames shown.
When using the simulator interactively with actions provided by a human, we could test that the
simulator had learnt consistent aspects of the maze: when walking into walls, the model maintained
their position and layout (in one case we were able to walk through a painting on the wall – paintings
are rare in the dataset and hence it is not unreasonable that they would not be maintained when stress
testing the model in this way). When taking 360◦ spins, the wall configurations were the same as
previously generated and not regenerated afresh, as shown in Fig. 10 (see also 3DMazes-HPLay).
The coherence of the maze was good for nearby walls, but not at the end of long corridors.
3.3 Model-Based Exploration
The search for exploration strategies better than ε-greedy is an active area of research. Various
solutions have been proposed, such as density based or optimistic exploration (Auer et al., 2002).
Oh et al. (2015) considered a memory-based approach that steers the agent towards previously
unobserved frames. In this section, we test our simulators using a similar approach, but select a group
of actions rather than a single action at a time. Furthermore, rather than a fixed 2D environment, we
consider the more challenging 3D mazes environment. This also enables us to present a quantitative analysis, as we can exactly measure and plot the proportion of the maze visited over time. Our aim is to be quantitatively and qualitatively better than random exploration (using dithering of 0.7, as this led to the best possible random agent).
We used a 3D maze simulator to predict the outcome of sequences of actions, chosen with a hardcoded policy. Our algorithm (see below) did N Monte-Carlo simulations with randomly selected
sequences of actions of fixed length d. At each time-step t, we stored the last 10 observed frames in
an episodic memory buffer and compared predicted frames to those in memory.
for t = 1, episodeLength, d do
    for n = 1, N do
        Choose random actions An = at:t+d−1 ;
        Predict x̂n_{t+1:t+d} ;
    end
    Follow actions in An0 , where n0 = argmax_n min_{j=0,...,10} ‖x̂n_{t+d} − x_{t−j}‖
end

Our method (see Fig. 11(a)) covered 50% more of the maze area after 900 time-steps than random exploration. These results were obtained with 100 Monte-Carlo simulations and sequences of 6 actions (more details are given in Appendix B.4). Comparing typical paths chosen by the random explorer and by our explorer (see Fig. 11(b)), we see that our explorer has much smoother trajectories.
This is a good local exploration strategy that leads to faster movement through corridors. To
Figure 10: Salient frames highlighting wall-layout memory after 360◦ spin generated using our
simulator with actions taken by a human player.
Figure 11: (a) Average ratio over 10 mazes (shaded is the 68% confidence interval) of area visited by the random agent and an agent using our model. (b) Typical example of paths followed by (left) the random agent and (right) our agent (see the Appendix for more examples).
transform this into a good global exploration strategy, our explorer would have to be augmented with
a better memory in order to avoid going down the same corridor twice. These sorts of smooth local
exploration strategies could also be useful in navigation problems.
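A sketch of one step of this exploration procedure, mirroring the pseudocode above, is given below; it assumes a simulator object with a rollout method and an episodic memory of the last 10 observed frames (both names are our assumptions).

    import numpy as np

    def select_action_sequence(simulator, state, memory, action_space, N=100, d=6, rng=np.random):
        """Monte-Carlo action selection: sample N random sequences of d actions,
        roll the simulator forward, and pick the sequence whose final predicted
        frame is furthest from every frame in the episodic memory."""
        best_score, best_actions = -np.inf, None
        for _ in range(N):
            actions = [action_space[rng.randint(len(action_space))] for _ in range(d)]
            predicted = simulator.rollout(state, actions)                   # x̂_{t+1:t+d}
            score = min(np.linalg.norm(predicted[-1] - x) for x in memory)  # distance to memory
            if score > best_score:
                best_score, best_actions = score, actions
        return best_actions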
4 Prediction-Independent Simulators
A prediction-independent simulator has state transitions of the form st = f (st−1 , at−1 ), which
therefore do not require the high-dimensional predictions. In the Atari environment, for example,
this avoids having to project from the state space of dimension 1024 into the observation space
of dimension 100,800 (210×160×3) through the decoding function D, and vice versa through the
encoding function C – in the structure used, this saves around 200 million flops at each
time-step.
For the state transition, we found that a working structure was to use Eqs. (1)–(5) with zt = ht and
with different parameters for the warm-up and prediction phases. As for the prediction-dependent
simulator, we used a warm-up phase of length τ = 10, but we did backpropagate the gradient back to
time-step five in order to learn the encoding function C.
Our analysis on Atari (see Appendix C) suggests that the prediction-independent simulator is much
more sensitive to changes in the state transition structure and in the training scheme than the prediction-dependent simulator. We found that using prediction length T = 15 gave much worse long-term
accuracy than with the prediction-dependent simulator. This problem could be alleviated with the use
of prediction length T = 30 through truncated backpropagation.
Fig. 12 shows a comparison of the prediction-dependent and prediction-independent simulators using
T = 30 through two subsequences of length 15 (we indicate this as BPTT(15, 2), even though in the
prediction-independent simulator we did backpropagate the gradient to the warm-up phase).
When looking at the videos available at PI-Simulators, we notice that the prediction-independent simulator tends to give a worse type of long-term prediction. In Fishing Derby, for example, in the long term the model tends to create fish of smaller dimensions in addition to the fish present in the real frames. Nevertheless, for some difficult games the prediction-independent simulator achieves better performance than the prediction-dependent simulator. More investigation into alternative
Figure 12: Prediction error vs number of frames seen by the model (excluding warm-up frames) for the prediction-dependent and prediction-independent simulators using BPTT(15, 2) for (a) Bowling, Freeway, Pong and (b) Breakout, Fishing Derby, Ms Pacman, Qbert, Seaquest, Space Invaders (the prediction-dependent simulator is trained with the 0%-100%PDT training scheme).
state transitions and training schemes would need to be performed to obtain the same overall level of
accuracy as with the prediction-dependent simulator.
5 Discussion
In this paper we have introduced an approach to simulate action-conditional dynamics and demonstrated that it is highly adaptable to different environments, ranging from Atari games to 3D car racing
environments and mazes. We showed state-of-the-art results on Atari, and demonstrated the feasibility
of live human play in all three task families. The system is able to capture complex and long-term
interactions, and displays a sense of spatial and temporal coherence that has, to our knowledge, not
been demonstrated on high-dimensional time-series data such as these.
We have presented an in-depth analysis of the effect of different training approaches on short- and long-term prediction capabilities, and showed that moving towards schemes in which the simulator relies less on past observations to form future predictions has the effect of focussing model resources on
learning the global dynamics of the environment, leading to dramatic improvements in the long-term
predictions. However, this requires a distribution of resources that impacts short-term performance,
which can be harmful to the overall performance of the model for some games. This trade-off also causes the model to be less robust to states of the environment not seen during training. To
alleviate this problem would require the design of more sophisticated model architectures than the
ones considered here. Whilst it is also expected that more ad-hoc architectures would be less sensitive
to different training approaches, we believe that guiding the noise as well as teaching the model to
make use of past information through the objective function would still be beneficial for improving
long-term prediction.
Complex environments have compositional structure, such as independently moving objects and other
phenomena that only rarely interact. In order for our simulators to better capture this compositional
structure, we may need to develop specialised functional forms and memory stores that are better
suited to dealing with independent representations and their interlinked interactions and relationships.
More homogeneous deep network architectures such as the one presented here are clearly not optimal
for these domains, as can be seen in Atari environments such as Ms Pacman where the system has
trouble keeping track of multiple independently moving ghosts. Whilst the LSTM memory and our
training scheme have proven to capture long-term dependencies, alternative memory structures are
required in order, for example, to learn spatial coherence at a more global level than the one displayed
by our model in the 3D mazes in order to do navigation.
In the case of action-conditional dynamics, the policy-induced data distribution does not cover the
state space and might in fact be nonstationary over an agent lifetime. This can cause some regions
of the state space to be oversampled, and the regions we might actually care about the most – those just around the agent policy state distribution – to be underrepresented. In addition, this induces biases in the data that will ultimately not enable the model to learn the environment dynamics
correctly. As verified from the experiments in this paper, both on live human play and model-based
exploration, this problem is not yet as pressing as might be expected in some environments. However,
our simulators displayed limitations and faults due to the specificities of the training data, such as
for example predicting an event based on the recognition of a particular sequence of actions always
co-occurring with this event in the training data rather than on the recognition of the real causes.
Finally, a limitation of our approach is that, however capable it might be, it is a deterministic model
designed for deterministic environments. Clearly most real world environments involve noisy state
transitions, and future work will have to address the extension of the techniques developed in this
paper to more generative temporal models.
Acknowledgments
The authors would like to thank David Barber for helping with the graphical model interpretation,
Alex Pritzel for preparing the DQN data, Yori Zwols and Frederic Besse for helping with the
implementation of the model, and Oriol Vinyals, Yee Whye Teh, Junhyuk Oh, and the anonymous
reviewers for useful discussions and feedback on the manuscript.
References
P. Auer, N. Cesa-Bianchi, and P. Fischer. Finite-time analysis of the multiarmed bandit problem. Machine
Learning, 47:235–256, 2002.
C. Beattie, J. Z. Leibo, D. Teplyashin, T. Ward, M. Wainwright, H. Küttler, A. Lefrancq, S. Green, V. Valdés,
A. Sadik, J. Schrittwieser, K. Anderson, S. York, M. Cant, A. Cain, A. Bolton, S. Gaffney, H. King,
D. Hassabis, S. Legg, and S. Petersen. Deepmind lab. CoRR, abs/1612.03801, 2016. URL http://arxiv.org/abs/1612.03801.
M. G. Bellemare, Y. Naddaf, J. Veness, and M. Bowling. The Arcade Learning Environment: An evaluation
platform for general agents. Journal of Artificial Intelligence Research, 47:253–279, 2013.
S. Bengio, O. Vinyals, N. Jaitly, and N. Shazeer. Scheduled sampling for sequence prediction with recurrent
neural networks. In Advances in Neural Information Processing Systems 28 (NIPS), pp. 1171–1179. 2015.
A. Graves. Generating sequences with recurrent neural networks. CoRR, abs/1308.0850, 2013. URL http://arxiv.org/abs/1308.0850.
S. Hochreiter and J. Schmidhuber. Long short-term memory. Neural Computation, 9(8):1735–1780, 1997.
M. Lengyel and P. Dayan. Hippocampal contributions to control: The third way. In Advances in Neural
Information Processing Systems 20 (NIPS), pp. 889–896, 2008.
M. L. Littman, R. S. Sutton, and S. Singh. Predictive representations of state. In Advances in Neural Information
Processing Systems 14 (NIPS), pp. 1555–1561. 2002.
M. McCloskey. Intuitive physics. Scientific American, 248(4):122–130, 1983.
V. Mnih, K. Kavukcuoglu, D. Silver, A. A. Rusu, J. Veness, M. G. Bellemare, A. Graves, M. Riedmiller, A. K.
Fidjeland, G. Ostrovski, S. Petersen, C. Beattie, A. Sadik, I. Antonoglou, H. King, D. Kumaran, D. Wierstra,
S. Legg, and D. Hassabis. Human-level control through deep reinforcement learning. Nature, 518(7540):
529–533, 02 2015. URL http://dx.doi.org/10.1038/nature14236.
V. Mnih, A. Puigdomènech Badia, M. Mirza, A. Graves, T. P Lillicrap, T. Harley, D. Silver, and K. Kavukcuoglu.
Asynchronous methods for deep reinforcement learning. In Proceedings of the 33rd International Conference
on Machine Learning (ICML), 2016.
Y. Niv. Reinforcement learning in the brain. Journal of Mathematical Psychology, 53(3):139–154, 2009.
J. Oh, X. Guo, H. Lee, R. L. Lewis, and S. P. Singh. Action-conditional video prediction using deep networks in
Atari games. In Advances in Neural Information Processing Systems 28 (NIPS), pp. 2863–2871. 2015. URL
http://arxiv.org/abs/1507.08750.
J. K. O’Regan and A. Noë. A sensorimotor account of vision and visual consciousness. Behavioral and brain
sciences, 24(05):939–973, 2001.
P.-Y. Oudeyer, F. Kaplan, and V. V. Hafner. Intrinsic motivation systems for autonomous mental development.
Evolutionary Computation, IEEE Transactions on, 11(2):265–286, 2007.
V. Patraucean, A. Handa, and R. Cipolla. Spatio-temporal video autoencoder with differentiable memory. CoRR,
abs/1511.06309, 2015. URL http://arxiv.org/abs/1511.06309.
J. Pearl. Causality. Cambridge University Press, 2009.
N. Srivastava, E. Mansimov, and R. Salakhutdinov. Unsupervised learning of video representations using LSTMs.
In Proceedings of the 32nd International Conference on Machine Learning (ICML), pp. 843–852, 2015.
W. Sun, A. Venkatraman, B. Boots, and J. A. Bagnell. Learning to filter with predictive state inference machines.
CoRR, abs/1512.08836, 2015. URL http://arxiv.org/abs/1512.08836.
R. S. Sutton and A. G. Barto. Reinforcement learning: An introduction. MIT Press, 1998.
E. Talvitie. Model regularization for stable sample rollouts. In Proceedings of the Thirtieth Conference Annual
Conference on Uncertainty in Artificial Intelligence (UAI-14), pp. 780–789, 2014.
N. Wahlström, T. B. Schön, and M. P. Deisenroth. From pixels to torques: Policy learning with deep dynamical
models. CoRR, abs/1502.02251, 2015. URL http://arxiv.org/abs/1502.02251.
M. Watter, J. Springenberg, J. Boedecker, and M. Riedmiller. Embed to control: A locally linear latent dynamics
model for control from raw images. In Advances in Neural Information Processing Systems 28 (NIPS), pp.
2728–2736, 2015.
R. J. Williams and D. Zipser. Gradient-based learning algorithms for recurrent networks and their computational complexity. In Backpropagation: Theory, Architectures, and Applications, pp. 433–486, 1995.
B. Wymann, E. Espié, C. Guionneau, C. Dimitrakakis, R. Coulom, and A. Sumner. Torcs: The open racing car
simulator, v1.3.5. 2013. URL http://www.torcs.org.
B. Xu, N. Wang, T. Chen, and M. Li. Empirical evaluation of rectified activations in convolutional network.
2015.
W. Zaremba, I. Sutskever, and O. Vinyals. Recurrent neural network regularization. CoRR, abs/1409.2329, 2014.
URL http://arxiv.org/abs/1409.2329.
A Data, Preprocessing and Training Algorithm
When generating the data, each selected action was repeated for 4 time-steps and only the 4th frame
was recorded for the analysis. The RGB images were preprocessed by subtracting mean pixel values
(calculated separately for each color channel and over an initial set of 2048 frames only) and by
dividing each pixel value by 255.
As stochastic gradient algorithm, we used centered RMSProp (Graves, 2013) with learning rate⁴ 1e-5,
epsilon 0.01, momentum 0.9, decay 0.95, and mini-batch size 16. The model was implemented in
Torch, using the default initialization of the parameters. The state s1 was initialized to zero.
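A sketch of this preprocessing is shown below, assuming the frames are stored as a numpy array of shape (N, 210, 160, 3); the exact ordering of the two operations is our assumption.

    import numpy as np

    def preprocess(frames, num_mean_frames=2048):
        """Subtract per-channel mean pixel values (computed on an initial set of
        frames only) and divide by 255, as described above."""
        frames = frames.astype(np.float32)
        channel_mean = frames[:num_mean_frames].mean(axis=(0, 1, 2))   # one value per colour channel
        return (frames - channel_mean) / 255.0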
B Prediction-Dependent Simulators
As baseline for the single-step simulators we used the following state transition:
Encoding: zt−1 = C(I(x̂t−1 , xt−1 )) ,
Action fusion: vt = Wh ht−1 ⊗ Wa at−1 ,
Gate update: it = σ(Wiv vt + Wiz zt−1 ) , ft = σ(Wfv vt + Wfz zt−1 ) ,
ot = σ(Wov vt + Woz zt−1 ) ,
Cell update: ct = ft ⊗ ct−1 + it ⊗ tanh(Wcv vt + Wcz zt−1 ) ,
State update: ht = ot ⊗ tanh(ct ) ,
with vectors ht−1 and vt of dimension 1024 and 2048 respectively.
B.1 Atari
We used a trained DQN agent (the scores are given in the
table on the right) to generate training and test datasets
consisting of 5,000,000 and 1,000,000 (210×160) RGB
images respectively, with actions chosen according to an
ε = 0.2-greedy policy. Such a large number of training
frames was necessary to prevent our simulators from
strongly overfitting to the training data. This would be
the case with, for example, one million training frames,
as shown in Fig. 13 (the corresponding video can be
seen at MSPacman). The ghosts are in frightened mode
at time-step 1 (first image), and have returned to chase
mode at time-step 63 (second image). The simulator
is able to predict the exact time of return to the chase
mode without sufficient history, which suggests that the
sequence was memorized.
Game Name        DQN Score
Bowling          51.84
Breakout         396.25
Fishing Derby    19.30
Freeway          33.38
Ms Pacman        2963.31
Pong             20.88
Qbert            14,865.43
Riverraid        13,593.49
Seaquest         17,250.31
Space Invaders   2952.09
The encoding consisted of 4 convolutional layers with 64, 32, 32 and 32 filters, of size 8 × 8, 6 × 6,
6 × 6, and 4 × 4, stride 2, and padding 0, 1, 1, 0 and 1, 1, 1, 0 for the height and width respectively.
Every layer was followed by a randomized rectified linear function (RReLU) (Xu et al., 2015)
with parameters l = 1/8, u = 1/3. The output tensor of the convolutional layers of dimension
32 × 11 × 8 was then flattened into the vector zt of dimension 2816. The decoding consisted of one
fully-connected layer with 2816 hidden units followed by 4 full convolutional layers with the inverse
symmetric structure of the encoding transformation: 32, 32, 32 and 64 filters, of size 4 × 4, 6 × 6,
6 × 6, and 8 × 8, stride 2, and padding 0, 1, 1, 0 and 0, 1, 1, 1. Each full convolutional layer (except
the last one) was followed by a RReLU.
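A sketch of the encoding function C just described is given below, assuming PyTorch (the original implementation was in Torch); the per-layer (height, width) paddings follow the values listed above, and the output for a 210 × 160 input is 32 × 11 × 8, i.e. a vector of dimension 2816.

    import torch.nn as nn

    # Sketch of the described Atari encoder C: 3 x 210 x 160 RGB frame -> 2816-dimensional z.
    encoder = nn.Sequential(
        nn.Conv2d(3, 64, kernel_size=8, stride=2, padding=(0, 1)), nn.RReLU(1/8, 1/3),
        nn.Conv2d(64, 32, kernel_size=6, stride=2, padding=(1, 1)), nn.RReLU(1/8, 1/3),
        nn.Conv2d(32, 32, kernel_size=6, stride=2, padding=(1, 1)), nn.RReLU(1/8, 1/3),
        nn.Conv2d(32, 32, kernel_size=4, stride=2, padding=(0, 0)), nn.RReLU(1/8, 1/3),
        nn.Flatten(),   # 32 x 11 x 8 -> z_t of dimension 2816
    )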
In Fig. 14, we show one example of successful prediction at time-steps 100 and 200 for each game.
B.1.1 Short-Term versus Long-Term Accuracy
In Figures 15-19, we show the prediction error obtained with the training schemes described in Sec.
3.1 for all games. Below we discuss the main findings for each game.
⁴ We found that using a higher learning rate value of 2e-5 would generally increase convergence speed but cause major instability issues, suggesting that gradient clipping would need to be used.
Figure 13: Prediction that demonstrates overfitting of the model when trained on one million frames.
Bowling. Bowling is one of the easiest games to model. A simulator trained using only observation-dependent transitions gives quite accurate predictions. However, using only prediction-dependent transitions reduces the error in updating the score and predicting the ball direction.
Breakout. Breakout is a difficult game to model. A simulator trained with only prediction-dependent transitions predicts the paddle movement very accurately but almost always fails to
represent the ball. A simulator trained with only observation-dependent transitions struggles much
less to represent the ball but does not predict the paddle and ball positions as accurately, and the ball
also often disappears after hitting the paddle. Interestingly, the long-term prediction error (bottom-right of Fig. 15(b)) for the 100%PDT training scheme is the lowest, as when not representing the ball
the predicted frames look closer to the real frames than when representing the ball incorrectly. A
big improvement in the model ability to represent the ball could be obtained by pre-processing the
frames with max-pooling as done for DQN, as this increases the ball size. We believe that a more
sophisticated convolutional structure would be even more effective, but did not succeed in discovering
such a structure.
Fishing Derby. In Fishing Derby, long-term accuracy is disastrous with the 0%PDT training
scheme and good with the 100%PDT training scheme. Short-term accuracy is better with schemes
using more observation-dependent transitions than in the 100% or 0%-100%PDT training schemes,
especially at low numbers of parameter updates.
Freeway. Together with Bowling, Freeway is one of the easiest games to model, but more parameter updates
are required for convergence than for Bowling. The 0%PDT training scheme gives good accuracy,
although sometimes the chicken disappears or its position is incorrectly predicted – this happens
extremely rarely with the 100%PDT training scheme. In both schemes, the score is often wrongly
updated in the warning phase.
Ms Pacman. Ms Pacman is a very difficult game to model and accurate prediction can only be
obtained for a few time-steps into the future. The movement of the ghosts, especially when in
frightened mode, is regulated by the position of Ms Pacman according to complex rules. Furthermore,
the DQN ε = 0.2-greedy policy does not enable the agent to explore certain regions of the state space.
As a result, the simulator can predict well the movement of Ms Pacman, but fails to predict long-term
the movement of the ghosts when in frightened mode or when in chase mode later in the episodes.
Pong. With the 0%PDT training scheme, the model often incorrectly predicts the direction of the
ball when hit by the agent or by the opponent. Quite rarely, the ball disappears when hit by the agent.
With the 100%PDT training scheme, the direction of the ball is much more accurately predicted, but the
ball more often disappears when hit by the agent, and the ball and paddles are generally less sharp.
Figure 14: One example of 200 time-step ahead prediction for each of the 10 Atari games. Displayed
are predicted (left) and real (right) frames at time-steps 100 and 200.
Qbert. Qbert is a game for which the 0%PDT training scheme is unable to predict accurately
beyond very short-term, as after a few frames only the background is predicted. The more prediction-dependent transitions are used, the less sharply the agent and the moving objects are represented.
Riverraid. In Riverraid, prediction with the 0%PDT training scheme is very poor, as this scheme
causes no generation of new objects or background. With all schemes, the model fails to predict
the frames that follow a jet loss – which is why the prediction error increases sharply after around
time-step 13 in Fig. 18(b). The long-term prediction error is lower with the 100%PDT training
scheme, as with this scheme the simulator is more accurate before, and sometimes after, a jet
loss. The problem of incorrect prediction after a jet loss disappears when using BPTT(15,2) with
prediction-dependent transitions.
Seaquest. In Seaquest, with the 0%PDT training scheme, the existing fish disappears after a few
time-steps and no new fish ever appears from the sides of the frame. The higher the number of
prediction-dependent transitions the less sharply the fish is represented, but the more accurately its
dynamics and appearance from the sides of the frame can be predicted.
Space Invaders. Space Invaders is a very difficult game to model and accurate prediction can only
be obtained for a few time-steps into the future. The 0%PDT training scheme is unable to predict
accurately beyond very short-term. The 100%PDT training scheme struggles to represent the bullets.
In Figs. 20-24 we show the effect of using different prediction lengths T ≤ 20 with the training
schemes 0%PDT, 67%PDT, and 100%PDT for all games.
In Figs. 25-29 we show the effect of using different prediction lengths T > 20 through truncated
backpropagation with the training schemes 0%PDT, 33%PDT, and 100%PDT for all games.
22.5
3.5
0% PDT
0%-20%-33% PDT
33% PDT
47% PDT Alt.
47% PDT
67% PDT
0%-100% PDT
100% PDT
Oh et al.
Prediction Error
17.5
15
12.5
10
7.5
5
2.5
2.5
2
1.5
1
0.5
0
25
50
Time-steps
75
0
0
100
4
40
3.5
35
Prediction Error at Time-step 100
Prediction Error at Time-step 10
1
3
2.5
2
1.5
1
0.5
0
0
80
160
240
Number of Frames
Training
Test
3
Prediction Error at Time-step 5
20
320
160
240
Number of Frames
320
400
30
25
20
15
10
5
0
0
400
80
80
160
240
Number of Frames
320
400
(a)
160
20
0% PDT
0%-20%-33% PDT
33% PDT
47% PDT Alt.
47% PDT
67% PDT
0%-100% PDT
100% PDT
Oh et al.
Prediction Error
120
100
80
60
40
20
15
12.5
10
7.5
5
2.5
0
1
25
50
Time-steps
75
0
0
100
80
160
240
Number of Frames
320
400
170
Prediction Error at Time-step 100
Prediction Error at Time-step 10
50
40
30
20
10
0
0
Training
Test
17.5
Prediction Error at Time-step 5
140
80
160
240
Number of Frames
320
400
150
130
110
90
70
50
0
80
160
240
Number of Frames
320
400
(b)
Figure 15: Prediction error (average over 10,000 sequences) for different training schemes on (a)
Bowling and (b) Breakout. Number of frames is in millions.
20
Published as a conference paper at ICLR 2017
210
120
180
Prediction Error
150
120
90
Prediction Error at Time-step 5
0% PDT
0%-20%-33% PDT
33% PDT
47% PDT Alt.
47% PDT
67% PDT
0%-100% PDT
100% PDT
Oh et al.
60
30
80
60
40
20
0
25
50
Time-steps
75
0
0
100
140
260
120
220
Prediction Error at Time-step 100
Prediction Error at Time-step 10
1
100
80
60
40
20
0
0
80
160
240
Number of Frames
320
80
160
240
Number of Frames
320
400
80
160
240
Number of Frames
320
400
180
140
100
60
20
0
400
Training
Test
100
(a)
3.5
2
0% PDT
0%-20%-33% PDT
33% PDT
47% PDT Alt.
47% PDT
67% PDT
0%-100% PDT
100% PDT
Oh et al.
Prediction Error
2.5
2
1.5
1
0.5
1.5
1.25
1
0.75
0.5
0.25
0
1
25
50
Time-steps
Training
Test
1.75
Prediction Error at Time-step 5
3
75
0
0
100
2
80
160
240
Number of Frames
320
400
5
Prediction Error at Time-step 100
Prediction Error at Time-step 10
1.75
1.5
1.25
1
0.75
0.5
0.25
0
0
80
160
240
Number of Frames
320
400
4
3
2
1
0
0
80
160
240
Number of Frames
320
400
(b)
Figure 16: Prediction error for different training schemes on (a) Fishing Derby and (b) Freeway.
21
Published as a conference paper at ICLR 2017
200
45
0% PDT
0%-20%-33% PDT
33% PDT
47% PDT Alt.
47% PDT
67% PDT
0%-100% PDT
100% PDT
Oh et al.
120
80
40
0
1
25
50
Time-steps
75
30
25
20
15
80
160
240
Number of Frames
320
400
80
160
240
Number of Frames
320
400
250
Prediction Error at Time-step 100
Prediction Error at Time-step 10
35
10
0
100
60
50
40
30
20
10
0
Training
Test
40
Prediction Error at Time-step 5
Prediction Error
160
80
160
240
Number of Frames
320
215
180
145
110
75
40
0
400
(a)
9
4
0% PDT
0%-20%-33% PDT
33% PDT
47% PDT Alt.
47% PDT
67% PDT
0%-100% PDT
100% PDT
Oh et al.
Prediction Error
7
6
5
4
3
2
1
3
2.5
2
1.5
1
0.5
0
25
50
Time-steps
75
0
0
100
4
18
3.5
16
Prediction Error at Time-step 100
Prediction Error at Time-step 10
1
3
2.5
2
1.5
1
0.5
0
0
80
160
240
Number of Frames
Training
Test
3.5
Prediction Error at Time-step 5
8
320
400
80
160
240
Number of Frames
320
14
12
10
8
6
4
2
0
0
80
160
240
Number of Frames
320
(b)
Figure 17: Prediction error for different training schemes on (a) Ms Pacman and (b) Pong.
22
400
400
Published as a conference paper at ICLR 2017
360
50
240
180
120
Prediction Error at Time-step 5
300
Prediction Error
45
0% PDT
0%-20%-33% PDT
33% PDT
47% PDT Alt.
47% PDT
67% PDT
0%-100% PDT
100% PDT
Oh et al.
60
0
1
25
50
Time-steps
75
35
30
25
20
15
10
5
0
0
100
90
Training
Test
40
80
160
240
Number of Frames
320
400
80
160
240
Number of Frames
320
400
310
Prediction Error at Time-step 100
Prediction Error at Time-step 10
80
70
60
50
40
30
20
10
0
0
80
160
240
Number of Frames
320
260
210
160
110
60
0
400
(a)
1200
120
0% PDT
0%-20%-33% PDT
33% PDT
47% PDT Alt.
47% PDT
67% PDT
0%-100% PDT
100% PDT
Oh et al.
800
600
400
200
0
25
50
Time-steps
75
100
90
80
70
60
50
40
0
100
220
320
190
280
Prediction Error at Time-step 15
Prediction Error at Time-step 10
1
160
130
100
70
40
0
80
160
240
Number of Frames
Training
Test
110
Prediction Error at Time-step 5
Prediction Error
1000
320
400
80
160
240
Number of Frames
320
400
80
160
240
Number of Frames
320
400
240
200
160
120
80
40
0
(b)
Figure 18: Prediction error for different training schemes on (a) Qbert and (b) Riverraid.
[Figure 19 plots omitted: prediction error vs. time-steps and vs. number of frames (at time-steps 5, 10, 100), training and test, for the same training schemes as Figure 16. Panels (a) and (b).]
Figure 19: Prediction error for different training schemes on (a) Seaquest and (b) Space Invaders.
[Figure 20 plots omitted: prediction error vs. time-steps and vs. number of frames (at time-steps 5, 10, 100), for prediction lengths T = 10, 15, 20 under the 0%, 67%, and 100% PDT schemes. Panels (a) and (b).]
Figure 20: Prediction error (average over 10,000 sequences) for different prediction lengths T ≤ 20
on (a) Bowling and (b) Breakout. Number of frames is in millions and excludes warm-up frames.
[Figure 21 plots omitted: prediction error vs. time-steps and vs. number of frames (at time-steps 5, 10, 100), for prediction lengths T = 10, 15, 20 under the 0%, 67%, and 100% PDT schemes. Panels (a) and (b).]
Figure 21: Prediction error for different prediction lengths T ≤ 20 on (a) Fishing Derby and (b)
Freeway.
[Figure 22 plots omitted: prediction error vs. time-steps and vs. number of frames (at time-steps 5, 10, 100), for prediction lengths T = 10, 15, 20 under the 0%, 67%, and 100% PDT schemes. Panels (a) and (b).]
Figure 22: Prediction error for different prediction lengths T ≤ 20 on (a) Ms Pacman and (b) Pong.
[Figure 23 plots omitted: prediction error vs. time-steps and vs. number of frames (at early and late time-steps), for prediction lengths T = 10, 15, 20 under the 0%, 67%, and 100% PDT schemes. Panels (a) and (b).]
Figure 23: Prediction error for different prediction lengths T ≤ 20 on (a) Qbert and (b) Riverraid.
[Figure 24 plots omitted: prediction error vs. time-steps and vs. number of frames (at time-steps 5, 10, 100), for prediction lengths T = 10, 15, 20 under the 0%, 67%, and 100% PDT schemes. Panels (a) and (b).]
Figure 24: Prediction error for different prediction lengths T ≤ 20 on (a) Seaquest and (b) Space
Invaders.
[Figure 25 plots omitted: prediction error vs. time-steps and vs. number of frames (at time-steps 5, 10, 100), for BPTT(15, 1), BPTT(15, 2), BPTT(15, 5) under the 0%, 33%, and 100% PDT schemes. Panels (a) and (b).]
Figure 25: Prediction error (average over 10,000 sequences) for different prediction lengths through
truncated BPTT on (a) Bowling and (b) Breakout. Number of frames is in millions and excludes
warm-up frames.
[Figure 26 plots omitted: prediction error vs. time-steps and vs. number of frames (at time-steps 5, 10, 100), for BPTT(15, 1), BPTT(15, 2), BPTT(15, 5) under the 0%, 33%, and 100% PDT schemes. Panels (a) and (b).]
Figure 26: Prediction error for different prediction lengths through truncated BPTT on (a) Fishing
Derby and (b) Freeway.
[Figure 27 plots omitted: prediction error vs. time-steps and vs. number of frames (at time-steps 5, 10, 100), for BPTT(15, 1), BPTT(15, 2), BPTT(15, 5) under the 0%, 33%, and 100% PDT schemes. Panels (a) and (b).]
Figure 27: Prediction error for different prediction lengths through truncated BPTT on (a) Ms Pacman
and (b) Pong.
[Figure 28 plots omitted: prediction error vs. time-steps and vs. number of frames (at early and late time-steps), for BPTT(15, 1), BPTT(15, 2), BPTT(15, 5) under the 0%, 33%, and 100% PDT schemes. Panels (a) and (b).]
Figure 28: Prediction error for different prediction lengths through truncated BPTT on (a) Qbert and
(b) Riverraid.
[Figure 29 plots omitted: prediction error vs. time-steps and vs. number of frames (at time-steps 5, 10, 100), for BPTT(15, 1), BPTT(15, 2), BPTT(15, 5) under the 0%, 33%, and 100% PDT schemes. Panels (a) and (b).]
Figure 29: Prediction error for different prediction lengths through truncated BPTT on (a) Seaquest
and (b) Space Invaders.
B.1.2
DIFFERENT ACTION-DEPENDENT STATE TRANSITIONS
In this section we compare the baseline state transition
Encoding: zt−1 = C(I(x̂t−1 , xt−1 )) ,
Action fusion: vt = Wh ht−1 ⊗ Wa at−1 ,
Gate update: it = σ(Wiv vt + Wiz zt−1 ) , ft = σ(Wf v vt + Wf z zt−1 ) ,
ot = σ(Wov vt + Woz zt−1 ) ,
Cell update: ct = ft ⊗ ct−1 + it ⊗ tanh(Wcv vt + Wcz zt−1 ) ,
State update: ht = ot ⊗ tanh(ct ) ,
where the vectors ht−1 and vt have dimension 1024 and 2048 respectively (this model has around 25 million (25M) parameters), with alternatives using unconstrained or convolutional transformations,
for prediction length T = 15 and the 0%-100%PDT training scheme.
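To make the baseline transition above concrete, the following is a minimal NumPy sketch of a single update step. The encoder C and the frame selector I(x̂, x) are abstracted away (the encoded frame z is taken as a given vector), the dimensions are shrunk for the demo (the model uses 1024/2048/2816), and the weight values are random placeholders rather than trained parameters.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def baseline_step(h, c, z, a, W):
    """One step of the action-conditioned state transition (baseline form above)."""
    v = (W['h'] @ h) * (W['a'] @ a)                      # action fusion (elementwise product)
    i = sigmoid(W['iv'] @ v + W['iz'] @ z)               # input gate
    f = sigmoid(W['fv'] @ v + W['fz'] @ z)               # forget gate
    o = sigmoid(W['ov'] @ v + W['oz'] @ z)               # output gate
    c = f * c + i * np.tanh(W['cv'] @ v + W['cz'] @ z)   # cell update
    h = o * np.tanh(c)                                   # state update
    return h, c

# Toy dimensions (illustrative only; the model uses n_h = 1024, n_v = 2048, n_z = 2816).
n_h, n_v, n_z, n_a = 8, 16, 32, 4
rng = np.random.default_rng(0)
W = {'h': rng.normal(size=(n_v, n_h)), 'a': rng.normal(size=(n_v, n_a))}
for g in ('i', 'f', 'o', 'c'):
    W[g + 'v'] = rng.normal(size=(n_h, n_v))
    W[g + 'z'] = rng.normal(size=(n_h, n_z))

h, c = np.zeros(n_h), np.zeros(n_h)
z = rng.normal(size=n_z)          # stand-in for z_{t-1} = C(I(x_hat_{t-1}, x_{t-1}))
a = np.eye(n_a)[1]                # one-hot action a_{t-1}
h, c = baseline_step(h, c, z, a, W)
```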
More specifically, in Figs. 30-34 we compare the baseline transition with the following alternatives:
• Base2816: The vectors ht−1 and vt have the same dimension as zt−1 , namely 2816. This model
has around 80M parameters.
• izt and izt 2816: Have a separate gating for zt−1 in the cell update, i.e.
ct = ft ⊗ ct−1 + it ⊗ tanh(Wcv vt ) + izt ⊗ tanh(Wcz zt−1 ) .
This model has around 30 million parameters. We also considered removing the linear
projection of zt−1 , i.e.
ct = ft ⊗ ct−1 + it ⊗ tanh(Wcv vt ) + izt ⊗ tanh(zt−1 ) ,
without RReLU after the last convolution and with vectors ht−1 and vt of dimensionality
2816. This model has around 88M parameters.
• ¬zt−1 and ¬zt−1 –izt 2816: Remove zt−1 in the gate updates, i.e.
it = σ(Wiv vt ) ,
ft = σ(Wf v vt ) ,
ot = σ(Wov vt ) ,
with one of the following cell updates
ct = ft ⊗ ct−1 + it ⊗ tanh(Wcv vt + Wcz zt−1 ) , 17M parameters ,
ct = ft ⊗ ct−1 + it ⊗ tanh(Wcv vt ) + izt ⊗ tanh(zt−1 ) , 56M parameters .
• ht−1 , ht−1 –izt , and ht−1 –izt 2816: Substitute zt−1 with ht−1 in the gate updates, i.e.
it = σ(Wiv vt + Wih ht−1 ) , ft = σ(Wf v vt + Wf h ht−1 ) ,
ot = σ(Wov vt + Woh ht−1 ) ,
with one of the following cell updates
ct = ft ⊗ ct−1 + it ⊗ tanh(Wcv vt + Wch ht−1 + Wcz zt−1 ) , 21M parameters ,
ct = ft ⊗ ct−1 + it ⊗ tanh(Wcv vt + Wch ht−1 ) + izt ⊗ tanh(Wcz zt−1 ) , 24M parameters ,
ct = ft ⊗ ct−1 + it ⊗ tanh(Wcv vt + Wch ht−1 ) + izt ⊗ tanh(zt−1 ) , 95M parameters .
As we can see from the figures, there is no other transition that is clearly preferable to the baseline,
with the exception of Fishing Derby, for which transitions with 2816 hidden dimensionality perform
better and converge earlier in terms of number of parameter updates.
In Figs. 35-39 we compare the baseline transition with the following convolutional alternatives
(where to apply the convolutional transformations the vectors zt−1 and vt of dimensionality 2816 are
reshaped into tensors of dimension 32 × 11 × 8):
• C and 2C: Convolutional gate and cell updates, i.e.
it = σ(C iv (vt ) + C iz (zt−1 )) , ft = σ(C f v (vt ) + C f z (zt−1 )) ,
ot = σ(C ov (vt ) + C oz (zt−1 )) ,
ct = ft ⊗ ct−1 + it ⊗ tanh(C cv (vt ) + C cz (zt−1 )) ,
where C denotes either one convolution with 32 filters of size 3×3, with stride 1 and
padding 1 (so as to preserve the input size), or two such convolutions with RReLU nonlinearity
in between. These two models have around 16M parameters.
• CDA and 2CDA: As above but with different action fusion parameters for the gate and cell updates,
i.e.
vti = Wih ht−1 ⊗ Wia at−1 ,
vtf = Wf h ht−1 ⊗ Wf a at−1 ,
vto = Woh ht−1 ⊗ Woa at−1 ,
vtc = Wch ht−1 ⊗ Wca at−1 ,
it = σ(C iv (vti ) + C iz (zt−1 )) , ft = σ(C f v (vtf ) + C f z (zt−1 )) ,
ot = σ(C ov (vto ) + C oz (zt−1 )) ,
ct = ft ⊗ ct−1 + it ⊗ tanh(C cv (vtc ) + C cz (zt−1 )) .
These two models have around 40M parameters.
• ht−1 –izt 2816–2C: As ’ht−1 –izt 2816’ with convolutional gate and cell updates, i.e.
it = σ(C iv (vt ) + C ih (ht−1 )) ,
ft = σ(C f v (vt ) + C f h (ht−1 )) ,
ot = σ(C ov (vt ) + C oh (ht−1 )) ,
ct = ft ⊗ ct−1 + it ⊗ tanh(C cv (vt ) + C ch (ht−1 )) + izt ⊗ tanh(zt−1 ) ,
where C denotes two convolutions as above. This model has around 16M parameters.
• ht−1 –izt 2816–CDA and ht−1 –izt 2816–2CDA: As above but with different parameters for the gate
and cell updates, and one or two convolutions. These two models have around 48M
parameters.
• ht−1 –izt 2816–2CA: As ’ht−1 –izt 2816’ with convolutional action fusion, gate and cell updates, i.e.
vt = C h (ht−1 ) ⊗ Wa at−1 ,
it = σ(C iv (vt ) + C ih (ht−1 )) ,
ft = σ(C f v (vt ) + C f h (ht−1 )) ,
ot = σ(C ov (vt ) + C oh (ht−1 )) ,
ct = ft ⊗ ct−1 + it ⊗ tanh(C cv (vt ) + C ch (ht−1 )) + izt ⊗ tanh(zt−1 ) ,
where C indicates two convolutions as above. This model has around 8M parameters.
B.1.3
ACTION INCORPORATION
In Figs. 40-44 we compare different ways of incorporating the action for action-dependent state
transitions, using prediction length T = 15 and the 0%-100%PDT training scheme. More specifically,
we compare the baseline structure (denoted as ’Wh ht−1 ⊗Wa at−1 ’ in the figures) with the following
alternatives:
• Wh ht−1 ⊗ Wa1 at−1 + Wa2 at−1 : Multiplicative/additive interaction of the action with ht−1 ,
i.e. vt = Wh ht−1 ⊗ Wa1 at−1 + Wa2 at−1 . This model has around 25M parameters.
• Wz zt−1 ⊗ Wa at−1 : Multiplicative interaction of the action with the encoded frame zt−1 , i.e.
vt = Wz zt−1 ⊗ Wa at−1 ,
it = σ(Wih ht−1 + Wiv vt ) ,
ft = σ(Wf h ht−1 + Wf v vt ) ,
ot = σ(Woh ht−1 + Wov vt ) ,
ct = ft ⊗ ct−1 + it ⊗ tanh(Wch ht−1 + Wcv vt ) .
This model has around 22M parameters.
• Wh ht−1 ⊗ Wz zt−1 ⊗ Wa at−1 : Multiplicative interaction of the action with both ht−1 and zt−1
in the following way
vt = Wh ht−1 ⊗ Wz zt−1 ⊗ Wa at−1 ,
it = σ(Wiv vt ) , ft = σ(Wf v vt ) , ot = σ(Wov vt ) ,
ct = ft ⊗ ct−1 + it ⊗ tanh(Wcv vt ) .
This model has around 19M parameters. We also considered having different matrices for
the gate and cell updates (denoted in the figures as ’W∗h ht−1 ⊗ W∗z zt−1 ⊗ W∗a at−1 ’).
This model has around 43M parameters.
• Wh ht−1 ⊗ Wa1 at−1 : Alternative multiplicative interaction of the action with ht−1 and zt−1
vt1 = Wh ht−1 ⊗ Wa1 at−1 ,
vt2 = Wz zt−1 ⊗ Wa2 at−1 ,
it = σ(Wiv1 vt1 + Wiv2 vt2 ) ,
ft = σ(Wf v1 vt1 + Wf v2 vt2 ) ,
ot = σ(Wov1 vt1 + Wov2 vt2 ) ,
ct = ft ⊗ ct−1 + it ⊗ tanh(Wcv1 vt1 + Wcv2 vt2 ) .
This model has around 28M parameters. We also considered having different matrices for
the gate and cell updates (denoted in the figures as ’W∗h ht−1 ⊗ W∗a1 at−1 ’). This model
has around 51M parameters.
• As Input: Consider the action as an additional input, i.e.
it = σ(Wih ht−1 + Wiz zt−1 + Wia at−1 ) ,
ft = σ(Wf h ht−1 + Wf z zt−1 + Wf a at−1 ) ,
ot = σ(Woh ht−1 + Woz zt−1 + Woa at−1 ) ,
ct = ft ⊗ ct−1 + it ⊗ tanh(Wch ht−1 + Wcz zt−1 + Wca at−1 ) .
This model has around 19M parameters.
• CA: Combine the action with the frame, by replacing the encoding with
zt−1 = C(A(I(x̂t−1 , xt−1 ), at−1 )) ,
where A indicates an augmenting operation: the frame of dimension nC = 3 × nH =
210 × nW = 160 is augmented with nA (number of actions) full-zero or full-one matrices
of dimension nH × nW , producing a tensor of dimension (nC + nA) × nH × nW . As
the output of the first convolution can be written as
y_{j,k,l} = \sum_{h=1}^{nH} \sum_{w=1}^{nW} \Big\{ \sum_{i=1}^{nC} W^{i,j}_{h,w}\, x_{i,\,h+dH(k-1),\,w+dW(l-1)} + x_{nC+a,\,h+dH(k-1),\,w+dW(l-1)} \Big\} ,
where dH and dW indicate the filter strides, with this augmentation the action has a local
linear interaction. This model has around 19M parameters.
As we can see from the figures, ’CA’ is generally considerably worse than the other structures.
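For reference, the augmentation step used by 'CA' can be sketched as follows; this is only an illustration of the operation A(·) described above (one constant plane per action, all ones for the chosen action and all zeros otherwise), with illustrative tensor shapes, not the actual training code. The subsequent convolution is omitted.

```python
import numpy as np

def augment_frame_with_action(frame, action_idx, n_actions):
    """Append n_actions constant planes to the frame: the plane of the chosen
    action is all ones, the others all zeros (the 'CA' augmentation A)."""
    n_c, n_h, n_w = frame.shape                       # e.g. 3 x 210 x 160 for Atari
    planes = np.zeros((n_actions, n_h, n_w), dtype=frame.dtype)
    planes[action_idx] = 1.0
    return np.concatenate([frame, planes], axis=0)    # (n_c + n_actions) x n_h x n_w

frame = np.random.rand(3, 210, 160).astype(np.float32)
x_aug = augment_frame_with_action(frame, action_idx=2, n_actions=18)
print(x_aug.shape)   # (21, 210, 160)
```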
ACTION-INDEPENDENT VERSUS ACTION-DEPENDENT STATE TRANSITION
In Fig. 45, we compare the baseline structure with one that is action-independent as in Oh et al.
(2015), using prediction length T = 15 and the 0%-100%PDT training scheme.
As we can see, having an action-independent state transition generally gives worse performance in
the games with higher error. An interesting disadvantage of such a structure is its inability to predict
the moving objects around the agent in Seaquest. This can be noticed in the videos in Seaquest, which
show poor modelling of the fish. This structure also makes it more difficult to correctly update the
score in some games such as Seaquest and Fishing Derby.
[Figure 30 plots omitted: prediction error vs. time-steps and vs. parameter updates (at time-steps 5, 10, 100), training and test, for the transitions Baseline, Base2816, izt, izt 2816, ¬zt−1, ¬zt−1–izt 2816, ht−1, ht−1–izt, and ht−1–izt 2816. Panels (a) and (b).]
Figure 30: Prediction error (average over 10,000 sequences) for different action-dependent state
transitions on (a) Bowling and (b) Breakout. Parameter updates are in millions.
[Figure 31 plots omitted: prediction error vs. time-steps and vs. parameter updates (at time-steps 5, 10, 100), training and test, for the same transitions as Figure 30. Panels (a) and (b).]
Figure 31: Prediction error for different action-dependent state transitions on (a) Fishing Derby and
(b) Freeway.
[Figure 32 plots omitted: prediction error vs. time-steps and vs. parameter updates (at time-steps 5, 10, 100), training and test, for the same transitions as Figure 30. Panels (a) and (b).]
Figure 32: Prediction error for different action-dependent state transitions on (a) Ms Pacman and (b)
Pong.
[Figure 33 plots omitted: prediction error vs. time-steps and vs. parameter updates (at early and late time-steps), training and test, for the same transitions as Figure 30. Panels (a) and (b).]
Figure 33: Prediction error for different action-dependent state transitions on (a) Qbert and (b)
Riverraid.
[Figure 34 plots omitted: prediction error vs. time-steps and vs. parameter updates (at time-steps 5, 10, 100), training and test, for the same transitions as Figure 30. Panels (a) and (b).]
Figure 34: Prediction error for different action-dependent state transitions on (a) Seaquest and (b)
Space Invaders.
[Figure 35 plots omitted: prediction error vs. time-steps and vs. parameter updates (at time-steps 5, 10, 100), training and test, for the Baseline and the convolutional transitions C, 2C, C–DA, 2C–DA, ht−1–izt 2816–2C, ht−1–izt 2816–C–DA, ht−1–izt 2816–2C–DA, and ht−1–izt 2816–2CA. Panels (a) and (b).]
Figure 35: Prediction error (average over 10,000 sequences) for different convolutional action-dependent state transitions on (a) Bowling and (b) Breakout. Parameter updates are in millions.
[Figure 36 plots omitted: prediction error vs. time-steps and vs. parameter updates (at time-steps 5, 10, 100), training and test, for the same convolutional transitions as Figure 35. Panels (a) and (b).]
Figure 36: Prediction error for different convolutional action-dependent state transitions on (a)
Fishing Derby and (b) Freeway.
[Figure 37 plots omitted: prediction error vs. time-steps and vs. parameter updates (at time-steps 5, 10, 100), training and test, for the same convolutional transitions as Figure 35. Panels (a) and (b).]
Figure 37: Prediction error for different convolutional action-dependent state transitions on (a) Ms
Pacman and (b) Pong.
[Figure 38 plots omitted: prediction error vs. time-steps and vs. parameter updates (at early and late time-steps), training and test, for the same convolutional transitions as Figure 35. Panels (a) and (b).]
Figure 38: Prediction error for different convolutional action-dependent state transitions on (a) Qbert
and (b) Riverraid.
[Figure 39 plots omitted: prediction error vs. time-steps and vs. parameter updates (at time-steps 5, 10, 100), training and test, for the same convolutional transitions as Figure 35. Panels (a) and (b).]
Figure 39: Prediction error for different convolutional action-dependent state transitions on (a)
Seaquest and (b) Space Invaders.
[Figure 40 plots omitted: prediction error vs. time-steps and vs. parameter updates (at time-steps 5, 10, 100), for the baseline Wh ht−1 ⊗ Wa at−1 and the alternative action-incorporation structures (including As Input and CA). Panels (a) and (b).]
Figure 40: Prediction error (average over 10,000 sequences) for different ways of incorporating the
action on (a) Bowling and (b) Breakout. Parameter updates are in millions.
[Figure 41 plots omitted: prediction error vs. time-steps and vs. parameter updates (at time-steps 5, 10, 100), for the same action-incorporation structures as Figure 40. Panels (a) and (b).]
Figure 41: Prediction error for different ways of incorporating the action on (a) Fishing Derby and
(b) Freeway.
[Figure 42 plots omitted: prediction error vs. time-steps and vs. parameter updates (at time-steps 5, 10, 100), for the same action-incorporation structures as Figure 40. Panels (a) and (b).]
Figure 42: Prediction error for different ways of incorporating the action on (a) Ms Pacman and (b)
Pong.
[Figure 43 plots omitted: prediction error vs. time-steps and vs. parameter updates (at early and late time-steps), for the same action-incorporation structures as Figure 40. Panels (a) and (b).]
Figure 43: Prediction error for different ways of incorporating the action on (a) Qbert and (b)
Riverraid.
[Figure 44 plots omitted: prediction error vs. time-steps and vs. parameter updates (at time-steps 5, 10, 100), for the same action-incorporation structures as Figure 40. Panels (a) and (b).]
Figure 44: Prediction error for different ways of incorporating the action on (a) Seaquest and (b)
Space Invaders.
[Figure 45 plots omitted: prediction error vs. parameter updates at time-steps 10, 25, 50, 100 for Bowling, Freeway, Pong, Breakout, Fishing Derby, Ms Pacman, Qbert, Riverraid, Seaquest, and Space Invaders, comparing action-dependent and action-independent state transitions.]
Figure 45: Prediction error (average over 10,000 sequences) with (continuous lines) action-dependent
and (dashed lines) action-independent state transition. Parameter updates are in millions.
Figure 46: Salient frames extracted from 2000 frames of Freeway generated using our simulator with
actions chosen by a human player.
B.1.4
HUMAN PLAY
In Fig. 46, we show the results of a human playing Freeway for 2000 time-steps (the corresponding
video is available at Freeway-HPlay). The model is able to update the score correctly up to (14,0). At
that point the score starts flashing and changing color as a warning that the game is about to reset. The model is not able to predict the score correctly in this warning phase, due to the bias in the data (DQN always achieves a score above 20 at this point in the game), but the flashing starts at the right time, as does
the resetting of the game.
Figs. 47 and 48 are larger views of the same frames shown in Fig. 7.
B.2
3D CAR RACING
We generated 10 million and one million (180×180) RGB images for training and testing respectively, with an agent trained with the asynchronous advantage actor critic algorithm (Fig. 2 in (Mnih et al., 2016)). The agent could choose among the three actions accelerate straight, accelerate left, and accelerate right, according to an ε-greedy policy, with ε selected at random between 0 and 0.5, independently for each episode. We added a 4th 'do nothing' action when generating actions at random. Smaller values of ε lead to longer episodes (∼1500 frames), while larger values lead to shorter episodes (∼200 frames).
We could use the same number of convolutional layers, filters and kernel sizes as in Atari, with no
padding.
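A minimal sketch of this per-episode ε-greedy data generation; the environment interface and the pretrained agent (agent_act) are placeholders, not the actual setup.

```python
import random

ACTIONS = ['accelerate_straight', 'accelerate_left', 'accelerate_right']
RANDOM_ACTIONS = ACTIONS + ['do_nothing']   # 4th action used only for random picks

def generate_episode(env, agent_act, max_steps=2000):
    """Collect one episode with an exploration rate fixed per episode (hypothetical API)."""
    eps = random.uniform(0.0, 0.5)            # drawn once per episode
    obs = env.reset()
    frames, actions = [], []
    for _ in range(max_steps):
        if random.random() < eps:
            action = random.choice(RANDOM_ACTIONS)
        else:
            action = agent_act(obs)            # pretrained A3C agent, returns one of ACTIONS
        obs, done = env.step(action)           # placeholder environment interface
        frames.append(obs)
        actions.append(action)
        if done:
            break
    return frames, actions
```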
Fig. 49 shows side by side predicted and real frames for up to 200 actions. We found that this quality
of predictions was very common.
When using our model as an interactive simulator, we observed that the car would slightly slow down
when selecting no action, but fail to stop. Since the model had never seen occurrences of the agent completely releasing the accelerator for more than a few consecutive actions, it makes sense that it would fail to deal with this case appropriately.
Figure 47: Salient frames extracted from 500 frames of Pong generated using our simulator with actions chosen by a human player.
B.3
3D MAZES
Unlike Atari and TORCS, we could rely on agents with random policies to generate interesting
sequences. The agent could choose one of five actions: forward, backward, rotate left, rotate right
or do nothing. During an episode, the agent alternated between a random walk for 15 steps, and
spinning on itself for 15 steps (roughly, a complete 360◦ spin). This encourages coherent learning
of the predicted frames after a spin. The random walk used a dithering of 0.7, meaning that new
actions were chosen with a probability of 0.7 at every time-step. The training and test datasets were
made of 7,600 and 1,100 episodes, respectively. All episodes were of length 900 frames, resulting in
6,840,000 and 990,000 (48×48) RGB images for training and testing respectively.
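A minimal sketch of this action-generation scheme, assuming the spin direction in each spinning phase is chosen at random (a detail not specified above):

```python
import random

WALK_ACTIONS = ['forward', 'backward', 'rotate_left', 'rotate_right', 'do_nothing']

def maze_action_sequence(episode_length=900, phase_length=15, dithering=0.7):
    """Alternate 15 random-walk steps (dithering 0.7) with 15 spinning steps."""
    actions = []
    walking = True
    current = random.choice(WALK_ACTIONS)
    while len(actions) < episode_length:
        spin_action = random.choice(['rotate_left', 'rotate_right'])  # direction: assumed
        for _ in range(phase_length):
            if len(actions) >= episode_length:
                break
            if walking:
                if random.random() < dithering:   # resample the action with probability 0.7
                    current = random.choice(WALK_ACTIONS)
                actions.append(current)
            else:
                actions.append(spin_action)
        walking = not walking
    return actions

seq = maze_action_sequence()
print(len(seq))   # 900
```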
We adapted the encoding by having only 3 convolutions with 64 filters of size 6 × 6, stride 2, and
padding 0, 1, and 2. The decoding transformation was adapted accordingly.
B.4
MODEL-BASED EXPLORATION
We observed that increasing the number of Monte-Carlo simulations beyond 100 made little to
no difference, probably because with na possible actions the number of possible Monte-Carlo simulations, na^d, is so large that we quickly get diminishing returns with every new simulation.
Figure 48: Salient frames extracted from 350 frames of Breakout generated using our simulator with
actions taken by a human player.
Significantly increasing the sequence length of actions beyond d = 6 led to a large decrease in
performance. To explain this, we observed that after 6 steps, our average prediction error was less
than half the average prediction error after 30 steps (0.16 and 0.37 respectively). Since the average
minimum and maximum distances did not vary significantly (from 0.23 to 0.36, and from 0.24 to 0.4
respectively), for deep simulations we ended up with more noise than signal in our predictions and
our decisions were no better than random.
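As a point of reference, one plausible implementation of the Monte-Carlo exploration step discussed above is sketched below: sample random action sequences of length d, roll each out in the learned simulator, and keep the sequence whose predicted frame is, on average, farthest from recently visited frames. The simulate function and the distance-based score are placeholders; the exact scoring used in the experiments may differ.

```python
import numpy as np

def choose_action_sequence(simulate, state, visited_frames,
                           n_actions=5, depth=6, n_simulations=100, rng=None):
    """Pick the random action sequence whose predicted frame is farthest (on average)
    from recently visited frames; simulate(state, seq) is a placeholder that returns
    the predicted frame after rolling the sequence out in the simulator."""
    rng = rng or np.random.default_rng()
    best_seq, best_score = None, -np.inf
    for _ in range(n_simulations):
        seq = rng.integers(0, n_actions, size=depth)
        frame = simulate(state, seq)
        dists = [np.mean(np.abs(frame - f)) for f in visited_frames]
        score = np.mean(dists) if dists else 0.0
        if score > best_score:
            best_seq, best_score = seq, score
    return best_seq
```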
Fig. 50 shows some examples of trajectories chosen by our explorer. Note that all these trajectories
are much smoother than for our baseline agent.
C
PREDICTION-INDEPENDENT SIMULATORS
In this section we compare different action-dependent state transitions and prediction lengths T for
the prediction-independent simulator.
More specifically, in Fig. 51 we compare (with T = 15) the state transition
Encoding: zt−1 = C(xt−1 ) up to t − 1 = τ − 1 , and zt−1 = ht−1 from t − 1 = τ ,
Action fusion: vt = Wh ht−1 ⊗ Wa at−1 ,
Gate update: it = σ(Wiv vt + Wiz zt−1 ) , ft = σ(Wf v vt + Wf z zt−1 ) ,
ot = σ(Wov vt + Woz zt−1 ) ,
Cell update: ct = ft ⊗ ct−1 + it ⊗ tanh(Wcv vt + Wcz zt−1 ) ,
Figure 49: Salient frames, predicted (left) and real (right), for TORCS from a 200 time-steps video.
where the vectors ht−1 and vt have dimension 1024 and 2048 respectively and with different matrices
W for the warm-up and the prediction phases (the resulting model has around 40M parameters – we
refer to this structure as ’Base–zt−1 = ht−1 ’ in the figures), with the following alternatives:
• Base–zt−1 = 0: Remove the action-independent transformation of ht−1 , i.e.
zt−1 = C(xt−1 ) up to t − 1 = τ − 1 , and zt−1 = 0 from t − 1 = τ ,
where 0 represents a zero-vector and with different matrices W for the warm-up and the
prediction phases. This model has around 40M parameters.
• ht−1 –izt 2816–zt−1 = 0: Substitute zt−1 with ht−1 in the gate updates and have a separate gating
for the encoded frame, i.e.
zt−1 = C(xt−1 ) up to t − 1 = τ − 1 , and zt−1 = 0 from t − 1 = τ ,
vt = Wh ht−1 ⊗ Wa at−1 ,
it = σ(Wiv vt + Wih ht−1 ) ,
ft = σ(Wf v vt + Wf h ht−1 ) ,
ot = σ(Wov vt + Woh ht−1 ) ,
ct = ft ⊗ ct−1 + it ⊗ tanh(Wcv vt + Wcs ht−1 ) + izt ⊗ tanh(zt−1 ) ,
with shared W matrices for the warm-up and the prediction phases, without RReLU after
the last convolution of the encoding, and with vectors ht−1 and vt of dimensionality 2816.
This model has around 95M parameters.
As we can see, the ’Base–zt−1 = 0’ state transition performs quite poorly for long-term prediction
compared to the other transitions. With this transition, the prediction-independent simulator performs
much worse than the prediction-dependent simulator with the baseline state transition (Appendix
B.1.1). The best performance is obtained with the ’ht−1 –izt 2816–zt−1 = 0’ structure, which however
has a large number of parameters.
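To make the distinction between these variants explicit, the following schematic sketch shows what is fed in place of the encoded frame once the warm-up phase of length τ ends; the encoder C, the transition step and the initial state are placeholders, and the dimensionality bookkeeping (the W matrices are sized differently across phases and variants, as noted above) is omitted.

```python
def rollout(warmup_frames, actions, C, step, h0, c0, use_h_as_z=True):
    """Prediction-independent rollout: real frames are encoded during warm-up;
    afterwards z_{t-1} is either h_{t-1} ('Base-z_{t-1}=h_{t-1}') or zero ('z_{t-1}=0').
    C, step, h0, c0 are placeholders for the encoder, transition and initial state."""
    h, c = h0, c0
    tau = len(warmup_frames)
    for t, a in enumerate(actions, start=1):
        if t <= tau:                 # warm-up phase: encode the observed frame
            z = C(warmup_frames[t - 1])
        elif use_h_as_z:             # prediction phase, 'Base-z_{t-1} = h_{t-1}'
            z = h
        else:                        # prediction phase, 'z_{t-1} = 0'
            z = 0 * h
        h, c = step(h, c, z, a)
    return h, c
```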
Figure 50: Examples of paths followed by random baseline (left), and explorers based on our
simulator (right).
In Figs. 52 and 53, we show the effect of using different prediction lengths T on the structure ’Base–
zt−1 = ht−1 ’. As we can see, using longer prediction lengths dramatically improves long-term prediction.
Overall, the best performance is obtained using two subsequences of length T = 15.
[Figure 51 plots omitted: prediction error vs. time-steps and vs. number of frames (at time-steps 5, 10, 100), per game, for the transitions Base–zt−1 = ht−1, Base–zt−1 = 0, and ht−1–izt 2816–zt−1 = 0. Panels (a) and (b).]
Figure 51: Prediction error (average over 10,000 sequences) for the prediction-independent simulator
with different action-dependent state transitions for (a) Bowling, Freeway, Pong, and (b) Breakout,
Fishing Derby, Ms Pacman, Qbert, Seaquest, Space Invaders. Number of frames is in millions and
excludes warm-up frames.
[Figure 52 plots omitted: prediction error vs. time-steps and vs. number of frames (at time-steps 5, 10, 100), per game, for prediction lengths T = 15, 20, 25. Panels (a) and (b).]
Figure 52: Prediction error for the prediction-independent simulator with different prediction lengths
T ≤ 25 for (a) Bowling, Freeway, Pong, and (b) Breakout, Fishing Derby, Ms Pacman, Qbert,
Seaquest, Space Invaders.
[Figure 53 plots omitted: prediction error vs. time-steps and vs. number of frames (at time-steps 5, 10, 100), per game, for BPTT(15, 1) and BPTT(15, 2). Panels (a) and (b).]
Figure 53: Prediction error for the prediction-independent simulator with BPTT(15, 1) and BPTT(15,
2) for (a) Bowling, Freeway, Pong, and (b) Breakout, Fishing Derby, Ms Pacman, Qbert, Seaquest,
Space Invaders.
| 2 |
Further insights into the damping-induced self-recovery
phenomenon
Tejas Kotwal∗, Roshail Gerard†, and Ravi Banavar‡
∗Department of Mathematics, †Department of Mechanical Engineering, ‡Systems and Control Engineering, Indian Institute of Technology Bombay, Mumbai, India
arXiv:1709.05596v4 [] 11 Jan 2018
January 12, 2018
1 Abstract
In a series of papers [1, 2, 3, 4], D. E. Chang, et al., proved and experimentally demonstrated a phenomenon they termed “damping-induced self-recovery”. However, these papers left a few questions concerning the observed phenomenon unanswered: in particular, the effect of the intervening lubricant fluid and its viscosity on the recovery, the abrupt change in behaviour with the introduction of damping, a description of the energy dynamics, and the curious occurrence of overshoots and oscillations and its dependence on the control law. In this paper we attempt to answer these questions through theory. In particular, we derive an expression for the infinite-dimensional fluid-stool-wheel system that approximates its dynamics by that of the better-understood finite-dimensional case.
2 Introduction
The damping-induced self-recovery phenomenon refers to the fundamental property of underactuated
mechanical systems: if an unactuated cyclic variable is subject to a viscous damping-like force and the
system starts from rest, then the cyclic variable will always recover to its initial state as the actuated
variables are brought to rest. A popular illustration exhibiting self-recovery is when a person sits on
a rotating stool with damping, holding a wheel whose axis is parallel to the stool’s axis. The wheel
can be spun and stopped at will by the person (Refer to [5]). Initially, the system begins from rest;
when the person starts spinning the wheel (say anticlockwise), the stool begins moving in the expected
direction (i.e., clockwise). Then the wheel is brought to a halt; the stool then begins a recovery by
going back as many revolutions in the reverse direction (i.e., anticlockwise) as traversed before. This
phenomenon defies conventional intuition based on well-known conservation laws. Andy Ruina was the
first to report this phenomenon in a talk, where he demonstrated a couple of experiments on video
[6]. Independently, Chang et al., [1, 2] showed that in a mechanical system with an unactuated cyclic
variable and an associated viscous damping force, a new momentum-like quantity is conserved. When
the other actuated variables are brought to rest, the conservation of this momentum leads to asymptotic
recovery of the cyclic variable to its initial position. Boundedness is another associated phenomenon, in
which the unactuated cyclic variable reaches a saturation eventually, when the velocity corresponding to
the actuated variable is kept constant; this occurs due to the presence of damping. In the experiment
explained above, this manifests as the angle of rotation of the stool reaching an upper limit, when the
wheel is spinning at a constant speed.
Chang et al. generalize this theory to an infinite-dimensional system in which the interaction of an
intermediate fluid is considered [3, 4]; they show that the fluid layers also display self-recovery, which
is confirmed via experiments as well [7]. Such a generalization of the model is considered because of
the interaction of the fluid in the bearing with the recovery phenomenon of the stool, in the experiment
explained above. In this work, we make the following points:
• We show that the dynamics of the stool and the wheel in the infinite-dimensional fluid system can be approximated by that of the finite-dimensional case by finding an effective damping constant that takes into account the effect of the viscous fluid on the system.
• We analyse the finite-dimensional system from a dynamical systems point of view, and show that
a bifurcation occurs when the damping constant switches from zero to a positive value. We also
derive an expression for the angle at which boundedness occurs.
• In addition to the recovery phenomenon described previously, further complex behaviour is observed
in the experiments reported in [5, 7]. In particular, the unactuated variable not only approaches
its initial state during recovery, but also overshoots and then oscillates about this initial position,
eventually reaching it asymptotically. This oscillation phenomenon has not been looked into in
previous works, and is one of the points that we address as well.
• In Chang et al. [5], in the experiment described, the oscillations are of significant amplitude, and
this would prompt one to assume that some sort of mechanical ‘spring-like’ energy is being stored,
as the stool appears to start moving after the entire system has come to a halt. The question of
energy has been touched upon in this work, and we present energy balance equations for the given
mechanical system.
The paper unfolds as follows. Initially we present mathematical models for the stool-wheel experiment
- the first is a finite-dimensional one, and the second one incorporates the intervening fluid (either in
the bearing or in a tank) using the Navier-Stokes equation for a Newtonian incompressible fluid. This
is followed by a section that presents an intuitive interpretation of recovery highlighting three distinct
types of behaviour. Then follows a theoretical section that presents a technique to reduce the infinite-dimensional fluid effect to an effective damping constant and hence model the overall system in finite dimensions. This part is followed up by a result on boundedness and the occurrence of a bifurcation in the system dynamics. In the appendix, the derived results are used in conjunction with numerical experiments to validate the expression for the effective damping constant. We then examine the case of oscillations and overshoots, and present plausible explanations of why these occur, and possible sources
of future investigation.
3 Mathematical models
Finite-dimensional model: We first analyze a simplified, idealized version of the person with a wheel
in hand, sitting on a rotatable stool whose motion is opposed by damping, which for the purpose of
analysis is assumed to be linear viscous damping. This is a specific example of the model that Chang, et
al. studied [1, 2]. We assume two flat disks, one for the wheel and one for the stool-person mass as shown
in Fig. 1. The stool contains an internal motor that actuates the wheel, while the motor-rod-stool
setup rotates as one piece (henceforth just called the stool). There is linear viscous damping present in
the rotational motion of the stool, with damping coefficient k. The torque imparted on the wheel by the
motor is denoted by u(t).
Figure 1: A schematic diagram of the wheel-stool model.
The inertia matrix of the system is given by
(mij) = \begin{pmatrix} m11 & m12 \\ m21 & m22 \end{pmatrix} = \begin{pmatrix} Iw & Iw \\ Iw & Iw + Is \end{pmatrix} ,
(1)
where Iw and Is are the moments of inertia of the wheel and stool respectively. The kinetic energy
of the described system is
K.E.(t) = (1/2) Iw (θ̇w + φ̇s)² + (1/2) Is φ̇s² ,
(2)
where θw denotes the angle rotated by the wheel relative to the stool, and φs denotes the angle
rotated by the stool relative to the ground frame. Since there is no external potential in our system, the Lagrangian comprises only the total kinetic energy. The Euler-Lagrange equations for the system are
given by
Iw φ̈s + Iw θ̈w = u(t)
(3a)
(Iw + Is )φ̈s + Iw θ̈w = −k φ̇s
(3b)
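To illustrate Eqs. (3a)-(3b) numerically, the following is a minimal Python sketch in which the wheel acceleration θ̈w is prescribed directly as a spin-up/brake profile, so that the stool obeys φ̈s = −(k φ̇s + Iw θ̈w)/(Iw + Is), which follows from Eq. (3b); the torque u(t) needed to realize this profile is then given by Eq. (3a). The inertia and damping values are arbitrary illustrative numbers.

```python
import numpy as np
from scipy.integrate import solve_ivp

I_w, I_s, k = 0.2, 1.0, 0.5              # illustrative inertias and damping coefficient

def wheel_acc(t):
    """Prescribed wheel acceleration: spin up during 0-1 s, brake during 9-10 s."""
    if 0.0 <= t < 1.0:
        return 10.0
    if 9.0 <= t < 10.0:
        return -10.0
    return 0.0

def rhs(t, y):
    phi_s, dphi_s = y
    ddphi_s = -(k * dphi_s + I_w * wheel_acc(t)) / (I_w + I_s)   # rearranged Eq. (3b)
    return [dphi_s, ddphi_s]

sol = solve_ivp(rhs, (0.0, 60.0), [0.0, 0.0], max_step=0.01)
print("final stool angle:", sol.y[0, -1])   # tends to 0: damping-induced self-recovery
```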
Although this model captures the damping-induced boundedness and recovery phenomena, further
behaviour that cannot be explained with this model includes the overshoot and oscillations, which were
observed in experiments [5, 7]. It was suggested that the lubricating fluid inside the bearing of the
rotatable stool could be one of the reasons behind this additional behaviour. As a result, a generalized
model that involves the dynamics of the fluid interacting with the stool-wheel setup is analyzed [3, 4].
Infinite-dimensional fluid model: The model consists of two concentric infinitely long cylinders
with inner radius Ri and outer radius Ro. The outer cylinder is fixed, while the inner cylinder is free to rotate (this is the stool in our case). A motor is installed in the inner cylinder (as described in the previous case, i.e., φs), to drive the wheel (i.e., θw) attached to it. The annular region between the two
cylinders is filled with an incompressible viscous fluid. Due to symmetry in the z direction, we may
regard this as a 2D system in a horizontal plane. The moments of inertia should be considered as per
unit depth, because of the infinite length in the z direction. Due to rotational symmetry, the fluid only
flows coaxially, i.e., there is no fluid flow in the radial direction. Let v(r, t) denote the tangential velocity of the fluid at radius r and time t, where Ri ≤ r ≤ Ro. The subscripts t and r denote partial derivatives
with respect to time and radius respectively. As described before, u(t) is the driving torque on the wheel
imparted by the motor. The corresponding equations are
Iw φ̈s + Iw θ̈w = u(t)
(4a)
(Iw + Is)φ̈s + Iw θ̈w = 2πρνRi (Ri vr(Ri, t) − v(Ri, t))
(4b)
vt = ν (vrr + vr/r − v/r²)
(4c)
where ν is the kinematic viscosity and ρ is the density of the fluid. The right hand side of Eq. (4b)
is the torque on the inner cylinder due to the stress exerted by the surrounding fluid [8]. Equation (4c) is the Navier-Stokes equation for an incompressible viscous fluid in radial coordinates. The system (4) is
an infinite-dimensional fluid system, compared to the previous finite-dimensional system (3). The initial
and boundary conditions are given by
φs (0) = φ̇s (0) = θw (0) = θ̇w (0) = 0
(5a)
v(r, 0) = 0
(5b)
v(Ri , t) = Ri φ̇s (t)
(5c)
v(Ro , t) = 0
(5d)
where Eqs. (5c) and (5d) are the no-slip boundary conditions for the fluid. The given initial conditions
imply that the entire system begins from rest. In order to control the wheel independently of the effects of
inertia of the stool and damping of the fluid, we employ a method known as partial feedback linearization.
This is done by partially cancelling the nonlinearities in the dynamics, i.e. the wheel equations are
linearized by introducing a new input τ (t) and redefining the input u(t) as
$$u(t) = \frac{I_w}{I_w + I_s}\Bigl[I_s\,\tau(t) + 2\pi\rho\nu R_i\bigl(R_i v_r(R_i,t) - v(R_i,t)\bigr)\Bigr] \qquad (6)$$
where τ (t) is precisely equal to the acceleration of the wheel, i.e. θ̈w = τ (t). We assume a PD control
law for the new control variable given as
$$\tau(t) = \ddot\theta_w^{\,d}(t) + c_1\bigl(\dot\theta_w^{\,d}(t) - \dot\theta_w\bigr) + c_0\bigl(\theta_w^{\,d}(t) - \theta_w\bigr) \qquad (7)$$
where $\theta_w^{\,d}(t)$ is the desired trajectory of the wheel, and c0 and c1 are the proportional and differential
gains respectively. For the standard demonstration of recovery and/or boundedness, we require the wheel
to be driven from rest to constant velocity, and then abruptly braked to a stop. Such a desired trajectory
of the wheel is given by a ramp function, $\theta_w^{\,d}(t) = \tfrac{1}{2}\,\dot\theta_{\mathrm{steady}}\bigl(|t| - |t - t_{\mathrm{stop}}| + t_{\mathrm{stop}}\bigr)$, where $\dot\theta_{\mathrm{steady}}$ is the desired constant velocity of the wheel, and $t_{\mathrm{stop}}$ is the time at which the wheel is instantaneously brought to rest (refer to Fig. 2). By appropriately tuning the values of c0 and c1, we can mimic the desired trajectory. This tuning is important in accounting for the presence of oscillations, or lack thereof, as we shall see later.

Figure 2: An illustration of the desired trajectory $\theta_w^{\,d}(t)$ in the control law of the wheel (7). [Plot: the desired wheel trajectory (rad) versus time t (s); a ramp of slope $\dot\theta_{\mathrm{steady}}$ that levels off at $t = t_{\mathrm{stop}}$.]
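For completeness, here is a brief sketch (ours; the intermediate steps are not spelled out above) of why the choice (6) achieves $\ddot\theta_w = \tau(t)$. Write $T_f = 2\pi\rho\nu R_i\bigl(R_i v_r(R_i,t) - v(R_i,t)\bigr)$ for the fluid torque on the inner cylinder. Subtracting (4a) from (4b) gives $I_s\ddot\phi_s = T_f - u$, and substituting $\ddot\phi_s = (T_f - u)/I_s$ back into (4a) yields
$$\ddot\theta_w = \frac{I_w + I_s}{I_w I_s}\,u - \frac{T_f}{I_s},$$
so inserting the expression (6) for $u$ makes the right hand side collapse to $\tau(t)$.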
4 An intuitive interpretation of recovery
We present a qualitative interpretation of the boundedness and recovery phenomenon using the momentum equation of the finite dimensional stool-wheel case. A brief explanation of the recovery phenomenon
was also presented by Andy Ruina [6] which we summarize: Change in angular momentum is equal to
the net external torque, but since it is only due to linear viscous damping, we have L̇ = −cϕ̇ (where L is
the angular momentum, c is the damping constant and ϕ is the angle). If the system’s initial and final
state is the rest state, then ∆L = 0 =⇒ ∆ϕ = 0. Thus the net change in angle has to be zero, implying
that recovery must occur. We delve a little deeper into the phenomenon and attempt to provide an
intuitive interpretation for the reversal of direction of the stool.
Consider the case where the wheel follows the desired trajectory perfectly (i.e., the ramp function).
This means that at t = 0, an impulsive acceleration gets the wheel spinning instantaneously (to a constant
speed), and at t = tstop , an impulsive deceleration brings the wheel to a halt instantaneously (refer to
Fig. 3). Consider the damping-induced momentum equation, as derived by Chang, et al. [1, 2]
$$I_w(\dot\theta_w + \dot\phi_s) + I_s\dot\phi_s + k\phi_s = 0 \qquad (8)$$

Figure 3: Acceleration vs time for the wheel. The first peak corresponds to the initial impulse to get the wheel spinning to a constant velocity; the second negative impulse is to brake the wheel to a halt. [Plot: acceleration of the wheel $\ddot\theta_w$ (rad/s²) versus time t (s).]
At t = 0, there is a jump in θ̇w , while φs remains constant (i.e., φs = 0) (refer to Figs. 8 and 9). The
momentum equation (8) then simplifies to
Iw (θ̇w (0) + φ̇s (0)) + Is (φ̇s (0)) = 0
(9)
which is the usual momentum conservation. Thus, at t = 0, the velocity of the stool is governed
by the conservation of standard angular momentum, and the velocity of the wheel. Let this velocity be
φ̇s (0) = φ̇fs .
At t = tstop , there is an instantaneous jump down to zero for θ̇w , while φs remains constant. This
time, the momentum equation (8) simplifies to
Iw (θ̇w (tstop ) + φ̇s (tstop )) + Is (φ̇s (tstop )) = C
(10)
where C is a constant (C = −kφs (tstop )). Once again, this equation may be interpreted as the
standard angular momentum conservation. We examine three distinct cases:
• Case 1: (No damping) The speed of the stool at t = tstop − ε is the same as what it was at t = 0,
(i.e. φ̇fs ), implying that at t = tstop , the stool comes to rest instantaneously (i.e., φ̇s (tstop ) = 0).
Recall, that this is the standard case, where the usual momentum conservation law holds. Refer
to Figs. 7 and 4.
Figure 4: Case 1. k = 0, tstop = 5, vw = 2. This case corresponds to the usual momentum conservation. [Plot: position of stool φs (rad) versus time t (s).]
• Case 2: (Damping present but damping induced boundedness not yet reached) During 0 < t <
tstop , the damping force keeps decreasing the speed of the stool. However, the momentum equation
(10) is still the same at t = tstop . Thus, the change in momentum of the stool would be the same (as
in Case 1). But since |φ̇s (tstop − ε)| < |φ̇fs |, the final momentum of the stool ends up overshooting
zero at t = tstop , resulting in the change in direction of the stool. Refer to Figs. 7 and 5.
• Case 3: (Damping present and damping-induced boundedness has been attained) This case can
be viewed as a special instance of Case 2, where for 0 < t < tstop , the speed of the stool decreases
all the way to zero. The rest of the explanation follows as in Case 2. Refer to Figs. 7 and 6.
Figure 5: Case 2. k = 1, tstop = 1, vw = 2. In this case, damping-induced boundedness is not yet reached, but the recovery phenomenon can be seen. [Plot: position of stool φs (rad) versus time t (s).]
Figure 6: Case 3. k = 1, tstop = 5, vw = 2. In this case, both the damping-induced boundedness and the recovery phenomenon can be seen. [Plot: position of stool φs (rad) versus time t (s).]
Figure 7: Comparison of the three cases before and after braking. Blue dots: velocity of the stool just before braking; red dots: velocity of the stool immediately after braking. For Cases 2 and 3, the (magnitude of the) velocity of the stool before braking decreases due to damping; hence the momentum transferred by the wheel overshoots the stool velocity to something positive. This explains why the stool changes direction. [Plot: change in velocity of the stool before and after braking, shown for Cases 1-3.]
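The three cases above are easy to reproduce numerically. The following is a minimal MATLAB sketch (ours, not taken from the references; the parameter values are illustrative) that integrates the momentum equation (8), rewritten as a first-order ODE for φs, for an ideal ramp wheel trajectory, i.e. θ̇w(t) = vw for t < tstop and 0 afterwards; plotting φs against t for k = 0, 0.1 and 1 gives the qualitative behaviour of Cases 1-3 (cf. Figs. 4-6).

% Minimal sketch (ours): stool response to an ideal ramp wheel trajectory,
% using Eq. (8) rewritten as a first-order ODE for phi_s.
Iw = 6e-3; Is = 1.96; vw = 2; tstop = 5;           % illustrative values
wheel_speed = @(t) vw*(t < tstop);                 % theta_dot_w(t): ramp, then brake
for k = [0 0.1 1]
    f = @(t, phi) (-k*phi - Iw*wheel_speed(t))/(Iw + Is);
    [t, phi] = ode45(f, [0 10], 0);
    plot(t, phi); hold on
end
xlabel('Time t (s)'); ylabel('Position of stool \phi_s (rad)');
legend('k = 0', 'k = 0.1', 'k = 1')                % cf. Cases 1-3 and Figs. 4-6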
5 Effective damping constant
In this section, we demonstrate that the dynamics of the infinite-dimensional fluid system can be approximated by that of a finite-dimensional one.
Claim 5.1. Let the desired trajectory of the wheel be the ramp function, given by $\theta_w^{\,d}(t) = \tfrac{1}{2}\,\dot\theta_{\mathrm{steady}}\bigl(|t| - |t - t_{\mathrm{stop}}| + t_{\mathrm{stop}}\bigr)$. Then the solution of Eq. (4c) satisfies
$$\lim_{t\to\infty} v(r,t) = \lim_{t\to\infty} R_i\,\dot\phi_s(t)\,\frac{R_o - r}{R_o - R_i}.$$
Proof: We consider an analytical solution to the PDE (4c), obtained by the method of separation
of variables. Since one of the boundary conditions is non-homogeneous (and in fact time dependent), we
perform a change of variables in order to make it homogeneous [9]. The change of variables is given as
$$v(r,t) = w(r,t) + R_i\,\dot\phi_s(t)\,\frac{R_o - r}{R_o - R_i} \qquad (11)$$
where w(r, t) is the transformed variable. The correction term is taken as a linear interpolation
between the two boundary conditions for simplicity. The transformed PDE is given as
$$w_t - \nu\left(w_{rr} + \frac{w_r}{r} - \frac{w}{r^2}\right) = F(r,t) \qquad (12)$$
where F (r, t) is the driving force of the PDE resulting from the change of variables given by
$$F(r,t) = -R_i\,\ddot\phi_s(t)\,\frac{R_o - r}{R_o - R_i} - \nu R_i\,\dot\phi_s(t)\,\frac{R_o}{r^2 (R_o - R_i)} \qquad (13)$$
The transformed initial and boundary conditions are given by
w(r, 0) = 0
(14a)
w(Ri , t) = 0
(14b)
w(Ro , t) = 0
(14c)
We assume the solution w(r, t) = W (r)T (t), and obtain the homogeneous solution of the spatial ODE
by analyzing the corresponding eigenvalue problem. The eigen solution obtained is given by
$$W(r) = a\,J_1\!\left(\sqrt{\tfrac{\lambda}{\nu}}\,r\right) + b\,Y_1\!\left(\sqrt{\tfrac{\lambda}{\nu}}\,r\right) \qquad (15)$$
where J1 and Y1 are the Bessel functions of the first and second kind respectively, λ denotes the eigenvalue, and a and b are arbitrary constants determined by the boundary conditions. Upon plugging in the boundary conditions and solving for λ, we find that the eigenvalues are positive due to the fact that the roots of the Bessel functions are real [10]. Thus we have the following solution for v(r, t)
$$v(r,t) = \underbrace{\sum_{n=1}^{\infty} \left[ J_1\!\left(\sqrt{\tfrac{\lambda_n}{\nu}}\,r\right) + k_n\,Y_1\!\left(\sqrt{\tfrac{\lambda_n}{\nu}}\,r\right) \right] T_n(t)}_{\text{transient term}} + R_i\,\dot\phi_s(t)\,\frac{R_o - r}{R_o - R_i} \qquad (16)$$
where kn and λn are constants obtained by solving the boundary conditions in the eigenvalue problem,
and Tn (t) is the time component of the solution, which is obtained by solving the corresponding initial
value problem. The series given in the right hand side of Eq. (16) is considered as a transient term
for the PDE (4c), and we show that it is insignificant as t → ∞. Consider the time component of the
solution (16)
$$T_n(t) = \int_0^t e^{-\lambda_n (t-\tau)}\,\frac{\int_{R_i}^{R_o} F(r,\tau)\,W_n(r)\,dr}{\int_{R_i}^{R_o} W_n(r)^2\,dr}\,d\tau = e^{-\lambda_n t}\int_0^t e^{\lambda_n \tau}\bigl(m\,\ddot\phi_s(\tau) + l\,\dot\phi_s(\tau)\bigr)\,d\tau$$
where Wn (r) is the nth term of the Fourier series, and m and l are constants obtained after evaluating
the integrals in the spatial variable. For our case, plug in θ̇w(t) equal to the desired ramp trajectory ($\theta_w^{\,d}(t) = \tfrac{1}{2}\,\dot\theta_{\mathrm{steady}}\bigl(|t| - |t - t_{\mathrm{stop}}| + t_{\mathrm{stop}}\bigr)$) in the momentum equation (8). Upon solving for φ̇s(t), we see that it is exponentially decaying, and therefore Tn(t) → 0 as t → ∞ for all n.¹ Thus, we have the result.
In the limit where the transient term is insignificant, the solution (16) is plugged in Eq. (4b) and
simplified as follows
$$(I_w + I_s)\ddot\phi_s + I_w\ddot\theta_w = -\,\frac{2\pi\rho\nu R_o R_i^2}{R_o - R_i}\,\dot\phi_s \qquad (17)$$
We would like to emphasize here that we have found an effective damping constant kef f for the
infinite-dimensional fluid system (4) that approximates its dynamics to that of a finite dimension case
(3). The effective damping constant is given by
$$k_{\mathrm{eff}} = \frac{2\pi\rho\nu R_o R_i^2}{R_o - R_i} \qquad (18)$$
We revisit this effective damping constant and its validity in the appendix, after we introduce a couple
of expressions from the boundedness and bifurcation section.
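As a quick illustration (ours), evaluating (18) with the fluid constants of Table 3 and the bearing geometry Ri = 13.5 cm, Ro = 27 cm (the last row of the first block of Table 2) gives a value of the order of 10^-4:

% Illustrative evaluation (ours) of the effective damping constant (18),
% using the fluid constants of Table 3; the value is per unit depth.
rho = 1.0147e3;  nu = 1.17e-6;          % density (kg/m^3), kinematic viscosity (m^2/s)
Ri  = 0.135;     Ro  = 0.27;            % inner and outer radii (m)
keff = 2*pi*rho*nu*Ro*Ri^2/(Ro - Ri)    % approximately 2.7e-4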
6 Boundedness and bifurcation
We now revisit the finite dimensional case (3), and rewrite the corresponding momentum equation as a
first order system. The damping-induced momentum equation (8) is given by
(Iw + Is )φ̇s + Iw θ̇w (t) + kφs = 0
(19)
1 In fact, the result holds for any wheel trajectory that grows polynomially for finite time before eventually coming to
rest.
where θ̇w (t) is written as a function of time, since it is an external driving force for the stool dynamics.
(Using partial feedback linearization, the wheel can be controlled independently of the stool).
Claim 6.1. Suppose k > 0. Then for a fixed speed of the wheel, say θ̇w (t) = vw = constant, we have the
following equality
$$\lim_{t\to\infty}\phi_s(t) = \phi_s^* = \frac{-I_w v_w}{k}.$$
Proof: Rewriting Eq. (8) as a first order system in φs,
$$\dot\phi_s = \frac{-k}{I_w + I_s}\,\phi_s + \frac{-I_w}{I_w + I_s}\,\dot\theta_w(t) = -\alpha\phi_s - \beta\,\dot\theta_w(t) = f(\phi_s, t) \qquad (20)$$
where α, β are constants. Since Eq. (20) is a nonautonomous system, we rewrite it in a higher
dimensional autonomous form as
φ̇s = −αφs − β θ̇w (t)
(21a)
ṫ = 1
(21b)
In the differential equation for the stool, for a fixed speed of the wheel, say θ̇w (t) = θ̇steady = vw =
constant, the solution is
$$\phi_s(t) = -\beta \int_0^t e^{-\alpha(t-s)}\,v_w\,ds = \frac{-\beta v_w}{\alpha}\bigl[1 - e^{-\alpha t}\bigr] \qquad (22)$$
and as t → ∞, we have
$$\phi_s^* = \lim_{t\to\infty}\phi_s(t) = \frac{-\beta v_w}{\alpha} = \frac{-I_w v_w}{k}.$$
The above equality corresponds to the damping-induced boundedness phenomenon, and is especially
relevant in Figs. 8 and 9. Now we get to the dynamical systems aspect of our discussion.
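Before doing so, we note that the bound of Claim 6.1 is easy to check numerically; the following is a minimal sketch (ours, with illustrative parameter values) that integrates Eq. (20) for a constant wheel speed and compares the long-time value of φs with −Iw vw/k:

% Minimal numerical check (ours) of the bound in Claim 6.1.
Iw = 6e-3; Is = 1.96; k = 1; vw = 2;        % illustrative parameters
alpha = k/(Iw + Is); beta = Iw/(Iw + Is);
f = @(t, phi) -alpha*phi - beta*vw;         % Eq. (20) with theta_dot_w = vw
[~, phi] = ode45(f, [0 50], 0);
[phi(end), -Iw*vw/k]                        % both approximately -0.012 rad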
For the purpose of this discussion we present a few definitions from standard texts on dynamical
systems. A fixed point for a system of differential equations Ẋ = F (X) is defined as the set of points
that are solutions to the equation F (X) = 0. A fixed point for system (21) does not exist since ṫ ≠ 0 for
any state (φs , t). We will consider (φ∗s , t∗ ) to be a fixed point if f (φ∗s , t∗ ) = 0. A bifurcation is defined as
a change in the qualitative structure of the flow as the parameters are varied. In particular, fixed points
can be created or destroyed, or their stability can change [11]. In this particular case, the drastic change
in behaviour occurs at k = 0.
Claim 6.2. Let the desired trajectory of the wheel be the ramp function, i.e., $\theta_w^{\,d}(t) = \tfrac{1}{2}\,\dot\theta_{\mathrm{steady}}\bigl(|t| - |t - t_{\mathrm{stop}}| + t_{\mathrm{stop}}\bigr)$. Then
• If k = 0, the stool dynamics is neutrally stable
• If k > 0, the stool dynamics is asymptotically stable.
Thus, a bifurcation occurs when k is varied from 0 to a positive number.
Figure 8: Phase portrait of the stool dynamics with varying values of the damping coefficient k, and steady state wheel velocity θ̇steady = 2. Green: k = 1; red: k = 0.1; blue: k = 0. Green: damping-induced boundedness attained; red: damping-induced boundedness is not yet reached; blue: neutral stability, no recovery phenomenon. The final equilibrium points for the k = 0 case and the k > 0 cases are drastically different. [Plot: velocity of stool φ̇s (rad/s) versus position of stool φs (rad).]
Proof: k = 0: From eqn. (21),
k = 0 ⇒ φ̇s = −β θ̇w (t)
This implies that the stool dynamics has a fixed point only when the wheel is brought to rest; the stool
is neutrally stable at any constant value φ∗s .
k > 0: The fixed point φ∗s for the stool dynamics is governed by
$$\phi_s^* = \frac{-I_w}{k}\,\dot\theta_w(t^*) \qquad (23)$$
For a fixed point to occur, there are two possibilities:
The wheel is at a constant speed vw ⇒ φ∗s = −Iw vw /k
or
The wheel is at rest ⇒ φ∗s = 0
Clearly, if the wheel is braked (zero speed), the stool must go back to its initial state (φ∗s = 0) governed
by
φ̇s = −αφs
(24)
This is the damping-induced self recovery equilibrium state. As seen from the above analysis, the system
undergoes a change in its dynamical system behaviour, when k is switched from 0 to a positive number.
Eq. (23), rewritten as
φ∗s k = −Iw θ̇w (t∗ )
(25)
is the equation of a hyperbola in the variables φ∗s and k, and as k → 0, φ∗s → ∞. For k > 0 (no matter
how small it is), the recovery phenomenon occurs (implying that the stool must return to its initial
position regardless of how many rotations it has completed). But for k = 0, the stool is neutrally stable,
implying a change in the stability behaviour. (Note that as φs is 2π−periodic, one should consider
Figure 9: Phase portrait of the stool dynamics with varying values of the steady state wheel velocity θ̇steady. Innermost to outermost loop: θ̇steady = 1, 2, 3 (k = 1 for all cases). [Plot: velocity of stool φ̇s (rad/s) versus position of stool φs (rad).]
φs mod 2π to obtain the actual position of the stool. However, not doing so gives us the number of rotations the stool completes. For example, if φ∗s = 8π, then the damping-induced bound is attained after four rotations.)
7 Oscillations and energy description
Oscillations of the unactuated variable have been reported in a few experiments on the damping induced
self-recovery phenomenon [4, 5, 6, 7]. In our opinion, the cause of these oscillations has not been
adequately identified in previous works. We believe that a seemingly trivial, but important source of
these oscillations, is the nature of the control law used on the wheel. It is observed that if oscillations
are induced in the wheel via the control law, then the stool also mimics these oscillations, with a slight
time lag due to the damping. This type of oscillation can be produced in both the finite-dimensional and the infinite-dimensional systems. Refer to Figs. 10 and 11 for an example via simulations. We
would like to emphasize that oscillations of this kind are a direct consequence of the actuation of the
wheel. It is likely that the oscillations in [4, 7] are of this kind.
However, the oscillations in Ref. [5] occur even though the wheel is stationary with respect to the
stool; in fact, at the extreme positions of the oscillation, it appears as though the entire system (stool
and wheel) begins moving again from a complete state of rest. That is, it appears that there is some
kind of mechanical ‘spring-like’ energy being stored at these extreme positions. These oscillations are
clearly not of the type mentioned above, as they are not a direct result of the actuation of the wheel.
Additionally, the observed oscillations in the stool are of high amplitude whereas those in the wheel, if
any, appear to be of small amplitude. This is contradictory to what is observed in the previous type of
oscillations (seen in Figs. 10 and 11), as the inertia of the stool-wheel system is greater than the inertia
of the wheel only. A simple order of magnitude analysis shows that a more complex model is probably
Figure 10: Trajectory of the wheel (solid blue line) with control parameters c0 = 1 and c1 = 3. The red dashed line denotes the desired trajectory, and the overshoot of the wheel may be seen at t = 2 s. [Plot: position of wheel θw (rad) versus time t (s).]
Figure 11: Trajectory of the stool (solid blue line) under the influence of the wheel trajectory shown in Fig. 10. The black dashed line denotes the zero position of the stool, and the overshoot of the stool may be seen after it recovers (note the slight delay when compared to the overshoot of the wheel; this is due to the time constant of the system). [Plot: position of stool φs (rad) versus time t (s).]
required to account for this phenomenon. In order to demonstrate this, we first consider the energy
dynamics of the system.
Energy description for the finite-dimensional system: In order to derive the energy equations
of the finite dimensional system, (3b) is multiplied by φ̇s, and (3a) by θ̇w, to yield
(Iw + Is )φ̈s φ̇s + Iw θ̈w φ̇s = −k φ̇s φ̇s
(26a)
Iw φ̈s θ̇w + Iw θ̈w θ̇w = uθ̇w
(26b)
Equations (26a) and (26b) are integrated, and then summed up, resulting in the following energy
balance (in the ground frame)
$$\int_0^t u\,\dot\theta_w\,dt = \mathrm{K.E.}(t) + \int_0^t k\,\dot\phi_s^2\,dt \qquad (27)$$
where K.E.(t) is the total kinetic energy of the system. The second term on the right hand side of (27) represents the cumulative energy lost due to damping up to time t. The energy lost in damping is precisely equal to the work done by the damping force, that is
$$\mathrm{L.E.}(t) = \int_0^{\phi_s(t)} k\,\dot\phi_s\,d\phi_s = \int_0^t k\,\dot\phi_s^2\,dt. \qquad (28)$$
The term on the left hand side of Eq. (27) represents the total energy pumped into the system by
the motor, up to time t. The power input into the system by the motor is the sum of power imparted to
the wheel by the actuation force u(t), and the power imparted to the stool by the reaction force. The
cumulative input energy (denoted by I.E.(t)) is given by
$$\mathrm{I.E.}(t) = \int_0^t u\,(\dot\theta_w + \dot\phi_s)\,dt + \int_0^t (-u)\,\dot\phi_s\,dt = \int_0^t u\,\dot\theta_w\,dt \qquad (29)$$
We can simplify the energy balance (27) as
I.E.(t) = K.E.(t) + L.E.(t)
(30)
Energy description for the infinite-dimensional fluid system: The above energy equation can be
extended for the infinite dimensional case as follows.
$$\mathrm{I.E.}(t) = \mathrm{K.E.}(t) + \frac{1}{2}\rho\int v^2\,dV + \frac{1}{2}\rho\nu\int_0^t\!\!\int \left(\frac{\partial v_i}{\partial x_k} + \frac{\partial v_k}{\partial x_i}\right)^{2} dV\,dt \qquad (31)$$
where the second term on the right hand side is the kinetic energy of the fluid, while the third term
is the energy lost due to viscous damping [8]. The velocity of the fluid in cartesian coordinates is given
by v = −v sinθ î + v cosθ ĵ, where v is the tangential velocity of the fluid as discussed previously
(4). We consider the system at a state when the wheel is no longer being actuated and is in a state
of rest with respect to the stool. The system can be found in such a state at the extreme position of
an oscillation that occurs after the wheel has been brought to rest. This implies that I.E(t) in (31)
is zero and hence, the kinetic energy of the stool can only be exchanged with that of the fluid, or be
lost due to damping. We analyze this state with the parameters from table 1 for the fluid-stool-wheel
system. At the extremum of the oscillation, the stool-wheel system is momentarily at rest, and as is
clear from (31), must be imparted energy from the fluid to start rotating again. The maximum energy
that the stool-wheel system could possibly regain is therefore equal to the kinetic energy of the fluid at
this extreme position. This leads to an expression (32) for the minimum fluid velocity (averaged over
the thickness of the annulus) in terms of system constants and the velocity of the stool.
$$\frac{1}{2}(I_w + I_s)\dot\phi_s^2 = \frac{1}{2}\rho\pi\bigl[(R_i + t)^2 - R_i^2\bigr]\,h\,v_{\mathrm{avg}}^2 \qquad (32)$$
where t and h are the thickness and height of the bearing respectively, while vavg is the velocity of the
fluid averaged over the thickness of the annulus. Even for a stool velocity as low as 1 rpm, the required
average fluid velocity is around 11 rpm, which is quite high and unrealistic. Hence, it is likely that a
different model is required to address such behaviour.
Name | Value | Units
Moment of Inertia of Wheel-Stool System | 1 | kg/m2
Inner diameter of bearing | 0.1 | m
Thickness of bearing | 0.01 | m
Height of bearing | 0.01 | m
Density of Fluid | 1000 | kg/m3

Table 1: Parameter values of the fluid-wheel-stool system that are representative of the experiment in [5]
8 Conclusion
In this paper, we present certain aspects of the damping-induced self recovery phenomenon that have
not been investigated so far in the existing literature. We present a technique to reduce the infinite-dimensional fluid model to the better understood finite-dimensional case, by deriving a formula for an
effective damping constant. We show that a bifurcation takes place at k = 0, and upon varying the value
of k from zero to a positive number, the stability of the stool changes from neutral to asymptotically
stable. We also derive an expression for the angle at which boundedness occurs to validate the approximation of the fluid system (by comparing with numerical experiments). Finally, we present an energy
description of the system to give an intuitive understanding of the energy dynamics, and to point out
that further experimental and theoretical investigation is necessary to explain the peculiar oscillations
found in experiments [5, 6].
References
[1] Dong Eui Chang and Soo Jeon. Damping-induced self recovery phenomenon in mechanical systems with an unactuated cyclic variable. Journal of Dynamic Systems, Measurement, and Control,
135(2):021011, 2013.
[2] Dong Eui Chang and Soo Jeon. On the damping-induced self-recovery phenomenon in mechanical
systems with several unactuated cyclic variables. Journal of Nonlinear Science, 23(6):1023–1038,
2013.
[3] Dong Eui Chang and Soo Jeon. On the self-recovery phenomenon in the process of diffusion. arXiv
preprint arXiv:1305.6658, 2013.
[4] Dong Eui Chang and Soo Jeon. On the self-recovery phenomenon for a cylindrical rigid body
rotating in an incompressible viscous fluid. Journal of Dynamic Systems, Measurement, and Control,
137(2):021005, 2015.
[5] Dong Eui Chang and Soo Jeon. Video of an experiment that demonstrates the self-recovery phenomenon in the bicycle wheel and rotating stool system. https://youtu.be/og5h4QoqIFs.
[6] Andy Ruina. Dynamic walking 2010: Cats, astronauts, trucks, bikes, arrows, and muscle-smarts: Stability, translation, and rotation. http://techtv.mit.edu/collections/locomotion:1216/videos/8007-dynamic-walking-2010-andyruina-cats-astronauts-trucks-bikes-arrows-and-muscle-smarts-stability-trans, 2010.
[7] Dong Eui Chang and Soo Jeon. Video of an experiment that demonstrates the self-recovery phenomenon in the vessel and fluid system. https://youtu.be/26qGQccK4Rc.
[8] L. D. Landau and E. M. Lifshitz. Fluid Mechanics, volume 6. 1959.
[9] Stanley J Farlow. Partial differential equations for scientists and engineers. Courier Corporation,
1993.
[10] Milton Abramowitz and Irene A Stegun. Handbook of mathematical functions: with formulas, graphs,
and mathematical tables, volume 55. Courier Corporation, 1964.
[11] S.H. Strogatz. Nonlinear Dynamics and Chaos: With Applications to Physics, Biology, Chemistry,
and Engineering. Advanced book program. Avalon Publishing, 1994.
Appendix A
Validation of effective damping constant
With the computed effective damping constant keff (18) of the infinite-dimensional fluid system (4),
we find the angle at which boundedness is attained, given by
$$\phi_s^* = \frac{-I_w v_w (R_o - R_i)}{2\pi\rho\nu R_o R_i^2} \qquad (33)$$
We now present some numerical experiments by solving the PDE-system (4) using the method of
lines. Equation (18) is then verified by comparing the numerical values obtained using formula (33)
and the numerical solutions of the original PDE. It is observed that the error tends to zero as Ro /Ri
approaches 1, independent of the value of Ri . The material constants used for the simulation, and for
the theoretical calculation are tabulated below
Ri (cm) | Ro (cm) | (Ro − Ri)/Ri × 100 | Angle with the PDE (rad) | Angle with keff (rad) | % Error
13.5 | 13.51 | 0.07 | 6.16 | 6.16 | 0.00
13.5 | 13.68 | 1.33 | 108.7 | 109.5 | 0.74
13.5 | 13.75 | 1.85 | 149.8 | 151.3 | 1.00
13.5 | 14 | 3.70 | 291.7 | 297.1 | 1.85
13.5 | 14.5 | 7.41 | 553.8 | 573.7 | 3.59
13.5 | 15 | 11.11 | 790.3 | 831.9 | 5.26
13.5 | 15.5 | 14.81 | 1004 | 1073 | 6.87
13.5 | 20 | 48.15 | 2265 | 2704 | 19.38
13.5 | 27 | 100 | 3123 | 4160 | 33.21
27 | 27.02 | 0.07 | 1.54 | 1.54 | 0.00
27 | 27.36 | 1.33 | 27.19 | 27.37 | 0.66
27 | 27.5 | 1.85 | 37.47 | 37.81 | 0.91
27 | 28 | 3.70 | 72.96 | 74.28 | 1.81
27 | 29 | 7.41 | 138.5 | 143.4 | 3.54
27 | 30 | 11.11 | 197.6 | 208 | 5.26
27 | 31 | 14.81 | 251.1 | 268.4 | 6.89
27 | 40 | 48.15 | 566.3 | 675.9 | 19.35
27 | 54 | 100 | 780.8 | 1039.9 | 33.18

Table 2: A comparison of the angles at which boundedness occurs, i.e., between the numerical simulations and the theoretically derived formula (33), for different values of (Ro − Ri)/Ri.
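As an illustrative cross-check (ours), the "with keff" column of Table 2 can be reproduced directly from formula (33) with the constants of Table 3; for example, for the first row:

% Illustrative check (ours): formula (33) with the constants of Table 3
% reproduces the "with keff" angles of Table 2.
Iw = 6e-3; vw = 60*pi; rho = 1.0147e3; nu = 1.17e-6;
Ri = 0.135; Ro = 0.1351;                              % first row of Table 2
phi_star = -Iw*vw*(Ro - Ri)/(2*pi*rho*nu*Ro*Ri^2)     % approximately -6.16 rad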
Name | Value | Units
Moment of Inertia of Wheel | 6 × 10^-3 | kg/m2
Moment of Inertia of Stool | 1.96 | kg/m2
Kinematic Viscosity of Fluid | 1.17 × 10^-6 | m2/s
Density of Fluid | 1.0147 × 10^3 | kg/m3
Steady State Velocity of Wheel | 60π | rad/s
Proportional Control Parameter | 1 | -
Derivative Control Parameter | 100 | -

Table 3: Parameter constants used for the simulations.
A Proposed Algorithm for Minimum Vertex Cover Problem and its Testing
Gang Hu
Email: [email protected]
Abstract
The paper presents an algorithm for the minimum vertex cover problem, which is an NP-Complete problem. The algorithm computes a minimum vertex cover of each input simple graph. Tested by the attached MATLAB programs, Stage 1 of the algorithm is applicable to, i.e., yields a proved minimum vertex cover for, about 99.99% of the tested 610,000 graphs of order 16 and 99.67% of the tested 1,200 graphs of order 32, and Stage 2 of the algorithm is applicable to all of the above tested graphs. All of the tested graphs are randomly generated graphs of random "edge density", or in other words, random probability of each edge. It is proved that Stage 1 and Stage 2 of the algorithm run in O(n^(5+log n)) and O(n^(3(5+log n)/2)) time respectively, where n is the order of the input graph. Because there is no theoretical proof yet that Stage 2 is applicable to all graphs, further stages of the algorithm are proposed, which are in a general form that is consistent with Stages 1 and 2.
1. INTRODUCTION
If the algorithm is to be classified under one algorithm design technique, then the transform-and-conquer strategy may be the most suitable for it [1].
Suppose we need to find a minimum vertex cover of simple graph G, then the first part of the
algorithm is to generate an auxiliary simple graph H, which satisfies the following four conditions:
A. L(V(H)) ⊇ L(V(G)), of which L(X) denotes X's label set, i.e., the set of the labels of all vertices of X;
B. Different vertices of H may share a same label. For any edge of H, however, the two endpoints have different labels;
C. In each component P of H, the vertex sets of all maximal cliques have the same cardinal number, and thus this number is called the "grade of component P" and denoted by g(P);
D. For any nonempty subset T of L(V(G)), there exists a maximal clique Q in H such that L(V(Q)) = T.
Remark 1.1. H is generated by Steps 1 to 3 in Section 2, and it is not any other graph which
satisfies the above conditions.
The remaining part of the algorithm is to find out a maximal clique of H whose label set can be
proved to be the label set of a minimum vertex cover of graph G.
Let N[x] = N(x)∪{x}, of which N(x) is the neighbor set of vertex x. Suppose a subgraph of H satisfies both of the following conditions.
Condition 1. For any vertex v of the subgraph, there exists a vertex cover C of G such that L(N[v]) ⊇ L(C).
Condition 2. For any edge uv of the subgraph, there exists a vertex cover C of G such that L(N[uv]) ⊇ L(C), where N[uv] = N[u]∩N[v].
Then for each component of H in ascending order of its grade, which is defined in Condition C,
find out a maximal subgraph S of it which satisfies Conditions 1 and 2. Iterate this computation for the
next component until S is nonempty. Then find out a minimal-order subgraph I of S which contains a
nonempty subgraph that satisfies Conditions 1 and 2. If I is a clique, then it is proved by Claim 2.10
that L(V(I)) is the label set of a minimum vertex cover of graph G. If I is not a clique, then go to Stage 2
of the algorithm, which is introduced in Section 5.
In Section 3, Stage 1 of the algorithm is proved to run in O(n^(5+log n)) time.
The algorithm was tested by MATLAB programs, and it was found that I is not a clique for only 44
of the tested 711,200 graphs. Then by Claim 2.10, Stage 1 of the algorithm cannot yield a minimum
vertex cover for only about 0.0062% of all tested graphs. The test results are detailed in Section 4.
For the graphs to which Stage 1 of the algorithm is not applicable, a stronger version of Condition
2 is introduced. With this change, the algorithm reaches Stage 2 for the graphs to which Stage 1 is not
applicable, and it was found that, with O(n^(3(5+log n)/2)) time, Stage 2 is applicable to, i.e., yields a minimum vertex cover of, each of the 44 tested graphs. Because Stage 2 is the same as Stage 1 except that
Condition 2 is replaced by a stronger version, so Stage 2 actually works for all of the 711,200 tested
graphs.
Furthermore, if there exist graphs to which Stage 2 of the algorithm is not applicable, those graphs
will go to further stages of the algorithm which are expressed in a general form that is consistent with
Stages 1 and 2. Those stages are introduced in Section 6.
2. STEPS OF THE ALGORITHM
Firstly, graph H is constructed by Steps 1 to 3.
Step 1. Suppose L(V(G)) = {1, 2, …, n}, and k = ⌈log₂ n⌉, i.e., k is the smallest integer which satisfies n ≤ 2^k. Let L(V(H)) = {1, 2, 3, …, 2^k}.
Then define Z, which is a family of sets, as follows.
Definition 2.1 Let L(V(H)) be the "first" member of Z. Then partition L(V(H)) into two
equal-sized disjoint sets {1, 2, …, 2^(k−1)} and {2^(k−1)+1, 2^(k−1)+2, …, 2^k}, whose elements are consecutive
numbers, and let the two sets become members of Z. For each of the two "new" members, if its cardinal
number is larger than 1, then continue the partition and "member-assignment" process as described
above. Keep on the process until the cardinal number of each "new" member is 1.
■
Z can also be generated in a reversed way, which is shown in the attached MATLAB program.
For example, if k = 3, then Z = {{1}, {2}, {3}, {4}, {5}, {6}, {7}, {8}, {1, 2}, {3, 4}, {5, 6}, {7, 8}, {1, 2, 3, 4}, {5, 6, 7, 8}, {1, 2, …, 8}}.
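For reference, Z can also be produced by a direct top-down recursion following Definition 2.1; the sketch below is ours (the attached program Generating_k_and_H builds Z bottom-up instead). For k = 3, buildZ(1, 8) returns the fifteen members listed above, only in a different order.

function Z = buildZ(lo, hi)
% Recursively list the members of Z whose elements lie in {lo,...,hi};
% hi-lo+1 is assumed to be a power of two, and buildZ(1, 2^k) gives all of Z.
    Z = {lo:hi};
    if hi > lo
        mid = lo + (hi - lo + 1)/2 - 1;              % end of the "first half"
        Z = [Z, buildZ(lo, mid), buildZ(mid + 1, hi)];
    end
end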
Step 2. For each l ∈ L(V(G)), generate one and only one first-grade component of H whose label is l,
and denote this component by P({l}).
Remark 2.2. The grade of a component of H is defined in Condition C of Section 1; it is obvious that the set of all first-grade components of H is actually the set of all isolated vertices of H.
■
Step 3. Other components of H are generated in sequence by their grades. For example, all of the
second-grade components are to be constructed before the construction of any higher-grade component.
Components of any grade larger than 1 are defined in a general form as follows. But firstly, another
definition is needed.
Definition 2.3. Suppose A is a member of Z, which is defined by Definition 2.1, and b is a
positive integer. Let U = ∪P, of which P ranges over H's components which satisfy L(V(P)) ⊆ A and g(P) = b. Then we call U a same-grade union of A and b, and denote it by U(A, b). (See Fig. 2.4.)
Figure 2.4. Examples of same-grade unions. [Diagram: two same-grade unions, U({1,2,3,4}, 2) and U({5,6,7,8}, 3), drawn with their vertex labels.]
Each component of U(A, b), where b is larger than 1, is defined as follows.
Definition 2.5. If and only if P is a join of two same-grade unions U(A1, b1) and U(A2, b2), i.e., P = U(A1, b1) ∨ U(A2, b2), of which A1, A2, b1 and b2 satisfy the following two conditions:
2.5.1 A1∩A2 = ∅ and A1∪A2 = A, of which A ∈ Z;
2.5.2 b1 and b2 are positive integers, and b1 + b2 = b,
then let P be one of H's components denoted by P(A1, b1, A2, b2) or P(A, b1, b2), of which b1 corresponds to A's "first half" containing the smaller numbers.
After and only after for the same b, P of each A1, A2, b1 and b2 which satisfy conditions 2.5.1 and
After, and only after, P has been generated for every A1, A2, b1 and b2 of the same b which satisfy conditions 2.5.1 and 2.5.2, we start to construct H's components of grade b+1. When the highest-grade component of H, which is a 2^k-clique, is generated, the construction of H is complete, and we then call H a minimum-covering-computation graph.
Then we have
Claim 2.6. For any nonempty subset T of L(V(G)) and any member A of Z which satisfies T ⊆ A, there exists a maximal clique Q in U(A, |T|) such that L(V(Q)) = T.
Proof. We use induction on |T|. The claim obviously holds when |T| = 1.
For the induction step, let T be a subset of L(V(G)) with |T| ≥ 2; then by Definition 2.1, we can suppose Am to be the only minimal member of Z which includes T. Because |T| ≥ 2, we can suppose that Am = A1∪A2, of which A1 and A2 are also Z's members and A1∩A2 = ∅; then let T1 = A1∩T and T2 = A2∩T. Since Am is the minimal member of Z which includes T, it is obvious that T1, T2 ≠ ∅. By the induction hypothesis, there exists a maximal clique Q1 in U(A1, |T1|) such that L(V(Q1)) = T1, and there exists a maximal clique Q2 in U(A2, |T2|) such that L(V(Q2)) = T2. Then by Definition 2.5, there exists a maximal clique Q in P(A1, |T1|, A2, |T2|) = U(A1, |T1|) ∨ U(A2, |T2|) such that L(V(Q)) = L(V(Q1)∪V(Q2)) = T1∪T2 = T.
P(A1, |T1|, A2, |T2|) is a component of U(Am, |T|). Because Am is the only minimal member of Z which includes T, for any member A of Z which includes T, A includes Am and P(A1, |T1|, A2, |T2|) is also a component of U(A, |T|); thus there exists a maximal clique Q in U(A, |T|) such that L(V(Q)) = T.
We need a claim as follows to understand the next step.
Claim 2.7. For any subgraph D of H, there is only one maximal subgraph of D which satisfies
Conditions 1 and 2 in Section 1.
Proof. Assume S1 and S2 are two maximal subgraphs of D which satisfy Conditions 1 and 2, then
S1 ⊈ S2 and S2 ⊈ S1. Let S = S1∪S2; then S ⊃ S1 and S ⊃ S2. However, it is obvious that S also satisfies
Conditions 1 and 2, which is contradictory to the assumption.
■
Step 4. For each component of H in ascending order of its grade, which is defined in Condition
C of Section 1, by deleting a minimal subgraph of the component, get a maximal subgraph S of it
which satisfies Conditions 1 and 2. Suppose the grade of the current component is b. If S is empty,
iterate this computation for the next component, whose grade is also b or otherwise b+1 when all
components of grade b have been computed. If S is nonempty, record S for the next step.
■
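Note that, under the reading of Conditions 1 and 2 used above (a label set must contain the label set of some vertex cover of G), each individual test in Step 4 is a polynomial-time check on G: a label set S contains the labels of some vertex cover iff the vertices of G whose labels lie in S already cover every edge. A minimal sketch of such a check (ours; not a listing of the attached programs):

function ok = contains_cover_labels(G, S)
% true iff the label set S contains the label set of some vertex cover of
% the simple graph G (given by its 0/1 adjacency matrix).  The labels of
% G are taken to be the vertex indices 1..n, as in Step 1.
    n = size(G, 1);
    S = S(S <= n);                      % labels of H larger than n have no vertex in G
    inS = false(1, n); inS(S) = true;
    rest = G(~inS, ~inS);               % edges with neither endpoint labelled in S
    ok = ~any(rest(:));                 % a vertex cover leaves no edge uncovered
end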
Then we have a claim as follows.
Claim 2.8. For the recorded S of Step 4, the size of its maximum clique is no larger than the
minimum size of graph G’s vertex cover.
Proof. Suppose C is a minimum vertex cover of G; then by Claim 2.6, there exists a maximal clique Q in U(L(V(H)), |C|) such that L(V(Q)) = L(C). Thus Q satisfies Conditions 1 and 2, so there exists a component of U(L(V(H)), |C|) which has a nonempty subgraph satisfying Conditions 1 and 2. Because each component P of H is computed in ascending order of its grade, and by Claim 2.7 S is the only maximal subgraph of P which satisfies Conditions 1 and 2, for the recorded nonempty S of Step 4 the size of its maximum clique is no larger than |C|.
■
Step 5. For the recorded S of Step 4, by deleting a maximal subset of V(S), get a minimal-order
subgraph I of S which contains a nonempty subgraph SS that satisfies Conditions 1 and 2. Record this I
for the next step.
Remark 2.9. By Claim 2.7, we can tell whether I contains a nonempty subgraph that satisfies
Conditions 1 and 2 by finding out the only one maximal subgraph of I which satisfies the two
conditions.
■
Then for I recorded by Step 5, we have a claim as follows.
Claim 2.10. If I is a clique, then L(V(I)) is the label set of a minimum vertex cover of graph G.
Proof. By Step 5, I contains a nonempty subgraph that satisfies Conditions 1 and 2, so by Condition 1 or 2, we can tell that there exists a vertex cover C of G such that L(V(I)) ⊇ L(C).
Suppose b is the size of a maximum clique of the recorded S in Step 4. If I is a clique, then |V(I)| ≤ b because I is a subgraph of S. By Claim 2.8, b, and so |V(I)|, are no larger than the minimum size of graph G's vertex cover. Thus L(V(I)) = L(C) is the label set of a minimum vertex cover of G.
■
Thus we have
Step 6. Check whether the recorded I of Step 5 is a clique. If I is a clique, then output L(V(I)). If
I is not a clique, then go to the next stage of the algorithm.
Remark 2.11. Sections 5 and 6 explain "the next stage of the algorithm".
■
3. EFFICIENCY OF THE ALGORITHM
Claim 3.1. For L(V(H)) = {1, 2, 3, …, 2^k} (k ≥ 1),
$$|V(H)| = \prod_{i=1}^{k}\bigl(2^i + 2\bigr).$$
Proof. When k = 1, H has two first-grade components and one second-grade component, and so |V(H)| = 4 = 2^1 + 2.
We use induction on k and assume the claim holds for all k with 1 ≤ k ≤ h.
Now for k = h + 1, L(V(H)) = {1, 2, 3, …, 2^(h+1)}. By Steps 1 to 3 of the algorithm, we can suppose two isomorphic minimum-covering-computation graphs H1 and H2 which satisfy L(V(H1)) = {1, 2, 3, …, 2^h} and L(V(H2)) = {2^h + 1, 2^h + 2, …, 2^(h+1)}. Then by the induction hypothesis,
$$|V(H_1)| = \prod_{i=1}^{h}\bigl(2^i + 2\bigr).$$
Because H2 is isomorphic to H1, |V(H2)| = |V(H1)|. Suppose P is a component of H1 or H2; then by Steps 1 to 3 of the algorithm, we can tell that P is also a component of H. Thus we have
Corollary 3.1.1. Let X = ∪P, of which P ranges over the components of H that are also components of H1 or H2; then
$$|V(X)| = 2\prod_{i=1}^{h}\bigl(2^i + 2\bigr).$$
Now the other vertex count we need is that of H's components each of which is neither a component of H1 nor of H2. Suppose P* is such a component; then by Definitions 2.1 and 2.5, we can tell that L(V(P*)) ⊈ L(V(H1)), L(V(P*)) ⊈ L(V(H2)) and so L(V(P*)) = L(V(H)). Thus by Definition 2.5,
P* = U(A1, b1) ∨ U(A2, b2) = U(L(V(H1)), b1) ∨ U(L(V(H2)), b2).
In the above formula, for each U(L(V(H1)), b1), the b2 of U(L(V(H2)), b2) can be any integer from 1 to |L(V(H2))| = 2^h, which means that each U(L(V(H1)), b1) was copied 2^h times to generate different components of H. Because
$$H_1 = \bigcup_{b_1=1}^{2^h} U\bigl(L(V(H_1)), b_1\bigr),$$
H1 was copied 2^h times in the construction of H. And there is the same conclusion for H2. Combine this
conclusion with Corollary 3.1.1, then we have
$$|V(H)| = |V(X)| + 2^h\bigl(|V(H_1)| + |V(H_2)|\bigr) = 2\prod_{i=1}^{h}(2^i+2) + 2^h\cdot 2\prod_{i=1}^{h}(2^i+2) = \prod_{i=1}^{h+1}\bigl(2^i + 2\bigr).$$
■
Claim 3.2. Suppose graph G of order n is the input and T1(n) is the running time of Stage 1 of the algorithm; then T1(n) = O(n^(5+log n)).
Proof. By Step 1, k is the smallest integer which satisfies n ≤ 2^k, so 2^(k−1) < n. Then by Claim 3.1,
$$|V(H)| = \prod_{i=1}^{k}(2^i + 2) \le \prod_{i=1}^{k} 2^{\,i+1} = 2^{k(k+3)/2} \le (2n)^{(\log_2 n + 4)/2}.$$
Thus the asymptotic upper bounds for |V(H)| and |E(H)| can be taken as n^((5+log n)/2) and n^(5+log n), respectively, so the construction of H runs in O(n^(5+log n)) time.
Then in Step 4, the running time for computing subgraphs of H's components by Conditions 1 and 2 is still no more than O(n^(5+log n)), and this is also the case for Step 5. Therefore Stage 1 of the algorithm takes no more than O(n^(5+log n)) time.
■
4. TESTING OF THE ALGORITHM
The algorithm was tested by the two attached programs, "Generating_k_and_H" and "Testing_G",
on MATLAB (R2016a, trial use). The first file generates and saves k and H for the second file. The
second file generates random graphs of random "edge density", i.e., random probability of each edge,
for testing of the algorithm. "Edge density" in the program approximately equals the ratio of |E(G)| to |E(K)|, of which K is a 2^k-order complete graph, and it is generated from a normal distribution with mean 0.5 and standard deviation 0.2, 0.3, 0.4, 0.5 or 0.6 for the testing. The reasons for introducing this distribution are that there exist the most graphs, whether labelled or unlabelled, when the "edge density" equals 0.5, and that there exists the same number of graphs of "edge density" d as of "edge density" 1 − d.
An ordinary personal computer is used for the testing. Testing parameters and results are listed as
follows.
Table 4.1. Testing parameters and results

k, |V(G)| | Number of tested graphs | Standard deviation of "edge density" | Approximate running time (for Testing_G.m) | No. of graphs to which Stage 1 is not applicable | No. of graphs to which Stage 2 is not applicable
3, 8 | 100,000 | 0.3 | 16 minutes | 0 | 0
4, 16 | 50,000 | 0.2 | 10 hours | 2 | 0
4, 16 | 220,000 | 0.3 | 43 hours | 19 | 0
4, 16 | 150,000 | 0.4 | 29 hours | 8 | 0
4, 16 | 130,000 | 0.5 | 25 hours | 9 | 0
4, 16 | 60,000 | 0.6 | 12 hours | 2 | 0
5, 32 | 400 | 0.3 | 58 hours | 1 | 0
5, 32 | 400 | 0.4 | 58 hours | 2 | 0
5, 32 | 400 | 0.5 | 58 hours | 1 | 0
Remark 4.2. Stage 2 is explained in Section 5.
Remark 4.3. When k = 5, the running time on the computer used for testing is so long that it is impossible to test a large number of graphs in a short period. There is no test data for k > 5 for the same reason.
Table 4.4. Recorded "edge densities" of graphs to which Stage 1 is not applicable

k, |V(G)| | Recorded "edge densities" of graphs to which Stage 1 is not applicable
4, 16 | 0.2410, 0.2599, 0.2973, 0.3170, 0.3173, 0.3204, 0.3323, 0.3359, 0.3533, 0.3551, 0.3642, 0.3773, 0.3774, 0.3942, 0.4089, 0.4232, 0.4436, 0.4475, 0.4495, 0.4500, 0.4509, 0.4539, 0.4821, 0.4919, 0.4938, 0.5032, 0.5318, 0.5337, 0.5343, 0.5356, 0.6200, 0.6476
5, 32 | 0.1479, 0.1889, 0.2127, 0.2868
By Table 4.1, when k = 3, Stage 1 is applicable to 100% of the tested random graphs (i.e., Step 5 of Stage 1 yields a clique and, by Claim 2.10, Step 6 yields the label set of a minimum vertex cover of each tested graph for k = 3); when k = 4, Stage 1 is applicable to about 99.99% of the tested random graphs; and when k = 5, Stage 1 is applicable to about 99.67% of the tested random graphs.
By Table 4.4, the recorded "edge densities" of the tested graphs to which Stage 1 is not applicable are scattered.
Except for k = 3, the numbers of tested graphs are much smaller than the numbers of unlabeled graphs of the respective orders. However, because of the large number of tested graphs, and because the probability of duplicated graphs to which the algorithm is applicable is the same as that of duplicated
graphs to which the algorithm is not applicable, it is reasonable to believe that the "applicability ratios"
of tested graphs are close to the "applicability ratios" of all unlabeled (or labelled) graphs of respective
orders.
5. STAGE 2 OF THE ALGORITHM
For those graphs to which Stage 1 of the algorithm is not applicable, a stronger version of
Condition 2 is introduced as follows.
Condition 2+. For any edge uv of the subgraph, there exists a vertex cover C of G such that L(N[uv]) ⊇ L(C), where
N[uv] = {w ∈ N[u]∩N[v] : there exists a vertex cover C* of G such that L(N[u]∩N[v]∩N[w]) ⊇ L(C*)}.
■
With this change, the algorithm goes to Stage 2 for those graphs to which Stage 1 is not applicable.
Steps 1 to 3 are for construction of H and apparently they do not need to run again. Besides, for those
components of H which have been computed in Step 4 of Stage 1 and concluded that they do not have
nonempty subgraph satisfying Conditions 1 and 2, they do not need to be computed again in Stage 2
because Condition 2+ is stronger than Condition 2. Thus Stage 2 starts at Step 4 for the component of H
which has nonempty subgraph satisfying Conditions 1 and 2, and the following computation is same as
Stage 1 except that Condition 2 is substituted by Condition 2+.
Stage 2 of the algorithm has been implemented in the attached MATLAB program. It was found
that Stage 2 yielded cliques for all of the 44 tested graphs for which Stage 1 did not. Because Claims
2.8 and 2.10 still hold for Stage 2, so Stage 2 is applicable to all of the 44 tested graphs. Then since
Condition 2+ is stronger than Condition 2, we can tell that Stage 2 actually works for all of the 711,200
tested graphs.
By the definition of N[uv] in Condition 2+, the 3-cliques of S in Step 4 and of SS in Step 5 require computation; thus the running time for the algorithm that reaches Stage 2 as its final stage is O(n^(3(5+log n)/2)).
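In our reading, the exponent comes from counting triples: by the proof of Claim 3.2, |V(H)| ≤ (2n)^((log₂ n + 4)/2), so the number of vertex triples that have to be examined when evaluating N[uv] in Condition 2+ is at most |V(H)|^3 = O(n^(3(5+log n)/2)), and this count dominates the running time.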
However, the definition of N[uv] in Condition 2+ looks "strange", so it requires an expression in a
more general way as shown in the following section.
6. FURTHER STAGES OF THE ALGORITHM
Because it has not been theoretically proved that Stage 2 of the algorithm is applicable to all
graphs, it is worthwhile to conceive further stages of the algorithm which shall be in a general form that
is consistent with Stages 1 and 2. Then the concept of hyperedge of hypergraph is needed.
A hypergraph consists of a collection of vertices and a collection of hyperedges. If the vertex set is
V, then the hyperedges are subsets of V. Then for Stage t of the algorithm, where t is a positive integer,
we give the following five rules for the algorithm:
1. Suppose Es(H) is the set of all hyperedges of size s in H; then let
Es(H) = {e : e is the vertex set of an s-clique in H and H is generated by Steps 1 to 3}, for s ≤ t+1;
Es(H) = ∅, for s > t+1.
2. For any hyperedge e of size no larger than t, let N[e] = {x : {x}∪e is a subset of a hyperedge of H}.
3. For any hyperedge e of size t+1, let N[e] = {x : x ∈ N[{v}] for every v ∈ e}.
4. By deleting hyperedges in H, subgraph S in Step 4 and subgraph SS in Step 5 shall satisfy
Condition X. For any hyperedge e of the subgraph, there exists a vertex cover C of G such that L(N[e]) ⊇ L(C).
5. If Stage t does not yield clique at Step 5, start Stage t+1 at Step 4 for the component of H
where Stage t ends.
Then it is not difficult to show that the above rules are consistent with Stages 1 and 2 of the
algorithm.
Claims 2.8 and 2.10 can be easily proved to hold for any stage of the algorithm, so Stage t yields a
minimum vertex cover of graph G at Step 6 if it yields a clique at Step 5.
Because in Stage t, hyperedges of size t+1, i.e., vertex sets of (t+1)-cliques, of S in Step 4 and SS in Step 5 require computation, the algorithm runs in O(n^((t+1)(5+log n)/2)) time if the "maximum" stage which it reaches is Stage t.
7. CONCLUSIONS
Conclusions for this algorithm are summarized as follows.
A. For the minimum vertex cover problem on an order-n graph, the algorithm runs in O(n^((t+1)(5+log n)/2)) time if the "maximum" stage which it reaches is Stage t, where t is a positive integer. Therefore, if the "maximum" stage which the algorithm reaches for a graph is Stage 1 or 2, then the algorithm runs in O(n^(5+log n)) or O(n^(3(5+log n)/2)) time for the graph, respectively.
B. Stage 1 of the algorithm is applicable to, i.e., yields a proved minimum vertex cover for, each
of more than 99% of tested graphs of order no larger than 32, while Stage 2 of the algorithm
works for 100% of tested graphs of order no larger than 32. And it is reasonable to believe that
the above ratios are close to the "real applicability ratios" for this algorithm to all unlabeled
(or labelled) graphs of order no larger than 32.
C. The "applicability ratio" of Stage 1 for all tested order-8 graphs, order-16 graphs and order-32
graphs is 100%, 99.99% and 99.67% respectively, so it is unlikely that the "applicability ratio"
of Stage 1 decreases sharply when the order of graph increases. Besides, Stage 2 works for
each of the tested graphs and we still have stages beyond Stage 2. Therefore it is an efficient
algorithm which is already applicable to practical use.
D. As summarized above, the performance of this algorithm is extraordinary for NP-Complete
problems. Although there is no systematic and theoretical explanation for it yet, it is at least an
important finding which is valuable for further research. However, like many findings or
conjectures in mathematics, its theoretical explanation may take many years even decades to
be found. Thus it is decided to make the algorithm public so that the explanation and
improvement of it become possible.
E. Further tests for graphs of order 64 or more are necessary and valuable. However, they require
computer of large capacity which is not available for the author.
Reference
[1] Levitin, A.: Introduction to the Design and Analysis of Algorithms. Pearson Education, Boston
(2007)
Appendix 1.
Generating_k_and_H.m
% This MATLAB file is the first program to test the algorithm in the manuscript
% and shall run before the running of the file "Testing_G".
function Generating_k_and_H
k=input('When the order of each randomly generated graph is 2^k, k equals
to:');
if (k<=0)||(k<ceil(k))
error('k must be positive interger')
end
if k==6
disp('Note: This program requires large memory when k is larger than
5. The size of the swap file is recommended to be set to 100GB-200GB for
k=6.')
disp('And the running time for k=6 on an ordinary personal computer
is about 2 to 3 hours.')
end
if k>=7
disp('Note: For k>6, this program requires large memory which may
exceed the capacity of the computer,')
disp('and the running time on an ordinary personal computer is very
long.')
end
if k>7
error('For k>7, all of the uint8 class of matrix in this program shall
be changed to other class.')
end
Z=Generating_Z(k);
H=Generating_H(k,Z);
disp('Saving file "k_and_H". Please wait.') % When k is larger than 5,
the following saving of file 'k_and_H' may take a long time.
save('k_and_H','k','H','-v7.3')
end
% Step 1
function Z=Generating_Z(k)
% Cell-array Z represents the family of sets by Definition 2.1 of the
algorithm, while its non-empty cells are arrays which represent the members
of
% family Z. Z's cells of size 1 are first generated as follows.
Z=cell(k+1,2^k);
for c=1:2^k
M=c;
Z{1,c}=M;
end
% Z's cells of larger size are generated as follows. Non-empty cells of
the rth row of cell-array Z represent family Z's member of size 2^(r-1).
for r=2:k+1
for c=1:2^(k+1-r)
Z{r,c}=union(Z{r-1,2*c-1}, Z{r-1,2*c});
end
end
end
% Steps 2&3
function H=Generating_H(k,Z)
% In this program, component P of graph H is expressed by its adjacency
matrix where diagonal numbers are the labels of corresponding vertices
% instead of zeros. For first-grade component P(Z{1,d2}), H{1,d2,1,1} is
assigned to be its adjacency matrix. However, for higher-grade component
% P(Z{d1,d2},d3,d4) where d1>1, H{d1,d2,d3,d4} is assigned to be its
adjacency matrix.
H=cell(k+1,2^k,2^(k-1),2^(k-1));
% k+1=size(Z,1). 2^k=size(Z,2).
% When component P's grade is larger than 2^(k-1), P is the join of
same-grade unions U(Z{k,1},d3) and U(Z{k,2},d4). Because
|Z{k,1}|=|Z{k,2}|=2^(k-1),
% so both d3 and d4 are no larger than 2^(k-1) for any H{d1,d2,d3,d4}.
U=cell(k+1,2^k,2^(k-1));
% Same-grade union U(Z{d1,d2},D3) is expressed by its adjacency matrix
U{d1,d2,D3} where diagonal numbers are the labels of corresponding
vertices
% instead of zeros.
% Matrices for the first-grade components and first-grade unions are first
generated as follows.
for d2=1:2^k
H{1,d2,1,1}=uint8(d2); % The uint8 class is used to save memory and
time. However, it shall be changed to other class when k is larger than
7.
U=Generating_U(U,k,H{1,d2,1,1},1,d2,1);
end
% Matrices for higher-grade components and higher-grade unions are
generated as follows.
for b=2:2^k
disp(['Proceeding to b = ',num2str(b),' of ',num2str(2^k)]) % This
command is to show the progress of the running program when k is large.
for d3=max(1,b-2^(k-1)):min(b-1,2^(k-1))
% b=d3+d4. b3 and b4 are positive integers and both maximum values
of d3 and d4 for H{d1,d2,d3,d4} are 2^(k-1).
d4=b-d3;
for d1=1:k
if (size(Z{d1,1},2)<d3)||(size(Z{d1,1},2)<d4)
% By Definition 2.5, for component P(Z{d1+1,*},d3,d4) and
nonempty H{d1+1,*,d3,d4}, d3 and d4 are not larger than size(Z{d1,1},2).
continue
end
for d2=1:2:2^k-1
if isempty(Z{d1,d2+1})==0
% Z{d1,d2} and Z{d1,d2+1} represent A1 and A2 in
Definition 2.5, respectively.
sx=size(U{d1,d2,d3},1); % Matrix U{d1,d2,d3} expresses
same-grade union U(Z{d1,d2},d3)
sy=size(U{d1,d2+1,d4},1); % Matrix U{d1,d2+1,d4}
expresses same-grade union U(Z{d1,d2+1},d4)
H{d1+1,(d2+1)/2,d3,d4}=uint8([U{d1,d2,d3},ones(sx,sy);ones(sy,sx),U{d
1,d2+1,d4}]);
% H{d1+1,(d2+1)/2,d3,d4} is the adjacency matrix of
graph H's component P(Z{d1+1,(d2+1)/2},d3,d4), which is the join of
% U(Z{d1,d2},d3) and U(Z{d1,d2+1},d4).
% Z{d1+1,(d2+1)/2}=union(Z{d1,d2},Z{d1,d2+1}).
if d1<k
% When d1=k, H{d1+1,*,*,*} expresses component of
same-grade union U(Z{k+1,1},*). Because |Z{k+1,1}| = 2^k, i.e., Z{k+1,1}
% contains all labels, so U(Z{k+1,1},*) will not be
used to generate any component of H and thus they are not generated here.
U=Generating_U(U,k,H{d1+1,(d2+1)/2,d3,d4},d1+1,(d2+1)/2,b);
end
else % which indicates Z{d1,d2+1} is empty, then go to the
next d1.
break
end
end
end
end
end
end
function U=Generating_U(U,k,P,a1,a2,b)
% This function is to put component P(Z{a1,a2},b1,b2) into each same-grade
union U(Z{d1,d2},b) of which b=b1+b2 and Z{a1,a2} is a subset of Z{d1,d2}.
d1=a1;
d2=a2;
while d1<=k
% As explained in lines 80 and 81, U(Z{k+1,*},*) will not be used to
generate any cell of H and thus they are not generated here.
sizeU=size(U{d1,d2,b},1);
sizeP=size(P,1);
U{d1,d2,b}=uint8([U{d1,d2,b},zeros(sizeU,sizeP);zeros(sizeP,sizeU),P]
);
% The above command puts component P(Z{a1,a2},b1,b2) into same-grade
union U(Z{d1,d2},b).
d1=d1+1; % It is obvious that for any d1, there exists at most one
Z{d1,d2} which includes Z{a1,a2}.
d2=ceil(d2/2); % If Z{d1,d2} includes Z{a1,a2}, then
Z{d1+1,ceil(d2/2)} includes Z{a1,a2}.
end
end
Appendix 2.
Testing_G.m
% This MATLAB file is the second program to test the algorithm in the manuscript.
% Another file named "Generating_k_and_H.m" needs to run first.
function Testing_G
% By generating m random graphs, of which the order is 2^k, the program
finds out how many of the tested random graphs do not yield clique in Step
5
% and thus are not the graphs to which the algorithm is applicable. (If
a clique is yielded in Step 5, then by Claim 2.10, its label set is the
label
% set of a minimum vertex cover of the tested graph.)
m=input('How many random graphs are to be generated and tested for the
algorithm?');
if (m<=0) || (m<ceil(m))
error('m must be positive integer')
end
disp('Loading file "k_and_H". Please wait.') % When k is larger than 5,
the following loading of file 'k_and_H' may take a long time.
load('k_and_H');
% File "k_and_H.mat" is generated by
"Generating_k_and_H.m".
if k==5
disp('Note: When k=5, the running time for an ordinary personal
computer to compute a graph is about 9 minutes.')
end
if k>5
disp('Note: When k>5, the running time for an ordinary personal
computer to compute a graph is very long.')
end
n1=0; % n1 will be the number of tested graphs to which Stage 1 of the
algorithm is not applicable.
n2=0; % n2 will be the number of tested graphs to which Stage 2 is not
applicable.
d1=[]; % d1 will record the approximate edge density of each graph to which
Stage 1 is not applicable.
d2=[]; % d2 will record the approximate edge density of each graph to which
Stage 2 is not applicable.
for h=1:m
disp(['Computing the ',num2str(h),'th random graph'])
density=2;
while density<=0 || density>1
density=0.5+0.3*randn;
end
% "density" will approximately equals to the ratio of |E(G)| to |E(K)|,
of which K is a 2^k-order complete graph, and it is generated from a normal
% distribution with mean 0.5 and standard deviation 0.2,0.3,0.4,0.5
or 0.6.
G=Generating_G(k,density);
if G==zeros(size(G))
disp(['The ',num2str(h),'th random graph has no edge.'])
continue
end
stage=1;
bstart=2;
[S1,bstart]=Step_4(stage,bstart,k,G,H);
I1=Step_5(stage,S1,G);
result1=Step_6(I1);
if result1==1 % which indicates I1 is a clique.
disp(['Stage 1 of the algorithm is applicable to the
',num2str(h),'th random graph.']);
else
disp(['Stage 1 of the algorithm is NOT applicable to the
',num2str(h),'th random graph.']);
n1=n1+1;
d1(length(d1)+1)=density;
stage=2;
[S2,~]=Step_4(stage,bstart,k,G,H);
I2=Step_5(stage,S2,G);
result2=Step_6(I2);
if result2==1 % which indicates I2 is a clique.
disp(['Stage 2 of the algorithm is applicable to the
',num2str(h),'th random graph.']);
else
disp(['Stage 2 of the algorithm is NOT applicable to the
',num2str(h),'th random graph.']);
n2=n2+1;
d2(length(d2)+1)=density;
end
end
disp(['Currently, Stage 1 of the algorithm is NOT applicable to
',num2str(n1),' of ',num2str(h),' computed random graphs.'])
disp(['Currently, Stage 2 of the algorithm is NOT applicable to ',num2str(n2),' of ',num2str(h),' computed random graphs.'])
end
d1
disp('d1 records the approximate edge density of each graph to which Stage 1 is not applicable.')
d2
disp('d2 records the approximate edge density of each graph to which Stage 2 is not applicable.')
end
function G=Generating_G(k,density)
G=sprandsym(2^k,density,0.1);
% sprandsym(n,density) is a symmetric random, n-by-n, sparse matrix with
% approximately density*n*n nonzeros. However, when density equals 1, the
% function still generates some zeros. For better performance,
% sprandsym(n,density,rc) is used here and it works much better, although it
% occasionally still generates zeros for density=1.
G(logical(eye(2^k)))=0;
G=spones(G);
end
function [S,bstart]=Step_4(stage,bstart,k,G,H)
% For each component P of graph H in ascending order of its grade, by
% deleting a minimal subgraph of P, get P's maximal subgraph S which satisfies
% Conditions 1 and 2 (or 2+ for Stage 2). Iterate this computation for the
% next component until P's S is nonempty.
% In this program, component P of graph H is expressed by its adjacency
% matrix, where diagonal numbers are the labels of corresponding vertices
% instead of zeros. For first-grade component P(Z{1,d2}), H{1,d2,1,1} is
% assigned to be its adjacency matrix. However, for higher-grade component
% P(Z{d1,d2},d3,d4) where d1>1, H{d1,d2,d3,d4} is assigned to be its
% adjacency matrix.
if stage==1
for d2=1:2^k
% This "for loop" is only for graph H's first-grade components. Stage 2
% never starts from any first-grade component because a nonempty subgraph
% of a first-grade component is always a clique.
P=H{1,d2,1,1};
S=Maximal_subgraph_satisfying_conditions_1and2_or_1and2plus(stage,P,G);
if isempty(S)==0
% which indicates P(Z{1,d2}) has a nonempty subgraph S, which must be
% P(Z{1,d2}) itself, that satisfies Conditions 1 and 2.
return
end
end
end
for b=bstart:2^k
% This "for loop" is for graph H's components of grade larger than 1. Thus
% bstart=2 for Stage 1, while Stage 2 starts at the grade of H's component
% which has a nonempty subgraph satisfying Conditions 1 and 2.
for d3=max(1,b-2^(k-1)):min(b-1,2^(k-1))
% b=d3+d4. d3 and d4 must be positive integers and both maximum values of
% d3 and d4 for H{d1,d2,d3,d4} are 2^(k-1).
d4=b-d3;
for d1=2:k+1
for d2=1:2^k
P=H{d1,d2,d3,d4};
if isempty(P)==0
% which indicates this cell represents a nonempty component P(Z{d1,d2},d3,d4).
S=Maximal_subgraph_satisfying_conditions_1and2_or_1and2plus(stage,P,G);
if isempty(S)==0
% which indicates P(Z{d1,d2},d3,d4) has a nonempty subgraph S which
% satisfies Conditions 1 and 2 (or 2+ for Stage 2).
bstart=b; % This command records the grade of H's component at which the possible Stage 2 will start.
return
end
end
end
end
end
end
end
function I=Step_5(stage,S,G)
% For the recorded S of Step 4, by deleting a maximal subset of V(S), get
% a minimal-order subgraph I of S which contains a nonempty subgraph SS that
% satisfies Conditions 1 and 2 (or 2+ for Stage 2). Record this I for the
% next step.
v=1;
while v<=size(S,1)
% The vth vertex of S is to be checked whether it can be deleted, i.e.,
% whether S still has a nonempty subgraph which satisfies Conditions 1 and 2
% (or 2+ for Stage 2) after the vth vertex of S is deleted.
I=S;
I(v,:)=[];
I(:,v)=[];
SS=Maximal_subgraph_satisfying_conditions_1and2_or_1and2plus(stage,I,G);
if isempty(SS)==0
% which indicates that I still has a nonempty subgraph which satisfies
% Conditions 1 and 2 (or 2+ for Stage 2).
S=SS;
% If this command were S=I instead of S=SS, then the next round of function
% "Maximal_subgraph_satisfying_conditions_1and2_or_1and2plus" would delete
% again the edges of E(I)-E(SS).
% For any i which satisfies 1 <= i < v, the ith vertex of the last S remains
% not deletable and so it is still the ith vertex of the current S.
else % which indicates the vth vertex of S shall not be deleted.
v=v+1;
end
end
I=S;
end
function result=Step_6(I)
% This function checks whether I represents a clique.
if size(I,1)==1
result=1;
return
end
for i=1:size(I,1)-1 % Because of symmetry of matrix I, only the "upper half" of the matrix is checked.
for j=i+1: size(I,1)
if I(i,j)==0
result=0;
return;
end
end
end
result=1;
end
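% Example (added for illustration; not part of the original program): for a
% 3-vertex clique whose diagonal carries the vertex labels 5, 7 and 9,
% Step_6([5 1 1;1 7 1;1 1 9]) returns 1, while removing one edge, as in
% Step_6([5 1 0;1 7 1;0 1 9]), returns 0.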
function M=Maximal_subgraph_satisfying_conditions_1and2_or_1and2plus(stage,D,G)
% For the graph defined by adjacency matrix D, this function finds out
% its maximal subgraph which satisfies Conditions 1 and 2 (or 2+ for Stage 2).
if isempty(D)==1
M=D;
return
end
x=1;
while x==1
DD=A_round_of_edge_deleting_by_Condition_2or2plus(stage,D,G);
if DD==D
% which indicates all of the edges of the graph expressed by D satisfy
% Condition 2 (or 2+ for Stage 2).
break
end
% At this point, DD~=D, which indicates certain edges of the graph expressed
% by D were deleted in the last round of edge-deleting and D becomes DD.
% However, the graph expressed by DD may still have edges which do not
% satisfy Condition 2 (or 2+ for Stage 2), so the next round of edge-deleting
% is necessary until DD=D.
D=DD;
end
% At this point, all edges of the graph expressed by D satisfy Condition 2
% (or 2+ for Stage 2). Then it is obvious that for any vertex which has an
% incident edge, Condition 1 is satisfied. Thus the following part of this
% function is to delete the isolated vertices which do not satisfy Condition 1.
i=1;
while i<=size(D,1)
% The ith vertex of D will be checked whether it is an isolated vertex and
% whether it satisfies Condition 1.
A=D(i,:); % Thus A(i)=D(i,i) is the label of the ith vertex of D.
A(i)=0;
if A==zeros(size(A))
% which indicates the ith vertex of D has no incident edge, then we need to
% check whether the label of this vertex is also the label of a G's vertex
% cover of size 1 as follows.
GG=G;
GG(D(i,i),:)=0; % The label of the xth vertex of G is x.
GG(:,D(i,i))=0;
% D(i,i) is the label of the ith vertex of D. Suppose j=D(i,i).
% If j is the label of a G's vertex cover of size 1, then for any G(r,c)~=0,
% r=j or c=j. (Note that all of the diagonal numbers of matrix G are zeros.)
if GG==zeros(size(G))
% which indicates the label of the ith vertex of D is the label of a G's
% vertex cover of size 1, then this vertex satisfies Condition 1 and shall
% not be deleted.
i=i+1;
else % Then this vertex does not satisfy Condition 1 and shall be deleted.
D(i,:)=[];
D(:,i)=[];
end
else % which indicates the ith vertex of D has an incident edge and shall not be deleted.
i=i+1;
end
end
M=D;
end
function DD=A_round_of_edge_deleting_by_Condition_2or2plus(stage,D,G)
% For each round of edge-deleting, all edges which do not satisfy Condition 2
% (or 2+ for Stage 2) of the input graph expressed by D are deleted.
% However, the deletion may cause some more edges to no longer satisfy
% Condition 2 (or 2+ for Stage 2), and these edges may not be all deleted in
% this round of edge-deleting.
i=2;
while i<=size(D,1)
for j=1:i-1
% Because matrix D is symmetrical, only the part under its diagonal is checked.
if D(i,j)==1
dia=diag(D); % dia(x) is the label of the xth vertex of D.
Ne2=D(i,:).*D(j,:);
% ".*" is element-by-element multiplication, so Ne2(1,x) is non-zero only if
% both D(i,x) and D(j,x) are non-zero. Therefore, Ne2(1,x) is non-zero only
% if x=i, x=j, or the xth vertex is adjacent to both the ith and the jth
% vertices, so Ne2=N[ij] for Stage 1.
if stage==2 % then Condition 2+ shall be satisfied instead of Condition 2.
for k=1:size(D,1)
if k~=i && k~=j && Ne2(1,k)==1
% which indicates the kth vertex is adjacent to both the ith and the jth vertices.
Ne3=Ne2.*D(k,:);
% As explained for Ne2, Ne3(1,x) is non-zero only if x=i, x=j, x=k, or the
% xth vertex is adjacent to all of the ith, jth and kth vertices.
LNe3=dia(logical(Ne3)); % LNe3 is the label set of Ne3.
C=Does_the_label_set_cover_E_of_G(LNe3,G);
if C==0
% which indicates the 3-clique of vertices i, j and k does not satisfy the
% definition of N[uv] in Condition 2+, so vertex k shall not be considered
% to be an element of N[ij] for Stage 2.
Ne2(1,k)=0;
end
end
end
% At this point, Ne2=N[ij] for Stage 2.
end
LNe2=dia(logical(Ne2)); % LNe2 is the label set of Ne2=N[ij].
C=Does_the_label_set_cover_E_of_G(LNe2,G);
if C==0
% which indicates the edge corresponding to D(i,j) does not satisfy
% Condition 2 (or 2+ for Stage 2).
D(i,j)=0;
D(j,i)=0;
end
end
end
i=i+1;
end
DD=D;
end
function C=Does_the_label_set_cover_E_of_G(LN,G)
% This function checks whether LN is a label set of a vertex cover of G.
G(LN,:)=0;
G(:,LN)=0;
if G==zeros(size(G))
C=1;
else
C=0;
end
end
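% Example (added for illustration; not part of the original program): for the
% path graph on vertices 1-2-3-4, with adjacency matrix
% Gp=[0 1 0 0;1 0 1 0;0 1 0 1;0 0 1 0], the label set [2 3] is a vertex
% cover, so Does_the_label_set_cover_E_of_G([2 3],Gp) returns 1, whereas
% Does_the_label_set_cover_E_of_G(1,Gp) returns 0.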
arXiv:1711.01057v2 [] 13 Nov 2017
Open subgroups of the automorphism group of a
right-angled building
Tom De Medts
Ana C. Silva
November 15, 2017
Abstract
We study the group of type-preserving automorphisms of a right-angled building, in particular when the building is locally finite. Our
aim is to characterize the proper open subgroups as the finite index
closed subgroups of the stabilizers of proper residues.
One of the main tools is the new notion of firm elements in a right-angled Coxeter group, which are those elements for which the final
letter in each reduced representation is the same. We also introduce
the related notions of firmness for arbitrary elements of such a Coxeter
group and n-flexibility of chambers in a right-angled building. These
notions and their properties are used to determine the set of chambers
fixed by the fixator of a ball. Our main result is obtained by combining these facts with ideas by Pierre-Emmanuel Caprace and Timothée
Marquis in the context of Kac–Moody groups over finite fields, where
we had to replace the notion of root groups by a new notion of root
wing groups.
1 Introduction
A Coxeter group is right-angled if the entries of its Coxeter matrix are all
equal to 1, 2 or ∞ (see Definition 2.1 below for more details). A right-angled
building is a building for which the underlying Coxeter group is right-angled.
The most prominent examples of right-angled buildings are trees. To some
extent, the combinatorics of right-angled Coxeter groups and right-angled
buildings behave like the combinatorics of trees, but in a more complicated
and therefore in many aspects more interesting fashion.
Right-angled buildings have received attention from very different perspectives. One of the earlier motivations for their study was the connection
with lattices; see, for instance, [RR06, Tho06, TW11, KT12, CT13]. On the
other hand, the automorphism groups of locally finite right-angled buildings are totally disconnected locally compact (t.d.l.c.) groups, and their
full automorphism group was shown to be an abstractly simple group by
Pierre-Emmanuel Caprace in [Cap14], making these groups valuable in the
study of t.d.l.c. groups. Caprace’s work also highlighted important combinatorial aspects of right-angled buildings; in particular, his study of parallel residues and his notion of wings (see Definition 3.6 below) are fundamental tools. From this point of view, we have, in a joint work with
Koen Struyve, introduced and investigated universal groups for right-angled
buildings; see [DMSS16]. More recently, Andreas Baudisch, Amador Martin-Pizarro and Martin Ziegler have studied right-angled buildings from a model-theoretic point of view; see [BMPZ17].
In this paper, we continue the study of right-angled buildings in a combinatorial and topological fashion. In particular, we introduce some new
tools in right-angled Coxeter groups and we study the (full) automorphism
group of right-angled buildings. Our main goal is to characterize the proper
open subgroups of the automorphism group of a locally finite semi-regular
right-angled building as the closed finite index subgroups of the stabilizer of
a proper residue; see Theorem 4.29 below.
The first tool we introduce is the notion of firm elements in a right-angled Coxeter group: these are the elements with the property that every
possible reduced representation of that element ends with the same letter
(see Definition 2.10 below), i.e., the last letter cannot be moved away by
elementary operations. If an element of the Coxeter group is not firm, then
we define its firmness as the maximal length of a firm prefix.
This notion will be used to define the concepts of firm chambers in a
right-angled building and of n-flexibility of chambers with respect to another
chamber; this then leads to the notion of the n-flex of a given chamber. See
Definition 3.9 below.
A second new tool is the concept of a root wing group, which we define in
Definition 4.6. Strictly speaking, this is not a new definition since the root
wing groups are defined as wing fixators, and as such they already appear
in the work of Caprace [Cap14]. However, we associate such a group to
a root in an apartment of the building, and we explore the fact that they
behave very much like root subgroups in groups of a more algebraic nature,
such as automorphism groups of Moufang spherical buildings or Kac–Moody
groups.
Outline of the paper. In Section 2, we provide the necessary tools for
right-angled Coxeter groups. In Section 2.1, we recall the notion of a poset
≺w that we can associate to any word w in the generators, introduced in
[DMSS16]. Section 2.2 introduces the concepts of firm elements and the
firmness of elements in a right-angled Coxeter group. Our main result in
that section is the fact that long elements cannot have a very low firmness;
see Theorem 2.18.
Section 3 collects combinatorial facts about right-angled buildings. After recalling the important notions of parallel residues and wings, due to
Caprace [Cap14], in Section 3.1, we proceed in Section 3.2 to introduce the
notion of chambers that are n-flexible with respect to another chamber and
the notion of the square closure of a set of chambers (which is based on
results from [DMSS16]); see Definitions 3.9 and 3.12. Our main result in
Section 3 is Theorem 3.13, showing that the square closure of a ball of radius
n around a chamber c0 is precisely the set of chambers that are n-flexible
with respect to c0 .
In Section 4, we study the automorphism group of a semi-regular right-angled building. We begin with a short Section 4.1 that uses the results of
the previous sections to show that the set of chambers fixed by a ball fixator
is bounded; see Theorem 4.4. In Section 4.2, we associate a root wing group
Uα to each root (Definition 4.6), we show that Uα acts transitively on the
set of apartments through α (Proposition 4.7) and we adapt some facts
from [CM13] to the setting of root wing groups.
We then continue towards our characterization of the open subgroups
of the full automorphism group of a semi-regular locally finite right-angled
building. Our final result is Theorem 4.29 showing that every proper open
subgroup is a finite index subgroup of the stabilizer of a proper residue.
We distinguish between the case where the open subgroup is compact (Section 4.3) and non-compact (Section 4.4). In the compact case, we provide
a characterization that remains valid for right-angled buildings that are not
locally finite, and we use our knowledge about the fixed-point set of ball
fixators; see Proposition 4.15. In the non-compact case, we have to restrict
to locally finite buildings. We follow, to a very large extent, the strategy
taken by Pierre-Emmanuel Caprace and Timothée Marquis in [CM13] in
their study of open subgroups of Kac–Moody groups over finite fields; in
particular, we show that an open subgroup of Aut(∆) contains sufficiently
many root wing groups, and much of the subtleties of the proof go into
determining precisely the types of the root groups contained in the open
subgroup, which will then, in turn, pin down the residue, the stabilizer of
which contains the given open subgroup as a finite index subgroup.
In the final Section 5, we mention two applications of our main theorem. The first is a rather immediate corollary, namely the fact that the
automorphism group of a semi-regular locally finite right-angled building is
a Noetherian group; see Proposition 5.3. The second application shows that
every open subgroup of the automorphism group is the reduced envelope of
a cyclic subgroup; see Proposition 5.6.
Acknowledgments. This paper would never have existed without the
help of Pierre-Emmanuel Caprace. Not only did he suggest the study of
open subgroups of the automorphism group of right-angled buildings to us;
we also benefited a lot from discussions with him.
We also thank the Research Foundation in Flanders (F.W.O.-Vlaanderen)
for their support through the project “Automorphism groups of locally finite
trees” (G011012N).
2 Right-angled Coxeter groups
We begin by recalling some basic definitions and facts about Coxeter groups.
Definition 2.1. (i) A Coxeter group is a group W with generating set
S = {s1 , . . . , sn } and with presentation
W = ⟨s ∈ S | (st)^{mst} = 1⟩
where mss = 1 for all s ∈ S and mst = mts ≥ 2 for all s ≠ t in S.
It is allowed that mst = ∞, in which case the relation involving st
is omitted. The pair (W, S) is called a Coxeter system of rank n.
The matrix M = (msi sj ) is called the Coxeter matrix of (W, S). The
Coxeter matrix is often conveniently encoded by its Coxeter diagram,
which is a labeled graph with vertex set S where two vertices are joined
by an edge labeled mst if and only if mst ≥ 3.
(ii) A Coxeter system (W, S) is called right-angled if all entries of the Coxeter matrix are 1, 2 or ∞. In this case, we call the Coxeter diagram Σ
of W a right-angled Coxeter diagram; all its edges have label ∞.
Definition 2.2. Let (W, S) be a Coxeter system and let J ⊆ S.
(i) We define WJ := ⟨s | s ∈ J⟩ ≤ W and we call this a standard
parabolic subgroup of W . It is itself a Coxeter group, with Coxeter
system (WJ , J). Any conjugate of a standard parabolic subgroup WJ
is called a parabolic subgroup of W .
(ii) The subset J ⊆ S is called a spherical subset if WJ is finite. When
(W, S) is right-angled, J is spherical if and only if |st| ≤ 2 for all
s, t ∈ J.
(iii) The subset J ⊆ S is called essential if each irreducible component
of J is non-spherical. In general, the union J0 of all irreducible nonspherical components of J is called the essential component of J.
If P is a parabolic subgroup of W conjugate to some WJ , then the
essential component P0 of P is the corresponding conjugate of WJ0 ,
where J0 is the essential component of J. Observe that P0 has finite
index in P .
(iv) Let E ⊆ W . We define the parabolic closure of E, denoted by Pc(E),
as the smallest parabolic subgroup of W containing E.
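For instance (a small illustration, not taken from the sources cited): let S = {s, t, u} with mst = 2 and msu = mtu = ∞. Then J = {s, t} is spherical, since WJ is the finite group (Z/2Z) × (Z/2Z), whereas {s, u} and {t, u} generate infinite dihedral groups and are not spherical. The irreducible components of {s, t} are the singletons {s} and {t}, both spherical, so the essential component of {s, t} is empty; the whole of S, on the other hand, is irreducible and non-spherical, hence essential.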
Lemma 2.3 ([CM13, Lemma 2.4]). Let H1 ≤ H2 be subgroups of W . If H1
has finite index in H2 , then Pc(H1 ) has finite index in Pc(H2 ).
2.1 A poset of reduced words
Let Σ = (W, S) be a right-angled Coxeter system and let MS be the free
monoid over S, the elements of which we refer to as words. Notice that there
is an obvious map MS → W , denoted by w ↦ w̄; if w ∈ MS is a word, then
its image w̄ under this map is called the element represented by w, and the
word w is called a representation of w̄. For w1 , w2 ∈ MS , we write w1 ∼ w2
when w̄1 = w̄2 . By some slight abuse of notation, we also say that w2 is a
representation of w1 (rather than a representation of w̄1 ).
Definition 2.4. A Σ-elementary operation on a word w ∈ MS is an operation of one of the following two types:
(1) Delete a subword of the form ss, with s ∈ S.
(2) Replace a subword st by ts if mst = 2.
A word w ∈ MS is called reduced (with respect to Σ) if it cannot be shortened
by a sequence of Σ-elementary operations.
Clearly, applying elementary operations on a word w does not alter its
value in W . Conversely, if w1 ∼ w2 for two words w1 , w2 ∈ MS , then w1
can be transformed into w2 by a sequence of Σ-elementary operations. The
number of letters in a reduced representation of w ∈ W is called the length
of w and is denoted by l(w). Tits proved in [Tit69] (for arbitrary Coxeter
systems) that two reduced words represent the same element of W if and
only if one can be obtained from the other by a sequence of elementary
operations of type (2) (or rather its generalization to all values for mst ).
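As a simple illustration (our example, with S = {s, t, u}, mst = 2 and msu = mtu = ∞): the word sts is not reduced, since an operation of type (2) rewrites it as tss and an operation of type (1) then shortens it to t. The reduced words stu and tsu represent the same element and are related by a single operation of type (2), in accordance with Tits' result, while sus admits no elementary operation at all and is the unique reduced representation of its element.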
Definition 2.5. Let w = s1 s2 · · · sℓ ∈ MS . If σ ∈ Sym(ℓ), then we let
σ.w be the word obtained by permuting the letters in w according to the
permutation σ, i.e.,
σ.w := sσ(1) sσ(2) · · · sσ(ℓ) .
In particular, if w′ is obtained from w by applying an elementary operation of
type (2) replacing si si+1 by si+1 si , then σ.w = w′ for σ = (i i+ 1) ∈ Sym(ℓ).
In this case, si and si+1 commute and we call σ = (i i + 1) a w-elementary
transposition.
In this way, we can associate an elementary transposition to each Σ-elementary operation of type (2). It follows that two reduced words w and w′
represent the same element of W if and only if
w′ = (σn · · · σ1 ).w, where each σi is a
(σi−1 · · · σ1 ).w-elementary transposition,
i.e., if w′ is obtained from w by a sequence of elementary transpositions.
Definition 2.6. If w ∈ MS is a reduced word of length ℓ, then we define
Rep(w) := {σ ∈ Sym(ℓ) | σ = σn · · · σ1 , where each σi is a
(σi−1 · · · σ1 ).w-elementary transposition}.
In other words, the set Rep(w) consists of the permutations of ℓ letters which
give rise to reduced representations of w.
We now define a partial order ≺w on the letters of a reduced word w in
MS with respect to Σ.
Definition 2.7 ([DMSS16, Definition 2.6]). Let w = s1 · · · sℓ be a reduced
word of length ℓ in MS and let Iw = {1, . . . , ℓ}. We define a partial order
“≺w ” on Iw as follows:
i ≺w j ⇐⇒ σ(i) > σ(j) for all σ ∈ Rep(w).
Note that i ≺w j implies that i > j. As a mnemonic, one can regard
j ≻w i as “j → i”, i.e., the generator si always comes after the generator sj
regardless of the reduced representation of w.
We point out a couple of basic but enlightening consequences of the
definition of this partial order.
Observation 2.8. Let w = s1 · · · si · · · sj · · · sℓ be a reduced word in MS
with respect to a right-angled Coxeter diagram Σ.
(i) If |si sj | = ∞, then i ≻w j.
The converse is not true. Indeed, suppose there is i < k < j such that
|si sk | = ∞ and |sk sj | = ∞. Then i ≻w j, independently of whether
|si sj | = 2 or ∞.
(ii) If i ⊁w j, then by (i), it follows that |si sj | = 2 and, moreover, for each
k ∈ {i + 1, . . . , j − 1}, either |si sk | = 2 or |sk sj | = 2 (or both).
(iii) On the other hand, if sj and sj+1 are consecutive letters in w, then
|sj sj+1 | = ∞ if and only if j ≻w j + 1.
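To illustrate (continuing the example above): for the reduced word w = s1 s2 s3 = sut, consecutive letters do not commute, so 1 ≻w 2 and 2 ≻w 3 by (iii), and 1 ≻w 3 by the second part of (i); accordingly, sut is the only reduced representation of its element. For w′ = stu one has 1 ≻w′ 3 and 2 ≻w′ 3, but 1 and 2 are incomparable, reflecting the two reduced representations stu and tsu.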
Lemma 2.9 ([DMSS16, Lemma 2.8]). Let w = w1 · si · · · sj · w2 ∈ MS be a
reduced word. If i ⊁w j, then there exist two reduced representations of w
of the form
w1 · · · si sj · · · w2 and w1 · · · sj si · · · w2 ,
i.e., the positions of si and sj can be exchanged using only elementary operations on the generators {si , si+1 , . . . , sj−1 , sj }, without changing the prefix
w1 and the suffix w2 .
2.2 Firm elements of right-angled Coxeter groups
In this section we define firm elements in a right-angled Coxeter group W
and we introduce the concept of firmness to measure “how firm” an arbitrary
element of W is. This concept will be used over and over throughout the
paper. See, in particular, Definition 3.9, Theorem 3.13, Theorem 4.4 and
Proposition 4.7. Our main result in this section is Theorem 2.18, showing
that the firmness of elements cannot drop below a certain value once they
become sufficiently long.
Definition 2.10. Let w ∈ W be represented by some reduced word w =
s1 · · · sℓ ∈ MS .
(i) We say that w is firm if i ≻w ℓ for all i ∈ {1, . . . , ℓ−1}. In other words,
w is firm if its final letter sℓ is in the final position in each possible
reduced representation of w. Equivalently, w is firm if and only if there
is a unique r ∈ S such that l(wr) < l(w).
(ii) Let F # (w) be the largest k such that w can be represented by a reduced
word in the form
s1 · · · sk tk+1 · · · tℓ , with s1 · · · sk firm.
We call F # (w) the firmness of w. We will also use the notation
F # (w) := F # (w̄) for a word w ∈ MS representing the element w̄.
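To illustrate (our example, with S = {s, t, u}, mst = 2 and msu = mtu = ∞): the element represented by stu is firm, since its only reduced representations are stu and tsu and both end in u; equivalently, u is the unique generator r with l(stu · r) < l(stu). The element represented by ust is not firm, because ust ∼ uts, and its firmness is 2, witnessed by the firm prefix us (or, in the other representation, ut).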
Lemma 2.11. Let w = s1 · · · sk tk+1 · · · tℓ be a reduced word such that s1 · · · sk
is firm and F # (w) = k. Then
(i) |sk ti | = 2 for all i ∈ {k + 1, . . . , ℓ}.
(ii) i ≻w k for all i ∈ {1, . . . , k − 1}.
(iii) Let r ∈ S. If l(wr) > l(w), then F # (wr) ≥ F # (w).
Proof. (i) Assume the contrary and let j be minimal such that |sk tj | = ∞.
Using elementary operations to swap tj to the left in w as much as
possible, we rewrite
w ∼ s1 · · · sk t′1 · · · t′p tj · · ·
as a word with s1 · · · sk t′1 · · · t′p tj firm, which is a contradiction to the
maximality of k.
(ii) The fact that the prefix p = s1 · · · sk is firm tells us that i ≻p k for all
i ∈ {1, . . . , k − 1}. By Lemma 2.9, this implies that also i ≻w k for all
i ∈ {1, . . . , k − 1}.
(iii) Since l(wr) > l(w), firm prefixes of w are also firm prefixes of wr,
hence the result.
The following definition will be a useful tool to identify which letters of
the word appear in a firm subword.
Definition 2.12. Let w = s1 · · · sℓ ∈ MS be a reduced word and consider
the poset (Iw , ≺w ) as in Definition 2.7. For any i ∈ {1, . . . , ℓ}, we define
Iw (i) = {j ∈ {1, . . . , ℓ} | j ≻w i}.
In words, Iw (i) is the set of indices j such that sj comes at the left of si in
any reduced representation of the element w ∈ W .
Observation 2.13. Let w = s1 · · · sℓ ∈ MS be a reduced word.
(i) Let i ∈ {1, . . . , ℓ} and write Iw (i) = {j1 , . . . , jk } with jp < jp+1 for
all p. Then we can perform elementary operations on w so that
w ∼ sj1 · · · sjk si t1 · · · tq
and the word sj1 · · · sjk si is firm.
In particular, if Iw (i) = ∅, then we can rewrite w as si w1 .
(ii) If i ≻w j, then Iw (i) ⊊ Iw (j).
(iii) It follows from (i) that F # (w) = max_{i∈{1,...,ℓ}} |Iw (i)| + 1.
Remark 2.14. If the Coxeter system (W, S) is spherical, then F # (w) = 1
for all w ∈ W . Indeed, as each pair of distinct generators commute, we
always have Iw (i) = ∅.
The next definition will allow us to deal with possibly infinite words.
Definition 2.15. (i) A (finite or infinite) sequence (r1 , r2 , . . . ) of letters in S will be called a reduced increasing sequence if l(r1 · · · ri ) <
l(r1 · · · ri ri+1 ) for all i ≥ 1.
(ii) Let w ∈ MS . A sequence (r1 , r2 , . . . ) of letters in S will be called a
reduced increasing w-sequence if l(wr1 · · · ri ) < l(wr1 · · · ri ri+1 ) for all
i ≥ 0.
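For instance (our example again): the sequence (s, u, s, u, . . . ) is a reduced increasing sequence, since l(s), l(su), l(sus), . . . equal 1, 2, 3, . . . , whereas (s, t, s) is not one, because l(sts) = l(t) = 1 < l(st) = 2.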
Lemma 2.16. Let α = (r1 , r2 , . . . ) be a reduced increasing sequence in S.
Assume that each subsequence of α of the form
(ra1 , ra2 , . . . ) with |rai rai+1 | = ∞ for all i
has ≤ b elements. Then there is some positive integer f (b) depending only
on b and on the Coxeter system (W, S), such that α has ≤ f (b) elements.
Proof. We will prove this result by induction on |S|; the case |S| = 1 is
trivial.
Suppose now that |S| ≥ 2. If (W, S) is a spherical Coxeter group, then
the result is obvious since the length of any reduced increasing sequence is
bounded by the length of the longest element of W . We may thus assume
that there is some s ∈ S that does not commute with some other generator
in S \ {s}.
Since the sequence α is a reduced increasing sequence, we know that
between any two s’s, there must be some ti such that |sti | = ∞. Consider
the subsequence of α given by
(s, t1 , s, t2 , . . . ).
This subsequence has ≤ b elements by assumption, and between any two
generators s in the original sequence α, we only use letters in S \ {s}. The
result now follows from the induction hypothesis.
Lemma 2.17. Let w ∈ W . Then there is some k(w) ∈ N, depending only
on w, such that for every reduced increasing w-sequence (r1 , r2 , . . . ) in S,
we have
F # (wr1 · · · rk(w) ) > F # (w).
Proof. Assume that there is a reduced increasing w-sequence α = (r1 , r2 , . . . )
in S such that
F # (wr1 · · · ri ) = F # (w) for all i.    (∗)
Let w0 = w, wi = wi−1 ri and denote Ii = Iwi (i) for all i. Let b = F # (w).
By assumption (∗) and Observation 2.13(iii), we have |Ii | ≤ b − 1 for all
i. Moreover, if i < j with |ri rj | = ∞, then Ii ⊊ Ij by Observations 2.8(i)
and 2.13(ii); it follows that each subsequence of α of the form
(ra1 , ra2 , . . . ) with |rai rai+1 | = ∞ for all i
has at most b elements. By Lemma 2.16, this implies that the sequence
α has at most f (b) elements. We conclude that every reduced increasing
w-sequence (r1 , r2 , . . . , rk(w) ) in S with k(w) := f (F # (w)) + 1 must have
strictly increasing firmness.
Theorem 2.18. Let (W, S) be a right-angled Coxeter system. For all n ≥ 0,
there is some d(n) ∈ N depending only on n, such that F # (w) > n for all
w ∈ W with l(w) > d(n).
Proof. This follows by induction on n from Lemma 2.17 since there are only
finitely many elements in W of any given length.
3 Right-angled buildings
We will start by recalling the procedure of “closing squares” in right-angled
buildings from [DMSS16] and we define the square closure of a set of chambers. Our goal in this section is to describe the square closure of a ball in the
building and to show that this is a bounded set, i.e., it has finite diameter;
see Theorem 3.13.
3.1 Preliminaries
Throughout this section, let (W, S) be a right-angled Coxeter system with
Coxeter diagram Σ and let ∆ be a right-angled building of type (W, S). We
regard buildings as chamber systems, following the notation in [Wei09].
Definition 3.1. Let δ : ∆×∆ → W be the Weyl distance of the building ∆.
The gallery distance between the chambers c1 and c2 is defined as
dW (c1 , c2 ) := l(δ(c1 , c2 )),
i.e., the length of a minimal gallery between the chambers c1 and c2 .
For a fixed chamber c0 ∈ Ch(∆) we define the spheres at a fixed gallery
distance from c0 as
S(c0 , n) := {c ∈ Ch(∆) | dW (c0 , c) = n}
and the balls as
B(c0 , n) := {c ∈ Ch(∆) | dW (c0 , c) ≤ n}.
Definition 3.2. (i) Let c be a chamber in ∆ and R be a residue in ∆.
The projection of c on R is the unique chamber in R that is closest to
c and it is denoted by projR (c).
(ii) If R1 and R2 are two residues, then the set of chambers
projR1 (R2 ) := {projR1 (c) | c ∈ Ch(R2 )}
is again a residue and the rank of projR1 (R2 ) is bounded above by the
ranks of both R1 and R2 ; see [Cap14, Section 2].
(iii) The residues R1 and R2 are called parallel if projR1 (R2 ) = R1 and
projR2 (R1 ) = R2 .
In particular, if P1 and P2 are two parallel panels, then the chamber sets
of P1 and P2 are mutually in bijection under the respective projection maps
(see again [Cap14, Section 2]).
Definition 3.3. Let J ⊆ S. We define the set
J ⊥ = {t ∈ S \ J | ts = st for all s ∈ J}.
If J = {s}, then we write the set J ⊥ as s⊥ .
Proposition 3.4 ([Cap14, Proposition 2.8]). Let ∆ be a right-angled building of type (W, S).
(i) Any two parallel residues have the same type.
(ii) Let J ⊆ S. Given a residue R of type J, a residue R′ is parallel to R
if and only if R′ is of type J, and R and R′ are both contained in a
common residue of type J ∪ J ⊥ .
Proposition 3.5 ([Cap14, Corollary 2.9]). Let ∆ be a right-angled building.
Parallelism of residues of ∆ is an equivalence relation.
Another very important notion in right-angled buildings is that of wings,
introduced in [Cap14, Section 3]. For our purposes, it will be sufficient to
consider wings with respect to panels.
Definition 3.6. Let c ∈ Ch(∆) and s ∈ S. Denote the unique s-panel
containing c by Ps,c . Then the set of chambers
Xs (c) = {x ∈ Ch(∆) | projPs,c (x) = c}
is called the s-wing of c.
Notice that if P is any s-panel, then the set of s-wings of each of the
different chambers of P forms a partition of Ch(∆) into equally many combinatorially convex subsets (see [Cap14, Proposition 3.2]).
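For intuition (an informal illustration on our part): when ∆ is a tree, regarded as a right-angled building of type (W, S) with S = {s, t} and mst = ∞, the chambers are the edges and the s-panels correspond to the vertices of one colour class. The s-wing Xs (c) then consists of c together with all edges lying in the branch of c at the s-coloured endpoint of c, i.e., a half-tree.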
3.2 Sets of chambers closed under squares
We start by presenting two results proved in [DMSS16, Lemmas 2.9 and 2.10]
that can be used in right-angled buildings to modify minimal galleries using
the commutation relations of the Coxeter group. We will refer to these
results as the “Closing Squares Lemmas” (see also Figure 1 below). We
use the notation c1 ∼s c2 to denote that two chambers c1 and c2 of ∆ are
s-adjacent, i.e., are contained in a common s-panel of ∆.
Lemma 3.7 (Closing Squares 1). Let c0 be a fixed chamber in ∆. Let
c1 , c2 ∈ S(c0 , n) and c3 ∈ S(c0 , n + 1) such that c1 ∼t c3 and c2 ∼s c3
for some s ≠ t. Then |st| = 2 in Σ and there exists c4 ∈ S(c0 , n − 1) such
that c1 ∼s c4 and c2 ∼t c4 .
Lemma 3.8 (Closing Squares 2). Let c0 be a fixed chamber in ∆. Let
c1 , c2 ∈ S(c0 , n) and c3 ∈ S(c0 , n − 1) such that c1 ∼s c2 and c2 ∼t c3
for some s ≠ t. Then |st| = 2 in Σ and there exists c4 ∈ S(c0 , n − 1) such
that c1 ∼t c4 and c3 ∼s c4 .
[Figure 1: Closing Squares Lemmas. Panel (a) illustrates the configuration of Lemma 3.7; panel (b) illustrates the configuration of Lemma 3.8.]
Definition 3.9. Let c0 be a fixed chamber of ∆ and let n ∈ N.
(i) Let c ∈ Ch(∆). Then we call c firm with respect to c0 if and only if
δ(c0 , c) ∈ W is firm (as in Definition 2.10(i)).
(ii) We will create a partition of the sphere S(c0 , n) by defining
A1 (n) = {c ∈ S(c0 , n) | c is firm},
A2 (n) = {c ∈ S(c0 , n) | c is not firm},
as in Figure 2. Notice that this is equivalent to the definition given in
[DMSS16, Definition 4.3].
(iii) Let c ∈ S(c0 , k) for some k > n. We say that c is n-flexible with respect
to c0 if for each minimal gallery γ = (c0 , c1 , . . . , cn+1 , . . . , ck = c) from
c0 to c, none of the chambers cn+1 , . . . , ck is firm. By convention, all
chambers of B(c0 , n) are also n-flexible with respect to c0 .
Observe that a chamber c is n-flexible with respect to c0 if and only
if F # (δ(c0 , c)) ≤ n. In particular, if c is n-flexible, then so is any
chamber on any minimal gallery between c0 and c.
(iv) We define the n-flex of c0 , denoted by Flex(c0 , n), to be the set of all
chambers of ∆ that are n-flexible with respect to c0 .
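Continuing the running example (S = {s, t, u} with mst = 2 and msu = mtu = ∞): a chamber c with δ(c0 , c) = ust has F # (δ(c0 , c)) = 2, so c is 2-flexible but not 1-flexible with respect to c0 , whereas a chamber at Weyl distance sut, a firm element of length 3, is n-flexible only for n ≥ 3.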
[Figure 2: Partition of S(c0 , n). Panel (a): ci firm, i.e., for all t ≠ s, l(δ(c0 , ci )t) > l(δ(c0 , ci )). Panel (b): ci not firm, i.e., for some t ≠ s, l(δ(c0 , ci )t) < l(δ(c0 , ci )).]
We also record the following result, which we rephrased in terms of firm
chambers; its Corollary 3.11 will be used several times in Section 4.
Lemma 3.10 ([DMSS16, Lemma 2.15]). Let c0 be a fixed chamber of ∆
and let s ∈ S. Let d ∈ S(c0 , n) and e ∈ B(c0 , n + 1) \ Ch(Ps,d ). If c :=
projPs,d (e) ∈ S(c0 , n + 1), then c is not firm with respect to c0 .
Corollary 3.11. Let c0 ∈ Ch(∆) and c ∈ S(c0 , n + 1) such that c is firm
with respect to c0 . Let d be the unique chamber of S(c0 , n) adjacent to c and
let s = δ(d, c) ∈ S. Then B(c0 , n) ⊂ Xs (d).
Proof. Let e ∈ B(c0 , n). If e = d, then of course e ∈ Xs (d), so assume e 6= d;
then e ∈ B(c0 , n + 1) \ Ch(Ps,d ). Notice that all chambers of Ps,d \ {d} have
the same Weyl distance from c0 as c and hence are firm. By Lemma 3.10, this
implies that the projection of e on Ps,d must be equal to d, so by definition
of the s-wing Xs (d), we get e ∈ Xs (d).
We now come to the concept of the square closure of a set of chambers
of ∆.
Definition 3.12. (i) We say that a subset T ⊆ W is closed under squares
if the following holds:
If wsi and wsj are contained in T for some w ∈ T with
|si sj | = 2, si ≠ sj and l(wsi ) = l(wsj ) = l(w) + 1, then also
wsi sj = wsj si is an element of T .
(ii) Let c0 be a fixed chamber of ∆. A set of chambers C ⊆ Ch(∆) is
closed under squares with respect to c0 if for each n ∈ N, the following
holds (see Figure 1a):
If c1 , c2 ∈ C ∩ S(c0 , n) and c4 ∈ C ∩ S(c0 , n − 1) such that
c4 ∼si c1 and c4 ∼sj c2 for some |si sj | = 2 with si ≠ sj , then
the unique chamber c3 ∈ S(c0 , n + 1) such that c3 ∼sj c1 and
c3 ∼si c2 is also in C .
In particular, if C is closed under squares with respect to c0 , then the
set of Weyl distances {δ(c0 , c) | c ∈ C } ⊆ W is closed under squares.
(iii) Let c0 ∈ Ch(∆) and let C ⊆ Ch(∆). We define the square closure of C
with respect to c0 to be the smallest subset of Ch(∆) containing C
and closed under squares with respect to c0 .
Theorem 3.13. Let c0 ∈ Ch(∆) and let n ∈ N. The square closure of
B(c0 , n) with respect to c0 is Flex(c0 , n). Moreover, the set Flex(c0 , n) is
bounded.
Proof. We will first show that Flex(c0 , n) is indeed closed under squares. Let
c4 be a chamber in Flex(c0 , n) at Weyl distance w from c0 and let c1 and c2
be chambers in Flex(c0 , n) adjacent to c4 , at Weyl distance wsi and wsj from
c0 , respectively, such that |si sj | = 2 and l(wsi ) = l(wsj ) = l(w) + 1. Let c3
be the unique chamber at Weyl distance wsi sj from c0 that is sj -adjacent
to c1 and si -adjacent to c2 .
[Figure 3: Proof of Theorem 3.13. The figure shows a minimal gallery c0 = v0 , . . . , vn , vn+1 , . . . , vk−1 , vk = c3 together with the chambers c1 , c2 , d1 and d2 used in the closing-squares argument.]
Our aim is to show that also c3 is an element of Flex(c0 , n). If l(wsi sj ) ≤ n,
then this is obvious, so we may assume that l(wsi sj ) > n.
Let γ = (c0 = v0 , . . . , vn+1 , . . . , vk = c3 ) be an arbitrary minimal gallery
between c0 and c3 , as in Figure 3 (so k = l(w) + 2 > n). We have to show
that none of the chambers vn+1 , . . . , vk is firm with respect to c0 . This is
clear for vk = c3 .
If k = n + 1, then there is nothing left to show, so assume k ≥ n + 2. If
vk−1 ∈ {c1 , c2 }, then vk−1 is n-flexible by assumption, and since k − 1 > n
it is not firm. (In fact, this shows immediately that in this case, none of the
chambers vn+1 , . . . , vk−1 is firm). So assume that vk−1 is distinct from c1
and c2 ; then vk−1 is sk -adjacent to c3 for some sk different from si and sj .
Then by closing squares (Lemma 3.7), we have |sj sk | = 2 and there is a
chamber d1 ∈ S(c0 , l(w)) such that d1 ∼sj vk−1 and d1 ∼sk c1 . Similarly, there
is a chamber d2 ∈ S(c0 , l(w)) such that d2 ∼si vk−1 and d2 ∼sk c2 . Hence vk−1
is not firm with respect to c0 .
Continuing this argument inductively (see Figure 3), we conclude that
none of the chambers vn+1 , . . . , vk is firm with respect to c0 . Hence c3 is
n-flexible; we conclude that Flex(c0 , n) is closed under squares with respect
to c0 .
Conversely, let C be a set of chambers closed under squares that contains
B(c0 , n); we have to prove that Flex(c0 , n) ⊆ C . So let c ∈ Flex(c0 , n) be
arbitrary; we will show by induction on k := dW (c0 , c) that c ∈ C . This is
obvious for k ≤ n, so assume k > n. Then c is not firm, hence there exist
c1 , c2 ∈ S(c0 , k − 1) such that c1 ∼s1 c and c2 ∼s2 c for some s1 ≠ s2 ∈ S. By
Lemma 3.7 we have |s1 s2 | = 2 and there is d ∈ S(c0 , k − 2) such that d ∼s2 c1
and d ∼s1 c2 .
Since c is n-flexible and c1 , c2 and d all lie on some minimal gallery
between c0 and c, it follows that also c1 , c2 and d are n-flexible. By the
induction hypothesis, all three elements are contained in C . Since C is
assumed to be closed under squares, however, we immediately deduce that
also c ∈ C .
We conclude that Flex(c0 , n) is the square closure of B(c0 , n) with respect
to c0 .
We finally show that Flex(c0 , n) is a bounded set. Recall that a chamber c
is contained in Flex(c0 , n) if and only if F # (δ(c0 , c)) ≤ n. By Theorem 2.18,
there is a constant d(n) such that F # (w) > n for all w ∈ W with l(w) >
d(n). This shows that Flex(c0 , n) ⊆ B(c0 , d(n)) is indeed bounded.
4 The automorphism group of a right-angled building
In this section, we study the group Aut(∆) of type-preserving automorphisms of a thick semi-regular right-angled building ∆. We will first study
the action of a ball fixator and introduce root wing groups. Next, we will
characterize the compact open subgroups of Aut(∆). Finally, when the
building is locally finite, we will show that any proper open subgroup of
Aut(∆) is a finite index subgroup of the stabilizer of a proper residue; see
Theorem 4.29.
Definition 4.1. Let ∆ be a right-angled building of type (W, S). Then ∆ is
called semi-regular if for each s, all s-panels of ∆ have the same number qs
of chambers. In this case, the building is said to have prescribed thickness
(qs )s∈S in its panels.
By [HP03, Proposition 1.2], there is a unique right-angled building of
type (W, S) of prescribed thickness (qs )s∈S for any choice of cardinal numbers qs ≥ 1.
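For example (a standard illustration, not spelled out in the text): when S = {s, t} with mst = ∞, the semi-regular right-angled building of prescribed thickness (qs , qt ) is the (qs , qt )-biregular tree, with the edges as chambers and the two colour classes of vertices giving the s-panels and t-panels.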
Theorem 4.2 ([KT12, Theorem B], [Cap14, Theorem 1.1]). Let ∆ be a
thick semi-regular building of right-angled type (W, S). Assume that (W, S)
is irreducible and non-spherical. Then the group Aut(∆) of type-preserving
automorphisms of ∆ is abstractly simple and acts strongly transitively on ∆.
The strong transitivity was first shown by Angela Kubena and Anne
Thomas [KT12] and was reproved by Pierre-Emmanuel Caprace in the
same paper where he proved the simplicity [Cap14]. In our proof of Proposition 4.7 below, we will adapt Caprace’s proof of the strong transitivity to
a more specific setting.
The following extension result is very powerful and will be used in the
proof of Theorem 4.4 below.
Proposition 4.3 ([Cap14, Proposition 4.2]). Let ∆ be a semi-regular right-angled building. Let s ∈ S and P be an s-panel. Given any permutation
θ ∈ Sym(Ch(P)), there is some g ∈ Aut(∆) stabilizing P satisfying the
following two conditions:
(a) g|Ch(P) = θ;
(b) g fixes all chambers of ∆ whose projection on P is fixed by θ.
4.1 The action of the fixator of a ball in ∆
In this section we study the action of the fixator K in Aut(∆) of a ball
B(c0 , n) of radius n around a chamber c0 . Our goal will be to prove that
the fixed point set ∆K coincides with the square closure of the ball B(c0 , n)
with respect to c0 , which is Flex(c0 , n), and which we know is bounded by
Theorem 3.13.
Theorem 4.4. Let ∆ be a thick semi-regular right-angled building. Let c0
be a fixed chamber of ∆ and let n ∈ N. Consider the pointwise stabilizer
K = FixAut(∆) (B(c0 , n)) in Aut(∆) of the ball B(c0 , n).
Then the fixed-point set ∆K is equal to the bounded set Flex(c0 , n).
Proof. Recall from Theorem 3.13 that Flex(c0 , n) is precisely the square closure of B(c0 , n) with respect to c0 . First, notice that the fixed point set of
any automorphism fixing c0 is square closed with respect to c0 because the
chamber “closing the square” is unique (see Definition 3.12(ii)). It immediately follows that Flex(c0 , n) ⊆ ∆K .
We will now show that if c is a chamber not in Flex(c0 , n), then there
exists a g ∈ K not fixing c. Since c is not n-flexible, there exists a chamber
d on some minimal gallery between c0 and c with k := dW (c0 , d) > n such
that d is firm. Notice that any automorphism fixing c0 and c fixes every
chamber on any minimal gallery between c0 and c, so it suffices to show
that there exists a g ∈ K not fixing d.
Since d is firm, there is a unique chamber e ∈ S(c0 , k − 1) such that e ∼s d
for some s ∈ S. By Corollary 3.11, B(c0 , n) ⊆ Xs (e), where Xs (e) is the
s-wing of ∆ corresponding to e.
Now take any permutation θ of Ps,e fixing e and mapping d to some third
chamber d′′ different from d and e (which exists because ∆ is thick). By
Proposition 4.3, there is an element g ∈ Aut(∆) fixing Xs (e) and mapping
d to d′′ . In particular, g belongs to K and does not fix d, as required.
We conclude that ∆K = Flex(c0 , n). The fact that this set is bounded
was shown in Theorem 3.13.
4.2 Root wing groups
In this section we define groups that resemble root groups, using the partition of the chambers of a right-angled building by wings; we call these
groups root wing groups.
We show that a root wing group acts transitively on the set of apartments
of ∆ containing the given root. We also prove that the root wing groups
corresponding to roots disjoint from a ball B(c0 , n) are contained in the
fixator of that ball in the automorphism group.
We first fix some notation for the rest of this section.
Notation 4.5. (i) Fix a chamber c0 ∈ Ch(∆) and an apartment A0 containing c0 (which can be considered as the fundamental chamber and
the fundamental apartment). Let Φ denote the set of roots of A0 . For
each α ∈ Φ, we write −α for the root opposite α in A0 .
(ii) We will write A0 for the set of all apartments containing c0 . For any
A ∈ A0 , we will denote its set of roots by ΦA .
(iii) For any r ∈ N, we write Kr := FixAut(∆) (B(c0 , r)).
Definition 4.6. (i) When α ∈ ΦA is a root in an apartment A, its wall
∂α consists of the panels of ∆ having chambers in both α and −α.
Since the building is right-angled, these panels all have the same type
s ∈ S, which we refer to as the type of α and write as type(α) = s.
Notice that the s-wings of A are precisely the roots of A of type s.
(ii) Let α ∈ ΦA of type s and let c ∈ α be such that Ps,c ∈ ∂α. Then we
define the root wing group Uα as
Uα := Us (c) := FixAut(∆) (Xs (c)).
Observe that Uα does not depend on the choice of the chamber c as
all panels in the wall ∂α are parallel (see Definition 3.2(iii)) and hence
determine the same s-wings in ∆.
The fact that these groups behave, to some extent, like root groups in
Moufang spherical buildings or Moufang twin buildings, is illustrated by the
following fact.
Proposition 4.7. Let α ∈ ΦA be a root. The root wing group Uα acts
transitively on the set of apartments of ∆ containing α.
Proof. We carefully adapt the proof of the strong transitivity of Aut(∆) from
[Cap14, Proposition 6.1]. Let c be a chamber of α on the boundary and let
A and A′ be two apartments of ∆ containing α. The strategy in loc. cit.
(where A and A′ are arbitrary apartments containing c) is to construct an
infinite sequence of automorphisms g0 , g1 , g2 , . . . such that
(a) each gn fixes the ball B(c, n − 1) pointwise;
(b) let An := gn gn−1 · · · g0 (A); then An ∩ A′ ⊇ B(c, n) ∩ A′ .
We will show that the elements gi constructed in loc. cit. are all contained
in Uα ; the result then follows because Uα is a closed subgroup of Aut(∆).
To construct the element gn+1 , we consider the set E of chambers in
B(c, n + 1) ∩ A′ that are not contained in An (as in loc. cit.). The crucial
observation now is that by Theorem 4.4, the chambers of E are firm with
respect to c. Hence, for each x ∈ E, there is a unique chamber y ∈ S(c, n)
that is s-adjacent to x (for some s ∈ S). The element gn+1 constructed in
loc. cit. is then contained in the group generated by the subgroups Us (y)
for such pairs (y, s) corresponding to the various elements of E. However,
because the elements of E are firm, the root α is contained in each root
corresponding to a pair (y, s) in A′ ; [Cap14, Lemma 3.4(b)] now implies
that each such group Us (y) is contained in Uα .
Remark 4.8. The group Uα does not, in general, act sharply transitively
on the set of apartments containing α. This is clear already in the case of
trees: an automorphism fixing a half-tree and an apartment need not be
trivial.
Corollary 4.9. Let α ∈ ΦA be a root of type s and let c, c′ be two s-adjacent
chambers of A with c ∈ α and c′ ∈ −α. Then there exists an element in
⟨Uα , U−α ⟩ stabilizing A and interchanging c and c′ .
Proof. Let A′ be an apartment different from A containing α (which exists
because ∆ is thick) and let β be the root opposite α in A′ . By Proposition 4.7, there is some g ∈ Uα mapping −α to β. Similarly, there is some
h ∈ U−α mapping β to α. Let γ := h.α; then there exists a third automorphism g′ ∈ Uα mapping γ to −α. The composition g′ hg ∈ Uα U−α Uα is the
required automorphism.
Next we present a property similar to the FPRS (“Fixed Points of Root
Subgroups”) property introduced in [CR09] for groups with a twin root
datum. It is the analogous statement of [CM13, Lemma 3.8], but in the case
of right-angled buildings, we can be more explicit.
Lemma 4.10. For every root α ∈ Φ with dist(c0 , α) > r, the group U−α is
contained in Kr = FixAut(∆) (B(c0 , r)).
Proof. Let α be a root at distance n > r from c0 and let s be the type of α.
Let c = projα (c0 ) and let c′ be the other chamber in Ps,c ∩ A0 ; notice that
c′ ∈ S(c0 , n − 1). We will show that B(c0 , r) ⊆ Xs (c′ ), which will then of
course imply that U−α = Us (c′ ) ⊆ Kr .
The chamber c is firm with respect to c0 because if c were t-adjacent
to some chamber at distance n − 1 from c0 for some t ≠ s, then ∂α would
contain panels of type s and of type t, which is impossible. Corollary 3.11
now implies that B(c0 , n−1) ⊆ Xs (c′ ), so in particular B(c0 , r) ⊆ Xs (c′ ).
Following the idea of [CM13, Lemmas 3.9 and 3.10], we present two
variations on the previous lemma that allow us to transfer the results to
other apartments containing the chamber c0 .
Lemma 4.11. Let g ∈ Aut(∆) and let A ∈ A0 containing the chamber
d = gc0 . Let b ∈ StabAut(∆) (c0 ) such that A = bA0 , and let α = bα0 be a
root of A with α0 ∈ Φ.
If dist(d, −α) > r, then bUα0 b−1 ⊆ gKr g−1 .
Proof. Analogous to the proof of [CM13, Lemma 3.9].
Definition 4.12 ([CM13, Section 2.4]). Let w ∈ W .
(i) A root α ∈ Φ is called w-essential if wn α ⊊ α for some n ∈ Z.
(ii) A wall is called w-essential if it is the wall ∂α of some w-essential
root α.
Lemma 4.13. Let A ∈ A0 and let b ∈ StabAut(∆) (c0 ) such that A =
bA0 . Also, let α = bα0 (with α0 ∈ Φ) be a w-essential root for some
w ∈ StabAut(∆) (A)/ Fix Aut(∆) (A). Let g ∈ StabAut(∆) (A) be a representative
of w.
Then there exists some n ∈ Z such that
Uα0 ⊆ b−1 gn Kr g−n b  and  U−α0 ⊆ b−1 g−n Kr gn b.
Proof. The proof can be copied ad verbum from [CM13, Lemma 3.10].
4.3 Compact open subgroups of Aut(∆)
We now focus on the description of open subgroups of the automorphism
group of ∆. The main result of the next section will be that any proper
open subgroup of the automorphism group of a locally finite thick semi-regular right-angled building ∆ is contained with finite index in the setwise
stabilizer in Aut(∆) of a proper residue of ∆ (see Theorem 4.29 below).
We will split the proof in the cases where the open subgroup is compact
and non-compact. In this section, we first deal with the (easier) compact
case.
Throughout this section, we assume that ∆ is a thick irreducible semi-regular right-angled building (not necessarily locally finite) and we will denote the Davis realization of ∆ by X (see [Dav98]). Using the work developed in Section 4.1, we can prove that an open subgroup of Aut(∆) which
is locally X-elliptic must be compact.
Definition 4.14. A group acting continuously on a space X is called locally
X-elliptic if every compactly generated subgroup of it fixes a point
in X.
Proposition 4.15. Let H be an open subgroup of Aut(∆). Then the following are equivalent:
(a) H is locally X-elliptic;
(b) H fixes a point of X;
(c) H is a finite index subgroup of the stabilizer of a spherical residue of ∆;
(d) H is compact.
Proof. Notice that the points of X correspond precisely to the spherical
residues of ∆ and that the maximal compact open subgroups of Aut(∆)
are precisely the stabilizers of a maximal spherical residue, so the only nontrivial implication is (a) =⇒ (b).
So assume that H is locally X-elliptic. We will rely on [CL10, Theorem 1.1] to show first that H has a global fixed point on X or H fixes an
end of X. Notice that X has finite geometric dimension (namely equal to
the highest possible rank of a spherical parabolic subgroup of (W, S)) and
hence also finite telescopic dimension (see loc. cit. for these notions). For
each finite subset F ⊂ H, we let XF be the set of fixed points in X of ⟨F ⟩;
then each XF is non-empty because H is locally X-elliptic, and the collection {XF } is a filtering family of closed convex subspaces of X. By [CL10,
Theorem 1.1], either the intersection ⋂ XF is nonempty, or the intersection
of the boundaries ⋂ ∂XF is nonempty. In the first case, H fixes a point
of X; in the second case, H fixes an end of X.
Assume that H fixes an end of X; we will show that H then also fixes
a point of X. Since H is open, it contains the fixator of some finite ball,
i.e., K := FixAut(∆) (B(c0 , n)) ⊆ H for some c0 ∈ Ch(∆) and some n ∈ N.
Moreover, for each h ∈ H, the group Hh := hh, Ki is open and compactly
generated. Since H is locally X-elliptic by assumption, each Hh has a global
fixed point, i.e., X Hh 6= ∅.
Hence H = ⋃ Hh with each Hh open and compactly generated and we
can take this union to be countable because Aut(∆) is second countable.
Observe that X Hh ⊆ X K for each h ∈ H. By Theorem 4.4, the fixed-point
set X K is bounded. Since a countable intersection of compact bounded nonempty sets is non-empty, we conclude that X H is non-empty; hence H fixes
a point of X, as claimed.
4.4 Open subgroups of Aut(∆), with ∆ locally finite
We will assume from now on that ∆ is a thick irreducible semi-regular locally
finite right-angled building. Consider an open subgroup H of Aut(∆) and
assume that H is non-compact.
Definition 4.16. We continue to use the conventions from Notation 4.5
and we will identify the apartment A0 with W .
(i) Given a root α ∈ Φ, let rα denote the unique reflection of W setwise
stabilizing the panels in ∂α and let Uα be the root wing group introduced in Definition 4.6. By Corollary 4.9, the reflection rα ∈ W lifts
to an automorphism nα ∈ ⟨Uα , U−α ⟩ ≤ Aut(∆) stabilizing A0 .
(ii) For each c ∈ Ch(∆) and each subset J ⊆ S, we write RJ,c for the
residue of ∆ of type J containing c. We use the shorter notation RJ :=
RJ,c0 when c = c0 . Moreover, we write PJ := StabAut(∆) (RJ ), and we
call this a standard parabolic subgroup of Aut(∆). Any conjugate of PJ ,
i.e., any stabilizer of an arbitrary residue, is then called a parabolic
subgroup.
(iii) Let J ⊆ S be minimal such that there is a g ∈ Aut(∆) such that
H ∩ g−1 PJ g has finite index in H. In particular, J is essential (see
Definition 2.2(iii)). See also [CM13, Lemma 3.4].
For such a g, we set H1 = gHg−1 ∩ PJ . Thus H1 stabilizes RJ and it
is an open subgroup of Aut(∆) contained in gHg−1 with finite index;
since H is non-compact, so is H1 . Hence we may assume without loss
of generality that g = 1 and hence H1 = H ∩ PJ has finite index in H.
(iv) Let A0 be the set of apartments of ∆ containing c0 . For A ∈ A0 we let
NA := StabH1 (A)
and WA := NA / FixH1 (A),
which we identify with a subgroup of W . For h ∈ NA , let h denote its
image in WA ≤ W .
The idea will be to prove that H1 contains a hyperbolic element h such
that the chamber c0 achieves the minimal displacement of h. Moreover, we
can find the element h in the stabilizer in H1 of an apartment A1 containing c0 . Thus we can identify it with an element h of W and consider its
parabolic closure (see Definition 2.2(iv)). The key point will be to prove
that the type of Pc(h) is J, which will be achieved in Lemma 4.24.
We will also show that H1 acts transitively on the chambers of RJ ; this
will allow us to conclude that any open subgroup of Aut(∆) containing H1
as a finite index subgroup is contained in the stabilizer of RJ∪J ′ for some
spherical subset J ′ of J ⊥ (Proposition 4.26).
This strategy is analogous to (and, of course, inspired by) [CM13, Section 3]. As the arguments of loc. cit. are of a geometric nature, we will
be able to adapt them to our setting. The root groups associated with the
Kac–Moody group in that paper can be replaced by the root wing groups
defined in Section 4.2. It should not come as a surprise that many of our
proofs will simply consist of appropriate references to arguments in [CM13].
Lemma 4.17. For all A ∈ A0 , there exists a hyperbolic automorphism
h ∈ NA such that
Pc(h) = ⟨rα | α is an h-essential root of Φ⟩
and is of finite index in Pc(WA ).
Proof. Using the fact that the reflections rα lift to elements nα ∈ ⟨Uα , U−α ⟩
(see Definition 4.16(i)), the proof is the same as for [CM13, Lemma 3.5].
Lemma 4.18. There exists an apartment A ∈ A0 such that the orbit NA .c0
is unbounded. In particular, the parabolic closure in W of WA is nonspherical.
Proof. The proofs of [CM13, Lemmas 3.6 and 3.7] continue to hold without
a single change. Notice that this depends crucially on the fact that H1 is
non-compact.
Definition 4.19. (i) Let A1 ∈ A0 be an apartment such that the essential component of Pc(WA1 ) is non-empty and maximal with respect
to this property (see Definition 2.2(iii)); such an apartment exists by
Lemma 4.18. Choose h1 ∈ NA1 as in Lemma 4.17. In particular, h1 is
a hyperbolic element of H1 .
(ii) Up to conjugating H1 by an element of StabAut(∆) (RJ ), we can assume
without loss of generality that Pc(h1 ) is a standard parabolic subgroup
that is non-spherical and has essential type I (≠ ∅). Moreover, the type
I is maximal in the following sense: if A ∈ A0 is such that Pc(WA )
contains a parabolic subgroup of essential type IA with I ⊆ IA , then
I = IA .
Definition 4.20. Recall that Φ is the set of roots of the apartment A0 . For
each T ⊆ S, let
ΦT := {α ∈ Φ | RT contains at least one panel of ∂α}
and
L+T := ⟨Uα | α ∈ ΦT ⟩,
where Uα is the root wing group introduced in Definition 4.6.
Our next goal is to prove that H1 contains L+J , where J is as in Definition 4.16(iii); as we will see in Lemma 4.22 below, this fact is equivalent to
H1 being transitive on the chambers of RJ .
We will need the results in Section 4.2 regarding fixators of balls and
root wing groups.
Notation 4.21. Since H1 is open, we fix, for the rest of the section, some
r ∈ N such that FixAut(∆) (B(c0 , r)) ⊆ H1 .
The next lemma corresponds to [CM13, Lemma 3.11], but some care is
needed because of our different definition of the groups Uα .
Lemma 4.22. Let T ⊆ S be essential and let A ∈ A0 . Then the following
are equivalent:
(a) H1 contains L+_T ;
(b) H1 is transitive on RT ;
(c) NA is transitive on RT ∩ A;
(d) WA contains the standard parabolic subgroup WT of W .
Proof. It is clear that (c) and (d) are equivalent.
We first show that (a) implies (c). It suffices to show that for each
chamber c1 of A that is s-adjacent to c0 for some s ∈ T , there is an element
of NA mapping c0 to c1 . Let α be the root of A0 containing c0 but not
the chamber c2 in A0 that is s-adjacent to c0 ; notice that Uα and U−α are contained in L+_T . By Proposition 4.7, there is some g ∈ Uα fixing c0 and mapping c1 to c2 . Now the element nα ∈ ⟨Uα , U−α ⟩ stabilizes A0 and interchanges c0 and c2 ; it follows that the conjugate g⁻¹ nα g stabilizes A and interchanges c0 and c1 , as required.
The proofs of the implications (d) ⇒ (b) ⇒ (a) are exactly as in [CM13,
Lemma 3.11].
The next statement is the analogue of [CM13, Lemma 3.12].
Lemma 4.23. Let A ∈ A0 . There exists IA ⊆ S such that WA contains a
parabolic subgroup PIA of W of type IA as a finite index subgroup.
Proof. The proof can be copied ad verbum from [CM13, Lemma 3.12].
For each A ∈ A0 , we fix such an IA ⊆ S; without loss of generality,
we may assume that IA is essential. We also consider the corresponding
parabolic subgroup PIA contained in WA . Observe that PIA1 has finite index
in Pc(WA1 ) by Lemma 2.3, where A1 is as in Definition 4.19(i). Therefore
I = IA 1 .
The next task in the process of showing that H1 contains L+_J is to prove
that J = I, which is achieved by the following sequence of steps, each of
which follows from the previous ones and which are analogues of results in
[CM13].
Lemma 4.24. Let A ∈ A0 and let I and J be as in Definition 4.19(ii)
and 4.16(iii), respectively. Then:
(i) H1 contains L+_I ;
(ii) IA ⊂ I;
(iii) WA contains WI as a subgroup of finite index;
(iv) I = J.
Proof. (i) This follows from the fact that I = IA1 and PI = WI ; the
conclusion follows from Lemma 4.22.
(ii) See [CM13, Lemma 3.14].
(iii) See [CM13, Lemma 3.15].
(iv) See [CM13, Lemma 3.16].
Corollary 4.25. The group H1 acts transitively on the chambers of RJ .
Proof. This follows by combining Lemmas 4.22 and 4.24.
We are approaching our main result; the following proposition already
shows, in particular, that H is contained in the stabilizer of a residue, and it
will only require slightly more effort to show that it is a finite index subgroup
of such a stabilizer.
Proposition 4.26. Every subgroup of Aut(∆) containing H1 as a subgroup
of finite index is contained in a stabilizer StabAut(∆) (RJ∪J ′ ), where J ′ is a
spherical subset of J ⊥ .
Proof. The proof is exactly the same as in [CM13, Lemma 3.19].
Notice that since ∆ is irreducible, the index set J ∪ J ′ is only equal to
S if already J = S.
Lemma 4.27. The group H1 is a finite index subgroup of StabAut(∆) (RJ ).
Proof. Let G := StabAut(∆) (RJ ). We already know that H1 stabilizes RJ
(see Definition 4.16(iii)) and acts transitively on the set of chambers of RJ
(see Corollary 4.25). Notice that the stabilizer in G of a chamber of RJ is
compact, hence H1 is a cocompact subgroup of G. Since H1 is also open
in G, we conclude that H1 is a finite index subgroup of G.
Lemma 4.28. For every spherical J ′ ⊆ J ⊥ , the index of StabAut(∆) (RJ ) in
StabAut(∆) (RJ∪J ′ ) is finite.
Proof. By [Cap14, Lemma 2.2], we have Ch(RJ∪J ′ ) = Ch(RJ ) × Ch(RJ ′ ).
As J ′ is spherical, the chamber set Ch(RJ ′ ) is finite; the result follows.
We are now ready to prove our main theorem.
Theorem 4.29. Let ∆ be a thick irreducible semi-regular locally finite right-angled building of rank at least 2. Then any proper open subgroup of Aut(∆)
is contained with finite index in the stabilizer in Aut(∆) of a proper residue.
Proof. Let H be a proper open subgroup of Aut(∆). If H is compact, then
the result follows from Proposition 4.15.
So assume that H is not compact. By Definition 4.16(iii), we may assume
that H contains a finite index subgroup H1 which, by Corollary 4.25, acts
transitively on the chambers of some residue RJ . By Proposition 4.26, H is
a subgroup of G := StabAut(∆) (RJ∪J ′ ) for some spherical J ′ ⊆ J ⊥ . On the
other hand, Lemmas 4.27 and 4.28 imply that H1 is a finite index subgroup
of G; since H1 is a finite index subgroup of H, it follows that also H has
finite index in G.
It only remains to show that RJ∪J ′ is a proper residue. If not, then
G = Aut(∆), but since G is simple (Theorem 4.2) and infinite, it has no
proper finite index subgroups. Since H is a proper open subgroup of G, the
result follows.
5. Two applications of the main theorem
In this last section we present two consequences of Theorem 4.29, both of
which were suggested to us by Pierre-Emmanuel Caprace. The first states
that the automorphism group of a locally finite thick semi-regular right-angled building ∆ is Noetherian (see Definition 5.1); the second deals with
reduced envelopes in Aut(∆).
Definition 5.1. We call a topological group Noetherian if it satisfies the
ascending chain condition on open subgroups.
We will prove that the group Aut(∆) is Noetherian by making use of the
following characterization.
Lemma 5.2 ([CM13, Lemma 3.22]). Let G be a locally compact group.
Then G is Noetherian if and only if every open subgroup of G is compactly
generated.
Proposition 5.3. Let ∆ be a locally finite thick semi-regular right-angled
building. Then the group Aut(∆) is Noetherian.
Proof. By Lemma 5.2, we have to show that every open subgroup of Aut(∆)
is compactly generated. By Theorem 4.29, every open subgroup of Aut(∆)
is contained with finite index in the stabilizer of a residue of ∆.
Stabilizers of residues are compactly generated, since they are generated
by the stabilizer of a chamber c0 (which is a compact open subgroup) together with a choice of elements mapping c0 to each of its (finitely many)
neighbors. Since a closed cocompact subgroup of a compactly generated
group is itself compactly generated (see [MS59]), we conclude that indeed
every open subgroup of Aut(∆) is compactly generated and hence Aut(∆)
is Noetherian.
Our next application deals with reduced envelopes, a notion introduced
by Colin Reid in [Rei16b] in the context of arbitrary totally disconnected
locally compact (t.d.l.c.) groups.
Definition 5.4. (i) Two subgroups H1 and H2 of a group G are called
commensurable if H1 ∩ H2 has finite index in both H1 and H2 .
(ii) Let G be a totally disconnected locally compact (t.d.l.c.) group and
let H ≤ G be a subgroup. An envelope of H in G is an open subgroup
of G containing H. An envelope E of H is called reduced if for any
open subgroup E2 with [H : H ∩ E2 ] < ∞ we have [E : E ∩ E2 ] < ∞.
Not every subgroup of G has a reduced envelope, but clearly any two
reduced envelopes of a given group are commensurable.
Theorem 5.5 ([Rei16a, Theorem B]). Let G be a t.d.l.c. group and let H
be a (not necessarily closed) compactly generated subgroup of G. Then there
exists a reduced envelope for H in G.
We will apply Reid’s result to show the following.
Proposition 5.6. Every open subgroup of Aut(∆) is commensurable with
the reduced envelope of a cyclic subgroup.
Proof. Let H be an open subgroup of Aut(∆) and assume without loss of
generality that J ⊆ S and H1 = H ∩ StabAut(∆) (RJ ) are as in Definition 4.16(iii). Let h1 be the hyperbolic element of H1 as in Definition 4.19,
so that Pc(h1 ) = WJ .
By Theorem 5.5, the group ⟨h1 ⟩ has a reduced envelope E in Aut(∆).
In particular, [E : E ∩ H1 ] is finite.
On the other hand, H2 := E ∩ StabAut(∆) (RJ ) is an open subgroup of G
containing ⟨h1 ⟩, hence Lemma 4.27 applied to H2 shows that H2 is a finite
index subgroup of StabAut(∆) (RJ ) for the same subset J ⊆ S, i.e.,
[StabAut(∆) (RJ ) : StabAut(∆) (RJ ) ∩ E] < ∞.
Since also H1 has finite index in StabAut(∆) (RJ ) by Lemma 4.27 again, it
follows that also [H1 : H1 ∩ E] is finite. We conclude that H1 , and hence
also H, is commensurable with E, which is the reduced envelope of a cyclic
subgroup.
References
[BMPZ17] Andreas Baudisch, Amador Martin-Pizarro, and Martin Ziegler. A model-theoretic study of right-angled buildings. J. Eur. Math. Soc. (JEMS), 19(10):3091–3141, 2017.
[Cap14] Pierre-Emmanuel Caprace. Automorphism groups of right-angled buildings: simplicity and local splittings. Fund. Math., 224(1):17–51, 2014.
[CL10] Pierre-Emmanuel Caprace and Alexander Lytchak. At infinity of finite-dimensional CAT(0) spaces. Math. Ann., 346(1):1–21, 2010.
[CM13] Pierre-Emmanuel Caprace and Timothée Marquis. Open subgroups of locally compact Kac-Moody groups. Math. Z., 274(1-2):291–313, 2013.
[CR09] Pierre-Emmanuel Caprace and Bertrand Rémy. Simplicity and superrigidity of twin building lattices. Invent. Math., 176(1):169–221, 2009.
[CT13] Inna Capdeboscq and Anne Thomas. Cocompact lattices in complete Kac-Moody groups with Weyl group right-angled or a free product of spherical special subgroups. Math. Res. Lett., 20(2):339–358, 2013.
[Dav98] Michael W. Davis. Buildings are CAT(0). In Geometry and cohomology in group theory (Durham, 1994), volume 252 of London Math. Soc. Lecture Note Ser., pages 108–123. Cambridge Univ. Press, Cambridge, 1998.
[DMSS16] Tom De Medts, Ana C. Silva, and Koen Struyve. Universal groups for right-angled buildings. ArXiv e-prints, March 2016.
[HP03] Frédéric Haglund and Frédéric Paulin. Constructions arborescentes d'immeubles. Math. Ann., 325(1):137–164, 2003.
[KT12] Angela Kubena and Anne Thomas. Density of commensurators for uniform lattices of right-angled buildings. J. Group Theory, 15(5):565–611, 2012.
[MS59] Alexander M. Macbeath and Stanisław Świerczkowski. On the set of generators of a subgroup. Nederl. Akad. Wetensch. Proc. Ser. A 62 = Indag. Math., 21:280–281, 1959.
[Rei16a] Colin D. Reid. Distal actions on coset spaces in totally disconnected, locally compact groups. ArXiv e-prints, October 2016.
[Rei16b] Colin D. Reid. Dynamics of flat actions on totally disconnected, locally compact groups. New York J. Math., 22:115–190, 2016.
[RR06] Bertrand Rémy and Mark Ronan. Topological groups of Kac-Moody type, right-angled twinnings and their lattices. Comment. Math. Helv., 81(1):191–219, 2006.
[Tho06] Anne Thomas. Lattices acting on right-angled buildings. Algebr. Geom. Topol., 6:1215–1238, 2006.
[Tit69] Jacques Tits. Le problème des mots dans les groupes de Coxeter. In Symposia Mathematica (INDAM, Rome, 1967/68), Vol. 1, pages 175–185. Academic Press, London, 1969.
[TW11] Anne Thomas and Kevin Wortman. Infinite generation of non-cocompact lattices on right-angled buildings. Algebr. Geom. Topol., 11(2):929–938, 2011.
[Wei09] Richard M. Weiss. The structure of affine buildings, volume 168 of Annals of Mathematics Studies. Princeton University Press, Princeton, NJ, 2009.
ON THE MULTI-DIMENSIONAL ELEPHANT RANDOM WALK
arXiv:1709.07345v1 [math.PR] 21 Sep 2017
BERNARD BERCU AND LUCILE LAULIN
Abstract. The purpose of this paper is to investigate the asymptotic behavior
of the multi-dimensional elephant random walk (MERW). It is a non-Markovian
random walk which has a complete memory of its entire history. A wide range of
literature is available on the one-dimensional ERW. Surprisingly, no references are
available on the MERW. The goal of this paper is to fill the gap by extending the
results on the one-dimensional ERW to the MERW. In the diffusive and critical
regimes, we establish the almost sure convergence, the law of iterated logarithm
and the quadratic strong law for the MERW. The asymptotic normality of the
MERW, properly normalized, is also provided. In the superdiffusive regime, we
prove the almost sure convergence as well as the mean square convergence of
the MERW. All our analysis relies on asymptotic results for multi-dimensional
martingales.
1. Introduction
The elephant random walk (ERW) is a fascinating discrete-time random process
arising from mathematical physics. It is a non-Markovian random walk on Z which
has a complete memory of its entire history. This anomalous random walk was introduced by Schütz and Trimper [20], in order to investigate how long-range memory
affects the random walk and induces a crossover from a diffusive to superdiffusive
behavior. It was referred to as the ERW in allusion to the traditional saying that
elephants can always remember where they have been. The ERW shows three differents regimes depending on the location of its memory parameter p which lies
between zero and one.
Over the last decade, the ERW has received considerable attention in the mathematical physics literature in the diffusive regime p < 3/4 and the critical regime
p = 3/4, see e.g. [1],[2],[8],[9],[10],[13],[17],[19] and the references therein. Quite
recently, Baur and Bertoin [1] and independently Coletti, Gava and Schütz [6] have
proven the asymptotic normality of the ERW, properly normalized, with an explicit
asymptotic variance.
The superdiffusive regime p > 3/4 is much harder to handle. Initially, it was suggested by Schütz and Trimper [20] that, even in the superdiffusive regime, the ERW
has a Gaussian limiting distribution. However, it turns out [3] that this limiting
distribution is not Gaussian, as it was already predicted in [10], see also [6],[19].
Surprisingly, to the best of our knowledge, no references are available on the multidimensional elephant random walk (MERW) on Zd , except [8],[18] in the special
case d = 2. The goal of this paper is to fill the gap by extending the results on the
Key words and phrases. Elephant random walk, Multi-dimensional martingales, Almost sure
convergence, Asymptotic normality.
one-dimensional ERW to the MERW. To be more precise, we shall study the influence of the memory parameter p on the MERW and we will show that the critical
value is given by
    pd = (2d + 1)/(4d).
In the diffusive and critical regimes p ≤ pd , the reader will find the natural extension
to higher dimension of the results recently established in [1],[3],[6],[7] on the almost
sure asymptotic behavior of the ERW as well as on its asymptotic normality. One
can notice that unlike in the classic random walk, the asymptotic normality of the
MERW holds in any dimension d ≥ 1. In the superdiffusive regime p > pd , we will
also prove some extensions of the results in [3],[8],[18].
Our strategy is to make extensive use of the theory of martingales [11],[15], in
particular the strong law of large numbers and the central limit theorem for multidimensional martingales [11], as well as the law of iterated logarithm [21],[22]. We
strongly believe that our approach could be successfully extended to MERW with
stops [8],[16], to amnesiac MERW [9], as well as to MERW with reinforced memory
[1],[14].
The paper is organized as follows. In Section 2, we introduce the exact MERW and
the multi-dimensional martingale we will extensively make use of. The main results
of the paper are given in Section 3. As usual, we first investigate the diffusive
regime p < pd and we establish the almost sure convergence, the law of iterated
logarithm and the quadratic strong law for the MERW. The asymptotic normality
of the MERW, properly normalized, is also provided. Next, we prove similar results
in the critical regime p = pd . At last, we study the superdiffusive regime p > pd and
we prove the almost sure convergence as well as the mean square convergence of the
MERW to a non-degenerate random vector. Our martingale approach is described
in Appendix A, while all technical proofs are postponed to Appendices B and C.
2. The multi-dimensional elephant random walk
First of all, let us introduce the MERW. It is the natural extension to higher
dimension of the one-dimensional ERW defined in the pioneer work of Schütz and
Trimper [20]. For a given dimension d ≥ 1, let (Sn ) be a random walk on Zd ,
starting at the origin at time zero, S0 = 0. At time n = 1, the elephant moves
in one of the 2d directions with the same probability 1/2d. Afterwards, at time
n ≥ 1, the elephant chooses uniformly at random an integer k among the previous
times 1, . . . , n. Then, he moves exactly in the same direction as that of time k with
probability p or in one of the 2d − 1 remaining directions with the same probability
(1 − p)/(2d − 1), where the parameter p stands for the memory parameter of the
MERW. From a mathematical point of view, the step of the elephant at time n ≥ 1
is given by
(2.1)
Xn+1 = An Xk
where
    An = +Id          with probability p,
    An = −Id          with probability (1 − p)/(2d − 1),
    An = +Jd          with probability (1 − p)/(2d − 1),
    An = −Jd          with probability (1 − p)/(2d − 1),
       ...
    An = +Jd^(d−1)    with probability (1 − p)/(2d − 1),
    An = −Jd^(d−1)    with probability (1 − p)/(2d − 1),
with Id the d × d identity matrix and Jd the d × d cyclic permutation matrix with first row (0, 1, 0, . . . , 0) and last row (1, 0, . . . , 0).
One can observe that the permutation matrix Jd satisfies Jd^d = Id . Therefore, the
position of the elephant at time n ≥ 1 is given by
(2.2)
Sn+1 = Sn + Xn+1 .
It follows from our very definition of the MERW that at any n ≥ 1, Xn+1 = An Xbn
where An is the random matrix described before while bn is a random variable
uniformly distributed on {1, ..., n}. Moreover, as An and bn are conditionally independent, we clearly have E [Xn+1 |Fn ] = E [An ] E [Xbn |Fn ] where Fn stands for
the σ-algebra, Fn = σ(X1 , . . . , Xn ). Hence, we can deduce from the law of total
probability that at any time n ≥ 1,
(2.3)    E[Xn+1 | Fn] = (1/n) ((2dp − 1)/(2d − 1)) Sn = (a/n) Sn    a.s.
where a is the fundamental parameter of the MERW,
(2.4)    a = (2dp − 1)/(2d − 1).
Consequently, we immediately obtain from (2.2) and (2.3) that for any n ≥ 1,
(2.5)    E[Sn+1 | Fn] = γn Sn    where    γn = 1 + a/n.
Furthermore,
    ∏_{k=1}^{n} γk = Γ(a + 1 + n) / (Γ(a + 1) Γ(n + 1))
where Γ is the standard Euler Gamma function. The critical value associated with
the memory parameter p of the MERW is
(2.6)    pd = (2d + 1)/(4d).
As a matter of fact,
    a < 1/2 ⟺ p < pd ,    a = 1/2 ⟺ p = pd ,    a > 1/2 ⟺ p > pd .
Definition 2.1. The MERW (Sn ) is said to be diffusive if p < pd , critical if p = pd ,
and superdiffusive if p > pd .
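The step mechanism described above is straightforward to simulate. The following Python sketch is only an illustration of the verbal definition (uniform first step; then copy a uniformly chosen past step with probability p, otherwise pick one of the other 2d − 1 directions uniformly); the function name simulate_merw and the use of NumPy are our own choices and are not part of the paper.

import numpy as np

def simulate_merw(n_steps, p, d, rng=None):
    # Simulate one MERW trajectory; returns the positions S_1, ..., S_n (shape (n_steps, d)).
    rng = np.random.default_rng() if rng is None else rng
    # The 2d unit steps: +e_1, -e_1, ..., +e_d, -e_d.
    directions = np.vstack([np.eye(d, dtype=int), -np.eye(d, dtype=int)])
    steps = np.zeros((n_steps, d), dtype=int)
    # At time n = 1 the elephant moves in one of the 2d directions uniformly.
    steps[0] = directions[rng.integers(2 * d)]
    for n in range(1, n_steps):
        k = rng.integers(n)              # uniform memory index among the previous times
        if rng.random() < p:
            steps[n] = steps[k]          # repeat the remembered step
        else:
            # choose uniformly among the 2d - 1 remaining directions
            others = [v for v in directions if not np.array_equal(v, steps[k])]
            steps[n] = others[rng.integers(2 * d - 1)]
    return np.cumsum(steps, axis=0)

# Example: a superdiffusive walk in dimension d = 2 (p_d = 5/8 when d = 2).
traj = simulate_merw(10_000, p=0.9, d=2, rng=np.random.default_rng(0))
print(traj[-1])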
All our investigation in the three regimes relies on a martingale approach. To be
more precise, the asymptotic behavior of (Sn ) is closely related to the one of the
sequence (Mn ) defined, for all n ≥ 0, by Mn = an Sn where a0 = 1, a1 = 1 and, for
all n ≥ 2,
(2.7)    an = ∏_{k=1}^{n−1} γk⁻¹ = Γ(a + 1) Γ(n) / Γ(n + a).
It follows from a well-known property of the Euler Gamma function that
(2.8)    lim_{n→∞} Γ(n + a) / (Γ(n) n^a) = 1.
Hence, we obtain from (2.7) and (2.8) that
(2.9)    lim_{n→∞} n^a an = Γ(a + 1).
Furthermore, since an = γn an+1 , we can deduce from (2.5) that for all n ≥ 1,
E [Mn+1 |Fn ] = Mn
a.s.
It means that (Mn ) is a multi-dimensional martingale. Our goal is to extend the
results recently established in [3] to MERW. One can observe that our approach is
much more intricate than that of [3], as it requires studying the asymptotic behavior of
the multi-dimensional martingale (Mn ).
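As a small numerical sanity check (ours, not part of the paper), the weights an of (2.7) can be evaluated through the log-Gamma function of the Python standard library, and the limit (2.9) can be verified directly; the helper name a_n below is our own.

import math

def a_n(n, a):
    # Weight a_n = Gamma(a+1) Gamma(n) / Gamma(n+a) from (2.7), computed via log-Gamma.
    if n == 0:
        return 1.0
    return math.exp(math.lgamma(a + 1.0) + math.lgamma(n) - math.lgamma(n + a))

# Check (2.9): n^a * a_n -> Gamma(a + 1).
a = 0.3
for n in (10, 1_000, 100_000):
    print(n, n**a * a_n(n, a), math.gamma(a + 1.0))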
3. Main results
3.1. The diffusive regime. Our first result deals with the strong law of large
numbers for the MERW in the diffusive regime where 0 ≤ p < pd .
Theorem 3.1. We have the almost sure convergence
(3.1)    lim_{n→∞} (1/n) Sn = 0    a.s.
Some refinements on the almost sure rates of convergence for the MERW are as
follows.
Theorem 3.2. We have the quadratic strong law
(3.2)    lim_{n→∞} (1/log n) ∑_{k=1}^{n} (1/k²) Sk SkT = 1/(d(1 − 2a)) Id    a.s.
In particular,
(3.3)    lim_{n→∞} (1/log n) ∑_{k=1}^{n} ‖Sk‖²/k² = 1/(1 − 2a)    a.s.
Moreover, we also have the law of iterated logarithm
(3.4)    lim sup_{n→∞} ‖Sn‖²/(2n log log n) = 1/(1 − 2a)    a.s.
Our next result is devoted to the asymptotic normality of the MERW in the diffusive
regime 0 ≤ p < pd .
Theorem 3.3. We have the asymptotic normality
(3.5)    (1/√n) Sn  →L  N(0, 1/((1 − 2a)d) Id).
Remark 3.1. We clearly have from (2.4) that
    1/(1 − 2a) = (2d − 1)/(2d(1 − 2p) + 1).
Hence, in the special case d = 1, the critical value pd = 3/4 and the asymptotic variance
    1/(1 − 2a) = 1/(3 − 4p).
Consequently, we find again the asymptotic normality for the one-dimensional ERW
in the diffusive regime 0 ≤ p < 3/4 recently established in [1],[3],[6].
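Theorem 3.3 can be probed numerically: below the critical value, the empirical per-coordinate variance of Sn/√n should approach 1/((1 − 2a)d). The Monte Carlo sketch below is our own illustration (it reuses the simulate_merw helper introduced in the earlier illustrative block) and is only a plausibility check on moderate samples, not a proof.

import numpy as np

def diffusive_variance_check(p, d, n_steps=2_000, n_runs=500, seed=0):
    # Compare the empirical per-coordinate variance of S_n / sqrt(n)
    # with the predicted asymptotic variance 1 / ((1 - 2a) d) of Theorem 3.3.
    rng = np.random.default_rng(seed)
    a = (2 * d * p - 1) / (2 * d - 1)
    assert a < 0.5, "diffusive regime requires p < p_d"
    finals = np.array([simulate_merw(n_steps, p, d, rng)[-1] for _ in range(n_runs)])
    empirical = (finals / np.sqrt(n_steps)).var(axis=0).mean()
    predicted = 1.0 / ((1.0 - 2.0 * a) * d)
    return empirical, predicted

print(diffusive_variance_check(p=0.6, d=2))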
3.2. The critical regime. We now focus our attention on the critical regime where
the memory parameter p = pd .
Theorem 3.4. We have the almost sure convergence
(3.6)    lim_{n→∞} (1/√(n log n)) Sn = 0    a.s.
We continue with some refinements on the almost sure rates of convergence for the
MERW.
Theorem 3.5. We have the quadratic strong law
(3.7)    lim_{n→∞} (1/log log n) ∑_{k=2}^{n} Sk SkT /(k log k)² = (1/d) Id    a.s.
In particular,
(3.8)    lim_{n→∞} (1/log log n) ∑_{k=2}^{n} ‖Sk‖²/(k log k)² = 1    a.s.
Moreover, we also have the law of iterated logarithm
(3.9)    lim sup_{n→∞} ‖Sn‖²/(2n log n log log log n) = 1    a.s.
Our next result concerns the asymptotic normality of the MERW in the critical
regime p = pd .
Theorem 3.6. We have the asymptotic normality
(3.10)    (1/√(n log n)) Sn  →L  N(0, (1/d) Id).
Remark 3.2. As before, in the special case d = 1, we find again [1],[3],[6] the asymptotic normality for the one-dimensional ERW
    Sn/√(n log n)  →L  N(0, 1).
3.3. The superdiffusive regime. Finally, we get a handle on the more arduous
superdiffusive regime where pd < p ≤ 1.
Theorem 3.7. We have the almost sure convergence
(3.11)    lim_{n→∞} (1/n^a) Sn = L    a.s.
where the limiting value L is a non-degenerate random vector. Moreover, we also have the mean square convergence
(3.12)    lim_{n→∞} E[‖(1/n^a) Sn − L‖²] = 0.
Theorem 3.8. The expected value of L is E[L] = 0, while its covariance matrix is given by
(3.13)    E[LLT] = 1/(d(2a − 1)Γ(2a)) Id .
In particular,
(3.14)    E[‖L‖²] = 1/((2a − 1)Γ(2a)).
Remark 3.3. Another possibility for the MERW is that, at time n = 1, the elephant
moves in one direction, say the first direction e1 of the standard basis (e1 , . . . , ed )
of Rd , with probability q or in one of the 2d − 1 remaining directions with the same
probability (1 − q)/(2d − 1), where the parameter q lies in the interval [0, 1]. Afterwards, at any time n ≥ 2, the elephant moves exactly as before, which means that his
steps are given by (2.1). Then, the results of Section 3 hold true except Theorem 3.8, where
    E[L] = (1/Γ(a + 1)) ((2dq − 1)/(2d − 1)) e1
and
    E[LLT] = (1/Γ(2a + 1)) ((2dq − 1)/(2d − 1)) (e1 e1T − (1/d) Id) + 1/(d(2a − 1)Γ(2a)) Id ,
which also leads to
    E[‖L‖²] = 1/((2a − 1)Γ(2a)).
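The almost sure convergence of n^(−a) Sn in Theorem 3.7 and the second moment (3.14) can also be explored by simulation. The sketch below is ours (it again reuses the simulate_merw helper from the earlier illustration); it estimates E‖L‖² by averaging ‖Sn/n^a‖² over independent runs at a large but finite n, so agreement with (3.14) is only approximate.

import math
import numpy as np

def superdiffusive_second_moment(p, d, n_steps=5_000, n_runs=300, seed=1):
    # Monte Carlo estimate of E||L||^2 = 1 / ((2a - 1) Gamma(2a)) from Theorem 3.8,
    # using L approximately equal to S_n / n^a for a large but finite n.
    rng = np.random.default_rng(seed)
    a = (2 * d * p - 1) / (2 * d - 1)
    assert a > 0.5, "superdiffusive regime requires p > p_d"
    estimates = []
    for _ in range(n_runs):
        s_n = simulate_merw(n_steps, p, d, rng)[-1]
        estimates.append(np.sum((s_n / n_steps**a) ** 2))
    predicted = 1.0 / ((2 * a - 1) * math.gamma(2 * a))
    return float(np.mean(estimates)), predicted

print(superdiffusive_second_moment(p=0.9, d=2))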
Appendix A
A multi-dimensional martingale approach
We clearly obtain from (2.1) that for any time n ≥ 1, kXn k = 1. Consequently, it
follows from (2.2) that kSn k ≤ n. Therefore, the sequence (Mn ) given, for all n ≥ 0,
by Mn = an Sn , is a locally square-integrable multi-dimensional martingale. It can
be rewritten in the additive form
(A.1)    Mn = ∑_{k=1}^{n} ak εk
since its increments ∆Mn = Mn − Mn−1 satisfy ∆Mn = an Sn − an−1 Sn−1 = an εn
where εn = Sn − γn−1Sn−1 . The predictable quadratic variation associated with
(Mn ) is the random square matrix of order d given, for all n ≥ 1, by
(A.2)    ⟨M⟩n = ∑_{k=1}^{n} E[∆Mk (∆Mk)T | Fk−1].
We already saw from (2.5) that E [εn+1 |Fn ] = 0. Moreover, we deduce from (2.2)
together with (2.3) that
    E[Sn+1 Sn+1T | Fn] = Sn SnT + (2a/n) Sn SnT + E[Xn+1 Xn+1T | Fn]
(A.3)                  = (1 + 2a/n) Sn SnT + E[Xn+1 Xn+1T | Fn]    a.s.
In order to calculate the right-hand side of (A.3), one can notice that for any n ≥ 1,
    Xn XnT = ∑_{i=1}^{d} I{Xn^i ≠ 0} ei eiT
where (e1 , . . . , ed ) stands for the standard basis of the Euclidean space Rd and Xni is
the i-th coordinate of the random vector Xn . Moreover, it follows from (2.1) together
with the law of total probability that at any time n ≥ 1 and for any 1 ≤ i ≤ d,
    P(Xn+1^i ≠ 0 | Fn) = (1/n) ∑_{k=1}^{n} P((An Xk)^i ≠ 0 | Fn)
                       = (1/n) ∑_{k=1}^{n} I{Xk^i ≠ 0} P(An = ±Id) + (1/n) ∑_{k=1}^{n} (1 − I{Xk^i ≠ 0}) P(An = ±Jd)
                       = (Nn^X(i)/n) (P(An = Id) − P(An = Jd)) + 2 P(An = Jd)
which implies that for any 1 ≤ i ≤ d,
(A.4)    E[I{Xn+1^i ≠ 0} | Fn] = (a/n) Nn^X(i) + (1 − a)/d    a.s.
where
    Nn^X(i) = ∑_{k=1}^{n} I{Xk^i ≠ 0}
and the parameter a is given by (2.4). Hence, we infer from (A.3) and (A.4) that
(A.5)    E[Xn+1 Xn+1T | Fn] = (a/n) Σn + ((1 − a)/d) Id    a.s.
where
(A.6)    Σn = ∑_{i=1}^{d} Nn^X(i) ei eiT .
One can observe the elementary fact that for all n ≥ 1, Tr(Σn ) = n where Tr(Σn )
stands for the trace of the positive definite matrix Σn . Therefore, we obtain from
(A.3) together with (A.5) that
    E[εn+1 εn+1T | Fn] = E[Sn+1 Sn+1T | Fn] − γn² Sn SnT
                       = (1 + 2a/n) Sn SnT + (a/n) Σn + ((1 − a)/d) Id − γn² Sn SnT
(A.7)                  = (a/n) Σn + ((1 − a)/d) Id − (a/n)² Sn SnT    a.s.
which ensures that
    E[‖εn+1‖² | Fn] = (a/n) Tr(Σn) + ((1 − a)/d) Tr(Id) − (a/n)² ‖Sn‖²
(A.8)               = 1 − (γn − 1)² ‖Sn‖²    a.s.
By the same token,
    E[‖εn+1‖⁴ | Fn] = 1 − 3(γn − 1)⁴ ‖Sn‖⁴ − 2(γn − 1)² ‖Sn‖² + 4(γn − 1)² ξn
where, thanks to (A.5),
    ξn = E[⟨Sn , Xn+1⟩² | Fn] = (a/n) SnT Σn Sn + ((1 − a)/d) ‖Sn‖² .
It leads to
    E[‖εn+1‖⁴ | Fn] = 1 − 3(γn − 1)⁴ ‖Sn‖⁴ − 2(1 − 2(1 − a)/d)(γn − 1)² ‖Sn‖²
(A.9)                 + (4a/n)(γn − 1)² SnT Σn Sn    a.s.
Therefore, as Σn ≤ nId for the usual order of positive definite matrices, we clearly
obtain from (A.9) that
(A.10)    E[‖εn+1‖⁴ | Fn] ≤ 1 − 3(γn − 1)⁴ ‖Sn‖⁴ + (2/d)(2a(d − 1) + 2 − d)(γn − 1)² ‖Sn‖²    a.s.
Consequently, we obtain from (A.8) and (A.10) the almost sure upper bounds
(A.11)    sup_{n≥0} E[‖εn+1‖² | Fn] ≤ 1    and    sup_{n≥0} E[‖εn+1‖⁴ | Fn] ≤ 4/3    a.s.
Hereafter, we deduce from (A.2) and (A.7) that
          ⟨M⟩n = a1² E[ε1 ε1T] + ∑_{k=1}^{n−1} ak+1² E[εk+1 εk+1T | Fk]
(A.12)         = (1/d) ∑_{k=1}^{n} ak² Id + a ∑_{k=1}^{n−1} (ak+1²/k) (Σk − (k/d) Id) − ζn
where
    ζn = a² ∑_{k=1}^{n−1} (ak+1/k)² Sk SkT .
Hence, by taking the trace on both sides of (A.12), we find that
(A.13)    Tr⟨M⟩n = ∑_{k=1}^{n} ak² − a² ∑_{k=1}^{n−1} (ak+1/k)² ‖Sk‖² .
The asymptotic behavior of the multi-dimensional martingale (Mn ) is closely related
to the one of
    vn = ∑_{k=1}^{n} ak² = ∑_{k=1}^{n} (Γ(a + 1)Γ(k)/Γ(a + k))² .
One can observe that we always have Tr⟨M⟩n ≤ vn . In accordance with Definition 2.1, we have three regimes. In the diffusive regime where a < 1/2,
(A.14)    lim_{n→∞} vn/n^(1−2a) = ℓ    where    ℓ = (Γ(a + 1))²/(1 − 2a).
In the critical regime where a = 1/2,
(A.15)    lim_{n→∞} vn/log n = (Γ(a + 1))² = π/4.
Finally, in the superdiffusive regime where a > 1/2, vn converges to the finite value
(A.16)    lim_{n→∞} vn = ∑_{k=0}^{∞} (Γ(a + 1)Γ(k + 1)/Γ(a + k + 1))² = 3F2(1, 1, 1; a + 1, a + 1; 1) = ∑_{k=0}^{∞} (1)k (1)k (1)k / ((a + 1)k (a + 1)k k!)
where, for any α ∈ R, (α)k = α(α + 1) · · · (α + k − 1) for k ≥ 1, (α)0 = 1 stands for the Pochhammer symbol and 3F2 is the generalized hypergeometric function defined by
    3F2(a, b, c; d, e; z) = ∑_{k=0}^{∞} (a)k (b)k (c)k / ((d)k (e)k k!) z^k .
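The three regimes of vn in (A.14)–(A.16) are easy to check numerically. The sketch below is our own illustration (it reuses the a_n helper from the earlier Section 2 illustration); partial sums of ak² are compared with the stated limits, and the superdiffusive limit is approximated by a large partial sum rather than by evaluating 3F2 directly.

import math

def v_n(n, a):
    # Partial sum v_n = sum_{k=1}^{n} a_k^2, with a_k as in (2.7).
    return sum(a_n(k, a) ** 2 for k in range(1, n + 1))

n = 100_000
for a in (0.3, 0.5, 0.7):
    vn = v_n(n, a)
    if a < 0.5:
        print("diffusive  ", vn / n**(1 - 2 * a), math.gamma(a + 1) ** 2 / (1 - 2 * a))
    elif a == 0.5:
        print("critical   ", vn / math.log(n), math.pi / 4)
    else:
        # (A.16): v_n converges; the partial sum approximates 3F2(1,1,1; a+1,a+1; 1)
        print("superdiff. ", vn)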
Appendix B
Proofs of the almost sure convergence results
B.1. The diffusive regime.
Proof of Theorem 3.1. First of all, we focus our attention on the proof of the
almost sure convergence (3.1). We already saw from (A.13) that Tr⟨M⟩n ≤ vn . Moreover, we obtain from (A.14) that, in the diffusive regime where 0 < a < 1/2, vn increases to infinity with the speed n^(1−2a) . On the one hand, it follows from the
strong law of large numbers for multi-dimensional martingales given e.g. by the last
part of Theorem 4.3.15 in [11] that for any γ > 0,
(B.1)    ‖Mn‖²/λmax⟨M⟩n = o((log Tr⟨M⟩n)^(1+γ))    a.s.
where λmax⟨M⟩n stands for the maximal eigenvalue of the random square matrix ⟨M⟩n . However, as ⟨M⟩n is a positive definite matrix and Tr⟨M⟩n ≤ vn , we clearly have λmax⟨M⟩n ≤ Tr⟨M⟩n ≤ vn . Consequently, we obtain from (B.1) that
    ‖Mn‖² = o(vn (log vn)^(1+γ))    a.s.
which implies that
(B.2)    ‖Mn‖² = o(n^(1−2a) (log n)^(1+γ))    a.s.
Hence, as Mn = an Sn , it follows from (2.9) and (B.2) that for any γ > 0,
    ‖Sn‖² = o(n (log n)^(1+γ))    a.s.
which completes the proof of Theorem 3.1.
Proof of Theorem 3.2. We shall now proceed to the proof of the almost sure
rates of convergence given in Theorem 3.2. First of all, we claim that
(B.3)    lim_{n→∞} (1/n) Σn = (1/d) Id    a.s.
where Σn is the random square matrix of order d given by (A.6). As a matter of
fact, in order to prove (B.3) it is only necessary to show that for any 1 ≤ i ≤ d,
(B.4)    lim_{n→∞} Nn^X(i)/n = 1/d    a.s.
For any 1 ≤ i ≤ d, denote
    Λn(i) = Nn^X(i)/n .
One can observe that
    Λn+1(i) = (n/(n + 1)) Λn(i) + (1/(n + 1)) I{Xn+1^i ≠ 0}
which leads, via (A.4), to the recurrence relation
(B.5)    Λn+1(i) = (n/(n + 1)) γn Λn(i) + (1 − a)/(d(n + 1)) + (1/(n + 1)) δn+1(i)
where δn+1(i) = I{Xn+1^i ≠ 0} − E[I{Xn+1^i ≠ 0} | Fn]. After straightforward calculations, the solution of this recurrence relation is given by
(B.6)    Λn(i) = (1/(n an)) ( Λ1(i) + ((1 − a)/d) ∑_{k=2}^{n} ak + Ln(i) )
where
    Ln(i) = ∑_{k=2}^{n} ak δk(i).
However, (Ln(i)) is a square-integrable real martingale with predictable quadratic variation ⟨L(i)⟩n satisfying ⟨L(i)⟩n ≤ vn a.s. Then, it follows from the standard strong law of large numbers for martingales given by Theorem 1.3.24 in [11] that (Ln(i))² = O(vn log vn) a.s. Consequently, as n an² is equivalent to (1 − 2a)vn , we obtain that for any 1 ≤ i ≤ d,
(B.7)    lim_{n→∞} (1/(n an)) Ln(i) = 0    a.s.
Furthermore, one can easily check from (2.9) that
(B.8)    lim_{n→∞} (1/(n an)) ∑_{k=1}^{n} ak = 1/(1 − a).
Therefore, we find from (B.6) together with (B.7) and (B.8) that for any 1 ≤ i ≤ d,
(B.9)    lim_{n→∞} Λn(i) = 1/d    a.s.
which immediately leads to (B.4). Hereafter, it follows from the conjunction of (3.1),
(A.7) and (B.4) that
(B.10)    lim_{n→∞} E[εn+1 εn+1T | Fn] = (1/d) Id    a.s.
By the same token, we also obtain from (A.12) and Toeplitz lemma that
(B.11)    lim_{n→∞} (1/vn) ⟨M⟩n = (1/d) Id    a.s.
We are now in the position to prove the quadratic strong law (3.2). For any vector u of Rd , denote Mn(u) = ⟨u, Mn⟩ and εn(u) = ⟨u, εn⟩. We clearly have from (A.1)
    Mn(u) = ∑_{k=1}^{n} ak εk(u).
Consequently, (Mn(u)) is a square-integrable real martingale. Moreover, it follows from (B.10) that
    lim_{n→∞} E[|εn+1(u)|² | Fn] = (1/d) ‖u‖²    a.s.
Moreover, we can deduce from (A.11) and the Cauchy-Schwarz inequality that
    sup_{n≥0} E[|εn+1(u)|⁴ | Fn] ≤ (4/3) ‖u‖⁴    a.s.
Furthermore, we clearly have from (2.9) and (A.14) that
    lim_{n→∞} n fn = 1 − 2a    where    fn = an²/vn ,
which of course implies that fn converges to zero. Therefore, it follows from the quadratic strong law for real martingales given e.g. in Theorem 3 of [4], that for any vector u of Rd ,
(B.12)    lim_{n→∞} (1/log vn) ∑_{k=1}^{n} fk Mk²(u)/vk = (1/d) ‖u‖²    a.s.
Consequently, we find from (A.14) and (B.12) that
(B.13)    lim_{n→∞} (1/log n) ∑_{k=1}^{n} (ak²/vk²) Mk²(u) = ((1 − 2a)/d) ‖u‖²    a.s.
Hereafter, as Mn = an Sn and n² an⁴ is equivalent to (1 − 2a)² vn² , we obtain from (B.13) that for any vector u of Rd ,
(B.14)    lim_{n→∞} (1/log n) ∑_{k=1}^{n} (1/k²) uT Sk SkT u = 1/(d(1 − 2a)) ‖u‖²    a.s.
By virtue of the second part of Proposition 4.2.8 in [11], we can conclude from (B.14)
that
(B.15)    lim_{n→∞} (1/log n) ∑_{k=1}^{n} (1/k²) Sk SkT = 1/(d(1 − 2a)) Id    a.s.
which completes the proof of (3.2). By taking the trace on both sides of (B.15), we
immediately obtain (3.3). Finally, we shall proceed to the proof of the law of iterated
logarithm given by (3.4). We already saw that an⁴ vn⁻² is equivalent to (1 − 2a)² n⁻² . It ensures that
(B.16)    ∑_{n=1}^{+∞} an⁴/vn² < +∞.
Hence, it follows from the law of iterated logarithm for real martingales due to Stout
[21],[22], see also Corollary 6.4.25 in [11], that for any vector u of Rd ,
    lim sup_{n→∞} (2 vn log log vn)^(−1/2) Mn(u) = − lim inf_{n→∞} (2 vn log log vn)^(−1/2) Mn(u)
(B.17)                                           = (1/√d) ‖u‖    a.s.
Consequently, as Mn(u) = an ⟨u, Sn⟩, we obtain from (A.14) together with (B.17) that
    lim sup_{n→∞} (2n log log n)^(−1/2) ⟨u, Sn⟩ = − lim inf_{n→∞} (2n log log n)^(−1/2) ⟨u, Sn⟩
                                               = (d(1 − 2a))^(−1/2) ‖u‖    a.s.
In particular, for any vector u of Rd ,
(B.18)    lim sup_{n→∞} (1/(2n log log n)) ⟨u, Sn⟩² = 1/(d(1 − 2a)) ‖u‖²    a.s.
However,
    ‖Sn‖² = ∑_{i=1}^{d} ⟨ei , Sn⟩²
where (e1 , . . . , ed ) is the standard basis of Rd . Finally, we deduce from (B.18) that
    lim sup_{n→∞} ‖Sn‖²/(2n log log n) = 1/(1 − 2a)    a.s.
which achieves the proof of Theorem 3.2.
B.2. The critical regime.
Proof of Theorem 3.4. We already saw from (A.15) that in the critical regime
where a = 1/2, vn increases slowly to infinity with a logarithmic speed log n. We
obtain once again from the last part of Theorem 4.3.15 in [11] that for any γ > 0,
    ‖Mn‖² = o(vn (log vn)^(1+γ))    a.s.
which leads to
(B.19)    ‖Mn‖² = o(log n (log log n)^(1+γ))    a.s.
However, we clearly have from (2.9) with a = 1/2 that
(B.20)    lim_{n→∞} n an² = π/4.
Consequently, as Mn = an Sn , we deduce from (B.19) and (B.20) that for any γ > 0,
    ‖Sn‖² = o(n log n (log log n)^(1+γ))    a.s.
which completes the proof of Theorem 3.4.
Proof of Theorem 3.5. The proof of Theorem 3.5 is left to the reader as it follows
the same lines as that of Theorem 3.2.
B.3. The superdiffusive regime.
Proof of Theorem 3.7. We already saw from (A.16) that in the superdiffusive
regime where 1/2 < a ≤ 1, vn converges to a finite value. As previously seen,
Tr⟨M⟩n ≤ vn . Hence, we clearly have
    lim_{n→∞} Tr⟨M⟩n < ∞    a.s.
Therefore, if
(B.21)    Ln = Mn / Γ(a + 1),
we can deduce from the second part of Theorem 4.3.15 in [11] that
(B.22)    lim_{n→∞} Mn = M    and    lim_{n→∞} Ln = L    a.s.
where the limiting values M and L are the random vectors of Rd given by
    M = ∑_{k=1}^{∞} ak εk    and    L = (1/Γ(a + 1)) ∑_{k=1}^{∞} ak εk .
Consequently, as Mn = an Sn , (3.11) clearly follows from (2.9) and (B.22). We now focus our attention on the mean square convergence (3.12). As M0 = 0, we have from (A.1) and (A.2) that for all n ≥ 1,
    E[‖Mn‖²] = ∑_{k=1}^{n} E[‖∆Mk‖²] = E[Tr⟨M⟩n] ≤ vn .
Hence, we obtain from (A.16) that
    sup_{n≥1} E[‖Mn‖²] ≤ 3F2(1, 1, 1; a + 1, a + 1; 1) < ∞,
which means that the martingale (Mn ) is bounded in L². Therefore, we have the mean square convergence
    lim_{n→∞} E[‖Mn − M‖²] = 0,
which clearly leads to (3.12).
Proof of Theorem 3.8. First of all, we clearly have for all n ≥ 1, E[Mn ] = 0
which implies that E[M] = 0 leading to E[L] = 0. Moreover, taking expectation on
both sides of (A.3) and (A.5), we obtain that for all n ≥ 1,
    E[Sn+1 Sn+1T] = (1 + 2a/n) E[Sn SnT] + E[Xn+1 Xn+1T]
(B.23)            = (1 + 2a/n) E[Sn SnT] + (a/n) E[Σn] + ((1 − a)/d) Id .
However, we claim that
(B.24)    E[Σn] = (n/d) Id .
As a matter of fact, taking expectation on both sides of (B.6), we find that for any
1 ≤ i ≤ d,
(B.25)    E[Λn(i)] = (1/(n an)) ( E[Λ1(i)] + ((1 − a)/d) ∑_{k=2}^{n} ak ).
On the one hand, we clearly have
    E[Λ1(i)] = 1/d.
On the other hand, it follows from Lemma B.1 in [3] that
(B.26)    ∑_{k=2}^{n} ak = ∑_{k=2}^{n} Γ(a + 1)Γ(k)/Γ(k + a) = ∑_{k=1}^{n−1} Γ(a + 1)Γ(k + 1)/Γ(k + a + 1)
                         = (1/(a − 1)) (1 − Γ(a + 1)Γ(n + 1)/Γ(a + n)) = (1 − n an)/(a − 1).
Consequently, we can deduce from (B.25) and (B.26) that for any 1 ≤ i ≤ d,
(B.27)    E[Λn(i)] = (1/(n an)) ( 1/d − (1 − n an)/d ) = 1/d .
Therefore, we get from (A.6) and (B.27) that
    E[Σn] = n ∑_{i=1}^{d} E[Λn(i)] ei eiT = (n/d) ∑_{i=1}^{d} ei eiT = (n/d) Id .
Hereafter, we obtain from (B.23) and (B.24) that
(B.28)    E[Sn+1 Sn+1T] = (1 + 2a/n) E[Sn SnT] + (1/d) Id .
It is not hard to see that the solution of this recurrence relation is given by
          E[Sn SnT] = (Γ(n + 2a)/(Γ(2a + 1)Γ(n))) ( E[S1 S1T] + (1/d) ∑_{k=1}^{n−1} Γ(2a + 1)Γ(k + 1)/Γ(k + 2a + 1) Id )
(B.29)              = (Γ(n + 2a)/Γ(n)) ( ∑_{k=1}^{n} Γ(k)/Γ(k + 2a) ) (1/d) Id
since
    E[S1 S1T] = (1/d) Id .
Therefore, it follows once again from Lemma B.1 in [3] that
(B.30)    E[Sn SnT] = (n/(2a − 1)) ( Γ(n + 2a)/(Γ(n + 1)Γ(2a)) − 1 ) (1/d) Id .
Hence, we obtain from (B.21) together with (B.30) that
          E[Ln LnT] = (n an²/((2a − 1)(Γ(a + 1))²)) ( Γ(n + 2a)/(Γ(n + 1)Γ(2a)) − 1 ) (1/d) Id
(B.31)              = (n/(2a − 1)) (Γ(n)/Γ(n + a))² ( Γ(n + 2a)/(Γ(n + 1)Γ(2a)) − 1 ) (1/d) Id .
Finally, we find from (3.12) and (B.31) that
    lim_{n→∞} E[Ln LnT] = E[LLT] = 1/(d(2a − 1)Γ(2a)) Id
which achieves the proof of Theorem 3.8.
Appendix C
Proofs of the asymptotic normality results
C.1. The diffusive regime.
Proof of Theorem 3.3. In order to establish the asymptotic normality (3.5), we
shall make use of the central limit theorem for multi-dimensional martingales given
e.g. by Corollary 2.1.10 of [11]. First of all, we already saw from (B.11) that
(C.1)    lim_{n→∞} (1/vn) ⟨M⟩n = (1/d) Id    a.s.
Consequently, it only remains to show that (Mn ) satisfies Lindeberg's condition, in other words, for all ε > 0,
    (1/vn) ∑_{k=1}^{n} E[‖∆Mk‖² I{‖∆Mk‖ ≥ ε√vn} | Fk−1]  →P  0.
We have from (A.11) that for all ε > 0,
    (1/vn) ∑_{k=1}^{n} E[‖∆Mk‖² I{‖∆Mk‖ ≥ ε√vn} | Fk−1] ≤ (1/(ε² vn²)) ∑_{k=1}^{n} E[‖∆Mk‖⁴ | Fk−1]
                                                        ≤ sup_{1≤k≤n} E[‖εk‖⁴ | Fk−1] (1/(ε² vn²)) ∑_{k=1}^{n} ak⁴
                                                        ≤ (4/(3 ε² vn²)) ∑_{k=1}^{n} ak⁴ .
However, we already saw from (B.16) that
    ∑_{n=1}^{+∞} an⁴/vn² < +∞.
Hence, it follows from Kronecker's lemma that
    lim_{n→∞} (1/vn²) ∑_{k=1}^{n} ak⁴ = 0,
which ensures that Lindeberg's condition is satisfied. Therefore, we can conclude from the central limit theorem for martingales that
(C.2)    (1/√vn) Mn  →L  N(0, (1/d) Id).
As Mn = an Sn and √n an is equivalent to √(vn (1 − 2a)), we find from (C.2) that
    (1/√n) Sn  →L  N(0, 1/(d(1 − 2a)) Id),
which completes the proof of Theorem 3.3.
C.2. The critical regime.
Proof of Theorem 3.6. Via the same lines as in the proof of (B.11), we can
deduce from (3.6), (A.13) and (A.15) that in the critical regime
(C.3)    lim_{n→∞} (1/vn) ⟨M⟩n = (1/d) Id    a.s.
Moreover, it follows from (A.15) and (B.20) that an² vn⁻¹ is equivalent to (n log n)⁻¹. It implies that
(C.4)    ∑_{n=1}^{∞} an⁴/vn² < +∞.
As previously seen, we infer from (C.4) that (Mn ) satisfies Lindeberg's condition. Therefore, we can conclude from the central limit theorem for martingales that
(C.5)    (1/√vn) Mn  →L  N(0, (1/d) Id).
Finally, as Mn = an Sn and an √(n log n) is equivalent to √vn , we obtain from (C.5) that
    (1/√(n log n)) Sn  →L  N(0, (1/d) Id),
which achieves the proof of Theorem 3.6.
References
[1] Baur, E. and Bertoin, J. Elephant Random Walks and their connection to Pólya-type urns.
Phys. Rev. E 94, 052134 (2016).
[2] Boyer, D., Romo-Cruz, J. C. R. Solvable random-walk model with memory and its relations
with Markovian models of anomalous diffusion. Phys. Rev. E 90, 042136 (2014).
[3] Bercu, B. A martingale approach for the elephant random walk. arXiv:1707.04130, (2017).
[4] Bercu, B. On the convergence of moments in the almost sure central limit theorem for martingales with statistical applications. Stochastic Process. Appl. 111, 1 (2004), pp. 157–173.
[5] Chauvin, B., Pouyanne, N., Sahnoun, R. Limit distributions for large Pólya urns. Ann.
Appl. Probab. 21, (2011), pp 1-32.
[6] Coletti, C. F., Gava, R., Schütz, G. M. Central limit theorem and related results for the
elephant random walk. J. Math. Phys. 58, 053303 (2017).
[7] Coletti, C. F., Gava, R., Schütz, G. M. A strong invariance principle for the elephant
random walk. arXiv:1707.06905, (2017).
[8] Cressoni, J. C., Viswanathan, G. M., Da Silva, M. A. A., Exact solution of an
anisotropic 2D random walk model with strong memory correlations. J. Phys. A: Math. Theor.
46, 505002 (2013).
[9] Cressoni, J. C., Da Silva, M. A. A., Viswanathan, G. M. Amnestically induced persistence in random walks. Phys. Rev. Let. 98, 070603 (2007).
[10] Da Silva, M. A. A., Cressoni, J. C., Schütz, G. M., Viswanathan, G. M., Trimper,
S. Non-Gaussian propagator for elephant random walks. Phys. Rev. E 88, 022115 (2013).
[11] Duflo, M., Random iterative models, Vol. 34 of Applications of Mathematics. SpringerVerlag, Berlin, 1997.
[12] Janson, S., Functional limit theorems for multitype branching processes and generalized
Pólya urns. Stochastic Process. Appl. 110, 2 (2004), pp. 177–245.
[13] Kumar, N., Harbola, U., Lindenberg, K. Memory-induced anomalous dynamics: Emergence of diffusion, subdiffusion, and superdiffusion from a single random walk model. Phys.
Rev. E 82, 021101 (2010).
[14] Harris, R. Random walkers with extreme value memory: modelling the peak-end rule. New
J. Phys. 17, 053049 (2015).
[15] Hall, P., and Heyde, C. C. Martingale limit theory and its application. Academic Press
Inc., New York, 1980.
[16] Harbola, U., Kumar, N., Lindenberg, K. Memory-induced anomalous dynamics in a
minimal random walk model. Phys. Rev. E 90, 022136 (2014).
[17] Kürsten, R. Random recursive trees and the elephant random walk. Phys. Rev. E 93, 032111
(2016).
[18] Lyu, J., Xin, J., Yu, Y. Residual diffusivity in elephant random walk models with stops.
arXiv:1705.02711, (2017).
[19] Paraan, F. N. C. and Esguerra, J. P. Exact moments in a continuous time random walk
with complete memory of its history. Phys. Rev. E 74, 032101 (2006).
[20] Schütz, G. M., and Trimper, S. Elephants can always remember: Exact long-range memory effects in a non-Markovian random walk. Phys. Rev. E 70, 045101 (2004).
[21] Stout, W. F. A martingale analogue of Kolmogorov's law of the iterated logarithm. Z.
Wahrscheinlichkeitstheorie 15 (1970), pp. 279-290.
[22] Stout, W. F., Almost sure convergence, Probability and Mathematical Statistics, Vol. 24,
Academic Press, New York-London, 1974.
Université de Bordeaux, Institut de Mathématiques de Bordeaux, UMR 5251, 351
Cours de la Libération, 33405 Talence cedex, France.
Ecole normale supérieure de Rennes, Département de Mathématiques, Campus
de Ker lann, Avenue Robert Schuman, 35170 Bruz