Dataset Viewer (First 5GB)
Column: text (string, lengths 6 to 128k)
|
Chandra and VLA observations of the symbiotic star R Aqr in 2004 reveal
significant changes over the three to four year interval between these
observations and previous observations taken with the VLA in 1999 and with
Chandra in 2000. This paper reports on the evolution of the outer thermal X-ray
lobe-jets and radio jets. The emission from the outer X-ray lobe-jets lies
farther away from the central binary than the outer radio jets, and comes from
material interpreted as being shock heated to ~10^6 K, a likely result of
collision between high speed material ejected from the central binary and
regions of enhanced gas density. Between 2000 and 2004, the Northeast (NE)
outer X-ray lobe-jet moved out away from the central binary, with an apparent
projected motion of ~580 km s^-1. The Southwest (SW) outer X-ray lobe-jet
almost disappeared between 2000 and 2004, presumably due to adiabatic expansion
and cooling. The NE radio bright spot also moved away from the central binary
between 2000 and 2004, but with a smaller apparent velocity than that of the NE
X-ray bright spot. The SW outer lobe-jet was not detected in the radio in
either 1999 or 2004. The density and mass of the X-ray emitting material are
estimated. Cooling times, shock speeds, pressure, and confinement are discussed.
|
In this paper, a quadratic pencil of Schr\"odinger type difference operator
$L_{\lambda}$ is taken under investigation to give a general perspective on the
spectral analysis of non-selfadjoint difference equations of second order.
Introducing Jost-type solutions, structural and quantitative properties of the
spectrum of the operator $L_{\lambda}$ are analyzed and hence, a discrete
analog of the theory in Degasperis (\emph{J. Math. Phys.} 11: 551--567, 1970)
and Bairamov et al. (\emph{Quaest. Math.} 26: 15--30, 2003) is developed. In
addition, several analogies are established between difference and
$q$-difference cases. Finally, the principal vectors of $L_{\lambda}$ are
introduced to lay a groundwork for the spectral expansion.
Mathematics Subject Classification (2000): 39A10, 39A12, 39A13
|
Cloud computing is a model that simplifies user access to data and computing
power on a demand basis. Its main objective is to accommodate users' growing
needs by decreasing dependence on human resources, minimizing expenses, and
enhancing the speed of data access. Nevertheless, preserving security and
privacy in cloud computing systems poses notable challenges. This issue arises
because these systems have a distributed structure, which is susceptible to
unsanctioned access - a fundamental problem. In the context of cloud computing,
the provision of services on demand makes them targets for common assaults like
Denial of Service (DoS) attacks, which include Economic Denial of
Sustainability (EDoS) and Distributed Denial of Service (DDoS). These
attacks can be classified into three categories: bandwidth consumption
attacks, specific application attacks, and connection layer attacks. Most of
the studies conducted in this arena have concentrated on a singular type of
attack, with the concurrent detection of multiple DoS attacks often overlooked.
This article proposes a method to identify four types of attacks:
HTTP, Database, TCP SYN, and DNS Flood. The aim is to present a universal
algorithm that performs effectively in detecting all four attacks instead of
using separate algorithms for each one. In this technique, seventeen server
parameters, such as memory usage, CPU usage, and input/output counts, are
extracted and monitored for changes; the change point is identified with the
CUSUM algorithm, which is used to calculate the likelihood of each attack.
Subsequently, a fuzzy neural network is employed to determine whether an
attack has occurred. When
compared to the Snort software, the proposed method's results show a
significant improvement in the average detection rate, jumping from 57% to 95%.
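The change-point step described above can be illustrated with a minimal one-sided CUSUM detector. This is only a sketch: the drift, threshold, and the synthetic CPU-usage series below are illustrative assumptions, not the seventeen server parameters or tuned settings used in the paper.

```python
import numpy as np

def cusum_alarm(x, target_mean, drift=0.5, threshold=5.0):
    """One-sided CUSUM: accumulate positive deviations from the target
    mean (minus a drift allowance) and return the first index at which
    the cumulative sum exceeds the threshold, or None if it never does."""
    s = 0.0
    for i, value in enumerate(x):
        s = max(0.0, s + (value - target_mean - drift))
        if s > threshold:
            return i
    return None

# Synthetic "CPU usage" series whose mean jumps at t = 60, standing in
# for one of the seventeen monitored server parameters.
rng = np.random.default_rng(0)
series = np.concatenate([rng.normal(20, 2, 60), rng.normal(35, 2, 40)])
print("change point flagged at index:", cusum_alarm(series, target_mean=20.0))
```

A two-sided detector would additionally track negative deviations; per-parameter alarms of this kind could then feed the fuzzy neural network stage.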
|
Final states involving tau leptons are important components of searches for
new particles at the Large Hadron Collider (LHC). A proper treatment of tau
spin effects in the Monte Carlo (MC) simulations is important for understanding
the detector acceptance as well as for the measurements of tau polarization and
tau spin correlations. In this note we present a TauSpinner package designed to
simulate the spin effects. It relies on the availability of the four-momenta of
the taus and their decay products in the analyzed data. The flavor and the
four-momentum of the boson decaying to the tau- tau+ or tau+- nu pair need to be
known. In the Z/gamma* case the initial state quark configuration is attributed
from the intermediate boson kinematics, and the parton distribution functions
(PDFs). TauSpinner is the first algorithm suitable for emulation of tau spin
effects in tau-embedded samples. It is also the first tool that offers the user
the flexibility to simulate a desired spin effect at the analysis level. An
algorithm to attribute tau helicity states to a previously generated sample is
also provided.
|
We are concerned with the study of the existence and multiplicity of
solutions for Dirichlet boundary value problems involving the $( p( m ), \, q(
m ) )$-equation, where the nonlinearity is superlinear but does not fulfil the
Ambrosetti-Rabinowitz condition, in the framework of Sobolev spaces with
variable exponents in a complete manifold. The main results are proved using
the mountain pass theorem and Fountain theorem with Cerami sequences. Moreover,
an example of a $( p( m ), \, q( m ) )$ equation that highlights the
applicability of our theoretical results is also provided.
|
This paper presents a general procedure based on using the method of types to
calculate the box dimension of sets. The approach unifies and simplifies
multiple box counting arguments. In particular, we use it to generalize the
formula for the box dimension of self-affine carpets of Gatzouras-Lalley and of
Bara\'nski type to their higher dimensional sponge analogues. In addition to a
closed form, we also obtain a variational formula which resembles the
Ledrappier-Young formula for Hausdorff dimension.
|
Arithmetic progressions of length $3$ may be found in compact subsets of the
reals that satisfy certain Fourier -- as well as Hausdorff -- dimensional
requirements. It has been shown that a very similar result holds in the
integers under analogous conditions, with Fourier dimension being replaced by
the decay of a discrete Fourier transform. In this paper we make this
correspondence more precise, using a well-known construction by Salem.
Specifically, we show that a subset of the integers can be mapped to a compact
subset of the continuum in a way which preserves certain dimensional properties
as well as arithmetic progressions of arbitrary length. The higher-dimensional
version of this construction is then used to show that certain parallelogram
configurations must exist in sparse subsets of $\mathbb{Z}^n$ satisfying
appropriate density and Fourier-decay conditions.
|
We measure by inelastic neutron scattering the spin excitation spectra as a
function of applied magnetic field in the quantum spin-ladder material
(C5H12N)2CuBr4. Discrete magnon modes at low fields in the quantum disordered
phase and at high fields in the saturated phase contrast sharply with a spinon
continuum at intermediate fields characteristic of the Luttinger-liquid phase.
By tuning the magnetic field, we drive the fractionalization of magnons into
spinons and, in this deconfined regime, observe both commensurate and
incommensurate continua.
|
A variety of representation learning approaches have been investigated for
reinforcement learning; much less attention, however, has been given to
investigating the utility of sparse coding. Outside of reinforcement learning,
sparse coding representations have been widely used, with non-convex objectives
that result in discriminative representations. In this work, we develop a
supervised sparse coding objective for policy evaluation. Despite the
non-convexity of this objective, we prove that all local minima are global
minima, making the approach amenable to simple optimization strategies. We
empirically show that it is key to use a supervised objective, rather than the
more straightforward unsupervised sparse coding approach. We compare the
learned representations to a canonical fixed sparse representation, called
tile-coding, demonstrating that the sparse coding representation outperforms a
wide variety of tile-coding representations.
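As a point of reference for the tile-coding baseline mentioned above, the sketch below builds a one-dimensional multi-tiling binary feature vector. The bounds, number of tilings, tiles, and offsets are illustrative choices, not the configurations compared in the paper, and the supervised sparse-coding objective itself is not shown.

```python
import numpy as np

def tile_code(x, low=0.0, high=1.0, n_tilings=4, n_tiles=8):
    """Return a sparse binary feature vector for a scalar state x:
    each of the n_tilings tilings is shifted by a fraction of a tile
    width and contributes exactly one active feature."""
    features = np.zeros(n_tilings * n_tiles)
    tile_width = (high - low) / n_tiles
    for t in range(n_tilings):
        offset = t * tile_width / n_tilings          # evenly staggered tilings
        idx = int((x - low + offset) / tile_width)
        idx = min(idx, n_tiles - 1)                  # clamp at the upper edge
        features[t * n_tiles + idx] = 1.0
    return features

phi = tile_code(0.37)
print(phi.reshape(4, 8))   # one active tile per tiling
```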
|
Recent works have studied implicit biases in deep learning, especially the
behavior of last-layer features and classifier weights. However, they usually
need to simplify the intermediate dynamics under gradient flow or gradient
descent due to the intractability of loss functions and model architectures. In
this paper, we introduce the unhinged loss, a concise loss function that
offers more mathematical opportunities to analyze the closed-form dynamics
while requiring as few simplifications or assumptions as possible. The unhinged
loss allows for considering more practical techniques, such as time-varying
learning rates and feature normalization. Based on the layer-peeled model that
views last-layer features as free optimization variables, we conduct a thorough
analysis in the unconstrained, regularized, and spherical constrained cases, as
well as the case where the neural tangent kernel remains invariant. To bridge
the performance of the unhinged loss to that of Cross-Entropy (CE), we
investigate the scenario of fixing classifier weights with a specific
structure (e.g., a simplex equiangular tight frame). Our analysis shows that
these dynamics converge exponentially fast to a solution depending on the
initialization of features and classifier weights. These theoretical results
not only offer valuable insights, including explicit feature regularization and
rescaled learning rates for enhancing practical training with the unhinged
loss, but also extend their applicability to other loss functions. Finally, we
empirically demonstrate these theoretical results and insights through
extensive experiments.
|
We perform simulations to test the effects of a moving gas filament on a
young star cluster (i.e. the "Slingshot" Model). We model Orion Nebula
Cluster-like clusters as Plummer spheres and the Integral Shaped Filament gas
as a cylindrical potential. We observe that in a static filament, an initially
spherical cluster evolves naturally into an elongated distribution of stars.
For sinusoidally moving filaments, we observe different remnants and classify
them into four categories. "Healthy" clusters, where almost all the stars stay
inside the filament and the
cluster; "destroyed" clusters are the opposite case, with almost no particles
in the filament or near the centre of density of the clusters; "ejected"
clusters, where a large fraction of stars are close to the centre of density of
the stars, but almost none of them in the filament; and "transition" clusters,
where roughly the same number of particles is ejected from the cluster and from
the filament. An Orion Nebula Cluster-like cluster might stay inside the
filament or be ejected, but it will not be destroyed.
|
In this paper, we construct $2n-1$ locally indistinguishable orthogonal
product states in $\mathbb{C}^n\otimes\mathbb{C}^{4}~(n>4)$ and
$\mathbb{C}^n\otimes\mathbb{C}^{5}~(n\geq 5)$ respectively. Moreover, a set of
locally indistinguishable orthogonal product states with $2(n+2l)-8$ elements
in $\mathbb{C}^n\otimes\mathbb{C}^{2l}~(n\geq 2l>4)$ and a class of locally
indistinguishable orthogonal product states with $2(n+2k+1)-7$ elements in
$\mathbb{C}^n\otimes\mathbb{C}^{2k+1}~(n\geq 2k+1>5)$ are also constructed
respectively. These classes of quantum states are then shown to be
distinguishable by local operations and classical communication (LOCC) using a
suitable $\mathbb{C}^2\otimes\mathbb{C}^2$ maximally entangled state
respectively.
|
This thesis consists of two parts. The first part is about how quantum theory
can be recovered from first principles, while the second part is about the
application of diagrammatic reasoning, specifically the ZX-calculus, to
practical problems in quantum computing. The main results of the first part
include a reconstruction of quantum theory from principles related to
properties of sequential measurement and a reconstruction based on properties
of pure maps and the mathematics of effectus theory. It also includes a
detailed study of JBW-algebras, a type of infinite-dimensional Jordan algebra
motivated by von Neumann algebras. In the second part we find a new model for
measurement-based quantum computing, study how measurement patterns in the
one-way model can be simplified and find a new algorithm for extracting a
unitary circuit from such patterns. We use these results to develop a circuit
optimisation strategy that leads to a new normal form for Clifford circuits and
reductions in the T-count of Clifford+T circuits.
|
Smooth primitively polarized $\mathrm{K3}$ surfaces of genus 36 are studied.
It is proved that all such surfaces $S$, for which there exists an embedding
$\mathrm{R} \hookrightarrow \mathrm{Pic}(S)$ of some special lattice
$\mathrm{R}$ of rank 2, are parameterized up to an isomorphism by some
18-dimensional unirational algebraic variety. More precisely, it is shown that
a general $S$ is an anticanonical section of a (unique) Fano 3-fold with
canonical Gorenstein singularities.
|
The abundance of collapsed objects in the universe, or halo mass function, is
an important theoretical tool in studying the effects of primordially generated
non-Gaussianities on the large scale structure. The non-Gaussian mass function
has been calculated by several authors in different ways, typically by
exploiting the smallness of certain parameters which naturally appear in the
calculation, to set up a perturbative expansion. We improve upon the existing
results for the mass function by combining path integral methods and saddle
point techniques (which have been separately applied in previous approaches).
Additionally, we carefully account for the various scale dependent combinations
of small parameters which appear. Some of these combinations in fact become of
order unity for large mass scales and at high redshifts, and must therefore be
treated non-perturbatively. Our approach allows us to do this, and to also
account for multi-scale density correlations which appear in the calculation.
We thus derive an accurate expression for the mass function which is based on
approximations that are valid over a larger range of mass scales and redshifts
than those of other authors. By tracking the terms ignored in the analysis, we
estimate theoretical errors for our result and also for the results of others.
We also discuss the complications introduced by the choice of smoothing filter
function, which we take to be a top-hat in real space, and which leads to the
dominant errors in our expression. Finally, we present a detailed comparison
between the various expressions for the mass functions, exploring the accuracy
and range of validity of each.
|
We discuss the potential for using microfabrication techniques for
laser-driven accelerator construction. We introduce microfabrication processes
in general, and then describe our investigation of a particular trial process.
We conclude by considering the issues microfabrication raises for possible
future structures.
|
Gravity/fluid correspondence has become an important tool for investigating
strongly correlated fluids. We carefully investigate the holographic fluids at
the finite cutoff surface by considering different boundary conditions in the
scenario of gravity/fluid correspondence. We find that the sonic velocity of
the boundary fluids at the finite cutoff surface is critical to clarify the
superficial similarity between bulk viscosity and perturbation of the pressure
for the holographic fluid, where we set a special boundary condition at the
finite cutoff surface to explicitly express this superficial similarity.
Moreover, we further take the sonic velocity into account to investigate a case
with a more general boundary condition. In this case, although two parameters
in the first order stress tensor of the holographic fluid cannot be fixed, one
can still extract the information on transport coefficients by treating the
sonic velocity carefully.
|
We investigate some regularity properties of homomorphisms of local
algebras over fields of positive characteristic. We state a result of
monomialization of such a homomorphism between algebras of analytic or
algebraic power series. From this we deduce some extensions in positive
characteristic of results due to S. Izumi and A. Gabrielov.
|
We extend the measurement range of optical correlation-domain reflectometry
(OCDR) by modulating the laser output frequency at two frequencies, while
preserving spatial resolution. We demonstrate distributed reflectivity sensing
with a ten-fold extended measurement range.
|
NGC 602 is an outstanding young open cluster in the Small Magellanic Cloud.
We have analyzed the new HI data taken with the Galactic Australian Square
Kilometre Array Pathfinder survey project at an angular resolution of 30". The
results show that there are three velocity components in the NGC 602 region. We
found that two of them, having ~20 km s$^{-1}$ velocity separation, show a
complementary spatial distribution with a displacement of 147 pc. We present a
scenario that the two clouds collided with each other and triggered the
formation of NGC 602 and eleven O stars. The average time scale of the
collision is estimated to be ~8 Myr, while the collision may have continued
over a few Myr. The redshifted HI cloud, extending ~500 pc, possibly flows to
the Magellanic Bridge, which was driven by the close encounter with the Large
Magellanic Cloud 200 Myr ago (Fujimoto & Noguchi 1990; Muller & Bekki 2007).
Along with the RMC136 and LHA 120-N 44 regions, the present results lend
support to the idea that the galaxy interaction played a role in forming
high-mass stars and clusters.
|
We report that a triangular Fabry-Perot resonator filled with a parity-odd
linear anisotropic medium exhibiting the one-way light speed anisotropy acts as
a perfect diode. A linear crystal, such as a nematic liquid crystal whose
molecular structure breaks parity, can exhibit the one-way light speed
anisotropy. The one-way light speed anisotropy can also be induced in a
non-linear medium in the presence of constant electric and magnetic field
strengths.
|
Coherent control is an optical technique to manipulate quantum states of
matter. The coherent control of 40-THz optical phonons in diamond was
demonstrated by using a pair of sub-10-fs optical pulses. The optical phonons
were detected via transient transmittance using a pump and probe protocol. The
optical and phonon interferences were observed in the transient transmittance
change, and their behavior was well reproduced by quantum mechanical calculations
with a simple model which consists of two electronic levels and shifted
harmonic oscillators.
|
As early as the 1920s, Marshall suggested that firms co-locate in cities to
reduce the costs of moving goods, people, and ideas. These 'forces of
agglomeration' have given rise, for example, to the high tech clusters of San
Francisco and Boston, and the automobile cluster in Detroit. Yet, despite its
importance for city planners and industrial policy-makers, until recently there
has been little success in estimating the relative importance of each
Marshallian channel to the location decisions of firms.
Here we explore a burgeoning literature that aims to exploit the co-location
patterns of industries in cities in order to disentangle the relationship
between industry co-agglomeration and customer/supplier, labour and idea
sharing. Building on previous approaches that focus on across- and
between-industry estimates, we propose a network-based method to estimate the
relative importance of each Marshallian channel at a meso scale. Specifically,
we use a community detection technique to construct a hierarchical
decomposition of the full set of industries into clusters based on
co-agglomeration patterns, and show that these industry clusters exhibit
distinct patterns in terms of their relative reliance on individual Marshallian
channels.
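As an illustration of the meso-scale step described above, the sketch below runs modularity-based community detection on a toy co-agglomeration network. The industries, edge weights, and the specific algorithm (greedy modularity maximization in networkx) are assumptions for illustration, not necessarily the technique used in the paper.

```python
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

# Toy co-agglomeration network: nodes are industries, edge weights are
# (hypothetical) co-location strengths between pairs of industries.
edges = [
    ("software", "semiconductors", 0.9), ("software", "biotech", 0.4),
    ("semiconductors", "biotech", 0.3), ("auto_assembly", "auto_parts", 0.8),
    ("auto_parts", "steel", 0.6), ("auto_assembly", "steel", 0.5),
    ("software", "auto_parts", 0.1),
]
G = nx.Graph()
G.add_weighted_edges_from(edges)

# Modularity-based communities stand in for one level of the hierarchical
# decomposition of industries into clusters described in the abstract.
clusters = greedy_modularity_communities(G, weight="weight")
for i, cluster in enumerate(clusters):
    print(f"cluster {i}: {sorted(cluster)}")
```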
|
We present a formulation of special relativistic, dissipative hydrodynamics
(SRDHD) derived from the well-established M\"uller-Israel-Stewart (MIS)
formalism using an expansion in deviations from ideal behaviour. By re-summing
the non-ideal terms, our approach extends the Euler equations of motion for an
ideal fluid through a series of additional source terms that capture the
effects of bulk viscosity, shear viscosity and heat flux. For efficiency these
additional terms are built from purely spatial derivatives of the primitive
fluid variables. The series expansion is parametrized by the dissipation
strength and timescale coefficients, and is therefore rapidly convergent near
the ideal limit. We show, using numerical simulations, that our model
reproduces the dissipative fluid behaviour of other formulations. As our
formulation is designed to avoid the numerical stiffness issues that arise in
the traditional MIS formalism for fast relaxation timescales, it is roughly an
order of magnitude faster than standard methods near the ideal limit.
|
This paper investigates the use of graph rewriting systems as a modelling
tool, and advocates the embedding of such systems in an interactive
environment. One important application domain is the modelling of biochemical
systems, where states are represented by port graphs and the dynamics is driven
by rules and strategies. A graph rewriting tool's capability to interactively
explore the features of the rewriting system provides useful insights into
possible behaviours of the model and its properties. We describe PORGY, a
visual and interactive tool we have developed to model complex systems using
port graphs and port graph rewrite rules guided by strategies, and to navigate
in the derivation history. We demonstrate via examples some functionalities
provided by PORGY.
|
We investigate the minimal surface problem in the three dimensional
Heisenberg group, H, equipped with its standard Carnot-Caratheodory metric.
Using a particular surface measure, we characterize minimal surfaces in terms
of a sub-elliptic partial differential equation and prove an existence result
for the Plateau problem in this setting. Further, we provide a link between our
minimal surfaces and Riemannian constant mean curvature surfaces in H equipped
with different Riemannian metrics approximating the Carnot-Caratheodory metric.
We generate a large library of examples of minimal surfaces and use these to
show that the solution to the Dirichlet problem need not be unique. Moreover,
we show that the minimal surfaces we construct are in fact X-minimal surfaces
in the sense of Garofalo and Nhieu.
|
We have investigated the collisional properties of 41K atoms at ultracold
temperature. To show the possibility of using 41K as a coolant, a Bose-Einstein
condensate of 41K atoms in the stretched state (F=2, m_F=2) was created for the
first time by direct evaporation in a magnetic trap. An upper bound on the three
body loss coefficient for atoms in the condensate was determined to be 4(2) x
10^{-29} cm^6 s^{-1}. A Feshbach resonance in the F=1, m_F=-1 state was observed
at 51.42(5) G, which is in good agreement with theoretical prediction.
|
The electronic structure of Mn doped GaAs and GaN has been examined within a
multiband Hubbard model. By virtue of the positioning of the Mn d states, Mn
doped GaAs is found to belong to the p-d metal regime of the
Zaanen-Sawatzky-Allen phase diagram and its variants while Mn doping in GaN
belongs to the covalent insulator regime. Their location in the phase diagram
also determines how they would behave under quantum confinement which would
increase the charge transfer energy. The ferromagnetic stability of Mn doped
GaAs, we find, increases with confinement, therefore providing a route to higher
ferromagnetic transition temperatures.
|
The Drinfeld-Sokolov reduction method has been used to associate with $gl_n$
extensions of the matrix r-KdV system. Reductions of these systems to the fixed
point sets of involutive Poisson maps, implementing reduction of $gl_n$ to
classical Lie algebras of type $B, C, D$, are here presented. Modifications
corresponding, in the first place to factorisation of the Lax operator, and
then to Wakimoto realisations of the current algebra components of the
factorisation, are also described.
|
We investigate the relationship between the emergence of chaos
synchronization and the information flow in dynamical systems possessing
homogeneous or heterogeneous global interactions whose origin can be external
(driven systems) or internal (autonomous systems). By employing general models
of coupled chaotic maps for such systems, we show that the presence of a
homogeneous global field, either external or internal, for all times is not
indispensable for achieving complete or generalized synchronization in a system
of chaotic elements. Complete synchronization can also appear with
heterogeneous global fields; it does not require the simultaneous sharing of
the field by all the elements in a system. We use the normalized mutual
information and the information transfer between global and local variables to
characterize complete and generalized synchronization. We show that these
information measures can characterize both types of synchronized states and
also allow us to discern the origin of a global interaction field. A
synchronization state emerges when a sufficient amount of information provided
by a field is shared by all the elements in the system, on the average over
long times. Thus, the maximum value of the top-down information transfer can be
used as a predictor of synchronization in a system, as a parameter is varied.
|
Generics express generalizations about the world (e.g., birds can fly) that
are not universally true (e.g., newborn birds and penguins cannot fly).
Commonsense knowledge bases, used extensively in NLP, encode some generic
knowledge but rarely enumerate such exceptions, and knowing when a generic
statement holds or does not hold true is crucial for developing a comprehensive
understanding of generics. We present a novel framework informed by linguistic
theory to generate exemplars -- specific cases when a generic holds true or
false. We generate ~19k exemplars for ~650 generics and show that our framework
outperforms a strong GPT-3 baseline by 12.8 precision points. Our analysis
highlights the importance of linguistic theory-based controllability for
generating exemplars, the insufficiency of knowledge bases as a source of
exemplars, and the challenges exemplars pose for the task of natural language
inference.
|
We propose a criterion for optimal parameter selection in coarse-grained
models of proteins, and develop a refined elastic network model (ENM) of bovine
trypsinogen. The unimodal density-of-states distribution of the trypsinogen ENM
disagrees with the bimodal distribution obtained from an all-atom model;
however, the bimodal distribution is recovered by strengthening interactions
between atoms that are backbone neighbors. We use the backbone-enhanced model
to analyze allosteric mechanisms of trypsinogen, and find relatively strong
communication between the regulatory and active sites.
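The backbone-enhanced idea can be sketched with a Gaussian-network-style connectivity matrix in which consecutive residues get a stiffer spring constant. The cutoff, spring constants, and the synthetic chain coordinates below are illustrative assumptions, not the refined trypsinogen model of the paper.

```python
import numpy as np

def kirchhoff(coords, cutoff=7.0, k_contact=1.0, k_backbone=3.0):
    """Gaussian-network-style connectivity (Kirchhoff) matrix: residue
    pairs within the cutoff interact with spring constant k_contact,
    while consecutive backbone neighbors get the stiffer k_backbone."""
    n = len(coords)
    K = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            if np.linalg.norm(coords[i] - coords[j]) <= cutoff:
                k = k_backbone if j == i + 1 else k_contact
                K[i, j] = K[j, i] = -k
    K[np.diag_indices(n)] = -K.sum(axis=1)
    return K

# Toy chain of 50 "residues"; the eigenvalue spectrum of the Kirchhoff
# matrix stands in for the density-of-states comparison in the abstract.
rng = np.random.default_rng(1)
coords = np.cumsum(rng.normal(scale=3.0, size=(50, 3)), axis=0)
modes = np.linalg.eigvalsh(kirchhoff(coords))
print("lowest nonzero eigenvalues:", modes[1:6].round(3))
```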
|
We study theoretically the differential conductance at a junction between a
time reversal symmetry broken spin orbit coupled system with a tunable band gap
and a superconductor. We look for spin-dependent Andreev reflection (i.e.,
sub-gap transport) and show that when various mass terms compete in energy,
there is a substantial difference in the Andreev reflection probability depending on
the spin of the incident electron. We further analyze the origin of such
spin-dependence and show how the incident angle of the electrons controls the
spin-dependence of the transport.
|
We study a variant of the Erd\H os unit distance problem, concerning angles
between successive triples of points chosen from a large finite point set.
Specifically, given a large finite set of $n$ points $E$, and a sequence of
angles $(\alpha_1,\ldots,\alpha_k)$, we give upper and lower bounds on the
maximum possible number of tuples of distinct points $(x_1,\dots, x_{k+2})\in
E^{k+2}$ satisfying $\angle (x_j,x_{j+1},x_{j+2})=\alpha_j$ for every $1\le j
\le k$ as well as pinned analogues.
|
A double stranded DNA molecule when pulled with a force acting on one end of
the molecule can become either partially or completely unzipped depending on
the magnitude of the force F. For a random DNA sequence, the number M of
unzipped base pairs goes as M~(F-Fc)^(-2) and diverges at the critical force Fc
with an exponent \chi=2. We find that when excluded volume effect is taken into
account for the unzipped part of the DNA, the exponent \chi=2 is not changed
but the critical force Fc is changed. The force versus temperature phase
diagram depends on only two parameters in the model, the persistence length and
the denaturation temperature. Furthermore, a scaling form of the phase diagram
can be found. This scaling form is parameter independent and depends only on
the spatial dimension. It applies to all DNA molecules and should provide a
useful framework for comparison with experiments.
|
In this study, we propose a novel multi-modal end-to-end neural approach for
automated assessment of non-native English speakers' spontaneous speech using
attention fusion. The pipeline employs Bi-directional Recurrent Convolutional
Neural Networks and Bi-directional Long Short-Term Memory Neural Networks to
encode acoustic and lexical cues from spectrograms and transcriptions,
respectively. Attention fusion is performed on these learned predictive
features to learn complex interactions between different modalities before
final scoring. We compare our model with strong baselines and find combined
attention to both lexical and acoustic cues significantly improves the overall
performance of the system. Further, we present a qualitative and quantitative
analysis of our model.
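The attention-fusion step can be illustrated with a minimal sketch that softmax-weights two already-encoded modality vectors. The dimensions, random vectors, and single attention vector below are placeholders, not the trained BiRCNN/BiLSTM encoders or the fusion layer of the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-ins for the already-encoded acoustic and lexical representations
# (in the paper these come from BiRCNN and BiLSTM encoders).
acoustic = rng.normal(size=64)
lexical = rng.normal(size=64)

# Attention fusion: score each modality with a (here random) attention
# vector, softmax the scores, and take the weighted sum of the modality
# vectors as the fused representation fed to the final scorer.
modalities = np.stack([acoustic, lexical])       # shape (2, 64)
w_att = rng.normal(size=64)                      # hypothetical attention weights
scores = modalities @ w_att                      # one score per modality
alpha = np.exp(scores) / np.exp(scores).sum()    # softmax over modalities
fused = alpha @ modalities                       # shape (64,)
print("attention weights (acoustic, lexical):", alpha.round(3))
```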
|
We briefly review the theoretical formulations and applications of the
Aharonov--Bohm effect and the Aharonov--Casher effect with emphasis on
mesoscopic physics. Topics relating to the Aharonov--Bohm effect include:
locality, periodicity, non-integrable phase factors, Abelian gauge theory,
interference, the spectrum and persistent current of electrons on a ring
pierced by a magnetic field, Onsager reciprocity relations, and Aharonov--Bohm
interferometer. Topics relating to the Aharonov--Casher effect include: a
magnetic dipole in an electric field, locality, periodicity, non-Abelian gauge
invariance, SU(2) non-integrable phase factors, spin-orbit coupling, Pauli
equation, Rashba Hamiltonian, Aharonov--Casher interferometer, conductance and
polarization in two-channel systems due to the Aharonov--Casher effect.
|
We present a method for calculating the maximum elastic quadrupolar
deformations of relativistic stars, generalizing the previous Newtonian,
Cowling approximation integral given by [G. Ushomirsky et al., Mon. Not. R.
Astron. Soc. 319, 902 (2000)]. (We also present a method for Newtonian gravity
with no Cowling approximation.) We apply these methods to the m = 2 quadrupoles
most relevant for gravitational radiation in three cases: crustal deformations,
deformations of crystalline cores of hadron-quark hybrid stars, and
deformations of entirely crystalline color superconducting quark stars. In all
cases, we find suppressions of the quadrupole due to relativity compared to the
Newtonian Cowling approximation, particularly for compact stars. For the crust
these suppressions are up to a factor ~6, for hybrid stars they are up to ~4,
and for solid quark stars they are at most ~2, with slight enhancements instead
for low mass stars. We also explore wider ranges of masses and equations of
state than in previous work, and find that for some parameters the maximum
quadrupoles can still be very large. Even with the relativistic suppressions,
we find that 1.4 solar mass stars can sustain crustal quadrupoles of a few
times 10^39 g cm^2 for the SLy equation of state or close to 10^40 g cm^2 for
equations of state that produce less compact stars. Solid quark stars of 1.4
solar masses can sustain quadrupoles of around 10^44 g cm^2. Hybrid stars
typically do not have solid cores at 1.4 solar masses, but the most massive
ones (~2 solar masses) can sustain quadrupoles of a few times 10^41 g cm^2 for
typical microphysical parameters and a few times 10^42 g cm^2 for extreme ones.
All of these quadrupoles assume a breaking strain of 0.1 and can be divided by
10^45 g cm^2 to yield the fiducial "ellipticities" quoted elsewhere.
|
An integral scheme for the efficient evaluation of two-center integrals over
contracted solid harmonic Gaussian functions is presented. Integral expressions
are derived for local operators that depend on the position vector of one of
the two Gaussian centers. These expressions are then used to derive the formula
for three-index overlap integrals where two of the three Gaussians are located
at the same center. The efficient evaluation of the latter is essential for
local resolution-of-the-identity techniques that employ an overlap metric. We
compare the performance of our integral scheme to the widely used Cartesian
Gaussian-based method of Obara and Saika (OS). Non-local interaction potentials
such as standard Coulomb, modified Coulomb and Gaussian-type operators, that
occur in range-separated hybrid functionals, are also included in the
performance tests. The speed-up with respect to the OS scheme is up to three
orders of magnitude for both integrals and their derivatives. In particular,
our method is increasingly efficient for large angular momenta and highly
contracted basis sets.
|
A knowledge graph (KG) consists of a set of interconnected typed entities and
their attributes. Recently, KGs have been widely used as auxiliary information
to enable more accurate, explainable, and diverse user preference
recommendations. Specifically, existing KG-based recommendation methods target
modeling high-order relations/dependencies from long-range user-item
connectivity hidden in the KG. However, most of them ignore the cold-start problems
(i.e., user cold-start and item cold-start) of recommendation analytics, which
restricts their performance in scenarios when involving new users or new items.
Inspired by the success of meta-learning on scarce training samples, we propose
a novel meta-learning based framework called MetaKG, which encompasses a
collaborative-aware meta learner and a knowledge-aware meta learner, to capture
meta users' preference and entities' knowledge for cold-start recommendations.
The collaborative-aware meta learner aims to locally aggregate user preferences
for each user preference learning task. In contrast, the knowledge-aware meta
learner is to globally generalize knowledge representation across different
user preference learning tasks. Guided by two meta learners, MetaKG can
effectively capture the high-order collaborative relations and semantic
representations, which could be easily adapted to cold-start scenarios.
Besides, we devise a novel adaptive task scheduler which can adaptively select
the informative tasks for meta learning in order to prevent the model from
being corrupted by noisy tasks. Extensive experiments on various cold-start
scenarios using three real data sets demonstrate that our presented MetaKG
outperforms all the existing state-of-the-art competitors in terms of
effectiveness, efficiency, and scalability.
|
Recent years have witnessed a significant increase in the number of paper
submissions to computer vision conferences. The sheer volume of paper
submissions and the insufficient number of competent reviewers cause a
considerable burden for the current peer review system. In this paper, we learn
a classifier to predict whether a paper should be accepted or rejected based
solely on the visual appearance of the paper (i.e., the gestalt of a paper).
Experimental results show that our classifier can safely reject 50% of the bad
papers while wrongly rejecting only 0.4% of the good papers, thus dramatically
reducing the workload of the reviewers. We also provide tools that offer
suggestions to authors so that they can improve the gestalt of their papers.
|
Let $S_n$ denote the symmetric group of degree $n$ with $n\geq 3$. Set
$S=\{c_n=(1\ 2\ldots \ n),c_n^{-1},(1\ 2)\}$. Let
$\Gamma_n=\mathrm{Cay}(S_n,S)$ be the Cayley graph on $S_n$ with respect to
$S$. In this paper, we show that $\Gamma_n$ ($n\geq 13$) is a normal Cayley
graph, and that the full automorphism group of $\Gamma_n$ is equal to
$\mathrm{Aut}(\Gamma_n)=R(S_n)\rtimes \langle\mathrm{Inn}(\phi)\rangle\cong
S_n\rtimes \mathbb{Z}_2$, where $R(S_n)$ is the right regular representation of
$S_n$, $\phi=(1\ 2)(3\ n)(4\ n-1)(5\ n-2)\cdots$ $(\in S_n)$, and
$\mathrm{Inn}(\phi)$ is the inner automorphism of $S_n$ induced by $\phi$.
|
The results of a full simulation of an endcap Time-of-Flight detector upgrade
based on multigap resistive plate chambers for the BESIII experiment are
presented. The simulation and reconstruction software is based on Geant4 and
has been implemented into the BESIII Offline Software System. The results of
the simulations are compared with beam test results and it is shown that a
total time resolution $\sigma$ of about 80 ps can be achieved allowing for a
pion and kaon separation up to momenta of 1.4 GeV/c at a 95% confidence level.
|
The two-dimensional regular and chaotic electro-convective flow states of a
dielectric liquid between two infinite parallel planar electrodes are
investigated using a two-relaxation-time lattice Boltzmann method. Positive
charges injected at the metallic planar electrode located at the bottom of the
dielectric liquid layer are transported towards the grounded upper electrode by
the synergy of the flow and the electric field. The various flow states can be
characterized by a non-dimensional parameter, the electric Rayleigh number.
Gradually increasing the electric Rayleigh number, the flow system sequentially
evolves via quasi-periodic, periodic, and chaotic flow states with five
identified bifurcations. The turbulence kinetic energy spectrum is shown to
follow the -3 law as the flow approaches turbulence. The spectrum is found to
follow a -5 law when the flow is periodic.
|
Intrinsic noise in objective function and derivatives evaluations may cause
premature termination of optimization algorithms. Evaluation complexity bounds
taking this situation into account are presented in the framework of a
deterministic trust-region method. The results show that the presence of
intrinsic noise may dominate these bounds, in contrast with what is known for
methods in which the inexactness in function and derivatives' evaluations is
fully controllable. Moreover, the new analysis provides estimates of the
optimality level achievable, should noise cause early termination. It finally
sheds some light on the impact of inexact computer arithmetic on evaluation
complexity.
|
We study the KPZ equation on a torus and derive Gaussian fluctuations in
large time.
|
We explore the possibility that the large electroclinic effect observed in
ferroelectric liquid crystals arises from the presence of an ordered array of
disclination lines and walls. If the spacing of these defects is in the
subvisible range, this modulated phase would be similar macroscopically to a
smectic A phase. The application of an electric field distorts the array,
producing a large polarization, and hence a large electroclinic effect. We show
that with suitable elastic parameters and sufficiently large chirality, the
modulated phase is favored over the smectic A and helically twisted smectic C*
phases. We propose various experimental tests of this scenario.
|
Graph clustering is a fundamental problem that has been extensively studied
both in theory and practice. The problem has been defined in several ways in
the literature, and most of them have been proven to be NP-hard. Due to their high
practical relevancy, several heuristics for graph clustering have been
introduced which constitute a central tool for coping with NP-completeness, and
are used in applications of clustering ranging from computer vision, to data
analysis, to learning. There exist many methodologies for this problem; however,
most of them are global in nature and are unlikely to scale well for very large
networks. In this paper, we propose two scalable local approaches for
identifying the clusters in any network. We further extend one of these
approaches for discovering the overlapping clusters in these networks. Some
experimental results obtained with the proposed approaches are also
presented.
|
With its electrically tunable light absorption and ultrafast photoresponse,
graphene is a promising candidate for high-speed chip-integrated photonics. The
generation mechanisms of photosignals in graphene photodetectors have been
studied extensively in the past years. However, the knowledge about efficient
light conversion at graphene pn-junctions has not yet been translated into
high-performance devices. Here, we present a graphene photodetector integrated
on a silicon slot-waveguide, acting as a dual-gate to create a pn-junction in
the optical absorption region of the device. While at zero bias the
photo-thermoelectric effect is the dominant conversion process, an additional
photoconductive contribution is identified in a biased configuration. Extrinsic
responsivities of 35 mA/W, or 3.5 V/W, at zero bias and 76 mA/W at 300 mV bias
voltage are achieved. The device exhibits a 3 dB-bandwidth of 65 GHz, which is
the highest value reported for a graphene-based photodetector.
|
In this paper, we show that there is a large class of fermionic systems for
which it is possible to find, for any dimension, a finite closed set of
eigenoperators and eigenvalues of the Hamiltonian. Then, the hierarchy of the
equations of motion closes and analytical expressions for the Green's functions
are obtained in terms of a finite number of parameters, to be self-consistently
determined. Several examples are given. In particular, for these examples it is
shown that in the one-dimensional case it is possible to derive by means of
algebraic constraints a set of equations which allow us to determine the
self-consistent parameters and to obtain a complete exact solution.
|
In this note, we calculate the electronic properties of a realistic atomistic
model of amorphous graphene. The model contains odd-membered rings, in
particular five- and seven-membered rings, and no coordination defects. We show
that odd-membered rings increase the electronic density of states at the Fermi
level relative to crystalline graphene, a honeycomb lattice with semi-metallic
character. Some graphene samples contain amorphous regions, which even at small
concentrations, may strongly affect many of the exotic properties of
crystalline graphene, which arise because of the linear dispersion and
semi-metallic character of perfectly crystalline graphene. Estimates are given
for the density of states at the Fermi level using a tight-binding model for
the $\pi$ states.
|
Having smaller energy density than batteries, supercapacitors have
exceptional power density and cyclability. Their energy density can be
increased using ionic liquids and electrodes with sub-nanometer pores, but this
tends to reduce their power density and compromise the key advantage of
supercapacitors. To help address this issue through material optimization, here
we unravel the mechanisms of charging sub-nanometer pores with ionic liquids
using molecular simulations, navigated by a phenomenological model. We show
that charging of ionophilic pores is a diffusive process, often accompanied by
overfilling followed by de-filling. In sharp contrast to conventional
expectations, charging is fast because ion diffusion during charging can be an
order of magnitude faster than in bulk, and charging itself is accelerated by
the onset of collective modes. Further acceleration can be achieved using
ionophobic pores by eliminating overfilling/de-filling and thus leading to
charging behavior qualitatively different from that in conventional, ionophilic
pores.
|
In recent years, several proof-of-principle experiments have demonstrated
the advantages of quantum technologies with respect to classical schemes. The
present challenge is to go beyond proof-of-principle demonstrations and
approach real applications. This letter presents such an achievement in the
field of quantum enhanced imaging. In particular, we
describe the realization of a sub-shot noise wide field microscope based on
spatially multi-mode non-classical photon number correlations in twin beams.
The microscope produces real time images of 8000 pixels at full resolution, for
a (500 micrometers)^2 field of view, with noise reduced to 80% of the shot
noise level (for each pixel), suitable for absorption imaging of complex
structures. By fast post-processing, specifically applying a quantum enhanced
median filter, the noise can be further reduced (less than 30% of the shot
noise level) by setting a trade-off with the resolution, demonstrating the best
sensitivity per incident photon ever achieved in absorption microscopy.
|
We investigate the thermodynamical behavior and the scaling symmetries of the
scalar dressed black brane (BB) solutions of a recently proposed, exactly
integrable Einstein-scalar gravity model [1], which also arises as
compactification of (p-1)-branes with a smeared charge. The extremal, zero
temperature, solution is a scalar soliton interpolating between a conformal
invariant AdS vacuum in the near-horizon region and a scale covariant metric
(generating hyperscaling violation on the boundary field theory)
asymptotically. We show explicitly that for the boundary field theory this
implies the emergence of a UV length scale (related to the size of the brane),
which decouples in the IR, where conformal invariance is restored. We also show
that at high temperatures the system undergoes a phase transition. Whereas at
small temperature the Schwarzschild-AdS BB is stable, above a critical
temperature the scale covariant, scalar-dressed BB solution becomes
energetically preferred. We calculate the critical exponent z and the
hyperscaling violation parameter of the scalar-dressed phase. In particular we
show that the hyperscaling violation parameter is always negative. We also show
that the above features are not a peculiarity of the exact integrable model of
Ref.[1], but are a quite generic feature of Einstein-scalar and
Einstein-Maxwell-scalar gravity models for which the squared-mass of the scalar
field is positive and the potential vanishes exponentially as the scalar field
goes to minus infinity.
|
Recent experimental observation of weak ergodicity breaking in Rydberg atom
quantum simulators has sparked interest in quantum many-body scars -
eigenstates which evade thermalisation at finite energy densities due to novel
mechanisms that do not rely on integrability or protection by a global
symmetry. A salient feature of some quantum many-body scars is their sub-volume
bipartite entanglement entropy. In this work we demonstrate that such exact
many-body scars also possess extensive multipartite entanglement structure if
they stem from an su(2) spectrum generating algebra. We show this analytically,
through scaling of the quantum Fisher information, which is found to be
super-extensive for exact scarred eigenstates in contrast to generic thermal
states. Furthermore, we numerically study signatures of multipartite
entanglement in the PXP model of Rydberg atoms, showing that extensive quantum
Fisher information density can be generated dynamically by performing a global
quench experiment. Our results identify a rich multipartite correlation
structure of scarred states with significant potential as a resource in quantum
enhanced metrology.
|
We construct a sequence of modular compactifications of the space of marked
trigonal curves by allowing the branch points to coincide to a given extent.
Beginning with the standard admissible cover compactification, the sequence
first proceeds through contractions of the boundary divisors and then through
flips of the so-called Maroni strata, culminating in a Fano model for even
genera and a Fano fibration for odd genera. While the sequence of divisorial
contractions arises from a more general construction, the sequence of flips
uses the particular geometry of triple covers. We explicitly describe the Mori
chamber decomposition given by this sequence of flips.
|
We investigate the R\'enyi entropy and entanglement entropy of an interval
with an arbitrary length in the canonical ensemble, microcanonical ensemble and
primary excited states at large energy density in the thermodynamic limit of a
two-dimensional large central charge $c$ conformal field theory. As a
generalization of the recent work [Phys. Rev. Lett. 122 (2019) 041602], the
main purpose of the paper is to see whether one can distinguish these various
large energy density states by the R\'enyi entropies of an interval at
different size scales, namely, short, medium and long. Collecting earlier
results and performing new calculations in order to compare with and fill gaps
in the literature, we give a more complete and detailed analysis of the
problem. Especially, we find some corrections to the recent results for the
holographic R\'enyi entropy of a medium size interval, which enlarge the
validity region of the results. Based on the R\'enyi entropies of the three
interval scales, we find that R\'enyi entropy cannot distinguish the canonical
and microcanonical ensemble states for a short interval, but can do the job for
both medium and long intervals. At the leading order of large $c$ the
entanglement entropy cannot distinguish the canonical and microcanonical
ensemble states for all interval lengths, but the difference of entanglement
entropy for a long interval between the two states would appear with $1/c$
corrections. We also discuss R\'enyi entropy and entanglement entropy
differences between the thermal states and primary excited state. Overall, our
work provides an up-to-date picture of distinguishing different thermal or
primary states at various length scales of the subsystem.
|
The region of small transverse momentum in q-qbar- and gg-initiated processes
must be studied in the framework of resummation to account for the large,
logarithmically-enhanced contributions to physical observables. In this letter,
we study resummed differential cross-sections for Higgs production via
bottom-quark fusion. We find that the differential distribution peaks at
approximately 15 GeV, a number of great experimental importance to measuring
this production channel.
|
Given an abelian $p$-group $G$ of rank $n$, we construct an action of the
torus $\mathbb{T}^n$ on the stable module $\infty$-category of
$G$-representations over a field of characteristic $p$. The homotopy fixed
points are given by the $\infty$-category of module spectra over the Tate
construction of the torus. The relationship thus obtained arises from a Galois
extension in the sense of Rognes, with Galois group given by the torus. As one
application, we give a homotopy-theoretic proof of Dade's classification of
endotrivial modules for abelian $p$-groups. As another application, we give a
slight variant of a key step in the Benson-Iyengar-Krause proof of the
classification of localizing subcategories of the stable module category.
|
We present a novel approach to disentangle two key contributions to the
largest-scale anisotropy of the galaxy distribution: (i) the intrinsic dipole
due to clustering and anisotropic geometry, and (ii) the kinematic dipole due
to our peculiar velocity. Including the redshift and angular size of galaxies,
in addition to their fluxes and positions, allows us to measure both the
direction and amplitude of our velocity independently of the intrinsic dipole
of the source distribution. We find that this new approach applied to future
galaxy surveys (LSST and Euclid) and a SKA radio continuum survey will allow us to
measure our velocity ($\beta = v/c$) with a relative error in the amplitude
$\sigma(\beta)/\beta \sim (1.3 - 4.5)\%$ and in direction, $\theta_{\beta} \sim
0.9^\circ - 3.9^\circ$, well beyond what can be achieved when analysing only
the number count dipole. We also find that galaxy surveys are able to measure
the intrinsic large-scale anisotropy with a relative uncertainty of $\lesssim
5\%$ (measurement error, not including cosmic variance). Our method enables two
simultaneous tests of the Cosmological Principle: comparing the observations of
our peculiar velocity with the CMB dipole, and testing for a significant
intrinsic anisotropy on large scales which would indicate effects beyond the
standard cosmological model.
|
Insider threats are a growing concern for organizations due to the amount of
damage that their members can inflict by combining their privileged access and
domain knowledge. Nonetheless, the detection of such threats is challenging,
precisely because of the ability of the authorized personnel to easily conduct
malicious actions and because of the immense size and diversity of audit data
produced by organizations in which the few malicious footprints are hidden. In
this paper, we propose an unsupervised insider threat detection system based on
audit data using Bayesian Gaussian Mixture Models. The proposed approach
leverages a user-based model to optimize the modeling of user-specific behaviors and an
automatic feature extraction system based on Word2Vec for ease of use in a
real-life scenario. The solution distinguishes itself by not requiring data
balancing or training only on normal instances, and by the little domain
knowledge required to implement it. Still, results indicate that the proposed
method competes with state-of-the-art approaches, presenting a good recall of
88%, an accuracy and true negative rate of 93%, and a false positive rate of
6.9%. For our experiments, we used the benchmark dataset CERT version 4.2.
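A minimal sketch of the per-user anomaly-scoring idea, assuming scikit-learn's BayesianGaussianMixture and synthetic daily activity features; the feature set, component count, and percentile threshold are illustrative, and the Word2Vec feature-extraction step is omitted.

```python
import numpy as np
from sklearn.mixture import BayesianGaussianMixture

rng = np.random.default_rng(0)

# Synthetic per-user daily activity features (e.g. logon count, file
# accesses, email volume); real features would come from audit logs,
# with categorical fields embedded via Word2Vec as in the paper.
normal_days = rng.normal(loc=[5, 20, 10], scale=[1, 4, 2], size=(300, 3))
odd_days = rng.normal(loc=[25, 80, 2], scale=[2, 5, 1], size=(5, 3))

model = BayesianGaussianMixture(n_components=5, covariance_type="full",
                                random_state=0).fit(normal_days)

# Flag days whose log-likelihood under the user's model falls below the
# 1st percentile of the likelihoods seen on ordinary days.
threshold = np.percentile(model.score_samples(normal_days), 1)
flags = model.score_samples(odd_days) < threshold
print("flagged anomalous days:", flags)
```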
|
In this paper we consider a natural extremal graph theoretic problem of a
topological sort, concerning the minimization of the (topological)
connectedness of the independence complex of graphs in terms of its dimension.
We observe that the lower bound $\frac{\dim(\mathcal{I}(G))}{2} - 2$ on the
connectedness of the independence complex $\mathcal{I}(G)$ of line graphs of
bipartite graphs $G$ is tight. In our main theorem we characterize the extremal
examples. Our proof of this characterization is based on topological machinery.
Our motivation for studying this problem comes from a classical conjecture of
Ryser. Ryser's Conjecture states that any $r$-partite $r$-uniform hypergraph
has a vertex cover of size at most $(r - 1)$-times the size of the largest
matching. For $r = 2$, the conjecture is simply K\"onig's Theorem. It has also
been proven for $r = 3$ by Aharoni using a beautiful topological argument. In a
separate paper we characterize the extremal examples for the $3$-uniform case
of Ryser's Conjecture (i.e., Aharoni's Theorem), and in particular resolve an
old conjecture of Lov\'asz for the case of Ryser-extremal $3$-graphs.
Our main result in this paper will provide us with valuable structural
information for that characterization. Its proof is based on the observation
that link graphs of Ryser-extremal $3$-uniform hypergraphs are exactly the
bipartite graphs we study here.
|
Drinfeld realisations are constructed for the quantum affine superalgebras of
the series ${\rm\mathfrak{osp}}(1|2n)^{(1)}$,${\rm\mathfrak{sl}}(1|2n)^{(2)}$
and ${\rm\mathfrak{osp}}(2|2n)^{(2)}$. By using the realisations, we develop
vertex operator representations and classify the finite dimensional irreducible
representations for these quantum affine superalgebras.
|
We prove some uniqueness results for weak solutions to some classes of
parabolic Dirichlet problems.
|
We estimated the dynamical surface mass density (Sigma) at the solar
Galactocentric distance between 2 and 4 kpc from the Galactic plane, as
inferred from the observed kinematics of the thick disk. We find Sigma(z=2
kpc)=57.6+-5.8 Mo pc^-2, and it shows only a tiny increase in the z-range
considered by our investigation. We compared our results with the expectations
for the visible mass, adopting the most recent estimates in the literature for
contributions of the Galactic stellar disk and interstellar medium, and
proposed models of the dark matter distribution. Our results match the
expectation for the visible mass alone, never differing from it by more than
0.8 Mo pc^-2 at any z, and thus we find little evidence for any dark
component. We assume that the dark halo could be undetectable with our method,
but the dark disk, recently proposed as a natural expectation of the LambdaCDM
models, should be detected. Given the good agreement with the visible mass
alone, models including a dark disk are less likely, but within errors its
existence cannot be excluded. In any case, these results put constraints on its
properties: thinner models (scale height lower than 4 kpc) reconcile better
with our results and, for any scale height, the lower-density models are
preferred. We believe that successfully predicting the stellar thick disk
properties and a dark disk in agreement with our observations could be a
challenging theoretical task.
|
We show that there exist exceptional collections of length 3 consisting of
line bundles on the three fake projective planes that have a 2-adic
uniformisation with torsion free covering group. We also compute the Hochschild
cohomology of the right orthogonal of the subcategory of the bounded derived
category of coherent sheaves generated by these exceptional collections.
|
We introduce a family of trees that interpolate between the Bethe lattice and
$\mathbb{Z}$. We prove complete localization for the Anderson model on any member of
that family.
|
We consider a Graviweak Unification model with the assumption of the
existence of the hidden (invisible) sector of our Universe parallel to the
visible world. This Hidden World (HW) is assumed to be a Mirror World (MW) with
broken mirror parity. We start with a diffeomorphism invariant theory of a
gauge field valued in a Lie algebra g, which is broken spontaneously to the
direct sum of the spacetime Lorentz algebra and the Yang-Mills algebra: $\tilde
{\mathfrak g} = {\mathfrak su}(2)^{(grav)}_L \oplus {\mathfrak su}(2)_L$ -- in
the ordinary world, and $\tilde {\mathfrak g}' = {{\mathfrak
su}(2)'}^{(grav)}_R \oplus {\mathfrak su}(2)'_R$ -- in the hidden world. Using
an extension of the Plebanski action for general relativity, we recover the
actions for gravity, SU(2) Yang-Mills and Higgs fields in both (visible and
invisible) sectors of the Universe, and also the total action. After symmetry
breaking, all physical constants, including Newton's constants, cosmological
constants, Yang-Mills couplings, and other parameters, are determined by a
single parameter $g$ present in the initial action and by the Higgs VEVs. The
Dark Energy problem of this model predicts a supersymmetry breaking scale
($\sim 10^{10}-10^{12}$ GeV) that is too large to be within the reach of the
LHC experiments.
|
Thuiller et al. analyzed the consequences of anticipated climate change on
plant, bird, and mammal phylogenetic diversity (PD) across Europe. They
concluded that species loss will not be clade specific across the Tree of Life,
and that there will not be an overall decline in PD across the whole of Europe.
We applaud their attempt to integrate phylogenetic knowledge into scenarios of
future extinction but their analyses raise a series of concerns. We focus here
on their analyses of plants.
|
The carrier mobility of anisotropic two-dimensional (2D) semiconductors under
longitudinal acoustic (LA) phonon scattering was theoretically studied with the
deformation potential theory. Based on the Boltzmann equation with the
relaxation-time approximation, an analytic formula for the intrinsic
anisotropic mobility was deduced, which shows that the influence of the
effective mass on the mobility anisotropy is larger than that of the
deformation potential constant and the elastic modulus. Parameters were
collected for various anisotropic 2D materials (black phosphorus, Hittorf's
phosphorus, BC$_2$N, MXene, TiS$_3$, GeCH$_3$) to calculate their mobility
anisotropy. It was revealed that the anisotropy ratio was overestimated in the
past.
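For orientation, a commonly used isotropic 2D deformation-potential expression (quoted here as background, not as the anisotropic formula derived in this work) reads
\begin{equation*}
\mu_{2D} = \frac{e\hbar^{3} C_{2D}}{k_{B} T\, m^{*} m_{d}\, E_{1}^{2}},
\end{equation*}
where $C_{2D}$ is the elastic modulus, $m^{*}$ the effective mass along the transport direction, $m_{d}$ the density-of-states mass, and $E_{1}$ the deformation potential constant; the mobility anisotropy then enters through the direction dependence of $m^{*}$, $E_{1}$ and $C_{2D}$.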
|
We provide the first differentially private algorithms for controlling the
false discovery rate (FDR) in multiple hypothesis testing, with essentially no
loss in power under certain conditions. Our general approach is to adapt a
well-known variant of the Benjamini-Hochberg procedure (BHq), making each step
differentially private. This destroys the classical proof of FDR control. To
prove FDR control of our method, (a) we develop a new proof of the original
(non-private) BHq algorithm and its robust variants -- a proof requiring only
the assumption that the true null test statistics are independent, allowing for
arbitrary correlations between the true nulls and false nulls. This assumption
is fairly weak compared to those previously shown in the vast literature on
this topic, and explains in part the empirical robustness of BHq. Then (b) we
relate the FDR control properties of the differentially private version to the
control properties of the non-private version. We also present
a low-distortion "one-shot" differentially private primitive for "top $k$"
problems, e.g., "Which are the $k$ most popular hobbies?" (which we apply to:
"Which hypotheses have the $k$ most significant $p$-values?"), and use it to
get a faster privacy-preserving instantiation of our general approach at little
cost in accuracy. The proof of privacy for the one-shot top~$k$ algorithm
introduces a new technique of independent interest.
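For reference, a minimal sketch of the classical (non-private) BHq step-up rule that the private mechanism builds on is shown below; the differentially private variant described above would additionally randomize these comparisons, which is not reproduced here.

```python
# Classical Benjamini-Hochberg step-up procedure (non-private reference point).
import numpy as np

def benjamini_hochberg(p_values, q=0.1):
    """Return indices of hypotheses rejected at target FDR level q."""
    p = np.asarray(p_values, dtype=float)
    m = p.size
    order = np.argsort(p)                            # sort p-values ascending
    thresholds = q * np.arange(1, m + 1) / m         # BH critical values q*k/m
    below = np.nonzero(p[order] <= thresholds)[0]
    if below.size == 0:
        return np.array([], dtype=int)
    k = below.max()                                  # largest k with p_(k) <= q*k/m
    return order[:k + 1]                             # reject the k smallest p-values

print(benjamini_hochberg([0.001, 0.008, 0.039, 0.041, 0.042, 0.06, 0.5, 0.9], q=0.1))
```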
|
We consider the problem of distributedly estimating Gaussian processes in
multi-agent frameworks. Each agent collects a few measurements and aims to
collaboratively reconstruct a common estimate based on all data. Agents are
assumed to have limited computational and communication capabilities and to
gather $M$ noisy measurements in total, on input locations independently drawn
from a known common probability density. The optimal solution would require
agents to exchange all the $M$ input locations and measurements and then invert
an $M \times M$ matrix, a non-scalable task. Instead, we propose two suboptimal
approaches using the first $E$ orthonormal eigenfunctions obtained from the
Karhunen-Lo\`eve (KL) expansion of the chosen kernel, where typically $E \ll M$.
The benefits are that the computation and communication complexities scale with
$E$ rather than with $M$, and that computing the required statistics can be
performed via standard average consensus algorithms. We obtain probabilistic
non-asymptotic bounds that determine a priori the desired level of estimation
accuracy, and derive new distributed strategies, relying on Stein's unbiased
risk estimate (SURE) paradigms, for tuning the regularization parameters; these
strategies are applicable to generic basis functions (thus not necessarily
kernel eigenfunctions) and can again
be implemented via average consensus. The proposed estimators and bounds are
finally tested on both synthetic and real field data.
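A minimal centralized sketch of the eigenfunction-based idea (with an assumed cosine basis standing in for the kernel's KL eigenfunctions, and plain summation standing in for the average-consensus step):

```python
# Sketch: regression on E basis functions; each agent contributes (E x E) and (E,)
# statistics that could be aggregated by average consensus (here simply summed).
import numpy as np

rng = np.random.default_rng(1)
E, noise_std, lam = 8, 0.2, 1e-2

def phi(x):
    # Assumed basis: first E cosines on [0, 1], a stand-in for KL eigenfunctions.
    return np.cos(np.pi * np.outer(x, np.arange(E)))

def agent_stats(x, y):
    P = phi(x)
    return P.T @ P, P.T @ y

# Three agents, each holding a few noisy samples of f(x) = sin(2*pi*x).
data = []
for _ in range(3):
    x = rng.uniform(0, 1, size=30)
    data.append((x, np.sin(2 * np.pi * x) + noise_std * rng.normal(size=x.size)))

stats = [agent_stats(x, y) for x, y in data]
A = sum(s[0] for s in stats) + lam * np.eye(E)       # regularized aggregated Gram matrix
b = sum(s[1] for s in stats)
coeff = np.linalg.solve(A, b)

x_test = np.linspace(0, 1, 5)
print(phi(x_test) @ coeff)                           # reconstructed common estimate
```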
|
An eikonal model has been used to assess the relationship between calculated
strengths for first forbidden beta decay and calculated cross sections for
(p,n) charge exchange reactions. It is found that these are proportional for
strong transitions, suggesting that hadronic charge exchange reactions may be
useful in determining the spin-dipole matrix elements for astrophysically
interesting leptonic transitions.
|
We significantly strengthen results on the structure of matrix rings over
finite fields and apply them to describe the structure of the so-called weakly
$n$-torsion clean rings. Specifically, we establish that, for any field $F$
with either exactly seven or strictly more than nine elements, each matrix over
$F$ is presentable as a sum of a tripotent matrix and a $q$-potent matrix if
and only if each element in $F$ is presentable as a sum of a tripotent and a
$q$-potent, whenever $q>1$ is an odd integer. In addition, if $Q$ is a power of
an odd prime and $F$ is a field of odd characteristic, having cardinality
strictly greater than $9$, then, for all $n\geq 1$, the matrix ring
$\mathbb{M}_n(F)$ is weakly $(Q-1)$-torsion clean if and only if $F$ is a
finite field of cardinality $Q$.
A novel contribution to the ring-theoretical theme of this study is the
classification of finite fields $\mathbb{F}_Q$ of odd order in which every
element is the sum of a tripotent and a potent. In this regard, we obtain an
expression for the number of consecutive triples $\gamma-1,\gamma,\gamma+1$ of
non-square elements in $\mathbb{F}_Q$; in particular, $\mathbb{F}_Q$ contains
three consecutive non-square elements whenever $\mathbb{F}_Q$ contains more
than 9 elements.
|
The isomeric first excited state of the isotope 229Th exhibits the lowest
nuclear excitation energy in the whole landscape of known atomic nuclei. For a
long time this energy was reported in the literature as 3.5(5) eV, however, a
new experiment corrected this energy to 7.6(5) eV, corresponding to a UV
transition wavelength of 163(11) nm. The expected isomeric lifetime is $\tau=$
3-5 hours, leading to an extremely sharp relative linewidth of Delta E/E ~
10^-20, 5-6 orders of magnitude smaller than typical atomic relative
linewidths. For an adequately chosen electronic state, the frequency of the
nuclear ground-state transition will be independent of the influence of
external fields within the framework of the linear Zeeman and quadratic Stark
effects, rendering 229mTh a candidate for the reference of an optical clock of
very high accuracy. Moreover, speculations can be found in the literature about
a potentially enhanced sensitivity of the ground-state transition of
$^{229m}$Th to possible time-dependent variations of fundamental constants
(e.g. the fine structure constant alpha). We report on our experimental
activities that aim at a direct identification of the UV fluorescence of the
ground-state transition of 229mTh. A further goal is to improve the accuracy of
the ground-state transition energy as a prerequisite for laser-based optical
control of this nuclear excited state, which would build a bridge between
atomic and nuclear physics and open new perspectives for metrological as well
as fundamental studies.
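As a rough consistency check of the quoted figures (assuming a lifetime of order $10^{4}$ s, i.e. a few hours), the energy-time uncertainty relation gives
\begin{equation*}
\frac{\Delta E}{E} \approx \frac{\hbar/\tau}{E} \approx \frac{6.6\times 10^{-16}\ \mathrm{eV\,s}\,/\,10^{4}\ \mathrm{s}}{7.6\ \mathrm{eV}} \approx 10^{-20},
\end{equation*}
in line with the relative linewidth stated above.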
|
We study semiclassical correlation functions in Liouville field theory on a
two-sphere when all operators have large conformal dimensions. In the usual
approach, such computation involves solving the classical Liouville equation,
which is known to be extremely difficult for higher-point functions. To
overcome this difficulty, we propose a new method based on the Riemann-Hilbert
analysis, which was recently applied to the holographic calculation of
correlation functions in AdS/CFT. The method allows us to directly compute the
correlation functions without solving the Liouville equation explicitly. To
demonstrate its utility, we apply it to three-point functions, which are known
to be solvable, and confirm that it correctly reproduces the classical limit of
the DOZZ formula for quantum three-point functions. This provides good evidence
for the validity of this method.
|
For a positive integer $r$, a distance-$r$ independent set in an undirected
graph $G$ is a set $I\subseteq V(G)$ of vertices pairwise at distance greater
than $r$, while a distance-$r$ dominating set is a set $D\subseteq V(G)$ such
that every vertex of the graph is within distance at most $r$ from a vertex
from $D$. We study the duality between the maximum size of a distance-$2r$
independent set and the minimum size of a distance-$r$ dominating set in
nowhere dense graph classes, as well as the kernelization complexity of the
distance-$r$ independent set problem on these graph classes. Specifically, we
prove that the distance-$r$ independent set problem admits an almost linear
kernel on every nowhere dense graph class.
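To make the two notions concrete (purely as an illustration of the definitions, not of the kernelization result), a naive greedy routine for a distance-$r$ dominating set built from breadth-first-search balls might look as follows:

```python
# Naive greedy distance-r dominating set (illustrative only; no guarantee claimed).
import networkx as nx

def greedy_distance_r_dominating_set(G, r):
    undominated = set(G.nodes)
    D = set()
    while undominated:
        # Choose the vertex whose distance-r ball covers the most undominated vertices.
        best = max(G.nodes, key=lambda v: len(undominated & set(
            nx.single_source_shortest_path_length(G, v, cutoff=r))))
        D.add(best)
        undominated -= set(nx.single_source_shortest_path_length(G, best, cutoff=r))
    return D

print(greedy_distance_r_dominating_set(nx.path_graph(10), r=2))
```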
|
We have constructed an explicit see-saw model containing two singlet
neutrinos, one carrying a $(B-3L_e)$ gauge charge with an intermediate mass
scale of $\sim O(10^{10})$ GeV along with a sterile one near the GUT (grand
unification theory) scale of $\sim O(10^{16})$ GeV. With these mass scales and
a reasonable range of Yukawa couplings, the model can naturally account for the
near-maximal mixing of atmospheric neutrino oscillations and the small mixing
matter-enhanced oscillation solution to the solar neutrino deficit.
|
Self-organized synchronization is a ubiquitous collective phenomenon, in
which each unit adjusts its rhythm to achieve synchrony through mutual
interactions. Optomechanical systems, due to their inherently engineerable
nonlinearities, provide an ideal platform to study self-organized
synchronization. Here, we demonstrate the self-organized synchronization of
phonon lasers in a two-membrane-in-the-middle optomechanical system. Probing
each membrane individually enables monitoring of the real-time transient
dynamics of synchronization, which reveals that the system enters the
synchronization regime via a torus-birth bifurcation line. The phase-locking
phenomenon and the transition between in-phase and anti-phase regimes are
directly observed. Moreover, such a system greatly facilitates controllable
synchronous states, and consequently a phononic memory is realized by tuning
the system parameters.
This result is an important step towards the future studies of many-body
collective behaviors in multiresonator optomechanics with long distances, and
might find potential applications in quantum information processing and complex
networks.
|
We consider combining two important methods for constructing
quasi-equilibrium initial data for binary black holes: the conformal
thin-sandwich formalism and the puncture method. The former seeks to enforce
stationarity in the conformal three-metric and the latter attempts to avoid
internal boundaries, like minimal surfaces or apparent horizons. We show that
these two methods make partially conflicting requirements on the boundary
conditions that determine the time slices. In particular, it does not seem
possible to construct slices that are quasi-stationary and avoid physical
singularities and simultaneously are connected by an everywhere positive lapse
function, a condition which must obtain if internal boundaries are to be
avoided. Some relaxation of these conflicting requirements may yield a soluble
system, but some of the advantages that were sought in combining these
approaches will be lost.
|
We show how to construct fully symmetric, gapped states without topological
order on a honeycomb lattice for S = 1/2 spins using the language of
projected entangled pair states (PEPS). An explicit example is given for the
virtual bond dimension D = 4. Four distinct classes differing by lattice
quantum numbers are found by applying the systematic classification scheme
introduced by two of the authors [S. Jiang and Y. Ran, Phys. Rev. B 92, 104414
(2015)]. Lack of topological degeneracy or other conventional forms of symmetry
breaking, and the existence of energy gap in the proposed wave functions, are
checked by numerical calculations of the entanglement entropy and various
correlation functions. Our work provides the first explicit realization of a
featureless quantum insulator for spin-1/2 particles on a honeycomb lattice.
|
Intense light-matter interactions and unique structural and electrical
properties make Van der Waals heterostructures composed of Graphene (Gr) and
monolayer transition metal dichalcogenides (TMD) promising building blocks for
tunnelling transistors, flexible electronics, as well as optoelectronic
devices, including photodetectors, photovoltaics and quantum light emitting
devices (QLEDs), bright and narrow-line emitters using minimal amounts of
active absorber material. The performance of such devices is critically ruled
by interlayer interactions which are still poorly understood in many respects.
Specifically, two classes of coupling mechanisms have been proposed: charge
transfer (CT) and energy transfer (ET), but their relative efficiency and the
underlying physics is an open question. Here, building on a time resolved Raman
scattering experiment, we determine the electronic temperature profile of Gr in
response to TMD photo-excitation, tracking the picosecond dynamics of the G and
2D bands. Compelling evidence for a dominant role of the ET process,
accomplished within a characteristic time of ~4 ps, is provided. Our results suggest the
existence of an intermediate process between the observed picosecond ET and the
generation of a net charge underlying the slower electric signals detected in
optoelectronic applications.
|
We discuss the possibility of producing a new kind of nuclear system by
putting a few antibaryons inside ordinary nuclei. The structure of such systems
is calculated within the relativistic mean--field model assuming that the
nucleon and antinucleon potentials are related by the G-parity transformation.
The presence of antinucleons leads to decreasing vector potential and
increasing scalar potential for the nucleons. As a result, a strongly bound
system of high density is formed. Due to the significant reduction of the
available phase space the annihilation probability might be strongly suppressed
in such systems.
|
Let $M$ be a compact Hausdorff space. We prove in this paper that every
self-adjoint matrix over $C(M)$ is approximately diagonalizable iff $\dim M\le
2$ and $H^2(M,\mathbb Z)\cong 0$. Using this result, we show that every
unitary matrix over $C(M)$ is approximately diagonalizable iff $\dim M\le 2$
and $H^1(M,\mathbb Z)\cong H^2(M,\mathbb Z)\cong 0$, when $M$ is a compact
metric space.
|
Assume that $\mathcal{D}$ is a Krull-Schmidt, Hom-finite triangulated category with a
Serre functor and a cluster-tilting object $T$. We introduce the notion of
relative cluster tilting objects, and $T[1]$-cluster tilting objects in $\mathcal{D}$,
which are a generalization of cluster-tilting objects. When $\mathcal{D}$ is
$2$-Calabi-Yau, the relative cluster tilting objects are cluster-tilting. Let
$\Lambda={\rm End}^{op}_{\mathcal{D}}(T)$ be the opposite algebra of the endomorphism
algebra of $T$. We show that there exists a bijection between $T[1]$-cluster
tilting objects in $\mathcal{D}$ and support $\tau$-tilting $\Lambda$-modules, which
generalizes a result of Adachi-Iyama-Reiten \cite{AIR}. We develop a basic
theory on $T[1]$-cluster tilting objects. In particular, we introduce a partial
order on the set of $T[1]$-cluster tilting objects and mutation of
$T[1]$-cluster tilting objects, which can be regarded as a generalization of
`cluster-tilting mutation'. As an application, we give a partial answer to a
question posed in \cite{AIR}.
|
The Fault Detection and Isolation Tools (FDITOOLS) is a collection of MATLAB
functions for the analysis and solution of fault detection and model detection
problems. The implemented functions are based on the computational procedures
described in Chapters 5, 6 and 7 of the book: "A. Varga, Solving Fault
Diagnosis Problems - Linear Synthesis Techniques, Springer, 2017". This
document is the User's Guide for version V1.0 of FDITOOLS. First, we
present the mathematical background for solving several basic exact and
approximate synthesis problems of fault detection filters and model detection
filters. Then, we give in-depth information on the command syntax of the main
analysis and synthesis functions. Several examples illustrate the use of the
main functions of FDITOOLS.
|
Time-dependent quantum mechanics provides an intuitive picture of particle
propagation in external fields. Semiclassical methods link the classical
trajectories of particles with their quantum mechanical propagation. Many
analytical results and a variety of numerical methods have been developed to
solve the time-dependent Schroedinger equation. The time-dependent methods work
for nearly arbitrarily shaped potentials, including sources and sinks via
complex-valued potentials. Many quantities are measured at fixed energy, which
is seemingly not well suited for a time-dependent formulation. Very few methods
exist to obtain the energy-dependent Green function for complicated potentials
without resorting to ensemble averages or using certain lead-in arrangements.
Here, we demonstrate in detail a time-dependent approach, which can accurately
and effectively construct the energy-dependent Green function for very general
potentials. The applications of the method are numerous, including chemical,
mesoscopic, and atomic physics.
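The textbook relation underlying such constructions (quoted here for orientation, not as the specific algorithm of this work) expresses the retarded energy-dependent Green function as a half-sided Fourier transform of the time-dependent propagator $K$:
\begin{equation*}
G^{+}(\mathbf{r},\mathbf{r}';E) = \frac{1}{i\hbar}\int_{0}^{\infty} dt\; e^{i(E+i\epsilon)t/\hbar}\, K(\mathbf{r},t;\mathbf{r}',0), \qquad \epsilon \to 0^{+}.
\end{equation*}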
|
Using a Coulomb gas method, we compute analytically the probability
distribution of the Renyi entropies (a standard measure of entanglement) for a
random pure state of a large bipartite quantum system. We show that, for any
order q>1 of the Renyi entropy, there are two critical values at which the
entropy's probability distribution changes shape. These critical points
correspond to two different transitions in the corresponding charge density of
the Coulomb gas: the disappearance of an integrable singularity at the origin
and the detachment of a single-charge drop from the continuum sea of all the
other charges. These transitions respectively control the left and right tails
of the entropy's probability distribution, as verified also by Monte Carlo
numerical simulations of the Coulomb gas equilibrium dynamics.
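For completeness, the quantity whose distribution is studied is the standard Renyi entropy of the reduced density matrix $\rho_A$ of one subsystem,
\begin{equation*}
S_q = \frac{1}{1-q}\,\ln \mathrm{Tr}\,\rho_A^{\,q}, \qquad q>1,
\end{equation*}
which reduces to the von Neumann entropy $-\mathrm{Tr}\,\rho_A \ln \rho_A$ as $q\to 1$.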
|
Materials exhibiting the perpendicular magnetic anisotropy (PMA) effect with a
high Curie temperature ($T_C$) are essential for applications. In this work,
$Cr_2Te_3$ thin films showing PMA with $T_C$ ranging from 165 K to 295 K were
successfully grown on $Al_2O_3$ by the molecular beam epitaxy (MBE) technique.
Structural analysis, magneto-transport and magnetic characterizations were
conducted to study the physical origin of the improved $T_C$. In particular,
the competition between ferromagnetic (FM) and antiferromagnetic (AFM) ordering
was investigated. A phenomenological model based on the degree of coupling
between FM and AFM ordering was proposed to explain the observed $T_C$
enhancement. Our findings indicate that the $T_C$ of $Cr_2Te_3$ thin films can
be tuned, which makes them promising for various magnetic applications.
|
We propose a general framework leveraging the halo-galaxy connection to link
galaxies observed at different redshifts in a statistical way, and use the link
to infer the redshift evolution of the galaxy population. Our tests based on
hydrodynamic simulations show that our method can accurately recover the
stellar mass assembly histories up to $z\sim 3$ for present-day star-forming
and quiescent galaxies down to $10^{10}h^{-1}M_{\odot}$. Applying the method to
observational data shows that the stellar mass evolution of the main
progenitors of galaxies depends strongly on the properties of descendants, such
as stellar mass, halo mass, and star formation states. Galaxies hosted by
low-mass groups/halos at the present time have since $z\sim 1.8$ grown their
stellar mass $\sim 2.5$ times as fast as those hosted by massive clusters. This
dependence on host halo mass becomes much weaker for descendant galaxies with
similar star formation states. Star-forming galaxies grow about 2-4 times
faster than their quiescent counterparts since $z\sim 1.8$. Both TNG and EAGLE
simulations over-predict the progenitor stellar mass at $z>1$, particularly for
low-mass descendants.
|
Homography estimation serves as a fundamental technique for image alignment
in a wide array of applications. The advent of convolutional neural networks
has introduced learning-based methodologies that have exhibited remarkable
efficacy in this realm. Yet, the generalizability of these approaches across
distinct domains remains underexplored. Unlike other conventional tasks,
CNN-driven homography estimation models show a distinctive immunity to domain
shifts, enabling seamless deployment from one dataset to another without the
necessity of transfer learning. This study explores the resilience of a variety
of deep homography estimation models to domain shifts, revealing that the
network architecture itself is not a contributing factor to this remarkable
adaptability. By closely examining the models' focal regions and subjecting
input images to a variety of modifications, we confirm that the models heavily
rely on local textures such as edges and corner points for homography
estimation. Moreover, our analysis underscores that the domain shift immunity
itself is intricately tied to the utilization of these local textures.
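One simple way to probe the reliance on local textures described above (a sketch under the assumption that some trained estimator `model` is available; no architecture from this study is implied) is to compare predictions on an original patch and on an edge-only version of it:

```python
# Sketch: probe texture reliance by feeding an edge-only input to a homography estimator.
import cv2
import numpy as np

def edge_only(image_gray):
    # Keep mainly local texture cues (edges/corners), discard smooth intensity content.
    edges = cv2.Canny(image_gray, threshold1=50, threshold2=150)
    return cv2.GaussianBlur(edges, (3, 3), 0).astype(np.float32) / 255.0

img = cv2.imread("patch_a.png", cv2.IMREAD_GRAYSCALE)   # assumed example patch
if img is not None:
    full_input = img.astype(np.float32) / 255.0
    texture_input = edge_only(img)
    # pred_full = model(full_input)                      # hypothetical estimator call
    # pred_edges = model(texture_input)
    # Similar predictions would indicate that edges/corners carry most of the signal.
```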
|
Gravitational lensing in a weak but otherwise arbitrary gravitational field
can be described in terms of a 3 x 3 tensor, the "effective refractive index".
If the sources generating the gravitational field all have small internal
fluxes, stresses, and pressures, then this tensor is automatically isotropic
and the "effective refractive index" is simply a scalar that can be determined
in terms of a classic result involving the Newtonian gravitational potential.
In contrast if anisotropic stresses are ever important then the gravitational
field acts similarly to an anisotropic crystal. We derive simple formulae for
the refractive index tensor, and indicate some situations in which this will be
important.
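The classic isotropic result referred to above is, in the weak-field limit with Newtonian potential $\Phi$ ($|\Phi|/c^{2}\ll 1$),
\begin{equation*}
n(\mathbf{r}) \simeq 1 - \frac{2\Phi(\mathbf{r})}{c^{2}},
\end{equation*}
so that $n>1$ near a mass concentration (where $\Phi<0$); the tensor description discussed here reduces to this scalar whenever anisotropic stresses are negligible.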
|
This talk presents a new cosmological model with superstring-inspired $E_6$
unification, broken at an early stage of the Universe into $SO(10)\times
U(1)_Z$ in the ordinary world, and into $SU(6)'\times SU(2)'_{\theta}$ in the
hidden world.
|
We will in this paper report on suggestive similarities between density
fluctuation power versus wavenumber on small (mm) and large (Mpc) scales. The
small scale measurements were made in fusion plasmas and compared to
predictions from classical fluid turbulence theory. The data is consistent with
the dissipative range of 2D turbulence. Alternatively, the results can be
fitted to a functional form that cannot be explained by turbulence theory. The
large scale measurements were part of the Sloan Digital Sky Survey galaxy
redshift examination. We found that the equations describing fusion plasmas
also hold for the galaxy data. The comparable dependency of density fluctuation
power on wavenumber in fusion plasmas and galaxies might indicate a common
origin of these fluctuations.
|
Significant progress has been made in recent years on the development of
gravitational wave detectors. Sources such as coalescing compact binary
systems, neutron stars in low-mass X-ray binaries, stellar collapses and
pulsars are all possible candidates for detection. The most promising design of
gravitational wave detector uses test masses a long distance apart and freely
suspended as pendulums on Earth or in drag-free craft in space. The main theme
of this review is a discussion of the mechanical and optical principles used in
the various long baseline systems in operation around the world - LIGO (USA),
Virgo (Italy/France), TAMA300 and LCGT (Japan), and GEO600 (Germany/U.K.) - and
in LISA, a proposed space-borne interferometer. Recent science runs from the
current generation of ground-based detectors will be reviewed, and the
astrophysical results gained thus far will be highlighted. Looking to
the future, the major upgrades to LIGO (Advanced LIGO), Virgo (Advanced Virgo),
LCGT and GEO600 (GEO-HF) will be completed over the coming years, which will
create a network of detectors with significantly improved sensitivity required
to detect gravitational waves. Beyond this, the concept and design of possible
future "third generation" gravitational wave detectors, such as the Einstein
Telescope (ET), will be discussed.
|
In this paper, we propose an efficient mobility control algorithm for the
downlink multi-cell orthogonal frequency division multiplexing access (OFDMA)
system for co-channel interference reduction. It divides each cell into several
areas. The mobile nodes in each area find their own optimal position according
to their present location. Both the signal to interference plus noise ratio
(SINR) and the capacity for each node are increased by the proposed mobility
control algorithm. Simulation results show that, even when the frequency reuse
factor (FRF) is equal to 1, the average capacity is improved after applying the
mobility control algorithm, compared to the existing partial frequency reuse
(PFR) scheme.
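A minimal numerical sketch of the quantities being optimized (path-loss exponent, positions and powers below are illustrative assumptions, not parameters from this work):

```python
# Sketch: SINR and Shannon capacity of a node served by one base station and
# interfered by co-channel neighbors, under a simple distance-based path-loss model.
import numpy as np

def sinr(node, serving_bs, interfering_bs, p_tx=1.0, noise=1e-9, alpha=3.5):
    signal = p_tx * np.linalg.norm(node - serving_bs) ** (-alpha)
    interference = sum(p_tx * np.linalg.norm(node - bs) ** (-alpha)
                       for bs in interfering_bs)
    return signal / (noise + interference)

node = np.array([120.0, 40.0])                       # candidate position within its area
serving = np.array([0.0, 0.0])
neighbors = [np.array([500.0, 0.0]), np.array([-250.0, 433.0])]

gamma = sinr(node, serving, neighbors)
capacity = np.log2(1.0 + gamma)                      # bits/s/Hz
print(f"SINR = {gamma:.2f}, capacity = {capacity:.2f} bit/s/Hz")
```

A mobility control step would move the node within its area to a position where this SINR, and hence the capacity, increases.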
|
A new method for analyzing low density parity check (LDPC) codes and low
density generator matrix (LDGM) codes under bit maximum a posteriori
probability (MAP) decoding is introduced. The method is based on a rigorous
approach to spin glasses developed by Francesco Guerra. It allows one to construct
lower bounds on the entropy of the transmitted message conditional to the
received one. Based on heuristic statistical mechanics calculations, we
conjecture such bounds to be tight. The result holds for standard irregular
ensembles when used over binary-input output-symmetric channels. The method is
first developed for Tanner graph ensembles with Poisson left degree
distribution. It is then generalized to `multi-Poisson' graphs, and, by a
completion procedure, to arbitrary degree distribution.
|
By using a suitable transform related to Sobolev inequality, we investigate
the sharp constants and optimizers in radial space for the following weighted
Caffarelli-Kohn-Nirenberg-type inequalities: \begin{equation*}
\int_{\mathbb{R}^N}|x|^{\alpha}|\Delta u|^2 dx \geq
S^{rad}(N,\alpha)\left(\int_{\mathbb{R}^N}|x|^{-\alpha}|u|^{p^*_{\alpha}}
dx\right)^{\frac{2}{p^*_{\alpha}}}, \quad u\in C^\infty_c(\mathbb{R}^N),
\end{equation*} where $N\geq 3$, $4-N<\alpha<2$,
$p^*_{\alpha}=\frac{2(N-\alpha)}{N-4+\alpha}$. Then we obtain the explicit form
of the unique (up to scaling) radial positive solution $U_{\lambda,\alpha}$ to
the weighted fourth-order Hardy (for $\alpha>0$) or H\'{e}non (for $\alpha<0$)
equation: \begin{equation*}
\Delta(|x|^{\alpha}\Delta u)=|x|^{-\alpha} u^{p^*_{\alpha}-1},\quad u>0 \quad
\mbox{in}\quad \mathbb{R}^N.
\end{equation*} For $\alpha\neq 0$, it is known that the solutions of the above
equation are invariant under the dilations
$\lambda^{\frac{N-4+\alpha}{2}}u(\lambda x)$ but not under translations.
However, we show that if $\alpha$ is an even integer, there exist new solutions
to the linearized problem associated with the above equation at
$U_{1,\alpha}$, which "replace" the ones due to translation invariance. This interesting
phenomenon was first shown by Gladiali, Grossi and Neves [Adv. Math. 249, 2013,
1-36] for the second-order H\'{e}non problem. Finally, as applications, we
investigate the remainder term of the above inequality and also the existence of
solutions to some related perturbed equations.
|
Software testing is a critical element of software quality assurance and
represents the ultimate review of specification, design and coding. Software
testing is the process of testing the functionality and correctness of software
by running it. Software testing is usually performed for one of two reasons:
defect detection, and reliability estimation. The problem of applying software
testing to defect detection is that testing can only suggest the presence of
flaws, not their absence (unless the testing is exhaustive). The problem of
applying software testing to reliability estimation is that the input
distribution used for selecting test cases may be flawed. The key to software
testing is trying to find the modes of failure - something that requires
exhaustively testing the code on all possible inputs. Software Testing,
depending on the testing method employed, can be implemented at any time in the
development process.
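As a simple illustration of testing for defect detection (a generic example, not tied to any particular system), a unit test can only reveal a flaw on the inputs it actually exercises:

```python
# Generic defect-detection example: the suite exposes the bug only because an input
# triggering it (the empty list) is included among the test cases.
import unittest

def average(values):
    return sum(values) / len(values)          # defect: fails on an empty list

class TestAverage(unittest.TestCase):
    def test_typical_input(self):
        self.assertAlmostEqual(average([1, 2, 3]), 2.0)

    def test_empty_input(self):
        with self.assertRaises(ZeroDivisionError):
            average([])                        # documents the current (flawed) behavior

if __name__ == "__main__":
    unittest.main()
```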
|
In this article we study the time evolution of an interacting field
theoretical system, i.e. \phi^4-field theory in 2+1 space-time dimensions, on
the basis of the Kadanoff-Baym equations for a spatially homogeneous system
including the self-consistent tadpole and sunset self-energies. We find that
equilibration is achieved only by inclusion of the sunset self-energy.
Simultaneously, the time evolution of the scalar particle spectral function is
studied for various initial states. We also compare associated solutions of the
corresponding Boltzmann equation to the full Kadanoff-Baym theory. This
comparison shows that a consistent inclusion of the spectral function has a
significant impact on the equilibration rates only if the width of the spectral
function becomes larger than 1/3 of the particle mass. Furthermore, based on
these findings, the conventional transport of particles in the on-shell
quasiparticle limit is extended to particles of finite lifetime by means of a
dynamical spectral function A(X,\vec{p},M^2). The off-shell propagation is
implemented in the Hadron-String-Dynamics (HSD) transport code and applied to
the dynamics of nucleus-nucleus collisions.
|