Dataset columns:
arxiv_id: string (length 11 to 18)
title: string (length 10 to 217)
abstract: string (length 9 to 2.1k)
subjects: string (150 classes)
scrubbed_comments: string (length 2 to 1.33k)
category: string (1 class)
__index_level_0__: int64 (1 to 14.6k)
2305.18853v2
Polarity of points for systems of nonlinear stochastic heat equations in the critical dimension
Let $u(t, x) = (u_1(t, x), \dots, u_d(t, x))$ be the solution to the systems of nonlinear stochastic heat equations \[ \begin{split} \frac{\partial}{\partial t} u(t, x) &= \frac{\partial^2}{\partial x^2} u(t, x) + \sigma(u(t, x)) \dot{W}(t, x),\\ u(0, x) &= u_0(x), \end{split} \] where $t \ge 0$, $x \in \mathbb{R}$, $\dot{W}(t, x) = (\dot{W}_1(t, x), \dots, \dot{W}_d(t, x))$ is a vector of $d$ independent space-time white noises, and $\sigma: \mathbb{R}^d \to \mathbb{R}^{d\times d}$ is a matrix-valued function. We say that a subset $S$ of $\mathbb{R}^d$ is polar for $\{u(t, x), t \ge 0, x \in \mathbb{R}\}$ if \[ \mathbb{P}\{u(t,x) \in S \text{ for some } t>0 \text{ and } x\in\mathbb{R} \}=0. \] The main result of this paper shows that, in the critical dimension $d=6$, all points in $\mathbb{R}^d$ are polar for $\{u(t, x), t \ge 0, x \in \mathbb{R}\}$. This solves an open problem of Dalang, Khoshnevisan and Nualart (2009, 2013) and Dalang, Mueller and Xiao (2021). We also provide a sufficient condition for a subset $S$ of $\mathbb{R}^d$ to be polar.
Probability (math.PR)
There is a crucial error in the paper because the formula (4.2) is not true in general. As a result, the decomposition (4.3) is not valid, which leads to a gap in the proof
factual/methodological/other critical errors in manuscript
13,310
2305.19464v2
On differential polynomial rings with locally nilpotent derivations
Let $R$ be a $\mathbb{Q}$-algebra and $d$ be a locally nilpotent derivation on $R$. We will show that the Jacobson radical of a differential polynomial ring $R[x;d]$ equals $I[x;d]$ where $I$ is a nil ideal of $R$. This answers a question posed by Agata Smoktunowicz.
Rings and Algebras (math.RA)
Lemma 3 is incorrect
factual/methodological/other critical errors in manuscript
13,313
2305.19969v2
On the $p$-isogenies of elliptic curves with multiplicative reduction over quadratic fields
Let $q > 5$ be a prime and $K$ a quadratic number field. In this article we extend a previous result of Najman and the author and prove that if $E/K$ is an elliptic curve with potentially multiplicative reduction at all primes $\mathfrak q \mid q$, then $E$ does not have prime isogenies of degree greater than $71$ and different from $q$. As an application of our main result, we present a variant of the asymptotic version of Fermat's Last Theorem over imaginary quadratic fields of class number one.
Number Theory (math.NT)
An error in the displayed equation on page 7 affects the validity of the main result (Theorem 2). The point (x,x^τ) is not a rational point on X_0^{w}
factual/methodological/other critical errors in manuscript
13,317
2306.00735v2
Shape Transitions in Network Model of Active Elastic Shells
Morphogenesis involves the transformation of initially simple shapes, such as multicellular spheroids, into more complex $3D$ shapes. These shape changes are governed by mechanical forces including molecular motor-generated forces as well as hydrostatic fluid pressure, both of which are actively regulated in living matter through mechano-chemical feedback. Inspired by autonomous, biophysical shape change, such as occurring in the model organism hydra, we introduce a minimal, active, elastic model featuring a network of springs in a globe-like spherical shell geometry. In this model there is coupling between activity and the shape of the shell: if the local curvature of a filament represented by a spring falls below a critical value, its elastic constant is actively changed. This results in deformation of the springs that changes the shape of the shell. By combining excitation of springs and pressure regulation, we show that the shell undergoes a transition from spheroidal to either elongated ellipsoidal or a different spheroidal shape, depending on pressure. There exists a critical pressure at which there is an abrupt change from ellipsoids to spheroids, showing that pressure is potentially a sensitive switch for material shape. More complex shapes, involving loss of cylindrical symmetry, can arise when springs are excited both above (spring constants increase) and below (spring constants decrease) the curvature threshold. We thus offer biologically inspired design principles for autonomous shape transitions in active elastic shells.
Soft Condensed Matter (cond-mat.soft)
There are some numerical accuracy issues in a part of the results section
factual/methodological/other critical errors in manuscript
13,320
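A note on the entry above (2306.00735): the curvature-threshold excitation rule described in the abstract can be summarized in a few lines. The following Python sketch is only an illustration of that rule under assumed, hypothetical names (curvatures, kappa_c, k_excited); it is not the authors' network model:

import numpy as np

def excite_springs(curvatures, k, kappa_c, k_excited):
    # Springs whose local curvature falls below the critical value kappa_c
    # are "excited": their elastic constant is switched to k_excited,
    # while the others keep their current constant.
    return np.where(curvatures < kappa_c, k_excited, k)

# toy usage: 100 springs with random local curvatures
rng = np.random.default_rng(0)
curvatures = rng.uniform(0.0, 2.0, size=100)
k = np.full(100, 1.0)
k = excite_springs(curvatures, k, kappa_c=0.5, k_excited=2.0)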
2306.03414v2
DreamSparse: Escaping from Plato's Cave with 2D Diffusion Model Given Sparse Views
Synthesizing novel view images from a few views is a challenging but practical problem. Existing methods often struggle with producing high-quality results or necessitate per-object optimization in such few-view settings due to the insufficient information provided. In this work, we explore leveraging the strong 2D priors in pre-trained diffusion models for synthesizing novel view images. 2D diffusion models, nevertheless, lack 3D awareness, leading to distorted image synthesis and compromising the identity. To address these problems, we propose DreamSparse, a framework that enables the frozen pre-trained diffusion model to generate geometry and identity-consistent novel view image. Specifically, DreamSparse incorporates a geometry module designed to capture 3D features from sparse views as a 3D prior. Subsequently, a spatial guidance model is introduced to convert these 3D feature maps into spatial information for the generative process. This information is then used to guide the pre-trained diffusion model, enabling it to generate geometrically consistent images without tuning it. Leveraging the strong image priors in the pre-trained diffusion models, DreamSparse is capable of synthesizing high-quality novel views for both object and scene-level images and generalising to open-set images. Experimental results demonstrate that our framework can effectively synthesize novel view images from sparse views and outperforms baselines in both trained and open-set category images. More results can be found on our project page: this https URL .
Computer Vision and Pattern Recognition (cs.CV)
[REDACTED-NAME]
factual/methodological/other critical errors in manuscript
13,324
2306.03611v4
Lyapunov Exponents for Open Billiard Flows
In this paper we prove that, with respect to every ergodic invariant measure, the positive Lyapunov exponents for the billiard flow in an open billiard in $\mathbb{R}^d$ ($d\geq 3$) are all equal. We should stress that we do not make any particular assumptions about the shape and size of the components of our obstacles -- they are just assumed to be strictly convex and compact with $C^3$ boundaries and to satisfy the so-called no-eclipse condition.
Dynamical Systems (math.DS)
There is a mistake in the proof of the [REDACTED-NAME] 4.2
factual/methodological/other critical errors in manuscript
13,325
2306.05281v3
A Graph Reconstruction by Dynamic Signal Coefficient for Fault Classification
To improve the performance of fault identification under strong noise for rotating machinery, this paper presents a dynamic feature reconstruction signal graph method, which plays the key role in the proposed end-to-end fault diagnosis model. Specifically, the original mechanical signal is first decomposed by wavelet packet decomposition (WPD) to obtain multiple sub-bands and their coefficient matrices. Then, with two newly defined feature extraction factors, MDD and DDD, a dynamic feature selection method based on the L2 energy norm (DFSL) is proposed, which can dynamically select the WPD feature coefficient matrices based on the difference in the distribution of norm energy, enabling adaptive signal reconstruction for each sub-signal. Next, the coefficient matrices of the optimal feature sub-bands are reconstructed and reorganized to obtain the feature signal graphs. Finally, deep features are extracted from the feature signal graphs by a 2D convolutional neural network (2D-CNN). Experimental results on a public bearing dataset and our laboratory robot-grinding platform show that this method outperforms existing methods under different noise intensities.
Signal Processing (eess.SP)
The feature extraction algorithm DFSL has errors in derivation and experimental deficiencies
factual/methodological/other critical errors in manuscript
13,329
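For the entry above (2306.05281), the wavelet-packet-decomposition and L2-energy-norm ranking step can be sketched with PyWavelets. The wavelet, level, and top_k below are illustrative assumptions; this is a generic stand-in, not the paper's DFSL algorithm:

import numpy as np
import pywt

def energy_ranked_subbands(signal, wavelet="db4", level=3, top_k=4):
    # Decompose the signal with wavelet packet decomposition and rank the
    # leaf sub-bands by the L2 energy norm of their coefficients.
    wp = pywt.WaveletPacket(data=signal, wavelet=wavelet, mode="symmetric", maxlevel=level)
    nodes = wp.get_level(level, order="natural")
    energies = [float(np.sum(np.square(n.data))) for n in nodes]
    order = np.argsort(energies)[::-1][:top_k]
    return [nodes[i].data for i in order], [nodes[i].path for i in order]

# toy usage on a noisy sine signal
x = np.sin(np.linspace(0, 20 * np.pi, 2048)) + 0.1 * np.random.default_rng(0).standard_normal(2048)
coeffs, paths = energy_ranked_subbands(x)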
2306.05693v2
Robust Active and Passive Beamforming for RIS-Assisted Full-Duplex Systems under Imperfect CSI
The sixth-generation (6G) wireless technology recognizes the potential of reconfigurable intelligent surfaces (RIS) as an effective technique for intelligently manipulating channel paths through reflection to serve desired users. Full-duplex (FD) systems, enabling simultaneous transmission and reception from a base station (BS), offer the theoretical advantage of doubled spectrum efficiency. However, the presence of strong self-interference (SI) in FD systems significantly degrades performance, which can be mitigated by leveraging the capabilities of RIS. Moreover, accurately obtaining channel state information (CSI) from RIS poses a critical challenge. Our objective is to maximize downlink (DL) user data rates while ensuring quality-of-service (QoS) for uplink (UL) users under imperfect CSI from reflected channels. To address this, we introduce the robust active BS and passive RIS beamforming (RAPB) scheme for RIS-FD, accounting for both SI and imperfect CSI. RAPB incorporates distributionally robust design, conditional value-at-risk (CVaR), and penalty convex-concave programming (PCCP) techniques. Additionally, RAPB extends to active and passive beamforming (APB) with perfect channel estimation. Simulation results demonstrate the UL/DL rate improvements achieved considering various levels of imperfect CSI. The proposed RAPB/APB schemes validate their effectiveness across different RIS deployments and RIS/BS configurations. Benefiting from robust beamforming, RAPB outperforms existing baselines, including non-robust designs, deployment without RIS, conventional successive convex approximation, and half-duplex systems.
Information Theory (cs.IT)
Some content may be incorrect. Need revision
factual/methodological/other critical errors in manuscript
13,331
2306.05719v2
On Separatrices of foliations on $\mathbb{CP}^2$ with a unique singular point
We show that holomorphic foliations on $\mathbb{CP}^2$ with a unique singularity, either nilpotent or saddle-node, have two analytic, non-algebraic, separatrices through it. We also show that two foliations on $\mathbb{CP}^2$ of degree $d$ with a unique saddle-node singularity are (locally) formally conjugated. We give some examples of foliations on $\mathbb{CP}^2$ with a unique singularity and with a rational first integral, and study different families of foliations in low degree.
Dynamical Systems (math.DS)
Serious mistake detected in one formula used in some of the main results of the paper in Section 4
factual/methodological/other critical errors in manuscript
13,333
2306.05722v3
Ridge Estimation with Nonlinear Transformations
Ridge estimation is an important manifold learning technique. The goal of this paper is to examine the effects of nonlinear transformations on the ridge sets. The main result proves the inclusion relationship between ridges: $\mathcal{R}(f\circ p)\subseteq \mathcal{R}(p)$, provided that the transformation $f$ is strictly increasing and concave on the range of the function $p$. Additionally, given an underlying true manifold $\mathcal{M}$, we show that the Hausdorff distance between $\mathcal{R}(f\circ p)$ and its projection onto $\mathcal{M}$ is smaller than the Hausdorff distance between $\mathcal{R}(p)$ and the corresponding projection. This motivates us to apply an increasing and concave transformation before ridge estimation. Specifically, we show that the power transformations $f^{q}(y)=y^q/q,-\infty<q\leq 1$ are increasing and concave on $\mathbb{R}_+$, and thus we can use such power transformations when $p$ is strictly positive. Numerical experiments demonstrate the advantages of the proposed methods.
Machine Learning (cs.LG)
There are some flaws in the proofs for Lemma 1 and Theorem 1. We want to withdraw this version to prevent any potential misunderstanding for readers
factual/methodological/other critical errors in manuscript
13,334
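The monotonicity and concavity claim in the entry above (2306.05722) can be checked by direct differentiation; a short verification, for $y>0$ and $q\neq 0$ with $q\le 1$:
\[
f^{q}(y)=\frac{y^{q}}{q},\qquad
\frac{d}{dy}f^{q}(y)=y^{q-1}>0,\qquad
\frac{d^{2}}{dy^{2}}f^{q}(y)=(q-1)\,y^{q-2}\le 0 ,
\]
so $f^{q}$ is strictly increasing and concave on $\mathbb{R}_{+}$, which is exactly the hypothesis needed for the inclusion $\mathcal{R}(f^{q}\circ p)\subseteq\mathcal{R}(p)$ when $p$ is strictly positive.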
2306.05935v2
Pressure consistency for binary hard-sphere mixtures from an integral equation approach
The site-site Ornstein-Zernike equation combined with the Verlet-modified bridge function has been applied to binary hard-sphere mixtures, and pressure consistency has been tested. An equation of state has been computed for the case where the packing fraction is $\eta = 0.49$, the diameter ratios are $\sigma_{2}/\sigma_{1} = 0.3$ and $0.6$, and the mole fractions are $x_{1} = 0.125, 0.5, 0.75$, and $1$. The excess chemical potential for each component has been obtained as well. Our findings for the thermodynamic properties are in good agreement with available data in the literature.
Statistical Mechanics (cond-mat.stat-mech)
This electronic preprint contains incorrect data and the calculations need to be redone. Therefore, it was withdrawn by the author
factual/methodological/other critical errors in manuscript
13,335
2306.06300v2
NERFBK: A High-Quality Benchmark for NERF-Based 3D Reconstruction
This paper introduces a new real and synthetic dataset called NeRFBK specifically designed for testing and comparing NeRF-based 3D reconstruction algorithms. High-quality 3D reconstruction has significant potential in various fields, and advancements in image-based algorithms make it essential to evaluate new advanced techniques. However, gathering diverse data with precise ground truth is challenging and may not encompass all relevant applications. The NeRFBK dataset addresses this issue by providing multi-scale, indoor and outdoor datasets with high-resolution images and videos and camera parameters for testing and comparing NeRF-based algorithms. This paper presents the design and creation of the NeRFBK benchmark, various examples and application scenarios, and highlights its potential for advancing the field of 3D reconstruction.
Computer Vision and Pattern Recognition (cs.CV)
The paper's results have a problem
factual/methodological/other critical errors in manuscript
13,337
2306.06478v2
Relative cup product in diffeological spaces
We define a relative cup product in the De Rham cohomology of any diffeological space. To do it, two different versions of the relative De Rham cohomology groups are presented. As an application, we define the cohomological Lusternik-Schnirelmann category of a diffeological space and we prove a cohomological lower bound given by the length of the cup product.
Algebraic Topology (math.AT)
There is a mistake in the proof of Lemma 11.1. As a consequence, Sections 11 and 12 will be removed in a forthcoming version of the paper
factual/methodological/other critical errors in manuscript
13,340
2306.08884v2
Observability inequality, the interpolation inequality and the spectral inequality for the degenerate parabolic equation in R
This paper investigates the interrelationships between the observability inequality, the Hölder-type interpolation inequality, and the spectral inequality for the degenerate parabolic equation in $\mathbb{R}$. We elucidate the distinctive properties of observable sets pertaining to the degenerate parabolic equation. Specifically, we establish that a measurable set in $\mathbb{R}$ fulfills the observability inequality when it exhibits $\gamma$-thickness at a scale $L$, where $\gamma>0$ and $L>0$.
Analysis of PDEs (math.AP)
An issue with one of the proofs in the article affects the final result
factual/methodological/other critical errors in manuscript
13,354
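For reference alongside the entry above (2306.08884), the standard notion of a $\gamma$-thick set at scale $L$ (as used in Logvinenko-Sereda-type estimates) is the following; we assume the abstract follows this convention:
\[
E\subset\mathbb{R}\ \text{is $\gamma$-thick at scale } L
\quad\Longleftrightarrow\quad
|E\cap I|\ \ge\ \gamma L \ \ \text{for every interval } I\subset\mathbb{R}\ \text{of length } L .
\]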
2306.09140v3
Little finitistic dimensions and generalized derived categories
In this paper, we introduce a generalization of the derived category of a given ring $R$, called the $n$-derived category and denoted by $D_{n}(R)$, for each $n\in\mathbb{N}\cup\{\infty\}$. The $n$-derived category of a ring is proved to be very closely connected with its left little finitistic dimension. We also introduce and investigate the notions of $n$-exact sequences, $n$-projective (resp., $n$-injective) modules and $n$-exact complexes. In particular, we characterize the left little finitistic dimensions in terms of all the above notions. Besides, the $n$-global dimension $n$-gldim$(R)$ of $R$ is introduced and investigated. Finally, we build a connection between the classical derived categories and $n$-derived categories.
Rings and Algebras (math.RA)
there is an error in Theorem 2.3
factual/methodological/other critical errors in manuscript
13,356
2306.09163v2
On the Galois correspondence ratio for Hopf-Galois extensions arising from nilpotent $\mathbb{F}_p$-algebras
For a Hopf-Galois structure on a Galois extension $L/K$ of fields that arises from a finite nilpotent $\mathbb{F}_p$-algebra $A$, we look at the Galois correspondence ratio, which measures the failure of surjectivity of the Galois correspondence for the Hopf-Galois structure on $L/K$. Using methods of elementary linear algebra, we observe that the number of subgroups of the adjoint group of $A$ is equal to the number of subgroups of the additive group of $N$. Then we count left ideals of $A$ and thereby determine the GCR for all nilpotent $\mathbb{F}_p$-algebras of dimension 4, and also show that for a set of $\mathbb{F}_p$-algebras of arbitrary dimension $n$ and exponent $e$, the GCR approaches 0 for large $p$, $n$ or $e$.
Rings and Algebras (math.RA)
Theorem 1 is false for A = F_2[x]/(x^3): (A, +) \cong C_2 x C_2; (A, \circ) \cong C_4
factual/methodological/other critical errors in manuscript
13,357
2306.10074v2
Linearizations via Dirac delta function
This note is to show that the position-space embedding in \cite{ESP2021embedding} in the position and occupation bases can be obtained by considering the dynamics of Dirac delta function $$\delta(\mathbf{x}- \mathbf{z}(t)) = \delta(x_1-z_1(t))\cdots \delta(x_d-z_d(t)),$$ where $\mathbf{z}(t)\in \mathbb{R}^d$ is the solution of a nonlinear dynamical system and $\mathbf{x}\in \mathbb{R}^d$ is a variable in the position space.
Dynamical Systems (math.DS)
Incorrect statements
factual/methodological/other critical errors in manuscript
13,362
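A one-line computation clarifying the mechanism behind the entry above (2306.10074): if $\dot{\mathbf{z}}(t)=F(\mathbf{z}(t))$, then, in the sense of distributions,
\[
\frac{\partial}{\partial t}\,\delta\bigl(\mathbf{x}-\mathbf{z}(t)\bigr)
= -\,\dot{\mathbf{z}}(t)\cdot\nabla_{\mathbf{x}}\,\delta\bigl(\mathbf{x}-\mathbf{z}(t)\bigr)
= -\,\nabla_{\mathbf{x}}\cdot\Bigl(F(\mathbf{x})\,\delta\bigl(\mathbf{x}-\mathbf{z}(t)\bigr)\Bigr),
\]
so the delta function centered on a trajectory obeys a linear (Liouville-type) transport equation even when the underlying dynamical system is nonlinear.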
2306.10077v2
Stacking of Hyperparameter Tuned Models for Tagging Coding Problems
Coding problems are problems that require a solution in the form of a computer program. They are popular among students and professionals as they enhance skills and career opportunities. An AI system that helps those who practice coding problems would be highly useful, and there is huge potential for such a system. In this work, we propose a model which uses stacking of hyperparameter-tuned boosting models to achieve metric scores of 77.8% accuracy and 0.815 PR-AUC on a dataset scraped from Codeforces and Leetcode. We open-source the dataset and the models developed for this work.
Machine Learning (cs.LG)
Error corrections have to be made for certain metrics
factual/methodological/other critical errors in manuscript
13,363
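For the entry above (2306.10077), a minimal, generic scikit-learn sketch of stacking hyperparameter-tuned boosting models. The estimators, search grids, and synthetic data are placeholders, not the authors' Codeforces/Leetcode setup:

from sklearn.datasets import make_classification
from sklearn.ensemble import (GradientBoostingClassifier,
                              HistGradientBoostingClassifier, StackingClassifier)
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV, train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# tune each boosting model separately (toy grids)
gb = GridSearchCV(GradientBoostingClassifier(random_state=0),
                  {"n_estimators": [100, 300], "learning_rate": [0.05, 0.1]}, cv=3).fit(X_tr, y_tr)
hgb = GridSearchCV(HistGradientBoostingClassifier(random_state=0),
                   {"learning_rate": [0.05, 0.1], "max_depth": [3, None]}, cv=3).fit(X_tr, y_tr)

# stack the tuned models with a logistic-regression meta-learner
stack = StackingClassifier(
    estimators=[("gb", gb.best_estimator_), ("hgb", hgb.best_estimator_)],
    final_estimator=LogisticRegression(max_iter=1000), cv=5).fit(X_tr, y_tr)
print("held-out accuracy:", stack.score(X_te, y_te))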
2306.10343v2
Complete self-shrinkers with bounded second fundamental form in $\mathbb{R}^{n+1}$
Let $X:M^n\to \mathbb{R}^{n+1}$ be a complete properly immersed self-shrinker. In this paper, we prove that if the squared norm of the second fundamental form $S$ satisfies $1\leq S< C$ for some constant $C$, then $S=1$. Further, we classify the $n$-dimensional complete proper self-shrinkers with constant squared norm of the second fundamental form in $\mathbb{R}^{n+1}$, which solves the conjecture proposed by Q.M. Cheng and G. Wei when the self-shrinker is proper.
Differential Geometry (math.DG)
There is an error in the paper
factual/methodological/other critical errors in manuscript
13,366
2306.10409v2
Iterative Hierarchy and Ranking Process (IHRP): A Novel Effective Hierarchy Method for Densely Connected Systems and Case Study in Student Performance Assessment
In real-life decision-making problems, determining the influences of the factors on the decision attribute is one of the primary tasks. To affect the decision attribute most, finding a proper hierarchy among the factors and determining their importance values in the system becomes quite important. Interpretive structural modeling (ISM) is a widely used hierarchy-building method that mines factor inter-influences based on expert opinions. This paper discusses one of the main drawbacks of the conventional ISM method in systems where the factors are densely interrelated. We refer to such systems as "dense systems". We propose a novel iterative hierarchy-building technique, called 'Iterative Hierarchy and Ranking Process'(IHRP) which performs effectively in such dense systems. To take the vagueness of the expert opinions into account, intuitionistic fuzzy linguistics has been used in the research work. In this paper, we propose a two-stage calculation of the relative importance of the factors in the system based on their hierarchical positions and rank the factors accordingly. We have performed a case study on student performance assessment by taking up novel Indian high-school administrative factors' data collected by surveying the experts in this field. A comparative study has been conducted in terms of the correlation of the factor ranking achieved by the proposed method and conventional ISM method with that of standard outranking methods like TOPSIS, and VIKOR. Our proposed IHRP framework achieves an 85-95% correlation compared to a 50-60% correlation for the conventional ISM method. This proves the effectiveness of the proposed method in determining a better hierarchy than the conventional method, especially in dense systems.
Artificial Intelligence (cs.AI)
The paper has been rejected by the mentioned journal "[REDACTED-NAME] of [REDACTED-NAME] and [REDACTED-NAME]", and some of the results in this version were miscalculated
factual/methodological/other critical errors in manuscript
13,367
2306.10557v2
The Chow weight structure for geometric motives of quotient stacks
We construct the Chow weight structure on the derived category of geometric motives with arbitrary coefficients, for $X$ a finite type scheme over a field of characteristic 0 and $G$ an affine algebraic group. In particular, we also show that the heart of this weight structure recovers the category of Chow motives on $[X/G]$.
Algebraic Geometry (math.AG)
Lemma 4.4 is wrong, which also invalidates Theorem 4.6. We have decided to withdraw the article while we work on a fix
factual/methodological/other critical errors in manuscript
13,368
2306.10801v2
Three way information paradox and its resolution using islands
Black holes possess finite degrees of freedom and thus cannot fuel unbounded entanglement growth of any system. Instead of the usual information paradox where the coupled system is one entity, the Hawking radiation, here we couple a black hole $\chi_0$ with two infinite entities: a thermal bath $\chi_1$ and an auxiliary system $\chi _2$. This produces a novel information paradox in the sense that the gravitational correction to the black hole entropy does not rule out paradoxical growth of the $\chi _1$ and $\chi _2$ entropies. This immediately raises the question of what kind of resolution such a paradox admits, and we address this question working in the AdS$_2$ JT gravity model, using the island formula and ideas of entanglement monogamy. We find the quantum extremal surface that cures the black hole entropy growth, argue from monogamy for how the $\chi _1$ and $\chi _2$ entropies must behave, and derive an island which satisfies these expectations. A direct consequence of our results is that gravitation builds entanglement between $\chi _1$ and $\chi _2$, even though they start out independent.
High Energy Physics - Theory (hep-th)
We realized that some of the crucial assumptions in section 4 are not true in general. Therefore we wish to withdraw the paper
factual/methodological/other critical errors in manuscript
13,369
2306.10946v2
Tourist Attractions Recommendation based on Attention Knowledge Graph Convolution Network
Recommendation algorithms based on knowledge graphs are at a relatively mature stage. However, there are still some problems in recommendation for specific domains. For example, in the tourism field, the process of selecting suitable tourist attraction attributes as the basis for recommending attractions is complicated. In this paper, we propose an improved Attention Knowledge Graph Convolution Network model, named Att-KGCN, which automatically discovers the semantically neighboring entities of the target scenic spot. The attention layer aggregates relatively similar locations and represents them with an adjacent vector. Then, according to the tourist's preferred choices, the model predicts the probability of similar spots as a recommendation system. A knowledge graph dataset of tourist attractions is used, based on tourism data from Socotra Island, Yemen. Through experiments, it is verified that the Attention Knowledge Graph Convolution Network works well for the recommendation of tourist attractions and can provide better recommendations for tourists' choices.
Information Retrieval (cs.IR)
I have incorrect information
factual/methodological/other critical errors in manuscript
13,370
2306.12062v3
Borodin-Kostochka conjecture for a family of $P_6$-free graphs
Borodin and Kostochka conjectured that every graph $G$ with $\Delta\ge9$ satisfies $\chi\le$ max $\{\omega, \Delta-1\}$. Gupta and Pradhan proved the Borodin-Kostochka conjecture for ($P_5$, $C_4$)-free graphs. In this note, we prove the Borodin-Kostochka conjecture for ($P_6$, apple, torch)-free graphs, that is, graphs with no induced $P_6$, no induced $C_5$ with a hanging edge, and no induced $C_5$ and $C_4$ sharing exactly two edges. This generalizes the result of Gupta and Pradhan from the perspective of allowing the existence of $P_5$.
Combinatorics (math.CO)
This paper needs to be rewritten and reorganized. The last section might not be fully correct; it needs some further checking
factual/methodological/other critical errors in manuscript
13,373
2306.12088v2
An Efficient Virtual Data Generation Method for Reducing Communication in Federated Learning
Communication overhead is one of the major challenges in Federated Learning (FL). A few classical schemes assume the server can extract auxiliary information about the training data of the participants from the local models to construct a central dummy dataset. The server uses the dummy dataset to finetune the aggregated global model to achieve the target test accuracy in fewer communication rounds. In this paper, we summarize the above solutions into a data-based communication-efficient FL framework. The key of the proposed framework is to design an efficient extraction module (EM) which ensures the dummy dataset has a positive effect on finetuning the aggregated global model. Different from the existing methods that use a generator to design the EM, our proposed method, FedINIBoost, borrows the idea of gradient matching to construct the EM. Specifically, FedINIBoost builds a proxy dataset of the real dataset in two steps for each participant at each communication round. Then the server aggregates all the proxy datasets to form a central dummy dataset, which is used to finetune the aggregated global model. Extensive experiments verify the superiority of our method compared with the existing classical methods FedAVG, FedProx, Moon and FedFTG. Moreover, FedINIBoost plays a significant role in finetuning the performance of the aggregated global model at the initial stage of FL.
Machine Learning (cs.LG)
There are some errors in the experimental setup of this paper
factual/methodological/other critical errors in manuscript
13,374
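For the entry above (2306.12088), the generic gradient-matching idea it refers to can be sketched in PyTorch: a small learnable proxy dataset is optimized so that its gradients mimic those of the real data. The model, loss, and sizes below are illustrative assumptions; this is not the FedINIBoost algorithm itself:

import torch
import torch.nn as nn

torch.manual_seed(0)
model = nn.Linear(10, 2)                           # stand-in for a local client model
params = list(model.parameters())
loss_fn = nn.CrossEntropyLoss()

x_real = torch.randn(64, 10)                       # a client's real batch (toy data)
y_real = torch.randint(0, 2, (64,))
x_syn = torch.randn(8, 10, requires_grad=True)     # small learnable proxy dataset
y_syn = torch.randint(0, 2, (8,))
opt = torch.optim.Adam([x_syn], lr=0.1)

# gradients of the loss on the real batch (fixed model, so computed once)
g_real = torch.autograd.grad(loss_fn(model(x_real), y_real), params)

for step in range(200):
    g_syn = torch.autograd.grad(loss_fn(model(x_syn), y_syn), params, create_graph=True)
    # squared L2 distance between proxy-data and real-data gradients
    match = sum(((gs - gr.detach()) ** 2).sum() for gs, gr in zip(g_syn, g_real))
    opt.zero_grad()
    match.backward()
    opt.step()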
2306.12783v2
Proof of reserves and non-double spends for Chaumian Mints
E-cash was invented in 1982 by David Chaum as an anonymous cryptographic electronic cash system based on blind signatures. It is not a decentralized form of money as Bitcoin is. It requires trust in the server or Mint issuing the e-cash tokens and validating the transactions to prevent double spends. Moreover, the users also need to trust the Mint not to debase the value of the e-cash tokens by minting an uncontrolled number of them. In particular, this is critical for e-cash tokens representing a note of another asset such as a currency, bitcoin, or another cryptocurrency. Thus it would be suitable to implement a public auditing system providing a proof of reserves that ensures that the Mint is not engaging in a fractional reserve system. In this article we describe how to implement a proof-of-reserves system for Chaumian Mints. The protocol also provides a proof of non-double spends.
Cryptography and Security (cs.CR)
There is an error in the 3rd point of section 3.1
factual/methodological/other critical errors in manuscript
13,376
2306.12857v2
Efficient Partitioning Method of Large-Scale Public Safety Spatio-Temporal Data based on Information Loss Constraints
The storage, management, and application of massive spatio-temporal data are widely applied in various practical scenarios, including public safety. However, due to the unique spatio-temporal distribution characteristics of real-world data, most existing methods have limitations in terms of the spatio-temporal proximity of data and load balancing in distributed storage. Therefore, this paper proposes an efficient partitioning method of large-scale public safety spatio-temporal data based on information loss constraints (IFL-LSTP). The IFL-LSTP model specifically targets large-scale spatio-temporal point data by combining the spatio-temporal partitioning module (STPM) with the graph partitioning module (GPM). This approach can significantly reduce the scale of data while maintaining the model's accuracy, in order to improve the partitioning efficiency. It can also ensure the load balancing of distributed storage while maintaining spatio-temporal proximity of the data partitioning results. This method provides a new solution for distributed storage of massive spatio-temporal data. The experimental results on multiple real-world datasets demonstrate the effectiveness and superiority of IFL-LSTP.
Machine Learning (cs.LG)
There are declarative errors in the method part, and the new version needs time to prepare, so we should withdraw the changes first, so as not to mislead others
factual/methodological/other critical errors in manuscript
13,377
2306.12859v2
Reinforcement Federated Learning Method Based on Adaptive OPTICS Clustering
Federated learning is a distributed machine learning technology, which realizes the balance between data privacy protection and data sharing computing. To protect data privacy, federated learning learns shared models by locally executing distributed training on participating devices and aggregating local models into global models. There is a problem in federated learning, that is, the negative impact caused by the non-independent and identical distribution of data across different user terminals. In order to alleviate this problem, this paper proposes a strengthened federation aggregation method based on adaptive OPTICS clustering. Specifically, this method perceives the clustering environment as a Markov decision process, and models the adjustment process of parameter search direction, so as to find the best clustering parameters to achieve the best federated aggregation method. The core contribution of this paper is to propose an adaptive OPTICS clustering algorithm for federated learning. The algorithm combines OPTICS clustering and adaptive learning technology, and can effectively deal with the problem of non-independent and identically distributed data across different user terminals. By perceiving the clustering environment as a Markov decision process, the goal is to find the best parameters of the OPTICS cluster without artificial assistance, so as to obtain the best federated aggregation method and achieve better performance. The reliability and practicability of this method have been verified on the experimental data, and its effectiveness and superiority have been proved.
Machine Learning (cs.LG)
There is a declarative error in the method part, we want to withdraw the revision first, so as not to mislead others
factual/methodological/other critical errors in manuscript
13,378
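For the entry above (2306.12859), the OPTICS clustering step on client updates (which the paper wraps in a Markov-decision-process parameter search) can be illustrated with scikit-learn. The flattened client updates and the fixed min_samples/xi values are assumptions for illustration only:

import numpy as np
from sklearn.cluster import OPTICS

# toy stand-in: each row is one client's flattened model update
rng = np.random.default_rng(0)
client_updates = np.vstack([
    rng.normal(0.0, 0.1, size=(10, 32)),   # clients from one data distribution
    rng.normal(1.0, 0.1, size=(10, 32)),   # clients from another (non-IID)
])

# OPTICS groups clients with similar updates; min_samples and xi are the kind
# of parameters the paper tunes adaptively instead of fixing by hand
labels = OPTICS(min_samples=3, xi=0.05, metric="euclidean").fit(client_updates).labels_

# cluster-wise averaging could then replace plain global averaging
for c in set(labels) - {-1}:                # -1 marks noise points
    cluster_mean = client_updates[labels == c].mean(axis=0)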
2306.12883v4
The Gruenberg-Kegel graph of finite solvable rational groups
A finite group $G$ is said to be rational if every character of $G$ is rational-valued. The Gruenberg-Kegel graph of a finite group $G$ is the undirected graph whose vertices are the primes dividing the order of $G$ and the edges join different primes $p$ and $q$ whenever $G$ contains an element of order $pq$. In this paper, we complete the classification of the Gruenberg-Kegel graphs of finite solvable rational groups initiated in \cite{BKMdR}.
Group Theory (math.GR)
We found a mistake in the proof
factual/methodological/other critical errors in manuscript
13,379
2306.13387v2
Improved Competitive Ratios for Online Bipartite Matching on Degree Bounded Graphs
We consider the online bipartite matching problem on $(k,d)$-bounded graphs, where each online vertex has at most $d$ neighbors, each offline vertex has at least $k$ neighbors, and $k\geq d\geq 2$. The model of $(k,d)$-bounded graphs is proposed by Naor and Wajc (EC 2015 and TEAC 2018) to model the online advertising applications in which offline advertisers are interested in a large number of ad slots, while each online ad slot is interesting to a small number of advertisers. They proposed deterministic and randomized algorithms with a competitive ratio of $1 - (1-1/d)^k$ for the problem, and show that the competitive ratio is optimal for deterministic algorithms. They also raised the open questions of whether strictly better competitive ratios can be achieved using randomized algorithms, for both the adversarial and stochastic arrival models. In this paper we answer both of their open problems affirmatively. For the adversarial arrival model, we propose a randomized algorithm with competitive ratio $1 - (1-1/d)^k + \Omega(d^{-4}\cdot e^{-\frac{k}{d}})$ for all $k\geq d\geq 2$. We also consider the stochastic model and show that even better competitive ratios can be achieved. We show that for all $k\geq d\geq 2$, the competitive ratio is always at least $0.8237$. We further consider the $b$-matching problem when each offline vertex can be matched at most $b$ times, and provide several competitive ratio lower bounds for the adversarial and stochastic model.
Data Structures and Algorithms (cs.DS)
Some error was discovered in the analysis for the stochastic setting
factual/methodological/other critical errors in manuscript
13,381
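A quick numerical reading of the baseline ratio quoted in the entry above (2306.13387), $1-(1-1/d)^{k}$, for small parameters:
\[
k=d=2:\quad 1-\Bigl(\tfrac{1}{2}\Bigr)^{2}=0.75,
\qquad
k=4,\ d=2:\quad 1-\Bigl(\tfrac{1}{2}\Bigr)^{4}=0.9375,
\]
so the stated stochastic-model guarantee of at least $0.8237$ improves on the deterministic baseline exactly when $1-(1-1/d)^{k}<0.8237$, while the adversarial-model gain is the additive $\Omega(d^{-4}e^{-k/d})$ term.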
2306.13707v3
Dirac leptogenesis via scatterings
Leptogenesis typically requires the introduction of heavy particles whose out-of-equilibrium decays are essential for generating a matter-antimatter asymmetry, according to one of Sakharov's conditions. We show that in Dirac leptogenesis, scatterings between the light degrees of freedom - Standard Model particles plus Dirac neutrinos - suffice to generate the asymmetry. Sakharov's conditions are satisfied because the right-handed neutrino partners are out of equilibrium. Consequently, heavy degrees of freedom never needed to be produced in the early universe, allowing for a reheating temperature below their mass scale. We solve the Boltzmann equations and discuss the viable parameter space together with observational signatures such as an increased number of effective neutrinos in the early universe as well as proton decay for some realizations.
High Energy Physics - Phenomenology (hep-ph)
Withdrawn due to critical error in calculation, see arXiv:2308.14767
factual/methodological/other critical errors in manuscript
13,382
2306.14208v2
PaRUS: A Virtual Reality Shopping Method Focusing on Context between Products and Real Usage Scenes
The development of AR and VR technologies is enhancing users' online shopping experiences in various ways. However, in existing VR shopping applications, shopping contexts merely refer to the products and virtual malls or metaphorical scenes where users select products. This leads to the defect that users can only imagine rather than intuitively feel whether the selected products are suitable for their real usage scenes, resulting in a significant discrepancy between their expectations before and after the purchase. To address this issue, we propose PaRUS, a VR shopping approach that focuses on the context between products and their real usage scenes. PaRUS begins by rebuilding the virtual scenario of the products' real usage scene through a new semantic scene reconstruction pipeline, which preserves both the structured scene and textured object models in the scene. Afterwards, intuitive visualization of how the selected products fit the reconstructed virtual scene is provided. We conducted two user studies to evaluate how PaRUS impacts user experience, behavior, and satisfaction with their purchase. The results indicated that PaRUS significantly reduced the perceived performance risk and improved users' trust and satisfaction with their purchase results.
Human-Computer Interaction (cs.HC)
a mistake: the participant number of the first user study should be 24 instead of 16
factual/methodological/other critical errors in manuscript
13,386
2306.14684v5
Grothendieck topology of $C^*$-algebras
For any topological space there is a sheaf cohomology. A Grothendieck topology is a generalization of the classical topology such that it also possesses a sheaf cohomology. On the other hand, any noncommutative $C^*$-algebra is a generalization of a locally compact Hausdorff space. Here we define a Grothendieck topology arising from $C^*$-algebras which is a generalization of the topology of the spectra of commutative $C^*$-algebras. This construction yields a noncommutative generalization of the sheaf cohomology of topological spaces. The theory presented here gives a unified approach to the Gelfand duality and the duality between the commutative von Neumann algebras and measure locales. The generalization of the Dixmier-Douady theory concerning $C^*$-algebras of foliations is also discussed.
Operator Algebras (math.OA)
There is a fatal mistake in this article
factual/methodological/other critical errors in manuscript
13,387
2306.14864v2
An orthogonalization-free implementation of the LOBPCG method in solving Kohn-Sham equation
In the classic implementation of the LOBPCG method, orthogonalization and the R-R (Rayleigh-Ritz) procedure cost non-negligible CPU time. This cost can become very expensive for situations with large block sizes. In this paper, we propose an orthogonalization-free framework for implementing the LOBPCG method for SCF (self-consistent field) iterations in solving the Kohn-Sham equation. In this framework, orthogonalization is avoided in the calculations, which decreases the computational complexity, and the R-R procedure is parallelized through OpenMP, which further reduces the computational time. In the numerical experiments, an effective preconditioning strategy is designed, which accelerates the LOBPCG method remarkably. Consequently, the efficiency of the LOBPCG method can be significantly improved. Based on this, the SCF iteration can solve the Kohn-Sham equation efficiently. A series of numerical experiments are conducted to demonstrate the effectiveness of our implementation, in which significant improvements in computational time can be observed.
Numerical Analysis (math.NA)
There are some mistakes in Figs. 4.2 and 4.3, which can lead to wrong conclusions that do not match the theory
factual/methodological/other critical errors in manuscript
13,388
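For readers unfamiliar with the solver in the entry above (2306.14864), a minimal SciPy example of the standard (orthogonalization-using) LOBPCG routine with a simple diagonal preconditioner; the test matrix is a 1-D Laplacian stand-in, and this is the textbook method rather than the paper's orthogonalization-free variant:

import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import lobpcg

n, k = 2000, 8
A = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format="csr")  # sparse symmetric test matrix

M = sp.diags(1.0 / A.diagonal())                          # simple Jacobi (diagonal) preconditioner
X0 = np.random.default_rng(0).standard_normal((n, k))     # initial block of k vectors

eigvals, eigvecs = lobpcg(A, X0, M=M, largest=False, tol=1e-6, maxiter=200)
print(np.sort(eigvals)[:3])                               # a few of the smallest eigenvalues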
2306.15079v3
From $O(\sqrt{n})$ to $O(\log(n))$ in Quadratic Programming
A "dark cloud" has hung over numerical optimization theory for decades, namely, whether an optimization algorithm with $O(\log(n))$ iteration complexity exists. "Yes", this paper answers, with a new optimization algorithm and a rigorous theoretical proof. It starts with box-constrained quadratic programming (Box-QP), and many practical optimization problems fall into Box-QP. General smooth quadratic programming (QP), nonsmooth Lasso, and support vector machine (or regression) can be reformulated as Box-QP via duality theory. This is the first $O(\log(n))$ iteration complexity QP algorithm to be presented; in particular, it behaves like a "direct" method: the required number of iterations is deterministic, with exact value $\left\lceil\log\left(\frac{3.125n}{\epsilon}\right)/\log(1.5625)\right\rceil$. This significant breakthrough enables us to transition from $O(\sqrt{n})$ to $O(\log(n))$ optimization algorithms, whose scalability is particularly relevant in today's era of big data and artificial intelligence.
Optimization and Control (math.OC)
There is an error in the proof
factual/methodological/other critical errors in manuscript
13,389
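The deterministic iteration count quoted in the entry above (2306.15079) is easy to tabulate; a tiny Python script that evaluates the abstract's formula for a few problem sizes:

import math

def claimed_iterations(n, eps):
    # ceil( log(3.125 * n / eps) / log(1.5625) ), as stated in the abstract
    return math.ceil(math.log(3.125 * n / eps) / math.log(1.5625))

for n in (10**3, 10**6, 10**9):
    print(n, claimed_iterations(n, eps=1e-6))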
2306.15142v3
LRANet: Towards Accurate and Efficient Scene Text Detection with Low-Rank Approximation Network
Recently, regression-based methods, which predict parameterized text shapes for text localization, have gained popularity in scene text detection. However, the existing parameterized text shape methods still have limitations in modeling arbitrary-shaped texts due to ignoring the utilization of text-specific shape information. Moreover, the time consumption of the entire pipeline has been largely overlooked, leading to a suboptimal overall inference speed. To address these issues, we first propose a novel parameterized text shape method based on low-rank approximation. Unlike other shape representation methods that employ data-irrelevant parameterization, our approach utilizes singular value decomposition and reconstructs the text shape using a few eigenvectors learned from labeled text contours. By exploring the shape correlation among different text contours, our method achieves consistency, compactness, simplicity, and robustness in shape representation. Next, we propose a dual assignment scheme for speed acceleration. It adopts a sparse assignment branch to accelerate the inference speed, and meanwhile, provides ample supervised signals for training through a dense assignment branch. Building upon these designs, we implement an accurate and efficient arbitrary-shaped text detector named LRANet. Extensive experiments are conducted on several challenging benchmarks, demonstrating the superior accuracy and efficiency of LRANet compared to state-of-the-art methods. Code will be released soon.
Computer Vision and Pattern Recognition (cs.CV)
There were some errors in the experimental results of the first version, such as inaccurate measurement of FPS and a low F-measure
factual/methodological/other critical errors in manuscript
13,390
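For the entry above (2306.15142), the low-rank contour representation idea can be sketched with a plain SVD in NumPy: stack flattened training contours, keep a few leading singular vectors, and encode any contour by its coefficients over them. The random data, point count, and rank are made-up placeholders, and the sampling/alignment details of the actual detector are omitted:

import numpy as np

rng = np.random.default_rng(0)
N, P, r = 500, 36, 6                         # N training contours, P boundary points, rank r
contours = rng.standard_normal((N, 2 * P))   # each row: a contour flattened to (x1, y1, ..., xP, yP)

mean = contours.mean(axis=0)
_, _, Vt = np.linalg.svd(contours - mean, full_matrices=False)
basis = Vt[:r]                               # r learned "eigen-contours"

def encode(contour):
    return (contour - mean) @ basis.T        # r coefficients per contour

def decode(coeffs):
    return mean + coeffs @ basis             # approximate contour reconstruction

approx = decode(encode(contours[0]))
print("reconstruction error:", np.linalg.norm(approx - contours[0]))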
2306.15596v2
Automated Fuzzing Harness Generation for Library APIs and Binary Protocol Parsers
Fuzzing is a widely used software security testing technique that is designed to identify vulnerabilities in systems by providing invalid or unexpected input. Continuous fuzzing systems like OSS-FUZZ have been successful in finding security bugs in many different software systems. The typical process of finding security bugs using fuzzing involves several steps: first, the "fuzz-worthy" functions that are likely to contain vulnerabilities must be identified; second, the setup requirements for the API must be understood before it can be called; third, a fuzzing harness must be written and bound to a coverage-guided fuzzer like LLVM's LibFuzzer; and finally, the security bugs discovered by the fuzzing harness must be triaged and checked for reproducibility. This project focuses on automating the first two steps in this process. In particular, we present an automated system that can generate fuzzing harnesses for library APIs and binary protocol parsers by analyzing unit tests. This allows for the scaling of the fuzzing infrastructure in proportion to the growth of the codebase, without the need for manual coding of harnesses. Additionally, we develop a metric to assess the "fuzz-worthiness" of an API, enabling us to prioritize the most promising targets for testing.
Cryptography and Security (cs.CR)
Needed correct citations
factual/methodological/other critical errors in manuscript
13,393
2306.15868v2
GraSS: Contrastive Learning with Gradient Guided Sampling Strategy for Remote Sensing Image Semantic Segmentation
Self-supervised contrastive learning (SSCL) has achieved significant milestones in remote sensing image (RSI) understanding. Its essence lies in designing an unsupervised instance discrimination pretext task to extract, from a large number of unlabeled images, image features that are beneficial for downstream tasks. However, existing instance discrimination based SSCL suffers from two limitations when applied to the RSI semantic segmentation task: 1) the positive sample confounding issue; 2) the feature adaptation bias, which arises when instance-level features are applied to semantic segmentation tasks that require pixel-level or object-level features. In this study, we observed that the discrimination information can be mapped to specific regions in the RSI through the gradient of the unsupervised contrastive loss, and these specific regions tend to contain singular ground objects. Based on this, we propose contrastive learning with Gradient guided Sampling Strategy (GraSS) for RSI semantic segmentation. GraSS consists of two stages: Instance Discrimination warm-up (ID warm-up) and Gradient guided Sampling contrastive training (GS training). The ID warm-up aims to provide initial discrimination information to the contrastive loss gradients. The GS training stage aims to utilize the discrimination information contained in the contrastive loss gradients and adaptively select regions in RSI patches that contain more singular ground objects, in order to construct new positive and negative samples. Experimental results on three open datasets demonstrate that GraSS effectively enhances the performance of SSCL in high-resolution RSI semantic segmentation. Compared to seven baseline methods from five different types of SSCL, GraSS achieves an average improvement of 1.57\% and a maximum improvement of 3.58\% in terms of mean intersection over union. The source code is available at this https URL
Machine Learning (cs.LG)
there are some errors
factual/methodological/other critical errors in manuscript
13,394
2306.15878v2
A Diamond Model Analysis on Twitter's Biggest Hack
Cyberattacks have prominently increased over the past few years now, and have targeted actors from a wide variety of domains. Understanding the motivation, infrastructure, attack vectors, etc. behind such attacks is vital to proactively work against preventing such attacks in the future and also to analyze the economic and social impact of such attacks. In this paper, we leverage the diamond model to perform an intrusion analysis case study of the 2020 Twitter account hijacking Cyberattack. We follow this standardized incident response model to map the adversary, capability, infrastructure, and victim and perform a comprehensive analysis of the attack, and the impact posed by the attack from a Cybersecurity policy standpoint.
Cryptography and Security (cs.CR)
Discrepancies in the paper
factual/methodological/other critical errors in manuscript
13,395
2306.16631v3
Purity based continuity bounds for quantum information measures
In quantum information theory, communication capacities are mostly given in terms of entropic formulas. Continuity of such entropic quantities is significant, as it ensures uniformity of the measures against perturbations of quantum states. Traditionally, continuity bounds have been provided in terms of the trace distance, which is a bona fide metric on the set of quantum states. In the present contribution we derive continuity bounds for various information measures based on the difference in purity of the concerned quantum states. In a finite-dimensional system, we establish continuity bounds for the von Neumann entropy which depend only on the purity distance and the dimension of the system. We then obtain uniform continuity bounds for the conditional von Neumann entropy in terms of the purity distance which are free of the dimension of the conditioning subsystem. Furthermore, we derive uniform continuity bounds for other entropic quantities such as the relative entropy distance, quantum mutual information and quantum conditional mutual information. As an application, we investigate the variation in squashed entanglement with respect to purity. We also obtain a bound on the quantum conditional mutual information of a quantum state which is arbitrarily close to a quantum Markov chain.
Quantum Physics (quant-ph)
The paper is withdrawn as some errors are noticed and they need to be fixed
factual/methodological/other critical errors in manuscript
13,400
2306.17106v2
Flow Dynamics of a Dodecane Jet in Oxygen Crossflow at Supercritical Pressures
In advanced aero-propulsion engines, kerosene is often injected into the combustor at supercritical pressures, where the flow dynamics is distinct from the subcritical counterpart. A large-eddy simulation of an N-dodecane jet in oxygen crossflow at supercritical pressures, combined with real-fluid thermodynamics and transport theories, is presented. Liquid dodecane at 600 K is injected into a supercritical oxygen environment at 700 K at different supercritical pressures and jet-to-crossflow momentum flux ratios (J). Various vortical structures are discussed in detail. The results show that, with the same jet-to-crossflow velocity ratio of 0.75, the upstream shear layer (USL) is absolutely unstable at 6.0 MPa (J = 7.1) and convectively unstable at 3.0 MPa (J = 13.2). This trend is consistent with the empirical criterion for the stability characteristics of a jet in crossflow at subcritical pressures (Jcr = 10). While decreasing J to 7.1 at 3.0 MPa, however, the dominant Strouhal number of the USL varies along the upstream jet trajectory, and the USL becomes convectively unstable. Such an abnormal change in stability behavior can be attributed to the real-fluid effect induced by strong density stratification at a pressure of 3.0 MPa, under which a point of inflection in the upstream mixing layer produces a large density gradient and tends to stabilize the USL. The stability behavior with varying pressure and J is further corroborated by linear stability analysis. The analysis of spatial mixing deficiencies reveals that the mixing efficiency is enhanced at a higher jet-to-crossflow momentum flux ratio.
Fluid Dynamics (physics.flu-dyn)
1. The citations "Cortelezzi & Karagozian 2001; Yang & Wang 2005" on page 2 were wrong; they are not "compressible". This would mislead readers. 2. The [REDACTED-NAME] number announced on page 7 was wrong by an order of magnitude
factual/methodological/other critical errors in manuscript
13,402
2306.17345v2
Leavitt Path Algebras and Quantum Graphs
By adapting recent work in extending graph $C^*$-algebras to quantum graphs, we introduce Leavitt quantum graphs as an analogue of quivers where the edge set and vertex set are replaced by a $C^*$-algebra and the maps between the sets by $*$-homomorphisms. Moreover, we develop the theory around these structures and construct a notion of Leavitt path algebra over them, which is an extension of the usual Leavitt path algebra associated to a quiver.
Rings and Algebras (math.RA)
The article has been withdrawn due to a conflict in the nomenclature of the quivers introduced in this work
factual/methodological/other critical errors in manuscript
13,405
2306.17727v2
Improved NL2SQL based on Multi-layer Expert Network
The Natural Language to SQL (NL2SQL) technique is used to convert natural language queries into executable SQL statements. Typically, slot-filling is employed as a classification method for multi-task cases to achieve this goal. However, slot-filling can result in inaccurate SQL statement generation due to negative migration issues arising from different classification tasks. To overcome this limitation, this study introduces a new approach called Multi-Layer Expert Generate SQL (MLEG-SQL), which utilizes a dedicated multi-task hierarchical network. The lower layer of the network extracts semantic features of natural language statements, while the upper layer builds a specialized expert system for handling specific classification tasks. This hierarchical approach mitigates performance degradation resulting from different task conflicts. The proposed method was evaluated on the WiKSQL dataset and was found to be effective in generating accurate SQL statements.
Computation and Language (cs.CL)
the paper's figure has something wrong
factual/methodological/other critical errors in manuscript
13,407
2307.00900v2
Linear Independence of Hecke Operators on modular symbols of higher weights
We prove that for sufficiently large primes $p$, the Hecke operators $T_1, T_2, \ldots, T_D$ act linearly independently on the space of weight $2k$ cuspidal modular symbols $\mathbb{S}_{2k}(\Gamma_0(p))$ with $k\geq 1$, for $D^2\ll p$. This is a generalization of work of Vanderkam to higher-weight modular symbols.
Number Theory (math.NT)
Mistakes detected
factual/methodological/other critical errors in manuscript
13,410
2307.01515v2
LPN: Language-guided Prototypical Network for few-shot classification
Few-shot classification aims to adapt to new tasks with limited labeled examples. To fully use the accessible data, recent methods explore suitable measures for the similarity between the query and support images and better high-dimensional features with meta-training and pre-training strategies. However, the potential of multi-modality information has barely been explored, which may bring promising improvement for few-shot classification. In this paper, we propose a Language-guided Prototypical Network (LPN) for few-shot classification, which leverages the complementarity of vision and language modalities via two parallel branches. Concretely, to introduce language modality with limited samples in the visual task, we leverage a pre-trained text encoder to extract class-level text features directly from class names while processing images with a conventional image encoder. Then, a language-guided decoder is introduced to obtain text features corresponding to each image by aligning class-level features with visual features. In addition, to take advantage of class-level features and prototypes, we build a refined prototypical head that generates robust prototypes in the text branch for follow-up measurement. Finally, we aggregate the visual and text logits to calibrate the deviation of a single modality. Extensive experiments demonstrate the competitiveness of LPN against state-of-the-art methods on benchmark datasets.
Computer Vision and Pattern Recognition (cs.CV)
results error in table 1, the last line
factual/methodological/other critical errors in manuscript
13,414
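For context on the prototypical head mentioned in the entry above (2307.01515), a minimal NumPy sketch of the standard prototypical-network classification step (class prototypes as mean support embeddings, queries labeled by the nearest prototype). The language-guided and calibration components of LPN are not reproduced here, and the toy features are synthetic:

import numpy as np

def prototypical_predict(support_feats, support_labels, query_feats):
    # support_feats: (S, D), support_labels: (S,) integer classes, query_feats: (Q, D)
    classes = np.unique(support_labels)
    protos = np.stack([support_feats[support_labels == c].mean(axis=0) for c in classes])
    d2 = ((query_feats[:, None, :] - protos[None, :, :]) ** 2).sum(-1)   # squared distances
    return classes[d2.argmin(axis=1)]

# toy 3-way 5-shot episode with 16-dimensional features
rng = np.random.default_rng(0)
support = rng.standard_normal((15, 16)) + np.repeat(np.arange(3), 5)[:, None]
labels = np.repeat(np.arange(3), 5)
queries = rng.standard_normal((6, 16)) + np.repeat(np.arange(3), 2)[:, None]
print(prototypical_predict(support, labels, queries))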
2307.01627v2
Noncoprime action of a cyclic group
Let $A$ be a finite nilpotent group acting fixed point freely on the finite (solvable) group $G$ by automorphisms. It is conjectured that the nilpotent length of $G$ is bounded above by $\ell(A)$, the number of primes dividing the order of $A$ counted with multiplicities. In the present paper we consider the case $A$ is cyclic and obtain that the nilpotent length of $G$ is at most $2\ell(A)$ if $|G|$ is odd. More generally we prove that the nilpotent length of $G$ is at most $2\ell(A)+ \mathbf{c}(G;A)$ when $G$ is of odd order and $A$ normalizes a Sylow system of $G$ where $\mathbf{c}(G;A)$ denotes the number of trivial $A$-modules appearing in an $A$-composition series of $G$.
Group Theory (math.GR)
The proof of Theorem 2.6 is incorrect. Without this theorem the main claim of the paper becomes unproven
factual/methodological/other critical errors in manuscript
13,416
2307.02277v2
Privacy-Preserving Federated Heavy Hitter Analytics for Non-IID Data
Federated heavy-hitter analytics involves the identification of the most frequent items within distributed data. Existing methods for this task often encounter challenges such as compromising privacy or sacrificing utility. To address these issues, we introduce a novel privacy-preserving algorithm that exploits the hierarchical structure to discover local and global heavy hitters in non-IID data by utilizing perturbation and similarity techniques. We conduct extensive evaluations on both synthetic and real datasets to validate the effectiveness of our approach. We also present FedCampus, a demonstration application to showcase the capabilities of our algorithm in analyzing population statistics.
Distributed, Parallel, and Cluster Computing (cs.DC)
technical error in Theorem 1
factual/methodological/other critical errors in manuscript
13,418
2307.02347v7
Detecting Images Generated by Deep Diffusion Models using their Local Intrinsic Dimensionality
Diffusion models have recently been successfully applied to the visual synthesis of strikingly realistic-looking images. This raises strong concerns about their potential for malicious purposes. In this paper, we propose using the lightweight multi Local Intrinsic Dimensionality (multiLID), which was originally developed in the context of detecting adversarial examples, for the automatic detection of synthetic images and the identification of the corresponding generator networks. In contrast to many existing detection approaches, which often only work for GAN-generated images, the proposed method provides close to perfect detection results in many realistic use cases. Extensive experiments on known and newly created datasets demonstrate that the proposed multiLID approach exhibits superiority in diffusion detection and model identification. Since the empirical evaluations of recent publications on the detection of generated images are often mainly focused on the "LSUN-Bedroom" dataset, we further establish a comprehensive benchmark for the detection of diffusion-generated images, including samples from several diffusion models with different image sizes.
Computer Vision and Pattern Recognition (cs.CV)
We have a serious bug and the method is not as good as we thought. We need to withdraw it entirely
factual/methodological/other critical errors in manuscript
13,419
2307.04316v2
Accelerating Secure and Verifiable Data Deletion in Cloud Storage via SGX and Blockchain
Secure data deletion enables data owners to fully control the erasure of their data stored in local or cloud data centers and is essential for preventing data leakage, especially in cloud storage. However, traditional data deletion based on unlinking, overwriting, and cryptographic key management is either ineffective in cloud storage or relies on impractical assumptions. In this paper, we present SevDel, a secure and verifiable data deletion scheme, which leverages zero-knowledge proofs to verify the encryption of the outsourced data without retrieving the ciphertexts, while the deletion of the encryption keys is guaranteed based on Intel SGX. SevDel implements secure interfaces to perform data encryption and decryption for secure cloud storage. It also utilizes smart contracts to enforce that the operations of the cloud service provider follow service-level agreements with data owners and to impose penalties on a service provider that discloses the cloud data on its servers. Evaluation on real-world workloads demonstrates that SevDel achieves efficient data deletion verification and maintains high bandwidth savings.
Cryptography and Security (cs.CR)
It has some technical problems that we need to address before publishing. Many thanks
factual/methodological/other critical errors in manuscript
13,426
2307.05226v3
General solution to a problem of pulling back singularities
The main aim of this paper is to give an affirmative general solution to the following pullback problem. Consider a finite holomorphic map germ $\phi : (\mathbb{C}^{n}, 0) \to (\mathbb{C}^{n}, 0)$ and an analytic subvariety germ $X$ in the target. Then if the preimage $Y = \phi^{-1}(X)$, taken with the reduced structure, is smooth, so is $X$. The case, where $Y$ is not contained in the ramification divisor $Z$ of $\phi$, was established by Ebenfelt--Rothschild (2007) and afterwards by Lebl (2008) and Denkowski (2016). The hypersurface case was achieved by Giraldo--Roeder (2020) and recently by Jelonek (2023).
Algebraic Geometry (math.AG)
The paper has been withdrawn because of the basic mistake: the map $f$ has in general rank q, not p < q (Section 3)
factual/methodological/other critical errors in manuscript
13,427
2307.05921v2
Reading Radiology Imaging Like The Radiologist
Automated radiology report generation aims to generate radiology reports that contain rich, fine-grained descriptions of radiology imaging. Compared with image captioning in the natural image domain, medical images are very similar to each other, with only minor differences in the occurrence of diseases. Given the importance of these minor differences in the radiology report, it is crucial to encourage the model to focus more on the subtle regions of disease occurrence. Secondly, the problem of visual and textual data biases is serious. Not only do normal cases make up the majority of the dataset, but sentences describing areas with pathological changes also constitute only a small part of the paragraph. Lastly, generating medical image reports involves the challenge of long text generation, which requires more expertise and empirical training in medical knowledge. As a result, the difficulty of generating such reports is increased. To address these challenges, we propose a disease-oriented retrieval framework that utilizes similar reports as prior knowledge references. We design a factual consistency captioning generator to generate more accurate and factually consistent disease descriptions. Our framework can find most similar reports for a given disease from the CXR database by retrieving a disease-oriented mask consisting of the position and morphological characteristics. By referencing the disease-oriented similar report and the visual features, the factual consistency model can generate a more accurate radiology report.
Computer Vision and Pattern Recognition (cs.CV)
There are data writing errors in the paper
factual/methodological/other critical errors in manuscript
13,429
2307.06627v2
Fast and Practical Quantum-Inspired Classical Algorithms for Solving Linear Systems
We propose fast and practical quantum-inspired classical algorithms for solving linear systems. Specifically, given sampling and query access to a matrix $A\in\mathbb{R}^{m\times n}$ and a vector $b\in\mathbb{R}^m$, we propose classical algorithms that produce a data structure for the solution $x\in\mathbb{R}^{n}$ of the linear system $Ax=b$ with the ability to sample and query its entries. The resulting $x$ satisfies $\|x-A^{+}b\|\leq\epsilon\|A^{+}b\|$, where $\|\cdot\|$ is the spectral norm and $A^+$ is the Moore-Penrose inverse of $A$. Our algorithm has time complexity $\widetilde{O}(\kappa_F^4/\kappa\epsilon^2)$ in the general case, where $\kappa_{F} =\|A\|_F\|A^+\|$ and $\kappa=\|A\|\|A^+\|$ are condition numbers. Compared to the prior state-of-the-art result [Shao and Montanaro, arXiv:2103.10309v2 ], our algorithm achieves a polynomial speedup in condition numbers. When $A$ is $s$-sparse, our algorithm has complexity $\widetilde{O}(s \kappa\log(1/\epsilon))$, matching the quantum lower bound for solving linear systems in $\kappa$ and $1/\epsilon$ up to poly-logarithmic factors [Harrow and Kothari]. When $A$ is $s$-sparse and symmetric positive-definite, our algorithm has complexity $\widetilde{O}(s\sqrt{\kappa}\log(1/\epsilon))$. Technically, our main contribution is the application of the heavy ball momentum method to quantum-inspired classical algorithms for solving linear systems, where we propose two new methods with speedups: quantum-inspired Kaczmarz method with momentum and quantum-inspired coordinate descent method with momentum. Their analysis exploits careful decomposition of the momentum transition matrix and the application of novel spectral norm concentration bounds for independent random matrices. Finally, we also conduct numerical experiments for our algorithms on both synthetic and real-world datasets, and the experimental results support our theoretical claims.
Data Structures and Algorithms (cs.DS)
Theorem 3 and Theorem 5 are incorrect, and more efforts are needed to fix existing issues
factual/methodological/other critical errors in manuscript
13,431
2307.07749v2
A preconditioned MINRES method for block lower triangular Toeplitz systems
In this study, a novel preconditioner based on the absolute-value block $\alpha$-circulant matrix approximation is developed, specifically designed for nonsymmetric dense block lower triangular Toeplitz (BLTT) systems that emerge from the numerical discretization of evolutionary equations. Our preconditioner is constructed by taking an absolute-value of a block $\alpha$-circulant matrix approximation to the BLTT matrix. To apply our preconditioner, the original BLTT linear system is converted into a symmetric form by applying a time-reversing permutation transformation. Then, with our preconditioner, the preconditioned minimal residual method (MINRES) solver is employed to solve the symmetrized linear system. With properly chosen $\alpha$, the eigenvalues of the preconditioned matrix are proven to be clustered around $\pm1$ without any significant outliers. With the clustered spectrum, we show that the preconditioned MINRES solver for the preconditioned system has a convergence rate independent of system size. To the best of our knowledge, this is the first preconditioned MINRES method with size-independent convergence rate for the dense BLTT system. The efficacy of the proposed preconditioner is corroborated by our numerical experiments, which reveal that it attains optimal convergence.
Numerical Analysis (math.NA)
there was an error
factual/methodological/other critical errors in manuscript
13,434
2307.07788v2
Deciding One to One property of Boolean maps: Condition and algorithm in terms of implicants
This paper addresses the computational problem of deciding invertibility (or one-to-one-ness) of a Boolean map $F$ in $n$ Boolean variables. This problem includes the special case of deciding invertibility of a map $F:\mathbb{F}_{2}^n\rightarrow\mathbb{F}_{2}^n$ over the binary field $\mathbb{F}_2$. Further, the problem can be extended and stated over a finite field $\mathbb{F}$ instead of $\mathbb{F}_2$. An algebraic condition for invertibility of $F$ in this special case over a finite field is well known to be equivalent to invertibility of the Koopman operator of $F$, as shown in \cite{RamSule}. In this paper a condition for invertibility is derived, in terms of \emph{implicants} of Boolean equations, in the special case of Boolean maps $F:B_0^n\rightarrow B_0^n$, where $B_0$ is the two-element Boolean algebra. This condition is then extended to the case of general maps in $n$ variables. Hence this condition answers the special case of invertibility of the map $F$ defined over the binary field $\mathbb{F}_2$ alternatively, in terms of implicants instead of the Koopman operator. The problem of deciding invertibility of a map $F$ (or that of finding its $GOE$) over finite fields appears to be distinct from the satisfiability problem (SAT) or the problem of deciding consistency of polynomial equations over finite fields. Hence the well-known algorithms for deciding SAT, or those for deciding solvability using Gröbner bases to check membership in an ideal generated by polynomials, are not known to answer the question of invertibility of a map. Similarly, it appears that algorithms for satisfiability or polynomial solvability are not useful for the computation of $GOE(F)$ even for maps over the binary field $\mathbb{F}_2$.
Symbolic Computation (cs.SC)
I need to fix some errors in the proofs of theorems that I have noticed. The paper will be replaced as version 2 shortly
factual/methodological/other critical errors in manuscript
13,435
2307.07939v2
Finite-time stochastic control for complex dynamical systems: The estimate for control time and energy consumption
Controlling complex dynamical systems has been a topic of considerable interest in academic circles in recent decades. While existing works have primarily focused on closed-loop control schemes with infinite-time durations, this paper introduces a novel finite-time, closed-loop stochastic controller that pays special attention to control time and energy and their dependence on system parameters. This technique of stochastic control not only enables finite-time control in chaotic dynamical systems but also facilitates finite-time synchronization in unidirectionally coupled systems. Notably, our new scheme offers several advantages over existing deterministic finite-time controllers from a physical standpoint of time and energy consumption. Using numerical experiments based on random ecosystems, neural networks, and Lorenz systems, we demonstrate the effectiveness of our analytical results. It is anticipated that this proposed stochastic scheme will have widespread applicability in controlling complex dynamical systems and achieving network synchronization.
Optimization and Control (math.OC)
There are some mistakes in the paper. We need to revise it
factual/methodological/other critical errors in manuscript
13,436
2307.08685v2
Evaluating Climate Models with Sliced Elastic Distance
The validation of global climate models plays a crucial role in ensuring the accuracy of climatological predictions. However, existing statistical methods for evaluating differences between climate fields often overlook time misalignment and therefore fail to distinguish between sources of variability. To more comprehensively measure differences between climate fields, we introduce a new vector-valued metric, the sliced elastic distance. This new metric simultaneously accounts for spatial and temporal variability while decomposing the total distance into shape differences (amplitude), timing variability (phase), and bias (translation). We compare the sliced elastic distance against a classical metric and a newly developed Wasserstein-based approach through a simulation study. Our results demonstrate that the sliced elastic distance outperforms previous methods by capturing a broader range of features. We then apply our metric to evaluate the historical model outputs of the Coupled Model Intercomparison Project (CMIP) members, focusing on monthly average surface temperatures and monthly total precipitation. By comparing these model outputs with quasi-observational ERA5 Reanalysis data products, we rank the CMIP models and assess their performance. Additionally, we investigate the progression from CMIP phase 5 to phase 6 and find modest improvements in the phase 6 models regarding their ability to produce realistic climate dynamics.
Methodology (stat.ME)
Issue with the method having limitations for the application area
factual/methodological/other critical errors in manuscript
13,438
2307.10105v2
Percolation on hypergraphs and the hard-core model
We prove tight bounds on the site percolation threshold for $k$-uniform hypergraphs of maximum degree $\Delta$ and for $k$-uniform hypergraphs of maximum degree $\Delta$ in which any pair of edges overlaps in at most $r$ vertices. The hypergraphs that achieve these bounds are hypertrees, but unlike in the case of graphs, there are many different $k$-uniform, $\Delta$-regular hypertrees. Determining the extremal tree for a given $k, \Delta, r$ involves an optimization problem, and our bounds arise from a convex relaxation of this problem. By combining our percolation bounds with the method of disagreement percolation we obtain improved bounds on the uniqueness threshold for the hard-core model on hypergraphs satisfying the same constraints. Our uniqueness conditions imply exponential weak spatial mixing, and go beyond the known bounds for rapid mixing of local Markov chains and existence of efficient approximate counting and sampling algorithms. Our results lead to natural conjectures regarding the aforementioned algorithmic tasks, based on the intuition that uniqueness thresholds for the extremal hypertrees for percolation determine computational thresholds.
Probability (math.PR)
Error in the proof of Theorem 3.1
factual/methodological/other critical errors in manuscript
13,441
2307.10338v2
Irracionalidade recíproca (Reciprocal irrationality)
Prime numbers play a key role in number theory and have applications beyond Mathematics. In particular, in the Theory of Codes and also in Cryptography, the properties of prime numbers are relevant because, from them, it is possible to guarantee the secure storage of data and the sending of messages. This is evident in e-commerce, where personal data must be kept confidential. The proof that $\sqrt{p}$ is an irrational number, for every positive prime $p$, is known, if not by everyone, at least by the majority of Mathematics students, and such a proof is, in general, given by means of a basic property of prime numbers: if $p$ divides the product of two integers, then it divides at least one of them. This result forms the basis of other equally important results, such as the Fundamental Theorem of Arithmetic, which is the basic result of the Theory of Numbers. In this article, we present a proof of the irrationality of $\sqrt[2n]{p}$ using results from Quadratic Residue Theory, especially Gauss's Law of Quadratic Reciprocity.
History and Overview (math.HO)
Significant errors
factual/methodological/other critical errors in manuscript
13,443
2307.10540v3
Mean Field Games for Optimal Investment Under Relative Performance Criteria
In this paper, we study the portfolio optimization problem formulated by Lacker and Soret. They formulate a finite time horizon model that allows agents to be competitive, measuring their utility not only by their absolute wealth but also relative performance compared to the average of other agents. While the finite population or $n$-player game is tractable in some cases, the authors present the Mean Field Game framework to solve this problem. Here, we seek to use this framework to clearly detail the optimal investment and consumption strategies in the CRRA utility case as was briefly outlined in Lacker and Soret, but also derive a solution in the CARA utility case.
Mathematical Finance (q-fin.MF)
Error in Section 5.2
factual/methodological/other critical errors in manuscript
13,446
2307.11176v2
Local equivalence via homological algebra
We study local equivalence of bounded complexes over a polynomial ring $R[w]$, where $R$ is a noetherian ring. We provide a homological algebra approach to the results, the variants of which have been proved in many places in the literature.
Commutative Algebra (math.AC)
There is an irrecoverable error in Lemma 2.5. There are counterexamples even in case R=Q[x]. The lemma is crucial for the rest of the paper and it does not work unless strong assumptions are made (like: the modules are graded)
factual/methodological/other critical errors in manuscript
13,448
2307.11501v2
The lack of exponential stability of a Bresse system subjected only to two dampings
In this paper, we study the indirect boundary stabilization of a Bresse system with only two dissipation laws. This system, which models the dynamics of a beam, is a hyperbolic system with three wave speeds. We study the asymptotic behaviour of the eigenvalues and of the eigenvectors of the underlying operator in the case of three distinct wave velocities which is not physically relevant. Since the imaginary axis is proved to be an asymptote for one family of eigenvalues, the stability can not be exponential. Of course, this paper is only interesting from a mathematical point of view.
Analysis of PDEs (math.AP)
We have realized that Theorem 2.4 is not correct
factual/methodological/other critical errors in manuscript
13,449
2307.11797v6
Sign-changing solutions to Schiffer's overdetermined problem on wavy cylinder
In this paper, we prove the existence of $k$ families of smooth unbounded domains $\Omega_s\subset\mathbb{R}^{N+1}$ with $N\geq1$, where \begin{equation} \Omega_s=\left\{(x,t)\in \mathbb{R}^N\times \mathbb{R}:\vert x\vert<1+s\cos \left(\frac{2\pi}{T(s)}t\right)+s w_s\left(\frac{2\pi}{T(s)}t\right)\right\},\nonumber \end{equation} such that \begin{equation} -\Delta u=\lambda u\,\, \text{in}\,\,\Omega, \,\, \partial_\nu u=0,\,\,u=\text{const}\,\,\text{on}\,\,\partial\Omega\nonumber \end{equation} admits a bounded sign-changing solution with exactly $k+1$ nodal domains. These results can be regarded as counterexamples to the Schiffer conjecture on unbounded domains. These results also indicate that there exist non-spherical unbounded regions without the Pompeiu property. Our construction shows that the condition "$\partial\Omega$ is homeomorphic to the unit sphere" is necessary for Williams' conjecture to hold. In addition, these conclusions may have potential applications in remote sensing or CT.
Analysis of PDEs (math.AP)
We find that the proofs of our main Theorem 1.1 and Theorem 1.2 are wrong; the essential reason is that the linearized operator on page 23 is not a first-order elliptic operator. Thus we wish to withdraw all versions of this paper
factual/methodological/other critical errors in manuscript
13,450
2307.12117v2
Dynamics of a Leslie-Gower type predator-prey system with herd behavior and constant harvesting in prey
In this paper, the dynamics of a Leslie-Gower type predator-prey system with herd behavior and constant harvesting in prey are investigated. Earlier work has shown that the herd behavior in prey merely induces a supercritical Hopf bifurcation in the classic Leslie-Gower predator-prey system in the absence of harvesting. However, the work in this paper shows that the presence of herd behavior and constant harvesting in prey can give rise to numerous kinds of bifurcation at the non-hyperbolic equilibria in the classic Leslie-Gower predator-prey system such as two saddle-node bifurcations and one Bogdanov-Takens bifurcation of codimension two at the degenerate equilibria and one degenerate Hopf bifurcation of codimension three at the weak focus. Hence, the research results reveal that the herd behavior and constant harvesting in prey have a strong influence on the dynamics and also contribute to promoting the ecological diversity and maintaining the long-term economic benefits.
Dynamical Systems (math.DS)
The article has errors
factual/methodological/other critical errors in manuscript
13,452
2307.12333v2
An axiomatized PDE model of deep neural networks
Inspired by the relation between deep neural networks (DNNs) and partial differential equations (PDEs), we study the general form of PDE models of deep neural networks. To achieve this goal, we formulate a DNN as an evolution operator from a simple base model. Based on several reasonable assumptions, we prove that the evolution operator is actually determined by a convection-diffusion equation. This convection-diffusion equation model gives a mathematical explanation for several effective networks. Moreover, we show that the convection-diffusion model improves robustness and reduces the Rademacher complexity. Based on the convection-diffusion equation, we design a new training method for ResNets. Experiments validate the performance of the proposed method.
Machine Learning (cs.LG)
The experiment design in the paper lacks careful thought and may be misleading in demonstrating our contribution
factual/methodological/other critical errors in manuscript
13,453
2307.13864v3
Articulation Points and Freezing Sets
We show that articulation points are unnecessary in freezing sets.
Geometric Topology (math.GT)
Error in the argument given as proof of the major assertion. The induction is incorrect
factual/methodological/other critical errors in manuscript
13,457
2307.14120v4
Calculating the maximum number of maximum cliques for simple graphs
A simple graph on $n$ vertices may contain a lot of maximum cliques. But how many can it potentially contain? We will define prime and composite graphs, and we will show that if $n \ge 15$, then the graphs with the maximum number of maximum cliques have to be composite. Moreover, we will show an edge bound from which we will prove that if any factor of a composite graph has $\omega(G_i) \ge 5$, then it cannot have the maximum number of maximum cliques. Using this we will show that the graph that contains $3^{\lfloor n/3 \rfloor}c$ maximum cliques has the largest number of maximum cliques on $n$ vertices, where $c\in\{1,\frac{4}{3},2\}$, depending on $n \text{ mod } 3$.
Combinatorics (math.CO)
The proof of Theorem 10 is incorrect. A counterexample for which it does not hold is E3^5 U K1. This is a prime graph as its complement is connected, so m=1. However it has 243 maximum cliques while only having 105 edges, so the left side is definitely not smaller than the right side. This makes all subsequent Theorems false
factual/methodological/other critical errors in manuscript
13,458
2307.15090v2
Understanding Forward Process of Convolutional Neural Network
This paper reveals the selective rotation in the CNN's forward processing. It elucidates the activation function as a discerning mechanism that unifies and quantizes the rotational aspects of the input data. Experiments show how this methodology reflects the process by which the network distinguishes inputs based on statistical indicators, which can be comprehended and analyzed by applying structured mathematical tools. Our findings also unveil the consistency between artificial neural networks and the human brain in their data processing patterns.
Machine Learning (cs.LG)
Something is wrong in this paper
factual/methodological/other critical errors in manuscript
13,460
2307.15502v2
A Positive Answer to a Question of K. Borsuk on the Capacity of Polyhedra with Finite by Cyclic Fundamental Group
Karol Borsuk in 1968 asked: Is it true that every finite polyhedron dominates only finitely many different shapes? Danuta Kolodziejczyk showed that generally an answer to the Borsuk question is negative and also presented a positive answer by proving that every polyhedron with finite fundamental group dominates only finitely many different homotopy types (hence shapes). In this paper, we show that polyhedra with finite by cyclic fundamental group dominate only finitely many homotopy types. As a consequence, we give a partial positive answer to this question of Kolodziejczyk: Does every polyhedron with abelian fundamental group dominate only finitely many different homotopy types? In fact, we show that every polyhedron with abelian fundamental group of rank 1 dominates only finitely many different homotopy types. Finally, we prove that every polyhedron dominates only finitely many homotopy types of simply connected CW-complexes.
Algebraic Topology (math.AT)
Due to a gap in the proof of the main result
factual/methodological/other critical errors in manuscript
13,462
2307.15626v2
A quantitative phase-field model for void evolution in defect supersaturated environments: a novel introduction of defect reaction asymmetry
Voids develop in crystalline materials under energetic particle irradiation, as in nuclear reactors. Understanding the underlying mechanisms of void nucleation and growth is of utmost importance as it leads to dimensional instability of the metallic materials. In the past two decades, researchers have adopted the phase-field approach to study the phenomena of void evolution under irradiation. The approach involves modeling the boundary between the void and matrix with a diffused interface. However, none of the existing models are quantitative in nature. This work introduces a thermodynamically consistent, quantitative diffuse interface model based on KKS formalism to describe the void evolution under irradiation. The model concurrently considers both vacancies and self-interstitials in the description of void evolution. Unique to our model is the presence of two mobility parameters in the equation of motion of the phase-field variable. The two mobility parameters relate the driving force for vacancy and self-interstitial interaction to the interface motion, analogous to dislocation motion through climb and glide processes. The asymptotic matching of the phase-field model with the sharp-interface theory fixes the two mobility parameters in terms of the material parameters in the sharp-interface model. The Landau coefficient, which controls the height of the double-well function in the phase field variable, and the gradient coefficient of the phase field variable are fixed based on the interfacial energy and interface width of the boundary. With all the parameters in the model determined in terms of the material parameters, we thus have a new phase field model for void evolution. Simple test cases will show the void evolution under various defect supersaturation to validate our new phase-field model.
Materials Science (cond-mat.mtrl-sci)
We have developed a much better version of this work. The derivation of the phase-field model in the arxiv version is incorrect. Also, we did not expand on the different contributions to the heat flux. Moreover, we have now developed a new numerical strategy to solve the equations using the KKS model. So, for the aforementioned reasons, we request the arxiv to withdraw this article
factual/methodological/other critical errors in manuscript
13,463
2307.15945v7
Unconventional optical response in monolayer graphene upon dominant intraband scattering
Scattering dynamics influence graphene's transport properties and inhibit the deterministic behaviour of charge carriers. The intra- and inter-band scattering mechanisms are vital for graphene's optical conductivity response under specific doping conditions. Here, we systematically investigated the influence of scattering on the optical conductivity using a semi-classical multiband Boltzmann equation including both electron-electron and electron-phonon collisions. We found unconventional characteristics of the linear optical response, with a significant deviation from the universal conductivity $\frac{e^2}{4\hbar}$ in doped monolayer graphene. This is explained through phenomenological relaxation rates in the low-doping regime with dominant intraband scattering. Such novel optical responses vanish at high temperatures or under overdoping conditions due to strong Drude behaviour. With the aid of approximations around the Dirac points, we have developed an analytical formalism for many-body interactions, which is in good agreement with the Kubo approach.
Mesoscale and Nanoscale Physics (cond-mat.mes-hall)
Version 5 has the correct diagrams with labels uploaded after corrections. Withdrawing version 6 to not confuse readers with wrong information
factual/methodological/other critical errors in manuscript
13,465
2307.16874v4
Shifting Cryptocurrency Influence: A High-Resolution Network Analysis of Market Leaders
Over the last decade, the cryptocurrency market has experienced unprecedented growth, emerging as a prominent financial market. As this market rapidly evolves, it necessitates re-evaluating which cryptocurrencies command the market and steer the direction of blockchain technology. We implement a network-based cryptocurrency market analysis to investigate this changing landscape. We use novel hourly-resolution data and Kendall's Tau correlation to explore the interconnectedness of the cryptocurrency market. We observed critical differences in the hierarchy of cryptocurrencies determined by our method compared to rankings derived from daily data and Pearson's correlation. This divergence emphasizes the potential information loss stemming from daily data aggregation and highlights the limitations of Pearson's correlation. Our findings show that in the early stages of this growth, Bitcoin held a leading role. However, during the 2021 bull run, the landscape changed drastically. We see that while Ethereum has emerged as the overall leader, it was FTT and its associated exchange, FTX, that greatly led to the increase at the beginning of the bull run. We also find that highly-influential cryptocurrencies are increasingly gaining a commanding influence over the market as time progresses, despite the growing number of cryptocurrencies making up the market.
Statistical Finance (q-fin.ST)
Withdrawing this preprint due to a minor error in the code implementation that affects the results
factual/methodological/other critical errors in manuscript
13,470
2308.00237v5
EC-Conf: An Ultra-fast Diffusion Model for Molecular Conformation Generation with Equivariant Consistency
Despite recent advances in 3D molecule conformation generation driven by diffusion models, the high computational cost of the iterative diffusion/denoising process limits their application. In this paper, an equivariant consistency model (EC-Conf) was proposed as a fast diffusion method for low-energy conformation generation. In EC-Conf, a modified SE(3)-equivariant transformer model was directly used to encode the Cartesian molecular conformations and a highly efficient consistency diffusion process was carried out to generate molecular conformations. It was demonstrated that, with only one sampling step, it can already achieve quality comparable to other diffusion-based models running with thousands of denoising steps. Its performance can be further improved with a few more sampling iterations. The performance of EC-Conf is evaluated on both the GEOM-QM9 and GEOM-Drugs sets. Our results demonstrate that the efficiency of EC-Conf for learning the distribution of low-energy molecular conformations is at least two orders of magnitude higher than current SOTA diffusion models and that it could potentially become a useful tool for conformation generation and sampling. We release our code at this https URL.
Biomolecules (q-bio.BM)
There are some mistakes in the paper, and we need to revise it substantially
factual/methodological/other critical errors in manuscript
13,471
2308.01097v4
Spatio-Temporal Branching for Motion Prediction using Motion Increments
Human motion prediction (HMP) has emerged as a popular research topic due to its diverse applications, but it remains a challenging task due to the stochastic and aperiodic nature of future poses. Traditional methods rely on hand-crafted features and machine learning techniques, which often struggle to model the complex dynamics of human motion. Recent deep learning-based methods have achieved success by learning spatio-temporal representations of motion, but these models often overlook the reliability of motion data. Additionally, the temporal and spatial dependencies of skeleton nodes are distinct. The temporal relationship captures motion information over time, while the spatial relationship describes body structure and the relationships between different nodes. In this paper, we propose a novel spatio-temporal branching network using incremental information for HMP, which decouples the learning of temporal-domain and spatial-domain features, extracts more motion information, and achieves complementary cross-domain knowledge learning through knowledge distillation. Our approach effectively reduces noise interference and provides more expressive information for characterizing motion by separately extracting temporal and spatial features. We evaluate our approach on standard HMP benchmarks and outperform state-of-the-art methods in terms of prediction accuracy.
Computer Vision and Pattern Recognition (cs.CV)
The incremental information of our paper includes the displacement information from the last frame of the historical sequence, derived from the motion information of the first frame in the future sequence and the motion information of the last frame of the historical sequence. This implicitly contains future information, inadvertently giving an unfair advantage in the human motion prediction task
factual/methodological/other critical errors in manuscript
13,473
2308.01880v3
Biosignature false positives in potentially habitable planets around M dwarfs: the effect of UV radiation from one flare
Many past studies have predicted the steady-state production and maintenance of abiotic O$_2$ and O$_3$ in the atmospheres of CO$_2$-rich terrestrial planets orbiting M dwarf stars. However, the time-dependent responses of these planetary atmospheres to flare events - and the possible temporary production or enhancement of false positive biosignatures therein - has been comparatively less well studied. Most past works that have modeled the photochemical response to flares have assumed abundant free oxygen like that of the modern or Proterozoic Earth. Here we examine in detail the photochemical impact of the UV emitted by a single flare on abiotic O$_2$/O$_3$ production in prebiotic, CO$_2$-dominated atmospheres of M dwarf planets with CO$_2$ levels ranging from 10% to 90% of 1 bar. We find that a single flare generally destroys O$_2$ while modestly enhancing O$_3$ column densities. We simulate the spectral observables of both the steady-state atmosphere and time-dependent spectral response over the flare window for both emitted and transmitted light spectra. Over the course of the flare, the O$_3$ UV Hartley band is modestly enhanced by a maximum of 6 ppm while the CO$_2$ molecular transit depths modestly decline by 7 ppm. In both emitted and transmitted light spectra, the 9.65 $\mu$m O$_3$ band is hidden by the overlapping 9.4 $\mu$m CO$_2$ band for all scenarios considered. Overall, we find that the possible enhancements of abiotic O$_3$ due to a single flare are small compared to O$_3$'s sensitivity to other parameters such as CO$_2$ and H$_2$O abundances or the availability of reducing gases such as H$_2$.
Earth and Planetary Astrophysics (astro-ph.EP)
An error was discovered in the code use to obtain part of the results, thus compromising the article's conclusions. The authors have asked for the paper to be withdrawn from the journal until the problem can be corrected
factual/methodological/other critical errors in manuscript
13,475
2308.02179v4
Strong Rigidity of Closed Minimal Hypersurfaces in Euclidean Spheres
Let $M$ be a closed embedded minimal hypersurface in a Euclidean sphere of dimension $n+1$; we prove that it is strongly rigid. As applications, we confirm the conjecture proposed by Choi and Schoen in [3] and the Chern conjecture for $n$ less than 7.
Differential Geometry (math.DG)
Unfixable error found in Section 3
factual/methodological/other critical errors in manuscript
13,476
2308.02295v2
IRS-Enabled Covert and Reliable Communications: How Many Reflection Elements are Required?
Short-packet communications are applied in various scenarios where transmission covertness and reliability are crucial due to the open wireless medium and finite blocklength. Although the intelligent reflection surface (IRS) has been widely utilized to enhance transmission covertness and reliability, the question of how many reflection elements at the IRS are required remains unanswered, which is vital to system design and practical deployment. An inherently strong coupling exists between the transmission covertness and reliability introduced by the IRS, which makes this question intractable. To address this issue, the detection error probability at the warder and its approximation are derived first to reveal the relation between the covertness performance and the number of reflection elements. Besides, to evaluate the reliability performance of the system, the decoding error probability at the receiver is also derived. Subsequently, the asymptotic reliability performance in the high-covertness regime is investigated, which provides theoretical predictions about the number of reflection elements at the IRS required to achieve a decoding error probability close to 0 under given covertness requirements. Furthermore, Monte-Carlo simulations verify the accuracy of the derived results for the detection (decoding) error probabilities and the validity of the theoretical predictions for the number of reflection elements. Moreover, the results show that more reflection elements are required to achieve high reliability under tighter covertness requirements, longer blocklengths and higher transmission rates.
Information Theory (cs.IT)
The paper has some shortcomings in the theoretical analysis. And it will not be published at the conference, as claimed in the last comments
factual/methodological/other critical errors in manuscript
13,477
2308.02332v2
Novel Online-Offline MA2C-DDPG for Efficient Spectrum Allocation and Trajectory Optimization in Dynamic Spectrum Sharing UAV Networks
Unmanned aerial vehicle (UAV) communication is of crucial importance for diverse practical applications. However, it is susceptible to the severe spectrum scarcity problem and interference since it operates in the unlicensed spectrum band. In order to tackle those issues, a dynamic spectrum sharing network is considered with the anti-jamming technique. Moreover, an intelligent spectrum allocation and trajectory optimization scheme is proposed to adapt to diverse jamming models by exploiting our designed novel online-offline multi-agent actor-critic and deep deterministic policy-gradient framework. Simulation results demonstrate the high efficiency of our proposed framework. It is also shown that our proposed scheme achieves the largest transmission rate among all benchmark schemes.
Signal Processing (eess.SP)
Some technical errors occurred in the manuscript
factual/methodological/other critical errors in manuscript
13,478
2308.02854v2
Efron's Mean Volume Formula in Higher Dimensions
In this short paper, an older result of Efron is extended to obtain a cutting-plane integral formula for the mean volume of a random simplex in any dimension d.
Probability (math.PR)
The assumption that the convex hull of d+2 points in R^d is either a d-simplex or a bi d-simplex is true only for d<4. In higher dimensions, there are more simplicial polytopes, among which the cyclic polytope maximizes the number of facets. As a consequence, there is no simple linear relation between the numbers of vertices and facets for d>3, from which one could connect the expected values
factual/methodological/other critical errors in manuscript
13,482
2308.02946v4
On the expected efficiency of branch and bound for the asymmetric TSP
Let the costs $C(i,j)$ for an instance of the asymmetric traveling salesperson problem be independent uniform $[0,1]$ random variables. We consider the efficiency of branch and bound algorithms that use the assignment relaxation as a lower bound. We show that w.h.p. the number of steps taken in any such branch and bound algorithm is $e^{\Omega(n^a)}$ for some small absolute constant $a>0$.
Data Structures and Algorithms (cs.DS)
There is a bug in the proof of Lemma 9
factual/methodological/other critical errors in manuscript
13,483
2308.02993v3
Plurifinely open sets and complex Monge-Ampère measures
The aim of the paper is to investigate the structure of plurifinely open sets. As an application, we will prove an equality on complex Monge-Ampère measures in plurifinely open sets.
Complex Variables (math.CV)
There is an error in my article. [REDACTED-NAME] [REDACTED-NAME] Son pointed it out. I would like to thank [REDACTED-NAME] [REDACTED-NAME] Son. I would also like to thank [REDACTED-NAME] for his comments
factual/methodological/other critical errors in manuscript
13,485
2308.04136v3
Sub-SQL electronic field sensing by simultaneously using quantum entanglements and squeezings
Quantum entanglement and quantum squeezing are the two most typical approaches to beating the standard quantum limit (SQL) of sensitive phase estimation in quantum metrology. Each of them has already been utilized individually to improve the sensitivity of electric field sensing on the trapped-ion platform, but the upper bound of the demonstrated sensitivity gain over the SQL is very limited, i.e., 3 dB experimentally and 6 dB theoretically. Here, by simultaneously using the internal (spin)-external (oscillator) state entanglements and the oscillator squeezings to effectively amplify the accumulated phase and compress the mean excited phonon number at the same time, we show that these sensitivity gains can be effectively surpassed, once the relevant parameters are properly set. Hopefully, the proposal provides a novel approach to beating the SQL more strongly for sensitive sensing of the desired electric field and also for other metrologies.
Quantum Physics (quant-ph)
We find some calculation errors; thus we want to withdraw it to correct and improve it
factual/methodological/other critical errors in manuscript
13,488
2308.04802v2
Generalized Unbiased Scene Graph Generation
Existing Unbiased Scene Graph Generation (USGG) methods only focus on addressing the predicate-level imbalance, in which high-frequency classes dominate predictions of rare ones, while overlooking the concept-level imbalance. Actually, even if predicates themselves are balanced, there is still a significant concept imbalance within them due to the long-tailed distribution of contexts (i.e., subject-object combinations). This concept-level imbalance poses a more pervasive and challenging issue compared to the predicate-level imbalance since subject-object pairs are inherently complex in their combinations. Hence, we introduce a novel research problem: Generalized Unbiased Scene Graph Generation (G-USGG), which takes into account both predicate-level and concept-level imbalance. To this end, we propose the Multi-Concept Learning (MCL) framework, which ensures a balanced learning process across rare/uncommon/common concepts. MCL first quantifies the concept-level imbalance across predicates in terms of different amounts of concepts, represented as multiple concept-prototypes within the same class. It then effectively learns concept-prototypes by applying the Concept Regularization (CR) technique. Furthermore, to achieve balanced learning over different concepts, we introduce the Balanced Prototypical Memory (BPM), which guides SGG models to generate balanced representations for concept-prototypes. Extensive experiments demonstrate the remarkable efficacy of our model-agnostic strategy in enhancing the performance of benchmark models on both the VG-SGG and OI-SGG datasets, leading to new state-of-the-art achievements in two key aspects: predicate-level unbiased relation recognition and concept-level compositional generability.
Computer Vision and Pattern Recognition (cs.CV)
The author requests to withdraw this paper due to a critical definitional error in Multi-[REDACTED-NAME] for [REDACTED-NAME] SGG Debiasing. This error aligned with the definition of [REDACTED-NAME] SGG tasks, resulting in an unfair comparison with state-of- the-art (SOTA) methods, which in turn, hindered the ability to evaluate the paper's contributions
factual/methodological/other critical errors in manuscript
13,492
2308.04970v2
Fitting Concentric Elliptical Shapes Under General Model
The problem of fitting concentric ellipses is a vital problem in image processing, pattern recognition, and astronomy. Several methods have been developed, but all address very special cases. In this paper, this problem has been investigated under a more general setting, and two estimators for estimating the parameters have been proposed. Since both estimators are obtained in an iterative fashion, several numerical schemes are investigated and the best initial guess is determined. Furthermore, the constrained Cramér-Rao lower bound for this problem is derived and it is compared with the variance of each estimator. Finally, our theory is assessed and validated by a series of numerical experiments on both real and synthetic data.
Computation (stat.CO)
It has many serious mistakes highlighted by reviewers, so we would like to withdraw it and fix them. This could take several months
factual/methodological/other critical errors in manuscript
13,493
2308.05274v2
Local-Global Information Interaction Debiasing for Dynamic Scene Graph Generation
The task of dynamic scene graph generation (DynSGG) aims to generate scene graphs for given videos, which involves modeling the spatial-temporal information in the video. However, due to the long-tailed distribution of samples in the dataset, previous DynSGG models fail to predict the tail predicates. We argue that this phenomenon is due to previous methods paying attention only to the local spatial-temporal information and neglecting the consistency of multiple frames. To solve this problem, we propose a novel DynSGG model based on multi-task learning, DynSGG-MTL, which introduces local interaction information and global human-action interaction information. The interaction between objects and frame features allows the model to more fully understand the visual context of a single image. Long-temporal human actions supervise the model to generate multiple scene graphs that conform to the global constraints and prevent the model from failing to learn the tail predicates. Extensive experiments on the Action Genome dataset demonstrate the efficacy of our proposed framework, which not only improves dynamic scene graph generation but also alleviates the long-tail problem.
Computer Vision and Pattern Recognition (cs.CV)
The author has withdrawn this paper due to a critical definitional error in multi-task learning for dynamic SGG debiasing. This error aligned with the definition of dynamic SGG tasks, resulting in an unfair comparison with state-of-the-art (SOTA) methods, which in turn, hindered the ability to evaluate the paper's contributions
factual/methodological/other critical errors in manuscript
13,494
2308.05569v4
Vacuum Branching, Dark Energy, Dark Matter
Beginning with the Everett-DeWitt many-worlds interpretation of quantum mechanics, there have been a series of proposals for how the state vector of a quantum system might split at any instant into orthogonal branches, each of which exhibits approximately classical behavior. In an earlier version of the present work, we proposed a decomposition of a state vector into branches by finding the minimum of a measure of the mean squared quantum complexity of the branches in the branch decomposition. With respect to a particular Lorentz frame, for a system beginning in a state of low complexity, branching occurs repeatedly over time with each branch splitting successively into further sub-branches among which the branch followed by the real world is chosen according to the Born rule. Alternatively, in an explicitly Lorentz covariant formulation, the real world is a single random draw from the set of branches at asymptotically late time, which can then be restored to finite time in a particular Lorentz frame by sequentially retracing the set of branching events implied by the late time choice. In the present article, we adapt the earlier formulation to quantum electrodynamics in temporal gauge on a lattice in Minkowski space. The earlier version, however, here is simplified by replacing a definition of complexity based on the physical vacuum with a definition based on the bare vacuum. As a consequence of this replacement, the physical vacuum itself is predicted to branch yielding branches with energy densities slightly larger than that of the unbranched vacuum. If the vacuum energy renormalization constant is chosen as usual to give 0 energy density to the unbranched vacuum, vacuum branches will appear to have a combination of dark energy and dark matter densities but no additional particle content.
General Relativity and Quantum Cosmology (gr-qc)
found a problem - no longer believe there is a reasonable chance main punchline correct - previous paper, 2105.04545, however, remains correct
factual/methodological/other critical errors in manuscript
13,496
2308.05720v2
Correlating neutrino millicharge and muon $(g-2)$ in an abelian $L_μ-L_τ$ model
The inclusion of an additional $U(1)$ gauge symmetry is a common feature in many extensions of the Standard Model, revealing the intricate connections between particle physics and cosmology. The $L_{\mu} - L_{\tau}$ model stands as a prominent member of this distinguished family, characterized by its anomaly-free nature and resilience in the face of collider constraints. This framework provides a unique vantage point for investigating both the intriguing mystery of the muon $(g-2)$ anomaly and the puzzling issue of the Hubble tension. However, due to the presence of kinetic mixing between the photon and $Z'$ in this model, the neutrinos have the potential to acquire minuscule electric charges, often referred to as millicharges ($q_{\nu}$) which is directly related to the strength of the new gauge couplings. A crucial question emerges: how does the model's inclusion of millicharges, while adhering to the stringent constraints imposed by experimental observations, influence its inherent ability to address the muon $(g-2)$ anomaly and the Hubble tension? We find the current upper bounds on $q_{\nu}$ derived from experiments such as the beam dump, XENONnT and LUX-ZEPLIN experiments can impose strong constraints on the $U(1)_{L_{\mu} - L_{\tau}}$ coupling. Consequently, these constraints may limit the ability of the model to fully accommodate the current measurement of $(g-2)_{\mu}$ while having a relatively minor impact on the resolution of the Hubble tension.
High Energy Physics - Phenomenology (hep-ph)
The paper is withdrawn as an error is noticed that requires correction
factual/methodological/other critical errors in manuscript
13,498
2308.06159v2
A visual and simple proof for the Picard Little Theorem
One of the most famous results in Complex Analysis is the Little Picard Theorem, which characterizes the image set of an arbitrary entire function. Specifically, the theorem states that this image set is either the whole complex plane or the whole complex plane except a point. The traditional proofs of the theorem involve technical tools such as modular functions, Harnack's inequality, or the Bloch and Landau theorems. For this reason, the Little Picard Theorem is often presented in introductory courses on Complex Analysis without a proof. This manuscript provides a short and visual proof using only basic concepts that are covered in any standard course on Complex Analysis. In fact, the essence of the proof is a good understanding of the composition of the complex exponential map with itself and its underlying geometrical properties.
General Mathematics (math.GM)
There is an error in the claim that "a continuity argument obviously extends this property to any z such that Re(f(z)) in (0,1/2)" on page 10 (it is not obvious why this could be true). Some attempts have been made to solve this issue in a future version, but they have been unsuccessful until now. If fixed, I will submit a new version
factual/methodological/other critical errors in manuscript
13,501
2308.06221v2
Automated Sizing and Training of Efficient Deep Autoencoders using Second Order Algorithms
We propose a multi-step training method for designing generalized linear classifiers. First, an initial multi-class linear classifier is found through regression. Then validation error is minimized by pruning unnecessary inputs. Simultaneously, desired outputs are improved via a method similar to the Ho-Kashyap rule. Next, the output discriminants are scaled to be net functions of sigmoidal output units in a generalized linear classifier. We then develop a family of batch training algorithms for the multilayer perceptron that optimize its hidden layer size and number of training epochs. Next, we combine pruning with a growing approach. Later, the input units are scaled to be the net functions of the sigmoidal output units, which are then fed as input to the MLP. We then propose improvements in each of the deep learning blocks, thereby improving the overall performance of the deep architecture. We discuss the principles and formulation of learning algorithms for deep autoencoders. We investigate several problems in deep autoencoder networks, including training issues; the theoretical, mathematical and experimental justification that the networks are linear; optimizing the number of hidden units in each layer; and determining the depth of the deep learning model. A direct implication of the current work is the ability to construct fast deep learning models using desktop-level computational resources. This, in our opinion, promotes our design philosophy of building small but powerful algorithms. Performance gains are demonstrated at each step. Using widely available datasets, the final network's ten-fold testing error is shown to be less than that of several other linear classifiers, generalized linear classifiers, multilayer perceptrons and deep learners reported in the literature.
Machine Learning (cs.LG)
The paper needs heavy editing, including some changes in the results that need to be updated
factual/methodological/other critical errors in manuscript
13,502
2308.07524v2
A new stability result for the 2D Boussinesq equations with horizontal dissipation
For $\mathbb{R}^2$, the stability of smooth solutions of the 2D anisotropic Boussinesq equations with horizontal dissipation is an open problem. In this work, we present a partial answer to this problem in a rougher function space $H^{0,s}(\mathbb{R}^2)$. Moreover, the previous stability results with regularity index $\frac 12<s<1$ are extended to integer $s\ge 1$, using only elementary techniques.
Analysis of PDEs (math.AP)
The paper contains an error in the last section
factual/methodological/other critical errors in manuscript
13,506
2308.07873v2
Near-Optimal Last-iterate Convergence of Policy Optimization in Zero-sum Polymatrix Markov games
Computing approximate Nash equilibria in multi-player general-sum Markov games is a computationally intractable task. However, multi-player Markov games with certain cooperative or competitive structures might circumvent this intractability. In this paper, we focus on multi-player zero-sum polymatrix Markov games, where players interact in a pairwise fashion while remaining overall competitive. To the best of our knowledge, we propose the first policy optimization algorithm, called Entropy-Regularized Optimistic-Multiplicative-Weights-Update (ER-OMWU), for finding approximate Nash equilibria in finite-horizon zero-sum polymatrix Markov games with full information feedback. We provide last-iterate convergence guarantees for finding an $\epsilon$-approximate Nash equilibrium within $\tilde{O}(1/\epsilon)$ iterations, which is near-optimal compared to the optimal $O(1/\epsilon)$ iteration complexity in two-player zero-sum Markov games, a degenerate case of zero-sum polymatrix games with only two players. Our algorithm combines regularized and optimistic learning dynamics with a separated smooth value update within a single loop, where players update strategies in a symmetric and almost uncoupled manner. It provides natural dynamics for finding equilibria and is more amenable to a sample-efficient and fully decentralized implementation, where only partial information feedback is available, in the future.
Computer Science and Game Theory (cs.GT)
The proof of Lemma 3.4 is wrong: \bar{L}_{h+1}^{t-k} should be replaced by \sqrt{\bar{L}_{h+1}^{t-k}}. In this case, the proof of the main theorem should be substantially modified
factual/methodological/other critical errors in manuscript
13,508
2308.10271v3
Smoothing curves carefully
This paper proves an elementary topological fact about closed curves on surfaces, namely that by carefully smoothing an intersection point, one can reduce self-intersection by exactly $1$. This immediately implies a positive answer to a problem first raised by Basmajian in the 1990s: among all closed geodesics of a hyperbolic surface that self-intersect at least $k$ times, does the shortest one self-intersect exactly $k$ times? The answer is also shown to be positive for arbitrary Riemannian metrics.
Geometric Topology (math.GT)
[REDACTED-NAME] 3, there is a serious gap in the proof of the main theorem (certain quadrilateral cases are missing)
factual/methodological/other critical errors in manuscript
13,513
2308.10545v2
Constraints Based on Non-detection of Kilonova Optical Searching
Mergers of binary neutron stars are multimessenger sources of gravitational waves that have an optically luminous counterpart, commonly referred to as a 'kilonova'. Inspired by the detection of GW170817, intensive searches were conducted during the LIGO/Virgo O3 run. However, despite these efforts, no verified kilonova was detected. In this work, we present a parameter constraint method based on the non-detection of optical searches, considering the GW skymap, limited sky coverage, cadence, limiting magnitudes, and the probability of astrophysical origin. We use our method to place constraints on the EoS of neutron stars based on follow-up during the O3 run and obtain $M_{\rm TOV} = 2.170^{+0.120}_{-0.108}\ M_{\odot}$ at the 90\% confidence level in combination with other observations. We also provide an outlook for WFST targeting kilonovae throughout the LIGO/Virgo O4 run. With more events handled, we will obtain more stringent constraints on the EoS and kilonova populations.
High Energy Astrophysical Phenomena (astro-ph.HE)
The paper contains some errors in Section 3 concerning the data used in this work, as well as in the acknowledgement description; these should now be corrected
factual/methodological/other critical errors in manuscript
13,516
2308.11402v2
A Partially Observable Deep Multi-Agent Active Inference Framework for Resource Allocation in 6G and Beyond Wireless Communications Networks
Resource allocation is of crucial importance in wireless communications. However, it is extremely challenging to design efficient resource allocation schemes for future wireless communication networks, since the formulated resource allocation problems are generally non-convex and consist of various coupled variables. Moreover, the dynamic changes of practical wireless communication environments and user service requirements call for efficient real-time resource allocation. To tackle these issues, a novel partially observable deep multi-agent active inference (PODMAI) framework is proposed for realizing intelligent resource allocation. A belief-based learning method is exploited for updating the policy by minimizing the variational free energy. A decentralized training with decentralized execution multi-agent strategy is designed to overcome the limitations of the partially observable state information. Exploiting the proposed framework, an intelligent spectrum allocation and trajectory optimization scheme is developed for a spectrum-sharing unmanned aerial vehicle (UAV) network with dynamic transmission rate requirements as an example. Simulation results demonstrate that our proposed framework can significantly improve the sum transmission rate of the secondary network compared to various benchmark schemes. Moreover, the convergence speed of the proposed PODMAI is significantly improved compared with the conventional reinforcement learning framework. Overall, our proposed framework can enrich the intelligent resource allocation frameworks and pave the way for realizing real-time resource allocation.
Systems and Control (eess.SY)
Some technical errors occurred in the manuscript
factual/methodological/other critical errors in manuscript
13,519
2308.11521v2
Self-Deception: Reverse Penetrating the Semantic Firewall of Large Language Models
Large language models (LLMs), such as ChatGPT, have emerged with astonishing capabilities approaching artificial general intelligence. While providing convenience for various societal needs, LLMs have also lowered the cost of generating harmful content. Consequently, LLM developers have deployed semantic-level defenses to recognize and reject prompts that may lead to inappropriate content. Unfortunately, these defenses are not foolproof, and some attackers have crafted "jailbreak" prompts that temporarily hypnotize the LLM into forgetting content defense rules and answering any improper questions. To date, there is no clear explanation of the principles behind these semantic-level attacks and defenses in both industry and academia. This paper investigates the LLM jailbreak problem and proposes an automatic jailbreak method for the first time. We propose the concept of a semantic firewall and provide three technical implementation approaches. Inspired by the attack that penetrates traditional firewalls through reverse tunnels, we introduce a "self-deception" attack that can bypass the semantic firewall by inducing LLM to generate prompts that facilitate jailbreak. We generated a total of 2,520 attack payloads in six languages (English, Russian, French, Spanish, Chinese, and Arabic) across seven virtual scenarios, targeting the three most common types of violations: violence, hate, and pornography. The experiment was conducted on two models, namely the GPT-3.5-Turbo and GPT-4. The success rates on the two models were 86.2% and 67%, while the failure rates were 4.7% and 2.2%, respectively. This highlighted the effectiveness of the proposed attack method. All experimental code and raw data will be released as open-source to inspire future research. We believe that manipulating AI behavior through carefully crafted prompts will become an important research direction in the future.
Computation and Language (cs.CL)
Serious errors were found in the experiment, which may lead to the overturning of the overall conclusions of the paper
factual/methodological/other critical errors in manuscript
13,520
2308.11838v2
A Benchmark Study on Calibration
Deep neural networks are increasingly utilized in various machine learning tasks. However, as these models grow in complexity, they often face calibration issues, despite enhanced prediction accuracy. Many studies have endeavored to improve calibration performance through data preprocessing, the use of specific loss functions, and training frameworks. Yet, investigations into calibration properties have been somewhat overlooked. Our study leverages the Neural Architecture Search (NAS) search space, offering an exhaustive model architecture space for thorough calibration properties exploration. We specifically create a model calibration dataset. This dataset evaluates 90 bin-based and 12 additional calibration measurements across 117,702 unique neural networks within the widely employed NATS-Bench search space. Our analysis aims to answer several longstanding questions in the field, using our proposed dataset: (i) Can model calibration be generalized across different tasks? (ii) Can robustness be used as a calibration measurement? (iii) How reliable are calibration metrics? (iv) Does a post-hoc calibration method affect all models uniformly? (v) How does calibration interact with accuracy? (vi) What is the impact of bin size on calibration measurement? (vii) Which architectural designs are beneficial for calibration? Additionally, our study bridges an existing gap by exploring calibration within NAS. By providing this dataset, we enable further research into NAS calibration. As far as we are aware, our research represents the first large-scale investigation into calibration properties and the premier study of calibration issues within NAS.
Machine Learning (cs.LG)
There was an error in the paper
factual/methodological/other critical errors in manuscript
13,521
2308.12319v2
RemovalNet: DNN Fingerprint Removal Attacks
With the performance of deep neural networks (DNNs) remarkably improving, DNNs have been widely used in many areas. Consequently, the DNN model has become a valuable asset, and its intellectual property is safeguarded by ownership verification techniques (e.g., DNN fingerprinting). However, the feasibility of the DNN fingerprint removal attack and its potential influence remain an open problem. In this paper, we perform the first comprehensive investigation of DNN fingerprint removal attacks. Generally, the knowledge contained in a DNN model can be categorized into general semantic and fingerprint-specific knowledge. To this end, we propose a min-max bilevel optimization-based DNN fingerprint removal attack named RemovalNet, to evade model ownership verification. The lower-level optimization is designed to remove fingerprint-specific knowledge. While in the upper-level optimization, we distill the victim model's general semantic knowledge to maintain the surrogate model's performance. We conduct extensive experiments to evaluate the fidelity, effectiveness, and efficiency of the RemovalNet against four advanced defense methods on six metrics. The empirical results demonstrate that (1) the RemovalNet is effective. After our DNN fingerprint removal attack, the model distance between the target and surrogate models is 100 times higher than that of the baseline attacks, (2) the RemovalNet is efficient. It uses only 0.2% (400 samples) of the substitute dataset and 1,000 iterations to conduct our attack. Besides, compared with advanced model stealing attacks, the RemovalNet saves nearly 85% of computational resources at most, (3) the RemovalNet achieves high fidelity in that the created surrogate model maintains high accuracy after the DNN fingerprint removal process. Our code is available at: this https URL .
Computer Vision and Pattern Recognition (cs.CV)
some mistake
factual/methodological/other critical errors in manuscript
13,522
2308.13364v3
Positivity of Singular Hermitian Metrics for Holomorphic Vector Bundles
We introduce a notion of Nakano and Demailly positivity for singular Hermitian metrics of holomorphic vector bundles. Our definitions support the usual Hörmander and Nadel type vanishing theorems with estimates, at least on essentially Stein manifolds. As an application, we establish a sharp Ohsawa-Takegoshi type $L^2$ extension theorem. We use the method of Berndtsson and Lempert to prove the latter theorem, and for this purpose we require a Berndtsson-type positivity theorem for holomorphic vector bundles, which we also prove.
Complex Variables (math.CV)
Proposition 4.11 is wrong; there are counterexamples. As a consequence, Proposition 5.1 is also wrong, and there too one has counterexamples. Thus the definitions of k-positivity put forth in the article do not capture what was intended
factual/methodological/other critical errors in manuscript
13,525
2308.13495v3
Open Gaze: Open Source eye tracker for smartphone devices using Deep Learning
Eye tracking has been a pivotal tool in diverse fields such as vision research, language analysis, and usability assessment. The majority of prior investigations, however, have concentrated on expansive desktop displays employing specialized, costly eye tracking hardware that lacks scalability. Remarkably little insight exists into ocular movement patterns on smartphones, despite their widespread adoption and significant usage. In this manuscript, we present an open-source implementation of a smartphone-based gaze tracker that emulates the methodology proposed by a GooglePaper (whose source code remains proprietary). Our focus is on attaining accuracy comparable to that attained through the GooglePaper's methodology, without the necessity for supplementary hardware. Through the integration of machine learning techniques, we unveil an accurate eye tracking solution that is native to smartphones. Our approach demonstrates precision akin to the state-of-the-art mobile eye trackers, which are characterized by a cost that is two orders of magnitude higher. Leveraging the vast MIT GazeCapture dataset, which is available through registration on the dataset's website, we successfully replicate crucial findings from previous studies concerning ocular motion behavior in oculomotor tasks and saliency analyses during natural image observation. Furthermore, we emphasize the applicability of smartphone-based gaze tracking in discerning reading comprehension challenges. Our findings exhibit the inherent potential to amplify eye movement research by significant proportions, accommodating participation from thousands of subjects with explicit consent. This scalability not only fosters advancements in vision research, but also extends its benefits to domains such as accessibility enhancement and healthcare applications.
Computer Vision and Pattern Recognition (cs.CV)
The paper's results are incorrectly reported. The paper is not authentic and its conclusions are not correct
factual/methodological/other critical errors in manuscript
13,527
2308.14081v3
U-SEANNet: A Simple, Efficient and Applied U-Shaped Network for Diagnosis of Nasal Diseases on Nasal Endoscopic Images
Numerous studies have affirmed that deep learning models can facilitate early diagnosis of lesions in endoscopic images. However, the lack of available datasets stymies advancements in research on nasal endoscopy, and existing models fail to strike a good trade-off between diagnostic performance, model complexity and parameter size, rendering them unsuitable for real-world application. To bridge these gaps, we created the first large-scale nasal endoscopy dataset, named 7-NasalEID, comprising 11,352 images that contain six common nasal diseases and normal samples. Subsequently, we proposed U-SEANNet, an innovative U-shaped architecture, underpinned by depth-wise separable convolution. Moreover, to enhance its capacity for detecting nuanced discrepancies in input images, U-SEANNet employs the Global-Local Channel Feature Fusion module, enabling it to utilize salient channel features from both global and local contexts. To demonstrate U-SEANNet's potential, we benchmarked U-SEANNet against seventeen modern architectures through five-fold cross-validation. The experimental results show that U-SEANNet achieves a commendable accuracy of 93.58%. Notably, U-SEANNet's parameter size and GFLOPs are only 0.78M and 0.21, respectively. Our findings suggest U-SEANNet is the state-of-the-art model for nasal disease diagnosis in endoscopic images.
Image and Video Processing (eess.IV)
There are some descriptive errors in the manuscript
factual/methodological/other critical errors in manuscript
13,532
2308.14306v2
Evaluating the Robustness to Instructions of Large Language Models
Recently, Instruction fine-tuning has risen to prominence as a potential method for enhancing the zero-shot capabilities of Large Language Models (LLMs) on novel tasks. This technique has shown an exceptional ability to boost the performance of moderately sized LLMs, sometimes even reaching performance levels comparable to those of much larger model variants. The focus is on the robustness of instruction-tuned LLMs to seen and unseen tasks. We conducted an exploration of six models including Alpaca, Vicuna, WizardLM, and Traditional Task-oriented Models(Flan-T5-XL/XXL, T0++) using real-world relation extraction datasets as case studies. We carried out a comprehensive evaluation of these instruction-following LLMs which have been tuned based on open-domain instructions and task-oriented instructions. The main discussion is their performance and robustness towards instructions. We have observed that in most cases, the model's performance in dealing with unfamiliar instructions tends to worsen significantly, and the robustness of the model for RE instructions deteriorates compared to QA. Further, we discovered that up until a certain parameter size threshold (3B), the performance of the FLAN-T5 model improves as the parameter count increases. The robustness of different scales of FLAN-T5 models to RE instruction is worse than the robustness to QA instruction.
Computation and Language (cs.CL)
In our study, erroneous data analysis inadvertently led to misleading outcomes. Incorrect variables were included, distorting results. This emphasizes the significance of robust data processing and analysis techniques in research
factual/methodological/other critical errors in manuscript
13,533